
Thursday, July 16, 2020

Theories of second-language acquisition

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Theories_of_second-language_acquisition

The main purpose of theories of second-language acquisition (SLA) is to shed light on how people who already know one language learn a second language. The field of second-language acquisition draws on contributions from many disciplines, including linguistics, sociolinguistics, psychology, cognitive science, neuroscience, and education. These contributions can be grouped into four major research strands: (a) linguistic dimensions of SLA, (b) cognitive (but not linguistic) dimensions of SLA, (c) socio-cultural dimensions of SLA, and (d) instructional dimensions of SLA. While the orientation of each research strand is distinct, they share a common aim: identifying the conditions that facilitate successful language learning. Acknowledging the contributions of each perspective and the interdisciplinarity of the field, more and more second-language researchers are adopting a wider lens for examining the complexities of second-language acquisition.

History

As second-language acquisition began as an interdisciplinary field, it is hard to pin down a precise starting date. However, there are two publications in particular that are seen as instrumental to the development of the modern study of SLA: (1) Corder's 1967 essay The Significance of Learners' Errors, and (2) Selinker's 1972 article Interlanguage. Corder's essay rejected a behaviorist account of SLA and suggested that learners made use of intrinsic internal linguistic processes; Selinker's article argued that second-language learners possess their own individual linguistic systems that are independent from both the first and second languages.

In the 1970s the general trend in SLA was research exploring the ideas of Corder and Selinker and refuting behaviorist theories of language acquisition. Examples include research into error analysis, studies of transitional stages of second-language ability, and the "morpheme studies" investigating the order in which learners acquired linguistic features. The 1970s were dominated by naturalistic studies of people learning English as a second language.

By the 1980s, the theories of Stephen Krashen had become the prominent paradigm in SLA. In his theories, often collectively known as the Input Hypothesis, Krashen suggested that language acquisition is driven solely by comprehensible input, language input that learners can understand. Krashen's model was influential in the field of SLA and also had a large influence on language teaching, but it left some important processes in SLA unexplained. Research in the 1980s was characterized by the attempt to fill in these gaps. Some approaches included White's descriptions of learner competence, and Pienemann's use of speech processing models and lexical functional grammar to explain learner output. This period also saw the beginning of approaches based in other disciplines, such as the psychological approach of connectionism.

The 1990s saw a host of new theories introduced to the field, such as Michael Long's interaction hypothesis, Merrill Swain's output hypothesis, and Richard Schmidt's noticing hypothesis. However, the two main areas of research interest were linguistic theories of SLA based upon Noam Chomsky's universal grammar, and psychological approaches such as skill acquisition theory and connectionism. The latter category also saw the new theories of processability and input processing in this time period. The 1990s also saw the introduction of sociocultural theory, an approach to explain second-language acquisition in terms of the social environment of the learner.

In the 2000s research was focused on much the same areas as in the 1990s, with research split into two main camps of linguistic and psychological approaches. VanPatten and Benati do not see this state of affairs as changing in the near future, pointing to the support both areas of research have in the wider fields of linguistics and psychology, respectively.

Universal grammar

From the field of linguistics, the most influential theory by far has been Chomsky's theory of Universal Grammar (UG). The core of this theory is the existence of an innate universal grammar, grounded in the poverty-of-the-stimulus argument. The UG model of principles, basic properties which all languages share, and parameters, properties which can vary between languages, has been the basis for much second-language research.

From a UG perspective, learning the grammar of a second language is simply a matter of setting the correct parameters. Take the pro-drop parameter, which dictates whether or not sentences must have a subject in order to be grammatically correct. This parameter can have two values: positive, in which case sentences do not necessarily need a subject, and negative, in which case subjects must be present. In German the sentence "Er spricht" (he speaks) is grammatical, but the sentence "Spricht" (speaks) is ungrammatical. In Italian, however, the sentence "Parla" (speaks) is perfectly normal and grammatically correct. A German speaker learning Italian would only need to deduce that subjects are optional from the language he hears, and then set his pro-drop parameter for Italian accordingly. Once he has set all the parameters in the language correctly, then from a UG perspective he can be said to have learned Italian, i.e. he will always produce perfectly correct Italian sentences.

Universal Grammar also provides a succinct explanation for much of the phenomenon of language transfer. Spanish learners of English who make the mistake "Is raining" instead of "It is raining" have not yet set their pro-drop parameters correctly and are still using the same setting as in Spanish.

The main shortcoming of Universal Grammar in describing second-language acquisition is that it does not deal at all with the psychological processes involved with learning a language. UG scholarship is only concerned with whether parameters are set or not, not with how they are set. Schachter (1988) is a useful critique of research testing the role of Universal Grammar in second language acquisition.

Input hypothesis

Learners' most direct source of information about the target language is the target language itself. When they come into direct contact with the target language, this is referred to as "input". When learners process that language in a way that can contribute to learning, this is referred to as "intake". However, input must be at a level that is comprehensible to them. In his monitor theory, Krashen advanced the concept that language input should be at the "i+1" level, just beyond what the learner can fully understand; this input is comprehensible, but contains structures that are not yet fully understood. This has been criticized on the grounds that there is no clear definition of i+1, and that factors other than structural difficulty (such as interest or presentation) can affect whether input is actually turned into intake. The concept has been quantified, however, in vocabulary acquisition research; Nation reviews various studies indicating that about 98% of the words in running text should be previously known for extensive reading to be effective.
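
To make the 98% figure concrete, here is a minimal sketch (with a hypothetical word list and passage invented for illustration, not taken from Nation's studies) that computes what share of the running words in a text a reader already knows:

    def coverage(text, known_words):
        # Fraction of running words (tokens) the reader already knows.
        tokens = [w.strip(".,!?;:\"'").lower() for w in text.split()]
        return sum(t in known_words for t in tokens) / len(tokens)

    # Hypothetical reader vocabulary and passage.
    known = {"the", "dog", "chased", "a", "cat", "across", "garden"}
    text = "The dog chased a cat across the verdant garden."
    print(f"{coverage(text, known):.0%} of running words known")  # prints 89%

By this criterion the passage would still be too hard for effective extensive reading, since coverage falls short of the roughly 98% threshold.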

In his Input Hypothesis, Krashen proposes that language acquisition takes place only when learners receive input just beyond their current level of L2 competence. He termed this level of input "i+1". However, in contrast to emergentist and connectionist theories, he follows the innate approach by applying Chomsky's Government and Binding theory and the concept of Universal Grammar (UG) to second-language acquisition. He does so by proposing a Language Acquisition Device that uses L2 input to define the parameters of the L2, within the constraints of UG, and to increase the L2 proficiency of the learner. In addition, Krashen's (1982) Affective Filter Hypothesis holds that the acquisition of a second language is halted if the learner has a high degree of anxiety when receiving input. According to this concept, a part of the mind filters out L2 input and prevents intake by the learner if the learner feels that the process of SLA is threatening. As mentioned earlier, since input is essential in Krashen's model, this filtering action prevents acquisition from progressing.

A great deal of research has taken place on input enhancement, the ways in which input may be altered so as to direct learners' attention to linguistically important areas. Input enhancement might include bold-faced vocabulary words or marginal glosses in a reading text. Research here is closely linked to research on pedagogical effects, and comparably diverse.

Monitor model

Other concepts have also been influential in the speculation about the processes of building internal systems of second-language information. Some thinkers hold that language processing handles distinct types of knowledge. For instance, one component of the Monitor Model, propounded by Krashen, posits a distinction between “acquisition” and “learning.” According to Krashen, L2 acquisition is a subconscious process of incidentally “picking up” a language, as children do when becoming proficient in their first languages. Language learning, on the other hand, is studying, consciously and intentionally, the features of a language, as is common in traditional classrooms. Krashen sees these two processes as fundamentally different, with little or no interface between them. In common with connectionism, Krashen sees input as essential to language acquisition.

Further, Bialystok and Smith make another distinction in explaining how learners build and use L2 and interlanguage knowledge structures. They argue that the concept of interlanguage should include a distinction between two specific kinds of language-processing ability. On one hand is learners' knowledge of L2 grammatical structure and their ability to analyze the target language objectively using that knowledge, which they term "representation"; on the other hand is the ability to use their L2 linguistic knowledge, under time constraints, to accurately comprehend input and produce output in the L2, which they call "control". They point out that non-native speakers of a language often have higher levels of representation than their native-speaking counterparts, yet a lower level of control. Finally, Bialystok has framed the acquisition of language in terms of the interaction between what she calls "analysis" and "control". Analysis is what learners do when they attempt to understand the rules of the target language. Through this process, they acquire these rules and can use them to gain greater control over their own production.

Monitoring is another important concept in some theoretical models of learner use of L2 knowledge. According to Krashen, the Monitor is a component of an L2 learner's language processing device that uses knowledge gained from language learning to observe and regulate the learner's own L2 production, checking for accuracy and adjusting language production when necessary.

Interaction hypothesis

Long's interaction hypothesis proposes that language acquisition is strongly facilitated by the use of the target language in interaction. Similarly to Krashen's Input Hypothesis, the Interaction Hypothesis claims that comprehensible input is important for language learning. In addition, it claims that the effectiveness of comprehensible input is greatly increased when learners have to negotiate for meaning.

Interactions often result in learners receiving negative evidence. That is, if learners say something that their interlocutors do not understand, after negotiation the interlocutors may model the correct language form. In doing this, learners can receive feedback on their production and on grammar that they have not yet mastered. The process of interaction may also result in learners receiving more input from their interlocutors than they would otherwise. Furthermore, if learners stop to clarify things that they do not understand, they may have more time to process the input they receive. This can lead to better understanding and possibly the acquisition of new language forms. Finally, interactions may serve as a way of focusing learners' attention on a difference between their knowledge of the target language and the reality of what they are hearing; it may also focus their attention on a part of the target language of which they are not yet aware.

Output hypothesis

In the 1980s, Canadian SLA researcher Merrill Swain advanced the output hypothesis, that meaningful output is as necessary to language learning as meaningful input. However, most studies have shown little if any correlation between learning and quantity of output. Today, most scholars contend that small amounts of meaningful output are important to language learning, but primarily because the experience of producing language leads to more effective processing of input.

Critical period hypothesis

In 1967, Eric Lenneberg argued for the existence of a critical period (approximately ages 2-13) for the acquisition of a first language. This has attracted much attention in the realm of second-language acquisition. For instance, Newport (1990) extended the critical period hypothesis by pointing to the possibility that the age at which a learner is first exposed to an L2 might also affect second-language acquisition. Indeed, she found a correlation between age of arrival and second-language performance. In this regard, second-language learning may be affected by a learner's maturational state.

Competition model

Some of the major cognitive theories of how learners organize language knowledge are based on analyses of how speakers of various languages analyze sentences for meaning. MacWhinney, Bates, and Kliegl found that speakers of English, German, and Italian showed varying patterns in identifying the subjects of transitive sentences containing more than one noun. English speakers relied heavily on word order; German speakers used morphological agreement, the animacy status of noun referents, and stress; and speakers of Italian relied on agreement and stress. MacWhinney et al. interpreted these results as supporting the Competition Model, which states that individuals use linguistic cues to get meaning from language, rather than relying on linguistic universals. According to this theory, when acquiring an L2, learners sometimes receive competing cues and must decide which cue(s) is most relevant for determining meaning.
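
A toy sketch of such cue competition follows; the cue weights and the example sentence are invented for illustration and are not MacWhinney et al.'s fitted values:

    # Hypothetical cue weights: each language weighs word order, agreement,
    # and animacy differently when picking the agent of a transitive sentence.
    CUE_WEIGHTS = {
        "English": {"word_order": 0.8, "agreement": 0.1, "animacy": 0.1},
        "Italian": {"word_order": 0.1, "agreement": 0.6, "animacy": 0.3},
    }

    def pick_agent(candidates, language):
        # The noun with the highest weighted cue support wins.
        weights = CUE_WEIGHTS[language]
        return max(candidates, key=lambda noun: sum(weights[c] for c in noun["cues"]))

    # "The pencils sees the cat": word order points to 'pencils';
    # verb agreement and animacy point to 'cat'.
    nouns = [{"word": "pencils", "cues": ["word_order"]},
             {"word": "cat", "cues": ["agreement", "animacy"]}]
    for lang in ("English", "Italian"):
        print(lang, "->", pick_agent(nouns, lang)["word"])  # pencils vs. cat

Under these weights the English speaker picks "pencils" (word order dominates) while the Italian speaker picks "cat" (agreement and animacy dominate), mirroring the cross-linguistic differences the model describes.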

Connectionism and second-language acquisition

Connectionism

These findings also relate to Connectionism. Connectionism attempts to model the cognitive language processing of the human brain, using computer architectures that make associations between elements of language, based on frequency of co-occurrence in the language input. Frequency has been found to be a factor in various linguistic domains of language learning. Connectionism posits that learners form mental connections between items that co-occur, using exemplars found in language input. From this input, learners extract the rules of the language through cognitive processes common to other areas of cognitive skill acquisition. Since connectionism denies both innate rules and the existence of any innate language-learning module, L2 input is of greater importance than it is in processing models based on innate approaches, since, in connectionism, input is the source of both the units and the rules of language.
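
As a minimal illustration of frequency-driven association, the toy counter below tallies how often word pairs co-occur in invented input utterances; a real connectionist model would instead learn weighted, distributed representations, so this is only a sketch of the frequency principle:

    from collections import Counter
    from itertools import combinations

    input_utterances = ["the dog runs", "the dog sleeps",
                        "a cat sleeps", "the cat runs"]
    pair_counts = Counter()
    for utterance in input_utterances:
        for a, b in combinations(sorted(utterance.split()), 2):
            pair_counts[(a, b)] += 1  # stronger association = more co-occurrence

    print(pair_counts.most_common(3))  # ('dog','the') and ('runs','the') lead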

Noticing hypothesis

Attention is another characteristic that some believe to have a role in determining the success or failure of language processing. Richard Schmidt states that although explicit metalinguistic knowledge of a language is not always essential for acquisition, the learner must be aware of L2 input in order to gain from it. In his “noticing hypothesis,” Schmidt posits that learners must notice the ways in which their interlanguage structures differ from target norms. This noticing of the gap allows the learner's internal language processing to restructure the learner's internal representation of the rules of the L2 in order to bring the learner's production closer to the target. In this respect, Schmidt's understanding is consistent with the ongoing process of rule formation found in emergentism and connectionism.

Processability

Some theorists and researchers have contributed to the cognitive approach to second-language acquisition by increasing understanding of the ways L2 learners restructure their interlanguage knowledge systems to be in greater conformity to L2 structures. Processability theory states that learners restructure their L2 knowledge systems in an order that they are capable of at their stage of development. For instance, in order to acquire the correct morphological and syntactic forms for English questions, learners must transform declarative English sentences. They do so through a series of stages that are consistent across learners. Clahsen proposed that certain processing principles determine this order of restructuring. Specifically, he stated that learners first maintain declarative word order while changing other aspects of the utterance, second move words to the beginning and end of sentences, and third move elements within main clauses before subordinate clauses.

Automaticity

Thinkers have produced several theories concerning how learners use their internal L2 knowledge structures to comprehend L2 input and produce L2 output. One idea is that learners acquire proficiency in an L2 in the same way that people acquire other complex cognitive skills. Automaticity is the performance of a skill without conscious control. It results from the gradated process of proceduralization. In the field of cognitive psychology, Anderson expounds a model of skill acquisition, according to which persons use procedures to apply their declarative knowledge about a subject in order to solve problems. With repeated practice, these procedures develop into production rules that the individual can use to solve the problem without accessing long-term declarative memory. Performance speed and accuracy improve as the learner implements these production rules. DeKeyser tested the application of this model to L2 language automaticity. He found that subjects developed increasing proficiency in performing tasks related to the morphosyntax of an artificial language, Autopractan, and performed on a learning curve typical of the acquisition of non-language cognitive skills. This evidence conforms to Anderson's general model of cognitive skill acquisition, supports the idea that declarative knowledge can be transformed into procedural knowledge, and tends to undermine Krashen's idea that knowledge gained through language "learning" cannot be used to initiate speech production.
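
The "learning curve typical of the acquisition of non-language cognitive skills" mentioned above is conventionally described by the power law of practice; the sketch below uses invented parameters purely to show the shape of such a curve, and is not DeKeyser's data:

    # Power law of practice: performance time falls as a power of practice trials.
    # RT(n) = A + B * n**(-c); A, B, c below are illustrative, not fitted values.
    A, B, c = 0.4, 2.0, 0.5   # asymptote (s), initial gain (s), learning rate

    for n in (1, 10, 100, 1000):
        print(f"trial {n:>4}: predicted RT = {A + B * n ** (-c):.2f} s")

Speed gains are steep at first and flatten with practice, which matches the proceduralization account.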

Declarative/procedural model

Image: An example of declarative knowledge, procedural knowledge, and conditional knowledge.

Michael T. Ullman has used a declarative/procedural model to understand how language information is stored. This model is consistent with a distinction made in general cognitive science between the storage and retrieval of facts, on the one hand, and understanding of how to carry out operations, on the other. It states that declarative knowledge consists of arbitrary linguistic information, such as irregular verb forms, that is stored in the brain's declarative memory. In contrast, knowledge about the rules of a language, such as grammatical word order, is procedural knowledge and is stored in procedural memory. Ullman reviews several psycholinguistic and neurolinguistic studies that support the declarative/procedural model.

Memory and second-language acquisition

Perhaps certain psychological characteristics constrain language processing. One area of research is the role of memory. Williams conducted a study in which he found a positive correlation between verbatim memory functioning and grammar-learning success for his subjects. This suggests that individuals with less short-term memory capacity may be limited in performing the cognitive processes needed to organize and use linguistic knowledge.

Semantic theory

For the second-language learner, the acquisition of meaning is arguably the most important task. Meaning is at the heart of a language, not the exotic sounds or elegant sentence structure. There are several types of meaning: lexical, grammatical, semantic, and pragmatic. All of these contribute to the acquisition of meaning, resulting in an integrated command of the second language:

Lexical meaning – meaning that is stored in our mental lexicon;

Grammatical meaning – meaning that comes into play when calculating the meaning of a sentence; usually encoded in inflectional morphology (e.g., -ed for the simple past, -'s for the third-person possessive);

Semantic meaning – word meaning;

Pragmatic meaning – meaning that depends on context and requires knowledge of the world to decipher; for example, when someone asks on the phone, "Is Mike there?", he doesn't want to know whether Mike is physically there; he wants to know whether he can talk to Mike.

Sociocultural theory

Sociocultural theory, a term coined by Wertsch in 1985, derives from the work of Lev Vygotsky and the Vygotsky Circle in Moscow from the 1920s onwards. It holds that human mental functioning arises from participation in culturally mediated, socially organized activities. The central thread of sociocultural theory focuses on the diverse social, historical, cultural, and political contexts in which language learning occurs and on how learners negotiate or resist the diverse options that surround them. More recently, in accordance with this sociocultural thread, Larsen-Freeman (2011) proposed a triangle diagram showing the interplay of four important concepts in language learning and education: (a) teacher, (b) learner, (c) language or culture, and (d) context. What makes sociocultural theory different from other theories is its argument that second-language acquisition is not a universal process; instead, it views learners as active participants who interact with others and with the culture of their environment.

Complex Dynamic Systems Theory

Second-language acquisition has usually been investigated through traditional cross-sectional studies, typically with a pre-test/post-test design. In the 2000s, however, a novel angle emerged in the field of second-language research. These studies mainly adopt a dynamic systems theory perspective to analyse longitudinal time-series data. Researchers such as Larsen-Freeman, Verspoor, de Bot, Lowie, and van Geert claim that second-language acquisition is best captured by a longitudinal case-study research design rather than by cross-sectional designs. In these studies, variability is seen as a key indicator of development, or self-organization in dynamic systems parlance. The interconnectedness of the systems is usually analysed with moving correlations.
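
A sketch of the moving-correlation technique follows (the learner data and window size are invented for illustration):

    import numpy as np

    def moving_correlation(x, y, window=5):
        # Correlation between two developmental time series within a sliding
        # window; shifts in its value signal changing coupling between subsystems.
        x, y = np.asarray(x, float), np.asarray(y, float)
        return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                         for i in range(len(x) - window + 1)])

    # Hypothetical weekly lexical and syntactic complexity scores for one learner.
    lexical   = [2.1, 2.3, 2.2, 2.6, 2.9, 2.8, 3.3, 3.1, 3.6, 3.8]
    syntactic = [1.0, 1.1, 1.3, 1.2, 1.1, 1.5, 1.4, 1.8, 1.7, 2.0]
    print(moving_correlation(lexical, syntactic).round(2))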

Manhattan Project National Historical Park

From Wikipedia, the free encyclopedia
 
Manhattan Project National Historical Park
Image: Hanford High, a part of the park in Washington.
Location: Oak Ridge, Tennessee; Los Alamos, New Mexico; and Hanford, Washington, United States
Established: November 10, 2015
Governing body: National Park Service, Department of Energy
Website: Manhattan Project National Historical Park

Manhattan Project National Historical Park is a United States National Historical Park commemorating the Manhattan Project that is run jointly by the National Park Service and the Department of Energy. The park consists of three units: one in Oak Ridge, Tennessee, one in Los Alamos, New Mexico, and one in Hanford, Washington. It was established on November 10, 2015, when Secretary of the Interior Sally Jewell and Secretary of Energy Ernest Moniz signed the memorandum of agreement that defined the two agencies' roles in managing the park.

The Department of Energy had owned and managed most of the properties located within the three different sites. For over ten years, the DoE worked with the National Park Service and federal, state and local governments and agencies with the intention of turning places of importance into a National Historical Park. After several years of surveying the three sites and five other possible alternatives, the two agencies officially recommended a historical park be established in Hanford, Los Alamos and Oak Ridge.

The Department of Energy would continue to manage and own the sites while the National Park Service would provide interpretive services, visitor centers and park rangers. After two unsuccessful attempts at passing a bill in Congress authorizing the park in 2012 and 2013, the House and Senate ultimately passed the bill in December 2014, and President Obama shortly thereafter signed the National Defense Authorization Act, which authorized the Manhattan Project National Historical Park.

Sites

Image: Hanford B Reactor.

The Manhattan Project National Historical Park protects many structures associated with the Manhattan Project, but only some are open for touring.

Hanford, Washington

Los Alamos, New Mexico

The Los Alamos visitor center for the Manhattan Project NHP is located at 475 20th Street in downtown Los Alamos. It is open daily from 10 a.m. to 4 p.m., weather and staffing permitting, and is in the Los Alamos Community Building, on the front left as you face the building from the street (next to the teen center). At the visitor center, visitors can learn about the Manhattan Project and related sites in the vicinity.

Three locations of the park are on Los Alamos National Laboratory property. These locations are currently not open to the public:
  • Gun Site Facilities: three bunkered buildings (TA-8-1, TA-8-2, and TA-8-3), and a portable guard shack (TA-8-172).
  • V-Site Facilities: TA-16-516 and TA-16-517 V-Site Assembly Building
  • Pajarito Site: TA-18-1 Slotin Building, TA-18-2 Battleship Control Building, and the TA-18-29 Pond Cabin.
Image: Controls of the X-10 Graphite Reactor.

Oak Ridge, Tennessee

The American Museum of Science and Energy provides bus tours of several buildings in the Clinton Engineer Works.

Ideasthesia

From Wikipedia, the free encyclopedia
Image: Example of associations between graphemes and colors that are described more accurately as ideasthesia than as synesthesia.

Ideasthesia (alternative spelling: ideaesthesia) is a term introduced by neuroscientist Danko Nikolić for a phenomenon in which activations of concepts (inducers) evoke perception-like experiences (concurrents). The name comes from the Ancient Greek ἰδέα (idéa) and αἴσθησις (aísthēsis), meaning "sensing concepts" or "sensing ideas". The main reason for introducing the notion of ideasthesia was problems with the concept of synesthesia. While "synesthesia" means "union of senses", empirical evidence indicated that this was an incorrect explanation of a set of phenomena traditionally covered by this heading. Syn-aesthesis, denoting also "co-perceiving", implies the association of two sensory elements with little connection to the cognitive level. However, most phenomena that have been linked to synesthesia are in fact induced by semantic representations. That is, the meaning of the stimulus is what matters, rather than its sensory properties, as would be implied by the term synesthesia. In other words, while synesthesia presumes that both the trigger (inducer) and the resulting experience (concurrent) are sensory in nature, ideasthesia presumes that only the resulting experience is sensory, while the trigger is semantic. Meanwhile, the concept of ideasthesia has developed into a theory of how we perceive, and the research has extended to topics other than synesthesia, as the concept turned out to be applicable to everyday perception. Ideasthesia has even been applied to the theory of art. Research on ideasthesia bears important implications for solving the mystery of human conscious experience, which, according to ideasthesia, is grounded in how we activate concepts.

Examples and evidence

Image: A drawing by a synesthete illustrating time unit-space synesthesia/ideasthesia. The months of the year are organized into a circle surrounding the synesthete's body, each month having a fixed location in space and a unique color.
 
A common example of synesthesia is the association between graphemes and colors, usually referred to as grapheme-color synesthesia. Here, letters of the alphabet are associated with vivid experiences of color. Studies have indicated that the perceived color is context-dependent and is determined by the extracted meaning of a stimulus. For example, an ambiguous stimulus that can be interpreted either as 'S' or as '5' will take on the color associated with 'S' or with '5', depending on the context in which it is presented. If presented among numbers, it will be interpreted as '5' and will be associated with the respective color. If presented among letters, it will be interpreted as 'S' and will be associated with the respective synesthetic color.

Evidence for grapheme-color synesthesia comes also from the finding that colors can be flexibly associated with graphemes as new meanings become assigned to them. In one study, synesthetes were presented with Glagolitic letters that they had never seen before, and the meaning was acquired through a short writing exercise. The Glagolitic graphemes inherited the colors of the corresponding Latin graphemes as soon as they acquired the new meaning.

In another study, synesthetes were prompted to form novel synesthetic associations to graphemes never seen before. Synesthetes created those associations within minutes or seconds, too short a time to account for the creation of new physical connections between color-representation and grapheme-representation areas in the brain, pointing again towards ideasthesia. The time course is, however, consistent with postsynaptic AMPA receptor upregulation and/or NMDA receptor coactivation, which would imply that the real-time experience is invoked at the synaptic level before any novel wiring is established, an intuitively appealing model.

For lexical-gustatory synesthesia evidence also points towards ideasthesia: In lexical-gustatory synesthesia, verbalisation of the stimulus is not necessary for the experience of concurrents. Instead, it is sufficient to activate the concept.

Another case of synesthesia is swimming-style synesthesia in which each swimming style is associated with a vivid experience of a color. These synesthetes do not need to perform the actual movements of a corresponding swimming style. To activate the concurrent experiences, it is sufficient to activate the concept of a swimming style (e.g., by presenting a photograph of a swimmer or simply talking about swimming).

It has been argued that grapheme-color synesthesia for geminate consonants also provides evidence for ideasthesia.

In pitch-color synesthesia, the same tone will be associated with different colors depending on how it has been named: do-sharp (i.e., di) will have colors similar to do (e.g., a reddish color), and re-flat (i.e., ra) will have a color similar to that of re (e.g., yellowish), although the two names refer to the same tone. Similar semantic associations have been found between the acoustic characteristics of vowels and the notion of size.

One-shot synesthesia: There are synesthetic experiences that can occur just once in a lifetime, and are thus dubbed one-shot synesthesia. Investigation of such cases has indicated that such unique experiences typically occur when a synesthete is involved in an intensive mental and emotional activity such as making important plans for one's future or reflecting on one's life. It has been thus concluded that this is also a form of ideasthesia.

In normal perception

Image: Which one would be called Bouba and which Kiki? Responses are highly consistent among people. This is an example of ideasthesia, as the conceptualization of the stimulus plays an important role.

Recently, it has been suggested that the Bouba/Kiki phenomenon is a case of ideasthesia. Most people will agree that the star-shaped object is named Kiki and the round one Bouba. It has been assumed that these associations come from direct connections between visual and auditory cortices: according to that hypothesis, representations of sharp inflections in the star-shaped object would be physically connected to representations of sharp inflections in the sound of Kiki. However, Gomez et al. have shown that Kiki/Bouba associations are much richer: each word and each image is semantically associated with a number of concepts, such as white vs. black, feminine vs. masculine, cold vs. hot, and others. These sound-shape associations seem to be related through a large overlap between the semantic networks of Kiki and the star shape on the one hand, and Bouba and the round shape on the other. For example, both Kiki and the star shape are clever, small, thin, and nervous. This indicates that behind the Kiki-Bouba effect lies a rich semantic network. In other words, our sensory experience is largely determined by the meaning that we assign to stimuli. Food description and wine tasting are other domains in which ideasthetic associations between flavor and other modalities, such as shape, may play an important role. Such semantic-like relations also play a role in successful marketing: the name of a product should match its other characteristics.

Implications for development of synesthesia

The concept of ideasthesia bears implications for understanding how synesthesia develops in children. Synesthetic children may associate concrete sensory-like experiences primarily with the abstract concepts that they otherwise have difficulty dealing with. Synesthesia may thus be used as a cognitive tool to cope with the abstractness of the learning materials imposed by the educational system, a proposal referred to as the "semantic vacuum hypothesis". This hypothesis explains why the most common inducers in synesthesia are graphemes and time units, both relating to the first truly abstract ideas that a child needs to master.

Implications for art theory

The concept of ideasthesia has been often discussed in relation to art, and also used to formulate a psychological theory of art. According to the theory, we consider something to be a piece of art when experiences induced by the piece are accurately balanced with semantics induced by the same piece. Thus, a piece of art makes us both strongly think and strongly experience. Moreover, the two must be perfectly balanced such that the most salient stimulus or event is both the one that evokes strongest experiences (fear, joy, ... ) and strongest cognition (recall, memory, ...) — in other words, idea is well balanced with aesthesia.

Ideasthesia theory of art may be used for psychological studies of aesthetics. It may also help explain classificatory disputes about art, as its main tenet is that the experience of art can only be individual, depending on a person's unique knowledge, experiences, and history. There could exist no general classification of art satisfactorily applicable to all individuals.

Neurophysiology of ideasthesia

Ideasthesia is congruent with the theory of brain functioning known as practopoiesis. According to that theory, concepts are not an emergent property of highly developed, specialized neuronal networks in the brain, as is usually assumed; rather, concepts are proposed to be fundamental to the very adaptive principles by which living systems and the brain operate.

McGurk effect

From Wikipedia, the free encyclopedia
 
The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. If a person is getting poor quality auditory information but good quality visual information, they may be more likely to experience the McGurk effect. Integration abilities for audio and visual information may also influence whether a person will experience the effect. People who are better at sensory integration have been shown to be more susceptible to the effect. Many people are affected differently by the McGurk effect based on many factors, including brain damage and other disorders.

Background

The effect was first described in a paper by Harry McGurk and John MacDonald, "Hearing Lips and Seeing Voices", published in Nature on 23 December 1976. It was discovered by accident when McGurk and his research assistant, MacDonald, asked a technician to dub a video with a different phoneme from the one spoken while conducting a study on how infants perceive language at different developmental stages. When the video was played back, both researchers heard a third phoneme rather than the one spoken or mouthed in the video.

This effect may be experienced when a video of one phoneme's production is dubbed with a sound-recording of a different phoneme being spoken. Often, the perceived phoneme is a third, intermediate phoneme. As an example, the syllables /ba-ba/ are spoken over the lip movements of /ga-ga/, and the perception is of /da-da/. McGurk and MacDonald originally believed that this resulted from the common phonetic and visual properties of /b/ and /g/. Two types of illusion in response to incongruent audiovisual stimuli have been observed: fusions ('ba' auditory and 'ga' visual produce 'da') and combinations ('ga' auditory and 'ba' visual produce 'bga'). This is the brain's effort to provide the consciousness with its best guess about the incoming information. The information coming from the eyes and ears is contradictory, and in this instance, the eyes (visual information) have had a greater effect on the brain and thus the fusion and combination responses have been created.

Vision is the primary sense for humans, but speech perception is multimodal, meaning that it involves information from more than one sensory modality, in particular audition and vision. The McGurk effect arises during phonetic processing because the integration of audio and visual information happens early in speech perception. The McGurk effect is very robust; that is, knowledge about it seems to have little effect on one's perception of it. This is different from certain optical illusions, which break down once one 'sees through' them. Some people, including those who have been researching the phenomenon for more than twenty years, experience the effect even when they are aware that it is taking place. With the exception of people who can identify most of what is being said from speech-reading alone, most people are quite limited in their ability to identify speech from visual-only signals. A more extensive phenomenon is the ability of visual speech to increase the intelligibility of heard speech in a noisy environment. Visible speech can also alter the perception of perfectly audible speech sounds when the visual speech stimuli are mismatched with the auditory speech. Normally, speech perception is thought of as an auditory process; however, our use of information is immediate, automatic, and, to a large degree, unconscious, and therefore, despite what is widely accepted as true, speech is not only something we hear. Speech is perceived by all of the senses working together (seeing, touching, and listening to a face move). The brain is often unaware of the separate sensory contributions of what it perceives. Therefore, when it comes to recognizing speech, the brain cannot differentiate whether it is seeing or hearing the incoming information.

It has also been examined in relation to witness testimony. Wareham and Wright's 2005 study showed that inconsistent visual information can change the perception of spoken utterances, suggesting that the McGurk effect may have many influences in everyday perception. Not limited to syllables, the effect can occur in whole words and affect daily interactions without people being aware of it. Research into this area can provide information not only on theoretical questions but also on matters of therapeutic and diagnostic relevance for those with disorders relating to the audiovisual integration of speech cues.

Internal factors

Damage

Both hemispheres of the brain contribute to the McGurk effect. They work together to integrate speech information that is received through the auditory and visual senses. A McGurk response is more likely to occur in right-handed individuals, for whom the face has privileged access to the right hemisphere and words to the left hemisphere. In people who have had callosotomies, the McGurk effect is still present but significantly slower. In people with lesions to the left hemisphere of the brain, visual features often play a critical role in speech and language therapy. People with lesions in the left hemisphere of the brain show a greater McGurk effect than normal controls; visual information strongly influences speech perception in these people. There is a lack of susceptibility to the McGurk illusion if left-hemisphere damage resulted in a deficit in visual segmental speech perception. People with right-hemisphere damage show impairment on both visual-only and audiovisual integration tasks, although they are still able to integrate the information to produce a McGurk effect; integration appears only when visual stimuli are used to improve performance where the auditory signal is impoverished but audible. Thus people with damage to the right hemisphere of the brain exhibit a McGurk effect, but it is not as strong as in a normal group.

Disorders

Dyslexia

Dyslexic individuals exhibit a smaller McGurk effect than normal readers of the same chronological age, but the same effect as reading-level-matched younger readers. Dyslexics differ particularly in combination responses, not fusion responses. The smaller McGurk effect may be due to the difficulties dyslexics have in perceiving and producing consonant clusters.

Specific language impairment

Children with specific language impairment show a significantly lower McGurk effect than the average child. They use less visual information in speech perception, or pay reduced attention to articulatory gestures, but have no trouble perceiving auditory-only cues.

Autism spectrum disorders

Children with autism spectrum disorders (ASD) show a significantly reduced McGurk effect compared with children without ASD. However, if the stimulus is nonhuman (for example, a tennis ball bouncing to the sound of a beach ball bouncing), they score similarly to children without ASD. Younger children with ASD show a very reduced McGurk effect; however, this diminishes with age, and as individuals grow up, the effect they show becomes closer to that of those without ASD. It has been suggested that the weakened McGurk effect seen in people with ASD is due to deficits in identifying both the auditory and visual components of speech rather than in integrating them (although distinguishing speech components as speech components may be isomorphic to integrating them).

Language-learning disabilities

Adults with language-learning disabilities exhibit a much smaller McGurk effect than other adults; they are not as influenced by visual input as most people, so people with poor language skills produce a smaller McGurk effect. A possible reason for the smaller effect in this population is uncoupled activity between anterior and posterior regions of the brain, or between the left and right hemispheres. A cerebellar or basal ganglia etiology is also possible.

Alzheimer’s disease

Patients with Alzheimer's disease (AD) exhibit a smaller McGurk effect than those without. A reduced size of the corpus callosum often produces a hemispheric disconnection process. Visual stimuli have less influence in patients with AD, which accounts for the lowered McGurk effect.

Schizophrenia

The McGurk effect is not as pronounced in schizophrenic individuals as in non-schizophrenic individuals, although the difference is not significant in adults. Schizophrenia slows the development of audiovisual integration and does not allow it to reach its developmental peak; however, no degradation is observed. Schizophrenics are more likely to rely on auditory cues than on visual cues in speech perception.

Aphasia

People with aphasia show impaired perception of speech in all conditions (visual-only, auditory-only, and audiovisual) and therefore exhibit a small McGurk effect. Aphasics' greatest difficulty is in the visual-only condition, showing that they rely more on auditory stimuli in speech perception.

External factors

Cross-dubbing

Discrepancy in vowel category significantly reduced the magnitude of the McGurk effect for fusion responses. Auditory /a/ tokens dubbed onto visual /i/ articulations were more compatible than the reverse. This could be because /a/ has a wide range of articulatory configurations whereas /i/ is more limited, which makes it much easier for subjects to detect discrepancies in the stimuli. /i/ vowel contexts produce the strongest effect, while /a/ produces a moderate effect, and /u/ has almost no effect.

Mouth visibility

The McGurk effect is stronger when the right side of the speaker's mouth (on the viewer's left) is visible. People tend to get more visual information from the right side of a speaker's mouth than the left or even the whole mouth. This relates to the hemispheric attention factors discussed in the brain hemispheres section above.

Visual distractors

The McGurk effect is weaker when there is a visual distractor present that the listener is attending to. Visual attention modulates audiovisual speech perception. Another form of distraction is movement of the speaker. A stronger McGurk effect is elicited if the speaker's face/head is motionless, rather than moving.

Syllable structure

A strong McGurk effect can be seen for click-vowel syllables compared to weak effects for isolated clicks. This shows that the McGurk effect can happen in a non-speech environment. Phonological significance is not a necessary condition for a McGurk effect to occur; however, it does increase the strength of the effect.

Gender

Females show a stronger McGurk effect than males: women show significantly greater visual influence on auditory speech than men for brief visual stimuli, though no difference is apparent for full stimuli. Another gender-related question is whether male versus female faces and voices used as stimuli make a difference; there is no difference in the strength of the McGurk effect in either case. If a male face is dubbed with a female voice, or vice versa, there is still no difference in the strength of the McGurk effect. Knowing that the voice you hear differs from the face you see, even in gender, does not eliminate the McGurk effect.

Familiarity

Subjects who are familiar with the faces of the speakers are less susceptible to the McGurk effect than those who are unfamiliar with the faces of the speakers. On the other hand, there was no difference regarding voice familiarity.

Expectation

Semantic congruency has a significant impact on the McGurk illusion. The effect is experienced more often, and rated as clearer, in a semantically congruent condition than in an incongruent one. When a person expects a certain visual or auditory event based on the preceding semantic information, the McGurk effect is greatly increased.

Self influence

The McGurk effect can be observed when the listener is also the speaker or articulator. While looking at oneself in the mirror and articulating visual stimuli while listening to another auditory stimulus, a strong McGurk effect can be observed. In the other condition, where the listener speaks auditory stimuli softly while watching another person articulate the conflicting visual gestures, a McGurk effect can still be seen, although it is weaker.

Temporal synchrony

Temporal synchrony is not necessary for the McGurk effect to be present. Subjects are still strongly influenced by auditory stimuli even when the audio lags the visual stimuli by 180 milliseconds, the point at which the McGurk effect begins to weaken. There is less tolerance for asynchrony when the auditory stimuli precede the visual stimuli: to produce a significant weakening of the McGurk effect, the auditory stimuli need only precede the visual stimuli by 60 milliseconds, whereas they must lag by 240 milliseconds.
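
These tolerances can be summarized in a small sketch (an illustrative restatement of the figures above, not a perceptual model; the category labels are invented):

    def mcgurk_strength(audio_offset_ms):
        # Negative offset = audio precedes the video; positive = audio lags it.
        # Thresholds (-60 ms lead, +180/+240 ms lag) are taken from the text.
        if audio_offset_ms <= -60 or audio_offset_ms >= 240:
            return "significantly weakened"
        if audio_offset_ms >= 180:
            return "beginning to weaken"
        return "strong"

    for offset in (-60, 0, 180, 240):
        print(f"{offset:+d} ms -> {mcgurk_strength(offset)}")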

Physical task diversion

The McGurk effect is greatly reduced when attention is diverted to a tactile task (touching something). Touch is a sensory perception like vision and audition, so increasing attention to touch decreases the attention paid to the auditory and visual senses.

Gaze

The eyes do not need to fixate in order to integrate audio and visual information in speech perception: there is no difference in the McGurk effect when the listener focuses anywhere on the speaker's face. The effect fades when the listener focuses beyond the speaker's face, becoming insignificant once the gaze deviates from the speaker's mouth by at least 60 degrees.

Other languages

People of all languages rely to some extent on visual information in speech perception, but the intensity of the McGurk effect varies between languages. Dutch, English, Spanish, German, Italian, and Turkish listeners experience a robust McGurk effect, while it is weaker for Japanese and Chinese listeners. Most cross-linguistic research on the McGurk effect has compared English and Japanese: the effect is smaller in Japanese listeners than in English listeners. The cultural practice of face avoidance in Japanese people may play a role, as may the tonal and syllabic structures of the language. This could also be why Chinese listeners are less susceptible to visual cues and, like Japanese listeners, produce a smaller effect than English listeners. Studies have also shown that Japanese listeners do not show a developmental increase in visual influence after the age of six, as English-speaking children do. Japanese listeners are better able than English listeners to identify an incompatibility between the visual and auditory stimulus, which may relate to the fact that Japanese has no consonant clusters. In noisy environments where speech is unintelligible, however, people of all languages resort to visual stimuli and are then equally subject to the McGurk effect. The McGurk effect works with speech perceivers of every language for which it has been tested.

Hearing impairment

Experiments have been conducted involving hard-of-hearing individuals as well as individuals who have received cochlear implants. These individuals tend to weigh visual information from speech more heavily than auditory information. In comparison with normal-hearing individuals, their responses are no different unless the stimulus is longer than one syllable, such as a word. In the McGurk paradigm, cochlear implant users produce the same responses as normal-hearing individuals when an auditory bilabial stimulus is dubbed onto a visual velar stimulus, but when an auditory dental stimulus is dubbed onto a visual bilabial stimulus, the responses are quite different. The McGurk effect is thus still present in individuals with impaired hearing or cochlear implants, although it differs in some respects.

Infants

By measuring an infant's attention to certain audiovisual stimuli, a response consistent with the McGurk effect can be recorded. From minutes to a few days after birth, infants can imitate adult facial movements, and within weeks of birth they can recognize lip movements and speech sounds. At this point the integration of audio and visual information can happen, but not at a proficient level. The first evidence of the McGurk effect appears at four months of age, with stronger evidence at five months. Through the process of habituating an infant to a certain stimulus and then changing the stimulus (or part of it, such as ba-voiced/va-visual to da-voiced/va-visual), a response that simulates the McGurk effect becomes apparent. The strength of the McGurk effect increases throughout childhood and extends into adulthood.

Phonemic restoration effect

From Wikipedia, the free encyclopedia
 
The phonemic restoration effect is a perceptual phenomenon in which, under certain conditions, sounds actually missing from a speech signal can be restored by the brain and may appear to be heard. The effect occurs when missing phonemes in an auditory signal are replaced with a noise that would have the physical properties to mask those phonemes, creating an ambiguity. Given such ambiguity, the brain tends to fill in the absent phonemes. The effect can be so strong that some listeners do not even notice that phonemes are missing. It is commonly observed in conversation with heavy background noise, which makes it difficult to hear every phoneme being spoken. Different factors can change the strength of the effect, including the richness of contextual and linguistic cues in the speech, as well as the listener's state, such as hearing status or age.

This effect is more important to humans than was initially thought. Linguists have pointed out that English, at least, has many false starts and extraneous sounds; the phonemic restoration effect is the brain's way of resolving those imperfections in speech. Without it, our language processing would require much more accurate speech signals, and human speech would demand far more precision. In experiments, white noise is used because it can take the place of these imperfections in speech. One of the most important properties of language is continuity, and in turn intelligibility.
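
A minimal sketch of how such restoration stimuli are constructed follows (a synthetic tone stands in for recorded speech, and all parameters are invented): a segment of the waveform is excised and replaced, not merely overlaid, with noise loud enough to have masked it, which creates the ambiguity that lets top-down processing fill the phoneme back in.

    import numpy as np

    rate = 16_000                                       # samples per second
    t = np.linspace(0, 1.0, rate, endpoint=False)
    signal = 0.5 * np.sin(2 * np.pi * 220 * t)          # stand-in for speech

    # Replace a 150 ms "phoneme" with white noise rather than silence.
    start, stop = int(0.40 * rate), int(0.55 * rate)
    rng = np.random.default_rng(0)
    signal[start:stop] = 0.8 * rng.standard_normal(stop - start)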


Background

The phonemic restoration effect was first documented in a 1970 paper by Richard M. Warren entitled "Perceptual Restoration of Missing Speech Sounds". The purpose of the experiment was to explain why, against a background of extraneous sounds, masked individual phonemes remain comprehensible.
“The state governors met with their respective legislatures convening in the capital city.”
In his initial experiments, Warren presented the sentence shown above, replacing the first 's' phoneme in "legislatures" with an extraneous noise in the form of a cough. In a small group of 20 subjects, 19 did not notice a missing phoneme, and one person misidentified it. This indicated that, in the absence of a phoneme, the brain filled it in through top-down processing. The phenomenon was somewhat known at the time, but no one had pinpointed why it occurred or given it a label. He repeated the experiment with the sentence:
“It was found that the wheel was on the axle.”
He replaced the 'wh' sound in "wheel", with the same results: all people tested wrote down "wheel". Warren went on to research the subject extensively over the next several decades.

Since Warren, much research has been done to test the various aspects of the effect. These aspects include how many phonemes can be removed, what noise is played in replacement of the phoneme, and how different contexts alter the effect.

Neuroanatomy

Image: Human auditory cortex.

Neurally, the signs of interrupted or stopped speech can be suppressed in the thalamus and auditory cortex, possibly as a consequence of top-down processing by the auditory system. Key aspects of the speech signal itself are considered to be resolved somewhere at the interface between auditory and language-specific areas (for example, Wernicke's area), in order for the listener to determine what is being said. Normally, this determination is thought to be instantiated at the end stages of the language-processing system, but for restorative processes, much remains unknown about whether the same stages are responsible for the ability to actually fill in the missing phoneme.

Phonemic restoration is one of several phenomena demonstrating that prior, existing knowledge in the brain provides it with tools to attempt a guess at missing information, something in principle similar to an optical illusion. It is believed that humans and other vertebrates have evolved the ability to complete acoustic signals that are critical but communicated under naturally noisy conditions. For humans, while it is not fully known at what point in the processing hierarchy the phonemic restoration effect occurs, evidence points to dynamic restorative processes already occurring with basic modulations of sound set at natural articulation rates. Recent research using direct neurophysiological recordings from human epilepsy patients implanted with electrodes over auditory and language cortex has shown that the lateral superior temporal gyrus (STG; a core part of Wernicke's area) represents the missing sound that listeners perceive. This research also demonstrated that perception-related neural activity in the STG is modulated by left inferior frontal cortex, which contains signals that predict what sound listeners will report hearing up to about 300 milliseconds before the sound is even presented.

Factors

Hearing impairment

People with mild and moderate hearing loss have been tested for the effectiveness of phonemic restoration. Those with mild hearing loss performed at the same level as normal-hearing listeners. Those with moderate hearing loss showed almost no restoration and failed to identify the missing phonemes. Because of the nature of top-down processing, these results also depend on how many of the words the listener comfortably understands.

Cochlear implants

[Figure: Cochlear implant]

For people with cochlear implants, acoustic simulations of the implant have indicated the importance of spectral resolution. When the brain uses top-down processing, it draws on as much information as it can to decide whether the filler signal in the gap belongs to the speech; with lower resolution there is less information from which to make a correct guess. A study with actual cochlear implant users indicated that some implant users can benefit from phonemic restoration, but again they seem to need more speech information (a longer duty cycle, in this case) to achieve it.
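A common way to present implant-like spectral resolution to normal-hearing listeners is noise vocoding. The sketch below is a rough illustration of that general technique, not the processing from any particular study; band edges, filter orders, and cutoff values are arbitrary assumptions. The signal is split into a few frequency bands, each band's envelope modulates band-limited noise, and the bands are summed; fewer bands means lower spectral resolution.

import numpy as np
from scipy import signal

def noise_vocode(x, sr, n_bands=4, f_lo=100.0, f_hi=7000.0, env_cut=160.0):
    # Split the input into log-spaced frequency bands, extract each band's
    # amplitude envelope, and use it to modulate band-limited noise.
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    env_lp = signal.butter(2, env_cut, btype="lowpass", fs=sr, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = signal.butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        x_band = signal.sosfiltfilt(band, x)
        env = np.clip(signal.sosfiltfilt(env_lp, np.abs(x_band)), 0, None)
        carrier = signal.sosfiltfilt(band, rng.standard_normal(len(x)))
        out += env * carrier
    return out

# Demo: a 4-band (low-resolution) versus 16-band (higher-resolution)
# rendering of a synthetic amplitude-modulated tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
x = 0.1 * np.sin(2 * np.pi * 300 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
low_res = noise_vocode(x, sr, n_bands=4)
high_res = noise_vocode(x, sr, n_bands=16)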

Age

Age effects have been studied in children and older adults, to determine whether children can benefit from phonemic restoration and, if so, to what degree, and whether older adults retain the restoration capacity in the face of age-related neurophysiological changes.

Children produce results approaching those of adults by about the age of 5, though they still do not perform as well. At such an early age, most information is processed bottom-up, because children have less stored knowledge to draw on. Even so, they are able to use prior knowledge of words to fill in missing phonemes with far less developed brains than adults.

Older adults (over 65 years) with no or minimal hearing loss benefit from phonemic restoration. In some conditions the restoration effect can even be stronger in older adults than in younger adults, despite older adults' lower overall speech perception scores. This is likely due to the strong linguistic and vocabulary skills that are maintained into advanced age.

Gender

In children, there was no effect of gender on phonemic restoration.

In adults, instead of completely replacing the phonemes, researchers masked them with tones that were informative (helped the listener pick the correct phoneme), uninformative (neither helped nor hurt the listener's selection), or misinformative (led the listener toward the wrong phoneme). The results showed that women were much more affected by the informative and misinformative cues than men, suggesting that women are influenced by top-down semantic information more than men are.

Setting

The effect reverses in a reverberant room, which resembles everyday listening conditions more closely than the quiet rooms typically used for experimentation. Reverberation allows echoes of the spoken phonemes themselves to act as the replacement noise for missing phonemes, whereas added white noise replacing a phoneme contributes its own echo and causes listeners to perform worse.
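For intuition, reverberation can be approximated in simulation by convolving the dry stimulus with a synthetic impulse response. The sketch below, with an arbitrary decay time, is one minimal way to do that; it is not a model of the room used in the study described.

import numpy as np
from scipy.signal import fftconvolve

def add_reverb(x, sr, rt60=0.8, rng=None):
    # Convolve `x` with a synthetic impulse response: noise with an
    # exponential decay reaching -60 dB at t = rt60 seconds.
    rng = rng or np.random.default_rng(0)
    n = int(rt60 * sr)
    t = np.arange(n) / sr
    ir = rng.standard_normal(n) * 10 ** (-3.0 * t / rt60)
    wet = fftconvolve(x, ir)[: len(x)]
    return wet / np.max(np.abs(wet))  # normalize to avoid clipping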

Rate

Another of Warren's studies examined the effect of the duration of the replacement noise on comprehension. Because the brain processes information optimally at a certain rate, the effect started to break down once the gap approached the length of a whole word. At that point the effect fails because the listener becomes aware of the gap.

Multisensory

Much like the McGurk effect, when listeners were also able to see the words being spoken, they were much more likely to correctly identify the missing phonemes. As with every sense, the brain uses every piece of information it deems relevant to judge what it is perceiving. Given the visual cues of mouth movements, the brain combines both sources in top-down processing to decide what phoneme should have been heard. Vision is the dominant sense for humans and, for the most part, assists speech perception the most.

Context

Because languages are distinctly structured, the brain has some sense of what word should come next in a well-formed sentence. When listeners heard properly structured sentences with missing phonemes, they performed much better than with nonsensical sentences lacking proper structure. This draws on the predictive role of the prefrontal cortex in determining what word should come next for the sentence to make sense. Top-down processing relies on the surrounding information in a sentence to fill in what is missing; if the sentence makes no sense to the listener, there is little higher-level information to draw on. By analogy, if a piece of a puzzle showing a familiar picture were missing, it would be very simple for the brain to infer what that piece looks like; if the picture makes no sense or has never been seen before, the brain has much more difficulty working out what is missing.

Intensity

The effect only works properly when the noise replacing the phonemes is as loud as or louder than the surrounding words. This becomes apparent when listeners hear a sentence with gaps replaced by white noise repeated over and over with the white-noise volume increasing on each iteration: the sentence becomes clearer and clearer as the noise grows louder.
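As a small illustration of this manipulation, the sketch below scales white noise relative to the RMS level of the surrounding speech, in decibels; positive values put the noise above the speech level, the regime in which restoration works. The function name and dB convention are illustrative assumptions.

import numpy as np

def scaled_noise(surrounding, n_samples, rel_db=0.0, rng=None):
    # White noise `rel_db` decibels above (positive) or below (negative)
    # the RMS level of the surrounding speech.
    rng = rng or np.random.default_rng(0)
    speech_rms = float(np.sqrt(np.mean(surrounding ** 2)))
    target_rms = speech_rms * 10 ** (rel_db / 20)
    noise = rng.standard_normal(n_samples)
    return noise * target_rms / np.sqrt(np.mean(noise ** 2))

# e.g. filler = scaled_noise(speech_around_gap, gap_len, rel_db=3.0)
# would produce a filler 3 dB above the surrounding speech level.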

Dichotic listening

In one experiment, a word's 's' segment was removed and replaced by silence, with a comparable noise segment presented dichotically. Simply put, one ear heard the full sentence with no phoneme excised, while the other ear heard the sentence with the 's' sound removed. This version of the phonemic restoration effect was particularly strong, because the brain had much less guesswork to do: the missing information was supplied directly to the listener. Observers reported hearing exactly the same sentence in both ears, even though one ear was missing a phoneme.
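A sketch of how such a two-ear stimulus might be assembled as a stereo array follows; it uses the "intact sentence in one ear, excised sentence in the other" reading above, with silence in the gap. Segment times and names are illustrative assumptions.

import numpy as np

def dichotic_stimulus(sentence, sr, start_s, end_s):
    # Channel 0 (one ear): the intact sentence.
    # Channel 1 (other ear): the same sentence with the 's' segment silenced.
    excised = sentence.astype(float).copy()
    i0, i1 = int(start_s * sr), int(end_s * sr)
    excised[i0:i1] = 0.0
    return np.stack([sentence.astype(float), excised], axis=1)

# The resulting (n_samples, 2) array can be written out as a stereo file,
# e.g. with soundfile.write("dichotic.wav", stim, sr).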

Language

The restoration effect has been studied mostly in English and Dutch, in which it appears similar. While no research has directly compared the effect across other languages, it is assumed to be universal across languages.
