Monday, May 4, 2020

Animal language

From Wikipedia, the free encyclopedia
 
Animal languages are forms of non-human animal communication that show similarities to human language. Animals communicate by using a variety of signs such as sounds or movements. Such signing may be considered complex enough to be called a form of language if the inventory of signs is large, the signs are relatively arbitrary, and the animals seem to produce them with a degree of volition (as opposed to relatively automatic conditioned behaviors or unconditioned instincts, usually including facial expressions). In experimental tests, animal communication may also be evidenced through the use of lexigrams (as used by chimpanzees and bonobos). While the term "animal language" is widely used, researchers agree that animal languages are not as complex or expressive as human language.

Many researchers argue that animal communication lacks a key aspect of human language: the creation of new patterns of signs under varied circumstances. (Humans, for example, routinely produce entirely new combinations of words.) Some researchers, including the linguist Charles Hockett, argue that human language and animal communication differ so much that the underlying principles are unrelated. Accordingly, the linguist Thomas A. Sebeok has proposed not to use the term "language" for animal sign systems. In contrast, Marc Hauser, Noam Chomsky, and W. Tecumseh Fitch assert that an evolutionary continuum exists between the communication methods of animals and human language.

Aspects of human language

A human and an ape: Claudine André with a bonobo.

The following properties of human language have been argued to separate it from animal communication:
  • Arbitrariness: there is usually no rational relationship between a sound or sign and its meaning. For example, there is nothing intrinsically house-like about the word "house".
  • Discreteness: language is composed of small, repeatable parts (discrete units) that are used in combination to create meaning.
  • Displacement: languages can be used to communicate ideas about things that are not in the immediate vicinity either spatially or temporally.
  • Duality of patterning: the smallest meaningful units (words, morphemes) consist of sequences of units without meaning. This is also referred to as double articulation.
  • Productivity: users can understand and create an indefinitely large number of utterances.
  • Semanticity: specific signals have specific meanings.
Research with apes, like that of Francine Patterson with Koko the gorilla or Allen and Beatrix Gardner with Washoe the chimpanzee, suggested that apes are capable of using language that meets some of these requirements, such as arbitrariness, discreteness, and productivity.

In the wild, chimpanzees have been seen "talking" to each other when warning about approaching danger. For example, if one chimpanzee sees a snake, it makes a low, rumbling noise, signaling for the other chimps to climb into nearby trees. In this case, the chimpanzees' communication does not indicate displacement, as it is entirely tied to an observable event.

Arbitrariness has been noted in meerkat calls; bee dances demonstrate elements of spatial displacement; and cultural transmission has possibly occurred between the celebrated bonobos Kanzi and Panbanisha.

Human language may not be completely "arbitrary." Research has shown that almost all humans naturally demonstrate limited crossmodal perception (e.g. synesthesia) and multisensory integration, as illustrated by the bouba/kiki study. Other recent research has tried to explain how the structure of human language emerged, comparing two different aspects of hierarchical structure present in animal communication and proposing that human language arose out of these two separate systems.

Claims that animals have language skills akin to humans, however, are extremely controversial. As Steven Pinker illustrates in his book The Language Instinct, claims that chimpanzees can acquire language are exaggerated and rest on very limited or specious data.

The American linguist Charles Hockett theorized that there are sixteen features of human language that distinguish human communication from that of animals. He called these the design features of language. The features mentioned below have so far been found in all spoken human languages, and at least one is missing from all other animal communication systems.
  • Vocal-auditory channel: sounds emitted from the mouth and perceived by the auditory system. This applies to many animal communication systems, but there are many exceptions. One alternative to vocal-auditory communication is visual communication: cobras, for example, extend the ribs behind their heads to send a message of intimidation or of feeling threatened. In humans, sign languages provide many examples of fully formed languages that use a visual channel.
  • Broadcast transmission and directional reception: this requires that the recipient can tell the direction that the signal comes from and thus the originator of the signal.
  • Rapid fading (transitory nature): Signal lasts a short time. This is true of all systems involving sound. It does not take into account audio recording technology and is also not true for written language. It tends not to apply to animal signals involving chemicals and smells which often fade slowly. For example, a skunk's smell, produced in its glands, lingers to deter a predator from attacking.
  • Interchangeability: All utterances that are understood can be produced. This is different from some communication systems where, for example, males produce one set of behaviours and females another and they are unable to interchange these messages so that males use the female signal and vice versa. For example, Heliothine moths have differentiated communication: females are able to send a chemical to indicate preparedness to mate, while males cannot send the chemical.
  • Total feedback: The sender of a message is aware of the message being sent.
  • Specialization: The signal produced is intended for communication and is not due to another behavior. For example, dog panting is a natural reaction to being overheated, but is not produced to specifically relay a particular message.
  • Semanticity: There is some fixed relationship between a signal and a meaning.
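
Hockett's feature-by-feature comparison can be made concrete with a small sketch. The following Python snippet is illustrative only; the feature assignments are simplified examples drawn loosely from the list above, not settled research findings. It models a few design features as booleans and reports which ones a given communication system lacks:

```python
# Illustrative sketch: comparing communication systems against a subset of
# Hockett's design features. The feature assignments below are simplified
# examples for demonstration, not settled research findings.

FEATURES = ["vocal_auditory", "rapid_fading", "interchangeability",
            "total_feedback", "specialization", "semanticity"]

SYSTEMS = {
    # Spoken human language exhibits all of the listed features.
    "human_speech":   {"vocal_auditory": True,  "rapid_fading": True,
                       "interchangeability": True, "total_feedback": True,
                       "specialization": True, "semanticity": True},
    # Heliothine moth pheromones: females signal readiness to mate but males
    # cannot send the chemical back, so interchangeability fails; chemical
    # signals also fade slowly, so rapid fading fails too.
    "moth_pheromone": {"vocal_auditory": False, "rapid_fading": False,
                       "interchangeability": False, "total_feedback": True,
                       "specialization": True, "semanticity": True},
}

def missing_features(system: str) -> list[str]:
    """Return the design features a system lacks relative to the full list."""
    return [f for f in FEATURES if not SYSTEMS[system][f]]

for name in SYSTEMS:
    print(name, "lacks:", missing_features(name) or "none")
```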

Primates: studied examples

Humans are able to distinguish real words from fake words based on the phonological order of the word itself. In a 2013 study, baboons were shown to have this skill as well. The discovery has led researchers to believe that reading is not as advanced a skill as previously believed, but is instead based on the ability to recognize and distinguish letters from one another. The experimental setup consisted of six young adult baboons; results were measured by allowing the animals to use a touch screen to select whether the displayed string was a real word or a nonword such as "dran" or "telk". The study lasted six weeks, with approximately 50,000 tests completed in that time. The experimenters explain the use of bigrams, which are combinations of two (usually different) letters: the bigrams used in nonwords are rare, while the bigrams used in real words are more common. Further studies will attempt to teach baboons how to use an artificial alphabet.
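
The bigram logic described above lends itself to a short illustration. In this Python sketch, the tiny vocabulary and the scoring rule are hypothetical placeholders, not the study's actual materials; it scores a string by how common its bigrams are in a set of real words, which is the statistical cue the baboons appear to exploit:

```python
# Minimal sketch of a bigram-frequency word/nonword discriminator, in the
# spirit of the baboon study. The tiny training list is an illustrative
# placeholder, not the vocabulary used in the actual experiment.
from collections import Counter

TRAINING_WORDS = ["house", "table", "water", "plane", "snake", "green"]

def bigrams(word: str) -> list[str]:
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Count how often each bigram appears across the training vocabulary.
counts = Counter(b for w in TRAINING_WORDS for b in bigrams(w))

def bigram_score(word: str) -> float:
    """Average training-set frequency of the word's bigrams."""
    bs = bigrams(word)
    return sum(counts[b] for b in bs) / len(bs)

# Real-looking strings share common bigrams with the vocabulary;
# nonwords such as "telk" or "dran" mostly contain rare or unseen bigrams.
for s in ["table", "telk", "dran"]:
    print(s, round(bigram_score(s), 2))
```
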
In a 2016 study, a team of biologists from several universities concluded that macaques possess vocal tracts physically capable of speech, "but lack a speech-ready brain to control it".

Non-primates: studied examples

Among the most studied examples of animal languages are:

Birds

  • Bird songs: Songbirds can be very articulate. Grey parrots are famous for their ability to mimic human language, and at least one specimen, Alex, appeared able to answer a number of simple questions about objects he was presented with. Parrots, hummingbirds, and songbirds all display vocal learning.

Insects

  • Bee dance: Used to communicate direction and distance of food source in many species of bees.

Mammals

  • African forest elephants: Cornell University's Elephant Listening Project began in 1999 when Katy Payne began studying the calls of African forest elephants in Dzanga National Park in the Central African Republic. Andrea Turkalo has continued Payne's work in Dzanga National Park, observing elephant communication. For nearly 20 years, Turkalo has spent the majority of her time using spectrograms to record the noises that the elephants make. After extensive observation and research, she has become able to recognize elephants by their voices. Researchers hope to translate these voices into an elephant dictionary, but that will likely not occur for many years. Because elephant calls are often made at very low frequencies, spectrograms can reveal frequencies lower than human ears are able to hear, allowing Turkalo to get a better idea of what she perceives the elephants to be saying (see the sketch after this list). Cornell's research on African forest elephants has challenged the idea that humans are considerably better at using language and that animals only have a small repertoire of information they can convey to others. As Turkalo explained on 60 Minutes' "The Secret Language of Elephants," "Many of their calls are in some ways similar to human speech."
  • Mustached bats: Since these animals spend most of their lives in the dark, they rely heavily on their auditory system to communicate. This acoustic communication includes echolocation or using calls to locate each other in the darkness. Studies have shown that mustached bats use a wide variety of calls to communicate with one another. These calls include 33 different sounds, or "syllables," that the bats then either use alone or combine in various ways to form "composite" syllables.
  • Prairie dogs: Dr. Con Slobodchikoff studied prairie dog communication and discovered:
    • different alarm calls for different species of predators;
    • different escape behaviors for different species of predators;
    • transmission of semantic information, in that playbacks of alarm calls in the absence of predators lead to escape behaviors that are appropriate to the type of predator which elicited the alarm calls;
    • alarm calls containing descriptive information about the general size, color, and speed of travel of the predator.
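
As a rough illustration of the point about infrasonic elephant calls above, the sketch below builds a synthetic ten-second recording containing a 14 Hz component (below the roughly 20 Hz floor of human hearing) and shows that a spectrogram still captures its energy. All numbers are hypothetical stand-ins for real field recordings:

```python
# Minimal sketch: why a spectrogram reveals infrasonic calls that human
# ears (roughly 20 Hz - 20 kHz) miss. The 14 Hz "call" is a synthetic
# stand-in for a real elephant recording.
import numpy as np
from scipy import signal

fs = 1000                                    # sample rate in Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)                 # ten seconds of audio
infrasound = np.sin(2 * np.pi * 14 * t)      # 14 Hz: below human hearing
audible = 0.5 * np.sin(2 * np.pi * 300 * t)  # 300 Hz: easily audible
recording = infrasound + audible

# Frequency resolution is fs / nperseg, fine enough to resolve 14 Hz.
f, seg_t, Sxx = signal.spectrogram(recording, fs=fs, nperseg=1024)

# The infrasonic band carries substantial energy even though we cannot hear it.
low_band = Sxx[f < 20].sum()
print(f"energy below 20 Hz: {low_band:.1f} (visible on the spectrogram)")
```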

Aquatic mammals

  • Bottlenose dolphins: Dolphins can hear one another up to 6 miles apart underwater. One National Geographic article described a mother dolphin successfully communicating with her calf using a telephone. Researchers noted that both dolphins appeared to know who they were speaking with and what they were speaking about. Dolphins not only communicate via nonverbal cues, but also seem to chatter and respond to other dolphins' vocalizations.
Spectrogram of humpback whale vocalizations. Detail is shown for the first 24 seconds of the 37-second recording of a humpback whale "song". The ethereal whale "songs" and echolocation "clicks" are visible as horizontal striations and vertical sweeps, respectively.
  • Whales: Two groups of whales, the humpback whale and a subspecies of blue whale found in the Indian Ocean, are known to produce repetitious sounds at varying frequencies known as whale song. Male humpback whales perform these vocalizations only during the mating season, so it is surmised that the purpose of the songs is to aid sexual selection. Humpbacks also make a sound called a feeding call, five to ten seconds long and of near-constant frequency. Humpbacks generally feed cooperatively by gathering in groups, swimming underneath shoals of fish, and all lunging up vertically through the fish and out of the water together. Prior to these lunges, whales make their feeding call. The exact purpose of the call is not known, but research suggests that fish react to it. When the sound was played back to them, a group of herring responded by moving away from the call, even though no whale was present.
  • Sea lions: Beginning in 1971 and continuing to the present day, Dr. Ronald J. Schusterman and his research associates have studied sea lions' cognitive ability. They have discovered that sea lions are able to recognize relationships between stimuli based on similar functions or connections made with their peers, rather than only the stimuli's common features. This is called "equivalence classification". This ability to recognize equivalence may be a precursor to language. Research is currently being conducted at the Pinniped Cognition & Sensory Systems Laboratory to determine how sea lions form these equivalence relationships. Sea lions have also been shown to understand simple syntax and commands when taught an artificial sign language similar to the one used with primates. The sea lions studied were able to learn and use a number of syntactic relations between the signs they were taught, such as how the signs should be arranged in relation to each other. However, the sea lions rarely used the signs semantically or logically. In the wild, it is thought that sea lions use the reasoning skills associated with equivalence classification to make important decisions affecting their survival (e.g. recognizing friends and family or avoiding enemies and predators). Sea lions display their communication in the following ways:
    • Sea lions use their bodies in various postural positions to display communication.
    • Sea lions' vocal cords limit their ability to convey sounds to a range of barks, chirps, clicks, moans, growls, and squeaks.
    • No experiment has yet proven conclusively that sea lions use echolocation as a means of communication.
The effects of learning on auditory signaling in these animals are of special interest. Several investigators have pointed out that some marine mammals appear to have an extraordinary capacity to alter both the contextual and structural features of their vocalizations as a result of experience. Janik and Slater (2000) have stated that learning can modify the emission of vocalizations in one of two ways: (1) by influencing the context in which a particular signal is used and/or (2) by altering the acoustic structure of the call itself. Male California sea lions can learn to inhibit their barking in the presence of any male dominant to them, but vocalize normally when dominant males are absent. Recent work on gray seals shows that different call types can be selectively conditioned and placed under the biased control of different cues (Schusterman, in press), and that food reinforcement can also modify vocal emissions. "Hoover", a captive male harbor seal, demonstrated a convincing case of vocal mimicry; however, similar observations have not been reported since. Still, this shows that under the right circumstances pinnipeds may use auditory experience, in addition to environmental consequences such as food reinforcement and social feedback, to modify their vocal emissions.

In a 1992 study, Robert Gisiner and Ronald J. Schusterman conducted experiments in which they attempted to teach Rocky, a female California sea lion, syntax. Rocky was taught signed words, then was asked to perform various tasks dependent on word order after viewing a signed instruction. It was found that Rocky was able to determine relations between signs and words, and to form a basic syntax. A 1993 study by Ronald J. Schusterman and David Kastak found that the California sea lion was capable of understanding abstract concepts such as symmetry, sameness, and transitivity. This provides strong backing for the theory that equivalence relations can form without language.
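
The equivalence results described above follow the mathematical notion of an equivalence relation: if stimulus A goes with B and B goes with C, then symmetry and transitivity imply A goes with C. The sketch below (the stimulus names are hypothetical, not the ones used in the experiments) computes the resulting classes from trained pairings using a small union-find closure:

```python
# Minimal sketch of equivalence classification: given trained stimulus
# pairings, derive the full classes by symmetry and transitivity, the
# kind of inference Schusterman and Kastak tested in sea lions.

def equivalence_classes(pairs):
    """Union-find closure: merge stimuli linked directly or indirectly."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for a, b in pairs:                      # union each trained pairing
        parent[find(a)] = find(b)

    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# Trained: crab~tulip and tulip~radio. Symmetry and transitivity then
# imply crab~radio without any direct training on that pair.
print(equivalence_classes([("crab", "tulip"), ("tulip", "radio"),
                           ("moon", "star")]))
```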

The distinctive sound of sea lions is produced both above and below water. To mark territory, sea lions "bark", with non-alpha males making more noise than alphas. Although females also bark, they do so less frequently and most often in connection with birthing pups or caring for their young. Females produce a highly directional bawling vocalization, the pup attraction call, which helps mother and pup locate one another. As noted in Animal Behavior, their amphibious lifestyle has made acoustic communication necessary for social organization while on land.

Sea lions can hear frequencies as low as 100 Hz and as high as 40,000 Hz, and they vocalize in the range of 100 to 10,000 Hz.

Mollusks

  • Caribbean reef squid have been shown to communicate using a variety of color, shape, and texture changes. Squid are capable of rapid changes in skin color and pattern through nervous control of chromatophores. In addition to camouflage and appearing larger in the face of a threat, squids use color, patterns, and flashing to communicate with one another in various courtship rituals. Caribbean reef squid can send one message via color patterns to a squid on their right, while they send another message to a squid on their left.

Comparison of the terms "animal language" and "animal communication"

It is worth distinguishing "animal language" from "animal communication", although there is some comparative interchange in certain cases (e.g. Cheney & Seyfarth's vervet monkey call studies). Thus "animal language" typically does not include bee dancing, bird song, whale song, dolphin signature whistles, prairie dog calls, or the communicative systems found in most social mammals. The features of language listed above are a dated formulation by Hockett in 1960. Through this formulation Hockett made one of the earliest attempts to break down the features of human language for the purpose of applying Darwinian gradualism. Although it influenced early animal language efforts (see below), it is today not considered the key architecture at the core of "animal language" research.

"Clever Hans", an Orlov Trotter horse that was claimed to have been able to perform arithmetic and other intellectual tasks.
 
Animal language results are controversial for several reasons. (For a related controversy, see also Clever Hans.) In the 1970s John Lilly attempted to "break the code": to fully communicate ideas and concepts with wild populations of dolphins so that we could "speak" to them and share our cultures, histories, and more. This effort failed. Early chimpanzee work was done with chimpanzee infants raised as if they were human, a test of the nature vs. nurture hypothesis. Chimpanzees have a laryngeal structure very different from that of humans, and it has been suggested that chimpanzees are not capable of voluntary control of their breathing, although better studies are needed to confirm this accurately. This combination is thought to make it very difficult for chimpanzees to reproduce the vocal intonations required for human language. Researchers eventually moved towards a gestural (sign language) modality, as well as "keyboard" devices laden with buttons adorned with symbols (known as "lexigrams") that the animals could press to produce artificial language. Other chimpanzees learned by observing human subjects performing the task. This latter group of researchers, studying chimpanzee communication through symbol recognition (keyboards) as well as sign language (gestures), is at the forefront of communicative breakthroughs in the study of animal language, and its members know their subjects on a first-name basis: Sarah, Lana, Kanzi, Koko, Sherman, Austin and Chantek.

Perhaps the best known critic of "Animal Language" is Herbert Terrace. Terrace's 1979 criticism using his own research with the chimpanzee Nim Chimpsky was scathing and basically spelled the end of animal language research in that era, most of which emphasized the production of language by animals. In short, he accused researchers of over-interpreting their results, especially as it is rarely parsimonious to ascribe true intentional "language production" when other simpler explanations for the behaviors (gestural hand signs) could be put forth. Also, his animals failed to show generalization of the concept of reference between the modalities of comprehension and production; this generalization is one of many fundamental ones that are trivial for human language use. The simpler explanation according to Terrace was that the animals had learned a sophisticated series of context-based behavioral strategies to obtain either primary (food) or social reinforcement, behaviors that could be over-interpreted as language use. 

In 1984, during this anti-animal-language backlash, Louis Herman published an account of artificial language in the bottlenosed dolphin in the journal Cognition. A major difference between Herman's work and previous research was his emphasis on studying language comprehension only (rather than comprehension and production by the animals), which enabled rigorous controls and statistical tests, largely because he limited his researchers to evaluating the animals' physical behaviors (in response to sentences) with blinded observers, rather than attempting to interpret possible language utterances or productions. The dolphins here were named Akeakamai and Phoenix. Irene Pepperberg used the vocal modality for language production and comprehension in a grey parrot named Alex, and Sue Savage-Rumbaugh continues to study bonobos such as Kanzi and Panbanisha. R. Schusterman duplicated many of the dolphin results in his California sea lions ("Rocky"), coming from a more behaviorist tradition than Herman's cognitive approach; his emphasis is on the importance of a learning structure known as "equivalence classes".

Overall, however, there has been no meaningful dialogue between the linguistics and animal language spheres, despite the topic's capture of the public's imagination in the popular press. The growing field of language evolution is another source of future interchange between these disciplines. Most primate researchers tend to show a bias toward a shared pre-linguistic ability between humans and chimpanzees, dating back to a common ancestor, while dolphin and parrot researchers stress the general cognitive principles underlying these abilities. More recent related controversies regarding animal abilities include the closely linked areas of Theory of Mind, Imitation (e.g. Nehaniv & Dautenhahn, 2002), Animal Culture (e.g. Rendell & Whitehead, 2001), and Language Evolution (e.g. Christiansen & Kirby, 2003).

Recent animal language research has contested the idea that animal communication is less sophisticated than human communication. Denise Herzing has done research on dolphins in the Bahamas in which she created two-way communication via a submerged keyboard. The keyboard allows divers to communicate with wild dolphins. Using the sounds and symbols on each key, the dolphins could either press the key with their nose or mimic the whistling sound emitted in order to ask humans for a specific prop. This ongoing experiment has shown that rapid, flexible thinking does occur in non-linguistic creatures, despite earlier conceptions of animal communication. Further research done with Kanzi using lexigrams has strengthened the idea that animal communication is much more complex than we once thought.
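
One way to picture Herzing's keyboard setup is as a mapping from signals to props, where either a key press or a mimicked whistle counts as a request. The following sketch is a loose illustration under assumed names; the actual keys, whistle types, and props are not documented here:

```python
# Minimal sketch of the two-way keyboard idea: each key pairs a whistle
# with a prop, and either a key press or a mimicked whistle counts as a
# request. Key names, whistles, and props are hypothetical.
KEYBOARD = {
    "key_1": {"whistle": "rising-sweep", "prop": "scarf"},
    "key_2": {"whistle": "double-chirp", "prop": "rope"},
    "key_3": {"whistle": "low-warble",   "prop": "ball"},
}

def requested_prop(event: str) -> str | None:
    """Resolve a key press or a mimicked whistle to the requested prop."""
    for key, entry in KEYBOARD.items():
        if event == key or event == entry["whistle"]:
            return entry["prop"]
    return None                             # unrecognized signal

print(requested_prop("double-chirp"))       # rope: dolphin mimicked the whistle
print(requested_prop("key_3"))              # ball: dolphin pressed the key
```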

Receptive aphasia

From Wikipedia, the free encyclopedia

Receptive aphasia
Other names: Wernicke's aphasia, fluent aphasia, sensory aphasia
Broca's area and Wernicke's area
Specialty: Neurology

Wernicke's aphasia, also known as receptive aphasia, sensory aphasia or posterior aphasia, is a type of aphasia in which individuals have difficulty understanding written and spoken language. Patients with Wernicke's aphasia demonstrate fluent speech, characterized by a typical speech rate, intact syntactic abilities, and effortless speech output; they may therefore produce a large amount of speech without much meaning. Writing often reflects speech in that it tends to lack content or meaning. In most cases, motor deficits (i.e. hemiparesis) do not occur in individuals with Wernicke's aphasia. Individuals with Wernicke's aphasia are typically unaware of their errors in speech and do not realize their speech may lack meaning; they typically remain unaware of even their most profound language deficits.

Like many acquired language disorders, Wernicke's aphasia can be experienced in many different ways and to many different degrees. Patients diagnosed with Wernicke's aphasia can show severe language comprehension deficits; however, this depends on the severity and extent of the lesion. Severity may range from being unable to understand even the simplest spoken and/or written information to missing minor details of a conversation. Many people diagnosed with Wernicke's aphasia have difficulty with repetition of words and sentences and/or with working memory.

Wernicke's aphasia was named after German physician Carl Wernicke, who is credited with discovering the area of the brain responsible for language comprehension (Wernicke's area).

Signs and symptoms

The following are common symptoms seen in patients with Wernicke's aphasia:

  • Impaired comprehension: deficits in understanding (receptive) written and spoken language. This is because Wernicke's area is responsible for assigning meaning to the language that is heard; if it is damaged, the brain cannot comprehend the information being received.
  • Poor word retrieval: the ability to retrieve target words is impaired. This is also referred to as anomia.
  • Fluent speech: individuals with Wernicke's aphasia do not have difficulty producing connected speech that flows. Although the connection of the words may be appropriate, the words they are using may not belong together or make sense (see Production of jargon below).
  • Production of jargon: speech that lacks content, consists of typical intonation, and is structurally intact. Jargon can consist of a string of neologisms, as well as a combination of real words that do not make sense together in context. It may include word salad.
  • Awareness: individuals with Wernicke's aphasia are often not aware of their incorrect productions, which further explains why they do not correct themselves when they produce jargon, paraphasias, or neologisms.
  • Paraphasias (a classification sketch follows this list):
    • Phonemic (literal) paraphasias: the substitution, addition, or rearrangement of sounds, so that the error sounds like the target word. Often half of the word is still intact, which allows for easy comparison to the appropriate, original word.
      • Ex: "bap" for "map"
    • Semantic (verbal) paraphasias: saying a word that is related to the target word in meaning or category; frequently observed in Wernicke's aphasia.
      • Ex: "jet" for "airplane" or "knife" for "fork"
  • Neologisms: nonwords that have no relation to the target word.
    • Ex: "dorflur" for "shoe"
  • Circumlocution: talking around the target word.
    • Ex: "uhhh it's white...it's flat...you write on it…" (when referencing paper)
  • Press of speech: run-on speech.
    • Ex: if a clinician asks, "What do you do at a supermarket?" the individual may respond with "Well, the supermarket is a place. It is a place with a lot of food. My favorite food is Italian food. At a supermarket, I buy different kinds of food. There are carts and baskets. Supermarkets have lots of customers, and workers…"
  • Lack of hemiparesis: typically, no motor deficits are seen with a localized lesion in Wernicke's area.
  • Reduced retention span: reduced ability to retain information for extended periods of time.
  • Impairments in reading and writing: impairments can be seen in both reading and writing, with differing severity levels.
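
As referenced in the list above, the distinctions among phonemic paraphasias, semantic paraphasias, and neologisms can be illustrated with a small classifier. In this sketch, the semantic lexicon and the 50% overlap threshold are simplifying assumptions for demonstration, not clinical criteria; it applies the examples given above:

```python
# Minimal sketch: classifying an aphasic naming error as a phonemic
# paraphasia, semantic paraphasia, or neologism. The tiny lexicon and the
# half-overlap heuristic are illustrative simplifications.
from difflib import SequenceMatcher

SEMANTIC_NEIGHBORS = {"airplane": {"jet", "helicopter"},
                      "fork": {"knife", "spoon"}}

def classify_error(target: str, produced: str) -> str:
    if produced in SEMANTIC_NEIGHBORS.get(target, set()):
        return "semantic paraphasia"        # related word, e.g. jet/airplane
    overlap = SequenceMatcher(None, target, produced).ratio()
    if overlap >= 0.5:
        return "phonemic paraphasia"        # sounds like target, e.g. bap/map
    return "neologism"                      # no relation, e.g. dorflur/shoe

print(classify_error("map", "bap"))         # phonemic paraphasia
print(classify_error("airplane", "jet"))    # semantic paraphasia
print(classify_error("shoe", "dorflur"))    # neologism
```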

How to differentiate from other types of aphasia
  • Expressive aphasia (non-fluent Broca's aphasia): individuals have great difficulty forming complete sentences, generally producing only basic content words (leaving out words like "is" and "the").
  • Global aphasia: individuals have extreme difficulty with both expressive language (producing language) and receptive language (understanding language).
  • Anomic aphasia: the biggest hallmark is an individual's poor word-finding ability; speech is fluent and appropriate but full of circumlocutions (evident in both writing and speech).
  • Conduction aphasia: individuals can comprehend what is being said and are fluent in spontaneous speech, but they cannot repeat what is being said to them.

Causes

The most common cause of Wernicke's aphasia is stroke. Strokes may occur when blood flow to the brain is completely interrupted or severely reduced. This deprives the brain of oxygen and nutrients, causing brain cells to die within minutes. The primary classifications of stroke are hemorrhagic (a ruptured blood vessel) and ischemic (a blood clot reduces or completely stops blood flow). Two of the most common types of hemorrhagic stroke are subarachnoid hemorrhage and intracerebral hemorrhage. A subarachnoid hemorrhage occurs when an artery near the surface of the brain bursts, causing blood to leak into the space between the brain and the skull. An intracerebral hemorrhage occurs when a blood vessel inside the brain bursts, causing spillage into surrounding brain tissue. Three main causes of these hemorrhagic strokes are hypertension (uncontrolled high blood pressure), aneurysms (weak spots in blood vessel walls), and arteriovenous malformations (rupture of an abnormal tangle of thin-walled blood vessels). The other major classification, ischemic stroke, is the most common form of stroke and is further classified as embolic or thrombotic. Embolic strokes occur when a blood clot forms away from the brain, typically in the heart; a small portion of the clot breaks away and travels through the blood vessels until it reaches a vessel in the brain too small to pass through, causing a blockage. Thrombotic strokes, on the other hand, are due to a blood clot forming directly in one of the arteries that supply the brain. In general, stroke is the leading cause of disability worldwide.

"The middle cerebral arteries supply blood to the cortical areas involved in speech, language and swallowing. The left middle cerebral artery provides Broca's area, Wernicke's area, Heschl's gyrus, and the angular gyrus with blood". Therefore, in patients with Wernicke's aphasia, there is typically an occlusion to the left middle cerebral artery. 

As a result of the occlusion of the left middle cerebral artery, Wernicke's aphasia is most commonly caused by a lesion in the posterior superior temporal gyrus (Wernicke's area). This area is posterior to the primary auditory cortex (PAC), which is responsible for decoding individual speech sounds. Wernicke's area's primary responsibility is to assign meaning to these speech sounds. The extent of the lesion will determine the severity of the patient's deficits related to language. Damage to the surrounding areas (perisylvian region) may also result in Wernicke's aphasia symptoms, due to variation in individual neuroanatomical structure and any co-occurring damage in adjacent areas of the brain.

Diagnosis

"Aphasia is usually first recognized by the physician who treats the person for his or her brain injury. Most individuals will undergo a magnetic resonance imaging (MRI) or computed tomography (CT) scan to confirm the presence of a brain injury and to identify its precise location." In circumstances where a person is showing possible signs of aphasia, the physician will refer him or her to a speech-language pathologist (SLP) for a comprehensive speech and language evaluation. SLPs will examine the individual's ability to express him or herself through speech, understand language in written and spoken forms, write independently, and perform socially.

The American Speech-Language-Hearing Association (ASHA) states that a comprehensive assessment should be conducted in order to analyze the patient's communication functioning on multiple levels, as well as the effect of possible communication deficits on activities of daily living. Typical components of an aphasia assessment include: case history, self-report, oral-motor examination, language skills, identification of environmental and personal factors, and the assessment results. A comprehensive aphasia assessment includes both formal and informal measures.
Formal assessments:
  • Boston Diagnostic Aphasia Examination (BDAE): diagnoses the presence and type of aphasia, focusing on location of lesion and the underlying linguistic processes. 
  • Western Aphasia Battery - Revised (WAB): determines the presence, severity, and type of aphasia; and can also determine baseline abilities of patient.
  • Communication Activities of Daily Living - Second Edition (CADL-2): measures functional communication abilities; focuses on reading, writing, social interactions, and varying levels of communication.
  • Revised Token Test (RTT): assesses receptive language and auditory comprehension; focuses on the patient's ability to follow directions.
Informal assessments: 

Informal assessments aid in the diagnosis of patients with suspected aphasia.
  • Conversational speech and language sample
  • Family interview
  • Case history or medical chart review
  • Behavioral observations
Diagnostic information should be scored and analyzed appropriately. Treatment plans and individual goals should be developed based on diagnostic information, as well as patient and caregiver needs, desires, and priorities.

Treatment

According to Bates et al. (2005), "the primary goal of rehabilitation is to prevent complications, minimize impairments, and maximize function". The intensity and timing of intervention are widely debated across various fields. Results are contradictory: some studies indicate better outcomes with early intervention, while others indicate that starting therapy too early may be detrimental to the patient's recovery. Recent research suggests that therapy should be functional and focus on communication goals appropriate to the patient's individual lifestyle.

Specific treatment considerations for working with individuals with Wernicke's aphasia (or those who exhibit deficits in auditory comprehension) include using familiar materials, using shorter and slower utterances when speaking, giving direct instructions, and using repetition as needed.

Neuroplasticity: Role in Recovery 

Neuroplasticity is defined as the brain's ability to reorganize itself, lay new pathways, and rearrange existing ones, as a result of experience. Neuronal changes after damage to the brain, such as collateral sprouting, increased activation of homologous areas, and map extension, demonstrate the brain's neuroplastic abilities. According to Thomson, "Portions of the right hemisphere, extended left brain sites, or both have been shown to be recruited to perform language functions after brain damage." All of these neuronal changes recruit areas not originally or directly responsible for large portions of linguistic processing. Principles of neuroplasticity have proven effective in neurorehabilitation after damage to the brain. These principles include: incorporating multiple modalities into treatment to create stronger neural connections, using stimuli that evoke positive emotion, linking concepts with simultaneous and related presentations, and finding the appropriate intensity and duration of treatment for each individual patient.

Auditory comprehension treatment

Auditory comprehension is a primary focus in treatment for Wernicke's aphasia, as it is the main deficit related to this diagnosis. Therapy activities may include:
  • Single-word comprehension: A common treatment method used to support single-word comprehension skills is known as a pointing drill. Through this method, clinicians lay out a variety of images in front of a patient. The patient is asked to point to the image that corresponds to the word provided by the clinician.
  • Understanding spoken sentences: "Treatment to improve comprehension of spoken sentences typically consists of drills in which patients answer questions, follow directions or verify the meaning of sentences".
  • Understanding conversation: An effective treatment method to support comprehension of discourse includes providing a patient with a conversational sample and asking him or her questions about that sample. Individuals with less severe deficits in auditory comprehension may also be able to retell aspects of the conversation.

Word retrieval

Anomia is consistently seen in aphasia, so many treatment techniques aim to help patients with word-finding problems. One example of a semantic approach is referred to as semantic feature analysis. The process involves naming the target object shown in a picture and producing words that are semantically related to the target. Through production of semantically similar features, participants become more skilled at naming stimuli due to the increase in lexical activation.

Restorative therapy approach

Neuroplasticity is a central component of restorative therapy, which aims to compensate for brain damage. This approach is especially useful for patients with Wernicke's aphasia who have suffered a stroke to the left brain hemisphere.

Schuell's stimulation approach is a main method in traditional aphasia therapy that follows principles to retrieve function in the auditory modality of language and influence surrounding regions through stimulation. The guidelines for the most effective stimulation are as follows:
  • Auditory stimulation of language should be intensive and always present when other language modalities are stimulated.
  • The stimulus should be presented at a difficulty level equal to or just below the patient's ability.
  • Sensory stimulation must be present and repeated throughout the treatment.
  • Each stimulus applied should produce a response; if there is no response, more stimulation cues should be provided.
  • Responses to stimuli should be maximized to create more opportunities for success and feedback for the speech-language pathologist.
  • The feedback of the speech-language pathologist should promote further success and encourage the patient.
  • Therapy should follow an intensive and systematic method to create success by progressing in difficulty.
  • Therapies should be varied and build off of mastered therapy tasks.

Schuell's stimulation utilizes therapy tasks beginning at a simplified level and progressing to become more difficult, including:

Point-to tasks. The patient is directed to point to an object or multiple objects. As the skill is learned, the level of complexity increases by increasing the number of objects the patient must point to.
  • Simple: "Point to the book."
  • Complex: "Point to the book and then to the ceiling after touching your ear."

Following directions with objects. The patient is instructed to follow directions, given with objects, that increase in complexity as the skill is learned.
  • Simple: "Pick up the book."
  • Complex: "Pick up the book and put it down on the bench after I move the cup."

Yes or no questions. The patient responds to various yes or no questions that can range from simple to complex.

Paraphrasing and retelling. The patient reads a paragraph and afterwards paraphrases it aloud. This is the most complex of Schuell's stimulation tasks because it requires comprehension, recall, and communication.

Social approach to treatment

The social approach involves a collaborative effort on behalf of patients and clinicians to determine goals and outcomes for therapy that could improve the patient's quality of life. A conversational approach is thought to provide opportunities for development and the use of strategies to overcome barriers to communication. The main goals of this treatment method are to improve the patient's conversational confidence and skills in natural contexts using conversational coaching, supported conversations, and partner training.
  • Conversational coaching involves patients with aphasia and their speech language pathologists, who serve as a "coach" discussing strategies to approach various communicative scenarios. The "coach" will help the patient develop a script for a scenario (such as ordering food at a restaurant), and help the patient practice and perform the scenario in and out of the clinic while evaluating the outcome.
  • Supported conversation also involves using a communicative partner who supports the patient's learning by providing contextual cues, slowing their own rate of speech, and increasing their message's redundancy to promote the patient's comprehension.
Additionally, it is important to include the families of patients with aphasia in treatment programs. Clinicians can teach family members how to support one another, and how to adjust their speaking patterns to facilitate their loved one's treatment and rehabilitation.

Prognosis

Prognosis is strongly dependent on the location and extent of the lesion (damage) to the brain. Many personal factors also influence how a person will recover, including age, previous medical history, level of education, gender, and motivation. All of these factors influence the brain's ability to adapt to change, restore previous skills, and learn new skills. It is important to remember that the presentation of receptive aphasia may vary. The presentation of symptoms and the prognosis are both dependent on personal components related to the individual's neural organization before the stroke, the extent of the damage, and the influence of environmental and behavioral factors after the damage occurs. The quicker a stroke is diagnosed by a medical team, the more positive the patient's recovery may be. A medical team will work to control the signs and symptoms of the stroke, and rehabilitation therapy will begin to manage and recover lost skills. The rehabilitation team may consist of a certified speech-language pathologist, physical therapist, occupational therapist, and the family or caregivers. The length of therapy differs for everyone, but research suggests that intense therapy over a short amount of time can improve outcomes of speech and language therapy for patients with aphasia. This research does not prescribe the only way therapy should be administered, but it gives insight into how therapy affects the patient's prognosis.

Expressive aphasia

From Wikipedia, the free encyclopedia
 
Expressive aphasia
Other names: Broca's aphasia, non-fluent aphasia, agrammatic aphasia
Broca's area and Wernicke's area
Specialty: Neurology

Expressive aphasia, also known as Broca's aphasia, is a type of aphasia characterized by partial loss of the ability to produce language (spoken, manual, or written), although comprehension generally remains intact. A person with expressive aphasia will exhibit effortful speech. Speech generally includes important content words but leaves out function words that have only grammatical significance and not real-world meaning, such as prepositions and articles. This is known as "telegraphic speech". The person's intended message may still be understood, but their sentence will not be grammatically correct. In very severe forms of expressive aphasia, a person may only speak using single word utterances. Typically, comprehension is mildly to moderately impaired in expressive aphasia due to difficulty understanding complex grammar.

It is caused by acquired damage to the anterior regions of the brain, such as Broca's area. It is one subset of a larger family of disorders known collectively as aphasia. Expressive aphasia contrasts with receptive aphasia, in which patients are able to speak in grammatical sentences that lack semantic significance and generally also have trouble with comprehension. Expressive aphasia differs from dysarthria, which is typified by a patient's inability to properly move the muscles of the tongue and mouth to produce speech. Expressive aphasia also differs from apraxia of speech, which is a motor disorder characterized by an inability to create and sequence motor plans for speech.

Signs and symptoms

Broca's (expressive) aphasia is a type of non-fluent aphasia in which an individual's speech is halting and effortful. Misarticulations or distortions of consonants and vowels, namely phonetic dissolution, are common. Individuals with expressive aphasia may only produce single words, or words in groups of two or three. Long pauses between words are common and multi-syllabic words may be produced one syllable at a time with pauses between each syllable. The prosody of a person with Broca's aphasia is compromised by shortened length of utterances and the presence of self-repairs and disfluencies. Intonation and stress patterns are also deficient.

For example, in the following passage, a patient with Broca's aphasia is trying to explain how he came to the hospital for dental surgery:
Yes... ah... Monday... er... Dad and Peter H... (his own name), and Dad.... er... hospital... and ah... Wednesday... Wednesday, nine o'clock... and oh... Thursday... ten o'clock, ah doctors... two... an' doctors... and er... teeth... yah.
The speech of a person with expressive aphasia contains mostly content words such as nouns, verbs, and some adjectives. However, function words like conjunctions, articles, and prepositions are rarely used except for “and” which is prevalent in the speech of most patients with aphasia. The omission of function words makes the person's speech agrammatic. A communication partner of a person with aphasia may say that the person's speech sounds telegraphic due to poor sentence construction and disjointed words. For example, a person with expressive aphasia might say "Smart... university... smart... good... good..."

Self-monitoring is typically well preserved in patients with Broca's aphasia. They are usually aware of their communication deficits, and are more prone to depression and outbursts from frustration than are patients with other forms of aphasia.

In general, word comprehension is preserved, allowing patients to have functional receptive language skills. Individuals with Broca's aphasia understand most of the everyday conversation around them, but higher-level deficits in receptive language can occur. Because comprehension is substantially impaired for more complex sentences, it is better to use simple language when speaking with an individual with expressive aphasia. This is exemplified by the difficulty of understanding phrases or sentences with unusual structure: a typical patient with Broca's aphasia will misinterpret "the man is bitten by the dog" by switching the subject and object, reading it as "the dog is bitten by the man."

Typically, people with expressive aphasia can understand speech and read better than they can produce speech and write. The person's writing will resemble their speech and will be effortful, lacking cohesion, and containing mostly content words. Letters will likely be formed clumsily and distorted and some may even be omitted. Although listening and reading are generally intact, subtle deficits in both reading and listening comprehension are almost always present during assessment of aphasia.

Because Broca's area is anterior to the primary motor cortex, which is responsible for movement of the face, hands, and arms, a lesion affecting Broca's area may also result in hemiparesis (weakness of both limbs on the same side of the body) or hemiplegia (paralysis of both limbs on the same side of the body). The brain is wired contralaterally, which means the limbs on the right side of the body are controlled by the left hemisphere and vice versa. Therefore, when Broca's area or surrounding areas in the left hemisphere are damaged, hemiplegia or hemiparesis often occurs on the right side of the body in individuals with Broca's aphasia.

Severity of expressive aphasia varies among patients. Some people may have only mild deficits, and detecting problems with their language may be difficult. In the most extreme cases, patients may be able to produce only a single word. Even in such cases, over-learned and rote-learned speech patterns may be retained; for instance, some patients can count from one to ten but cannot produce the same numbers in novel conversation.

Manual language and aphasia

In deaf patients who use manual language (such as American Sign Language), damage to the left hemisphere of the brain leads to disruptions in their signing ability. Paraphasic errors similar to spoken language have been observed; whereas in spoken language a phonemic substitution would occur (e.g. "tagle" instead of "table"), in ASL case studies errors in movement, hand position, and morphology have been noted. Agrammatism, or the lack of grammatical morphemes in sentence production, has also been observed in lifelong users of ASL who have left hemisphere damage. The lack of syntactic accuracy shows that the errors in signing are not due to damage to the motor cortex, but rather are a manifestation of the damage to the language-producing area of the brain. Similar symptoms have been seen in a patient with left hemisphere damage whose first language was British Sign Language, further showing that damage to the left hemisphere primarily hinders linguistic ability, not motor ability. In contrast, patients who have damage to non-linguistic areas on the left hemisphere have been shown to be fluent in signing, but are unable to comprehend written language.

Overlap with receptive aphasia

In addition to difficulty expressing oneself, individuals with expressive aphasia are also noted to commonly have trouble with comprehension in certain linguistic areas. This agrammatism overlaps with receptive aphasia, but can be seen in patients who have expressive aphasia without being diagnosed as having receptive aphasia. The most well-noted of these are object-relative clauses, object Wh- questions, and topicalized structures (placing the topic at the beginning of the sentence). These three concepts all share phrasal movement, which can cause words to lose their thematic roles when they change order in the sentence. This is often not an issue for people without agrammatic aphasias, but many people with aphasia rely heavily on word order to understand roles that words play within the sentence.

Causes

Common causes

The most common cause of expressive aphasia is stroke. A stroke is caused by hypoperfusion (reduced blood flow, and thus oxygen) to an area of the brain, which is commonly caused by thrombosis or embolism. Some form of aphasia occurs in 34 to 38% of stroke patients. Expressive aphasia occurs in approximately 12% of new cases of aphasia caused by stroke.

In most cases, expressive aphasia is caused by a stroke in Broca's area or the surrounding vicinity. Broca's area is in the lower part of the premotor cortex in the language dominant hemisphere and is responsible for planning motor speech movements. However, cases of expressive aphasia have been seen in patients with strokes in other areas of the brain. Patients with classic symptoms of expressive aphasia in general have more acute brain lesions, whereas patients with larger, widespread lesions exhibit a variety of symptoms that may be classified as global aphasia or left unclassified.

Expressive aphasia can also be caused by trauma to the brain, tumor, cerebral hemorrhage and by extradural abscess.

Understanding lateralization of brain function is important for understanding which areas of the brain cause expressive aphasia when damaged. In the past, it was believed that the area for language production differed between left- and right-handed individuals. If this were true, damage to the homologous region of Broca's area in the right hemisphere should cause aphasia in a left-handed individual. More recent studies have shown that even left-handed individuals typically have language functions only in the left hemisphere. However, left-handed individuals are more likely to have language dominance in the right hemisphere.

Uncommon causes

Less common causes of expressive aphasia include primary autoimmune phenomena, as well as autoimmune phenomena secondary to cancer (as a paraneoplastic syndrome); these have been listed as the primary hypothesis for several cases of aphasia, especially those presenting with other psychiatric disturbances and focal neurological deficits. Many case reports exist describing paraneoplastic aphasia, and the reports that are specific tend to describe expressive aphasia. Although most cases attempt to exclude micrometastasis, it is likely that some cases of paraneoplastic aphasia are actually extremely small metastases to the vocal motor regions.

Neurodegenerative disorders may present with aphasia. Alzheimer's disease may present with either fluent aphasia or expressive aphasia. There are case reports of Creutzfeldt-Jakob disease presenting with expressive aphasia.

Diagnosis

Expressive aphasia is classified as non-fluent aphasia, as opposed to fluent aphasia. Diagnosis is done on a case-by-case basis, as lesions often affect the surrounding cortex and deficits are highly variable among patients with aphasia.

A physician is typically the first person to recognize aphasia in a patient who is being treated for damage to the brain. Routine processes for determining the presence and location of lesion in the brain include magnetic resonance imaging (MRI) and computed tomography (CT) scans. The physician will complete a brief assessment of the patient's ability to understand and produce language. For further diagnostic testing, the physician will refer the patient to a speech-language pathologist, who will complete a comprehensive evaluation.

In order to diagnose a patient suffering from Broca's aphasia, there are certain commonly used tests and procedures. The Western Aphasia Battery (WAB) classifies individuals based on their scores on subtests: spontaneous speech, auditory comprehension, repetition, and naming. The Boston Diagnostic Aphasia Examination (BDAE) can inform users what specific type of aphasia they may have, infer the location of the lesion, and assess current language abilities. The Porch Index of Communicative Ability (PICA) can predict potential recovery outcomes of patients with aphasia. Quality of life measurement is also an important assessment tool. Tests such as the Assessment for Living with Aphasia (ALA) and the Satisfaction with Life Scale (SWLS) allow therapists to target skills that are important and meaningful for the individual.

In addition to formal assessments, patient and family interviews are valid and important sources of information. The patient's previous hobbies, interests, personality, and occupation are all factors that will not only impact therapy but may motivate them throughout the recovery process. Patient interviews and observations allow professionals to learn the priorities of the patient and family and determine what the patient hopes to regain in therapy. Observations of the patient may also be beneficial to determine where to begin treatment. The current behaviors and interactions of the patient will provide the therapist with more insight about the client and their individual needs. Other information about the patient can be retrieved from medical records, patient referrals from physicians, and the nursing staff.

In non-speaking patients who use manual languages, diagnosis is often based on interviews from the patient's acquaintances, noting the differences in sign production pre- and post-damage to the brain. Many of these patients will also begin to rely on non-linguistic gestures to communicate, rather than signing since their language production is hindered.

Treatment

Currently, there is no standard treatment for expressive aphasia. Most aphasia treatment is individualized based on a patient's condition and needs as assessed by a speech language pathologist. Patients go through a period of spontaneous recovery following brain injury in which they regain a great deal of language function.

In the months following injury or stroke, most patients receive traditional treatment for a few hours per day. Among other exercises, patients practice the repetition of words and phrases. Mechanisms are also taught in traditional treatment to compensate for lost language function such as drawing and using phrases that are easier to pronounce.

Emphasis is placed on establishing a basis for communication with family and caregivers in everyday life. Treatment is individualized based on the patient's own priorities, along with the family's input.

A patient may have the option of individual or group treatment. Although less common, group treatment has been shown to have advantageous outcomes. Some types of group treatments include family counseling, maintenance groups, support groups and treatment groups.

Melodic intonation therapy

Melodic intonation therapy was inspired by the observation that individuals with non-fluent aphasia can sometimes sing words or phrases that they normally cannot speak. "Melodic Intonation Therapy was begun as an attempt to use the intact melodic/prosodic processing skills of the right hemisphere in those with aphasia to help cue retrieval of words and expressive language." It is believed that this is possible because singing capabilities are stored in the right hemisphere of the brain, which is likely to remain unaffected after a stroke in the left hemisphere. However, recent evidence suggests that the ability of individuals with aphasia to sing entire pieces of text may actually result from rhythmic features and familiarity with the lyrics.

The goal of Melodic Intonation Therapy is to utilize singing to access the language-capable regions in the right hemisphere and use these regions to compensate for lost function in the left hemisphere. The natural musical component of speech is used to engage the patient's ability to produce phrases. A clinical study revealed that singing and rhythmic speech may be similarly effective in the treatment of non-fluent aphasia and apraxia of speech. However, evidence from randomized controlled trials is still needed to confirm that Melodic Intonation Therapy is suitable for improving propositional utterances and speech intelligibility in individuals with (chronic) non-fluent aphasia and apraxia of speech.

Melodic Intonation Therapy appears to work particularly well in patients who have had a unilateral, left-hemisphere stroke, show poor articulation, are non-fluent or have severely restricted speech output, have moderately preserved auditory comprehension, and show good motivation. MIT on average lasts for 1.5 hours per day, five days per week. At the lowest level of therapy, simple words and phrases (such as "water" and "I love you") are broken down into a series of high- and low-pitch syllables. With increased treatment, longer phrases are taught and less support is provided by the therapist. Patients are taught to say phrases using the natural melodic component of speaking, and continuous voicing is emphasized. The patient is also instructed to tap out the syllables of each phrase with the left hand while the phrase is spoken. Tapping is assumed to trigger the rhythmic component of speaking and thereby engage the right hemisphere.
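To make the lowest MIT level concrete, the sketch below lays out a minimal "plan" for a phrase: each syllable receives a high or low pitch and a left-hand tap. The syllable divisions and the alternating pitch pattern are invented placeholders for illustration, not the clinical protocol.

```python
# Purely illustrative sketch of the lowest MIT level described above:
# a phrase is split into syllables, each syllable is intoned on a high
# or low pitch, and the left hand taps once per syllable. The syllable
# splits and the alternating pattern are invented for illustration.

PHRASES = {
    "water": ["wa", "ter"],
    "I love you": ["I", "love", "you"],
}

def intonation_plan(phrase, pitches=("high", "low")):
    return [{"syllable": syl,
             "pitch": pitches[i % len(pitches)],  # alternate high/low
             "left_hand_tap": True}               # tap on every syllable
            for i, syl in enumerate(PHRASES[phrase])]

print(intonation_plan("I love you"))
```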

fMRI studies have shown that Melodic Intonation Therapy (MIT) uses both sides of the brain to recover lost function, as opposed to traditional therapies that utilize only the left hemisphere. In MIT, individuals with small lesions in the left hemisphere seem to recover through activation of the perilesional cortex in the left hemisphere, while individuals with larger left-hemisphere lesions show recruitment of language-capable regions in the right hemisphere. The interpretation of these results is still a matter of debate. For example, it remains unclear whether changes in neural activity in the right hemisphere result from singing or from the intensive use of common phrases, such as "thank you", "how are you?" or "I am fine." Phrases of this type fall into the category of formulaic language and are known to be supported by neural networks of the intact right hemisphere.

A pilot study reported positive results when comparing the efficacy of a modified form of MIT with no treatment in people with non-fluent aphasia and damage to the left hemisphere. A subsequent randomized controlled trial reported benefits of using modified MIT treatment early in the recovery phase for people with non-fluent aphasia.

Melodic Intonation Therapy is used by music therapists, board-certified professionals who use music as a therapeutic tool to effect certain non-musical outcomes in their patients. Speech-language pathologists can also use this therapy for individuals who have had a left-hemisphere stroke and present with non-fluent aphasia such as Broca's aphasia, or even apraxia of speech.

Constraint-induced therapy

Constraint-induced aphasia therapy (CIAT) is based on similar principles as constraint-induced movement therapy, developed by Dr. Edward Taub at the University of Alabama at Birmingham. Constraint-induced movement therapy is based on the idea that a person with an impairment (physical or communicative) develops a "learned nonuse" by compensating for the lost function with other means, such as a paralyzed individual using an unaffected limb, or a patient with aphasia drawing. In constraint-induced movement therapy, the alternative limb is constrained with a glove or sling and the patient is forced to use the affected limb. In constraint-induced aphasia therapy, interaction is guided by communicative need in a language-game context: picture cards, barriers that make it impossible to see other players' cards, and other materials are used so that patients are encouraged ("constrained") to rely on their remaining verbal abilities to succeed in the communication game.

Two important principles of constraint-induced aphasia therapy are that treatment is very intense, with sessions lasting up to 6 hours over the course of 10 days, and that language is used in a communication context in which it is closely linked to (nonverbal) actions. These principles are motivated by neuroscience insights about learning at the level of nerve cells (synaptic plasticity) and about the coupling between cortical systems for language and action in the human brain. Constraint-induced therapy contrasts sharply with traditional therapy in its strong insistence that mechanisms to compensate for lost language function, such as gesturing or writing, should not be used unless absolutely necessary, even in everyday life.

It is believed that CIAT works by the mechanism of increased neuroplasticity. By constraining an individual to use only speech, it is believed that the brain is more likely to reestablish old neural pathways and recruit new neural pathways to compensate for lost function. 

The strongest results of CIAT have been seen in patients with chronic aphasia (lasting over 6 months). Studies of CIAT have confirmed that further improvement is possible even after a patient has reached a "plateau" period of recovery. The benefits of CIAT have also been shown to be retained long term. However, improvements only seem to be made while a patient is undergoing intense therapy. Recent work has investigated combining constraint-induced aphasia therapy with drug treatment, which led to an amplification of therapy benefits.

Medication

In addition to active speech therapy, pharmaceuticals have also been considered as a useful treatment for expressive aphasia. This area of study is relatively new and much research continues to be conducted. 

The following drugs have been suggested for use in treating aphasia, and their efficacy has been studied in controlled studies. The greatest effect has been shown by piracetam and amphetamine, which may increase cerebral plasticity and thereby the capacity to improve language function. Piracetam appears most effective when treatment is begun immediately following stroke; when used in chronic cases, it has been much less effective.

Bromocriptine has been shown by some studies to increase verbal fluency and word retrieval more when combined with therapy than with therapy alone. Furthermore, its use seems to be restricted to non-fluent aphasia.

Donepezil has shown a potential for helping chronic aphasia.

No study has established irrefutable evidence that any drug is an effective treatment for aphasia. Furthermore, no study has shown any drug to be specific to language recovery. Comparisons between the recovery of language function and of other motor functions under any given drug suggest that improvement is due to a global increase in the plasticity of neural networks.

Transcranial magnetic stimulation

In transcranial magnetic stimulation (TMS), magnetic fields are used to create electrical currents in specified cortical regions. The procedure is a painless and noninvasive method of stimulating the cortex. TMS works by suppressing the inhibition process in certain areas of the brain. By suppressing the inhibition of neurons by external factors, the targeted area of the brain may be reactivated and thereby recruited to compensate for lost function. Research has shown that patients receiving regular transcranial magnetic stimulation can demonstrate greater improvement in object-naming ability than patients not receiving TMS. Furthermore, research suggests this improvement is sustained after the completion of TMS therapy. However, some patients fail to show any significant improvement from TMS, which indicates the need for further research on this treatment.

Treatment of underlying forms

Treatment of underlying forms (TUF), described as the linguistic approach to the treatment of expressive aphasia, begins by emphasizing and educating patients on the thematic roles of words within sentences. Sentences that are usually problematic are reworded into active-voiced, declarative phrasings of their non-canonical counterparts. The simpler sentence phrasings are then transformed into variations that are more difficult to interpret. For example, many individuals who have expressive aphasia struggle with wh- sentences. "What" and "who" questions are problematic sentences that this treatment method attempts to improve, and they are also two interrogative particles that are strongly related to each other, because they reorder arguments from their declarative counterparts. For instance, therapists have used sentences like "Who is the boy helping?" and "What is the boy fixing?" because both verbs are transitive: they require two arguments in the form of a subject and a direct object, but not necessarily an indirect object. In addition, certain question particles are linked together based on how the reworded sentence is formed. Training "who" sentences increased generalization to non-trained "who" sentences as well as untrained "what" sentences, and vice versa. Likewise, "where" and "when" question types are very closely linked: "what" and "who" questions alter the placement of arguments, while "where" and "when" sentences move adjunct phrases. Training proceeds in the style of: "The man parked the car in the driveway. What did the man park in the driveway?" Sentence training goes on in this manner for further domains, such as clefts and sentence voice.
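As a toy illustration of the transformations drilled in this training, the sketch below derives "what" and "where" questions from the parts of a simple declarative sentence using do-support. The templates and argument labels are invented for illustration and do not implement the published TUF protocol.

```python
# Toy sketch of the question transformation drilled above: extract one
# argument ("what"/"who") or one adjunct ("where") from a declarative
# sentence with known parts. Templates are illustrative placeholders.

def wh_question(subject, verb_base, obj, adjunct=None, ask="what"):
    if ask in ("what", "who"):            # argument questions move the object
        tail = f" {adjunct}?" if adjunct else "?"
        return f"{ask.capitalize()} did {subject} {verb_base}" + tail
    if ask == "where":                    # adjunct questions move the phrase
        return f"Where did {subject} {verb_base} {obj}?"
    raise ValueError(f"unsupported question type: {ask}")

# "The man parked the car in the driveway."
print(wh_question("the man", "park", "the car", "in the driveway"))
# -> What did the man park in the driveway?
print(wh_question("the man", "park", "the car", ask="where"))
# -> Where did the man park the car?
```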

Results: Patients' use of the sentence types trained in TUF improves; subjects generalize to sentences of a similar category to those used in treatment; and the results carry over to real-world conversations with others. Generalization across sentence types can be improved when treatment progresses in the order of more complex sentences to more elementary sentences. Treatment has been shown to affect on-line (real-time) processing of trained sentences, and these results can be tracked using fMRI mapping. Training of wh- sentences has led to improvements in three main areas of discourse for people with aphasia: increased average length of utterance, a higher proportion of grammatical sentences, and a larger ratio of verbs to nouns produced. Patients also showed improvements in producing verb argument structure and assigned thematic roles to words in utterances with more accuracy. In terms of on-line sentence processing, patients who have undergone this treatment discriminate between anomalous and non-anomalous sentences more accurately than control groups and are closer to levels of normalcy than patients who have not participated in this treatment.
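The three discourse measures cited above (average utterance length, proportion of grammatical sentences, verb-to-noun ratio) are simple ratios over a transcript. The sketch below computes them from pre-tagged utterances; the input format and tag names are assumptions made for illustration, not part of any published protocol.

```python
# Minimal sketch of the discourse measures named above, computed over
# a transcript. Part-of-speech tags and the grammaticality flag are
# assumed to be supplied upstream; the format here is illustrative.

def discourse_measures(utterances):
    """utterances: list of dicts like
    {"tokens": [("the", "DET"), ("man", "NOUN"), ("parked", "VERB")],
     "grammatical": True}"""
    n = len(utterances)
    total_words = sum(len(u["tokens"]) for u in utterances)
    verbs = sum(tag == "VERB" for u in utterances for _, tag in u["tokens"])
    nouns = sum(tag == "NOUN" for u in utterances for _, tag in u["tokens"])
    return {
        "mean_length_of_utterance": total_words / n,
        "proportion_grammatical": sum(u["grammatical"] for u in utterances) / n,
        "verb_noun_ratio": verbs / nouns if nouns else float("nan"),
    }
```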

Mechanisms of recovery

Mechanisms of recovery differ from patient to patient. Some occur spontaneously after damage to the brain, whereas others are produced by the effects of language therapy. fMRI studies have shown that recovery can be partially attributed to the activation of tissue around the damaged area and the recruitment of new neurons in these areas to compensate for the lost function. In very acute lesions, recovery may also result from the return of blood flow and function to damaged but surviving tissue around the injured area. Some researchers have stated that the recruitment and recovery of neurons in the left hemisphere, as opposed to the recruitment of similar neurons in the right hemisphere, is superior for long-term recovery and continued rehabilitation. It is thought that, because the right hemisphere is not intended for full language function, using the right hemisphere as a mechanism of recovery is effectively a "dead end" and can lead only to partial recovery.

Among all types of therapy, one of the most important factors and best predictors of a successful outcome has been shown to be the intensity of the therapy. Comparisons of the length and intensity of various methods of therapy indicate that intensity is a better predictor of recovery than the particular method used.

Prognosis

In most individuals with expressive aphasia, the majority of recovery is seen within the first year following a stroke or injury. The majority of this improvement occurs in the first four weeks of therapy following a stroke and slows thereafter. However, this timeline varies with the type of stroke experienced by the patient. Patients who experienced an ischemic stroke may recover in the days and weeks following the stroke and then experience a plateau and gradual slowing of recovery. In contrast, patients who experienced a hemorrhagic stroke show a slower recovery in the first 4–8 weeks, followed by a faster recovery that eventually stabilizes.[57]

Numerous factors impact the recovery process and outcomes. Site and extent of the lesion greatly impact recovery. Other factors that may affect prognosis are age, education, gender, and motivation. Occupation, handedness, personality, and emotional state may also be associated with recovery outcomes.

Studies have also found that the prognosis of expressive aphasia correlates strongly with the initial severity of impairment. However, continued recovery has been seen years after a stroke with effective treatment. Timing and intensity of treatment are further factors that impact outcomes: research suggests that even in later stages of recovery, intervention is effective at improving function as well as preventing loss of function.

Unlike patients with receptive aphasia, patients with expressive aphasia are aware of their errors in language production. This may further motivate a person with expressive aphasia to progress in treatment, which would affect treatment outcomes. On the other hand, awareness of impairment may lead to higher levels of frustration, depression, anxiety, or social withdrawal, which have been shown to negatively affect a person's chance of recovery.

History

Expressive aphasia was first identified by the French neurologist Paul Broca. By examining the brains of deceased individuals who had acquired expressive aphasia in life, he concluded that language ability is localized in the ventroposterior region of the frontal lobe. One of the most important aspects of Broca's discovery was the observation that the loss of proper speech in expressive aphasia is due to the brain's loss of ability to produce language, as opposed to the mouth's loss of ability to produce words.

Paul Broca made his discoveries during the same period in which the German neurologist Carl Wernicke, also studying the brains of individuals with aphasia post-mortem, identified the region now known as Wernicke's area. The discoveries of both men contributed to the concept of localization, the idea that specific brain functions are localized to specific areas of the brain. While both men made significant contributions to the field of aphasia, it was Carl Wernicke who realized the difference between patients with aphasia who could not produce language and those who could not comprehend language (the essential difference between expressive and receptive aphasia).
