Sunday, December 24, 2023

Aphasia

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Aphasia

Aphasia
Regions of the left hemisphere that can give rise to aphasia when damaged
Specialty: Neurology, Psychiatry
Treatment: Speech therapy

In aphasia (sometimes called dysphasia), a person may be unable to comprehend or unable to formulate language because of damage to specific brain regions. The major causes are stroke and head trauma; prevalence is hard to determine but aphasia due to stroke is estimated to be 0.1–0.4% in the Global North. Aphasia can also be the result of brain tumors, epilepsy, autoimmune neurological diseases, brain infections, or neurodegenerative diseases (such as dementias).

To be diagnosed with aphasia, a person's language must be significantly impaired in one (or more) of the four aspects of communication. Alternatively, in the case of progressive aphasia, it must have significantly declined over a short period of time. The four aspects of communication are spoken language production and comprehension, and written language production and comprehension; impairment in any of these aspects can affect functional communication.

The difficulties of people with aphasia can range from occasional trouble finding words, to losing the ability to speak, read, or write; intelligence, however, is unaffected. Expressive language and receptive language can both be affected as well. Aphasia also affects visual language such as sign language. In contrast, the use of formulaic expressions in everyday communication is often preserved. For example, while a person with aphasia, particularly expressive aphasia (Broca's aphasia), may not be able to ask a loved one when their birthday is, they may still be able to sing "Happy Birthday". One prevalent deficit in all aphasias is anomia, which is a difficulty in finding the correct word.

With aphasia, one or more modes of communication in the brain have been damaged and are therefore functioning incorrectly. Aphasia is not caused by damage to the brain that produces motor or sensory deficits and thereby abnormal speech; that is, aphasia is not related to the mechanics of speech but to the individual's language cognition (although a person can have both problems, for example if a haemorrhage damages a large area of the brain). An individual's language is the socially shared set of rules, as well as the thought processes behind communication (as it affects both verbal and nonverbal language). It is not a result of a more peripheral motor or sensory difficulty, such as paralysis affecting the speech muscles or a general hearing impairment.

Neurodevelopmental forms of auditory processing disorder (APD) are differentiable from aphasia in that aphasia is by definition caused by acquired brain injury, although acquired epileptic aphasia has been viewed as a form of APD.

Signs and symptoms

People with aphasia may experience any of the following behaviors due to an acquired brain injury, although some of these symptoms may be due to related or concomitant problems, such as dysarthria or apraxia, and not primarily due to aphasia. Aphasia symptoms can vary based on the location of damage in the brain. Signs and symptoms may or may not be present in individuals with aphasia and may vary in severity and level of disruption to communication. Often those with aphasia have difficulty naming objects, so they might use words such as "thing" or point at the objects. When asked to name a pencil, they may say it is a "thing used to write".

  • Inability to comprehend language
  • Inability to pronounce, not due to muscle paralysis or weakness
  • Inability to form words
  • Inability to recall words (anomia)
  • Poor enunciation
  • Excessive creation and use of personal neologisms
  • Inability to repeat a phrase
  • Persistent repetition of one syllable, word, or phrase (stereotypies, recurrent/recurring utterances/speech automatism), also known as perseveration
  • Paraphasia (substituting letters, syllables, or words)
  • Agrammatism (inability to speak in a grammatically correct fashion)
  • Speaking in incomplete sentences
  • Inability to read
  • Inability to write
  • Limited verbal output
  • Difficulty in naming
  • Speech disorder
  • Speaking gibberish
  • Inability to follow or understand simple requests

Related behaviors

Given the previously stated signs and symptoms, the following behaviors are often seen in people with aphasia as a result of attempted compensation for incurred speech and language deficits:

  • Self-repairs: Further disruptions in fluent speech as a result of failed attempts to repair erroneous speech production.
  • Struggle in non-fluent aphasias: A marked increase in the effort needed to speak, after a lifetime in which talking and communicating came easily, can cause visible frustration.
  • Preserved and automatic language: Language or language sequences that were used very frequently prior to onset are still produced more easily than other language after onset.

Subcortical

  • The characteristics and symptoms of subcortical aphasia depend upon the site and size of the subcortical lesion. Possible sites of lesions include the thalamus, internal capsule, and basal ganglia.

Cognitive deficits

While aphasia has traditionally been described in terms of language deficits, there is increasing evidence that many people with aphasia commonly experience co-occurring non-linguistic cognitive deficits in areas such as attention, memory, executive functions, and learning. By some accounts, cognitive deficits, such as those in attention and working memory, constitute the underlying cause of language impairment in people with aphasia. Others suggest that cognitive deficits often co-occur but are comparable to cognitive deficits in stroke patients without aphasia and reflect general brain dysfunction following injury. Although cognitive neural networks have been shown to support language reorganisation after stroke, the degree to which deficits in attention and other cognitive domains underlie language deficits in aphasia is still unclear.

In particular, people with aphasia often demonstrate short-term and working memory deficits. These deficits can occur in both the verbal and the visuospatial domain. Furthermore, these deficits are often associated with performance on language-specific tasks such as naming, lexical processing, sentence comprehension, and discourse production. Other studies have found that most, but not all, people with aphasia demonstrate performance deficits on tasks of attention, and their performance on these tasks correlates with language performance and cognitive ability in other domains. Even patients with mild aphasia, who score near the ceiling on tests of language, often demonstrate slower response times and interference effects in non-verbal attention abilities.

In addition to deficits in short-term memory, working memory, and attention, people with aphasia can also demonstrate deficits in executive function. For instance, people with aphasia may demonstrate deficits in initiation, planning, self-monitoring, and cognitive flexibility. Other studies have found that people with aphasia demonstrate reduced speed and efficiency during completion of executive function assessments.

Regardless of their role in the underlying nature of aphasia, cognitive deficits have a clear role in the study and rehabilitation of aphasia. For instance, the severity of cognitive deficits in people with aphasia has been associated with lower quality of life, even more so than the severity of language deficits. Furthermore, cognitive deficits may influence the learning process of rehabilitation and language treatment outcomes in aphasia. Non-linguistic cognitive deficits have also been the target of interventions directed at improving language ability, though outcomes are not definitive. While some studies have demonstrated language improvement secondary to cognitively-focused treatment, others have found little evidence that the treatment of cognitive deficits in people with aphasia has an influence on language outcomes.

One important caveat in the measurement and treatment of cognitive deficits in people with aphasia is the degree to which assessments of cognition rely on language abilities for successful performance. Most studies have attempted to circumvent this challenge by utilizing non-verbal cognitive assessments to evaluate cognitive ability in people with aphasia. However, the degree to which these tasks are truly 'non-verbal' and not mediated by language is unclear. For instance, Wall et al. found that language and non-linguistic performance were related, except when non-linguistic performance was measured by 'real life' cognitive tasks.

Causes

Aphasia is most often caused by stroke, where about a quarter of patients who experience an acute stroke develop aphasia. However, any disease or damage to the parts of the brain that control language can cause aphasia. Some of these can include brain tumors, traumatic brain injury, epilepsy and progressive neurological disorders. In rare cases, aphasia may also result from herpesviral encephalitis. The herpes simplex virus affects the frontal and temporal lobes, subcortical structures, and the hippocampal tissue, which can trigger aphasia. In acute disorders, such as head injury or stroke, aphasia usually develops quickly. When caused by brain tumor, infection, or dementia, it develops more slowly.

Substantial damage to tissue anywhere within the left-hemisphere language regions referred to in the infobox above can potentially result in aphasia. Aphasia can also sometimes be caused by damage to subcortical structures deep within the left hemisphere, including the thalamus, the internal and external capsules, and the caudate nucleus of the basal ganglia. The area and extent of brain damage or atrophy will determine the type of aphasia and its symptoms. A very small number of people can experience aphasia after damage to the right hemisphere only. It has been suggested that these individuals may have had an unusual brain organization prior to their illness or injury, with perhaps greater overall reliance on the right hemisphere for language skills than in the general population.

Primary progressive aphasia (PPA), while its name can be misleading, is actually a form of dementia that has some symptoms closely related to several forms of aphasia. It is characterized by a gradual loss of language functioning while other cognitive domains, such as memory and personality, are mostly preserved. PPA usually begins with word-finding difficulties and progresses to a reduced ability to formulate grammatically correct sentences (syntax) and impaired comprehension. The etiology of PPA is not a stroke, traumatic brain injury (TBI), or infectious disease; it is still uncertain what initiates the onset of PPA in those affected by it.

Epilepsy can also include transient aphasia as a prodromal or episodic symptom. However, repeated seizure activity within language regions may also lead to chronic and progressive aphasia. Aphasia is also listed as a rare side effect of the fentanyl patch, an opioid used to control chronic pain.

Diagnosis

Neuroimaging methods

Magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI) are the most common neuroimaging tools used in identifying aphasia and studying the extent of damage underlying the loss of language abilities. MRI scans are used to locate the extent of lesions or damage within brain tissue, particularly within the left frontal and temporal regions, where many language-related areas lie. In fMRI studies, a language-related task is often completed and then the BOLD image is analyzed. Lower-than-normal BOLD responses indicate a lessening of blood flow to the affected area and can show quantitatively that the cognitive task is not being completed.
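As a simplified, hypothetical illustration of that last point (the region names and numbers below are invented, not taken from any study), a patient's task-related BOLD signal change in a region of interest can be expressed as a z-score against values from healthy controls; strongly negative values suggest hypoactivation:

```python
# Hypothetical sketch: flag hypoactive regions by z-scoring a patient's task-related
# BOLD signal change against healthy-control values. Region names and values are invented.
import numpy as np

controls = {                       # mean percent signal change per ROI in healthy controls
    "left inferior frontal": np.array([0.42, 0.55, 0.48, 0.60, 0.51]),
    "left superior temporal": np.array([0.38, 0.44, 0.50, 0.41, 0.47]),
}
patient = {"left inferior frontal": 0.08, "left superior temporal": 0.36}

for roi, control_vals in controls.items():
    z = (patient[roi] - control_vals.mean()) / control_vals.std(ddof=1)
    flag = "hypoactive" if z < -2 else "within normal range"
    print(f"{roi}: z = {z:+.2f} ({flag})")
```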

There are limitations to the use of fMRI in aphasic patients in particular. Because a high percentage of aphasic patients develop the condition because of stroke, an infarct, a region of total loss of blood flow, can be present. This can be due to the narrowing of blood vessels or their complete blockage. This matters in fMRI because the technique relies on the BOLD response (the oxygen levels in the blood vessels), and an infarct can create a false hyporesponse on fMRI study. Because of further limitations of fMRI, such as its lower spatial resolution, it can show that some areas of the brain are not active during a task when in reality they are. Additionally, with stroke being the cause of many cases of aphasia, the extent of damage to brain tissue can be difficult to quantify, so the effects of stroke-related brain damage on a patient's functioning can vary.

Neural substrates of aphasia subtypes

MRI is often used to predict or confirm the subtype of aphasia present. Researchers compared three subtypes of aphasia, nonfluent-variant primary progressive aphasia (nfPPA), logopenic-variant primary progressive aphasia (lvPPA), and semantic-variant primary progressive aphasia (svPPA), with primary progressive aphasia (PPA) and Alzheimer's disease. This was done by analyzing the MRIs of patients with each of the subsets of PPA. Images which compare subtypes of aphasia, as well as images showing the extent of lesions, are generated by overlapping images of different participants' brains (where applicable) and isolating areas of lesions or damage using third-party software such as MRIcron. MRI has also been used to study the relationship between the type of aphasia developed and the age of the person with aphasia. It was found that patients with fluent aphasia are on average older than people with non-fluent aphasia. It was also found that, among patients with lesions confined to the anterior portion of the brain, an unexpected proportion presented with fluent aphasia and were markedly older than those with non-fluent aphasia. This effect was not found when the posterior portion of the brain was studied.
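The overlap analysis described above can be illustrated with a short, hypothetical sketch. Assuming each participant's lesion has already been traced as a binary NIfTI mask registered to a common template (the file names below are made up), the masks can be summed voxel-wise to show how many patients have damage at each location, which is essentially what tools such as MRIcron automate:

```python
# Hypothetical sketch of a voxel-wise lesion-overlap map.
# Assumes each patient's lesion is a binary NIfTI mask (1 = lesioned voxel)
# already registered to the same template space; file names are illustrative.
import numpy as np
import nibabel as nib

mask_files = ["patient01_lesion.nii.gz",
              "patient02_lesion.nii.gz",
              "patient03_lesion.nii.gz"]

overlap = None
affine = None
for path in mask_files:
    img = nib.load(path)
    data = img.get_fdata() > 0          # binarize, in case of interpolation artifacts
    if overlap is None:
        overlap = np.zeros(data.shape, dtype=np.int32)
        affine = img.affine
    overlap += data.astype(np.int32)    # count how many patients are lesioned at each voxel

# Save the overlap map; each voxel's value = number of patients with damage there.
nib.save(nib.Nifti1Image(overlap, affine), "lesion_overlap.nii.gz")

# Report the region damaged in every patient (the maximal overlap).
print("Voxels lesioned in all patients:", int((overlap == len(mask_files)).sum()))
```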

Associated conditions

In a study of the features associated with different disease trajectories in Alzheimer's disease (AD)-related primary progressive aphasia (PPA), it was found that metabolic patterns, assessed via PET SPM analysis, can help predict progression to total loss of speech and of functional autonomy in AD and PPA patients. This was done by comparing an MRI or CT image of the brain, together with the presence of a radioactive biomarker, against normal levels in patients without Alzheimer's disease. Apraxia is another disorder often correlated with aphasia, owing to a subset of apraxia that affects speech. Specifically, this subset affects the movement of muscles associated with speech production; apraxia and aphasia are often correlated due to the proximity of the neural substrates associated with each of the disorders. Researchers concluded that there were two areas of lesion overlap between patients with apraxia and aphasia: the anterior temporal lobe and the left inferior parietal lobe.

Treatment and neuroimaging

Evidence for positive treatment outcomes can also be quantified using neuroimaging tools. The use of fMRI and an automatic classifier can help predict language recovery outcomes in stroke patients with 86% accuracy when coupled with age and language test scores. The stimuli tested were correct and incorrect sentences, and the subject had to press a button whenever a sentence was incorrect. The fMRI data collected focused on responses in regions of interest identified in healthy subjects. Recovery from aphasia can also be quantified using diffusion tensor imaging (DTI). The arcuate fasciculus (AF) connects the right and left superior temporal lobes, the premotor regions/posterior inferior frontal gyrus, and the primary motor cortex. In a study which enrolled patients in a speech therapy program, an increase in AF fibers and volume was found after six weeks in the program, which correlated with long-term improvement in those patients. This implies that DTI can be used to quantify the improvement in patients after speech and language treatment programs are applied.
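As a rough illustration of how such an automatic classifier might be assembled (this is not the published pipeline; the features, model choice, and synthetic data below are assumptions), one could combine per-region fMRI activation values with age and a baseline language test score and cross-validate a standard classifier:

```python
# Hypothetical sketch of predicting recovery (good vs. poor) from fMRI region-of-interest
# activations plus age and a baseline language score. All data are synthetic; the real
# study's features, preprocessing, and model are not reproduced here.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_rois = 40, 6

roi_activation = rng.normal(size=(n_patients, n_rois))    # mean BOLD response per ROI
age = rng.uniform(30, 85, size=(n_patients, 1))
language_score = rng.uniform(0, 100, size=(n_patients, 1))
X = np.hstack([roi_activation, age, language_score])

# Synthetic label: "good recovery" loosely tied to activation and baseline score.
y = (roi_activation.mean(axis=1)
     + 0.01 * (language_score.ravel() - 50)
     + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```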

Classification

Aphasia is best thought of as a collection of different disorders, rather than a single problem. Each individual with aphasia will present with their own particular combination of language strengths and weaknesses. Consequently, it is a major challenge just to document the various difficulties that can occur in different people, let alone decide how they might best be treated. Most classifications of the aphasias tend to divide the various symptoms into broad classes. A common approach is to distinguish between the fluent aphasias (where speech remains fluent, but content may be lacking, and the person may have difficulties understanding others), and the nonfluent aphasias (where speech is very halting and effortful, and may consist of just one or two words at a time).

However, no such broad-based grouping has proven fully adequate, or reliable. There is wide variation among people even within the same broad grouping, and aphasias can be highly selective. For instance, people with naming deficits (anomic aphasia) might show an inability only for naming buildings, or people, or colors. Unfortunately, assessments that characterize aphasia in these groupings have persisted. This is not helpful to people living with aphasia, and provides inaccurate descriptions of an individual pattern of difficulties.

It is important to note that there are typical difficulties with speech and language that come with normal aging as well. As we age, language can become more difficult to process, resulting in slower verbal comprehension and reading, and more frequent word-finding difficulties. With each of these, though, unlike some aphasias, functionality within daily life remains intact.

Boston classification

Major characteristics of different types of aphasia according to the Boston classification
Type of aphasia | Speech repetition | Naming | Auditory comprehension | Fluency
Expressive aphasia (Broca's aphasia) | Moderate–severe | Moderate–severe | Mild difficulty | Non-fluent, effortful, slow
Receptive aphasia (Wernicke's aphasia) | Mild–severe | Mild–severe | Defective | Fluent paraphasic
Conduction aphasia | Poor | Poor | Relatively good | Fluent
Mixed transcortical aphasia | Moderate | Poor | Poor | Non-fluent
Transcortical motor aphasia | Good | Mild–severe | Mild | Non-fluent
Transcortical sensory aphasia | Good | Moderate–severe | Poor | Fluent
Global aphasia | Poor | Poor | Poor | Non-fluent
Anomic aphasia | Mild | Moderate–severe | Mild | Fluent
  • Individuals with receptive aphasia (Wernicke's aphasia), also referred to as fluent aphasia, may speak in long sentences that have no meaning, add unnecessary words, and even create new "words" (neologisms). For example, someone with receptive aphasia may say, "delicious taco", meaning "The dog needs to go out so I will take him for a walk". They have poor auditory and reading comprehension, and fluent, but nonsensical, oral and written expression. Individuals with receptive aphasia usually have great difficulty understanding the speech of both themselves and others and are, therefore, often unaware of their mistakes. Receptive language deficits usually arise from lesions in the posterior portion of the left hemisphere at or near Wernicke's area. It is often the result of trauma to the temporal region of the brain, specifically damage to Wernicke's area. Trauma can result from an array of problems; however, it is most commonly seen as a result of stroke.
  • Individuals with expressive aphasia (Broca's aphasia) frequently speak short, meaningful phrases that are produced with great effort. It is thus characterized as a nonfluent aphasia. Affected people often omit small words such as "is", "and", and "the". For example, a person with expressive aphasia may say, "walk dog", which could mean "I will take the dog for a walk", "you take the dog for a walk", or even "the dog walked out of the yard." Individuals with expressive aphasia are able to understand the speech of others to varying degrees. Because of this, they are often aware of their difficulties and can become easily frustrated by their speaking problems. While Broca's aphasia may appear to be solely an issue with language production, evidence suggests that it may be rooted in an inability to process syntactical information. Individuals with expressive aphasia may have a speech automatism (also called recurring or recurrent utterance). These speech automatisms can be repeated lexical speech automatisms; e.g., modalisations ('I can't..., I can't...'), expletives/swearwords, numbers ('one two, one two') or non-lexical utterances made up of repeated, legal but meaningless, consonant-vowel syllables (e.g., /tan tan/, /bi bi/). In severe cases, the individual may be able to utter only the same speech automatism each time they attempt speech.
  • Individuals with anomic aphasia have difficulty with naming. People with this aphasia may have difficulties naming certain words, linked by their grammatical type (e.g., difficulty naming verbs and not nouns) or by their semantic category (e.g., difficulty naming words relating to photography but nothing else), or a more general naming difficulty. People tend to produce grammatical, yet empty, speech. Auditory comprehension tends to be preserved. Anomic aphasia is the aphasic presentation of tumors in the language zone; it is also the aphasic presentation of Alzheimer's disease. Anomic aphasia is the mildest form of aphasia, indicating a likely possibility for better recovery.
  • Individuals with transcortical sensory aphasia, in principle the most general and potentially among the most complex forms of aphasia, may have similar deficits as in receptive aphasia, but their repetition ability may remain intact.
  • Global aphasia is considered a severe impairment in many language aspects since it impacts expressive and receptive language, reading, and writing. Despite these many deficits, there is evidence that has shown individuals benefited from speech language therapy. Even though individuals with global aphasia will not become competent speakers, listeners, writers, or readers, goals can be created to improve the individual's quality of life. Individuals with global aphasia usually respond well to treatment that includes personally relevant information, which is also important to consider for therapy.
  • Individuals with conduction aphasia have deficits in the connections between the speech-comprehension and speech-production areas. This might be caused by damage to the arcuate fasciculus, the white matter tract that transmits information between Wernicke's area and Broca's area. Similar symptoms, however, can be present after damage to the insula or to the auditory cortex. Auditory comprehension is near normal, and oral expression is fluent with occasional paraphasic errors. Paraphasic errors may be phonemic/literal or semantic/verbal. Repetition ability is poor. Conduction and transcortical aphasias are caused by damage to the white matter tracts; these aphasias spare the cortex of the language centers but instead create a disconnection between them. People with conduction aphasia typically have good language comprehension, but poor speech repetition and mild difficulty with word retrieval and speech production. People with conduction aphasia are typically aware of their errors. Two forms of conduction aphasia have been described: reproduction conduction aphasia (impaired repetition of a single, relatively unfamiliar multisyllabic word) and repetition conduction aphasia (impaired repetition of unconnected short familiar words).
  • Transcortical aphasias include transcortical motor aphasia, transcortical sensory aphasia, and mixed transcortical aphasia. People with transcortical motor aphasia typically have intact comprehension and awareness of their errors, but poor word finding and speech production. People with transcortical sensory and mixed transcortical aphasia have poor comprehension and unawareness of their errors. Despite poor comprehension and more severe deficits in some transcortical aphasias, small studies have indicated that full recovery is possible for all types of transcortical aphasia.

Classical-localizationist approaches

Cortex

Localizationist approaches aim to classify the aphasias according to their major presenting characteristics and the regions of the brain that most probably gave rise to them. Inspired by the early work of nineteenth-century neurologists Paul Broca and Carl Wernicke, these approaches identify two major subtypes of aphasia and several more minor subtypes:

  • Expressive aphasia (also known as "motor aphasia" or "Broca's aphasia"), which is characterized by halted, fragmented, effortful speech, but well-preserved comprehension relative to expression. Damage is typically in the anterior portion of the left hemisphere, most notably Broca's area. Individuals with Broca's aphasia often have right-sided weakness or paralysis of the arm and leg, because the left frontal lobe is also important for body movement, particularly on the right side.
  • Receptive aphasia (also known as "sensory aphasia" or "Wernicke's aphasia"), which is characterized by fluent speech, but marked difficulties understanding words and sentences. Although fluent, the speech may lack in key substantive words (nouns, verbs, adjectives), and may contain incorrect words or even nonsense words. This subtype has been associated with damage to the posterior left temporal cortex, most notably Wernicke's area. These individuals usually have no body weakness, because their brain injury is not near the parts of the brain that control movement.
  • Conduction aphasia, where speech remains fluent, and comprehension is preserved, but the person may have disproportionate difficulty repeating words or sentences. Damage typically involves the arcuate fasciculus and the left parietal region.
  • Transcortical motor aphasia and transcortical sensory aphasia, which are similar to Broca's and Wernicke's aphasia respectively, but the ability to repeat words and sentences is disproportionately preserved.

Recent classification schemes adopting this approach, such as the Boston-Neoclassical Model, also group these classical aphasia subtypes into two larger classes: the nonfluent aphasias (which encompasses Broca's aphasia and transcortical motor aphasia) and the fluent aphasias (which encompasses Wernicke's aphasia, conduction aphasia and transcortical sensory aphasia). These schemes also identify several further aphasia subtypes, including: anomic aphasia, which is characterized by a selective difficulty finding the names for things; and global aphasia, where both expression and comprehension of speech are severely compromised.

Many localizationist approaches also recognize the existence of additional, more "pure" forms of language disorder that may affect only a single language skill. For example, in pure alexia, a person may be able to write but not read, and in pure word deafness, they may be able to produce speech and to read, but not understand speech when it is spoken to them.

Cognitive neuropsychological approaches

Although localizationist approaches provide a useful way of classifying the different patterns of language difficulty into broad groups, one problem is that most individuals do not fit neatly into one category or another. Another problem is that the categories, particularly the major ones such as Broca's and Wernicke's aphasia, still remain quite broad and do not meaningfully reflect a person's difficulties. Consequently, even amongst those who meet the criteria for classification into a subtype, there can be enormous variability in the types of difficulties they experience.

Instead of categorizing every individual into a specific subtype, cognitive neuropsychological approaches aim to identify the key language skills or "modules" that are not functioning properly in each individual. A person could potentially have difficulty with just one module, or with a number of modules. This type of approach requires a framework or theory as to what skills/modules are needed to perform different kinds of language tasks. For example, the model of Max Coltheart identifies a module that recognizes phonemes as they are spoken, which is essential for any task involving recognition of words. Similarly, there is a module that stores phonemes that the person is planning to produce in speech, and this module is critical for any task involving the production of long words or long strings of speech. Once a theoretical framework has been established, the functioning of each module can then be assessed using a specific test or set of tests. In the clinical setting, use of this model usually involves conducting a battery of assessments, each of which tests one or a number of these modules. Once a diagnosis is reached as to the skills/modules where the most significant impairment lies, therapy can proceed to treat these skills.

Progressive aphasias

Primary progressive aphasia (PPA) is a neurodegenerative focal dementia that can be associated with progressive illnesses or dementias, such as frontotemporal dementia/Pick complex, motor neuron disease, progressive supranuclear palsy, and Alzheimer's disease, which is the gradual process of progressively losing the ability to think. Gradual loss of language function occurs in the context of relatively well-preserved memory, visual processing, and personality until the advanced stages. Symptoms usually begin with word-finding problems (naming) and progress to impaired grammar (syntax) and comprehension (sentence processing and semantics). The loss of language before the loss of memory differentiates PPA from typical dementias. People with PPA may have difficulties comprehending what others are saying. They can also have difficulty trying to find the right words to make a sentence. There are three classifications of primary progressive aphasia: progressive nonfluent aphasia (PNFA), semantic dementia (SD), and logopenic progressive aphasia (LPA).

Progressive jargon aphasia is a fluent or receptive aphasia in which the person's speech is incomprehensible but appears to make sense to them. Speech is fluent and effortless, with intact syntax and grammar, but the person has problems with the selection of nouns. They will either replace the desired word with another that sounds or looks like the original one, or has some other connection to it, or they will replace it with sounds. As such, people with jargon aphasia often use neologisms and may perseverate if they try to replace the words they cannot find with sounds. Substitutions commonly involve picking another (actual) word starting with the same sound (e.g., clocktower – colander), picking another semantically related to the first (e.g., letter – scroll), or picking one phonetically similar to the intended one (e.g., lane – late).

Deaf aphasia

There have been many instances showing that there is a form of aphasia among deaf individuals. Sign languages are, after all, forms of language that have been shown to use the same areas of the brain as verbal forms of language. Mirror neurons become activated when an animal is acting in a particular way or watching another individual act in the same manner. These mirror neurons are important in giving an individual the ability to mimic movements of the hands. Broca's area of speech production has been shown to contain several of these mirror neurons, resulting in significant similarities of brain activity between sign language and vocal speech communication. People use facial movements to create what other people perceive as expressions of emotion. Combining these facial movements with speech creates a fuller form of language, enabling much more complex and detailed communication. Sign language also uses these facial movements and emotions along with the primary hand-movement form of communication, and these facial-movement forms of communication come from the same areas of the brain. When certain areas of the brain are damaged, vocal forms of communication are in jeopardy of severe forms of aphasia. Since these same areas of the brain are used for sign language, the same, or at least very similar, forms of aphasia can appear in the Deaf community. Individuals can show a form of Wernicke's aphasia with sign language, with deficits in their ability to produce any form of expression. Broca's aphasia shows up in some people as well; these individuals find tremendous difficulty in actually signing the linguistic concepts they are trying to express.

Severity

The severity of aphasia varies depending on the size of the stroke. However, there is much variance in how often a given level of severity occurs in particular types of aphasia; for instance, any type of aphasia can range from mild to profound. Regardless of the severity of aphasia, people can make improvements due to spontaneous recovery and treatment in the acute stages of recovery. Additionally, while most studies propose that the greatest outcomes occur in people with severe aphasia when treatment is provided in the acute stages of recovery, Robey (1998) also found that those with severe aphasia are capable of making strong language gains in the chronic stage of recovery as well. This finding implies that persons with aphasia have the potential for functional outcomes regardless of how severe their aphasia may be. While there is no distinct pattern of outcomes based on severity alone, people with global aphasia typically make functional language gains, though these may be gradual, since global aphasia affects many language areas.

Prevention

Aphasia is largely caused by unavoidable events. However, some precautions can be taken to decrease the risk of experiencing one of the two major causes of aphasia: stroke and traumatic brain injury (TBI). To decrease the probability of having an ischemic or hemorrhagic stroke, one should take the following precautions:

  • Exercising regularly
  • Eating a healthy diet, avoiding cholesterol in particular
  • Keeping alcohol consumption low and avoiding tobacco use
  • Controlling blood pressure
  • Going to the emergency room immediately if you begin to experience unilateral extremity (especially leg) swelling, warmth, redness, and/or tenderness as these are symptoms of a deep vein thrombosis which can lead to a stroke

To prevent aphasia due to traumatic injury, one should take precautionary measures when engaging in dangerous activities such as:

  • Wearing a helmet when operating a bicycle, motorcycle, ATV, or any other moving vehicle that could potentially be involved in an accident
  • Wearing a seatbelt when driving or riding in a car
  • Wearing proper protective gear when playing contact sports, especially American football, rugby, and hockey, or refraining from such activities
  • Minimizing anticoagulant use (including aspirin) if at all possible as they increase the risk of hemorrhage after a head injury

Additionally, one should always seek medical attention after sustaining head trauma due to a fall or accident. The sooner that one receives medical attention for a traumatic brain injury, the less likely one is to experience long-term or severe effects.

Management

Most acute cases of aphasia recover some or most skills by participating in speech and language therapy. Recovery and improvement can continue for years after the stroke. After the onset of aphasia, there is approximately a six-month period of spontaneous recovery; during this time, the brain is attempting to recover and repair the damaged neurons. Improvement varies widely, depending on the aphasia's cause, type, and severity. Recovery also depends on the person's age, health, motivation, handedness, and educational level.

Speech and language therapy that is higher intensity, higher dose, or provided over a longer duration leads to significantly better functional communication, but people may be more likely to drop out of high-intensity treatment (up to 15 hours per week). A total of 20–50 hours of speech and language therapy is necessary for the best recovery. The most improvement happens when 2–5 hours of therapy is provided each week over 4–5 days. Recovery is further improved when, in addition to therapy, people practice tasks at home. Speech and language therapy is also effective if it is delivered online through video or by a family member who has been trained by a professional therapist.

Recovery with therapy also depends on the recency of the stroke and the age of the person. Receiving therapy within a month after the stroke leads to the greatest improvements. Three to six months after the stroke, more therapy will be needed, but symptoms can still improve. People with aphasia who are younger than 55 years are the most likely to improve, but people older than 75 years can still get better with therapy.

There is no one treatment proven to be effective for all types of aphasias. The reason that there is no universal treatment for aphasia is because of the nature of the disorder and the various ways it is presented. Aphasia is rarely exhibited identically, implying that treatment needs to be catered specifically to the individual. Studies have shown that, although there is no consistency on treatment methodology in literature, there is a strong indication that treatment, in general, has positive outcomes. Therapy for aphasia ranges from increasing functional communication to improving speech accuracy, depending on the person's severity, needs and support of family and friends. Group therapy allows individuals to work on their pragmatic and communication skills with other individuals with aphasia, which are skills that may not often be addressed in individual one-on-one therapy sessions. It can also help increase confidence and social skills in a comfortable setting.

Evidence does not support the use of transcranial direct current stimulation (tDCS) for improving aphasia after stroke. Moderate-quality evidence does indicate naming performance improvements for nouns, but not verbs, using tDCS.

Specific treatment techniques include the following:

  • Copy and recall therapy (CART) – repetition and recall of targeted words within therapy may strengthen orthographic representations and improve single word reading, writing, and naming
  • Visual communication therapy (VIC) – the use of index cards with symbols to represent various components of speech
  • Visual action therapy (VAT) – typically treats individuals with global aphasia to train the use of hand gestures for specific items
  • Functional communication treatment (FCT) – focuses on improving activities specific to functional tasks, social interaction, and self-expression
  • Promoting aphasic's communicative effectiveness (PACE) – a means of encouraging normal interaction between people with aphasia and clinicians. In this kind of therapy, the focus is on pragmatic communication rather than treatment itself. People are asked to communicate a given message to their therapists by means of drawing, making hand gestures or even pointing to an object
  • Melodic intonation therapy (MIT) – aims to use the intact melodic/prosodic processing skills of the right hemisphere to help cue retrieval of words and expressive language
  • Centeredness Theory Interview (CTI) – uses client-centered goal formation concerning the nature of current patient interactions, as well as future/desired interactions, to improve subjective well-being, cognition, and communication
  • Other – i.e. drawing as a way of communicating, trained conversation partners

Semantic feature analysis (SFA) – a type of aphasia treatment that targets word-finding deficits. It is based on the theory that neural connections can be strengthened by using related words and phrases that are similar to the target word to eventually activate the target word in the brain. SFA can be implemented in multiple forms, such as verbally, in writing, or using picture cards. The SLP provides prompting questions to the individual with aphasia in order for the person to name the picture provided. Studies show that SFA is an effective intervention for improving confrontational naming.

Melodic intonation therapy is used to treat non-fluent aphasia and has proved effective in some cases. However, there is still no evidence from randomized controlled trials confirming the efficacy of MIT in chronic aphasia. MIT is used to help people with aphasia vocalize through song, which is then transferred to spoken words. Good candidates for this therapy include people who have had left-hemisphere strokes, non-fluent aphasias such as Broca's, good auditory comprehension, poor repetition and articulation, and good emotional stability and memory. An alternative explanation is that the efficacy of MIT depends on neural circuits involved in the processing of rhythmicity and formulaic expressions (examples taken from the MIT manual: "I am fine," "how are you?" or "thank you"); while rhythmic features associated with melodic intonation may engage primarily left-hemisphere subcortical areas of the brain, the use of formulaic expressions is known to be supported by right-hemisphere cortical and bilateral subcortical neural networks.

Systematic reviews support the effectiveness and importance of partner training. According to the National Institute on Deafness and Other Communication Disorders (NIDCD), involving family with the treatment of an aphasic loved one is ideal for all involved, because while it will no doubt assist in their recovery, it will also make it easier for members of the family to learn how best to communicate with them.

When a person's speech is insufficient, different kinds of augmentative and alternative communication could be considered such as alphabet boards, pictorial communication books, specialized software for computers or apps for tablets or smartphones.

When addressing Wernicke's aphasia, according to Bakheit et al. (2007), the lack of awareness of the language impairments, a common characteristic of Wernicke's aphasia, may affect the rate and extent of therapy outcomes. Robey (1998) determined that at least 2 hours of treatment per week is recommended for making significant language gains. Spontaneous recovery may cause some language gains, but without speech-language therapy, the outcomes can be half as strong as those with therapy.

When addressing Broca's aphasia, better outcomes occur when the person participates in therapy, and treatment is more effective than no treatment for people in the acute period. Two or more hours of therapy per week in acute and post-acute stages produced the greatest results. High-intensity therapy was most effective, and low-intensity therapy was almost equivalent to no therapy.

People with global aphasia are sometimes referred to as having irreversible aphasic syndrome, often making limited gains in auditory comprehension and recovering no functional language modality with therapy. With this said, people with global aphasia may retain gestural communication skills that may enable success when communicating with conversational partners within familiar conditions. Process-oriented treatment options are limited, and people may not become competent language users as readers, listeners, writers, or speakers no matter how extensive therapy is. However, people's daily routines and quality of life can be enhanced with reasonable and modest goals. After the first month, there is limited to no recovery of language abilities in most people. The prognosis is grim: 83% of those who were globally aphasic after the first month remain globally aphasic at one year. Some people are so severely impaired that existing process-oriented treatment approaches offer no signs of progress and therefore cannot justify the cost of therapy.

Perhaps due to the relative rarity of conduction aphasia, few studies have specifically examined the effectiveness of therapy for people with this type of aphasia. The studies performed show that therapy can help to improve specific language outcomes. One intervention that has had positive results is auditory repetition training. Kohn et al. (1990) reported that drilled auditory repetition training was related to improvements in spontaneous speech, Francis et al. (2003) reported improvements in sentence comprehension, and Kalinyak-Fliszar et al. (2011) reported improvements in auditory-visual short-term memory.

Individualized service delivery

Intensity of treatment should be individualized based on the recency of stroke, therapy goals, and other specific characteristics such as age, size of lesion, overall health status, and motivation. Each individual reacts differently to treatment intensity and is able to tolerate treatment at different times post-stroke. Intensity of treatment after a stroke should be dependent on the person's motivation, stamina, and tolerance for therapy.

Outcomes

If the symptoms of aphasia last longer than two or three months after a stroke, a complete recovery is unlikely. However, it is important to note that some people continue to improve over a period of years and even decades. Improvement is a slow process that usually involves both helping the individual and family understand the nature of aphasia and learning compensatory strategies for communicating.

After a traumatic brain injury (TBI) or cerebrovascular accident (CVA), the brain undergoes several healing and reorganization processes, which may result in improved language function. This is referred to as spontaneous recovery: the natural recovery the brain makes without treatment as it begins to reorganize and change. There are several factors that contribute to a person's chance of recovery from stroke-induced aphasia, including stroke size and location. Age, sex, and education have not been found to be very predictive. There is also research pointing to damage in the left hemisphere healing more effectively than damage in the right.

Specific to aphasia, spontaneous recovery varies among affected people and may not look the same in everyone, making it difficult to predict recovery.

Though some cases of Wernicke's aphasia have shown greater improvements than more mild forms of aphasia, people with Wernicke's aphasia may not reach as high a level of speech abilities as those with mild forms of aphasia.

Prevalence

Aphasia affects about two million people in the U.S. and 250,000 people in Great Britain. Nearly 180,000 people acquire the disorder every year in the U.S., 170,000 due to stroke. Any person of any age can develop aphasia, given that it is often caused by a traumatic injury. However, people who are middle aged and older are the most likely to acquire aphasia, as the other etiologies are more likely at older ages. For example, approximately 75% of all strokes occur in individuals over the age of 65. Strokes account for most documented cases of aphasia: 25% to 40% of people who survive a stroke develop aphasia as a result of damage to the language-processing regions of the brain.

History

The first recorded case of aphasia is from an Egyptian papyrus, the Edwin Smith Papyrus, which details speech problems in a person with a traumatic brain injury to the temporal lobe.

During the second half of the 19th century, aphasia was a major focus for scientists and philosophers who were working in the beginning stages of the field of psychology. In medical research, speechlessness was described as an incorrect prognosis, and there was no assumption that underlying language complications existed. Broca and his colleagues were some of the first to write about aphasia, but Wernicke was the first credited with writing extensively about aphasia as a disorder involving comprehension difficulties. Despite disputes over who reported on aphasia first, it was F.J. Gall who gave the first full description of aphasia after studying wounds to the brain, as well as observing speech difficulties resulting from vascular lesions. A recent book covers the entire history of aphasia: Tesak, J. & Code, C. (2008) Milestones in the History of Aphasia: Theories and Protagonists. Hove, East Sussex: Psychology Press.

Etymology

Aphasia is from Greek a- ("without", negative prefix) + phásis (φάσις, "speech").

The word aphasia comes from the word ἀφασία aphasia, in Ancient Greek, which means "speechlessness", derived from ἄφατος aphatos, "speechless" from ἀ- a-, "not, un" and φημί phemi, "I speak".

Further research

Research is currently being done using functional magnetic resonance imaging (fMRI) to observe the differences in how language is processed in normal versus aphasic brains. This will help researchers to understand exactly what the brain must go through in order to recover from traumatic brain injury (TBI) and how different areas of the brain respond after such an injury.

Another intriguing approach being tested is drug therapy. Research is in progress to determine whether certain drugs might be used in addition to speech-language therapy to facilitate recovery of proper language function. It is possible that the best treatment for aphasia will involve combining drug treatment with therapy, rather than relying on one over the other.

One other method being researched as a potential therapeutic combination with speech-language therapy is brain stimulation. One particular method, transcranial magnetic stimulation (TMS), alters brain activity in whatever area it stimulates, which has recently led scientists to wonder whether this shift in brain function caused by TMS might help people re-learn language.

Research into aphasia has only just begun, and researchers are pursuing multiple ideas for how aphasia could be treated more effectively in the future.

Auditory agnosia

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Auditory_agnosia

Auditory agnosia is a form of agnosia that manifests itself primarily in the inability to recognize or differentiate between sounds. It is not a defect of the ear or "hearing", but rather a neurological inability of the brain to process sound meaning. While auditory agnosia impairs the understanding of sounds, other abilities such as reading, writing, and speaking are not hindered. It is caused by bilateral damage to the anterior superior temporal gyrus, which is part of the auditory pathway responsible for sound recognition, the auditory "what" pathway.

Persons with auditory agnosia can physically hear sounds and describe them using unrelated terms, but they are unable to recognize them. They might describe the sound of some environmental sounds, such as a motor starting, as resembling a lion roaring, but would not be able to associate the sound with "car" or "engine", nor would they say that it was a lion creating the noise. All auditory agnosia patients read lips in order to enhance speech comprehension.

It is as yet unclear whether auditory agnosia (also called general auditory agnosia) is a combination of milder disorders, such as auditory verbal agnosia (pure word deafness), non-verbal auditory agnosia, amusia, and word-meaning deafness, or a mild case of the more severe disorder, cerebral deafness. Typically, a person with auditory agnosia is incapable of comprehending spoken language as well as environmental sounds. Some hold that the milder disorders are how auditory agnosia occurs. In a few cases a person may be unable to understand spoken language only; this is called verbal auditory agnosia or pure word deafness. Nonverbal auditory agnosia is diagnosed when a person's understanding of environmental sounds is inhibited. Combined, these two disorders portray auditory agnosia. The blurred boundary between these disorders may lead to discrepancies in reporting. As of 2014, 203 patients with auditory perceptual deficits due to CNS damage were reported in the medical literature, of which 183 were diagnosed with general auditory agnosia or word deafness, 34 with cerebral deafness, 51 with non-verbal auditory agnosia-amusia, and 8 with word-meaning deafness.

History

A relationship between hearing and the brain was first documented by Ambroise Paré, a 16th-century battlefield doctor, who associated parietal lobe damage with acquired deafness (reported in Henschen, 1918). Systematic research into the manner in which the brain processes sounds, however, only began toward the end of the 19th century. In 1874, Wernicke was the first to ascribe to a brain region a role in auditory perception. Wernicke proposed that the impaired perception of language in his patients was due to losing the ability to register the sound frequencies that are specific to spoken words (he also suggested that other aphasic symptoms, such as speaking, reading, and writing errors, occur because these speech-specific frequencies are required for feedback). Wernicke localized the perception of spoken words to the posterior half of the left STG (superior temporal gyrus). Wernicke also distinguished between patients with auditory agnosia (which he labeled receptive aphasia) and patients who cannot detect sound at any frequency (which he labeled cortical deafness).

In 1877, Kussmaul was the first to report auditory agnosia in a patient with intact hearing, speaking, and reading-writing abilities. This case study led Kussmaul to propose a distinction between this word-perception deficit and Wernicke's sensory aphasia. He termed the former disorder "word deafness". Kussmaul also localized this disorder to the left STG. Wernicke interpreted Kussmaul's case as an incomplete variant of his sensory aphasia.

In 1885, Lichtheim also reported an auditory agnosia patient. This patient, in addition to word deafness, was impaired at recognizing environmental sounds and melodies. Based on this case study, as well as other aphasic patients, Lichtheim proposed that the language reception center receives afferents from upstream auditory and visual word recognition centers, and that damage to these regions results in word deafness or word blindness (i.e., alexia), respectively. Because the lesion of Lichtheim's auditory agnosia patient was sub-cortical, deep to the posterior STG (superior temporal gyrus), Lichtheim renamed auditory agnosia "sub-cortical speech deafness".

The language model proposed by Wernicke and Lichtheim wasn't accepted at first. For example, in 1897 Bastian argued that, because aphasic patients can repeat single words, their deficit is in the extraction of meaning from words. He attributed both aphasia and auditory agnosia to damage in Lichtheim's auditory word center. He hypothesized that aphasia is the outcome of partial damage to the left auditory word center, whereas auditory agnosia is the result of complete damage to the same area. Bastian localized the auditory word center to the posterior MTG (middle temporal gyrus).

Other opponents of the Wernicke-Lichtheim model were Sigmund Freud and Carl Freund. Freud (1891) suspected that the auditory deficits in aphasic patients were due to a secondary lesion of the cochlea. This assertion was confirmed by Freund (1895), who reported two auditory agnosia patients with cochlear damage (although in a later autopsy, Freund also reported the presence of a tumor in the left STG in one of these patients). This argument, however, was refuted by Bonvicini (1905), who measured the hearing of an auditory agnosia patient with tuning forks and confirmed intact pure tone perception. Similarly, Barrett's aphasic patient, who was incapable of comprehending speech, had intact hearing thresholds when examined with tuning forks and with a Galton whistle. The most vigorous opponent of the model of Wernicke and Lichtheim was Marie (1906), who argued that all aphasic symptoms manifest because of a single lesion to the language reception center, and that other symptoms such as auditory disturbances or paraphasia are expressed because the lesion also encompasses sub-cortical motor or sensory regions.

In the following years, an increasing number of clinical reports validated the view that the right and left auditory cortices project to a language reception center located in the posterior half of the left STG, and thus established the Wernicke-Lichtheim model. This view was also consolidated by Geschwind (1965), who reported that, in humans, the planum temporale is larger in the left hemisphere than in the right. Geschwind interpreted this asymmetry as anatomical verification of the role of the left posterior STG in the perception of language.

The Wernicke-Lichtheim-Geschwind model persisted throughout the 20th century. However, with the advent of MRI and its use for lesion mapping, it was shown that this model is based on an incorrect correlation between symptoms and lesions. Although this model is considered outdated, it is still widely mentioned in psychology and medical textbooks, and consequently in medical reports of auditory agnosia patients. As discussed below, cumulative evidence has recently shifted the locus of sound recognition to the left and right anterior auditory cortices, rather than the left posterior auditory cortex.

Related disorders

After auditory agnosia was first discovered, subsequent patients were diagnosed with different types of hearing impairments. In some reports, the deficit was restricted to spoken words, environmental sounds, or music. In one case study, each of the three sound types (music, environmental sounds, speech) was also shown to recover independently (Mendez and Geehan, 1988, case 2). It is still unclear whether general auditory agnosia is a combination of milder auditory disorders, or whether the source of this disorder lies at an earlier auditory processing stage.

Cerebral deafness

Cerebral deafness (also known as cortical deafness or central deafness) is a disorder characterized by complete deafness resulting from damage to the central nervous system. The primary distinction between auditory agnosia and cerebral deafness is the ability to detect pure tones, as measured with pure tone audiometry. Using this test, auditory agnosia patients were often reported capable of detecting pure tones almost as well as healthy individuals, whereas cerebral deafness patients found this task almost impossible or required very loud presentation of the sounds (above 100 dB). In all reported cases, cerebral deafness was associated with bilateral temporal lobe lesions. A study that compared the lesions of two cerebral deafness patients to those of an auditory agnosia patient concluded that cerebral deafness is the result of complete de-afferentation of the auditory cortices, whereas in auditory agnosia some thalamo-cortical fibers are spared. In most cases the disorder is transient and the symptoms subside into auditory agnosia (although chronic cases have been reported). Similarly, a study that ablated both auditory cortices of monkeys reported deafness that lasted one week in all cases and gradually subsided into auditory agnosia over a period of 3–7 weeks.

Pure word deafness

Since the early days of aphasia research, the relationship between auditory agnosia and speech perception has been debated. Lichtheim (1885) proposed that auditory agnosia is the result of damage to a brain area dedicated to the perception of spoken words, and consequently renamed the disorder from 'word deafness' to 'pure word deafness'. The description of word deafness as being exclusive to words was adopted by the scientific community even though the patient reported by Lichtheim also had more general auditory deficits. Some researchers who surveyed the literature, however, argued against labeling this disorder as pure word deafness on the grounds that all patients reported as impaired at perceiving spoken words also had other auditory deficits or aphasic symptoms. In one review of the literature, Ulrich (1978) presented evidence for separating word deafness from more general auditory agnosia, and suggested naming this disorder "linguistic auditory agnosia" (a name later rephrased as "verbal auditory agnosia"). To contrast this disorder with auditory agnosia in which speech repetition is intact (word meaning deafness), the names "word sound deafness" and "phonemic deafness" (Kleist, 1962) were also proposed. Although some researchers argued against the purity of word deafness, some anecdotal cases with exclusively impaired perception of speech were documented. On several occasions, patients were reported to gradually transition from pure word deafness to general auditory agnosia/cerebral deafness, or to recover from general auditory agnosia/cerebral deafness into pure word deafness.

In a review of the auditory agnosia literature, Phillips and Farmer showed that patients with word deafness are impaired in their ability to discriminate gaps between click sounds as long as 15–50 milliseconds, which is consistent with the duration of phonemes. They also showed that patients with general auditory agnosia are impaired in their ability to discriminate gaps between click sounds as long as 100–300 milliseconds. The authors further showed that word deafness patients liken their auditory experience to hearing a foreign language, whereas general auditory agnosia patients described speech as incomprehensible noise. Based on these findings, and because both word deafness and general auditory agnosia patients were reported to have very similar neuroanatomical damage (bilateral damage to the auditory cortices), the authors concluded that word deafness and general auditory agnosia are the same disorder, differing only in degree of severity.

Pinard et al also suggested that pure word deafness and general auditory agnosia represent different degrees of the same disorder. They suggested that environmental sounds are spared in mild cases because they are easier to perceive than speech sounds. They argued that environmental sounds are more distinct than speech sounds because they are more varied in their duration and loudness. They also proposed that environmental sounds are easier to perceive because they are composed of a repetitive pattern (e.g., the bark of a dog or the siren of an ambulance).

Auerbach et al considered word deafness and general auditory agnosia to be two separate disorders, labelling general auditory agnosia pre-phonemic auditory agnosia and word deafness post-phonemic auditory agnosia. They suggested that pre-phonemic auditory agnosia manifests because of general damage to the auditory cortex of both hemispheres, and that post-phonemic auditory agnosia manifests because of damage to a spoken word recognition center in the left hemisphere. Recent evidence may support Auerbach's hypothesis: an epileptic patient who underwent electro-stimulation of the anterior superior temporal gyrus demonstrated a transient loss of speech comprehension with intact perception of environmental sounds and music.

Non-verbal auditory agnosia

The term auditory agnosia was originally coined by Freud (1891) to describe patients with selective impairment of environmental sounds. In a review of the auditory agnosia literature, Ulrich re-named this disorder non-verbal auditory agnosia (although sound auditory agnosia and environmental sound auditory agnosia are also commonly used). This disorder is very rare, and only 18 cases have been documented. In contrast to pure word deafness and general auditory agnosia, this disorder is likely under-diagnosed because patients are often not aware of their disorder and thus do not seek medical intervention.

Throughout the 20th century, all reported non-verbal auditory agnosia patients had bilateral or right temporal lobe damage. For this reason, the right hemisphere was traditionally attributed with the perception of environmental sounds. However, Tanaka et al reported 8 patients with non-verbal auditory agnosia, 4 with right hemisphere lesions and 4 with left hemisphere lesions. Saygin et al also reported a patient with damage to the left auditory cortex.

The underlying deficit in non-verbal auditory agnosia appears to be varied. Several patients were characterized by impaired discrimination of pitch, whereas others were reported with impaired discrimination of timbre and rhythm (discrimination of pitch was relatively preserved in one of these cases). In contrast to patients with pure word deafness and general auditory agnosia, patients with non-verbal auditory agnosia were reported as impaired at discriminating long gaps between click sounds, but not short gaps. A possible neuroanatomical structure that relays sounds of longer duration was suggested by Tanaka et al. By comparing the lesions of two cortically deaf patients with the lesion of a word deafness patient, they proposed the existence of two thalamocortical pathways that inter-connect the MGN with the auditory cortex. They suggested that spoken words are relayed via a direct thalamocortical pathway that passes underneath the putamen, and that environmental sounds are relayed via a separate thalamocortical pathway that passes above the putamen, near the parietal white matter.

Amusia

Auditory agnosia patients are often impaired in the discrimination of all sounds, including music. However, in two such patients music perception was spared, and in one patient music perception was enhanced. The medical literature reports 33 patients diagnosed with an exclusive deficit in the discrimination and recognition of musical segments (i.e., amusia). In all these cases but one, the damage was localized to the right hemisphere or was bilateral, and it tended to center on the temporal pole. Consistent with this, removal of the anterior temporal lobe was also associated with loss of music perception, and recordings directly from the anterior auditory cortex revealed that, in both hemispheres, music is perceived medially to speech. These findings therefore imply that the loss of music perception in auditory agnosia is due to damage to the medial anterior STG. In contrast to the association of amusia specific to the recognition of melodies (amelodia) with the temporal pole, posterior STG damage was associated with loss of rhythm perception (arrhythmia). Conversely, in two patients rhythm perception was intact while recognition/discrimination of musical segments was impaired. Amusia also dissociates with regard to the enjoyment of music. In two reports, amusic patients who were not able to distinguish musical instruments reported that they still enjoy listening to music. On the other hand, a patient with left hemispheric damage to the amygdala was reported to perceive, but not enjoy, music.

Word meaning deafness / associative auditory agnosia

In 1928, Kleist suggested that the etiology of word deafness could be either impaired perception of the sound (apperceptive auditory agnosia) or impaired extraction of meaning from a sound (associative auditory agnosia). This hypothesis was first tested by Vignolo et al (1969), who examined unilateral stroke patients. They reported that patients with left hemisphere damage were impaired in matching environmental sounds with their corresponding pictures, whereas patients with right hemisphere damage were impaired in the discrimination of meaningless noise segments. The researchers concluded that left hemispheric damage results in associative auditory agnosia, and right hemisphere damage results in apperceptive auditory agnosia. Although the conclusion reached by this study could be considered over-reaching, associative auditory agnosia could correspond to the disorder word meaning deafness.

Patients with word meaning deafness are characterized by impaired speech recognition but intact repetition of speech, and by left hemisphere damage. These patients often repeat words in an attempt to extract their meaning (e.g., "Jar....Jar....what is a jar?"). In the first documented case, Bramwell (1897, translated by Ellis, 1984) reported a patient who, in order to comprehend speech, wrote down what she heard and then read her own handwriting. Kohn and Friedman, and Symonds, also reported word meaning deafness patients who were able to write to dictation. In at least 12 cases, patients with symptoms corresponding to word meaning deafness were diagnosed with auditory agnosia. Unlike most auditory agnosia patients, word meaning deafness patients are not impaired at discriminating gaps between click sounds. It remains unclear whether word meaning deafness is synonymous with the disorder deep dysphasia, in which patients cannot repeat nonsense words and produce semantic paraphasia during repetition of real words. Word meaning deafness is also often confused with transcortical sensory aphasia, but such patients differ from the latter by their ability to express themselves appropriately, orally or in writing.

Neurological mechanism

Auditory agnosia (with the exception of non-verbal auditory agnosia and amusia) is strongly dependent on damage to both hemispheres. The order of hemispheric damage is irrelevant to the manifestation of symptoms, and years can elapse between damage to the first hemisphere and damage to the second (after which the symptoms suddenly emerge). A study that compared lesion locations reported that in all cases with bilateral hemispheric damage, the lesion included Heschl's gyrus or its underlying white matter in at least one hemisphere. A rare insight into the etiology of this disorder was reported in a study of an auditory agnosia patient with damage to the brainstem instead of the cortex. fMRI scanning of the patient revealed weak activation of the anterior Heschl's gyrus (area R) and anterior superior temporal gyrus. These brain areas are part of the auditory 'what' pathway and are known from both human and monkey research to participate in the recognition of sounds.

Parsing

From Wikipedia, the free encyclopedia

Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar. The term parsing comes from Latin pars (orationis), meaning part (of speech).

The term has slightly different meanings in different branches of linguistics and computer science. Traditional sentence parsing is often performed as a method of understanding the exact meaning of a sentence or word, sometimes with the aid of devices such as sentence diagrams. It usually emphasizes the importance of grammatical divisions such as subject and predicate.

Within computational linguistics the term is used to refer to the formal analysis by a computer of a sentence or other string of words into its constituents, resulting in a parse tree showing their syntactic relation to each other, which may also contain semantic information. Some parsing algorithms may generate a parse forest or list of parse trees for a syntactically ambiguous input.

The term is also used in psycholinguistics when describing language comprehension. In this context, parsing refers to the way that human beings analyze a sentence or phrase (in spoken language or text) "in terms of grammatical constituents, identifying the parts of speech, syntactic relations, etc." This term is especially common when discussing which linguistic cues help speakers interpret garden-path sentences.

Within computer science, the term is used in the analysis of computer languages, referring to the syntactic analysis of the input code into its component parts in order to facilitate the writing of compilers and interpreters. The term may also be used to describe a split or separation.

Human languages

Traditional methods

The traditional grammatical exercise of parsing, sometimes known as clause analysis, involves breaking down a text into its component parts of speech with an explanation of the form, function, and syntactic relationship of each part. This is determined in large part from study of the language's conjugations and declensions, which can be quite intricate for heavily inflected languages. To parse a phrase such as "man bites dog" involves noting that the singular noun "man" is the subject of the sentence, the verb "bites" is the third person singular of the present tense of the verb "to bite", and the singular noun "dog" is the object of the sentence. Techniques such as sentence diagrams are sometimes used to indicate relation between elements in the sentence.

Parsing was formerly central to the teaching of grammar throughout the English-speaking world, and widely regarded as basic to the use and understanding of written language. However, the general teaching of such techniques is no longer current.

Computational methods

In some machine translation and natural language processing systems, written texts in human languages are parsed by computer programs. Human sentences are not easily parsed by programs, as there is substantial ambiguity in the structure of human language, whose usage is to convey meaning (or semantics) among a potentially unlimited range of possibilities, only some of which are germane to the particular case. So an utterance "Man bites dog" versus "Dog bites man" is definite on one detail, but in another language might appear as "Man dog bites", relying on the larger context to distinguish between those two possibilities, if indeed that difference was of concern. It is difficult to prepare formal rules to describe informal behaviour even though it is clear that some rules are being followed.

In order to parse natural language data, researchers must first agree on the grammar to be used. The choice of syntax is affected by both linguistic and computational concerns; for instance some parsing systems use lexical functional grammar, but in general, parsing for grammars of this type is known to be NP-complete. Head-driven phrase structure grammar is another linguistic formalism which has been popular in the parsing community, but other research efforts have focused on less complex formalisms such as the one used in the Penn Treebank. Shallow parsing aims to find only the boundaries of major constituents such as noun phrases. Another popular strategy for avoiding linguistic controversy is dependency grammar parsing.

Most modern parsers are at least partly statistical; that is, they rely on a corpus of training data which has already been annotated (parsed by hand). This approach allows the system to gather information about the frequency with which various constructions occur in specific contexts. (See machine learning.) Approaches which have been used include straightforward PCFGs (probabilistic context-free grammars), maximum entropy, and neural nets. Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However such systems are vulnerable to overfitting and require some kind of smoothing to be effective.
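
For illustration, a minimal sketch of probabilistic parsing with a toy PCFG follows, assuming the third-party NLTK toolkit is available; the grammar, its rule probabilities, and the example sentence are invented for this sketch rather than taken from any treebank.

# Toy probabilistic context-free grammar; the probabilities are made up.
import nltk

toy_grammar = nltk.PCFG.fromstring("""
    S  -> NP VP   [1.0]
    NP -> 'man'   [0.5] | 'dog' [0.5]
    VP -> V NP    [1.0]
    V  -> 'bites' [1.0]
""")

# ViterbiParser returns the most probable parse of the token sequence.
parser = nltk.ViterbiParser(toy_grammar)
for tree in parser.parse(['man', 'bites', 'dog']):
    print(tree)   # prints (S (NP man) (VP (V bites) (NP dog))) with its probability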

Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties as with manually designed grammars for programming languages. As mentioned earlier some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is not context-free, some kind of context-free approximation to the grammar is used to perform a first pass. Algorithms which use context-free grammars often rely on some variant of the CYK algorithm, usually with some heuristic to prune away unlikely analyses to save time. (See chart parsing.) However some systems trade speed for accuracy using, e.g., linear-time versions of the shift-reduce algorithm. A somewhat recent development has been parse reranking in which the parser proposes some large number of analyses, and a more complex system selects the best option. In natural language understanding applications, semantic parsers convert the text into a representation of its meaning.
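
As a concrete illustration of the CYK idea mentioned above, the following is a minimal recognizer for a toy grammar in Chomsky normal form; the grammar and sentence are hypothetical, and a practical parser would additionally store back-pointers in the chart in order to recover parse trees rather than just accept or reject the input.

# CYK recognition for a toy grammar in Chomsky normal form (CNF).
from itertools import product

# CNF rules are either A -> B C (binary) or A -> 'terminal' (lexical).
binary_rules = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
lexical_rules = {"man": {"NP"}, "dog": {"NP"}, "bites": {"V"}}

def cyk_recognize(words, start="S"):
    n = len(words)
    # table[i][j] holds the nonterminals that derive words[i..j] inclusive.
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, word in enumerate(words):
        table[i][i] = set(lexical_rules.get(word, set()))
    for span in range(2, n + 1):                  # length of the substring
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                 # split point
                for b, c in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary_rules.get((b, c), set())
    return start in table[0][n - 1]

print(cyk_recognize(["man", "bites", "dog"]))     # True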

Psycholinguistics

In psycholinguistics, parsing involves not just the assignment of words to categories (formation of ontological insights), but the evaluation of the meaning of a sentence according to the rules of syntax drawn by inferences made from each word in the sentence (known as connotation). This normally occurs as words are being heard or read.

Neurolinguistics generally understands parsing to be a function of working memory, meaning that parsing is used to keep several parts of one sentence at play in the mind at one time, all readily accessible to be analyzed as needed. Because human working memory has limitations, so does the function of sentence parsing. This is evidenced by several different types of syntactically complex sentences that pose potential problems for the mental parsing of sentences.

The first, and perhaps most well-known, type of sentence that challenges parsing ability is the garden-path sentence. These sentences are designed so that the most common interpretation of the sentence appears grammatically faulty, but upon further inspection, these sentences are grammatically sound. Garden-path sentences are difficult to parse because they contain a phrase or a word with more than one meaning, often one whose most typical meaning belongs to a different part of speech. For example, in the sentence "The horse raced past the barn fell", "raced" is initially interpreted as a past-tense verb, but in this sentence it functions as part of an adjective phrase. Since parsing is used to identify parts of speech, these sentences challenge the parsing ability of the reader.

Another type of sentence that is difficult to parse is an attachment ambiguity, which includes a phrase that could potentially modify different parts of a sentence and therefore presents a challenge in identifying syntactic relationships (e.g., "The boy saw the lady with the telescope", in which the ambiguous phrase "with the telescope" could modify "the boy saw" or "the lady").

A third type of sentence that challenges parsing ability is center embedding, in which phrases are placed in the center of other similarly formed phrases (e.g., "The rat the cat the man hit chased ran into the trap"). Sentences with two, or in the most extreme cases three, center embeddings are challenging for mental parsing, again because of ambiguity of syntactic relationships.

Within neurolinguistics there are multiple theories that aim to describe how parsing takes place in the brain. One such model is a more traditional generative model of sentence processing, which theorizes that within the brain there is a distinct module designed for sentence parsing, which is preceded by access to lexical recognition and retrieval, and then followed by syntactic processing that considers a single syntactic result of the parsing, only returning to revise that syntactic interpretation if a potential problem is detected. The opposing, more contemporary model theorizes that within the mind, the processing of a sentence is not modular, nor does it happen in strict sequence. Rather, it posits that several different syntactic possibilities can be considered at the same time, because lexical access, syntactic processing, and determination of meaning occur in parallel in the brain. In this way these processes are integrated.

Although there is still much to learn about the neurology of parsing, studies have shown evidence that several areas of the brain might play a role in parsing. These include the left anterior temporal pole, the left inferior frontal gyrus, the left superior temporal gyrus, the left superior frontal gyrus, the right posterior cingulate cortex, and the left angular gyrus. Although it has not been absolutely proven, it has been suggested that these different structures might favor either phrase-structure parsing or dependency-structure parsing, meaning different types of parsing could be processed in different ways which have yet to be understood. 

Discourse analysis

Discourse analysis examines ways to analyze language use and semiotic events. Persuasive language may be called rhetoric.

Computer languages

Parser

A parser is a software component that takes input data (frequently text) and builds a data structure – often some kind of parse tree, abstract syntax tree or other hierarchical structure, giving a structural representation of the input while checking for correct syntax. The parsing may be preceded or followed by other steps, or these may be combined into a single step. The parser is often preceded by a separate lexical analyser, which creates tokens from the sequence of input characters; alternatively, these can be combined in scannerless parsing. Parsers may be programmed by hand or may be automatically or semi-automatically generated by a parser generator. Parsing is complementary to templating, which produces formatted output. These may be applied to different domains, but often appear together, such as the scanf/printf pair, or the input (front end parsing) and output (back end code generation) stages of a compiler.

The input to a parser is often text in some computer language, but may also be text in a natural language or less structured textual data, in which case generally only certain parts of the text are extracted, rather than a parse tree being constructed. Parsers range from very simple functions such as scanf to complex programs such as the frontend of a C++ compiler or the HTML parser of a web browser. An important class of simple parsing is done using regular expressions, in which a group of regular expressions defines a regular language and a regular expression engine automatically generates a parser for that language, allowing pattern matching and extraction of text. In other contexts regular expressions are instead used prior to parsing, as the lexing step whose output is then used by the parser.
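
The following is a minimal sketch of this regular-expression style of extraction in Python; the log-line format and the field names are invented for the example.

# Extract named fields from a semi-structured line of text with a regular expression.
import re

log_line = "2023-12-24 14:05:01 ERROR disk /dev/sda1 is 97% full"

pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) (?P<message>.*)"
)

match = pattern.match(log_line)
if match:
    print(match.group("level"))    # ERROR
    print(match.group("message"))  # disk /dev/sda1 is 97% full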

The use of parsers varies by input. In the case of data languages, a parser is often found as the file reading facility of a program, such as reading in HTML or XML text; these examples are markup languages. In the case of programming languages, a parser is a component of a compiler or interpreter, which parses the source code of a computer programming language to create some form of internal representation; the parser is a key step in the compiler frontend. Programming languages tend to be specified in terms of a deterministic context-free grammar because fast and efficient parsers can be written for them. For compilers, the parsing itself can be done in one pass or multiple passes – see one-pass compiler and multi-pass compiler.

The implied disadvantages of a one-pass compiler can largely be overcome by adding fix-ups, where provision is made for code relocation during the forward pass, and the fix-ups are applied backwards when the current program segment has been recognized as having been completed. An example where such a fix-up mechanism would be useful would be a forward GOTO statement, where the target of the GOTO is unknown until the program segment is completed. In this case, the application of the fix-up would be delayed until the target of the GOTO was recognized. Conversely, a backward GOTO does not require a fix-up, as the location will already be known.
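
A minimal sketch of the fix-up idea, using a hypothetical toy instruction list rather than any real compiler's code generator: forward jumps are emitted with a placeholder target, the unresolved locations are recorded, and the placeholders are patched as soon as the target label is defined.

# Backpatching forward jumps in a single forward pass over a toy program.
code = []       # emitted instructions, e.g. ("JUMP", target_index)
fixups = {}     # label -> indices of JUMP instructions awaiting that label
labels = {}     # label -> resolved instruction index

def emit_jump(label):
    if label in labels:                          # backward GOTO: target known
        code.append(("JUMP", labels[label]))
    else:                                        # forward GOTO: placeholder
        fixups.setdefault(label, []).append(len(code))
        code.append(("JUMP", None))

def define_label(label):
    labels[label] = len(code)
    for index in fixups.pop(label, []):          # apply the pending fix-ups
        code[index] = ("JUMP", labels[label])

emit_jump("end")                  # forward reference, patched later
code.append(("PRINT", "skipped"))
define_label("end")
code.append(("HALT",))
print(code)    # [('JUMP', 2), ('PRINT', 'skipped'), ('HALT',)]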

Context-free grammars are limited in the extent to which they can express all of the requirements of a language. Informally, the reason is that the memory of such a language is limited. The grammar cannot remember the presence of a construct over an arbitrarily long input; this is necessary for a language in which, for example, a name must be declared before it may be referenced. More powerful grammars that can express this constraint, however, cannot be parsed efficiently. Thus, it is a common strategy to create a relaxed parser for a context-free grammar which accepts a superset of the desired language constructs (that is, it accepts some invalid constructs); later, the unwanted constructs can be filtered out at the semantic analysis (contextual analysis) step.

For example, in Python the following is syntactically valid code:

x = 1;
print(x);

The following code, however, is syntactically valid in terms of the context-free grammar, yielding a syntax tree with the same structure as the previous, but violates the semantic rule requiring variables to be initialized before use:

x = 1
print(y)

Overview of process

Flow of data in a typical parser

The following example demonstrates the common case of parsing a computer language with two levels of grammar: lexical and syntactic.

The first stage is the token generation, or lexical analysis, by which the input character stream is split into meaningful symbols defined by a grammar of regular expressions. For example, a calculator program would look at an input such as "12 * (3 + 4)^2" and split it into the tokens 12, *, (, 3, +, 4, ), ^, 2, each of which is a meaningful symbol in the context of an arithmetic expression. The lexer would contain rules to tell it that the characters *, +, ^, ( and ) mark the start of a new token, so meaningless tokens like "12*" or "(3" will not be generated.
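
A minimal sketch of such a lexer in Python follows; the token names and the regular-expression grammar are choices made for this example rather than anything prescribed by the text above.

# Lexical analysis: split the character stream into NUMBER and OP tokens.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("OP",     r"[+\-*/^()]"),
    ("SKIP",   r"\s+"),          # whitespace is meaningless here and is dropped
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def tokenize(text):
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("12 * (3 + 4)^2")))
# [('NUMBER', '12'), ('OP', '*'), ('OP', '('), ('NUMBER', '3'), ('OP', '+'),
#  ('NUMBER', '4'), ('OP', ')'), ('OP', '^'), ('NUMBER', '2')]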

The next stage is parsing or syntactic analysis, which is checking that the tokens form an allowable expression. This is usually done with reference to a context-free grammar which recursively defines components that can make up an expression and the order in which they must appear. However, not all rules defining programming languages can be expressed by context-free grammars alone, for example type validity and proper declaration of identifiers. These rules can be formally expressed with attribute grammars.

The final phase is semantic parsing or analysis, which is working out the implications of the expression just validated and taking the appropriate action. In the case of a calculator or interpreter, the action is to evaluate the expression or program; a compiler, on the other hand, would generate some kind of code. Attribute grammars can also be used to define these actions.
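
As one possible illustration of the syntactic and semantic stages together, the following hand-written recursive-descent parser evaluates the calculator expression as it recognizes it. The expr/term/factor layering is an assumption chosen here to encode the usual operator precedence, and the token list matches the output of the lexer sketch above.

# Recursive-descent parsing and evaluation of an arithmetic token stream.
def evaluate(token_list):
    tokens = list(token_list) + [("END", "")]
    pos = 0

    def peek():
        return tokens[pos][1]

    def eat():
        nonlocal pos
        pos += 1
        return tokens[pos - 1][1]

    def factor():                      # factor := NUMBER | '(' expr ')', with optional '^'
        if peek() == "(":
            eat()
            value = expr()
            eat()                      # consume the closing ')'
        else:
            value = int(eat())
        if peek() == "^":              # right-associative exponentiation
            eat()
            value = value ** factor()
        return value

    def term():                        # term := factor (('*' | '/') factor)*
        value = factor()
        while peek() in ("*", "/"):
            value = value * factor() if eat() == "*" else value / factor()
        return value

    def expr():                        # expr := term (('+' | '-') term)*
        value = term()
        while peek() in ("+", "-"):
            value = value + term() if eat() == "+" else value - term()
        return value

    return expr()

tokens = [("NUMBER", "12"), ("OP", "*"), ("OP", "("), ("NUMBER", "3"),
          ("OP", "+"), ("NUMBER", "4"), ("OP", ")"), ("OP", "^"), ("NUMBER", "2")]
print(evaluate(tokens))                # 588, i.e. 12 * (3 + 4)^2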

Types of parsers

The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. This can be done in essentially two ways:

Top-down parsing
Top-down parsing can be viewed as an attempt to find left-most derivations of an input-stream by searching for parse trees using a top-down expansion of the given formal grammar rules. Tokens are consumed from left to right. Inclusive choice is used to accommodate ambiguity by expanding all alternative right-hand-sides of grammar rules. This is known as the primordial soup approach. Very similar to sentence diagramming, primordial soup breaks down the constituencies of sentences.
Bottom-up parsing
A parser can start with the input and attempt to rewrite it to the start symbol. Intuitively, the parser attempts to locate the most basic elements, then the elements containing these, and so on. LR parsers are examples of bottom-up parsers. Another term for this type of parsing is shift-reduce parsing.

LL parsers and recursive-descent parsers are examples of top-down parsers that cannot accommodate left-recursive production rules. Although it had long been believed that simple implementations of top-down parsing cannot accommodate direct and indirect left recursion and may require exponential time and space complexity while parsing ambiguous context-free grammars, more sophisticated algorithms for top-down parsing have been created by Frost, Hafiz, and Callaghan which accommodate ambiguity and left recursion in polynomial time and which generate polynomial-size representations of the potentially exponential number of parse trees. Their algorithm is able to produce both left-most and right-most derivations of an input with regard to a given context-free grammar.

An important distinction with regard to parsers is whether a parser generates a leftmost derivation or a rightmost derivation (see context-free grammar). LL parsers will generate a leftmost derivation and LR parsers will generate a rightmost derivation (although usually in reverse).

Some graphical parsing algorithms have been designed for visual programming languages. Parsers for visual languages are sometimes based on graph grammars.

Adaptive parsing algorithms have been used to construct "self-extending" natural language user interfaces.

Implementation

The simplest parser APIs read the entire input file, perform some intermediate computation, and then write the entire output file (as in in-memory multi-pass compilers).

Those simple parsers will not work when there is not enough memory to store the entire input file or the entire output file. They also will not work for never-ending streams of data from the real world.

Some alternative API approaches for parsing such data:

  • push parsers that call the registered handlers (callbacks) as soon as the parser detects relevant tokens in the input stream, such as Expat (see the sketch after this list)
  • pull parsers
  • incremental parsers (such as incremental chart parsers) that do not need to completely re-parse the entire file as its text is edited by a user
  • active versus passive parsers
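
A minimal sketch of the push-parser style using Expat, which ships in the Python standard library as xml.parsers.expat: the application registers handlers, and the parser calls them back as soon as it recognizes the corresponding pieces of the input. The XML snippet is invented for the example.

# Push parsing: Expat invokes the registered callbacks while scanning the input.
import xml.parsers.expat

def start_element(name, attrs):
    print("start:", name, attrs)

def char_data(data):
    if data.strip():
        print("text:", data.strip())

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element
parser.CharacterDataHandler = char_data

parser.Parse("<note lang='en'><body>Hello</body></note>", True)
# start: note {'lang': 'en'}
# start: body {}
# text: Hello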

Parser development software

Some of the well known parser development tools include the following:

Lookahead

C program that cannot be parsed with fewer than two tokens of lookahead. Top: C grammar excerpt. Bottom: a parser has digested the tokens "int v;main(){" and is about to choose a rule to derive Stmt. Looking only at the first lookahead token "v", it cannot decide which of the two alternatives for Stmt to choose; the latter requires peeking at the second token.

Lookahead establishes the maximum number of incoming tokens that a parser can use to decide which rule to apply. Lookahead is especially relevant to LL, LR, and LALR parsers, where it is often explicitly indicated by affixing the lookahead to the algorithm name in parentheses, such as LALR(1).

Most programming languages, the primary target of parsers, are carefully defined in such a way that a parser with limited lookahead, typically one, can parse them, because parsers with limited lookahead are often more efficient. One important change to this trend came in 1990 when Terence Parr created ANTLR for his Ph.D. thesis, a parser generator for efficient LL(k) parsers, where k is any fixed value.

LR parsers typically have only a few actions after seeing each token. They are shift (add this token to the stack for later reduction), reduce (pop tokens from the stack and form a syntactic construct), end, error (no known rule applies) or conflict (does not know whether to shift or reduce).

Lookahead has two advantages.

  • It helps the parser take the correct action in case of conflicts, for example deciding how to parse an if statement when an else clause may or may not follow.
  • It eliminates many duplicate states and eases the burden of an extra stack. A C language non-lookahead parser will have around 10,000 states. A lookahead parser will have around 300 states.

Example: Parsing the Expression 1 + 2 * 3

The set of expression parsing rules (called a grammar) is as follows:
Rule1: E → E + E    (an expression is the sum of two expressions)
Rule2: E → E * E    (an expression is the product of two expressions)
Rule3: E → number   (an expression is a simple number)
Rule4: + has lower precedence than *

Most programming languages (except for a few such as APL and Smalltalk) and algebraic formulas give higher precedence to multiplication than addition, in which case the correct interpretation of the example above is 1 + (2 * 3). Note that Rule4 above is a semantic rule. It is possible to rewrite the grammar to incorporate this into the syntax. However, not all such rules can be translated into syntax.
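
For example (a standard refactoring, not part of the rule set given above), the precedence of Rule4 can be folded into the syntax by splitting the expression rules into levels:

Rule1': E → E + T | T    (an expression is a sum of terms)
Rule2': T → T * F | F    (a term is a product of factors)
Rule3': F → number       (a factor is a simple number)

With such a grammar, 1 + 2 * 3 can only be derived as 1 + (2 * 3), because a "+" can never appear inside a T, so no separate precedence rule is needed.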

Simple non-lookahead parser actions

Initially Input = [1, +, 2, *, 3]

  1. Shift "1" onto stack from input (in anticipation of rule3). Input = [+, 2, *, 3] Stack = [1]
  2. Reduces "1" to expression "E" based on rule3. Stack = [E]
  3. Shift "+" onto stack from input (in anticipation of rule1). Input = [2, *, 3] Stack = [E, +]
  4. Shift "2" onto stack from input (in anticipation of rule3). Input = [*, 3] Stack = [E, +, 2]
  5. Reduce stack element "2" to Expression "E" based on rule3. Stack = [E, +, E]
  6. Reduce stack items [E, +, E] to "E" based on rule1. Stack = [E]
  7. Shift "*" onto stack from input (in anticipation of rule2). Input = [3] Stack = [E,*]
  8. Shift "3" onto stack from input (in anticipation of rule3). Input = [] (empty) Stack = [E, *, 3]
  9. Reduce stack element "3" to expression "E" based on rule3. Stack = [E, *, E]
  10. Reduce stack items [E, *, E] to "E" based on rule2. Stack = [E]

The parse tree and the code resulting from it are not correct according to the language semantics.

To correctly parse without lookahead, there are three solutions:

  • The user has to enclose expressions within parentheses. This often is not a viable solution.
  • The parser needs to have more logic to backtrack and retry whenever a rule is violated or not complete. A similar method is followed in LL parsers.
  • Alternatively, the parser or grammar needs to have extra logic to delay reduction and reduce only when it is absolutely sure which rule to reduce first. This method is used in LR parsers. This correctly parses the expression but with many more states and increased stack depth.

Lookahead parser actions

  1. Shift 1 onto stack on input 1 in anticipation of rule3. It does not reduce immediately.
  2. Reduce stack item 1 to simple Expression on input + based on rule3. The lookahead is +, so we are on the path to E +, so we can reduce the stack to E.
  3. Shift + onto stack on input + in anticipation of rule1.
  4. Shift 2 onto stack on input 2 in anticipation of rule3.
  5. Reduce stack item 2 to Expression on input * based on rule3. The lookahead * expects only E before it.
  6. Now stack has E + E and still the input is *. It has two choices now, either to shift based on rule2 or reduction based on rule1. Since * has higher precedence than + based on rule4, we shift * onto stack in anticipation of rule2.
  7. Shift 3 onto stack on input 3 in anticipation of rule3.
  8. Reduce stack item 3 to Expression after seeing end of input based on rule3.
  9. Reduce stack items E * E to E based on rule2.
  10. Reduce stack items E + E to E based on rule1.

The parse tree generated is correct, and the parser is simply more efficient than a non-lookahead parser. This is the strategy followed in LALR parsers.
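
The strategy traced above can be sketched in a few lines of Python; the names here (PRECEDENCE, parse) are invented for the illustration, and the sketch handles only the toy grammar of this example. The key point corresponds to step 6: an E + E on the stack is reduced only when the lookahead operator does not have higher precedence.

# Shift-reduce parsing of 1 + 2 * 3 with one token of lookahead.
PRECEDENCE = {"+": 1, "*": 2, "$": 0}       # "$" marks the end of the input

def parse(tokens):
    tokens = tokens + ["$"]
    stack, pos = [], 0
    while True:
        lookahead = tokens[pos]
        if stack and isinstance(stack[-1], int):
            stack[-1] = ("E", stack[-1])    # rule3: number -> E
        elif (len(stack) >= 3 and stack[-1][0] == "E" and stack[-3][0] == "E"
              and PRECEDENCE[stack[-2]] >= PRECEDENCE[lookahead]):
            # rules 1 and 2, gated by rule4: reduce E op E only if the operator
            # on the stack has precedence >= that of the lookahead.
            right, op, left = stack.pop(), stack.pop(), stack.pop()
            stack.append(("E", (left, op, right)))
        elif lookahead == "$":
            return stack[0]                 # done: a single E remains
        else:                               # otherwise shift the lookahead token
            stack.append(lookahead if lookahead in PRECEDENCE else int(lookahead))
            pos += 1

print(parse(["1", "+", "2", "*", "3"]))
# ('E', (('E', 1), '+', ('E', (('E', 2), '*', ('E', 3)))))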

Politics of Europe

From Wikipedia, the free encyclopedia ...