A written language is the representation of a language by means of writing. This involves the use of visual symbols, known as graphemes, to represent linguistic units such as phonemes, syllables, morphemes, or words. However, it is important to note that written language is not merely spoken or signed language
written down, though it can approximate that. Instead, it is a separate
system with its own norms, structures, and stylistic conventions, and
it often evolves differently than its corresponding spoken or signed
language.
Written languages serve as crucial tools for communication,
enabling the recording, preservation, and transmission of information,
ideas, and culture across time and space. The specific form a written
language takes – its alphabet or script, its spelling conventions, and
its punctuation system, among other features – is determined by its orthography.
The development and use of written language have had profound
impacts on human societies throughout history, influencing social
organization, cultural identity, technology, and the dissemination of
knowledge. In contemporary times, the advent of digital technology has
led to significant changes in the ways we use written language, from the
creation of new written genres and conventions to the evolution of
writing systems themselves.
Comparison with spoken and signed language
Written language, spoken language, and signed language are three distinct modalities of communication, each with its own unique characteristics and conventions.
Spoken and signed languages are often more dynamic and flexible,
reflecting the immediate context of the conversation, the speaker's
emotions, and other non-verbal cues. They tend to use more informal
language, contractions, and colloquialisms, and they are typically
structured in shorter sentences. Spoken and signed language often include false starts and hesitations. Because spoken and signed languages tend to be interactive, they include elements that facilitate turn taking, including prosodic features, such as trailing off, and fillers that indicate the speaker or signer has not yet finished their turn.
In contrast, written language is typically more structured and
formal. It allows for planning, revision, and editing, which can lead to
more complex sentences and a more extensive vocabulary. Written
language also has to convey meaning without the aid of tone of voice,
facial expressions, or body language, which often results in more
explicit and detailed descriptions. It may include typographic elements like typeface choices, font sizes, and boldface. The types of errors found in the modalities also differ.
The author of a written text is often difficult to discern simply
by reading the printed text, even if the author is known to the reader,
though stylistic
elements may help to identify them. In contrast, a speaker is typically
more identifiable from their voice. In written language, handwriting is a similar identifier.
Moreover, written languages generally change more slowly than
their spoken or signed counterparts. This can lead to situations where
the written form of a language maintains archaic features or spellings
that no longer reflect current pronunciation. Over time, such divergence may lead to a situation known as diglossia.
Despite their differences, spoken, signed, and written language
forms do influence each other, and the boundaries between them can be
fluid, particularly in informal written contexts such as text messaging
or social media posts.
Grammar
The grammatical differences between the modalities are too numerous to catalogue, but here is a sample. In terms of clause types, written language is predominantly declarative (e.g., It's red.) and typically contains fewer imperatives (e.g., Make it red.), interrogatives (e.g., Is it red?), and exclamatives (e.g., How red it is!) than spoken or signed language. Noun phrases are generally predominantly third person, but they are even more so in written language. Verb phrases in spoken English are more likely to be in simple aspect than in perfect or progressive aspect, and almost all past perfect verbs appear in written fiction.
Information packaging
Information packaging is the way information is arranged within a sentence, that is, the linear order in which it is presented. For example, On the hill, there was a tree has a different informational structure than There was a tree on the hill.
While the second structure is more common in English, the first is relatively much more common in written language than in spoken language. Another example is that a construction like it was difficult to follow him is relatively more common in written language than in spoken language, compared to the alternative packaging to follow him was difficult. A final example, again from English, is that the passive voice is relatively more common in writing than in speaking.
Vocabulary
Written language typically has higher lexical density
than spoken or signed language, meaning there is a wider range of
vocabulary used and individual words are less likely to be repeated. It
also includes fewer first- and second-person pronouns and fewer
interjections. Written English has fewer verbs and more nouns than
spoken English, but even accounting for that, verbs like think, say, know, and guess appear relatively less commonly with a content clause complement (e.g., I think that it's OK.) in written English than in spoken English.
Diglossia
Diglossia is a sociolinguistic phenomenon where two distinct
varieties of a language – often one spoken and one written – are used by
a single language community in different social contexts.
The so-called "high variety", often the written language, is used
in formal contexts, such as literature, formal education, or official
communications. This variety tends to be more standardized and
conservative, and may incorporate older or more formal vocabulary and
grammar.
The "low variety", often the spoken language, is used in everyday
conversation and informal contexts. It is typically more dynamic and
innovative, and may incorporate regional dialects, slang, and other
informal language features.
The existence of diglossia can have significant implications for
language education, literacy, and sociolinguistic dynamics within a
language community.
History
The first writing can be dated back to the Neolithic
era, with clay tablets being used to keep track of livestock and
commodities. However, the first example of written language can be dated
to Uruk, at the end of the 4th millennium BCE. An ancient Mesopotamian poem tells a tale about the invention of writing.
"Because
the messenger's mouth was heavy and he couldn't repeat, the Lord of
Kulaba patted some clay and put words on it, like a tablet. Until then,
there had been no putting words on clay."
—Enmerkar and the Lord of Aratta, c. 1800 BCE.
Scholars mark the difference between prehistory and history with the invention of the first written language.
However, this raises the question of what is and is not a written
language: whether a given artifact records proto-writing or genuine
writing is often debatable, making the matter largely subjective and leaving the line in a gray area. A general consensus is that writing is a method of recording information, composed of graphemes (which may also be glyphs), and that it should both represent some form of spoken language and convey information; by this definition, numerical records count as writing as well.
Origins of written language
The origins of written language are tied to the development of human civilization. The earliest forms of writing were born out of the necessity to record trade transactions, historical events, and cultural traditions.
The first known true writing systems were developed during the early Bronze Age (late 4th millennium BC) in ancient Sumer, present-day southern Iraq. This system, known as cuneiform, was pictographic at first, but later evolved into a logosyllabic system of wedge-shaped signs used to represent the sounds and words of spoken language.
Simultaneously, in ancient Egypt, hieroglyphic writing was developed, which also began as pictographic and later included phonemic elements. In the Indus Valley, an ancient civilization developed a form of writing known as the Indus script around 2600 BC, although its precise nature remains undeciphered. The Chinese script, one of the oldest continuously used writing systems in the world, originated around the late 2nd millennium BC, evolving from oracle bone script used for divination purposes.
Writing systems evolved independently in different parts of the world, including Mesoamerica, where the Olmec and Maya civilizations developed scripts in the 1st millennium BC.
Writing systems
Writing systems around the world can be broadly classified into several types: logographic, syllabic, alphabetic, and featural. There are also phonetic systems,
which are used only in technical applications. Also, writing systems
for signed languages have been developed, but, apart from SignWriting, none is in general use.
The distinctions are based on the predominant type of grapheme used. In linguistics and orthography, a grapheme is the smallest unit of a writing system of any given language. It is an abstract concept, similar to a character in computing or a glyph in typography. It differs, though, in that a grapheme may be composed of multiple characters. For example, in English, th is a grapheme composed of the characters t and h. When they occur together, they are typically read /θ/ (as in bath) or /ð/ (as in them). Different writing systems may combine elements of these types. For example, Japanese uses a combination of logographic kanji, syllabic kana, and Arabic numerals.
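To make the grapheme/character distinction concrete, here is a minimal sketch in Python. It is illustrative only: the multi-character grapheme inventory is a simplified assumption for a handful of English digraphs, not a complete description of English orthography.

    # Illustrative sketch: segmenting a word into graphemes.
    # The multi-character inventory below is a toy assumption,
    # not a full description of English orthography.
    MULTI_CHAR_GRAPHEMES = ["th", "sh", "ch", "ph", "gh"]

    def segment_graphemes(word: str) -> list:
        # Greedily match multi-character graphemes; fall back to single characters.
        graphemes = []
        i = 0
        while i < len(word):
            for g in MULTI_CHAR_GRAPHEMES:
                if word.startswith(g, i):
                    graphemes.append(g)
                    i += len(g)
                    break
            else:
                graphemes.append(word[i])
                i += 1
        return graphemes

    print(segment_graphemes("bath"))  # ['b', 'a', 'th'] -- 4 characters, 3 graphemes
    print(segment_graphemes("them"))  # ['th', 'e', 'm'] -- 4 characters, 3 graphemes

On this toy analysis, bath contains four characters but only three graphemes, since th functions as a single unit.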
In logographic systems, each grapheme more or less represents a word or a morpheme (a meaningful unit of language). Arabic numerals are examples of logographs. Upon seeing the numeral 3, for instance, the reader understands both the intended number and its pronunciation in the appropriate language. Chinese characters, Japanese kanji, and ancient Egyptian hieroglyphs are more general examples of logographic writing systems. The Japanese word kanji, for instance, may be written 漢字. The first character is read kan and means roughly "Chinese", while the second is read ji and means roughly "character".
A syllabary is a set of graphemes that represent syllables or sometimes mora and can be combined to write words. Examples include the Japanese kana scripts and the Cherokee syllabary. For example, in Japanese, kana can be written かんじ, with か being the syllable ka, ん being a syllabic n, and じ being ji. Unlike the kanji, the individual kana denote only sounds, and are not associated with any particular words or meanings.
In featural writing systems, the shapes of the characters are not
arbitrary but encode features of the modality they represent. The Korean
Hangul script is a prime example of a featural system.
For example, in Hangul, the phoneme /k/, which is represented by the
character 'ㄱ', is articulated at the back of the mouth. The shape of the
character 'ㄱ' mimics the shape of the tongue when pronouncing the sound
/k/. Similarly, the phoneme /n/, represented by the character 'ㄴ', is
articulated at the front of the mouth, and the shape of the character
'ㄴ' is reminiscent of the tongue's position when pronouncing /n/. SignWriting is another featural system, which represents the physical formation of signs.
Orthography
Orthography is the set of conventional elements of the writing system of a language.
It involves the use of graphemes and the standardized ways these
symbols are arranged to represent words, including spelling. In the kanji
examples above, it was noted that the word is typically written as 漢字,
though it may also be written as かんじ. This conventionalized fact is part
of Japanese orthography. Similarly, the fact that sorry is spelled as it is and not some other way (e.g., sawry) is an orthographic fact of English.
In some orthographies, there is a one-to-one correspondence between phonemes and graphemes, as in Serbian and Finnish.
These are known as shallow orthographies. In contrast, orthographies
like that of English and French are known as deep orthographies due to
the complex relationships between sounds and symbols. For instance, in
English, the phoneme /f/ can be represented by the graphemes f (as in fish), ph (as in phone), or gh (as in enough).
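As a rough illustration of this difference in "depth", the sketch below encodes the phoneme-to-grapheme mapping for /f/ as data. Both mappings are simplified assumptions for illustration, not exhaustive descriptions of Finnish or English spelling.

    # Illustrative sketch: phoneme-to-grapheme fan-out in shallow vs. deep
    # orthographies. Both mappings are simplified assumptions, not
    # exhaustive descriptions of Finnish or English spelling.
    shallow_orthography = {"/f/": ["f"]}           # roughly one grapheme per phoneme
    deep_orthography = {"/f/": ["f", "ph", "gh"]}  # several graphemes per phoneme

    for phoneme, graphemes in deep_orthography.items():
        print(phoneme, "may be spelled:", ", ".join(graphemes))
    # /f/ may be spelled: f, ph, gh

In a shallow orthography the mapping is close to one-to-one in both directions; in a deep orthography, a single phoneme fans out to several graphemes, and a single grapheme may stand for several phonemes.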
Orthographic systems can also include rules about punctuation,
capitalization, word breaks, and emphasis. They may also include
specific conventions for representing foreign words and names, and for
handling spelling changes to reflect changes in pronunciation or meaning
over time.
Relationship between spoken, signed, and written languages
Spoken,
signed, and written languages are integral facets of human
communication, each with its unique characteristics and functions. They
often influence each other, and the boundaries between them can be
fluid. For example, in spoken and written language interaction, speech-to-text technologies convert spoken language into written text, and text-to-speech technologies do the reverse.
Understanding the relationship between these language forms is
essential to the study of linguistics, communication, and social
interaction. It also has practical implications for education,
technology development, and the promotion of linguistic diversity and
inclusivity.
In spoken language
Spoken
language is the most prevalent and fundamental form of human
communication. It is typically characterized by a high degree of
spontaneity and is often shaped by various sociocultural factors.
Spoken language forms the basis of written language, which allows for
communication across time and space. Written language often reflects the
phonetic and structural characteristics of the spoken language from
which it evolves. However, over time, written language can also develop
its own unique structures and conventions, which can in turn influence
spoken language.
An example of written language influencing spoken language can be seen in the general deprecation of fillers and the advice to speakers to avoid them. For example, in the article "We, um, have, like, a problem: excessive use of fillers in scientific speech", the authors mock their use:
Based
on this large sample size of observations, we believe that when it
comes to scientific speaking, we, um… have, er… a problem. Like, a big
problem, you know? If you are unaware of this problem, then speaking for
those of us who are all too conscious of the issue, we are envious.
It is also the case that formal written registers
are often perceived as prestige varieties, and that speakers are
encouraged to mimic them. For example, the common English dialect in Singapore is often derided as "Singlish",
and Singaporean children are typically taught Standard English in
schools and are often corrected when they use features of Singlish. The
government has also run campaigns to promote the use of Standard English
over Singlish, reflecting a societal preference for the formal register
of English that is more closely aligned with written language.
Spoken languages also continue to influence written
languages throughout their existence. For example, written Chinese is
standardized on the basis of Mandarin, specifically the Beijing dialect, which is the official spoken language in China. But spoken Cantonese has had an increasing influence on the written language of Cantonese speakers. One example is 咗 (jo2), which is a verb particle indicating completed action. While this word does not exist in Standard Chinese, it is commonly used in written Cantonese.
In signed language
Signed languages, used predominantly by the Deaf
community, are visual-gestural languages that have developed
independently of spoken languages and have their own grammatical and
syntactical structures. Yet, they also interact with spoken and written languages, especially through the process of code-switching, where elements of a spoken or written language are incorporated into signed language.
A notable example of this can be seen in American Sign Language (ASL) and English bilingual communities. These communities often include deaf individuals who use ASL as their primary language but also use English for reading, writing, and sometimes speaking, as well as hearing
individuals who use both ASL and English. In these communities, it is
common to see code-switching between ASL and English. For instance, a
person might adopt English word order for some particular purpose or expression, or use fingerspelling
(spelling out English words using ASL handshapes) for an English word
that does not have a commonly used sign in ASL. This is especially
common in educational settings, where the language of instruction is
often English, and in written communication, where English is typically
used.
Written language and society
The
development and use of written language has had profound impacts on
human societies, influencing everything from social organization and
cultural identity to technology and the dissemination of knowledge.
Though these are generally thought to be positive, in his dialogue "Phaedrus," Plato, through the voice of Socrates,
expressed concern that reliance on writing would weaken the ability to
memorize and understand, as written words would "create forgetfulness in
the learners' souls, because they will not use their memories." He
further argued that written words, being unable to answer questions or
clarify themselves, are inferior to the living, interactive discourse of
oral communication.
Written language facilitates the preservation and transmission of
culture, history, and knowledge across time and space, allowing
societies to develop complex systems of law, administration, and
education. For example, the invention of writing in ancient Mesopotamia enabled the creation of detailed legal codes, like the Code of Hammurabi.
The advent of digital technology has revolutionized written
communication, leading to the emergence of new written genres and
conventions, such as texting and social media interactions. This has implications for social relationships, education, and professional communication.
Literacy and social mobility
Literacy
can be understood in various dimensions. On one hand, it can be viewed
as the ability to recognize and correctly process graphemes, the
smallest units of written language. On the other hand, literacy can be
defined more broadly as proficiency with written language, which
involves understanding the conventions, grammar, and context in which
written language is used. Of course, this second conception presupposes
the first.
This proficiency with written language is a key driver of social mobility.
Firstly, it underpins success in formal education, where the ability to
comprehend textbooks, write essays, and interact with written
instructional materials is fundamental. High literacy skills can lead to
better academic performance, opening doors to higher education and
specialized training opportunities.
In the job market, proficiency in written language is often a
determinant of employment opportunities. Many professions require a high
level of literacy, from drafting reports and proposals to interpreting
technical manuals. The ability to effectively use written language can
lead to higher paying jobs and upward career progression.
At the societal level, literacy in written language enables
individuals to participate fully in civic life. It empowers individuals
to make informed decisions, from understanding news articles and
political debates to navigating legal documents. This can lead to more
active citizenship and democratic participation.
However, disparities in literacy rates and proficiency with written language can contribute to social inequalities.
Socio-economic status, race, gender, and geographic location can all
influence an individual's access to quality literacy instruction.
Addressing these disparities through inclusive and equitable education
policies is crucial for promoting social mobility and reducing
inequality.
Marshall McLuhan's perspective
Marshall McLuhan's ideas about written language are primarily found in "The Gutenberg Galaxy: The Making of Typographic Man". In this work, McLuhan argued that the invention and spread of the printing press, and the shift from oral
to written culture that it spurred, fundamentally changed the nature of
human society. This change, he suggested, led to the rise of individualism, nationalism, and other aspects of modernity.
McLuhan proposed that written language, especially as reproduced
in large quantities by the printing press, contributed to a linear and
sequential mode of thinking, as opposed to the more holistic and
contextual thinking fostered by oral cultures. He associated this linear
mode of thought with a shift towards more detached and objective forms
of reasoning, which he saw as characteristic of the modern age.
Furthermore, McLuhan theorized about the effects of different
media on human consciousness and society. He famously asserted that "the medium is the message,"
meaning that the form of a medium embeds itself in any message it would
transmit or convey, creating a symbiotic relationship by which the
medium influences how the message is perceived.
While McLuhan's ideas are influential, they have also been
critiqued and debated. Some scholars argue that he overemphasized the
role of the medium (in this case, written language) at the expense of
the content of communication.
It has also been suggested that his theories are overly deterministic,
not sufficiently accounting for the ways in which people can use and
interpret media in varied ways.
There is no generally accepted term for the hypothesized common ancestor of all the world's languages. Most treatments of the subject do not include a name for the language under consideration (e.g. Bengtson and Ruhlen). The terms proto-world and proto-human are in occasional use. Merritt Ruhlen used the term proto-sapiens.
History of the idea
The first serious scientific attempt to establish the reality of monogenesis was that of Alfredo Trombetti, in his book L'unità d'origine del linguaggio, published in 1905. Trombetti estimated that the common ancestor of existing languages had been spoken between 100,000 and 200,000 years ago.
Monogenesis was dismissed by many linguists in the late 19th and early 20th centuries when the doctrine of the polygenesis of the human races and their languages was popularised.
The best-known supporter of monogenesis in America in the mid-20th century was Morris Swadesh. He pioneered two important methods for investigating deep relationships between languages, lexicostatistics and glottochronology.
In the second half of the 20th century, Joseph Greenberg
produced a series of large-scale classifications of the world's
languages. These were and are controversial but widely discussed.
Although Greenberg did not produce an explicit argument for monogenesis,
all of his classification work was geared toward this end. As he
stated: "The ultimate goal is a comprehensive classification of what is very likely a single language family."
As noted above, the first concrete attempt to estimate the date of the hypothetical ancestor language was Trombetti's, which placed it between 100,000 and 200,000 years ago, close to the first emergence of Homo sapiens.
It is uncertain or disputed whether the earliest members of Homo sapiens had fully developed language. Some scholars link the emergence of language proper (out of a proto-linguistic stage that may have lasted considerably longer) to the development of behavioral modernity toward the end of the Middle Paleolithic or at the beginning of the Upper Paleolithic, roughly 50,000 years ago.
Thus, in the opinion of Richard Klein, the ability to produce complex speech only developed some 50,000 years ago (with the appearance of modern humans or Cro-Magnons).
Johanna Nichols (1998) argued that vocal languages must have begun diversifying in our species at least 100,000 years ago.
In 2011, an article in the journal Science proposed an African origin of modern human languages. It was suggested that human language predates the out-of-Africa migrations
of 50,000 to 70,000 years ago and that language might have been the
essential cultural and cognitive innovation that facilitated human
colonization of the globe.
In Perreault and Mathew (2012), an estimate of the time of the first emergence of human language was based on phonemic
diversity. This is based on the assumption that phonemic diversity
evolves much more slowly than grammar or vocabulary, slowly increasing
over time (but reduced among small founding populations).
The largest phoneme inventories are found among African languages,
while the smallest inventories are found in South America and Oceania,
some of the last regions of the globe to be colonized.
The authors used data from the colonization of Southeast Asia to
estimate the rate of increase in phonemic diversity.
Applying this rate to African languages, Perreault and Mathew (2012)
arrived at an estimated age of 150,000 to 350,000 years, compatible with
the emergence and early dispersal of H. sapiens.
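The arithmetic behind this method can be sketched as follows. The numbers here are hypothetical placeholders chosen only to illustrate the logic, not the study's actual data.

    # Illustrative sketch of the logic of phoneme-diversity dating
    # (Perreault and Mathew 2012). All numbers are hypothetical
    # placeholders, not the study's actual data.

    # Assumption: phonemic diversity accumulates at a roughly constant
    # rate, which can be estimated from a region whose colonization
    # date is independently known (the study used Southeast Asia).
    phonemes_gained = 10          # hypothetical gain since colonization
    years_elapsed = 40_000        # hypothetical time since colonization
    rate = phonemes_gained / years_elapsed  # phonemes per year

    # Apply that rate to the (hypothetical) excess phonemic diversity
    # of African languages over a founding baseline.
    african_excess = 45
    estimated_age = african_excess / rate
    print(f"Estimated age of language: {estimated_age:,.0f} years")  # 180,000 here

With these made-up inputs the estimate falls at 180,000 years; Perreault and Mathew's actual data yielded the 150,000 to 350,000 year range quoted above.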
This approach has, however, been criticized as methodologically flawed.
Claimed characteristics
Speculation on the "characteristics" of proto-world is limited to linguistic typology, i.e. the identification of universal features shared by all human languages, such as grammar (in the sense of "fixed or preferred sequences of linguistic elements"), and recursion, but beyond this, nothing is known of it.
Christopher Ehret has hypothesized that proto-human had a very complex consonant system, including clicks.
Ruhlen
tentatively traces several words back to the ancestral language, based
on the occurrence of similar sound-and-meaning forms in languages across
the globe. Bengtson and Ruhlen identify 27 "global etymologies". The following table lists a selection of these forms:
The symbol V stands for "a vowel whose precise character is unknown" (ib. 105).

No.   Root     Gloss
4     čun(g)a  'nose; to smell'
10    ku(n)    'who?'
26    tsuma    'hair'
27    ʔaq'wa   'water'

Based on these correspondences, Ruhlen lists these roots for the ancestor language:
ku = 'who'
ma = 'what'
akwa = 'water'
sum = 'hair'
čuna = 'nose, smell'
Syntax
There are competing theories about the basic word order of the hypothesized proto-human. These usually assume subject-initial ordering because it is the most common globally. Derek Bickerton proposes SVO (subject-verb-object) because this word order (like its mirror OVS) helps differentiate between the subject and object in the absence of evolved case markers by separating them with the verb.
By contrast, Talmy Givón hypothesizes that proto-human had SOV (subject-object-verb), based on the observation that many old languages (e.g., Sanskrit and Latin)
had dominant SOV, but the proportion of SVO has increased over time. On
such a basis, it is suggested that human languages are shifting
globally from the original SOV to the modern SVO. Givón bases his theory
on the empirical claim that word-order change mostly results in SVO and
never in SOV.
Exploring Givón's idea in their 2011 paper, Murray Gell-Mann and Merritt Ruhlen
stated that shifts to SOV are also attested. However, when these are
excluded, the data indeed supported Givón's claim. The authors justified
the exclusion by pointing out that the shift to SOV is invariably a
matter of borrowing the order from a neighboring language. Moreover,
they argued that, since many languages have already changed to SVO, a
new trend towards VSO and VOS ordering has arisen.
Harald Hammarström
reanalysed the data. In contrast to such claims, he found that a shift
to SOV is in every case the most common type, suggesting that there is,
rather, an unchanged universal tendency towards SOV regardless of the
way that languages change and that the relative increase of SVO is a
historical effect of European colonialism.
Criticism
Many
linguists reject the methods used to determine these forms. Several
areas of criticism are raised with the methods Ruhlen and Gell-Mann
employed. The essential basis of these criticisms is that the words
being compared do not show common ancestry; the reasons for this vary.
One is onomatopoeia: for example, the suggested root for smell listed above, *čuna,
may simply be a result of many languages employing an onomatopoeic word
that sounds like sniffing, snuffling, or smelling. Another is the taboo quality of certain words. Lyle Campbell points out that many established proto-languages do not contain an equivalent word for *putV
'vulva' because of how often such taboo words are replaced in the
lexicon, and notes that it "strains credibility to imagine" that a
proto-world form of such a word would survive in many languages.
Using the criteria that Bengtson and Ruhlen employ to find
cognates to their proposed roots, Campbell found seven possible matches
to their root for woman *kuna in Spanish, including cónyuge 'wife, spouse', chica 'girl', and cana 'old woman' (adjective). He then goes on to show how what Bengtson and Ruhlen would identify as reflexes of *kuna cannot possibly be related to a proto-world word for woman. Cónyuge, for example, comes from the Latin root meaning 'to join', so its origin had nothing to do with the word woman; chica is related to a Latin word meaning 'insignificant thing'; cana comes from the Latin word for white, and again shows a history unrelated to the word woman. Campbell asserts that these types of problems are endemic to the methods used by Ruhlen and others.
Some linguists question the very possibility of tracing language
elements so far back into the past. Campbell notes that given the time
elapsed since the origin of human language, every word from that time
would have been replaced or changed beyond recognition in all languages
today. Campbell harshly criticizes efforts to reconstruct a proto-human
language, saying "the search for global etymologies is at best a
hopeless waste of time, at worst an embarrassment to linguistics as a
discipline, unfortunately confusing and misleading to those who might
look to linguistics for understanding in this area."
In aphasia (sometimes called dysphasia), a person may be unable to comprehend or unable to formulate language because of damage to specific brain regions. The major causes are stroke and head trauma; prevalence is hard to determine but aphasia due to stroke is estimated to be 0.1–0.4% in the Global North. Aphasia can also be the result of brain tumors, epilepsy, autoimmune neurological diseases, brain infections, or neurodegenerative diseases (such as dementias).
To be diagnosed with aphasia, a person's language must be
significantly impaired in one (or more) of the four aspects of
communication. Alternatively, in the case of progressive aphasia, it
must have significantly declined over a short period of time. The four
aspects of communication are spoken language production and
comprehension, and written language production and comprehension; impairment in any of these aspects can affect functional communication.
The difficulties of people with aphasia can range from occasional
trouble finding words, to losing the ability to speak, read, or write;
intelligence, however, is unaffected. Expressive language and receptive language can both be affected as well. Aphasia also affects visual language such as sign language. In contrast, the use of formulaic expressions in everyday communication is often preserved. For example, while a person with aphasia, particularly expressive aphasia (Broca's
aphasia), may not be able to ask a loved one when their birthday is,
they may still be able to sing "Happy Birthday". One prevalent deficit
in all aphasias is anomia, which is a difficulty in finding the correct word.
With aphasia, one or more modes of communication in the brain
have been damaged and are therefore functioning incorrectly. Aphasia is
not caused by damage to the brain that results in motor or sensory
deficits, which produces abnormal speech; that is, aphasia is not related to the mechanics of speech
but rather the individual's language cognition (although a person can
have both problems, as an example, if they have a haemorrhage that
damaged a large area of the brain). An individual's language is the
socially shared set of rules, as well as the thought processes that go
behind communication (as it affects both verbal and nonverbal language).
It is not a result of a more peripheral motor or sensory difficulty,
such as paralysis affecting the speech muscles or a general hearing impairment.
Neurodevelopmental forms of auditory processing disorder
are differentiable from aphasia in that aphasia is by definition caused
by acquired brain injury, but acquired epileptic aphasia has been
viewed as a form of APD.
Signs and symptoms
People
with aphasia may experience any of the following behaviors due to an
acquired brain injury, although some of these symptoms may be due to
related or concomitant problems, such as dysarthria or apraxia,
and not primarily due to aphasia. Aphasia symptoms can vary based on
the location of damage in the brain. Signs and symptoms may or may not
be present in individuals with aphasia and may vary in severity and
level of disruption to communication. Often those with aphasia have difficulty naming objects, so they might use words such as thing or point at the objects. When asked to name a pencil, they may say it is a "thing used to write".
Given
the previously stated signs and symptoms, the following behaviors are
often seen in people with aphasia as a result of attempted compensation
for incurred speech and language deficits:
Self-repairs: Further disruptions in fluent speech as a result of failed attempts to repair errors in speech production.
Struggle in non-fluent aphasias: A marked increase in the effort required to speak, after a lifetime in which talking and communicating came easily, which can cause visible frustration.
Preserved and automatic language: A behavior in which some language or language sequences that were used very frequently prior to onset are still produced with more ease than other language after onset.
Subcortical
The characteristics and symptoms of subcortical aphasias depend upon the site and size of the subcortical lesion. Possible sites of lesions include the thalamus, internal capsule, and basal ganglia.
Cognitive deficits
While
aphasia has traditionally been described in terms of language deficits,
there is increasing evidence that many people with aphasia commonly
experience co-occurring non-linguistic cognitive deficits in areas such as attention, memory, executive functions, and learning. By some accounts, cognitive deficits, such as those in attention and working memory, constitute the underlying cause of language impairment in people with aphasia.
Others suggest that cognitive deficits often co-occur but are comparable to cognitive deficits in stroke patients without aphasia and reflect general brain dysfunction following injury. It has, however, been shown that cognitive neural networks support language reorganisation after stroke.
The degree to which deficits in attention and other cognitive domains underlie language deficits in aphasia is still unclear.
In particular, people with aphasia often demonstrate short-term and working memory deficits. These deficits can occur in both the verbal domain as well as the visuospatial domain.
Furthermore, these deficits are often associated with performance on language-specific tasks such as naming, lexical processing, sentence comprehension, and discourse production.
Other studies have found that most, but not all, people with aphasia demonstrate performance deficits on tasks of attention, and that their performance on these tasks correlates with language performance and cognitive ability in other domains.
Even patients with mild aphasia, who score near the ceiling on tests of language, often demonstrate slower response times and interference effects in non-verbal attention abilities.
In addition to deficits in short-term memory, working memory, and
attention, people with aphasia can also demonstrate deficits in
executive function. For instance, people with aphasia may demonstrate deficits in initiation, planning, self-monitoring, and cognitive flexibility.
Other studies have found that people with aphasia demonstrate reduced speed and efficiency during the completion of executive function assessments.
Regardless of their role in the underlying nature of aphasia,
cognitive deficits have a clear role in the study and rehabilitation of
aphasia. For instance, the severity of cognitive deficits in people with
aphasia has been associated with lower quality of life, even more so
than the severity of language deficits. Furthermore, cognitive deficits may influence the learning process of rehabilitation and language treatment outcomes in aphasia.
Non-linguistic cognitive deficits have also been the target of
interventions directed at improving language ability, though outcomes
are not definitive. While some studies have demonstrated language improvement secondary to cognitively-focused treatment,
others have found little evidence that the treatment of cognitive
deficits in people with aphasia has an influence on language outcomes.
One important caveat in the measurement and treatment of
cognitive deficits in people with aphasia is the degree to which
assessments of cognition rely on language abilities for successful
performance.
Most studies have attempted to circumvent this challenge by utilizing
non-verbal cognitive assessments to evaluate cognitive ability in people
with aphasia. However, the degree to which these tasks are truly
'non-verbal' and not mediated by language is unclear. For instance, Wall et al.
found that language and non-linguistic performance was related, except
when non-linguistic performance was measured by 'real life' cognitive
tasks.
Causes
Aphasia is most often caused by stroke; about a quarter of patients who experience an acute stroke develop aphasia.
However, any disease or damage to the parts of the brain that control
language can cause aphasia. Some of these can include brain tumors,
traumatic brain injury, epilepsy and progressive neurological disorders. In rare cases, aphasia may also result from herpesviral encephalitis. The herpes simplex virus affects the frontal and temporal lobes, subcortical structures, and the hippocampal tissue, which can trigger aphasia. In acute disorders, such as head injury or stroke, aphasia usually develops quickly. When caused by brain tumor, infection, or dementia, it develops more slowly.
Substantial damage to tissue anywhere within the brain's language regions can potentially result in aphasia.
Aphasia can also sometimes be caused by damage to subcortical
structures deep within the left hemisphere, including the thalamus, the internal and external capsules, and the caudate nucleus of the basal ganglia. The area and extent of brain damage or atrophy will determine the type of aphasia and its symptoms. A very small number of people can experience aphasia after damage to the right hemisphere
only. It has been suggested that these individuals may have had an
unusual brain organization prior to their illness or injury, with
perhaps greater overall reliance on the right hemisphere for language
skills than in the general population.
Primary progressive aphasia
(PPA), while its name can be misleading, is actually a form of dementia
that has some symptoms closely related to several forms of aphasia. It
is characterized by a gradual loss in language functioning while other
cognitive domains are mostly preserved, such as memory and personality.
PPA usually begins with word-finding difficulties and progresses to a reduced ability to formulate grammatically correct sentences (syntax) and impaired comprehension. The
etiology of PPA is not due to a stroke, traumatic brain injury (TBI),
or infectious disease; it is still uncertain what initiates the onset of
PPA in those affected by it.
Epilepsy can also include transient aphasia as a prodromal or episodic symptom.
However, the repeated seizure activity within language regions may also
lead to chronic and progressive aphasia. Aphasia is also listed as a
rare side-effect of the fentanyl patch, an opioid used to control chronic pain.
Diagnosis
Neuroimaging methods
Magnetic resonance imaging (MRI) and functional magnetic resonance imaging
(fMRI) are the most common neuroimaging tools used in identifying
aphasia and studying the extent of damage in the loss of language
abilities. This is done by performing MRI scans and locating the extent of lesions or damage within brain tissue, particularly within the left frontal and temporal regions, where many language-related areas lie. In fMRI studies, a language-related task is often completed and the BOLD image is then analyzed. Lower-than-normal BOLD responses indicate reduced blood flow to the affected area and can show quantitatively that the cognitive task is not being completed.
There are limitations to the use of fMRI in aphasic patients particularly. Because a high percentage of aphasic patients develop the condition because of stroke, an infarct may be present, which is an area of total loss of blood flow caused by the narrowing or complete blockage of a blood vessel. This is important in fMRI, which relies on the BOLD response (the oxygen levels of the blood vessels), because the infarct can create a false hyporesponse in an fMRI study.
Due to limitations of fMRI such as its lower spatial resolution, the technique can suggest that some areas of the brain are not active during a task when they in reality are. Additionally, with stroke being the cause of many cases of aphasia, the extent of damage to brain tissue can be difficult to quantify, so the effects of stroke-related brain damage on a patient's functioning can vary.
Neural substrates of aphasia subtypes
MRI is often used to predict or confirm the subtype of aphasia present. Researchers compared three subtypes of aphasia, nonfluent-variant primary progressive aphasia (nfPPA), logopenic-variant primary progressive aphasia (lvPPA), and semantic-variant primary progressive aphasia (svPPA), with primary progressive aphasia (PPA) and Alzheimer's disease. This was done by analyzing the MRIs of patients with each of the subsets of PPA.
Images which compare subtypes of aphasia, as well as images for finding the extent of lesions, are generated by overlapping images of different participants' brains (if applicable) and isolating areas of lesions or damage using third-party software such as MRIcron. MRI has also been
used to study the relationship between the type of aphasia developed and
the age of the person with aphasia. It was found that patients with
fluent aphasia are on average older than people with non-fluent aphasia.
It was also found that, among patients with lesions confined to the anterior portion of the brain, an unexpected proportion presented with fluent aphasia and were markedly older than those with non-fluent aphasia. This effect was not found when the posterior portion of the brain was studied.
Associated conditions
In a study on the features associated with different disease trajectories in Alzheimer's disease
(AD)-related primary progressive aphasia (PPA), it was found that
metabolic patterns via PET SPM analysis can help predict progression of
total loss of speech and functional autonomy in AD and PPA patients.
This was done by comparing MRI or CT images of the brain and the presence of a radioactive biomarker against normal levels in patients without Alzheimer's disease.
Apraxia is another disorder often correlated with aphasia. This is due to a subset of apraxia which affects speech. Specifically, this subset affects the movement of muscles associated with speech production; apraxia and aphasia are often correlated due to the proximity of the neural substrates associated with each disorder. Researchers concluded that there were two areas of lesion overlap between patients with apraxia and aphasia: the anterior temporal lobe and the left inferior parietal lobe.
Treatment and neuroimaging
Evidence for positive treatment outcomes can also be quantified
using neuroimaging tools. The use of fMRI and an automatic classifier
can help predict language recovery outcomes in stroke patients with 86%
accuracy when coupled with age and language test scores. The stimuli tested were both correct and incorrect sentences, and the subject had to press a button whenever a sentence was incorrect. The fMRI data
collected focused on responses in regions of interest identified by
healthy subjects. Recovery from aphasia can also be quantified using diffusion tensor imaging. The arcuate fasciculus (AF) connects the right and left superior temporal lobe, premotor regions/posterior inferior frontal gyrus, and the primary motor cortex. In a study which enrolled patients in a speech therapy program, an increase in AF fibers and volume was found in patients after 6 weeks in the program, which correlated with long-term improvement in those patients. This implies that DTI can be used to quantify the improvement in patients after speech and language treatment programs are applied.
Classification
Aphasia
is best thought of as a collection of different disorders, rather than a
single problem. Each individual with aphasia will present with their
own particular combination of language strengths and weaknesses.
Consequently, it is a major challenge just to document the various
difficulties that can occur in different people, let alone decide how
they might best be treated. Most classifications of the aphasias tend to
divide the various symptoms into broad classes. A common approach is to
distinguish between the fluent aphasias (where speech remains fluent,
but content may be lacking, and the person may have difficulties
understanding others), and the nonfluent aphasias (where speech is very
halting and effortful, and may consist of just one or two words at a
time).
However, no such broad-based grouping has proven fully adequate,
or reliable. There is wide variation among people even within the same
broad grouping, and aphasias can be highly selective. For instance,
people with naming deficits (anomic aphasia) might show an inability
only for naming buildings, or people, or colors.
Unfortunately, assessments that characterize aphasia in these groupings
have persisted. This is not helpful to people living with aphasia, and
provides inaccurate descriptions of an individual pattern of
difficulties.
Typical difficulties with speech and language also come with normal aging. As people age, language can become more difficult to process, resulting in slower verbal comprehension, reduced reading ability, and more frequent word-finding difficulties. With each of these, though, unlike in some aphasias, functionality within daily life remains intact.
Boston classification
Major characteristics of different types of aphasia according to the Boston classification
Individuals with receptive aphasia (Wernicke's
aphasia), also referred to as fluent aphasia, may speak in long
sentences that have no meaning, add unnecessary words, and even create
new "words" (neologisms).
For example, someone with receptive aphasia may say, "delicious taco",
meaning "The dog needs to go out so I will take him for a walk". They
have poor auditory and reading comprehension, and fluent, but
nonsensical, oral and written expression. Individuals with receptive
aphasia usually have great difficulty understanding the speech of both
themselves and others and are, therefore, often unaware of their
mistakes. Receptive language deficits usually arise from lesions in the
posterior portion of the left hemisphere at or near Wernicke's area. It is often the result of trauma to the temporal region of the brain, specifically damage to Wernicke's area. Trauma can result from an array of problems; however, it is most commonly seen as a result of stroke.
Individuals with expressive aphasia (Broca's
aphasia) frequently speak short, meaningful phrases that are produced
with great effort. It is thus characterized as a nonfluent aphasia.
Affected people often omit small words such as "is", "and", and "the".
For example, a person with expressive aphasia may say, "walk dog", which
could mean "I will take the dog for a walk", "you take the dog for a
walk" or even "the dog walked out of the yard." Individuals with
expressive aphasia are able to understand the speech of others to
varying degrees. Because of this, they are often aware of their
difficulties and can become easily frustrated by their speaking
problems.
While Broca's aphasia may appear to be solely an issue with language
production, evidence suggests that it may be rooted in an inability to
process syntactical information.
Individuals with expressive aphasia may have a speech automatism (also
called recurring or recurrent utterance). These speech automatisms can
be repeated lexical speech automatisms; e.g., modalisations ('I
can't..., I can't...'), expletives/swearwords, numbers ('one two, one
two') or non-lexical utterances made up of repeated, legal but
meaningless, consonant-vowel syllables (e.g., /tan tan/, /bi bi/). In
severe cases, the individual may be able to utter only the same speech
automatism each time they attempt speech.
Individuals with anomic aphasia
have difficulty with naming. People with this aphasia may have
difficulties naming certain words, linked by their grammatical type
(e.g., difficulty naming verbs and not nouns) or by their semantic
category (e.g., difficulty naming words relating to photography but
nothing else) or a more general naming difficulty. People tend to
produce grammatical, yet empty, speech. Auditory comprehension tends to be
preserved.
Anomic aphasia is the aphasic presentation of tumors in the language
zone; it is also the aphasic presentation of Alzheimer's disease. Anomic aphasia is the mildest form of aphasia, indicating a likely possibility for better recovery.
Individuals with transcortical sensory aphasia, in principle the
most general and potentially among the most complex forms of aphasia,
may have similar deficits as in receptive aphasia, but their repetition
ability may remain intact.
Global aphasia is considered a severe impairment in many language
aspects since it impacts expressive and receptive language, reading, and
writing. Despite these many deficits, there is evidence that individuals have benefited from speech-language therapy.
Even though individuals with global aphasia will not become competent
speakers, listeners, writers, or readers, goals can be created to
improve the individual's quality of life.
Individuals with global aphasia usually respond well to treatment that
includes personally relevant information, which is also important to
consider for therapy.
Individuals with conduction aphasia have deficits in the connections
between the speech-comprehension and speech-production areas. This
might be caused by damage to the arcuate fasciculus, the structure that transmits information between Wernicke's area and Broca's area. Similar symptoms, however, can be present after damage to the insula or to the auditory cortex.
Auditory comprehension is near normal, and oral expression is fluent
with occasional paraphasic errors. Paraphasic errors may be phonemic/literal or semantic/verbal. Repetition ability is poor.
Conduction and transcortical aphasias are caused by damage to the white
matter tracts. These aphasias spare the cortex of the language centers
but instead create a disconnection between them. Conduction aphasia is
caused by damage to the arcuate fasciculus. The arcuate fasciculus is a
white matter tract that connects Broca's and Wernicke's areas. People
with conduction aphasia typically have good language comprehension, but
poor speech repetition and mild difficulty with word retrieval and
speech production. People with conduction aphasia are typically aware of
their errors. Two forms of conduction aphasia have been described: reproduction conduction aphasia (repetition of a single relatively unfamiliar multisyllabic word) and repetition conduction aphasia (repetition of unconnected short familiar words).
Transcortical aphasias include transcortical motor aphasia,
transcortical sensory aphasia, and mixed transcortical aphasia. People
with transcortical motor aphasia typically have intact comprehension and
awareness of their errors, but poor word finding and speech production.
People with transcortical sensory and mixed transcortical aphasia have
poor comprehension and unawareness of their errors.
Despite poor comprehension and more severe deficits in some
transcortical aphasias, small studies have indicated that full recovery
is possible for all types of transcortical aphasia.
Classical-localizationist approaches
Localizationist approaches aim to classify the aphasias according to
their major presenting characteristics and the regions of the brain that
most probably gave rise to them. Inspired by the early work of nineteenth-century neurologists Paul Broca and Carl Wernicke, these approaches identify two major subtypes of aphasia and several more minor subtypes:
Expressive aphasia
(also known as "motor aphasia" or "Broca's aphasia"), which is
characterized by halted, fragmented, effortful speech, but
well-preserved comprehension relative to expression. Damage is typically in the anterior portion of the left hemisphere, most notably Broca's area. Individuals with Broca's aphasia often have right-sided weakness
or paralysis of the arm and leg, because the left frontal lobe is also
important for body movement, particularly on the right side.
Receptive aphasia
(also known as "sensory aphasia" or "Wernicke's aphasia"), which is
characterized by fluent speech, but marked difficulties understanding
words and sentences. Although fluent, the speech may be lacking in key
substantive words (nouns, verbs, adjectives), and may contain incorrect
words or even nonsense words. This subtype has been associated with
damage to the posterior left temporal cortex, most notably Wernicke's
area. These individuals usually have no body weakness, because their
brain injury is not near the parts of the brain that control movement.
Conduction aphasia,
where speech remains fluent, and comprehension is preserved, but the
person may have disproportionate difficulty repeating words or
sentences. Damage typically involves the arcuate fasciculus and the left parietal region.
Recent classification schemes adopting this approach, such as the Boston-Neoclassical Model,
also group these classical aphasia subtypes into two larger classes:
the nonfluent aphasias (which encompasses Broca's aphasia and
transcortical motor aphasia) and the fluent aphasias (which encompasses
Wernicke's aphasia, conduction aphasia and transcortical sensory
aphasia). These schemes also identify several further aphasia subtypes,
including: anomic aphasia, which is characterized by a selective difficulty finding the names for things; and global aphasia, where both expression and comprehension of speech are severely compromised.
Many localizationist approaches also recognize the existence of
additional, more "pure" forms of language disorder that may affect only a
single language skill. For example, in pure alexia, a person may be able to write but not read, and in pure word deafness, they may be able to produce speech and to read, but not understand speech when it is spoken to them.
Cognitive neuropsychological approaches
Although
localizationist approaches provide a useful way of classifying the
different patterns of language difficulty into broad groups, one problem
is that most individuals do not fit neatly into one category or
another.
Another problem is that the categories, particularly the major ones
such as Broca's and Wernicke's aphasia, still remain quite broad and do
not meaningfully reflect a person's difficulties. Consequently, even
amongst those who meet the criteria for classification into a subtype,
there can be enormous variability in the types of difficulties they
experience.
Instead of categorizing every individual into a specific subtype,
cognitive neuropsychological approaches aim to identify the key
language skills or "modules" that are not functioning properly in each
individual. A person could potentially have difficulty with just one
module, or with a number of modules. This type of approach requires a
framework or theory as to what skills/modules are needed to perform
different kinds of language tasks. For example, the model of Max Coltheart identifies a module that recognizes phonemes
as they are spoken, which is essential for any task involving
recognition of words. Similarly, there is a module that stores phonemes
that the person is planning to produce in speech, and this module is
critical for any task involving the production of long words or long
strings of speech. Once a theoretical framework has been established,
the functioning of each module can then be assessed using a specific
test or set of tests. In the clinical setting, use of this model usually
involves conducting a battery of assessments,
each of which tests one or a number of these modules. Once a diagnosis
is reached as to the skills/modules where the most significant
impairment lies, therapy can proceed to treat these skills.
Progressive aphasias
Primary progressive aphasia (PPA) is a neurodegenerative focal dementia that can be associated with progressive illnesses or dementia (the gradual loss of the ability to think), such as frontotemporal dementia/Pick complex, motor neuron disease, progressive supranuclear palsy, and Alzheimer's disease. Gradual loss of language function occurs in the context of
relatively well-preserved memory, visual processing, and personality
until the advanced stages. Symptoms usually begin with word-finding
problems (naming) and progress to impaired grammar (syntax) and
comprehension (sentence processing and semantics). The loss of language
before the loss of memory differentiates PPA from typical dementias.
People with PPA may have difficulties comprehending what others are
saying. They can also have difficulty trying to find the right words to
make a sentence. There are three classifications of primary progressive aphasia: progressive nonfluent aphasia (PNFA), semantic dementia (SD), and logopenic progressive aphasia (LPA).
Progressive Jargon Aphasia
is a fluent or receptive aphasia in which the person's speech is
incomprehensible, but appears to make sense to them. Speech is fluent
and effortless with intact syntax and grammar, but the person has problems with the selection of nouns.
Either they replace the desired word with another that sounds or looks like the original or has some other connection to it, or they replace it with sounds. As such, people with jargon aphasia often use neologisms, and may perseverate if they try to replace the words they cannot find with sounds.
Substitutions commonly involve picking another (actual) word starting
with the same sound (e.g., clocktower – colander), picking another
semantically related to the first (e.g., letter – scroll), or picking
one phonetically similar to the intended one (e.g., lane – late).
Deaf aphasia
Aphasia has also been documented in many deaf individuals. Sign languages are, after all, forms of language that have been shown to use the same areas of the brain as spoken language. Mirror neurons become activated both when an animal acts in a particular way and when it watches another individual act in the same manner. These mirror neurons are important in giving an individual the ability to mimic hand movements. Broca's area, associated with speech production, has been shown to contain several of these mirror neurons, resulting in significant similarities in brain activity between sign language and vocal speech communication. People use facial movements to create what others perceive as expressions of emotion. Combining these facial movements with speech creates a fuller form of language, enabling much more complex and detailed communication. Sign language also uses these facial movements and emotions along with its primary means of communicating, hand movement. These facial-movement forms of communication come from the same areas of the brain. Consequently, damage to the brain areas that puts vocal communication at risk of severe aphasia places sign language at risk of the same, or at least very similar, forms of aphasia, which do appear in the Deaf community. Individuals can show a form of Wernicke's aphasia in sign language, with deficits in their ability to produce any form of expression. Broca's aphasia appears in some people as well; these individuals find it tremendously difficult to sign the linguistic concepts they are trying to express.
Severity
The severity of aphasia varies with the size of the stroke. However, how often a given level of severity occurs differs across the types of aphasia; any type of aphasia can range from mild to profound. Regardless of the severity of aphasia, people can make improvements due to spontaneous recovery and treatment in the acute stages of recovery.
Additionally, while most studies propose that the greatest outcomes
occur in people with severe aphasia when treatment is provided in the
acute stages of recovery, Robey (1998) also found that those with severe
aphasia are capable of making strong language gains in the chronic
stage of recovery as well.
This finding implies that persons with aphasia have the potential to
have functional outcomes regardless of how severe their aphasia may be.
While there is no distinct pattern of aphasia outcomes based on severity alone, people with global aphasia typically make functional language gains, though these may be gradual, since global aphasia affects many language areas.
Prevention
Aphasia is largely caused by unavoidable events. However, some precautions can be taken to decrease the risk of experiencing one of the two major causes of aphasia: stroke and traumatic brain injury (TBI). To decrease the probability of having an ischemic or hemorrhagic stroke, one should take the following precautions:
Exercising regularly
Eating a healthy diet, avoiding cholesterol in particular
Keeping alcohol consumption low and avoiding tobacco use
Controlling blood pressure
Going to the emergency room immediately if you begin to experience
unilateral extremity (especially leg) swelling, warmth, redness, and/or
tenderness as these are symptoms of a deep vein thrombosis which can
lead to a stroke
To prevent aphasia due to traumatic injury, one should take
precautionary measures when engaging in dangerous activities such as:
Wearing a helmet when riding a bicycle, motorcycle, ATV, or any other vehicle that could potentially be involved in an accident
Wearing a seatbelt when driving or riding in a car
Wearing proper protective gear when playing contact sports,
especially American football, rugby, and hockey, or refraining from such
activities
Minimizing anticoagulant use (including aspirin) if at all possible as they increase the risk of hemorrhage after a head injury
Additionally, one should always seek medical attention after
sustaining head trauma due to a fall or accident. The sooner that one
receives medical attention for a traumatic brain injury, the less likely
one is to experience long-term or severe effects.
Management
In most acute cases of aphasia, people recover some or most skills by participating in speech and language therapy.
Recovery and improvement can continue for years after the stroke. After
the onset of aphasia, there is approximately a six-month period of
spontaneous recovery; during this time, the brain is attempting to
recover and repair the damaged neurons. Improvement varies widely,
depending on the aphasia's cause, type, and severity. Recovery also
depends on the person's age, health, motivation, handedness, and educational level.
Speech and language therapy that is higher in intensity or dose, or that is provided over a longer duration, leads to significantly better functional communication, but people may be more likely to drop out of high-intensity treatment (up to 15 hours per week). A total of 20-50 hours of speech and language therapy is necessary for the best recovery. The most improvement happens when 2-5 hours of therapy is provided each week over 4-5 days. Recovery is further improved when, in addition to therapy, people practice tasks at home. Speech and language therapy is also effective when delivered online through video, or by a family member who has been trained by a professional therapist.
Recovery with therapy also depends on the recency of the stroke and the age of the person. Receiving therapy within a month after the stroke leads to the greatest improvements. Three to six months after the stroke, more therapy will be needed, but symptoms can still be improved. People with aphasia who are younger than 55 years are the most likely to improve, but people older than 75 years can still get better with therapy.
No one treatment has been proven effective for all types of aphasia. There is no universal treatment for aphasia because of the nature of the disorder and the various ways it presents. Aphasia is rarely exhibited identically, meaning that treatment must be tailored to the individual. Studies have shown that, although there is no consistency on treatment methodology in the literature, there is a strong indication that treatment in general has positive outcomes.
Therapy for aphasia ranges from increasing functional communication to
improving speech accuracy, depending on the person's severity, needs and
support of family and friends.
Group therapy allows individuals to work on their pragmatic and communication skills with other individuals with aphasia, skills that may not often be addressed in individual one-on-one therapy sessions. It can also help increase confidence and social skills in a comfortable setting.
Evidence does not support the use of transcranial direct current stimulation (tDCS) for improving aphasia after stroke, although moderate-quality evidence does indicate naming-performance improvements for nouns, but not verbs, using tDCS.
Specific treatment techniques include the following:
Copy and recall therapy (CART) – repetition and recall of
targeted words within therapy may strengthen orthographic
representations and improve single word reading, writing, and naming
Visual communication therapy (VIC) – the use of index cards with symbols to represent various components of speech
Visual action therapy (VAT) – typically treats individuals with
global aphasia to train the use of hand gestures for specific items
Functional communication treatment (FCT) – focuses on improving
activities specific to functional tasks, social interaction, and
self-expression
Promoting aphasics' communicative effectiveness (PACE) – a means of
encouraging normal interaction between people with aphasia and
clinicians. In this kind of therapy, the focus is on pragmatic
communication rather than treatment itself. People are asked to
communicate a given message to their therapists by means of drawing,
making hand gestures or even pointing to an object
Melodic intonation therapy (MIT) – aims to use the intact
melodic/prosodic processing skills of the right hemisphere to help cue
retrieval of words and expressive language
Centeredness Theory Interview (CTI) – uses client-centered goal formation around the nature of current patient interactions, as well as future/desired interactions, to improve subjective well-being, cognition, and communication
Semantic feature analysis (SFA) – a type of aphasia treatment that targets word-finding deficits. It is based on the theory that neural connections can be strengthened by using words and phrases related to the target word to eventually activate the target word in the brain. SFA can be implemented in multiple forms, such as verbally, in writing, or with picture cards. The SLP provides prompting questions so that the individual with aphasia can name the picture provided. Studies show that SFA is an effective intervention for improving confrontational naming.
Other – e.g., drawing as a way of communicating, trained conversation partners
Melodic intonation therapy is used to treat non-fluent aphasia and has proved effective in some cases. However, there is still no evidence from randomized controlled trials confirming the efficacy of MIT in chronic aphasia. MIT is used to help people with aphasia vocalize through song, with the sung phrases then transferred to spoken words. Good candidates for this therapy include people who have had a left-hemisphere stroke, a non-fluent aphasia such as Broca's, good auditory comprehension, poor repetition and articulation, and good emotional stability and memory.
An alternative explanation is that the efficacy of MIT depends on
neural circuits involved in the processing of rhythmicity and formulaic expressions
(examples taken from the MIT manual: "I am fine," "how are you?" or
"thank you"); while rhythmic features associated with melodic intonation
may engage primarily left-hemisphere subcortical areas of the brain,
the use of formulaic expressions is known to be supported by
right-hemisphere cortical and bilateral subcortical neural networks.
Systematic reviews support the effectiveness and importance of partner training.
According to the National Institute on Deafness and Other Communication Disorders (NIDCD), involving family in the treatment of a loved one with aphasia is ideal for all involved: it will assist in the person's recovery while also making it easier for family members to learn how best to communicate with them.
When a person's speech is insufficient, different kinds of augmentative and alternative communication
could be considered such as alphabet boards, pictorial communication
books, specialized software for computers or apps for tablets or
smartphones.
When addressing Wernicke's aphasia, according to Bakheit et al. (2007), the lack of awareness of the language impairments, a common characteristic of Wernicke's aphasia, may affect the rate and extent of therapy outcomes. Robey (1998) recommended at least 2 hours of treatment per week for making significant language gains.
Spontaneous recovery may cause some language gains, but without
speech-language therapy, the outcomes can be half as strong as those
with therapy.
When addressing Broca's aphasia, better outcomes occur when the
person participates in therapy, and treatment is more effective than no
treatment for people in the acute period. Two or more hours of therapy per week in acute and post-acute stages produced the greatest results. High-intensity therapy was most effective, and low-intensity therapy was almost equivalent to no therapy.
People with global aphasia are sometimes referred to as having
irreversible aphasic syndrome, often making limited gains in auditory
comprehension, and recovering no functional language modality with
therapy. That said, people with global aphasia may retain gestural
communication skills that may enable success when communicating with
conversational partners within familiar conditions. Process-oriented
treatment options are limited, and people may not become competent
language users as readers, listeners, writers, or speakers no matter how
extensive therapy is. However, people's daily routines and quality of life can be enhanced with reasonable and modest goals.
After the first month, there is limited to no healing of the language abilities of most people, and the prognosis is grim: 83% of those who were globally aphasic after the first month remain globally aphasic at one year. Some people are so severely impaired that existing process-oriented treatment approaches offer no signs of progress and therefore cannot justify the cost of therapy.
Perhaps due to the relative rarity of conduction aphasia, few studies have specifically examined the effectiveness of therapy for people with this type of aphasia. The studies performed show that therapy can help to improve specific language outcomes. One intervention that has had positive results is auditory repetition training. Kohn et al. (1990) reported that drilled auditory repetition training was related to improvements in spontaneous speech, Francis et al. (2003) reported improvements in sentence comprehension, and Kalinyak-Fliszar et al. (2011) reported improvements in auditory-visual short-term memory.
Individualized service delivery
Intensity
of treatment should be individualized based on the recency of stroke,
therapy goals, and other specific characteristics such as age, size of
lesion, overall health status, and motivation. Each individual reacts differently to treatment intensity and is able to tolerate treatment at different times post-stroke. Intensity of treatment after a stroke should be dependent on the person's motivation, stamina, and tolerance for therapy.
Outcomes
If
the symptoms of aphasia last longer than two or three months after a
stroke, a complete recovery is unlikely. However, it is important to
note that some people continue to improve over a period of years and
even decades. Improvement is a slow process that usually involves both
helping the individual and family understand the nature of aphasia and
learning compensatory strategies for communicating.
After a traumatic brain injury (TBI) or cerebrovascular accident (CVA), the brain undergoes several healing and re-organization processes, which may result in improved language function. This is referred to as spontaneous recovery: the natural recovery the brain makes without treatment, as it begins to reorganize and change.
Several factors contribute to a person's chance of recovery from stroke-induced aphasia, including stroke size and location. Age, sex, and education have not been found to be very predictive. There is also research suggesting that damage to the left hemisphere heals more effectively than damage to the right.
Specific to aphasia, spontaneous recovery varies among affected
people and may not look the same in everyone, making it difficult to
predict recovery.
Though some cases of Wernicke's aphasia have shown greater improvements than milder forms of aphasia, people with Wernicke's aphasia may not reach as high a level of speech abilities as those with mild forms of aphasia.
Prevalence
Aphasia affects about two million people in the U.S. and 250,000 people in Great Britain. Nearly 180,000 people acquire the disorder every year in the U.S., 170,000 due to stroke.
Any person of any age can develop aphasia, given that it is often
caused by a traumatic injury. However, people who are middle aged and
older are the most likely to acquire aphasia, as the other etiologies
are more likely at older ages. For example, approximately 75% of all strokes occur in individuals over the age of 65. Strokes account for most documented cases of aphasia:
25% to 40% of people who survive a stroke develop aphasia as a result
of damage to the language-processing regions of the brain.
History
During the second half of the 19th century, aphasia was a major
focus for scientists and philosophers who were working in the beginning
stages of the field of psychology.
In medical research, speechlessness was described as an incorrect
prognosis, and there was no assumption that underlying language
complications existed.
Broca and his colleagues were some of the first to write about aphasia, but Wernicke was the first credited with writing extensively about aphasia as a disorder involving comprehension difficulties. Despite these claims about who reported on aphasia first, it was F.J. Gall who gave the first full description of aphasia, after studying wounds to the brain and observing speech difficulties resulting from vascular lesions. A recent book covers the entire history of aphasia (Tesak, J. & Code, C. (2008). Milestones in the History of Aphasia: Theories and Protagonists. Hove, East Sussex: Psychology Press).
Etymology
Aphasia is from the Ancient Greek ἀφασία aphasia, "speechlessness", derived from ἄφατος aphatos, "speechless", from ἀ- a- ("without", a negative prefix) and φημί phemi ("I speak"); compare φάσις phásis, "speech".
Further research
Research is currently being done using functional magnetic resonance imaging (fMRI) to observe differences in how language is processed in normal brains versus aphasic brains. This will help researchers to understand exactly what the brain must go through in order to recover from traumatic brain injury (TBI) and how different areas of the brain respond after such an injury.
Another intriguing approach being tested is drug therapy. Research is in progress to determine whether certain drugs might be used in addition to speech-language therapy to facilitate the recovery of proper language function. It is possible that the best treatment for aphasia might combine drug treatment with therapy, rather than relying on one over the other.
One other method being researched as a potential therapeutic complement to speech-language therapy is brain stimulation. One particular method, transcranial magnetic stimulation (TMS), alters brain activity in whatever area it stimulates, which has recently led scientists to ask whether this TMS-induced shift in brain function might help people re-learn language.
Research into aphasia has only just begun, and researchers are pursuing multiple ideas for how aphasia could be treated more effectively in the future.