Acetylcholine


Synonyms: ACh

Physiological data
Source tissues: motor neurons, parasympathetic nervous system, brain
Target tissues: skeletal muscles, brain, many other organs
Receptors: nicotinic, muscarinic
Agonists: nicotine, muscarine, cholinesterase inhibitors
Antagonists: tubocurarine, atropine
Precursors: choline, acetyl-CoA
Biosynthesis: choline acetyltransferase
Metabolism: acetylcholinesterase

Acetylcholine (ACh) is an organic chemical that functions in the brain and body of many types of animals, including humans, as a neurotransmitter—a chemical messenger released by nerve cells to send signals to other cells (neurons, muscle cells, and gland cells). Its name is derived from its chemical structure: it is an ester of acetic acid and choline. Parts of the body that use or are affected by acetylcholine are referred to as cholinergic. Substances that interfere with acetylcholine activity are called anticholinergics. Acetylcholine is the neurotransmitter used at the neuromuscular junction—in other words, it is the chemical that motor neurons of the nervous system release in order to activate muscles. This property means that drugs affecting cholinergic systems can have very dangerous effects, ranging from paralysis to convulsions. Acetylcholine is also a neurotransmitter in the autonomic nervous system, both as an internal transmitter for the sympathetic nervous system and as the final product released by the parasympathetic nervous system.

Acetylcholine has also been detected in cells of non-neural origin and in microbes, and enzymes related to its synthesis, degradation, and cellular uptake have been traced back to the early origins of unicellular eukaryotes. The protist pathogen Acanthamoeba spp. has been shown to contain ACh, which provides growth and proliferative signals via a membrane-located M1 muscarinic receptor homolog. In the brain, acetylcholine functions as a neurotransmitter and as a neuromodulator. The brain contains a number of cholinergic areas, each with distinct functions, playing important roles in arousal, attention, memory, and motivation.

Partly because of its muscle-activating function, but also because of its functions in the autonomic nervous system and brain, a large number of important drugs exert their effects by altering cholinergic transmission. Numerous venoms and toxins produced by plants, animals, and bacteria, as well as chemical nerve agents such as sarin, cause harm by inactivating or hyperactivating muscles via their influence on the neuromuscular junction. Drugs that act on muscarinic acetylcholine receptors, such as atropine, can be poisonous in large quantities, but in smaller doses they are commonly used to treat certain heart conditions and eye problems. Scopolamine, which acts mainly on muscarinic receptors in the brain, can cause delirium and amnesia. The addictive qualities of nicotine are derived from its effects on nicotinic acetylcholine receptors in the brain.

Chemistry

Acetylcholine is a choline molecule that has been acetylated at the oxygen atom. Because of the presence of a highly polar, charged ammonium group, acetylcholine does not penetrate lipid membranes. As a result, when introduced externally it remains in the extracellular space and does not pass through the blood–brain barrier. As a drug, acetylcholine is sold under the name Miochol.

Biochemistry

Acetylcholine is synthesized in certain neurons by the enzyme choline acetyltransferase from the compounds choline and acetyl-CoA. Cholinergic neurons are capable of producing ACh. An example of a central cholinergic area is the nucleus basalis of Meynert in the basal forebrain.

The enzyme acetylcholinesterase converts acetylcholine into the inactive metabolites choline and acetate. This enzyme is abundant in the synaptic cleft, and its role in rapidly clearing free acetylcholine from the synapse is essential for proper muscle function. Certain neurotoxins work by inhibiting acetylcholinesterase, thus leading to excess acetylcholine at the neuromuscular junction, causing paralysis of the muscles needed for breathing and stopping the beating of the heart.
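As a rough quantitative illustration of this clearance, the sketch below models acetylcholine hydrolysis with standard Michaelis–Menten enzyme kinetics (Python). The rate constants, concentrations, and time units are hypothetical placeholders chosen only to show the shape of the kinetics, not measured values for acetylcholinesterase.

# Illustrative Michaelis-Menten model of acetylcholine hydrolysis by
# acetylcholinesterase. All parameter values are hypothetical
# placeholders, not experimental constants.

def hydrolysis_rate(substrate_conc, vmax=1.0, km=0.05):
    """Rate v = Vmax * [S] / (Km + [S]), in arbitrary units."""
    return vmax * substrate_conc / (km + substrate_conc)

# Simulate clearance of free ACh from the synaptic cleft over time.
dt = 0.01          # time step (arbitrary units)
ach = 1.0          # initial ACh concentration (arbitrary units)
for step in range(200):
    ach -= hydrolysis_rate(ach) * dt
    ach = max(ach, 0.0)

print(f"ACh remaining after simulation: {ach:.4f}")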

Functions


Acetylcholine functions in both the central nervous system (CNS) and the peripheral nervous system (PNS). In the CNS, cholinergic projections from the basal forebrain to the cerebral cortex and hippocampus support the cognitive functions of those target areas. In the PNS, acetylcholine activates muscles and is a major neurotransmitter in the autonomic nervous system.

Cellular effects

Acetylcholine processing in a synapse. After release, acetylcholine is broken down by the enzyme acetylcholinesterase.

Like many other biologically active substances, acetylcholine exerts its effects by binding to and activating receptors located on the surface of cells. There are two main classes of acetylcholine receptor, nicotinic and muscarinic. They are named for chemicals that can selectively activate each type of receptor without activating the other: muscarine is a compound found in the mushroom Amanita muscaria; nicotine is found in tobacco.

Nicotinic acetylcholine receptors are ligand-gated ion channels permeable to sodium, potassium, and calcium ions. In other words, they are ion channels embedded in cell membranes, capable of switching from a closed to an open state when acetylcholine binds to them; in the open state they allow ions to pass through. Nicotinic receptors come in two main types, known as muscle-type and neuronal-type. The muscle-type can be selectively blocked by curare, the neuronal-type by hexamethonium. The main location of muscle-type receptors is on muscle cells, as described in more detail below. Neuronal-type receptors are located in autonomic ganglia (both sympathetic and parasympathetic), and in the central nervous system.

Muscarinic acetylcholine receptors have a more complex mechanism, and affect target cells over a longer time frame. In mammals, five subtypes of muscarinic receptors have been identified, labeled M1 through M5. All of them function as G protein-coupled receptors, meaning that they exert their effects via a second messenger system. The M1, M3, and M5 subtypes are Gq-coupled; they increase intracellular levels of IP3 and calcium by activating phospholipase C. Their effect on target cells is usually excitatory. The M2 and M4 subtypes are Gi/Go-coupled; they decrease intracellular levels of cAMP by inhibiting adenylate cyclase. Their effect on target cells is usually inhibitory. Muscarinic acetylcholine receptors are found in the central nervous system and in peripheral tissues such as the heart, lungs, upper gastrointestinal tract, and sweat glands.
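The subtype-to-coupling relationships described above form a simple lookup table. The sketch below restates them as a Python data structure; the field names are our own, and the entries simply restate the mapping from the preceding paragraph.

# Muscarinic receptor subtypes and their G-protein couplings, restated
# from the text above as a lookup table.
MUSCARINIC_SUBTYPES = {
    "M1": ("Gq", "IP3/Ca2+ up via phospholipase C", "excitatory"),
    "M2": ("Gi/Go", "cAMP down via adenylate cyclase inhibition", "inhibitory"),
    "M3": ("Gq", "IP3/Ca2+ up via phospholipase C", "excitatory"),
    "M4": ("Gi/Go", "cAMP down via adenylate cyclase inhibition", "inhibitory"),
    "M5": ("Gq", "IP3/Ca2+ up via phospholipase C", "excitatory"),
}

def describe(subtype):
    coupling, messenger, effect = MUSCARINIC_SUBTYPES[subtype]
    return f"{subtype}: {coupling}-coupled, {messenger}, usually {effect}"

print(describe("M2"))  # M2: Gi/Go-coupled, cAMP down ..., usually inhibitory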

Neuromuscular junction

Muscles contract when they receive signals from motor neurons. The neuromuscular junction is the site of this signal exchange. In vertebrates, the process occurs in the following steps:

1) An action potential reaches the axon terminal.
2) Calcium ions flow into the axon terminal.
3) Acetylcholine is released into the synaptic cleft.
4) Acetylcholine binds to postsynaptic receptors.
5) This binding opens ion channels, allowing sodium ions to flow into the muscle cell.
6) The flow of sodium ions across the membrane into the muscle cell generates an action potential, which induces muscle contraction.

Acetylcholine is the substance the nervous system uses to activate skeletal muscles, a kind of striated muscle. These are the muscles used for all types of voluntary movement, in contrast to smooth muscle tissue, which is involved in a range of involuntary activities such as movement of food through the gastrointestinal tract and constriction of blood vessels. Skeletal muscles are directly controlled by motor neurons located in the spinal cord or, in a few cases, the brainstem. These motor neurons send their axons through motor nerves, from which they emerge to connect to muscle fibers at a special type of synapse called the neuromuscular junction.

When a motor neuron generates an action potential, the impulse travels rapidly along the nerve until it reaches the neuromuscular junction, where it initiates an electrochemical process that causes acetylcholine to be released into the space between the presynaptic terminal and the muscle fiber. The acetylcholine molecules then bind to nicotinic ion-channel receptors on the muscle cell membrane, causing the ion channels to open. Sodium ions then flow into the muscle cell, initiating a sequence of steps that finally produces muscle contraction.
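Because the six-step sequence described above is strictly sequential, it can be read as a small event pipeline. The following Python sketch walks through it in order; the threshold value and function names are illustrative inventions, not physiological quantities.

# Toy walk-through of neuromuscular transmission. Step names follow the
# numbered list above; the "threshold" is an arbitrary illustrative
# value, not a physiological measurement.

STEPS = [
    "action potential reaches axon terminal",
    "Ca2+ flows into axon terminal",
    "ACh released into synaptic cleft",
    "ACh binds nicotinic receptors on muscle membrane",
    "ion channels open; Na+ flows into muscle cell",
    "depolarization triggers muscle action potential -> contraction",
]

def transmit(ach_released, threshold=0.5):
    """Toy model: the first four steps always occur; depolarization and
    contraction follow only if enough ACh binds (arbitrary threshold)."""
    for step in STEPS[:4]:
        print("->", step)
    if ach_released < threshold:
        print("-> sub-threshold end-plate potential; no contraction")
        return False
    for step in STEPS[4:]:
        print("->", step)
    return True

print("contraction:", transmit(ach_released=0.8))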

Factors that decrease the release of acetylcholine (in many cases by affecting P-type calcium channels):

1) Antibiotics (clindamycin, polymyxin)
2) Magnesium: antagonizes P-type calcium channels
3) Hypocalcemia
4) Anticonvulsants
5) Diuretics (furosemide)
6) Lambert–Eaton syndrome: antibodies inhibit P-type calcium channels
7) Botulinum toxin: inhibits SNARE proteins

Calcium channel blockers (nifedipine, diltiazem) do not affect P-type channels; these drugs act on L-type calcium channels.

Autonomic nervous system

Components and connections of the parasympathetic nervous system

The autonomic nervous system controls a wide range of involuntary and unconscious body functions. Its main branches are the sympathetic nervous system and parasympathetic nervous system. Broadly speaking, the function of the sympathetic nervous system is to mobilize the body for action; the phrase often invoked to describe it is fight-or-flight. The function of the parasympathetic nervous system is to put the body in a state conducive to rest, regeneration, digestion, and reproduction; the phrases often invoked to describe it are "rest and digest" and "feed and breed". Both systems use acetylcholine, but in different ways.

At a schematic level, the sympathetic and parasympathetic nervous systems are organized in essentially the same way: preganglionic neurons in the central nervous system send projections to neurons located in autonomic ganglia, which in turn send output projections to virtually every tissue of the body. In both branches the internal connections, the projections from the central nervous system to the autonomic ganglia, use acetylcholine acting on nicotinic receptors to excite the ganglion neurons. In the parasympathetic nervous system the output connections, the projections from ganglion neurons to tissues outside the nervous system, also release acetylcholine but act on muscarinic receptors. In the sympathetic nervous system the output connections mainly release noradrenaline, although acetylcholine is released at a few points, such as the sudomotor innervation of the sweat glands.
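Since both branches share the same two-synapse layout and differ only in the transmitter and receptor used at each stage, the wiring can be summarized in a small table. The sketch below encodes it as a Python dictionary; the key names are our own, and the entries restate the paragraph above.

# Transmitter and receptor at each synapse of the autonomic nervous
# system, restated from the text above.
AUTONOMIC_SYNAPSES = {
    ("sympathetic", "preganglionic -> ganglion"):
        ("acetylcholine", "nicotinic"),
    ("sympathetic", "ganglion -> target tissue"):
        ("noradrenaline (ACh at sweat glands)", "adrenergic (muscarinic at sweat glands)"),
    ("parasympathetic", "preganglionic -> ganglion"):
        ("acetylcholine", "nicotinic"),
    ("parasympathetic", "ganglion -> target tissue"):
        ("acetylcholine", "muscarinic"),
}

for (branch, synapse), (transmitter, receptor) in AUTONOMIC_SYNAPSES.items():
    print(f"{branch:15s} {synapse:28s} {transmitter} -> {receptor} receptors")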

Direct vascular effects

Acetylcholine in the serum exerts a direct effect on vascular tone by binding to muscarinic receptors present on vascular endothelium. These cells respond by increasing production of nitric oxide, which signals the surrounding smooth muscle to relax, leading to vasodilation.

Central nervous system

Micrograph of the nucleus basalis (of Meynert), which produces acetylcholine in the CNS. LFB-HE stain

In the central nervous system, ACh has a variety of effects on plasticity, arousal and reward. ACh has an important role in the enhancement of alertness when we wake up, in sustaining attention and in learning and memory.

Damage to the cholinergic (acetylcholine-producing) system in the brain has been shown to be associated with the memory deficits associated with Alzheimer's disease. ACh has also been shown to promote REM sleep.

In the brainstem, acetylcholine originates from the pedunculopontine nucleus and the laterodorsal tegmental nucleus, collectively known as the mesopontine tegmentum area or pontomesencephalotegmental complex. In the basal forebrain, it originates from the basal nucleus of Meynert and the medial septal nucleus. In addition, ACh acts as an important internal transmitter in the striatum, which is part of the basal ganglia. It is released by cholinergic interneurons. In humans, non-human primates and rodents, these interneurons respond to salient environmental stimuli with responses that are temporally aligned with the responses of dopaminergic neurons of the substantia nigra.

Memory

Acetylcholine has been implicated in learning and memory in several ways. The anticholinergic drug scopolamine impairs acquisition of new information in humans and animals. In animals, disruption of the supply of acetylcholine to the neocortex impairs the learning of simple discrimination tasks, comparable to the acquisition of factual information, and disruption of the supply of acetylcholine to the hippocampus and adjacent cortical areas produces forgetting comparable to anterograde amnesia in humans.

Diseases and disorders

Myasthenia gravis

The disease myasthenia gravis, characterized by muscle weakness and fatigue, occurs when the body inappropriately produces antibodies against acetylcholine nicotinic receptors, and thus inhibits proper acetylcholine signal transmission. Over time, the motor end plate is destroyed. Drugs that competitively inhibit acetylcholinesterase (e.g., neostigmine, physostigmine, or primarily pyridostigmine) are effective in treating this disorder. They allow endogenously released acetylcholine more time to interact with its respective receptor before being inactivated by acetylcholinesterase in the synaptic cleft (the space between nerve and muscle).

Pharmacology

Blocking, hindering or mimicking the action of acetylcholine has many uses in medicine. Drugs acting on the acetylcholine system are either agonists to the receptors, stimulating the system, or antagonists, inhibiting it. Acetylcholine receptor agonists and antagonists can either have an effect directly on the receptors or exert their effects indirectly, e.g., by affecting the enzyme acetylcholinesterase, which degrades the receptor ligand. Agonists increase the level of receptor activation; antagonists reduce it.

Acetylcholine itself does not have therapeutic value as a drug for intravenous administration because of its multi-faceted (non-selective) action and rapid inactivation by cholinesterase. However, it is used in the form of eye drops to cause constriction of the pupil during cataract surgery, which facilitates quick post-operative recovery.

Nicotine

Nicotine binds to and activates nicotinic acetylcholine receptors, mimicking the effect of acetylcholine at these receptors. When ACh interacts with a nicotinic ACh receptor, it opens a Na+ channel and Na+ ions flow into the cell. This causes depolarization and results in an excitatory post-synaptic potential. Thus, ACh is excitatory on skeletal muscle; the electrical response is fast and short-lived.

Atropine

Atropine is a non-selective competitive antagonist of acetylcholine at muscarinic receptors.

Cholinesterase inhibitors

Many ACh receptor agonists work indirectly by inhibiting the enzyme acetylcholinesterase. The resulting accumulation of acetylcholine causes continuous stimulation of the muscles, glands, and central nervous system, which can result in fatal convulsions if the dose is high. These agents are enzyme inhibitors that increase the action of acetylcholine by delaying its degradation; some have been used as nerve agents (sarin, VX) or pesticides (organophosphates and carbamates). Many toxins and venoms produced by plants and animals also contain cholinesterase inhibitors. In clinical use, they are administered in low doses to reverse the action of muscle relaxants, to treat myasthenia gravis, and to treat symptoms of Alzheimer's disease (e.g., rivastigmine, which increases cholinergic activity in the brain).

Synthesis inhibitors

Organic mercurial compounds, such as methylmercury, have a high affinity for sulfhydryl groups, which causes dysfunction of the enzyme choline acetyltransferase. This inhibition may lead to acetylcholine deficiency, and can have consequences on motor function.

Release inhibitors

Botulinum toxin (Botox) acts by suppressing the release of acetylcholine, whereas the venom of the black widow spider (alpha-latrotoxin) has the reverse effect. Inhibition of ACh release causes paralysis. In a black widow spider bite, ACh supplies are rapidly expended and the muscles begin to contract; if and when the supply is depleted, paralysis occurs.

Comparative biology and evolution

Acetylcholine is used by organisms in all domains of life for a variety of purposes. It is believed that choline, a precursor to acetylcholine, was used by single-celled organisms billions of years ago for synthesizing cell membrane phospholipids. Following the evolution of choline transporters, the abundance of intracellular choline paved the way for choline to become incorporated into other synthetic pathways, including acetylcholine production. Acetylcholine is used by bacteria, fungi, and a variety of animals. Many of the uses of acetylcholine rely on its action on membrane proteins, namely ion channels and G protein-coupled receptors (GPCRs).

The two major types of acetylcholine receptors, muscarinic and nicotinic receptors, have convergently evolved to be responsive to acetylcholine. This means that rather than having evolved from a common homolog, these receptors evolved from separate receptor families. It is estimated that the nicotinic receptor family dates back more than 2.5 billion years. Likewise, muscarinic receptors are thought to have diverged from other GPCRs at least 0.5 billion years ago. Both of these receptor groups have evolved numerous subtypes with unique ligand affinities and signaling mechanisms. The diversity of receptor types enables acetylcholine to create varying responses depending on which receptor types are activated, and allows acetylcholine to dynamically regulate physiological processes.

History

Acetylcholine (ACh) was first identified in 1915 by Henry Hallett Dale for its actions on heart tissue. It was confirmed as a neurotransmitter by Otto Loewi, who initially gave it the name Vagusstoff because it was released from the vagus nerve. Both received the 1936 Nobel Prize in Physiology or Medicine for their work. Acetylcholine was also the first neurotransmitter to be identified.

Neurolinguistics

Surface of the human brain, with Brodmann areas numbered
 
An image of neural pathways in the brain taken using diffusion tensor imaging
 
Neurolinguistics is the study of the neural mechanisms in the human brain that control the comprehension, production, and acquisition of language. As an interdisciplinary field, neurolinguistics draws methods and theories from fields such as neuroscience, linguistics, cognitive science, communication disorders and neuropsychology. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical linguistics and psycholinguistics propose are necessary for producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories using aphasiology, brain imaging, electrophysiology, and computer modeling.

History


Neurolinguistics is historically rooted in the development in the 19th century of aphasiology, the study of linguistic deficits (aphasias) occurring as the result of brain damage. Aphasiology attempts to correlate structure to function by analyzing the effect of brain injuries on language processing. One of the first people to draw a connection between a particular brain area and language processing was Paul Broca, a French surgeon who conducted autopsies on numerous individuals who had speaking deficiencies, and found that most of them had brain damage (or lesions) on the left frontal lobe, in an area now known as Broca's area. Phrenologists had made the claim in the early 19th century that different brain regions carried out different functions and that language was mostly controlled by the frontal regions of the brain, but Broca's research was possibly the first to offer empirical evidence for such a relationship, and has been described as "epoch-making" and "pivotal" to the fields of neurolinguistics and cognitive science. Later, Carl Wernicke, after whom Wernicke's area is named, proposed that different areas of the brain were specialized for different linguistic tasks, with Broca's area handling the motor production of speech, and Wernicke's area handling auditory speech comprehension. The work of Broca and Wernicke established the field of aphasiology and the idea that language can be studied through examining physical characteristics of the brain. Early work in aphasiology also benefited from the early twentieth-century work of Korbinian Brodmann, who "mapped" the surface of the brain, dividing it up into numbered areas based on each area's cytoarchitecture (cell structure) and function; these areas, known as Brodmann areas, are still widely used in neuroscience today.

The coining of the term "neurolinguistics" is attributed to Edith Crowell Trager, Henri Hecaen and Alexandr Luria, in the late 1940s and 1950s; Luria's book "Problems in Neurolinguistics" is likely the first book with neurolinguistics in the title. Harry Whitaker popularized neurolinguistics in the United States in the 1970s, founding the journal "Brain and Language" in 1974.

Although aphasiology is the historical core of neurolinguistics, in recent years the field has broadened considerably, thanks in part to the emergence of new brain imaging technologies (such as PET and fMRI) and time-sensitive electrophysiological techniques (EEG and MEG), which can highlight patterns of brain activation as people engage in various language tasks; electrophysiological techniques, in particular, emerged as a viable method for the study of language in 1980 with the discovery of the N400, a brain response shown to be sensitive to semantic issues in language comprehension. The N400 was the first language-relevant event-related potential to be identified, and since its discovery EEG and MEG have become increasingly widely used for conducting language research.

Discipline

Interaction with other fields

Neurolinguistics is closely related to the field of psycholinguistics, which seeks to elucidate the cognitive mechanisms of language by employing the traditional techniques of experimental psychology; today, psycholinguistic and neurolinguistic theories often inform one another, and there is much collaboration between the two fields.

Much work in neurolinguistics involves testing and evaluating theories put forth by psycholinguists and theoretical linguists. In general, theoretical linguists propose models to explain the structure of language and how language information is organized, psycholinguists propose models and algorithms to explain how language information is processed in the mind, and neurolinguists analyze brain activity to infer how biological structures (populations and networks of neurons) carry out those psycholinguistic processing algorithms. For example, experiments in sentence processing have used the ELAN, N400, and P600 brain responses to examine how physiological brain responses reflect the different predictions of sentence processing models put forth by psycholinguists, such as Janet Fodor and Lyn Frazier's "serial" model, and Theo Vosse and Gerard Kempen's "unification model". Neurolinguists can also make new predictions about the structure and organization of language based on insights about the physiology of the brain, by "generalizing from the knowledge of neurological structures to language structure".

Neurolinguistics research is carried out in all the major areas of linguistics; the main linguistic subfields, and how neurolinguistics addresses them, are given in the table below.

Subfield | Description | Research questions in neurolinguistics
Phonetics | the study of speech sounds | how the brain extracts speech sounds from an acoustic signal; how the brain separates speech sounds from background noise
Phonology | the study of how sounds are organized in a language | how the phonological system of a particular language is represented in the brain
Morphology and lexicology | the study of how words are structured and stored in the mental lexicon | how the brain stores and accesses words that a person knows
Syntax | the study of how multiple-word utterances are constructed | how the brain combines words into constituents and sentences; how structural and semantic information is used in understanding sentences
Semantics | the study of how meaning is encoded in language |

Topics considered

Neurolinguistics research investigates several topics, including where language information is processed, how language processing unfolds over time, how brain structures are related to language acquisition and learning, and how neurophysiology can contribute to speech and language pathology.

Localizations of language processes

Much work in neurolinguistics has, like Broca's and Wernicke's early studies, investigated the locations of specific language "modules" within the brain. Research questions include what course language information follows through the brain as it is processed, whether or not particular areas specialize in processing particular sorts of information, how different brain regions interact with one another in language processing, and how the locations of brain activation differ when a subject is producing or perceiving a language other than his or her first language.

Time course of language processes

Another area of the neurolinguistics literature involves the use of electrophysiological techniques to analyze the rapid processing of language in time. The temporal ordering of specific patterns of brain activity may reflect discrete computational processes that the brain undergoes during language processing; for example, one neurolinguistic theory of sentence parsing proposes that three brain responses (the ELAN, N400, and P600) are products of three different steps in syntactic and semantic processing.

Language acquisition

Another topic is the relationship between brain structures and language acquisition. Research in first language acquisition has already established that infants from all linguistic environments go through similar and predictable stages (such as babbling), and some neurolinguistics research attempts to find correlations between stages of language development and stages of brain development, while other research investigates the physical changes (known as neuroplasticity) that the brain undergoes during second language acquisition, when adults learn a new language. Neuroplasticity is observed during both second language acquisition and language learning more generally; studies of such language exposure have found increases in gray and white matter in children, young adults and the elderly.

Language pathology

Neurolinguistic techniques are also used to study disorders and breakdowns in language, such as aphasia and dyslexia, and how they relate to physical characteristics of the brain.

Technology used

Images of the brain recorded with PET (top) and fMRI (bottom). In the PET image, the red areas are the most active. In the fMRI image, the yellowest areas are the areas that show the greatest difference in activation between two tasks (watching a moving stimulus, versus watching a black screen).

Since one of the focuses of this field is the testing of linguistic and psycholinguistic models, the technology used for experiments is highly relevant to the study of neurolinguistics. Modern brain imaging techniques have contributed greatly to a growing understanding of the anatomical organization of linguistic functions. Brain imaging methods used in neurolinguistics may be classified into hemodynamic methods, electrophysiological methods, and methods that stimulate the cortex directly.

Hemodynamic

Hemodynamic techniques take advantage of the fact that when an area of the brain works at a task, blood is sent to supply that area with oxygen (in what is known as the Blood Oxygen Level-Dependent, or BOLD, response). Such techniques include PET and fMRI. These techniques provide high spatial resolution, allowing researchers to pinpoint the location of activity within the brain; temporal resolution (or information about the timing of brain activity), on the other hand, is poor, since the BOLD response happens much more slowly than language processing. In addition to demonstrating which parts of the brain may subserve specific language tasks or computations, hemodynamic methods have also been used to demonstrate how the structure of the brain's language architecture and the distribution of language-related activation may change over time, as a function of linguistic exposure.

In addition to PET and fMRI, which show which areas of the brain are activated by certain tasks, researchers also use diffusion tensor imaging (DTI), which shows the neural pathways that connect different brain areas, thus providing insight into how different areas interact. Functional near-infrared spectroscopy (fNIRS) is another hemodynamic method used in language tasks.

Electrophysiological

Brain waves recorded using EEG

Electrophysiological techniques take advantage of the fact that when a group of neurons in the brain fire together, they create an electric dipole or current. The technique of EEG measures this electric current using sensors on the scalp, while MEG measures the magnetic fields that are generated by these currents. In addition to these non-invasive methods, electrocorticography has also been used to study language processing. These techniques are able to measure brain activity from one millisecond to the next, providing excellent temporal resolution, which is important in studying processes that take place as quickly as language comprehension and production. On the other hand, the location of brain activity can be difficult to identify in EEG; consequently, this technique is used primarily to investigate how language processes are carried out, rather than where. Research using EEG and MEG generally focuses on event-related potentials (ERPs), which are distinct brain responses (generally realized as negative or positive peaks on a graph of neural activity) elicited in response to a particular stimulus. Studies using ERP may focus on each ERP's latency (how long after the stimulus the ERP begins or peaks), amplitude (how high or low the peak is), or topography (where on the scalp the ERP response is picked up by sensors). Some important and common ERP components include the N400 (a negativity occurring at a latency of about 400 milliseconds), the mismatch negativity, the early left anterior negativity (a negativity occurring at an early latency and a front-left topography), the P600, and the lateralized readiness potential.
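Because ERP components are defined by latency, amplitude, and topography, the core computation behind ERP research is straightforward: time-lock the EEG epochs to stimulus onset and average them, so that activity unrelated to the stimulus cancels out. The Python sketch below illustrates this with synthetic data standing in for real recordings; the injected N400-like deflection and the 300-500 ms measurement window are illustrative assumptions, not values from the text.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real EEG: 50 stimulus-locked epochs from one
# electrode, sampled at 1000 Hz, 0-800 ms after stimulus onset.
n_epochs, n_samples = 50, 800
epochs = rng.normal(0.0, 5.0, (n_epochs, n_samples))  # background noise (uV)
t = np.arange(n_samples)  # time in ms (1 sample = 1 ms here)
# Inject a hypothetical N400-like negativity peaking near 400 ms.
epochs += -3.0 * np.exp(-((t - 400) ** 2) / (2 * 60**2))

# The ERP is the per-timepoint average across epochs: random activity
# cancels, stimulus-locked activity survives.
erp = epochs.mean(axis=0)

# Amplitude and peak latency within an assumed 300-500 ms N400 window.
window = slice(300, 500)
peak = window.start + int(np.argmin(erp[window]))
print(f"N400-like peak: {erp[peak]:.2f} uV at {peak} ms")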

Experimental design

Experimental techniques

Neurolinguists employ a variety of experimental techniques in order to use brain imaging to draw conclusions about how language is represented and processed in the brain. These techniques include the subtraction paradigm, mismatch design, violation-based studies, various forms of priming, and direct stimulation of the brain.

Subtraction

Many language studies, particularly in fMRI, use the subtraction paradigm, in which brain activation in a task thought to involve some aspect of language processing is compared against activation in a baseline task thought to involve similar non-linguistic processes but not to involve the linguistic process. For example, activations while participants read words may be compared to baseline activations while participants read strings of random letters (in an attempt to isolate activation related to lexical processing—the processing of real words), or activations while participants read syntactically complex sentences may be compared to baseline activations while participants read simpler sentences.
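At its core, the subtraction paradigm is a voxelwise difference between task and baseline activation maps. The sketch below illustrates the idea on synthetic arrays standing in for real fMRI estimates; the grid size, threshold, and the "responsive" voxel are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for mean activation maps (e.g., beta estimates)
# over a 4x4x4 voxel grid; real data would come from an fMRI pipeline.
words_map = rng.normal(0.0, 1.0, (4, 4, 4))    # reading real words
letters_map = rng.normal(0.0, 1.0, (4, 4, 4))  # reading random letter strings
words_map[1, 2, 3] += 5.0  # pretend one voxel responds to lexical processing

# Subtraction: activation attributable to the linguistic process is
# what remains after removing the shared non-linguistic component.
contrast = words_map - letters_map

# Report voxels exceeding an arbitrary illustrative threshold.
threshold = 3.0
for voxel in zip(*np.where(contrast > threshold)):
    print("candidate lexical voxel:", voxel, f"{contrast[voxel]:.2f}")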

Mismatch paradigm

The mismatch negativity (MMN) is a rigorously documented ERP component frequently used in neurolinguistic experiments. It is an electrophysiological response that occurs in the brain when a subject hears a "deviant" stimulus in a set of perceptually identical "standards" (as in the sequence s s s s s s s d d s s s s s s d s s s s s d). Since the MMN is elicited only in response to a rare "oddball" stimulus in a set of other stimuli that are perceived to be the same, it has been used to test how speakers perceive sounds and organize stimuli categorically. For example, a landmark study by Colin Phillips and colleagues used the mismatch negativity as evidence that subjects, when presented with a series of speech sounds with varying acoustic parameters, perceived all the sounds as either /t/ or /d/ in spite of the acoustic variability, suggesting that the human brain has representations of abstract phonemes—in other words, the subjects were "hearing" not the specific acoustic features, but only the abstract phonemes. In addition, the mismatch negativity has been used to study syntactic processing and the recognition of word category.
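An MMN experiment requires a stimulus stream in which rare deviants are embedded among frequent standards, as in the sequence quoted above. The sketch below generates such an oddball sequence; the 10% deviant rate and the rule forbidding back-to-back deviants are common design conventions assumed here, not requirements stated in the text.

import random

def oddball_sequence(n_trials=200, deviant_prob=0.10, seed=42):
    """Generate a standard/deviant stream, avoiding back-to-back
    deviants so each deviant is preceded by at least one standard."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "d":
            seq.append("s")  # force a standard after each deviant
        else:
            seq.append("d" if rng.random() < deviant_prob else "s")
    return seq

stream = oddball_sequence()
print("".join(stream[:30]))  # e.g. sssssdsssssssdss...
print("deviant rate:", stream.count("d") / len(stream))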

Violation-based


Many studies in neurolinguistics take advantage of anomalies or violations of syntactic or semantic rules in experimental stimuli, analyzing the brain responses elicited when a subject encounters these violations. For example, sentences beginning with phrases such as *the garden was on the worked, which violates an English phrase structure rule, often elicit a brain response called the early left anterior negativity (ELAN). Violation techniques have been in use since at least 1980, when Kutas and Hillyard first reported ERP evidence that semantic violations elicited an N400 effect. Using similar methods, in 1992, Lee Osterhout first reported the P600 response to syntactic anomalies. Violation designs have also been used for hemodynamic studies (fMRI and PET): Embick and colleagues, for example, used grammatical and spelling violations to investigate the location of syntactic processing in the brain using fMRI. Another common use of violation designs is to combine two kinds of violations in the same sentence and thus make predictions about how different language processes interact with one another; this type of crossing-violation study has been used extensively to investigate how syntactic and semantic processes interact while people read or hear sentences.

Priming

In psycholinguistics and neurolinguistics, priming refers to the phenomenon whereby a subject can recognize a word more quickly if he or she has recently been presented with a word that is similar in meaning or morphological makeup (i.e., composed of similar parts). If a subject is presented with a "prime" word such as doctor and then a "target" word such as nurse, and the subject has a faster-than-usual response time to nurse, the experimenter may assume that the word nurse had already been partially activated when the word doctor was accessed. Priming is used to investigate a wide variety of questions about how words are stored and retrieved in the brain and how structurally complex sentences are processed.
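Analytically, a priming study reduces to comparing response times between related and unrelated prime-target pairs. The sketch below runs that comparison on synthetic response times (Python with SciPy); the means, spread, and sample size are illustrative assumptions, not data from any study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic response times (ms): related pairs (doctor -> nurse) are
# assumed faster than unrelated pairs (table -> nurse). Values are
# illustrative, not experimental data.
related_rt = rng.normal(550, 40, 30)
unrelated_rt = rng.normal(600, 40, 30)

# If related primes reliably speed recognition, the mean difference
# is the priming effect.
effect = unrelated_rt.mean() - related_rt.mean()
t, p = stats.ttest_ind(related_rt, unrelated_rt)
print(f"priming effect: {effect:.1f} ms (t = {t:.2f}, p = {p:.4f})")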

Stimulation

Transcranial magnetic stimulation (TMS), a new noninvasive technique for studying brain activity, uses powerful magnetic fields that are applied to the brain from outside the head. It is a method of exciting or interrupting brain activity in a specific and controlled location, and thus is able to imitate aphasic symptoms while giving the researcher more control over exactly which parts of the brain will be examined. As such, it is a less invasive alternative to direct cortical stimulation, which can be used for similar types of research but requires that the subject's brain be surgically exposed, and is thus only used on individuals who are already undergoing a major brain operation (such as individuals undergoing surgery for epilepsy). The logic behind TMS and direct cortical stimulation is similar to the logic behind aphasiology: if a particular language function is impaired when a specific region of the brain is knocked out, then that region must be somehow implicated in that language function. Few neurolinguistic studies to date have used TMS; direct cortical stimulation and cortical recording (recording brain activity using electrodes placed directly on the brain) have been used with macaque monkeys to make predictions about the behavior of human brains.

Subject tasks

In many neurolinguistics experiments, subjects do not simply sit and listen to or watch stimuli, but also are instructed to perform some sort of task in response to the stimuli. Subjects perform these tasks while recordings (electrophysiological or hemodynamic) are being taken, usually in order to ensure that they are paying attention to the stimuli. At least one study has suggested that the task the subject does has an effect on the brain responses and the results of the experiment.

Lexical decision

The lexical decision task involves subjects seeing or hearing an isolated word and answering whether or not it is a real word. It is frequently used in priming studies, since subjects are known to make a lexical decision more quickly if a word has been primed by a related word (as in "doctor" priming "nurse").

Grammaticality judgment, acceptability judgment

Many studies, especially violation-based studies, have subjects make a decision about the "acceptability" (usually grammatical acceptability or semantic acceptability) of stimuli. Such a task is often used to "ensure that subjects [are] reading the sentences attentively and that they [distinguish] acceptable from unacceptable sentences in the way the [experimenter] expect[s] them to do."

Experimental evidence has shown that the instructions given to subjects in an acceptability judgment task can influence the subjects' brain responses to stimuli. One experiment showed that when subjects were instructed to judge the "acceptability" of sentences they did not show an N400 brain response (a response commonly associated with semantic processing), but that they did show that response when instructed to ignore grammatical acceptability and only judge whether or not the sentences "made sense".

Probe verification

Some studies use a "probe verification" task rather than an overt acceptability judgment; in this paradigm, each experimental sentence is followed by a "probe word", and subjects must answer whether or not the probe word had appeared in the sentence. This task, like the acceptability judgment task, ensures that subjects are reading or listening attentively, but may avoid some of the additional processing demands of acceptability judgments, and may be used no matter what type of violation is being presented in the study.

Truth-value judgment

Subjects may be instructed not to judge whether or not the sentence is grammatically acceptable or logical, but whether the proposition expressed by the sentence is true or false. This task is commonly used in psycholinguistic studies of child language.

Active distraction and double-task

Some experiments give subjects a "distractor" task to ensure that subjects are not consciously paying attention to the experimental stimuli; this may be done to test whether a certain computation in the brain is carried out automatically, regardless of whether the subject devotes attentional resources to it. For example, one study had subjects listen to non-linguistic tones (long beeps and buzzes) in one ear and speech in the other ear, and instructed subjects to press a button when they perceived a change in the tone; this supposedly caused subjects not to pay explicit attention to grammatical violations in the speech stimuli. The subjects showed a mismatch response (MMN) anyway, suggesting that the processing of the grammatical errors was happening automatically, regardless of attention—or at least that subjects were unable to consciously separate their attention from the speech stimuli.

Another related form of experiment is the double-task experiment, in which a subject must perform an extra task (such as sequential finger-tapping or articulating nonsense syllables) while responding to linguistic stimuli; this kind of experiment has been used to investigate the use of working memory in language processing.

Language processing in the brain


Dual stream connectivity between the auditory cortex and frontal lobe of monkeys and humans. Top: The auditory cortex of the monkey (left) and human (right) is schematically depicted on the supratemporal plane and observed from above (with the parieto-frontal operculi removed). Bottom: The brain of the monkey (left) and human (right) is schematically depicted and displayed from the side. Orange frames mark the region of the auditory cortex, which is displayed in the top sub-figures. Top and Bottom: Blue colors mark regions affiliated with the ADS, and red colors mark regions affiliated with the AVS (dark red and blue regions mark the primary auditory fields). Abbreviations: AMYG-amygdala, HG-Heschl's gyrus, FEF-frontal eye field, IFG-inferior frontal gyrus, INS-insula, IPS-intra-parietal sulcus, MTG-middle temporal gyrus, PC-pitch center, PMd-dorsal premotor cortex, PP-planum polare, PT-planum temporale, TP-temporal pole, Spt-sylvian parietal-temporal, pSTG/mSTG/aSTG-posterior/middle/anterior superior temporal gyrus, CL/ML/AL/RTL-caudo-/middle-/antero-/rostrotemporal-lateral belt area, CPB/RPB-caudal/rostral parabelt fields.

Language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.

Throughout the 20th century the dominant model for language processing in the brain was the Wernicke-Lichtheim-Geschwind model, which is based primarily on the analysis of brain-damaged patients. However, thanks to improvements in intra-cortical electrophysiological recordings of monkey and human brains, as well as non-invasive techniques such as fMRI, PET, MEG and EEG, a dual auditory pathway has been revealed. In accordance with this model, there are two pathways that connect the auditory cortex to the frontal lobe, each pathway accounting for different linguistic roles. The auditory ventral stream connects the auditory cortex with the middle temporal gyrus and temporal pole, which in turn connects with the inferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The auditory dorsal stream connects the auditory cortex with the parietal lobe, which in turn connects with the inferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. In accordance with the 'from where to what' model of language evolution, the reason the ADS is characterized by such a broad range of functions is that each indicates a different stage in language evolution.

History of neurolinguistics

Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke-Lichtheim-Geschwind model. This model is primarily based on research conducted on brain-damaged individuals who were reported to possess a variety of language-related disorders. In accordance with this model, words are perceived via a specialized word reception center (Wernicke's area) that is located in the left temporoparietal junction. This region then projects to a word production center (Broca's area) that is located in the left inferior frontal gyrus. Because almost all language input was thought to funnel via Wernicke's area and all language output to funnel via Broca's area, it became extremely difficult to identify the basic properties of each region. This lack of clear definition of the contribution of Wernicke's and Broca's regions to human language made it extremely difficult to identify their homologues in other primates. With the advent of MRI and its application to lesion mapping, however, it was shown that this model is based on incorrect correlations between symptoms and lesions. The refutation of such an influential and dominant model opened the door to new models of language processing in the brain.

Anatomy of the auditory ventral and dorsal streams

In the last two decades, significant advances occurred in our understanding of the neural processing of sounds in primates. Initially by recording of neural activity in the auditory cortices of monkeys, and later elaborated via histological staining and fMRI scanning studies, three auditory fields were identified in the primary auditory cortex, and nine associative auditory fields were shown to surround them (Figure 1 top left). Anatomical tracing and lesion studies further indicated a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields (areas R-RT) projecting to the anterior associative auditory fields (areas AL-RTL), and the posterior primary auditory field (area A1) projecting to the posterior associative auditory fields (areas CL-CM). Recently, evidence has accumulated that indicates homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region of Heschl's gyrus, and by mapping the tonotopic organization of the human primary auditory fields with high-resolution fMRI and comparing it to the tonotopic organization of the monkey primary auditory fields, homology was established between the human anterior primary auditory field and monkey area R (denoted in humans as area hR) and between the human posterior primary auditory field and monkey area A1 (denoted in humans as area hA1). Intra-cortical recordings from the human auditory cortex further demonstrated patterns of connectivity similar to those of the auditory cortex of the monkey. Recording from the surface of the auditory cortex (supra-temporal plane) showed that the anterior Heschl's gyrus (area hR) projects primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG) and the posterior Heschl's gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and the planum temporale (area PT; Figure 1 top right). Consistent with connections from area hR to the aSTG and hA1 to the pSTG is an fMRI study of a patient with impaired sound recognition (auditory agnosia), who showed reduced bilateral activation in areas hR and aSTG but spared activation in the mSTG-pSTG. This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported simultaneous non-overlapping activation clusters in the pSTG and mSTG-aSTG while listening to sounds.

Downstream to the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields (areas AL-RTL) to ventral prefrontal and premotor cortices in the inferior frontal gyrus (IFG) and amygdala. Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole (TP) and then to the IFG. This pathway is commonly referred to as the auditory ventral stream (AVS; Figure 1, bottom left-red arrows). In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields (areas CL-CM) project primarily to dorsolateral prefrontal and premotor cortices (although some projections do terminate in the IFG). Cortical recordings and anatomical tracing studies in monkeys further provided evidence that this processing stream flows from the posterior auditory fields to the frontal lobe via a relay station in the intra-parietal sulcus (IPS). This pathway is commonly referred to as the auditory dorsal stream (ADS; Figure 1, bottom left-blue arrows). Comparing the white matter pathways involved in communication in humans and monkeys with diffusion tensor imaging techniques indicates similar connections of the AVS and ADS in the two species (Monkey, Human). In humans, the pSTG was shown to project to the parietal lobe (sylvian parietal-temporal junction-inferior parietal lobule; Spt-IPL), and from there to dorsolateral prefrontal and premotor cortices (Figure 1, bottom right-blue arrows), and the aSTG was shown to project to the anterior temporal lobe (middle temporal gyrus-temporal pole; MTG-TP) and from there to the IFG (Figure 1 bottom right-red arrows).

Auditory ventral stream

Sound recognition

Accumulating converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed a higher percentage of neurons selective for learned melodic sequences in area R than in area A1, and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl's gyrus (area hR) than in the posterior Heschl's gyrus (area hA1). In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields (Figure 1-area PC in the monkey and mSTG in the human) processes pitch attributes that are necessary for the recognition of auditory objects. The anterior auditory fields of monkeys were also shown to be selective for con-specific vocalizations in intra-cortical recordings and functional imaging. One fMRI monkey study further demonstrated a role of the aSTG in the recognition of individual voices. The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with isolation of auditory objects from background noise, and with the recognition of spoken words, voices, melodies, environmental sounds, and non-speech communicative sounds. A meta-analysis of fMRI studies further demonstrated a functional dissociation between the left mSTG and aSTG, with the former processing short speech units (phonemes) and the latter processing longer units (e.g., words, environmental sounds). A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not the pSTG, was more active when the patient listened to speech in her native language than in an unfamiliar foreign language. Consistently, electro-stimulation of the aSTG of this patient resulted in impaired speech perception. Intra-cortical recordings from the right and left aSTG further demonstrated that speech is processed laterally to music. An fMRI study of a patient with impaired sound recognition (auditory agnosia) due to brainstem damage also showed reduced activation in areas hR and aSTG of both hemispheres when hearing spoken words and environmental sounds. Recordings from the anterior auditory cortex of monkeys while they maintained learned sounds in working memory, and the debilitating effect of induced lesions to this region on working memory recall, further implicate the AVS in maintaining perceived auditory objects in working memory. In humans, area mSTG-aSTG was also reported to be active during rehearsal of heard syllables with MEG and fMRI. The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words and that it is independent of working memory in the ADS, which mediates inner speech. Working memory studies in monkeys also suggest that in monkeys, in contrast to humans, the AVS is the dominant working memory store.

In humans, downstream to the aSTG, the MTG and TP are thought to constitute the semantic lexicon, which is a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships. The primary evidence for this role of the MTG-TP is that patients with damage to this region (e.g., patients with semantic dementia or herpes simplex virus encephalitis) are reported to have an impaired ability to describe visual and auditory objects and a tendency to commit semantic errors when naming objects (i.e., semantic paraphasia). Semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage and were shown to occur in non-aphasic patients after electro-stimulation of this region or the underlying white matter pathway. Two meta-analyses of the fMRI literature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text; and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences.

Sentence comprehension

In addition to extracting meaning from sounds, the MTG-TP region of the AVS appears to have a role in sentence comprehension, possibly by merging concepts together (e.g., merging the concepts 'blue' and 'shirt' to create the concept of a 'blue shirt'). The role of the MTG in extracting meaning from sentences has been demonstrated in functional imaging studies reporting stronger activation in the anterior MTG when proper sentences are contrasted with lists of words, sentences in a foreign or nonsense language, scrambled sentences, sentences with semantic or syntactic violations, and sentence-like sequences of environmental sounds. One fMRI study in which participants were instructed to read a story further correlated activity in the anterior MTG with the amount of semantic and syntactic content each sentence contained. An EEG study that contrasted cortical activity while reading sentences with and without syntactic violations in healthy participants and patients with MTG-TP damage concluded that the MTG-TP in both hemispheres participates in the automatic (rule-based) stage of syntactic analysis (ELAN component), and that the left MTG-TP is also involved in a later controlled stage of syntax analysis (P600 component). Patients with damage to the MTG-TP region have also been reported to have impaired sentence comprehension.

Bilaterality

In contradiction to the Wernicke-Lichtheim-Geschwind model, which implies that sound recognition occurs solely in the left hemisphere, studies that examined the properties of the right or left hemisphere in isolation via unilateral hemispheric anesthesia (i.e., the Wada procedure) or intra-cortical recordings from each hemisphere provided evidence that sound recognition is processed bilaterally. Moreover, a study that instructed patients with disconnected hemispheres (i.e., split-brain patients) to match spoken words to written words presented to the right or left hemifields reported vocabulary in the right hemisphere that almost matches the left hemisphere in size (the right-hemisphere vocabulary was equivalent to that of a healthy 11-year-old child). This bilateral recognition of sounds is also consistent with the finding that unilateral lesion to the auditory cortex rarely results in a deficit in auditory comprehension (i.e., auditory agnosia), whereas a second lesion to the remaining hemisphere (which could occur years later) does. Finally, as mentioned earlier, an fMRI scan of an auditory agnosia patient demonstrated bilaterally reduced activation in the anterior auditory cortices, and bilateral electro-stimulation of these regions in both hemispheres resulted in impaired speech recognition.

Auditory dorsal stream

Sound localization

The most established role of the ADS is in audiospatial processing. This is evidenced by studies that recorded neural activity from the auditory cortex of monkeys and correlated the strongest selectivity for changes in sound location with the posterior auditory fields (areas CM-CL), intermediate selectivity with primary area A1, and very weak selectivity with the anterior auditory fields. In humans, behavioral studies of brain-damaged patients and EEG recordings from healthy participants demonstrated that sound localization is processed independently of sound recognition, and thus is likely independent of processing in the AVS. Consistently, a working memory study reported two independent working memory stores, one for acoustic properties and one for locations. Functional imaging studies that contrasted sound discrimination and sound localization reported a correlation between sound discrimination and activation in the mSTG-aSTG, and a correlation between sound localization and activation in the pSTG and PT, with some studies further reporting activation in the Spt-IPL region and frontal lobe. Some fMRI studies also reported that activation in the pSTG and Spt-IPL regions increased when individuals perceived sounds in motion. EEG studies using source localization also identified the pSTG-Spt region of the ADS as the sound localization processing center. A combined fMRI and MEG study corroborated the role of the ADS in audiospatial processing by demonstrating that changes in sound location resulted in activation spreading from Heschl's gyrus posteriorly along the pSTG and terminating in the IPL. In another MEG study, the IPL and frontal lobe were shown to be active during maintenance of sound locations in working memory.

Guidance of eye movements

In addition to localizing sounds, the ADS also appears to encode sound location in memory and to use this information for guiding eye movements. Evidence for the role of the ADS in encoding sounds into working memory is provided by studies that trained monkeys in a delayed matching-to-sample task and reported activation in areas CM-CL and the IPS during the delay phase. The influence of this spatial information on eye movements occurs via projections of the ADS into the frontal eye field (FEF; a premotor area responsible for guiding eye movements) in the frontal lobe. This is demonstrated by anatomical tracing studies that reported connections between areas CM-CL-IPS and the FEF, and by electrophysiological recordings of neural activity in both the IPS and the FEF prior to saccadic eye movements toward auditory targets.

Integration of locations with auditory objects

A surprising function of the ADS is the discrimination and possible identification of sounds, a role commonly ascribed to the anterior STG and STS of the AVS. However, electrophysiological recordings from the posterior auditory cortex (areas CM-CL) and IPS of monkeys, as well as a PET study in monkeys, reported neurons that are selective for monkey vocalizations. One of these studies also reported neurons in areas CM-CL characterized by dual selectivity for both a vocalization and a sound location. A monkey study that recorded electrophysiological activity from neurons in the posterior insula also reported neurons that discriminate monkey calls based on the identity of the speaker. Similarly, human fMRI studies in which participants were instructed to discriminate voices reported an activation cluster in the pSTG. A study that recorded activity from the auditory cortex of an epileptic patient further reported that the pSTG, but not the aSTG, was selective for the presence of a new speaker. A study that scanned fetuses in their third trimester of pregnancy with fMRI further reported activation in area Spt when the hearing of voices was contrasted with pure tones. The researchers also reported that a sub-region of area Spt was more selective to the mother’s voice than to unfamiliar female voices. This study thus suggests that the ADS is capable of identifying voices in addition to discriminating them.

The manner in which sound recognition in the pSTG-PT-Spt regions of the ADS differs from sound recognition in the anterior STG and STS of the AVS was demonstrated via electro-stimulation in an epileptic patient. This study reported that electro-stimulation of the aSTG resulted in changes in the perceived pitch of voices (including the patient’s own voice), whereas electro-stimulation of the pSTG resulted in reports that her voice was “drifting away.” This report indicates a role for the pSTG in the integration of sound location with an individual voice. Consistent with this role of the ADS is a study reporting that patients with AVS damage but a spared ADS (surgical removal of the anterior STG/MTG) were no longer capable of isolating environmental sounds in the contralesional space, whereas their ability to isolate and discriminate human voices remained intact. Also supporting a role for the pSTG-PT-Spt of the ADS in integrating auditory objects with sound locations are studies demonstrating a role for this region in the isolation of specific sounds. For example, two functional imaging studies correlated circumscribed pSTG-PT activation with the spreading of sounds across an increasing number of locations. Accordingly, an fMRI study correlated the perception of acoustic cues necessary for separating musical sounds (pitch chroma) with pSTG-PT activation.

Integration of phonemes with lip-movements

Although sound perception is primarily ascribed to the AVS, the ADS appears to be associated with several aspects of speech perception. For instance, in a meta-analysis of fMRI studies in which the auditory perception of phonemes was contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG. Consistent with the role of the ADS in discriminating phonemes, studies have ascribed the integration of phonemes and their corresponding lip movements (i.e., visemes) to the pSTS of the ADS. For example, an fMRI study correlated activation in the pSTS with the McGurk illusion (in which hearing the syllable “ba” while seeing the viseme “ga” results in the perception of the syllable “da”). Another study found that using magnetic stimulation to interfere with processing in this area further disrupts the McGurk illusion. The association of the pSTS with the audio-visual integration of speech has also been demonstrated in a study that presented participants with pictures of faces and spoken words of varying quality; the pSTS responded selectively to the combined increase in the clarity of faces and spoken words. Corroborating evidence comes from an fMRI study that contrasted the perception of audio-visual speech with audio-visual non-speech (pictures and sounds of tools) and reported speech-selective compartments in the pSTS. In addition, an fMRI study that contrasted congruent audio-visual speech with incongruent speech (pictures of still faces) reported pSTS activation.

Phonological long-term memory

A growing body of evidence indicates that humans, in addition to having a long-term store for word meanings located in the MTG-TP of the AVS (i.e., the semantic lexicon), also have a long-term store for the names of objects located in the Spt-IPL region of the ADS (i.e., the phonological lexicon). For example, a study examining patients with damage to the AVS (MTG damage) or damage to the ADS (IPL damage) reported that MTG damage results in individuals incorrectly identifying objects (e.g., calling a “goat” a “sheep,” an example of semantic paraphasia). Conversely, IPL damage results in individuals correctly identifying the object but incorrectly pronouncing its name (e.g., saying “gof” instead of “goat,” an example of phonemic paraphasia). Semantic paraphasia errors have also been reported in patients receiving intra-cortical electrical stimulation of the AVS (MTG), and phonemic paraphasia errors have been reported in patients whose ADS (pSTG, Spt, and IPL) received intra-cortical electrical stimulation. Further supporting the role of the ADS in object naming is an MEG study that localized activity in the IPL during both the learning and the recall of object names. A study that induced magnetic interference in participants’ IPL while they answered questions about an object reported that the participants were capable of answering questions about the object’s characteristics or perceptual attributes but were impaired when asked whether the word contained two or three syllables. An MEG study has also correlated recovery from anomia (a disorder characterized by an impaired ability to name objects) with changes in IPL activation. Further supporting the role of the IPL in encoding the sounds of words are studies reporting that, compared to monolinguals, bilinguals have greater cortical density in the IPL but not the MTG. Because evidence shows that, in bilinguals, different phonological representations of the same word share the same semantic representation, this increase in density in the IPL supports the existence of the phonological lexicon: the semantic lexicon of bilinguals is expected to be similar in size to that of monolinguals, whereas their phonological lexicon should be twice the size. Consistent with this reasoning, cortical density in the IPL of monolinguals also correlates with vocabulary size. Notably, the functional dissociation of the AVS and ADS in object-naming tasks is supported by cumulative evidence from reading research showing that semantic errors are correlated with MTG impairment and phonemic errors with IPL impairment. Based on these associations, the semantic analysis of text has been linked to the inferior temporal gyrus and MTG, and the phonological analysis of text has been linked to the pSTG-Spt-IPL.

Phonological working memory

Working memory is often treated as the temporary activation of the representations stored in long-term memory that are used for speech (phonological representations). This sharing of resources between working memory and speech is evident from the finding that speaking during rehearsal results in a significant reduction in the number of items that can be recalled from working memory (articulatory suppression). The involvement of the phonological lexicon in working memory is also evidenced by the tendency of individuals to make more errors when recalling words from a recently learned list of phonologically similar words than from a list of phonologically dissimilar words (the phonological similarity effect). Studies have also found that speech errors committed during reading are remarkably similar to speech errors made during the recall of recently learned, phonologically similar words from working memory. Patients with IPL damage have also been observed to exhibit both speech production errors and impaired working memory. Finally, the view that verbal working memory is the result of temporarily activating phonological representations in the ADS is compatible with recent models that describe working memory as the combination of maintaining representations in the mechanism of attention in parallel with temporarily activating representations in long-term memory. It has been argued that the role of the ADS in the rehearsal of lists of words is the reason this pathway is active during sentence comprehension.

Evolution of language

It is presently unknown why so many functions are ascribed to the human ADS. An attempt to unify these functions under a single framework was made in the 'From where to what' model of language evolution. According to this model, each function of the ADS reflects a different intermediate phase in the evolution of language. The roles of sound localization and of integrating sound location with voices and auditory objects are interpreted as evidence that the origin of speech is the exchange of contact calls (calls used to report location in cases of separation) between mothers and offspring. The role of the ADS in the perception and production of intonations is interpreted as evidence that speech began by modifying contact calls with intonations, possibly for distinguishing alarm contact calls from safe contact calls. The role of the ADS in encoding the names of objects (phonological long-term memory) is interpreted as evidence of a gradual transition from modifying calls with intonations to complete vocal control. The role of the ADS in the integration of lip movements with phonemes and in speech repetition is interpreted as evidence that spoken words were learned by infants mimicking their parents' vocalizations, initially by imitating their lip movements. The role of the ADS in phonological working memory is interpreted as evidence that the words learned through mimicry remained active in the ADS even when not spoken. This resulted in individuals capable of rehearsing a list of vocalizations, which enabled the production of words with several syllables. Further developments in the ADS enabled the rehearsal of lists of words, which provided the infrastructure for communicating with sentences.

In more detail, the 'from where to what' model hypothesizes seven stages of language evolution:
1. The origin of speech is the exchange of contact calls between mothers and offspring, used to relocate each other in cases of separation.
2. Offspring of early Homo modified the contact calls with intonations in order to emit two types of contact calls: calls that signal a low level of distress and calls that signal a high level of distress.
3. The use of two types of contact calls enabled the first question-answer conversation. In this scenario, the offspring emits a low-level distress call to express a desire to interact with an object, and the mother responds with a low-level distress call to enable the interaction or a high-level distress call to prohibit it.
4. The use of intonations improved over time, and eventually individuals acquired sufficient vocal control to invent new words for objects.
5. At first, offspring learned the calls from their parents by imitating their lip movements.
6. As the learning of calls improved, babies learned new calls (i.e., phonemes) through lip imitation only during infancy. After that period, the memory of phonemes lasted for a lifetime, and older children became capable of learning new calls (through mimicry) without observing their parents' lip movements.
7. Individuals became capable of rehearsing sequences of calls. This enabled the learning of words with several syllables, which increased vocabulary size. Further developments of the brain circuit responsible for rehearsing polysyllabic words resulted in individuals capable of rehearsing lists of words (phonological working memory), which served as the platform for communication with sentences.
