FOXP2 is found in many vertebrates, where it plays an important role in mimicry in birds (such as birdsong) and echolocation in bats. FOXP2 is also required for the proper development of speech and language in humans. In humans, mutations in FOXP2 cause the severe speech and language disorder developmental verbal dyspraxia. Studies of the gene in mice and songbirds indicate that it is necessary for vocal imitation and the related motor learning. Outside the brain, FOXP2 has also been implicated in development of other tissues such as the lung and digestive system.
Initially identified in 1998 as the genetic cause of a speech disorder in a British family designated the KE family, FOXP2 was the first gene discovered to be associated with speech and language and was subsequently dubbed "the language gene".
However, other genes are necessary for human language development, and a
2018 analysis confirmed that there was no evidence of recent positive evolutionary selection of FOXP2 in humans.
Structure and function
As a FOX protein, FOXP2 contains a forkhead-box domain. In addition, it contains a polyglutamine tract, a zinc finger and a leucine zipper.
The protein binds to the DNA of other genes through its forkhead-box domain and controls their activity. Only a few target genes have been identified so far, but researchers believe that FOXP2 may regulate up to several hundred genes. The forkhead box P2 protein is active in the brain and other tissues before and after birth, and many studies show that it is essential for the growth of nerve cells and for transmission between them. FOXP2 is also involved in synaptic plasticity, making it important for learning and memory.
FOXP2 is required for proper brain and lung development. Knockout mice with only one functional copy of the FOXP2 gene have significantly reduced vocalizations as pups. Knockout mice with no functional copies of FOXP2 are runted, display abnormalities in brain regions such as the Purkinje layer, and die an average of 21 days after birth from inadequate lung development.
FOXP2 is expressed in many areas of the brain, including the basal ganglia and inferior frontal cortex, where it is essential for brain maturation and speech and language development.
In mice, the gene was found to be expressed twice as highly in male pups as in female pups, which correlated with the male pups producing nearly twice as many vocalisations when separated from their mothers. Conversely, in human children aged 4–5, the gene was found to be 30% more expressed in Broca's area of female children. The researchers suggested that the gene is more active in "the more communicative sex".
Three amino acid substitutions distinguish the human FOXP2 protein from that found in mice, while two amino acid substitutions distinguish the human FOXP2 protein from that found in chimpanzees, but only one of these changes is unique to humans. Evidence from genetically manipulated mice and human neuronal cell models suggests that these changes affect the neural functions of FOXP2.
Clinical significance
The FOXP2 gene has been implicated in several cognitive functions, including general brain development, language, and synaptic plasticity. The gene encodes the forkhead box P2 protein, which acts as a transcription factor. Transcription factors regulate other genes, and the forkhead box P2 protein has been suggested to target hundreds of them. This prolific involvement suggests that the role of FOXP2 is much more extensive than originally thought.
Other transcriptional targets have been investigated without finding a link to FOXP2. In particular, FOXP2 has been studied in relation to autism and dyslexia, but no causative mutation has been identified. One well-established target is language. Although some research disputes this correlation, the majority of studies indicate that a mutated FOXP2 causes the observed deficit in speech production.
There is some evidence that the linguistic impairments associated with a mutation of the FOXP2
gene are not simply the result of a fundamental deficit in motor
control. Brain imaging of affected individuals indicates functional
abnormalities in language-related cortical and basal ganglia regions,
demonstrating that the problems extend beyond the motor system.
Mutations in FOXP2 are among several (26 genes plus 2 intergenic) loci which correlate with ADHD diagnosis in adults – clinical ADHD is an umbrella label for a heterogeneous group of genetic and neurological phenomena which may result from FOXP2 mutations or other causes.
It is theorized that translocation of the 7q31.2 region containing the FOXP2 gene causes a severe language impairment called developmental verbal dyspraxia (DVD) or childhood apraxia of speech (CAS). So far, this type of mutation has been discovered in only three families worldwide, including the original KE family.
A missense mutation causing an arginine-to-histidine substitution (R553H) in the DNA-binding domain is thought to be the abnormality in the KE family. This substitution replaces a strongly basic arginine residue with histidine, whose side chain is only weakly basic and readily titratable, and therefore chemically reactive, near the body's pH. A heterozygous nonsense mutation, the R328X variant, produces a truncated protein and is involved in speech and language difficulties in one KE individual and two of their close family members. The R553H and R328X mutations also affect nuclear localization, DNA binding, and the transactivation (increased gene expression) properties of FOXP2.
These individuals present with deletions, translocations, and missense mutations. When tasked with repetition and verb generation in fMRI studies, individuals with DVD/CAS showed decreased activation in the putamen and Broca's area, regions classically associated with language function. This is one of the primary reasons that FOXP2 is known as a language gene. Affected individuals have delayed onset of speech and difficulty with articulation, including slurred speech, stuttering, and poor pronunciation, as well as dyspraxia.
It is believed that a major part of this speech deficit comes from an
inability to coordinate the movements necessary to produce normal speech
including mouth and tongue shaping. Additionally, there are more general impairments with the processing of the grammatical and linguistic aspects of speech.
These findings suggest that the effects of FOXP2 are not limited to
motor control, as they include comprehension among other cognitive
language functions. General mild motor and cognitive deficits are noted
across the board. Clinically these patients can also have difficulty coughing, sneezing, or clearing their throats.
While FOXP2 has been proposed to play a critical role in the
development of speech and language, this view has been challenged by the
fact that the gene is also expressed in other mammals as well as birds
and fish that do not speak.
It has also been proposed that the FOXP2 transcription factor is not so much a hypothetical 'language gene' as part of a regulatory machinery related to the externalization of speech.
Evolution
The FOXP2 gene is highly conserved in mammals. The human gene differs from that in non-human primates by the substitution of two amino acids, a threonine to asparagine substitution at position 303 (T303N) and an asparagine to serine substitution at position 325 (N325S). In mice it differs from that of humans by three substitutions, and in zebra finch by seven amino acids. One of the two amino acid differences between human and chimps also arose independently in carnivores and bats. Similar FOXP2 proteins can be found in songbirds, fish, and reptiles such as alligators.
DNA sampled from Homo neanderthalensis bones indicates that their FOXP2 gene was slightly different from, though largely similar to, that of Homo sapiens (i.e. humans). Previous genetic analysis had suggested that the H. sapiens FOXP2 gene became fixed in the population around 125,000 years ago.
Some researchers consider the Neanderthal findings to indicate that the
gene instead swept through the population over 260,000 years ago,
before our most recent common ancestor with the Neanderthals. Other researchers offer alternative explanations for how the H. sapiens version would have appeared in Neanderthals living 43,000 years ago.
According to a 2002 study, the FOXP2 gene showed indications of recent positive selection. Some researchers have speculated that positive selection is crucial for the evolution of language in humans. Others, however, were unable to find a clear association between species with learned vocalizations and similar mutations in FOXP2.
A 2018 analysis of a large sample of globally distributed genomes confirmed that there was no evidence of positive selection, suggesting that the original signal of positive selection may have been driven by sample composition. Insertion of both human mutations into mice, whose version of FOXP2 otherwise differs from the human and chimpanzee versions by only one additional base pair, causes changes in vocalizations as well as other behavioral changes, such as a reduction in exploratory tendencies and a decrease in maze learning time. A reduction in dopamine levels and changes in the morphology of certain nerve cells are also observed.
FOXP2 downregulates CNTNAP2, a member of the neurexin family found in neurons. CNTNAP2 is associated with common forms of language impairment.
FOXP2 also downregulates SRPX2, the 'Sushi Repeat-containing Protein X-linked 2', directly reducing its expression by binding to the gene's promoter. SRPX2 is involved in glutamatergic synapse formation in the cerebral cortex and is more highly expressed in childhood. SRPX2 appears to specifically increase the number of glutamatergic synapses in the brain, while leaving inhibitory GABAergic synapses unchanged and not affecting dendritic spine length or shape. FOXP2 activity, by contrast, alters dendritic spine length and shape in addition to their number, indicating that it has other regulatory roles in dendritic morphology.
In other animals
Chimpanzees
In chimpanzees, FOXP2 differs from the human version by two amino acids.
A study in Germany sequenced FOXP2's complementary DNA in chimps and
other species to compare it with human complementary DNA in order to
find the specific changes in the sequence.
FOXP2 was found to be functionally different in humans compared to chimps. Because FOXP2 also affects other genes, its downstream effects are being studied as well. The researchers suggested that such studies may also have clinical applications for disorders that affect human language ability.
Mice
In mouse FOXP2 gene knockouts, loss of both copies of the gene causes severe motor impairment related to cerebellar abnormalities and a lack of the ultrasonic vocalisations normally elicited when pups are removed from their mothers.
These vocalizations have important communicative roles in
mother–offspring interactions. Loss of one copy was associated with
impairment of ultrasonic vocalisations and a modest developmental delay.
Male mice on encountering female mice produce complex ultrasonic
vocalisations that have characteristics of song. Mice that have the R552H point mutation carried by the KE family show cerebellar reduction and abnormal synaptic plasticity in striatal and cerebellar circuits.
Humanized FOXP2 mice display altered cortico-basal ganglia circuits. The human allele of the FOXP2 gene was transferred into the mouse embryos through homologous recombination
to create humanized FOXP2 mice. The human variant of FOXP2 also had an effect on the exploratory behavior of the mice. Compared with knockout mice carrying one non-functional copy of FOXP2, the humanized mouse model showed opposite effects on dopamine levels, synaptic plasticity, expression patterns in the striatum, and exploratory behavior.
When FOXP2 expression was altered in mice, it affected many different processes, including motor-skill learning and synaptic plasticity. Additionally, FOXP2 is found more in the sixth cortical layer than in the fifth, consistent with it having greater roles in sensory integration. FOXP2 was also found in the medial geniculate nucleus of the mouse brain, the thalamic relay through which auditory inputs must pass. Its mutations were found to play a role in delaying the development of language learning. FOXP2 was also found to be highly expressed in the Purkinje cells and cerebellar nuclei of the cortico-cerebellar circuits. High FOXP2 expression has also been shown in the spiny neurons that express type 1 dopamine receptors in the striatum, substantia nigra, subthalamic nucleus and ventral tegmental area.
The negative effects of FOXP2 mutations in these brain regions on motor abilities have been shown in mice through laboratory tasks. When analyzing the brain circuitry in these cases, scientists found higher dopamine levels and decreased dendrite lengths, which caused defects in long-term depression, a process implicated in learning and maintaining motor functions. EEG studies also showed that these mice had increased levels of activity in the striatum, which contributed to these results. There is further evidence that mutations in targets of the FOXP2 gene play roles in schizophrenia, epilepsy, autism, bipolar disorder and intellectual disabilities.
Bats
FOXP2 has implications for the development of bat echolocation. In contrast to apes and mice, FOXP2 is extremely diverse in echolocating bats. Twenty-two sequences from non-bat eutherian mammals revealed a total of 20 nonsynonymous mutations, whereas half that number of bat sequences showed 44 nonsynonymous mutations. All cetaceans share three amino acid substitutions, but no differences were found between echolocating toothed whales and non-echolocating baleen cetaceans. Within bats, however, amino acid variation correlated with different echolocating types.
Birds
In songbirds, FOXP2 most likely regulates genes involved in neuroplasticity. Gene knockdown of FOXP2 in area X of the basal ganglia in songbirds results in incomplete and inaccurate song imitation.[8] Overexpression of FOXP2 was accomplished through injection of adeno-associated virus serotype 1 (AAV1) into area X of the brain. This overexpression produced effects similar to those of knockdown: juvenile zebra finches were unable to accurately imitate their tutors. Similarly, in adult canaries, higher FOXP2 levels also correlate with song changes.
Levels of FOXP2 in adult zebra finches are significantly higher when males direct their song to females than when they sing in other contexts. "Directed" singing refers to a male singing to a female, usually as a courtship display; "undirected" singing occurs when, for example, a male sings while other males are present or while alone. Studies have found that FoxP2 levels vary depending on the social context: when the birds were singing undirected song, FoxP2 expression decreased in Area X, whereas this downregulation was not observed and FoxP2 levels remained stable in birds singing directed song.
Differences between song-learning and non-song-learning birds have been shown to be caused by differences in FOXP2 gene expression, rather than differences in the amino acid sequence of the FOXP2 protein.
Zebrafish
In zebrafish, FOXP2 is expressed in the ventral and dorsal thalamus, the telencephalon and the diencephalon, where it likely plays a role in nervous system development. The zebrafish FOXP2 gene has 85% similarity to the human FOXP2 ortholog.
History
FOXP2 and its gene were discovered as a result of investigations on an English family known as the KE family, half of whom (15 individuals across three generations) had a speech and language disorder called developmental verbal dyspraxia. Their case was studied at the Institute of Child Health of University College London.[62] In 1990, Myrna Gopnik, Professor of Linguistics at McGill University,
reported that the affected members of the KE family had a severe speech impediment with largely incomprehensible speech, characterized mainly by grammatical deficits.
She hypothesized that the basis of the disorder was not a learning or cognitive disability but genetic factors affecting mainly grammatical ability. (Her hypothesis led to the popularised notion of a "grammar gene" and the controversial idea of a grammar-specific disorder.) In 1995, researchers at the University of Oxford and the Institute of Child Health found that the disorder was purely genetic. Remarkably, the inheritance of the disorder from one generation to the next was consistent with autosomal dominant inheritance, i.e., mutation of only a single gene on an autosome (non-sex chromosome) acting in a dominant fashion. This is one of the few known examples of Mendelian
(monogenic) inheritance for a disorder affecting speech and language
skills, which typically have a complex basis involving multiple genetic
risk factors.
In 1998, Oxford University geneticists Simon Fisher, Anthony Monaco, Cecilia S. L. Lai, Jane A. Hurst, and Faraneh Vargha-Khadem identified an autosomal dominant monogenic locus within a small region of chromosome 7, using DNA samples taken from affected and unaffected members. The chromosomal region (locus) contained 70 genes.
The locus was given the official name "SPCH1" (for
speech-and-language-disorder-1) by the Human Genome Nomenclature
committee. Mapping and sequencing of the chromosomal region was
performed with the aid of bacterial artificial chromosome clones.
Around this time, the researchers identified an individual who was
unrelated to the KE family but had a similar type of speech and language
disorder. In this case, the child, known as CS, carried a chromosomal
rearrangement (a translocation)
in which part of chromosome 7 had become exchanged with part of
chromosome 5. The site of breakage of chromosome 7 was located within
the SPCH1 region.
In 2001, the team identified in CS that the mutation lies in the middle of a protein-coding gene. Using a combination of bioinformatics and RNA analyses, they discovered that the gene codes for a novel protein belonging to the forkhead-box (FOX) group of transcription factors. As such, it was assigned the official name FOXP2. When the researchers sequenced the FOXP2 gene in the KE family, they found a heterozygous point mutation shared by all the affected individuals but not by unaffected members of the family or other people. This mutation causes an amino-acid substitution that impairs the DNA-binding domain of the FOXP2 protein. Further screening of the gene identified multiple additional cases of FOXP2 disruption, including different point mutations and chromosomal rearrangements, providing evidence that damage to one copy of this gene is sufficient to derail speech and language development.
Biolinguistics
Biolinguistics can be defined as the study of the biology and evolution of language. It is highly interdisciplinary, drawing on fields such as biology, linguistics, psychology, anthropology, mathematics, and neurolinguistics to explain the formation of language. It seeks to provide a framework by which the fundamentals of the faculty of language can be understood.
The field was first introduced by Massimo Piattelli-Palmarini, professor of Linguistics and Cognitive Science at the University of Arizona, in 1971 at an international meeting at the Massachusetts Institute of Technology (MIT).
Biolinguistics, also called the biolinguistic enterprise or the biolinguistic approach, is believed to have its origins in Noam Chomsky's and Eric Lenneberg's
work on language acquisition that began in the 1950s as a reaction to
the then-dominant behaviorist paradigm. Fundamentally, biolinguistics
challenges the view of human language acquisition as a behavior based on
stimulus-response interactions and associations. Chomsky and Lenneberg argued against it, holding that knowledge of language is innate. In the 1960s, Chomsky proposed the Language Acquisition Device (LAD) as a hypothetical tool for language acquisition that only humans are born with. Similarly, Lenneberg (1967) formulated the Critical Period Hypothesis, whose main idea is that language acquisition is biologically constrained. These works are regarded as pioneering in the shaping of biolinguistic thought and marked the beginning of a paradigm shift in the study of language.
Origins of biolinguistics
The investigation of the biological foundations of language is associated with two historical periods: the 19th century (primarily via Darwinian evolutionary theory) and the 20th century (primarily via the integration of mathematical linguistics, in the form of Chomskyan generative grammar, with neuroscience).
19th century: Darwin's theory of evolution
Darwinism
inspired many researchers to study language, in particular the
evolution of language, via the lens of biology. Darwin's theory
regarding the origin of language attempts to answer three important
questions:
Did individuals undergo something like selection as they evolved?
Did selection play a role in producing the capacity for language in humans?
If selection did play a role, was it primarily responsible for the emergence of language, or was it just one of several contributing causes?
Dating all the way back to 1821, German linguist August Schleicher was a representative pioneer of biolinguistics, discussing the evolution of language based on Darwin's theory of evolution. Since linguistics had been believed to be a form of historical science under the influence of the Société de Linguistique de Paris, speculation about the origin of language was not permitted. As a result, hardly any prominent linguist wrote about the origin of language, apart from the German linguist Hugo Schuchardt.
Darwinism addressed the arguments of other researchers and scholars, such as Max Müller, by arguing that language use, while requiring a certain mental capacity, also stimulates brain development, enabling long trains of thought and strengthening the power of thought. Darwin drew an extended analogy between the evolution of languages and of species, noting in each domain the presence of rudiments, of crossing and blending, and of variation, and remarking on how each developed gradually through a process of struggle.
20th century: Biological foundation of language
The first phase in the development of biolinguistics runs through the late 1960s, culminating in the publication of Lenneberg's Biological Foundations of Language (1967). During the first phase, work focused on:
specifying the boundary conditions for human language as a system of cognition;
language development as it presents itself in the acquisition sequence that children go through when they learn a language;
genetics of language disorders that create specific language disabilities, including dyslexia and deafness;
language evolution.
During this period, the greatest progress was made in coming to a
better understanding of the defining properties of human language as a
system of cognition. Three landmark events shaped the modern field of
biolinguistics: two important conferences were convened in the 1970s,
and a retrospective article was published in 1997 by Lyle Jenkins.
1974: The first official biolinguistics conference was organized by Piattelli-Palmarini, bringing together evolutionary biologists, neuroscientists, linguists, and others interested in the development of language in the individual, its origins and evolution.
1976: Another conference was held by the New York Academy of Sciences, after which numerous works on the origin of language were published.
1997: For the 40th anniversary of transformational-generative grammar, Lyle Jenkins wrote an article titled "Biolinguistics: Structure, Development and Evolution of Language".
The second phase began in the late 1970s. In 1976 Chomsky formulated the fundamental questions of biolinguistics as follows: i) function, ii) structure, iii) physical basis, iv) development in the individual, v) evolutionary development. In the late 1980s a great deal of progress was made in answering questions about the development of language, which in turn prompted further questions about language design, function, and the evolution of language. The following year, Juan Uriagereka, a graduate student of Howard Lasnik, wrote Rhyme and Reason, an introductory text to Minimalist syntax. Their work renewed interest in biolinguistics, catalysing many linguists to look into biolinguistics with their colleagues in adjacent scientific disciplines.
Both Jenkins and Uriagereka stressed the importance of addressing the emergence of the language faculty in humans. At around the same time, geneticists discovered a link between the language deficit manifested by the KE family members and the gene FOXP2. Although FOXP2 is not the gene responsible for language, this discovery brought many linguists and scientists together to interpret the data, renewing interest in biolinguistics.
Although many linguists have differing opinions regarding the history of biolinguistics, Chomsky believes that its history was simply that of transformational grammar, while Professor Anna Maria Di Sciullo claims that the interdisciplinary research of biology and linguistics in the 1950s-1960s led to the rise of biolinguistics. Furthermore, Jenkins believes that biolinguistics was the outcome of transformational grammarians studying human linguistic and biological mechanisms. On the other hand, linguists Martin Nowak and Charles Yang argue that biolinguistics, originating in the 1970s, is distinct from transformational grammar; rather, it is a new branch of the linguistics-biology research paradigm initiated by transformational grammar.
In Aspects of the Theory of Syntax, Chomsky proposed that languages are the product of a biologically determined capacity present in all humans, located in the brain. He addresses three core questions of biolinguistics: what constitutes the knowledge of language, how is that knowledge acquired, and how is it put to use? A great deal of our knowledge of language, he argues, must be innate, supporting this claim with the fact that speakers are capable of producing and understanding novel sentences without explicit instruction.
Chomsky proposed that the form of the grammar may emerge from the mental
structure afforded by the human brain and argued that formal
grammatical categories such as nouns, verbs, and adjectives do not
exist. The linguistic theory of generative grammar
thereby proposes that sentences are generated by a subconscious set of
procedures which are part of an individual's cognitive ability. These
procedures are modeled through a set of formal grammatical rules which
are thought to generate sentences in a language.
Chomsky focuses on the mind of the language learner or user and proposes that internal properties of the language faculty are closely linked to the physical biology of humans. He further introduced the idea of a Universal Grammar (UG) theorized to be inherent to all human beings. From the viewpoint of the biolinguistic approach, the process of language acquisition is fast and smooth because humans naturally come equipped with the fundamentals of Universal Grammar; this stands in opposition to the usage-based approach.
UG refers to the initial state of the faculty of language: a biologically innate organ that helps the learner make sense of the data and build up an internal grammar. The theory suggests that all human languages are subject to universal principles or parameters that allow for different choices (values). It also contends that humans possess a generative grammar, which is hard-wired into the human brain in some way and makes possible the rapid and universal acquisition of speech by young children.
Elements of linguistic variation then determine the growth of language
in the individual, and variation is the result of experience, given the
genetic endowment and independent principles reducing complexity.
Chomsky's work is often recognized as the weak perspective of biolinguistics, as it does not draw from fields of study outside of linguistics.
Modularity Hypothesis
According to Chomsky, the human brain consists of various sections that possess their own individual functions, such as the language faculty and visual recognition.
The acquisition of language is a universal feat, and it is believed we are all born with an innate structure, initially proposed by Chomsky in the 1960s. The Language Acquisition Device (LAD) was presented as an innate structure in humans which enables language learning. Individuals are thought to be "wired" with universal grammar rules enabling them to understand and evaluate complex syntactic structures. Proponents of the LAD often cite the argument from the poverty of the stimulus, suggesting that children rely on the LAD to develop their knowledge of a language despite not being exposed to a rich linguistic environment. Later, Chomsky exchanged this notion for that of Universal Grammar, providing evidence for a biological basis of language.
The Minimalist Program (MP) was introduced by Chomsky in 1993, and it focuses on the parallel between language and the design of natural concepts. Those invested in the Minimalist Program are interested in the physics and mathematics of language and its parallels with our natural world. For example, Piattelli-Palmarini studied the isomorphic relationship between the Minimalist Program and Quantum Field Theory.
The Minimalist Program aims to determine how much of the Principles and Parameters model can be taken as a result of the hypothetical optimal and computationally efficient design of the human language faculty; more developed versions of the Principles and Parameters approach in turn provide technical principles from which the Minimalist Program can be seen to follow.
The program further aims to develop ideas involving the economy of derivation and economy of representation, which had started to become an independent theory in the early 1990s but were then still considered peripheral to transformational grammar.
Merge
The Merge operation is used by Chomsky to explain the structure of syntax trees within the Minimalist Program. Merge itself is a process which provides the basis of phrasal formation by taking two elements within a phrase and combining them. In A.M. Di Sciullo & D. Isac's The Asymmetry of Merge (2008), they highlight two key properties of Merge identified by Chomsky:
Merge is binary
Merge is recursive
In order to understand this, take the following sentence: Emma dislikes the pie
This phrase can be broken down into its lexical items:
The above phrasal representation allows for an understanding of each lexical item. In order to build a tree using Merge, bottom-up, the two final elements of the phrase are selected and then combined to form a new element on the tree. In image a) the determiner the and the noun phrase pie are both selected. Through the process of Merge, the newly formed element on the tree is the determiner phrase (DP) the pie, visible in b).
a) Selection of the final two elements of the phrase
b) The two selected elements are then "merged", producing one new constituent, known as the determiner phrase (DP)
c) Selection of the DP the pie with the V dislikes
d) The Merge operation has occurred, yielding a new element on the tree, V' (V-bar)
e) Selection of the V' dislikes the pie and the DP subject Emma
f) The Merge operation has occurred, yielding a new element on the tree: VP
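The binary, recursive character of Merge described above can be illustrated with a short sketch. The following Python snippet is an illustrative toy only (the class and function names are invented for this example, not taken from any linguistic software); it builds the tree for "Emma dislikes the pie" bottom-up, following steps a) through f).

```python
# Minimal illustrative sketch of binary, recursive Merge (names are invented
# for this example; this is not an implementation from the literature).

from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    label: str                   # category label of the new constituent
    left: Union["Node", str]     # first merged element
    right: Union["Node", str]    # second merged element

def merge(a, b, label):
    """Binary Merge: take exactly two elements and form one new constituent."""
    return Node(label, a, b)

# Bottom-up derivation of "Emma dislikes the pie", following steps a)-f):
dp = merge("the", "pie", label="DP")        # a)-b) D + N  -> DP "the pie"
v_bar = merge("dislikes", dp, label="V'")   # c)-d) V + DP -> V'
vp = merge("Emma", v_bar, label="VP")       # e)-f) subject + V' -> VP

print(vp)
# Because merge always takes exactly two inputs (binary) and its output can
# feed a further merge (recursive), it has the two properties listed above.
```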
Core components
In
a minimalist approach, there are three core components of the language
faculty proposed: Sensory-Motor system (SM), Conceptual-Intentional
system (CI), and Narrow Syntax (NS).
SM includes the biological requisites for language production and perception, such as the articulatory organs, while CI meets the biological requirements related to inference, interpretation, and reasoning, which are also involved in other cognitive functions. As SM and CI are finite, the main function of NS is to make it possible to produce an infinite number of sound-meaning pairs.
Relevance of Natural Law
It is possible that the core principles of the faculty of language are correlated with natural laws (such as, for example, the Fibonacci sequence, an array of numbers where each consecutive number is the sum of the two that precede it; see, for example, the discussion in Uriagereka 1997 and Carnie and Medeiros 2005).
According to the hypothesis being developed, the essential properties
of language arise from nature itself: the efficient growth requirement
appears everywhere, from the pattern of petals in flowers, leaf
arrangements in trees and the spirals of a seashell to the structure of
DNA and proportions of human head and body. Natural Law
in this case would provide insight on concepts such as binary branching
in syntactic trees and well as the Merge operation. This would
translate to thinking it in terms of taking two elements on a syntax
tree and such that their sum yields another element that falls below on
the given syntax tree (Refer to trees above in Minimalist Program).
By adhering to this sum of two elements that precede it, provides
support for binary structures. Furthermore, the possibility of ternary
branching would deviate from the Fibonacci sequence and consequently
would not hold as strong support to the relevance of Natural Law in
syntax.
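As a purely numerical illustration of the sequence invoked in this section (and not a claim about any particular syntactic analysis), the short sketch below generates Fibonacci numbers by the stated rule that each term is the sum of the two terms that precede it.

```python
# Illustrative only: generate the Fibonacci sequence referred to above,
# where each term is the sum of the two terms that precede it.

def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 1, 1, 2, 3, 5, ..."""
    terms = []
    a, b = 1, 1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci(10))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```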
Biolinguistics: Challenging the Usage-Based Approach
As mentioned above, biolinguistics challenges the idea that the acquisition of language is a result of behavior-based learning. The alternative approach that biolinguistics challenges is known as the usage-based (UB) approach. UB supports the idea that knowledge of human language is acquired via exposure and usage. One of the primary issues highlighted when arguing against the usage-based approach is that UB fails to address the issue of the poverty of the stimulus, whereas biolinguistics addresses it by way of the Language Acquisition Device.
Lenneberg and the Role of Genes
Another major contributor to the field is Eric Lenneberg. In his book Biological Foundations of Language, Lenneberg (1967) suggests that aspects of human biology other than genes putatively contribute to language. This integration of other fields to explain language is recognized as the strong view in biolinguistics. While they are obviously essential, and while genomes are associated with specific organisms, genes do not store traits (or "faculties") in the way that linguists, including Chomskyans, sometimes seem to imply.
Contrary to the concept of the existence of a language faculty as
suggested by Chomsky, Lenneberg argues that while there are specific
regions and networks crucially involved in the production of language,
there is no single region to which language capacity is confined and
that speech, as well as language, is not confined to the cerebral cortex.
Lenneberg considered language as a species-specific mental organ with
significant biological properties. He suggested that this organ grows in
the mind/brain of a child in the same way that other biological organs
grow, showing that the child's path to language displays the hallmark of
biological growth. According to Lenneberg, genetic mechanisms play an important role in the development of an individual's behavior and are characterized by two aspects:
The acknowledgement of an indirect relationship between genes and traits, and
The rejection of the existence of 'special' genes for language, that is, the rejection of the need for a specifically linguistic genotype.
Based on this, Lenneberg goes on to claim that no kind of functional principle could be stored in an individual's genes, rejecting the idea that there exist genes for specific traits, including language; in other words, he rejects the idea that genes can contain traits. He then proposed that the way in which genes influence the general patterns of structure and function is by means of their action upon ontogenesis, rejecting the view of a gene as a causal agent that is individually, directly, and uniquely responsible for a specific phenotype, thereby criticizing a prior hypothesis by Charles Goodwin.
Recent Developments
Generative Procedure Accepted At Present & Its Developments
In biolinguistics, language is recognised to be based on a recursive generative procedure that retrieves words from the lexicon and applies them repeatedly to output phrases. This generative procedure was hypothesised to be the result of a minor brain mutation, given evidence that word ordering is limited to externalisation and plays no role in core syntax or semantics. Thus, different lines of inquiry to explain this were explored.
The most commonly accepted line of inquiry to explain this is Noam Chomsky's minimalist approach to syntactic representations. In 2016, Chomsky and Berwick defined the minimalist program under the Strong Minimalist Thesis in their book Why Only Us by saying that language is mandated by efficient computations and, thus, keeps to the simplest recursive operations. The main basic operation in the minimalist program is merge.
Under merge there are two ways in which larger expressions can be
constructed: externally and internally. Lexical items that are merged
externally build argument representations with disjoint constituents.
The internal merge creates constituent structures where one is a part of
another. This induces displacement, the capacity to pronounce phrases in one position, but interpret them elsewhere.
Recent investigations of displacement point to a slight rewiring in cortical brain regions that could have occurred historically and perpetuated generative grammar. Following this line of thought, in 2009 Ramus and Fisher speculated that a single gene could create a signalling molecule to facilitate new brain connections or a new area of the brain altogether via prenatally defined brain regions. This would result in information processing of great importance to language as we know it. The spread of this advantageous trait could be responsible for secondary externalisation and the interaction we engage in. If this holds, then the objective of biolinguistics is to find out as much as we can about the principles underlying mental recursion.
Human versus Animal Communication
Compared with other topics in linguistics, where data can be displayed with cross-linguistic evidence, biolinguistics, by its nature and because it applies to the entirety of linguistics rather than just a specific subsection, can also draw on data from the examination of other species. Although animals do not have the same linguistic competencies as humans, it is assumed that they can provide evidence for some linguistic competence.
The relatively new science of evo-devo, which suggests that all species descend from a single common tree, has opened pathways into genetic and biochemical study. One way in which this has manifested within biolinguistics is through the suggestion of a common language gene, namely FOXP2.
Though this gene is subject to debate, there have been interesting
recent discoveries made concerning it and the part it plays in the
secondary externalization process. Recent studies of birds and mice
resulted in an emerging consensus that FOXP2 is not a blueprint for
internal syntax nor the narrow faculty of language, but rather makes up
the regulatory machinery pertaining to the process of externalization.
It has been found to assist in sequencing sounds or gestures one after the next, implying that FOXP2 helps transfer knowledge from declarative to procedural memory. FOXP2 has therefore been found to aid in formulating a linguistic input-output system that runs smoothly.
The Integration Hypothesis
According
to the Integration Hypothesis, human language is the combination of the
Expressive (E) component and the Lexical (L) component. At the level of
words, the L component contains the concept and meaning that we want to
convey. The E component contains grammatical information and
inflection. For phrases, we often see an alternation between the two
components. In sentences, the E component is responsible for providing
the shape and structure to the base-level lexical words, while these
lexical items and their corresponding meanings found in the lexicon make up the L component.
This has consequences for our understanding of: (i) the origins of the E
and L components found in bird and monkey communication systems; (ii)
the rapid emergence of human language as related to words; (iii)
evidence of hierarchical structure within compound words; (iv) the role of phrases in the detection of the structure building operation Merge;
and (v) the application of E and L components to sentences. In this
way, we see that the Integration Hypothesis can be applied to all levels
of language: the word, phrasal, and sentence level.
The Origins of the E and L systems in Bird and Monkey Communication Systems
Through
the application of the Integration Hypothesis, it can be seen that the
interaction between the E and L components enables language structure (E
component) and lexical items (L component) to operate simultaneously
within one form of complex communication: human language. However, these
two components are thought to have emerged from two pre-existing,
separate, communication systems in the animal world. The communication systems of birds and monkeys
have been proposed as antecedents to human language. The bird song
communication system is made up entirely of the E component while the
alarm call system used by monkeys is made up of the L component. Human
language is thought to be the byproduct of these two separate systems
found in birds and monkeys, due to parallels between human communication
and these two animal communication systems.
The communication system of songbirds is commonly described as one based on syntactic operations. Specifically, bird song enables the systematic combination of sound elements in order to string together a song. Likewise, human languages also operate syntactically through the combination of words, which are calculated systematically. While the mechanics of bird song thrive on syntax, it appears as though the notes, syllables, and motifs that are combined to produce the different songs may not necessarily carry any meaning. The communication system of songbirds also lacks a lexicon that contains a set of any sort of meaning-to-referent pairs.
Essentially, this means that an individual sound produced by a songbird
does not have meaning associated with it, the way a word does in human
language. Bird song is capable of being structured, but it is not
capable of carrying meaning. In this way, the prominence of syntax and
the absence of lexical meaning presents bird song as a strong candidate
for being a simplified antecedent of the E component that is found in
human language, as this component also lacks lexical information. While birds that use bird song can rely on just this E component to communicate, human utterances require lexical meaning in addition to the structural operations that are part of the E component, as human language is unable to operate with just syntactic structure or structural function words alone. This is evident because human communication does in fact consist of a lexicon, and humans produce combined sequences of words that are meaningful, better known as sentences. This suggests that part of human language must have been adapted from another animal's communication system in order for the L component to arise.
A well known study by Seyfarth et al.
investigated the referential nature of the alarm calls of vervet
monkeys. These monkeys have three set alarm calls, with each call
directly mapping on to one of the following referents: a leopard, an
eagle, or a snake. Each call is used to warn other monkeys about the
presence of one of these three predators in their immediate
environmental surroundings. The main idea is that the alarm call
contains lexical information that can be used to represent the referent
that is being referred to. Essentially, the entire communication system
used by monkeys is made up of the L system such that only these
lexical-based calls are needed to effectively communicate. This is
similar to the L component found in human language in which content
words are used to refer to a referent in the real world, containing the
relevant lexical information. The L component in human language is,
however, a much more complex variant of the L component found in vervet
monkey communication systems: humans use many more than just three word-forms to communicate. While vervet monkeys are capable of
communicating solely with the L component, humans are not, as
communication with just content words does not output well-formed
grammatical sentences. It is for this reason that the L component is
combined with the E component responsible for syntactic structure in
order to output human language.
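To make the contrast concrete, the toy sketch below (with hypothetical names, purely illustrative and not taken from the cited study) treats the vervet alarm-call system as a bare lexicon of call-to-referent pairs: an L component with no E component, and hence no machinery for building structured sentences.

```python
# Toy illustration: the vervet alarm-call system modelled as a pure L
# component, i.e. a small lexicon mapping each call directly to a referent,
# with no grammatical (E) machinery layered on top.

vervet_lexicon = {
    "call_1": "leopard",
    "call_2": "eagle",
    "call_3": "snake",
}

def interpret(call: str) -> str:
    """Each call maps straight to its referent; there is nothing to combine."""
    return vervet_lexicon.get(call, "unknown")

print(interpret("call_2"))   # eagle
# Human language, by contrast, combines such lexical items with an E component
# (category, tense, case, ...) to yield structured, grammatical sentences.
```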
The Rapid Emergence of Human Language
As
traces of the E and L components have been found in nature, the
integration hypothesis asserts that these two systems existed before
human language, and that it was the combination of these two
pre-existing systems that rapidly led to the emergence of human
language.
The Integration Hypothesis posits that it was the grammatical operator,
Merge, that triggered the combination of the E and L systems to create
human language.
In this view, language emerged rapidly and fully formed, already
containing syntactical structure. This is in contrast to the Gradualist
Approach, where it is thought that early forms of language did not have
syntax. Instead, supporters of the Gradualist Approach believe language
slowly progressed through a series of stages as a result of a simple
combinatory operator that generated flat structures. Beginning with a
one-word stage, then a two-word stage, then a three-word stage, etc.,
language is thought to have developed hierarchy in later stages.
In the article, The precedence of syntax in the rapid emergence of human language in evolution as defined by the integration hypothesis,
Nóbrega & Miyagawa outline the Integration Hypothesis as it applies
to words. To explain the Integration Hypothesis as it relates to words,
one must first agree on a definition of 'word'. While this
seems fairly straightforward in English, this is not the case for other
languages. To allow for cross-linguistic discussion, the idea of a
"root" is used instead, where a "root" encapsulates a concept at the
most basic level. In order to differentiate between "roots" and "words",
it must be noted that "roots" are completely devoid of any information
relating to grammatical category or inflection. Therefore, "roots" form
the lexical component of the Integration Hypothesis while grammatical
category (noun, verb, adjective) and inflectional properties (e.g. case,
number, tense, etc.) form the expressive component.
Thus, at the most basic level for the formation of a "word" in human
language, there must be a combination of the L component with the E
component. When we know a "word" in a language, we must know both
components: the concept that it relates to as well as its grammatical
category and inflection. The former is the L component; the latter is
the E component. The Integration Hypothesis suggests that it was the
grammatical operator Merge that triggered this combination, occurring
when one linguistic object (L layer) satisfies the grammatical feature
of another linguistic object (E layer). This means that L components are
not expected to directly combine with each other.
Based on this analysis, it is believed that human language
emerged in a single step. Before this rapid emergence, the L component,
"roots", existed individually, lacked grammatical features, and were not
combined with each other. However, once this was combined with the E
component, it led to the emergence of human language, with all the
necessary characteristics. Hierarchical structures of syntax are already
present within words because of the integration of these two layers.
This pattern is continued when words are combined with each other to
make phrases, as well as when phrases are combined into sentences.
Therefore, the Integration Hypothesis posits that once these two systems
were integrated, human language appeared fully formed, and did not
require additional stages.
Evidence of Hierarchical Structure Within Compound Words
Compound words are a special point of interest for the Integration Hypothesis, as they are further evidence that words contain internal structure. The Integration Hypothesis analyzes compound words differently from previous gradualist theories of language development. As previously mentioned, in the Gradualist Approach, compound words are thought of as part of a proto-syntax stage in the development of human language. In this proposal of a lexical protolanguage, compounds are developed in the second stage through a combination of single words by a rudimentary recursive n-ary operation that generates flat structures.
However, the Integration Hypothesis challenges this belief, claiming
that there is evidence to suggest that words are internally complex. In
English for example, the word 'unlockable' is ambiguous because of two
possible structures within. It can either mean something that is able to
be unlocked (unlock-able), or it can mean something that is not
lockable (un-lockable). This ambiguity points to two possible
hierarchical structures within the word: it cannot have the flat
structure posited by the Gradualist Approach. With this evidence,
supporters of the Integration Hypothesis argue that these hierarchical
structures in words are formed by Merge, where the L component and E
component are combined. Thus, Merge is responsible for the formation of
compound words and phrases. This discovery leads to the hypothesis that words, compounds, and all linguistic objects of human language are derived from this integration system, and it provides evidence against the theory that a protolanguage existed.
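The 'unlockable' ambiguity just discussed can be made explicit with a small sketch. The nested pairs below are an illustrative representation only (not a published analysis); they show the two binary bracketings that Merge makes available, neither of which is the flat three-way structure assumed by the Gradualist Approach.

```python
# Illustrative sketch of the two hierarchical parses of "unlockable".
# Each parse is built by binary Merge, represented here as a nested pair.

def merge(a, b):
    """Binary Merge represented as a nested pair."""
    return (a, b)

# Reading 1: "able to be unlocked"  -> [[un lock] able]
unlock_able = merge(merge("un", "lock"), "able")

# Reading 2: "not lockable"         -> [un [lock able]]
un_lockable = merge("un", merge("lock", "able"))

print(unlock_able)   # (('un', 'lock'), 'able')
print(un_lockable)   # ('un', ('lock', 'able'))

# A flat structure ("un", "lock", "able") could not distinguish these two
# readings, which is the point the Integration Hypothesis draws on.
```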
In the view of compounds as "living fossils", Jackendoff
alleges that the basic structure of compounds does not provide enough
information to offer semantic interpretation. Hence, the semantic
interpretation must come from pragmatics. However, Nórega and Miyagawa
noticed that this claim of dependency on pragmatics is not a property
of compound words that is demonstrated in all languages. The example
provided by Nórega and Miyagawa is the comparison between English (a
Germanic language) and Brazilian Portuguese (a Romance language).
English compound nouns can offer a variety of semantic interpretations.
For example, the compound noun "car man" can have several possible
understandings such as: a man who sells cars, a man who's passionate
about cars, a man who repairs cars, a man who drives cars, etc. In
comparison, the Brazilian Portuguese compound noun "peixe-espada"
translated as "sword fish", only has one understanding of a fish that
resembles a sword.
Consequently, when looking at the semantic interpretations available for compound words in Germanic languages versus Romance languages, the Romance languages have highly restricted meanings. This finding presents evidence that compounds in fact contain more sophisticated internal structures than previously thought. Moreover, Nóbrega and Miyagawa provide further evidence against the claim of a protolanguage by examining exocentric VN compounds. As defined, one of the key properties of Merge is that it is recursive; therefore, observing recursion within exocentric VN compounds of Romance languages demonstrates the existence of an internal hierarchical structure that Merge is responsible for building. In the data collected by Nóbrega and Miyagawa, recursion is observed on several occasions in different languages. This happens in Catalan, Italian, and Brazilian Portuguese, where a new VN compound is created when a nominal exocentric VN compound is the complement of a verb. For example, in the Catalan translation of "windshield wipers", [neteja[para-brises]] lit. clean-stop-breeze, we can identify recursion because [para-brises] is the complement of [neteja].
Additionally, we can also note the occurrence of recursion when the noun of a VN compound contains a list of complements. For example, in the Italian translation of "rings, earrings, or small jewels holder", [porta[anelli, orecchini o piccoli monili]] lit. carry-rings-earrings-or-small-jewels, there is recursion because the string of complements [anelli, orecchini o piccoli monili] serves as the complement to the verb [porta].
The common claim that compounds are fossils of language often
complements the argument that they contain a flat, linear structure. However, Di Sciullo provided experimental evidence to dispute this.
With the knowledge that there is asymmetry in the internal structure of
exocentric compounds, she uses the experimental results to show that
hierarchical complexity effects are observed from processing of NV
compounds in English. In her experiment, sentences containing
object-verb compounds and sentences containing adjunct-verb compounds
were presented to English speakers, who then assessed the acceptability
of these sentences. Di Sciullo has noted that previous works have
determined adjunct-verb compounds to have more complex structure than
object-verb compounds because adjunct-verb compounds require merge to
occur several times.
In her experiment, there were 10 English speaking participants who
evaluated 60 English sentences. The results revealed that the
adjunct-verb compounds had a lower acceptability rate and the
object-verb compounds had a higher acceptability rate. In other words,
the sentences containing the adjunct-verb compounds were viewed as more
"ill-formed" than the sentences containing the object-verb compounds.
The findings demonstrated that the human brain is sensitive to the
internal structures that these compounds contain. Since adjunct-verb
compounds contain complex hierarchical structures from the recursive
application of Merge, these words are more difficult to decipher and
analyze than the object-verb compounds which encompass simpler
hierarchical structures. This is evidence that compounds could not have
been fossils of a protolanguage without syntax due to their complex
internal hierarchical structures.
Interactions Between E and L Components in Phrases of Human Language
As
previously mentioned, human language is interesting because it
necessarily requires elements from both E and L systems - neither can
stand alone. Lexical items, or what the Integration Hypothesis refers to
as 'roots', are necessary as they refer to things in the world around
us. Expression items, that convey information about category or
inflection (number, tense, case etc.) are also required to shape the
meanings of the roots.
It becomes more clear that neither of these two systems can exist
alone with regards to human language when we look at the phenomenon of
'labeling'. This phenomenon refers to how we classify the grammatical
category of phrases, where the grammatical category of the phrase is
dependent on the grammatical category of one of the words within the
phrase, called the head. For example, in the phrase "buy the books", the
verb "buy" is the head, and we call the entire phrase a verb-phrase.
There is also a smaller phrase within this verb-phrase, a determiner
phrase, "the books" because of the determiner "the". What makes this
phenomenon interesting is that it allows for hierarchical structure
within phrases. This has implications for how we combine words to form phrases and eventually sentences.
This labelling phenomenon has limitations, however. Some labels can combine and others cannot. For example, two lexical structure labels cannot directly combine. The two nouns "Lucy" and "dress" cannot directly be combined. Likewise, neither can the noun "pencil" be merged with the adjective "short", nor can the verbs "want" and "drink" be merged without anything in between. As represented by the schematic below, all of these examples are impossible lexical structures. This shows that there is a limitation where lexical categories can only be one layer deep. However, these limitations can be overcome with the insertion of an expression layer in between. For example, to combine "John" and "book", adding a determiner such as "-'s" makes this a possible combination.
Another limitation concerns the recursive nature of the expressive layer. While it is true that CP and TP can come together to form hierarchical structure, this CP-TP structure cannot repeat on top of itself: it is only a single layer deep. This restriction applies not only to the expressive layer in humans but also to birdsong. The similarity strengthens the tie between the pre-existing E system posited to have originated in birdsong and the E layers found in human language.
Due to these limitations in each system, where both lexical and expressive categories can only be one layer deep, the recursive and unbounded hierarchical structure of human language is surprising. The Integration Hypothesis posits that it is the combination of these two types of layers that results in such a rich hierarchical structure. The alternation between L layers and E layers is what allows human language to reach an arbitrary depth of layers. For example, in the phrase "Eat the cake that Mary baked", the tree structure shows an alternation between L and E layers. This alternation can be described by two phrase structure rules: (i) LP → L EP and (ii) EP → E LP. The recursion they permit is plainly seen by rewriting these rules in bracket notation. The LP in (i) can be written as [L EP]. Adding an E layer to this LP to create an EP then yields [E [L EP]]. A more complex LP can in turn be obtained by adding an L layer to that EP, giving [L [E [L EP]]]. This can continue indefinitely, producing the recognizably deep structures found in human language.
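The alternation captured by rules (i) and (ii) can be made concrete with a short illustrative sketch. The Python snippet below is not from the source article; the function names and the depth parameter are chosen purely for illustration. It simply expands the two rules to a requested depth and prints the resulting bracket notation.

```python
# Illustrative expansion of the two phrase rules described above:
# (i) LP -> L EP and (ii) EP -> E LP.

def expand_lp(depth):
    """Expand LP -> L EP, recursing until the requested depth is reached."""
    if depth == 0:
        return "LP"                       # stop expanding; leave the symbol
    return f"[L {expand_ep(depth - 1)}]"  # rule (i): LP -> L EP

def expand_ep(depth):
    """Expand EP -> E LP, recursing until the requested depth is reached."""
    if depth == 0:
        return "EP"
    return f"[E {expand_lp(depth - 1)}]"  # rule (ii): EP -> E LP

# Each extra level of depth adds one more alternating L/E layer:
for d in range(1, 5):
    print(expand_lp(d))
# [L EP]
# [L [E LP]]
# [L [E [L EP]]]
# [L [E [L [E LP]]]]
```

Because each rule calls the other, the alternation can in principle continue without bound, which is the source of the arbitrary depth described above.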
The Operation of E and L Components in the Syntax of Sentences
The
E and L components can be used to explain the syntactic structures that
make up sentences in human languages. The first component, the L
component, contains content words.
This component is responsible for carrying the lexical information that
relays the underlying meaning behind a sentence. However, combinations
consisting solely of L component content words do not result in
grammatical sentences. This issue is resolved through the interaction of
the L component with the E component. The E component is made up of function words: words responsible for supplying information about the syntactic categories of L component words, as well as morphosyntactic information about clause typing, question, number, case, and focus.
Since these added elements complement the content words in the L
component, the E component can be thought of as being applied to the L
component. Since the L component is composed solely of lexical information and the E component solely of syntactic information, they exist as two independent systems. However, the rise of a system as complex as human language requires that the two systems rely on each other. This aligns with Chomsky's proposal of the duality of semantics, which suggests that human language is composed of these two distinct components.
It is therefore understandable why the convergence of these two components was necessary to enable human language as we know it today.
In the following example, taken from the article The integration hypothesis of human language evolution and the nature of contemporary languages by Miyagawa et al., each word in the sentence Did John eat pizza? can be identified as belonging either to the L component or to the E component.
The L component words of this sentence are the content words John, eat, and pizza.
Each word contains only lexical information that directly contributes to the meaning of the sentence. The L component is often referred to as the base or inner component, owing to the inward positioning of this constituent in a phrase structure tree. The string of words "John eat pizza" does not form a grammatically well-formed sentence in English, which suggests that E component words are necessary to syntactically shape and structure this string of words. The E component is typically referred to as the outer component that shapes the inner L component, as these elements originate in positions surrounding the L component in a phrase structure tree. In this example, the E component function word is did. Inserting this word adds two types of structure to the expression: tense and clause typing. The word did is used to inquire about something that happened in the past, so it adds past tense to the expression. In this example, this does not overtly change the form of the verb, as the verb eat in the past tense still surfaces as eat without any additional tense markers in this particular environment.
Instead, the tense slot can be thought of as being filled by a null symbol (∅), since this past tense form has no phonological content. Although covert, this null tense marker is an important contribution from the E component word did. Tense aside, clause typing is also conveyed through the E component. Notably, the function word did surfaces in sentence-initial position, which in English indicates that the string of words will be realized as a question. The word did determines that the clause type of this sentence will be interrogative, specifically a yes–no question. Overall, the integration of the E component with the L component yields the well-formed sentence Did John eat pizza?, and, on this hypothesis, the same integration accounts for all other utterances found in human languages.
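As a compact summary of this example, the classification can be written out explicitly. The sketch below is a hypothetical illustration only; the variable names and the grouping are not taken from Miyagawa et al.

```python
# Hypothetical sketch: sorting the words of "Did John eat pizza?" into
# the L and E components, together with what each E element contributes.

l_component = ["John", "eat", "pizza"]   # content words carrying lexical meaning

e_component = {
    "did": ["past tense", "clause typing: yes-no question (signalled by sentence-initial position)"],
    "∅":   ["covert past-tense realization on the verb eat (no phonological content)"],
}
```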
Critiques
Alternative Theoretical Approaches
Stemming from the usage-based approach, the Competition Model, developed by Elizabeth Bates and Brian MacWhinney, views language acquisition as a series of competitive cognitive processes acting upon a linguistic signal. On this view, language development depends on learning and detecting linguistic cues using competing general cognitive mechanisms rather than innate, language-specific mechanisms.
From the perspective of biosemiotics, it has recently been claimed that meaning-making begins long before the emergence of human language. This meaning-making consists of internal and external cognitive processes. On this view, such an organisation of processes could not have given rise to language alone. According to this perspective, all living things possess these processes, however wide the variation, rather than their being species-specific.
Over-Emphasised Weak Stream Focus
The term biolinguistics is used in two senses: strong and weak. Weak biolinguistics is founded on theoretical linguistics of a generativist persuasion, whereas the strong stream goes beyond commonly explored theoretical linguistics and is oriented towards biology and other relevant fields of study. From the early emergence of biolinguistics to the present day, the field has focused mainly on the weak stream, seeing little difference between inquiry into generative linguistics and inquiry into the biological nature of language, and relying heavily on the Chomskyan origin of the term.
As expressed by research professor and linguist Cedric Boeckx, a prevalent opinion is that biolinguistics needs to focus on biology in order to give substance to the linguistic theorizing the field has engaged in. Particular criticisms include a lack of distinction between generative linguistics and biolinguistics, a lack of discoveries pertaining to the properties of grammar in the context of biology, and a lack of recognition of the importance of broader mechanisms, such as non-linguistic biological properties. After all, labelling the propensity for language as biological is only an advantage if such insight is put to use in research.
David Poeppel, a neuroscientist and linguist, has additionally noted that if neuroscience and linguistics are done wrong, there is a risk of "inter-disciplinary cross-sterilization", arguing that there is a Granularity Mismatch Problem: the different levels of representation used in linguistics and neuroscience lead to vague metaphors linking brain structures to linguistic components. Poeppel and Embick also introduce the Ontological Incommensurability Problem, whereby the computational processes described in linguistic theory cannot be reduced to neural computational processes.
A recent critique of biolinguistics and 'biologism' in the language sciences in general has been developed by Prakash Mondal, who argues that there are inconsistencies and categorical mismatches in any putative bridging constraints that purport to relate neurobiological structures and processes to the logical structures of language, which have a cognitive-representational character.