
Thursday, July 16, 2020

Drug interaction

From Wikipedia, the free encyclopedia
 
A drug interaction is a change in the action or side effects of a drug caused by concomitant administration with a food, beverage, supplement, or another drug.

There are many causes of drug interactions. For example, one drug may alter the pharmacokinetics of another. Alternatively, drug interactions may result from competition for a single receptor or signaling pathway.

The risk of a drug-drug interaction increases with the number of drugs used. Over a third (36%) of the elderly in the U.S. regularly use five or more medications or supplements, and 15% are at risk of a significant drug-drug interaction.

Pharmacodynamic interactions

When two drugs are used together, their effects can be additive (the result is what you expect when you add together the effect of each drug taken independently), synergistic (combining the drugs leads to a larger effect than expected), or antagonistic (combining the drugs leads to a smaller effect than expected). There is sometimes confusion on whether drugs are synergistic or additive, since the individual effects of each drug may vary from patient to patient. A synergistic interaction may be beneficial for patients, but may also increase the risk of overdose. 

Both synergy and antagonism can occur during different phases of the interaction between a drug and an organism. For example, when synergy occurs at a cellular receptor level this is termed agonism, and the substances involved are termed agonists. On the other hand, in the case of antagonism, the substances involved are known as inverse agonists. The different responses of a receptor to the action of a drug have resulted in a number of classifications, such as "partial agonist", "competitive agonist", etc. These concepts have fundamental applications in the pharmacodynamics of these interactions. The proliferation of classifications at this level, along with the fact that the exact reaction mechanisms of many drugs are not well understood, means that it is almost impossible to offer a clear classification for these concepts. It is even possible that many authors would misapply any given classification.

Direct interactions between drugs are also possible and may occur when two drugs are mixed prior to intravenous injection. For example, mixing thiopentone and suxamethonium in the same syringe can lead to the precipitation of thiopentone.

The change in an organism's response upon administration of a drug is an important factor in pharmacodynamic interactions. These changes are extraordinarily difficult to classify given the wide variety of modes of action that exist and the fact that many drugs can cause their effects through a number of different mechanisms. This wide diversity also means that, in all but the most obvious cases, it is important to investigate and understand these mechanisms. The well-founded suspicion exists that there are more unknown interactions than known ones.

[Figure: Effects of the competitive inhibition of an agonist by increases in the concentration of an antagonist. A drug's potency can be affected (the response curve shifted to the right) by the presence of an antagonistic interaction. pA2, known as the Schild representation, is a mathematical model of the agonist:antagonist relationship.]
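For reference, the Schild analysis named in the caption above can be summarized by the standard Schild equation from general pharmacology (this formula is not given in the article itself):

```latex
\log\left(\mathrm{DR} - 1\right) = \log [B] - \log K_B,
\qquad pA_2 = -\log K_B
```

where DR is the dose ratio (the factor by which the agonist concentration must be raised to restore a given response in the presence of antagonist concentration [B]) and K_B is the antagonist's dissociation constant; simple competitive antagonism gives a Schild plot with unit slope.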
 
Pharmacodynamic interactions can occur on:
  1. Pharmacological receptors: Receptor interactions are the most easily defined and also the most common. From a pharmacodynamic perspective, two drugs can be considered to be:
    1. Homodynamic, if they act on the same receptor. They, in turn can be:
      1. Pure agonists, if they bind to the main locus of the receptor, causing a similar effect to that of the main drug.
      2. Partial agonists if, on binding to one of the receptor's secondary sites, they have the same effect as the main drug, but with a lower intensity.
      3. Antagonists, if they bind directly to the receptor's main locus but their effect is opposite to that of the main drug. These include:
        1. Competitive antagonists, if they compete with the main drug to bind with the receptor. The amount of antagonist or main drug that binds with the receptor will depend on the concentrations of each one in the plasma.
        2. Uncompetitive antagonists, when the antagonist binds to the receptor irreversibly and is not released until the receptor is saturated. In principle the quantity of antagonist and agonist that binds to the receptor will depend on their concentrations. However, the presence of the antagonist will cause the main drug to be released from the receptor regardless of the main drug's concentration, therefore all the receptors will eventually become occupied by the antagonist.
    2. Heterodynamic competitors, if they act on distinct receptors.
  2. Signal transduction mechanisms: these are molecular processes that commence after the interaction of the drug with the receptor. For example, it is known that hypoglycaemia (low blood glucose) in an organism produces a release of catecholamines, which trigger compensation mechanisms thereby increasing blood glucose levels. The release of catecholamines also triggers a series of symptoms, which allows the organism to recognise what is happening and which act as a stimulant for preventative action (eating sugars). Should a patient be taking a drug such as insulin, which reduces glycaemia, and also be taking another drug such as certain beta-blockers for heart disease, then the beta-blockers will act to block the adrenaline receptors. This will block the reaction triggered by the catecholamines should a hypoglycaemic episode occur. Therefore, the body will not adopt corrective mechanisms and there will be an increased risk of a serious reaction resulting from the ingestion of both drugs at the same time.
  3. Antagonic physiological systems: Imagine a drug A that acts on a certain organ. This effect will increase with increasing concentrations of physiological substance S in the organism. Now imagine a drug B that acts on another organ, which increases the amount of substance S. If both drugs are taken simultaneously it is possible that drug A could cause an adverse reaction in the organism as its effect will be indirectly increased by the action of drug B. An actual example of this interaction is found in the concomitant use of digoxin and furosemide. The former acts on cardiac fibres and its effect is increased if there are low levels of potassium (K) in blood plasma. Furosemide is a diuretic that lowers arterial tension but favours the loss of K+. This could lead to hypokalemia (low levels of potassium in the blood), which could increase the toxicity of digoxin.
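The competitive antagonism described above, where agonist and antagonist compete for the receptor's main locus in proportion to their plasma concentrations, can be made concrete with the standard Gaddum occupancy equation. This is a general pharmacology sketch with illustrative, dimensionless concentrations, not data from the article:

```python
def occupancy(A, KA, B=0.0, KB=1.0):
    """Fractional receptor occupancy by agonist A in the presence of a
    competitive antagonist B (Gaddum equation).

    A, B   : agonist / antagonist concentrations (same arbitrary units)
    KA, KB : their dissociation constants (same units)
    """
    return A / (A + KA * (1.0 + B / KB))

# Without antagonist, an agonist at A = KA occupies half the receptors.
print(occupancy(A=1.0, KA=1.0))            # 0.5
# A competitive antagonist at B = KB shifts the curve rightward:
print(occupancy(A=1.0, KA=1.0, B=1.0))     # ~0.33
# Raising the agonist concentration overcomes the antagonism:
print(occupancy(A=10.0, KA=1.0, B=1.0))    # ~0.83
```

This mirrors the text: which species occupies the receptor depends on the relative concentrations, and a high enough agonist concentration can displace a competitive antagonist.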

Pharmacokinetic interactions

Modifications in the effect of a drug are caused by differences in the absorption, transport, distribution, metabolism or excretion of one or both of the drugs compared with the expected behavior of each drug when taken individually. These changes are basically modifications in the concentration of the drugs. In this respect, two drugs can be homergic if they have the same effect in the organism and heterergic if their effects are different.

Absorption interactions

Changes in motility

Some drugs, such as the prokinetic agents increase the speed with which a substance passes through the intestines. If a drug is present in the digestive tract's absorption zone for less time its blood concentration will decrease. The opposite will occur with drugs that decrease intestinal motility.
  • pH: Drugs can be present in either ionised or non-ionised form, depending on their pKa (the pH at which the drug reaches equilibrium between its ionised and non-ionised forms). The non-ionised forms of drugs are usually easier to absorb because they are not repelled by the lipid bilayer of the cell membrane; most of them can be absorbed by passive diffusion, unless they are too large or too polar (like glucose or vancomycin), in which case they may or may not have specific or non-specific transporters, distributed over the entire internal surface of the intestine, that carry them into the body. Increasing the absorption of a drug increases its bioavailability, so shifting a drug between its ionised and non-ionised states can be beneficial or detrimental for certain drugs.
Certain drugs require an acid stomach pH for absorption. Others require the basic pH of the intestines. Any modification in the pH could change this absorption. In the case of antacids, an increase in pH can inhibit the absorption of other drugs such as zalcitabine (absorption can be decreased by 25%), tipranavir (25%) and amprenavir (up to 35%). However, this occurs less often than the opposite case, in which an increase in pH causes an increase in absorption, as occurs when cimetidine is taken with didanosine. In such cases a gap of two to four hours between taking the two drugs is usually sufficient to avoid the interaction.
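The pKa-dependent ionisation described above follows the Henderson-Hasselbalch relationship. The sketch below illustrates how an antacid-driven pH change can reduce the absorbable fraction of a weak acid; the pKa of 3.5 (aspirin-like) and the pH values are illustrative assumptions, not figures from the article:

```python
def fraction_unionized(pH, pKa, acid=True):
    """Fraction of drug in the absorbable, non-ionised form
    (Henderson-Hasselbalch). acid=True for weak acids, False for weak bases."""
    exponent = (pH - pKa) if acid else (pKa - pH)
    return 1.0 / (1.0 + 10.0 ** exponent)

# A weak acid with pKa 3.5 in the acidic stomach (pH ~1.5)
# is mostly non-ionised and readily absorbed:
print(fraction_unionized(pH=1.5, pKa=3.5))   # ~0.99
# After an antacid raises gastric pH to ~5.0, it is mostly ionised:
print(fraction_unionized(pH=5.0, pKa=3.5))   # ~0.03
```

The same function with `acid=False` shows why weak bases behave in the opposite way as pH rises.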

Transport and distribution interactions

The main interaction mechanism is competition for plasma protein transport. In these cases the drug that arrives first binds with the plasma protein, leaving the other drug dissolved in the plasma, which modifies its concentration. The organism has mechanisms to counteract these situations (by, for example, increasing plasma clearance), which means that they are not usually clinically relevant. However, these situations should be taken into account if other associated problems are present such as when the method of excretion is affected.

Metabolism interactions

Diagram of cytochrome P450 isoenzyme 2C9 with the haem group in the centre of the enzyme.
 
Many drug interactions are due to alterations in drug metabolism. Further, human drug-metabolizing enzymes are typically activated through the engagement of nuclear receptors. One notable system involved in metabolic drug interactions is the enzyme system comprising the cytochrome P450 oxidases.

CYP450

Cytochrome P450 is a very large family of haemoproteins (hemoproteins) that are characterized by their enzymatic activity and their role in the metabolism of a large number of drugs. Of the various families that are present in human beings the most interesting in this respect are the 1, 2 and 3, and the most important enzymes are CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP2E1 and CYP3A4. The majority of the enzymes are also involved in the metabolism of endogenous substances, such as steroids or sex hormones, which is also important should there be interference with these substances. As a result of these interactions the function of the enzymes can either be stimulated (enzyme induction) or inhibited.

Enzymatic inhibition

If drug A is metabolized by a cytochrome P450 enzyme and drug B inhibits or decreases the enzyme's activity, then drug A will remain with high levels in the plasma for longer as its inactivation is slower. As a result, enzymatic inhibition will cause an increase in the drug's effect. This can cause a wide range of adverse reactions.

Occasionally this can lead to a paradoxical situation, where the enzymatic inhibition causes a decrease in the drug's effect: if the metabolism of drug A gives rise to an active product A2, which actually produces the effect of the drug, then inhibition of drug A's metabolism by drug B will decrease the concentration of A2 in the blood, and with it the final effect of the drug.

Enzymatic induction

If drug A is metabolized by a cytochrome P450 enzyme and drug B induces or increases the enzyme's activity, then blood plasma concentrations of drug A will quickly fall as its inactivation will take place more rapidly. As a result, enzymatic induction will cause a decrease in the drug's effect.

As in the previous case, it is possible to find paradoxical situations where an active metabolite causes the drug's effect. In this case the increase in active metabolite A2 (following the previous example) produces an increase in the drug's effect.
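Both the inhibition and induction effects described above can be sketched with the standard single-dose exposure relationship AUC = dose / clearance: inhibiting the metabolizing enzyme lowers clearance and raises exposure, while induction does the reverse. All numbers below are illustrative, not from the article:

```python
def auc(dose_mg, clearance_l_per_h):
    """Systemic exposure (AUC, mg*h/L) of a fully absorbed single dose."""
    return dose_mg / clearance_l_per_h

baseline = auc(100, 10.0)    # normal clearance of drug A
inhibited = auc(100, 5.0)    # drug B inhibits the enzyme: clearance halved
induced = auc(100, 20.0)     # drug B induces the enzyme: clearance doubled

print(baseline, inhibited, induced)   # 10.0 20.0 5.0
```

Halving clearance doubles exposure (risk of adverse reactions); doubling clearance halves it (risk of therapeutic failure), matching the two scenarios in the text.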

It can often occur that a patient is taking two drugs that are both enzymatic inductors, or one inductor and the other an inhibitor, or both inhibitors, which greatly complicates the control of an individual's medication and the avoidance of possible adverse reactions.

An example of this is shown in the following table for the CYP1A2 enzyme, which is the most common enzyme found in the human liver. The table shows the substrates (drugs metabolized by this enzyme) and the inductors and inhibitors of its activity:

Enzyme CYP3A4 is the enzyme that the greatest number of drugs use as a substrate. Over 100 drugs depend on its metabolism for their activity and many others act on the enzyme as inductors or inhibitors. 

Some foods also act as inductors or inhibitors of enzymatic activity. The following table shows the most common:
Foods and their influence on drug metabolism
Food | Mechanism | Drugs affected
(not specified) | Enzymatic inductor | Acenocoumarol, warfarin
Grapefruit juice | Enzymatic inhibition | (not specified)
Soya | Enzymatic inhibition | Clozapine, haloperidol, olanzapine, caffeine, NSAIDs, phenytoin, zafirlukast, warfarin
Garlic | Increases antiplatelet activity | (not specified)
Ginseng | To be determined | Warfarin, heparin, aspirin and NSAIDs
Ginkgo biloba | Strong inhibitor of platelet aggregation factor | Warfarin, aspirin and NSAIDs
Hypericum perforatum (St John's wort) | Enzymatic inductor (CYP450) | Warfarin, digoxin, theophylline, cyclosporine, phenytoin and antiretrovirals
Ephedra | Receptor-level agonist | MAOIs, central nervous system stimulants, ergot alkaloids and xanthines
Kava (Piper methysticum) | Unknown | Levodopa
Ginger | Inhibits thromboxane synthetase (in vitro) | Anticoagulants
Chamomile | Unknown | Benzodiazepines, barbiturates and opioids
Hawthorn | Unknown | Beta-adrenergic antagonists, cisapride, digoxin, quinidine
Grapefruit juice can act as an enzyme inhibitor.
 
Any study of pharmacological interactions between particular medicines should also discuss the likely interactions of some medicinal plants. The effects caused by medicinal plants should be considered in the same way as those of medicines as their interaction with the organism gives rise to a pharmacological response. Other drugs can modify this response and also the plants can give rise to changes in the effects of other active ingredients.

There is little data available regarding interactions involving medicinal plants for the following reasons:

  1. False sense of security regarding medicinal plants. The interaction between a medicinal plant and a drug is usually overlooked due to a belief in the "safety of medicinal plants."
  2. Variability of composition, both qualitative and quantitative. The composition of a plant-based drug is often subject to wide variations due to a number of factors, such as seasonal differences in concentrations, soil type, climatic changes, or the existence of different varieties or chemical races within the same plant species that have variable compositions of the active ingredient. On occasion, an interaction can be due to just one active ingredient, but this can be absent in some chemical varieties or present in concentrations too low to cause an interaction. Counter-interactions can even occur. This happens, for instance, with ginseng: the Panax ginseng variety increases the prothrombin time, while the Panax quinquefolius variety decreases it.
  3. Absence of use in at-risk groups, such as hospitalized and polypharmacy patients, who tend to have the majority of drug interactions.
  4. Limited consumption of medicinal plants has given rise to a lack of interest in this area.
Medicinal plants are usually included in the category of foods, as they are usually taken as a tea or food supplement. However, medicinal plants are increasingly being taken in a manner more often associated with conventional medicines: pills, tablets, capsules, etc.

Excretion interactions

Renal excretion

Human kidney nephron.

Only the free fraction of a drug that is dissolved in the blood plasma can be removed through the kidney. Therefore, drugs that are tightly bound to plasma proteins are not available for renal excretion unless they are metabolized, in which case they may be eliminated as metabolites. Creatinine clearance is used as a measure of kidney function, but it is only useful in cases where the drug is excreted in an unaltered form in the urine. The excretion of drugs from the kidney's nephrons has the same properties as that of any other organic solute: passive filtration, reabsorption and active secretion. In the latter phase, the secretion of drugs is an active process that is subject to conditions relating to the saturability of the transport mechanism and competition between substrates. These are therefore key sites where interactions between drugs could occur. Elimination also depends on a number of factors including the pH of the urine: it has been shown that drugs that act as weak bases are increasingly excreted as the urine becomes more acidic, and the inverse is true for weak acids. This mechanism is of great use when treating intoxications (by making the urine more acidic or more alkaline), and it is also used by some drugs and herbal products to produce their interactive effect.
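The urine-pH effect described above, often called ion trapping, follows from the same acid-base equilibrium that governs absorption: the ionised form of a drug cannot diffuse back across the tubular epithelium, so it stays in the urine and is excreted. A minimal sketch for a weak base (the pKa of 9 and the pH values are illustrative assumptions):

```python
def ionized_fraction_base(pH, pKa):
    """Fraction of a weak base in its ionised, non-reabsorbable form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# A weak base with pKa 9 is almost entirely ionised (trapped) in
# acidified urine, but partly non-ionised in alkaline urine:
print(ionized_fraction_base(pH=5.0, pKa=9.0))   # ~0.9999
print(ionized_fraction_base(pH=8.0, pKa=9.0))   # ~0.91
```

This is why acidifying the urine speeds the excretion of weak bases during intoxication, while alkalinising it does the same for weak acids.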

Bile excretion

Bile excretion is different from kidney excretion as it always involves energy expenditure in active transport across the epithelium of the bile duct against a concentration gradient. This transport system can also be saturated if the plasma concentrations of the drug are high. Bile excretion of drugs mainly takes place where their molecular weight is greater than 300 and they contain both polar and lipophilic groups. The glucuronidation of the drug in the kidney also facilitates bile excretion. Substances with similar physicochemical properties can block the receptor, which is important in assessing interactions. A drug excreted in the bile duct can occasionally be reabsorbed by the intestines (in the enterohepatic circuit), which can also lead to interactions with other drugs.

Herb-drug interactions

Herb-drug interactions are drug interactions that occur between herbal medicines and conventional drugs. These types of interactions may be more common than drug-drug interactions because herbal medicines often contain multiple pharmacologically active ingredients, while conventional drugs typically contain only one. Some such interactions are clinically significant, although most herbal remedies are not associated with drug interactions causing serious consequences. Most herb-drug interactions are moderate in severity. The conventional drugs most commonly implicated in herb-drug interactions are warfarin, insulin, aspirin, digoxin, and ticlopidine, due to their narrow therapeutic indices. The natural products most commonly implicated in such interactions are those containing St. John's wort, magnesium, calcium, iron, or ginkgo.

Examples

Examples of herb-drug interactions include, but are not limited to:

Mechanisms

The mechanisms underlying most herb-drug interactions are not fully understood. Interactions between herbal medicines and anticancer drugs typically involve drug-metabolizing enzymes of the cytochrome P450 family. For example, St. John's wort has been shown to induce CYP3A4 and P-glycoprotein in vitro and in vivo.

Underlying factors

It is possible to take advantage of positive drug interactions. However, the negative interactions are usually of more interest because of their pathological significance, and also because they are often unexpected, and may even go undiagnosed. By studying the conditions that favor the appearance of interactions, it should be possible to prevent them, or at least diagnose them in time. The factors or conditions that predispose the appearance of interactions include:
  • Old age: factors relating to how human physiology changes with age may affect the interaction of drugs. For example, liver metabolism, kidney function, nerve transmission or the functioning of bone marrow all decrease with age. In addition, in old age there is a sensory decrease that increases the chances of errors being made in the administration of drugs.
  • Polypharmacy: The use of multiple drugs by a single patient, to treat one or more ailments. The more drugs a patient takes the more likely it will be that some of them will interact.
  • Genetic factors: Genes synthesize enzymes that metabolize drugs. Some races have genotypic variations that could decrease or increase the activity of these enzymes. The consequence of this would, on occasions, be a greater predisposition towards drug interactions and therefore a greater predisposition for adverse effects to occur. This is seen in genotype variations in the isozymes of cytochrome P450.
  • Hepatic or renal diseases: The blood concentrations of drugs that are metabolized in the liver and/or eliminated by the kidneys may be altered if either of these organs is not functioning correctly. If this is the case an increase in blood concentration is normally seen.
  • Serious diseases that could worsen if the dose of the medicine is reduced.
  • Drug dependent factors:
    • Narrow therapeutic index: Where the difference between the effective dose and the toxic dose is small. The drug digoxin is an example of this type of drug.
    • Steep dose-response curve: Small changes in the dosage of a drug produce large changes in the drug's concentration in the patient's blood plasma.
    • Saturable hepatic metabolism: In addition to dose effects, the capacity to metabolize the drug is greatly decreased once the metabolic pathway becomes saturated.
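The "steep dose-response curve" factor above can be illustrated with the Hill equation: the higher the Hill coefficient, the more a small change in concentration alters the effect, narrowing the margin for interactions that shift plasma levels. All parameters below are illustrative:

```python
def effect(conc, ec50=1.0, emax=100.0, hill=1.0):
    """Percentage drug effect from the Hill equation."""
    return emax * conc ** hill / (ec50 ** hill + conc ** hill)

# Shallow curve (hill=1): doubling the concentration around the EC50
# moves the effect from 50% to about 67%.
print(effect(1.0), effect(2.0))                    # 50.0  ~66.7
# Steep curve (hill=4): the same doubling jumps from 50% to about 94%.
print(effect(1.0, hill=4), effect(2.0, hill=4))    # 50.0  ~94.1
```

For a drug with a steep curve, an interaction that merely doubles the plasma concentration can move the patient from a half-maximal to a near-maximal (possibly toxic) response.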

Epidemiology

Among US adults older than 55, 4% are taking medications and/or supplements that put them at risk of a major drug interaction. Potential drug-drug interactions have increased over time and are more common among elderly people with low levels of education, even after controlling for age, sex, place of residence, and comorbidity.

Universal Networking Language

From Wikipedia, the free encyclopedia
 
Universal Networking Language (UNL) is a declarative formal language specifically designed to represent semantic data extracted from natural language texts. It can be used as a pivot language in interlingual machine translation systems or as a knowledge representation language in information retrieval applications.

Scope and goals

UNL is designed to establish a simple foundation for representing the most central aspects of information and meaning in a machine- and human-language-independent form. As a language-independent formalism, UNL aims to code, store, disseminate and retrieve information independently of the original language in which it was expressed. In this sense, UNL seeks to provide tools for overcoming the language barrier in a systematic way.

At first glance, UNL seems to be a kind of interlingua, into which source texts are converted before being translated into target languages. It can, in fact, be used for this purpose, and very efficiently, too. However, its real strength is knowledge representation and its primary objective is to provide an infrastructure for handling knowledge that already exists or can exist in any given language.

Nevertheless, it is important to note that at present it would be foolish to claim to represent the “full” meaning of any word, sentence, or text for any language. Subtleties of intention and interpretation make the “full meaning,” however we might conceive it, too variable and subjective for any systematic treatment. Thus UNL avoids the pitfalls of trying to represent the “full meaning” of sentences or texts, targeting instead the “core” or “consensual” meaning most often attributed to them. In this sense, much of the subtlety of poetry, metaphor, figurative language, innuendo, and other complex, indirect communicative behaviors is beyond the current scope and goals of UNL. Instead, UNL targets direct communicative behavior and literal meaning as a tangible, concrete basis for most human communication in practical, day-to-day settings.

Structure

In the UNL approach, information conveyed by natural language is represented sentence by sentence as a hypergraph composed of a set of directed binary labeled links (referred to as relations) between nodes or hypernodes (the Universal Words, or simply UWs), which stand for concepts. UWs can also be annotated with attributes representing context information. 

As an example, the English sentence ‘The sky was blue?!’ can be represented in UNL as follows:

[Figure UNLGraph.svg: UNL graph of the sentence "The sky was blue?!"]

In the example above, "sky(icl>natural world)" and "blue(icl>color)", which represent individual concepts, are UWs; "aoj" (= attribute of an object) is a directed binary semantic relation linking the two UWs; and "@def", "@interrogative", "@past", "@exclamation" and "@entry" are attributes modifying UWs.
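In UNL's linear (textual) notation, a graph with these components would take roughly the following form. This is a reconstruction from the relation, UWs and attributes listed above; the exact attribute placement in the original figure may differ:

```
aoj(blue(icl>color).@entry.@past.@interrogative.@exclamation,
    sky(icl>natural world).@def)
```

Here the "aoj" relation links the entry node "blue(icl>color)" (carrying the tense, interrogative and exclamation attributes) to "sky(icl>natural world)" (marked definite with "@def").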

UWs are intended to represent universal concepts, but are expressed in English words or in any other natural language in order to be humanly readable. They consist of a "headword" (the UW root) and a "constraint list" (the UW suffix between parentheses), where the constraints are used to disambiguate the general concept conveyed by the headword. The set of UWs is organized in the UNL Ontology, in which high-level concepts are related to lower-level ones through the relations "icl" (= is a kind of), "iof" (= is an instance of) and "equ" (= is equal to).

Relations are intended to represent semantic links between words in every existing language. They can be ontological (such as "icl" and "iof," referred to above), logical (such as "and" and "or"), and thematic (such as "agt" = agent, "ins" = instrument, "tim" = time, "plc" = place, etc.). There are currently 46 relations in the UNL Specs. They jointly define the UNL syntax.

Attributes represent information that cannot be conveyed by UWs and relations. Normally, they represent information concerning time ("@past", "@future", etc.), reference ("@def", "@indef", etc.), modality ("@can", "@must", etc.), focus ("@topic", "@focus", etc.), and so on.

Within the UNL Program, the process of representing natural language sentences in UNL graphs is called UNLization, and the process of generating natural language sentences out of UNL graphs is called NLization. UNLization, which involves natural language analysis and understanding, is intended to be carried out semi-automatically (i.e., by humans with computer aids); and NLization is intended to be carried out fully automatically.

History

The UNL Programme started in 1996, as an initiative of the Institute of Advanced Studies of the United Nations University in Tokyo, Japan. In January 2001, the United Nations University set up an autonomous organization, the UNDL Foundation, to be responsible for the development and management of the UNL Programme. The foundation, a non-profit international organisation, has an independent identity from the United Nations University, although it has special links with the UN. It inherited from the UNU/IAS the mandate of implementing the UNL Programme so that it can fulfil its mission.

The programme has already crossed important milestones. The overall architecture of the UNL System has been developed with a set of basic software and tools necessary for its functioning. These are being tested and improved. A vast amount of linguistic resources from the various native languages already under development, as well as from the UNL expression, has been accumulated in the last few years. Moreover, the technical infrastructure for expanding these resources is already in place, thus facilitating the participation of many more languages in the UNL system from now on. A growing number of scientific papers and academic dissertations on the UNL are being published every year.

The most visible accomplishment so far is the recognition by the Patent Co-operation Treaty (PCT) of the innovative character and industrial applicability of the UNL, which was obtained in May 2002 through the World Intellectual Property Organisation (WIPO). Acquiring the patents (US patents 6,704,700 and 7,107,206) for the UNL is a completely novel achievement within the United Nations.

Heritage language

From Wikipedia, the free encyclopedia
 
A heritage language is a minority language (either immigrant or indigenous) learned by its speakers at home as children, but never fully developed because of insufficient input from the social environment: in fact, the community of speakers grows up with a dominant language in which they become more competent. Polinsky & Kagan describe it as a continuum (derived from Valdés's definition of heritage language) that ranges from fluent speakers to individuals who barely speak the home language. In countries or cultures where one's mother tongue is determined by ethnic group, a heritage language would be linked to the native language.

The term can also refer to the language of a person's family or community that the person does not speak or understand, but identifies with culturally.

Definitions and use

Heritage language is a language which is predominantly spoken by "nonsocietal" groups and linguistic minorities.

In various fields, such as foreign language education and linguistics, the definitions of heritage language become more specific and divergent. In foreign language education, heritage language is defined in terms of a student's upbringing and functional proficiency in the language: a student raised in a home where a non-majority language is spoken is a heritage speaker of that language if they possess some proficiency in it. Under this definition, individuals who have some cultural connection with the language but do not speak it are not considered heritage students. This restricted definition became popular in the mid-1990s with the publication of Standards for Foreign Language Learning by the American Council on the Teaching of Foreign Languages.

Among linguists, heritage language is an end-state language that is defined based on the temporal order of acquisition and, often, the language dominance in the individual. A heritage speaker acquires the heritage language as their first language through natural input in the home environment and acquires the majority language as a second language, usually when they start school and talk about different topics with people in school, or by exposure through media (written texts, internet, popular culture etc.). As exposure to the heritage language decreases and exposure to the majority language increases, the majority language becomes the individual’s dominant language and acquisition of the heritage language changes. The results of these changes can be seen in divergence of the heritage language from monolingual norms in the areas of phonology, lexical knowledge (knowledge of vocabulary or words), morphology, syntax, semantics and code-switching, although mastery of the heritage language may vary from purely receptive skills in only informal spoken language to native-like fluency.

Controversy in definition

As stated by Polinsky and Kagan: "The definition of a heritage speaker in general and for specific languages continues to be debated. The debate is of particular significance in such languages as Chinese, Arabic, and languages of India and the Philippines, where speakers of multiple languages or dialects are seen as heritage speakers of a single standard language taught for geographic, cultural or other reasons (Mandarin Chinese, Classical Arabic, Hindi, or Tagalog, respectively)."

One idea that prevails in the literature is that "[heritage] languages include indigenous languages that are often endangered ... as well as world languages that are commonly spoken in many other regions of the world (Spanish in the United States, Arabic in France)". However, that view is not shared universally. In Canada, for example, First Nations languages are not classified as heritage languages by some groups whereas they are so classified by others.

The label heritage is given to a language based principally on the social status of its speakers and not necessarily on any linguistic property. Thus, while Spanish typically comes in second in terms of native speakers worldwide and has official status in a number of countries, it is considered a heritage language in the English-dominant United States and Canada. Outside the United States and Canada, heritage language definitions and use vary.

Speakers of the same heritage language raised in the same community may differ significantly in terms of their language abilities, yet be considered heritage speakers under this definition. Some heritage speakers may be highly proficient in the language, possessing several registers, while other heritage speakers may be able to understand the language but not produce it. Other individuals that simply have a cultural connection with a minority language but do not speak it may consider it to be their heritage language. It is held by some that ownership does not necessarily depend on usership: “Some Aboriginal people distinguish between usership and ownership. There are even those who claim that they own a language although they only know one single word of it: its name.”

Proficiency

Heritage learners have a fluent command of the dominant language and are comfortable using it in formal settings because of their exposure to it through formal education. Their command of the heritage language, however, varies widely. Some heritage learners may lose fluency in the first language after they begin formal education in the dominant language. Others may use the heritage language consistently at home and with family but receive little or no formal training in it, and thus may struggle with literacy or with using it in broader settings outside the home. A further factor affecting acquisition is the learner's own willingness or reluctance to learn the heritage language.

One factor shown to influence loss of fluency in the heritage language is age. Studies have shown that younger bilingual children are more susceptible to fluency loss than older bilingual children: the older the child is when the dominant language is introduced, the less likely the child is to lose the ability to use the first language (the heritage language). This is because an older child has had more exposure to and practice with the heritage language, and the heritage language is therefore more likely to remain the child's primary language.

Researchers attribute this phenomenon primarily to an individual's memory networks. Once a memory network is organized, it is difficult for the brain to reorganize information that runs contrary to the initial information, because the earlier information was processed first. This becomes a struggle for adults trying to learn a different language: once an individual has learned a language fluently, the grammatical rules and pronunciations of that first language heavily influence any new language they learn.

An emerging and effective way of measuring the proficiency of a heritage speaker is speech rate. A study of gender restructuring in heritage Russian showed that heritage speakers fell into two groups: those who maintained the three-gender system and those who radically reanalyzed it as a two-gender system. Reanalysis of the three-gender system as a two-gender system correlated strongly with a slower speech rate. The relationship is straightforward: lower-proficiency speakers have more difficulty accessing lexical items, and their speech slows as a result.

Although speech rate has been shown to be an effective way of measuring proficiency of heritage speakers, some heritage speakers are reluctant to produce any heritage language whatsoever. Lexical proficiency is an alternative method that is also effective in measuring proficiency. In a study with heritage Russian speakers, there was a strong correlation between the speaker's knowledge of lexical items (measured using a basic word list of about 200) and the speaker's control over grammatical knowledge such as agreement, temporal marking, and embedding.

Some heritage speakers explicitly study the language to gain additional proficiency. The learning trajectories of heritage speakers are markedly different from those of second language learners with little or no previous exposure to the target language. For instance, heritage learners typically show a phonological advantage over second language learners in both perception and production of the heritage language, even when their exposure to it was interrupted very early in life. Heritage speakers also tend to distinguish, rather than conflate, easily confusable sounds in the heritage language and the dominant language more reliably than second language learners. In morphosyntax as well, heritage speakers have been found to be more native-like than second language learners, although they typically differ significantly from native speakers.

Many linguists frame this change in heritage language acquisition as "incomplete acquisition" or "attrition". "Incomplete acquisition", loosely defined by Montrul, is "the outcome of language acquisition that is not complete in childhood": particular properties of the language fail to reach age-appropriate levels of proficiency after the dominant language has been introduced. Attrition, as defined by Montrul, is the loss of a certain property of a language after one has already mastered it with native-speaker accuracy. Montrul and many other linguists have used these two kinds of language loss to describe the change in heritage language acquisition, but this is not the only way linguists describe it.

One argument against incomplete acquisition is that the input heritage speakers receive differs from that of monolinguals (the input may be affected by cross-generational attrition, among other factors), so the comparison of heritage speakers against monolinguals is weak. This argument, by Pascual and Rothman, claims that the acquisition of the heritage language is therefore not incomplete, but complete and simply different from monolingual acquisition. A second argument calls for shifting the focus from the result of incomplete heritage language acquisition to the process of heritage language acquisition. On this view, the crucial factor in changes to heritage language acquisition is the extent to which the heritage speaker activates and processes the heritage language. This newer model thus moves away from language acquisition as dependent on exposure to input, and towards dependence on the frequency of processing for production and comprehension of the heritage language.

Some colleges and universities offer courses prepared for speakers of heritage languages. For example, students who grow up learning some Spanish in the home may enroll in a course that will build on their Spanish abilities.

First language

From Wikipedia, the free encyclopedia
The monument for the mother tongue ("Ana dili") in Nakhchivan, Azerbaijan

A first language, native language or mother/father/parent tongue (also known as arterial language or L1), is a language that a person has been exposed to from birth or within the critical period. In some countries, the term native language or mother tongue refers to the language of one's ethnic group rather than one's first language.

Sometimes, the term "mother tongue" or "mother language" (or "father tongue" / "father language") is used for the language that a person learned as a child (usually from their parents). Children growing up in bilingual homes can, according to this definition, have more than one mother tongue or native language.

The first language of a child is part of that child's personal, social and cultural identity. Another impact of the first language is that it brings about reflection on, and the learning of, successful social patterns of acting and speaking. It is largely responsible for differentiating an individual's linguistic competence in action. While some argue that there is no such thing as a "native speaker" or a "mother tongue", it is important to understand these key terms, as well as what it means to be a "non-native" speaker and the implications that can have on one's life. Research suggests that while a non-native speaker may develop fluency in a target language after about two years of immersion, it can take between five and seven years for that child to work at the same level as native-speaking counterparts.

On 17 November 1999, UNESCO designated 21 February as International Mother Language Day.

Definitions

One of the more widely accepted definitions of a native speaker is someone who was born in a particular country and raised to speak the language of that country during the critical period of their development. A person qualifies as a "native speaker" of a language by being born and immersed in the language during youth, in a family in which the adults shared a similar language experience with the child. Native speakers are considered an authority on their language because of their natural acquisition process, as opposed to having learned the language later in life. That authority is achieved by personal interaction with the language and its speakers. Native speakers will not necessarily know every grammatical rule of the language, but they will have good "intuition" about the rules through their experience with it.

The designation "native language", in its general usage, is thought to be imprecise and subject to various interpretations that are biased linguistically, especially with respect to bilingual children from ethnic minority groups. Many scholars have given definitions of 'native language' based on common usage, the emotional relation of the speaker towards the language, and even its dominance in relation to the environment. However, all three criteria lack precision. For many children whose home language differs from the language of the environment (the 'official' language), it is debatable which language is their "native language".

Defining "native language"

  • Based on origin: the language(s) one learned first (the language(s) in which one has established the first long-lasting verbal contacts).
  • Based on internal identification: the language(s) one identifies with/as a speaker of;
  • Based on external identification: the language(s) one is identified with/as a speaker of, by others.
  • Based on competence: the language(s) one knows best.
  • Based on function: the language(s) one uses most.
In some countries, such as Kenya, India, and various East Asian and Central Asian countries, "mother language" or "native language" is used to indicate the language of one's ethnic group in both common and journalistic parlance ("I have no apologies for not learning my mother tongue"), rather than one's first language. Also, in Singapore, "mother tongue" refers to the language of one's ethnic group regardless of actual proficiency, and the "first language" refers to English, which was established on the island under the British Empire, and is the lingua franca for most post-independence Singaporeans because of its use as the language of instruction in government schools and as a working language.

In the context of population censuses conducted on the Canadian population, Statistics Canada defines mother tongue as "the first language learned at home in childhood and still understood by the individual at the time of the census." It is quite possible that the first language learned is no longer a speaker's dominant language. That includes young immigrant children whose families have moved to a new linguistic environment, as well as people who learned their mother tongue as a young child at home (rather than the language of the majority of the community) and who may have lost, in part or in totality, the language they first acquired.

According to Ivan Illich, the term "mother tongue" was first used by Catholic monks to designate a particular language they used, instead of Latin, when they were "speaking from the pulpit". That is, the "holy mother the Church" introduced this term, and colonies inherited it from Christianity as a part of colonialism.

J. R. R. Tolkien, in his 1955 lecture "English and Welsh", distinguishes the "native tongue" from the "cradle tongue". The latter is the language one learns during early childhood; one's true "native tongue" may be different, possibly determined by an inherited linguistic taste, and may later in life be discovered through a strong emotional affinity to a specific dialect (Tolkien personally confessed to such an affinity to the Middle English of the West Midlands in particular).

Children brought up speaking more than one language can have more than one native language, and be bilingual or multilingual. By contrast, a second language is any language that one speaks other than one's first language.

Bilingualism

International Mother Language Day Monument in Sydney, Australia, unveiling ceremony, 19 February 2006
 
A related concept is bilingualism. One definition is that a person is bilingual if they are equally proficient in two languages. Someone who grows up speaking Spanish and then learns English for four years is bilingual only if they speak the two languages with equal fluency. Peal and Lambert were the first to test only "balanced" bilinguals, that is, children completely fluent in two languages who feel that neither is their "native" language because they grasp both so perfectly. This study found that
  • balanced bilinguals perform significantly better in tasks that require flexibility (they constantly shift between the two known languages depending on the situation),
  • they are more aware of the arbitrary nature of language,
  • they choose word associations based on logical rather than phonetic preferences.

Multilingualism

One can have two or more native languages, thus being a native bilingual or indeed multilingual. The order in which these languages are learned is not necessarily the order of proficiency. For instance, if a French-speaking couple have a child who learned French first but then grew up in an English-speaking country, the child would likely be most proficient in English. Other examples are India, Indonesia, the Philippines, Kenya, Malaysia, Singapore, and South Africa, where most people speak more than one language.

Defining "native speaker"

Defining what constitutes a native speaker is difficult, and there is no test which can identify one. It is not known whether native speakers are a defined group of people, or if the concept should be thought of as a perfect prototype to which actual speakers may or may not conform.

An article titled "The Native Speaker: An Achievable Model?" published by the Asian EFL Journal states that there are six general principles that relate to the definition of "native speaker". The principles, according to the study, are typically accepted by language experts across the scientific field. A native speaker is defined according to the following guidelines:
  1. The individual acquired the language in early childhood and maintains the use of the language.
  2. The individual has intuitive knowledge of the language.
  3. The individual is able to produce fluent, spontaneous discourse.
  4. The individual is communicatively competent in different social contexts.
  5. The individual identifies with or is identified by a language community.
  6. The individual does not have a foreign accent.

Universal grammar

From Wikipedia, the free encyclopedia
 
Noam Chomsky is usually associated with the term universal grammar in the 20th and 21st centuries

Universal grammar (UG), in modern linguistics, is the theory of the genetic component of the language faculty, usually credited to Noam Chomsky. The basic postulate of UG is that a certain set of structural rules are innate to humans, independent of sensory experience. With more linguistic stimuli received in the course of psychological development, children then adopt specific syntactic rules that conform to UG. It is sometimes known as "mental grammar", and stands contrasted with other "grammars", e.g. prescriptive, descriptive and pedagogical. The advocates of this theory emphasize and partially rely on the poverty of the stimulus (POS) argument and the existence of some universal properties of natural human languages. However, the latter has not been firmly established, as some linguists have argued languages are so diverse that such universality is rare. It is a matter of empirical investigation to determine precisely what properties are universal and what linguistic capacities are innate.

Argument

The theory of universal grammar proposes that if human beings are brought up under normal conditions (not those of extreme sensory deprivation), then they will always develop language with certain properties (e.g., distinguishing nouns from verbs, or distinguishing function words from content words). The theory proposes that there is an innate, genetically determined language faculty that knows these rules, making it easier and faster for children to learn to speak than it otherwise would be. This faculty does not know the vocabulary of any particular language (so words and their meanings must be learned), and there remain several parameters which can vary freely among languages (such as whether adjectives come before or after nouns) which must also be learned. Evidence in favor of this idea can be found in studies like Valian (1986), which show that children of surprisingly young ages understand syntactic categories and their distribution before this knowledge shows up in production.

As Chomsky puts it, "Evidently, development of language in the individual must involve three factors: genetic endowment, which sets limits on the attainable languages, thereby making language acquisition possible; external data, converted to the experience that selects one or another language within a narrow range; [and] principles not specific to the Faculty of Language."

Occasionally, aspects of universal grammar seem describable in terms of general facts about cognition. For example, if a predisposition to categorize events and objects as different classes of things is part of human cognition and directly results in nouns and verbs showing up in all languages, then this aspect of universal grammar could be regarded not as specific to language but as part of human cognition more generally. To distinguish properties of languages that can be traced to other facts about cognition from properties that cannot, the abbreviation UG* can be used. UG is the term often used by Chomsky for those aspects of the human brain which cause language to be the way it is (i.e., universal grammar in the sense used here); here, for the purposes of discussion, UG* is used for those aspects which are furthermore specific to language (thus UG, as Chomsky uses it, simply abbreviates universal grammar, while UG* as used here denotes a subset of universal grammar).

In the same article, Chomsky casts the theme of a larger research program in terms of the following question: "How little can be attributed to UG while still accounting for the variety of 'I-languages' attained, relying on third factor principles?" (I-languages meaning internal languages, the brain states that correspond to knowing how to speak and understand a particular language, and third factor principles meaning "principles not specific to the Faculty of Language" in the previous quote).

Chomsky has speculated that UG might be extremely simple and abstract, for example only a mechanism for combining symbols in a particular way, which he calls "merge". The following quote shows that Chomsky does not use the term "UG" in the narrow sense UG* suggested above.

"The conclusion that merge falls within UG holds whether such recursive generation is unique to FL (faculty of language) or is appropriated from other systems."

In other words, merge is seen as part of UG because it causes language to be the way it is, is universal, and is not part of the environment or of general properties independent of genetics and environment. Merge is part of universal grammar whether it is specific to language or whether, as Chomsky suggests, it is also used, for example, in mathematical thinking.

The distinction is the result of the long history of argument about UG*: whereas some people working on language agree that there is universal grammar, many people assume that Chomsky means UG* when he writes UG (and in some cases he might actually mean UG* [though not in the passage quoted above]).

Some students of universal grammar study a variety of grammars to extract generalizations called linguistic universals, often in the form of "If X holds true, then Y occurs." These have been extended to a variety of traits, such as the phonemes found in languages, the word orders which different languages choose, and the reasons why children exhibit certain linguistic behaviors.
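An implicational universal of the form "if X holds true, then Y occurs" can be pictured as a simple check over a feature table. The sketch below uses a tiny, invented sample with simplified features; real typological surveys are far larger and more nuanced.

```python
# Illustrative sketch: a toy feature table and a check for an implicational
# universal. The sample and features are simplified for demonstration.
languages = {
    "Irish":    {"vso": True,  "prepositions": True},
    "Welsh":    {"vso": True,  "prepositions": True},
    "English":  {"vso": False, "prepositions": True},
    "Japanese": {"vso": False, "prepositions": False},
}

def universal_holds(table, x, y):
    """Check 'if X then Y' over every language in the table."""
    return all(feats[y] for feats in table.values() if feats[x])

# Greenberg's Universal 3: languages with dominant VSO order are prepositional.
print(universal_holds(languages, "vso", "prepositions"))  # True for this sample
```

Note that the implication is one-directional: prepositional languages need not be VSO, as English shows in this sample.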

Other linguists who have influenced this theory include Richard Montague, who developed his version of this theory as he considered issues of the argument from poverty of the stimulus to arise from the constructivist approach to linguistic theory. The application of the idea of universal grammar to the study of second language acquisition (SLA) is represented mainly in the work of McGill linguist Lydia White.

Syntacticians generally hold that there are parametric points of variation between languages, although heated debate occurs over whether UG constraints are essentially universal due to being "hard-wired" (Chomsky's principles and parameters approach), a logical consequence of a specific syntactic architecture (the generalized phrase structure approach) or the result of functional constraints on communication (the functionalist approach).

Relation to the evolution of language

In an article entitled "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?" Hauser, Chomsky, and Fitch present the three leading hypotheses for how language evolved and brought humans to the point where they have a universal grammar.

The first hypothesis states that the faculty of language in the broad sense (FLb) is strictly homologous to animal communication. This means that homologous aspects of the faculty of language exist in non-human animals.

The second hypothesis states that the FLb is a derived and uniquely human adaptation for language. This hypothesis holds that individual traits were subject to natural selection and came to be specialized for humans.

The third hypothesis states that only the faculty of language in the narrow sense (FLn) is unique to humans. It holds that while mechanisms of the FLb are present in both human and non-human animals, the computational mechanism of recursion is recently evolved solely in humans. This is the hypothesis which most closely aligns to the typical theory of universal grammar championed by Chomsky.

History

The term "universal grammar" predates Noam Chomsky, but pre-Chomskyan ideas of universal grammar are different. For Chomsky, UG is "[the] theory of the genetically based language faculty", which makes UG a theory of language acquisition, and part of the innateness hypothesis. Earlier grammarians and philosophers thought about universal grammar in the sense of a universally shared property or grammar of all languages. The closest analog to their understanding of universal grammar in the late 20th century are Greenberg's linguistic universals.

The idea of a universal grammar can be traced back to Roger Bacon's observations in his c. 1245 Overview of Grammar and c. 1268 Greek Grammar that all languages are built upon a common grammar, even though it may undergo incidental variations; and the 13th century speculative grammarians who, following Bacon, postulated universal rules underlying all grammars. The concept of a universal grammar or language was at the core of the 17th century projects for philosophical languages. An influential work in that time was Grammaire générale by Claude Lancelot and Antoine Arnauld, who built on the works of René Descartes. They tried to describe a general grammar for languages, coming to the conclusion that grammar has to be universal. There is a Scottish school of universal grammarians from the 18th century, as distinguished from the philosophical language project, which included authors such as James Beattie, Hugh Blair, James Burnett, James Harris, and Adam Smith. The article on grammar in the first edition of the Encyclopædia Britannica (1771) contains an extensive section titled "Of Universal Grammar".

This tradition was continued in the late 19th century by Wilhelm Wundt and in the early 20th century by linguist Otto Jespersen. Jespersen disagreed with early grammarians on their formulation of "universal grammar", arguing that they tried to derive too much from Latin, and that a UG based on Latin was bound to fail considering the breadth of worldwide linguistic variation. He does not fully dispense with the idea of a "universal grammar", but reduces it to universal syntactic categories or super-categories, such as number, tense, etc. Jespersen does not discuss whether these properties come from facts about general human cognition or from a language-specific endowment (which would be closer to the Chomskyan formulation). As this work predates molecular genetics, he does not discuss the notion of a genetically conditioned universal grammar.

During the rise of behaviorism, the idea of a universal grammar (in either sense) was discarded. In the early 20th century, language was usually understood from a behaviourist perspective, suggesting that language acquisition, like any other kind of learning, could be explained by a succession of trials, errors, and rewards for success. In other words, children learned their mother tongue by simple imitation, through listening and repeating what adults said. For example, when a child says "milk" and the mother smiles and gives her child milk as a result, the child finds this outcome rewarding, which enhances the child's language development. UG re-emerged to prominence and influence in modern linguistics with the theories of Chomsky and Montague in the 1950s–1970s, as part of the "linguistics wars".

In 2016, Chomsky and Berwick co-wrote the book Why Only Us, in which they defined both the minimalist program and the strong minimalist thesis, along with its implications for their updated approach to UG theory. According to Berwick and Chomsky, the strong minimalist thesis states that "The optimal situation would be that UG reduces to the simplest computational principles which operate in accord with conditions of computational efficiency. This conjecture is ... called the Strong Minimalist Thesis (SMT)." The effect of the SMT is to shift the previous emphasis on universal grammar onto the concept that Chomsky and Berwick now call "merge". "Merge" is defined in their 2016 book: "Every computational system has embedded within it somewhere an operation that applies to two objects X and Y already formed, and constructs from them a new object Z. Call this operation Merge." SMT dictates that "Merge will be as simple as possible: it will not modify X or Y or impose any arrangement on them; in particular, it will leave them unordered, an important fact... Merge is therefore just set formation: Merge of X and Y yields the set {X, Y}."
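Because Merge is defined in the quoted passage as nothing more than unordered set formation, it can be sketched in a few lines of code. The use of frozenset below is an implementation detail (sets must be hashable to be nested), not part of the definition itself.

```python
# Minimal sketch of Merge as set formation: Merge(X, Y) = {X, Y}.
def merge(x, y):
    """Combine two already-formed objects into an unordered set."""
    return frozenset({x, y})

# Recursive generation: merged objects can themselves be merged.
dp = merge("the", "book")       # {the, book}
vp = merge("read", dp)          # {read, {the, book}}
print(vp == merge(dp, "read"))  # True: Merge imposes no order on X and Y
```

The final line illustrates the "unordered" clause of the definition: merging X with Y and merging Y with X yield the same object.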

Chomsky's theory

Chomsky argued that the human brain contains a limited set of constraints for organizing language. This implies in turn that all languages have a common structural basis: the set of rules known as "universal grammar".

Speakers proficient in a language know which expressions are acceptable in their language and which are unacceptable. The key puzzle is how speakers come to know these restrictions, since expressions that violate them are not present in the input marked as such. Chomsky argued that this poverty of the stimulus means that Skinner's behaviourist perspective cannot explain language acquisition. The absence of negative evidence, that is, evidence that an expression belongs to the class of ungrammatical sentences in a given language, is the core of his argument. For example, in English an interrogative pronoun like what cannot be related to a predicate within a relative clause:
*"What did John meet a man who sold?"
Such expressions are not available to language learners: they are, by hypothesis, ungrammatical. Speakers of the local language do not use them, nor do they point them out as unacceptable to learners. Universal grammar offers an explanation for the poverty of the stimulus by making certain restrictions universal characteristics of human languages. Language learners are consequently never tempted to generalize in an illicit fashion.

Presence of creole languages

The presence of creole languages is sometimes cited as further support for this theory, especially by Bickerton's controversial language bioprogram theory. Creoles are languages that develop and form when disparate societies come together and are forced to devise a new system of communication. The system used by the original speakers is typically an inconsistent mix of vocabulary items, known as a pidgin. As these speakers' children begin to acquire their first language, they use the pidgin input to effectively create their own original language, known as a creole. Unlike pidgins, creoles have native speakers (those with acquisition from early childhood) and make use of a full, systematic grammar.

According to Bickerton, the idea of universal grammar is supported by creole languages because certain features are shared by virtually all in the category. For example, their default point of reference in time (expressed by bare verb stems) is not the present moment, but the past. Using pre-verbal auxiliaries, they uniformly express tense, aspect, and mood. Negative concord occurs, but it affects the verbal subject (as opposed to the object, as it does in languages like Spanish). Another similarity among creoles can be seen in the fact that questions are created simply by changing the intonation of a declarative sentence, not its word order or content.

However, extensive work by Carla Hudson-Kam and Elissa Newport suggests that creole languages may not support a universal grammar at all. In a series of experiments, Hudson-Kam and Newport looked at how children and adults learn artificial grammars. They found that children tend to ignore minor variations in the input when those variations are infrequent, and reproduce only the most frequent forms. In doing so, they tend to standardize the language that they hear around them. Hudson-Kam and Newport hypothesize that in a pidgin-development situation (and in the real-life situation of a deaf child whose parents are or were disfluent signers), children systematize the language they hear based on the probability and frequency of forms, rather than in the way a universal grammar would suggest. Further, it follows that creoles would share features with the languages from which they are derived, and thus look similar to them in terms of grammar.
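The contrast between regularizing the input and matching its probabilities can be sketched with a toy learner. The input counts below are invented for illustration and do not come from the Hudson-Kam and Newport experiments.

```python
# Illustrative sketch: a "regularizing" learner reproduces only the majority
# form, while a probability-matching learner reproduces forms in proportion
# to their input frequencies. Input counts are invented for demonstration.
from collections import Counter
import random

input_forms = ["ka"] * 70 + ["po"] * 20 + ["ni"] * 10  # inconsistent input

def regularize(forms):
    """Always produce the single most frequent form in the input."""
    return Counter(forms).most_common(1)[0][0]

def probability_match(forms, n, seed=0):
    """Produce n forms sampled in proportion to their input frequencies."""
    rng = random.Random(seed)
    return [rng.choice(forms) for _ in range(n)]

print(regularize(input_forms))                       # 'ka': the majority form only
print(Counter(probability_match(input_forms, 100)))  # roughly mirrors 70/20/10
```

On this sketch, the finding that children regularize corresponds to the first strategy, while adults in such experiments tend toward the second.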

Many researchers of universal grammar argue against the concept of relexification, which holds that a language replaces its lexicon almost entirely with that of another. This runs counter to the universalist idea of an innate universal grammar.

Criticisms

Geoffrey Sampson maintains that universal grammar theories are not falsifiable and are therefore pseudoscientific. He argues that the grammatical "rules" linguists posit are simply post-hoc observations about existing languages, rather than predictions about what is possible in a language. Similarly, Jeffrey Elman argues that the unlearnability of languages assumed by universal grammar is based on a too-strict, "worst-case" model of grammar that is not in keeping with any actual grammar. In keeping with these points, James Hurford argues that the postulate of a language acquisition device (LAD) essentially amounts to the trivial claim that languages are learnt by humans, and thus that the LAD is less a theory than an explanandum looking for theories.

Morten H. Christiansen and Nick Chater have argued that the relatively fast-changing nature of language would prevent the slower-changing genetic structures from ever catching up, undermining the possibility of a genetically hard-wired universal grammar. Instead of an innate universal grammar, they claim, "apparently arbitrary aspects of linguistic structure may result from general learning and processing biases deriving from the structure of thought processes, perceptuo-motor factors, cognitive limitations, and pragmatics".

Hinzen summarizes the most common criticisms of universal grammar:
  • Universal grammar has no coherent formulation and is indeed unnecessary.
  • Universal grammar is in conflict with biology: it cannot have evolved by standardly accepted neo-Darwinian evolutionary principles.
  • There are no linguistic universals: universal grammar is refuted by abundant variation at all levels of linguistic organization, which lies at the heart of the human faculty of language.
In addition, it has been suggested that people learn about probabilistic patterns of word distributions in their language, rather than hard and fast rules (see Distributional hypothesis). For example, children overgeneralize the past-tense marker "-ed" and conjugate irregular verbs incorrectly, producing forms like goed and eated, and then correct these errors over time. It has also been proposed that the poverty of the stimulus problem can be largely avoided if it is assumed that children employ similarity-based generalization strategies in language learning, generalizing about the usage of new words from similar words that they already know how to use.

Language acquisition researcher Michael Ramscar has suggested that when children erroneously expect an ungrammatical form that then never occurs, the repeated failure of expectation serves as a form of implicit negative feedback that allows them to correct their errors over time; for example, children revise overgeneralizations such as goed to went through this repeated failure. This implies that word learning is a probabilistic, error-driven process, rather than a process of fast mapping, as many nativists assume.
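The error-driven correction Ramscar describes can be sketched as a simple prediction-error (delta-rule) update: the learner starts out expecting the overgeneralized form, and every exposure to the adult form where the wrong form was predicted nudges the expectations toward the observed form. The initial expectation values and learning rate below are illustrative assumptions, not parameters from Ramscar's work.

```python
# Toy error-driven model: a child initially expects "goed" as the past tense
# of "go"; each time "went" is heard instead, the failed expectation acts as
# implicit negative feedback.

expectation = {"goed": 0.8, "went": 0.2}  # overgeneralization dominates at first
learning_rate = 0.1

def hear_past_of_go(heard="went"):
    """Update each form's expectation toward what was actually observed."""
    for form in expectation:
        target = 1.0 if form == heard else 0.0
        expectation[form] += learning_rate * (target - expectation[form])

for _ in range(50):  # repeated exposures to adult speech
    hear_past_of_go("went")

print(expectation["went"] > expectation["goed"])  # prints True
```

After enough exposures, the expectation for "went" dominates and "goed" decays toward zero, mirroring the gradual, probabilistic correction described in the paragraph above.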

In the domain of field research, the Pirahã language is claimed to be a counterexample to the basic tenets of universal grammar. This research has been led by Daniel Everett. Among other things, this language is alleged to lack all evidence for recursion, including embedded clauses, as well as quantifiers and colour terms. According to the writings of Everett, the Pirahã showed these linguistic shortcomings not because they were simple-minded, but because their culture—which emphasized concrete matters in the present and also lacked creation myths and traditions of art making—did not necessitate it. Some other linguists have argued, however, that some of these properties have been misanalyzed, and that others are actually expected under current theories of universal grammar. Other linguists have attempted to reassess Pirahã to determine whether it does indeed use recursion. In a corpus analysis of the Pirahã language, linguists failed to disprove Everett's arguments against universal grammar and the lack of recursion in Pirahã. However, they also stated that there was "no strong evidence for the lack of recursion either" and they provided "suggestive evidence that Pirahã may have sentences with recursive structures".

Daniel Everett has argued that even if a universal grammar is not impossible in principle, it should not be accepted because we have equally or more plausible theories that are simpler. In his words, "universal grammar doesn't seem to work, there doesn't seem to be much evidence for [it]. And what can we put in its place? A complex interplay of factors, of which culture, the values human beings share, plays a major role in structuring the way that we talk and the things that we talk about." Michael Tomasello, a developmental psychologist, also supports this claim, arguing that "although many aspects of human linguistic competence have indeed evolved biologically, specific grammatical principles and constructions have not. And universals in the grammatical structure of different languages have come from more general processes and constraints of human cognition, communication, and vocal-auditory processing, operating during the conventionalization and transmission of the particular grammatical constructions of particular linguistic communities."

Inequality (mathematics)