Sunday, November 19, 2023

Polypharmacy

From Wikipedia, the free encyclopedia

Polypharmacy (polypragmasia) is an umbrella term describing the simultaneous use of multiple medicines by a patient for their conditions. It is most often defined as regularly taking five or more medicines, but there is no standard definition, and the term has also been applied when a person is prescribed two or more medications at the same time. Polypharmacy may be the consequence of having multiple long-term conditions (multimorbidity) and is more common in people who are older. In some cases, an excessive number of medications taken at the same time is worrisome, especially for older people with many chronic health conditions, because it increases the risk of an adverse event in that population. In many cases polypharmacy cannot be avoided, but 'appropriate polypharmacy' practices are encouraged to decrease the risk of adverse effects. Appropriate polypharmacy is defined as prescribing for a person who has multiple conditions or complex health needs in a way that ensures medications are optimized and follow 'best evidence' practices.

The prevalence of polypharmacy is estimated to be between 10% and 90% depending on the definition used, the age group studied, and the geographic location. Polypharmacy continues to grow in importance because of aging populations. Many countries are experiencing rapid growth of the older population (65 years and older), a result of the aging of the baby-boomer generation and of increased life expectancy due to ongoing improvements in health care services worldwide. About 21% of adults with intellectual disability are also exposed to polypharmacy. The level of polypharmacy has been increasing in recent decades. Research in the USA shows that the percentage of patients older than 65 years using more than 5 medications increased from 24% to 39% between 1999 and 2012. Similarly, research in the UK found that the proportion of older people taking five or more medications had quadrupled from 12% to nearly 50% between 1994 and 2011.

Polypharmacy is not necessarily ill-advised, but in many instances can lead to negative outcomes or poor treatment effectiveness, often being more harmful than helpful or presenting too much risk for too little benefit. Therefore, health professionals consider it a situation that requires monitoring and review to validate whether all of the medications are still necessary. Concerns about polypharmacy include increased adverse drug reactions, drug interactions, prescribing cascade, and higher costs. A prescribing cascade occurs when a person is prescribed a drug and experiences an adverse drug effect that is misinterpreted as a new medical condition, so the patient is prescribed another drug. Polypharmacy also increases the burden of medication taking particularly in older people and is associated with medication non-adherence.

Polypharmacy is often associated with a decreased quality of life, including decreased mobility and cognition. Patient factors that influence the number of medications a patient is prescribed include a high number of chronic conditions requiring a complex drug regimen. Other systemic factors that impact the number of medications a patient is prescribed include a patient having multiple prescribers and multiple pharmacies that may not communicate.

Whether or not the advantages of polypharmacy (over taking single medications or monotherapy) outweigh the disadvantages or risks depends upon the particular combination and diagnosis involved in any given case. The use of multiple drugs, even in fairly straightforward illnesses, is not an indicator of poor treatment and is not necessarily overmedication. Moreover, it is well accepted in pharmacology that it is impossible to accurately predict the side effects or clinical effects of a combination of drugs without studying that particular combination of drugs in test subjects. Knowledge of the pharmacologic profiles of the individual drugs in question does not assure accurate prediction of the side effects of combinations of those drugs; and effects also vary among individuals because of genome-specific pharmacokinetics. Therefore, deciding whether and how to reduce a list of medications (deprescribe) is often not simple and requires the experience and judgment of a practicing clinician, as the clinician must weigh the pros and cons of keeping the patient on the medication. However, such thoughtful and wise review is an ideal that too often does not happen, owing to problems such as poorly handled care transitions (poor continuity of care, usually because of siloed information), overworked physicians and other clinical staff, and interventionism.

Appropriate medical uses

While polypharmacy is typically regarded as undesirable, prescription of multiple medications can be appropriate and therapeutically beneficial in some circumstances. “Appropriate polypharmacy” is described as prescribing for complex or multiple conditions in such a way that necessary medicines are used based on the best available evidence at the time to preserve safety and well-being. Polypharmacy is clinically indicated in some chronic conditions, for example in diabetes mellitus, but should be discontinued when evidence of benefit from the prescribed drugs no longer outweighs potential for harm (described below in Contraindications).

Often certain medications can interact with others in a positive way specifically intended when prescribed together, to achieve a greater effect than any of the single agents alone. This is particularly prominent in the field of anesthesia and pain management – where atypical agents such as antiepileptics, antidepressants, muscle relaxants, NMDA antagonists, and other medications are combined with more typical analgesics such as opioids, prostaglandin inhibitors, NSAIDs, and others. This practice of pain management drug synergy is known as an analgesia-sparing effect.


Special populations

People who are at greatest risk for negative polypharmacy consequences include elderly people, people with psychiatric conditions, patients with intellectual or developmental disabilities, people taking five or more drugs at the same time, those with multiple physicians and pharmacies, people who have been recently hospitalized, people who have concurrent comorbidities, people who live in rural communities, people with inadequate access to education, and those with impaired vision or dexterity. Marginalized populations may have a greater degree of polypharmacy, which can occur more frequently in younger age groups.

It is not uncommon for people who are dependent on or addicted to substances to enter or remain in a state of polypharmacy misuse. About 84% of prescription drug misusers reported using multiple drugs. Note, however, that the term polypharmacy and its variants generally refer to legal drug use as prescribed, even when used in a negative or critical context.

Measures can be taken to limit polypharmacy to its truly legitimate and appropriate needs. This is an emerging area of research, frequently called deprescribing. Reducing the number of medications, as part of a clinical review, can be an effective healthcare intervention. Clinical pharmacists can perform drug therapy reviews and teach physicians and their patients about drug safety and polypharmacy, as well as collaborate with physicians and patients to correct polypharmacy problems. Such programs are likely to reduce the potentially deleterious consequences of polypharmacy, including adverse drug events, non-adherence, hospital admissions, drug-drug interactions, geriatric syndromes, and mortality. These programs hinge upon patients and doctors informing pharmacists of other medications being prescribed, as well as herbal and over-the-counter substances and supplements, which occasionally interfere with prescription-only medications. Staff at residential aged care facilities have a range of views and attitudes towards polypharmacy that, in some cases, may contribute to an increase in medication use.

Risks of polypharmacy

The risk of polypharmacy increases with age, although there is some evidence that it may decrease slightly after age 90 years. Poorer health is a strong predictor of polypharmacy at any age, although it is unclear whether the polypharmacy causes the poorer health or if polypharmacy is used because of the poorer health. It appears possible that the risk factors for polypharmacy may be different for younger and middle-aged people compared to older people.

The use of polypharmacy is correlated with the use of potentially inappropriate medications. Potentially inappropriate medications are generally taken to mean those that have been agreed upon by expert consensus, such as in the Beers Criteria. These medications are generally inappropriate for older adults because the risks outweigh the benefits. Examples include urinary anticholinergics used to treat incontinence; the risks associated with anticholinergics include constipation, blurred vision, dry mouth, impaired cognition, and falls. Many older people living in long-term care facilities experience polypharmacy, and under-prescribing of potentially indicated medicines and use of high-risk medicines can also occur.

Polypharmacy is associated with an increased risk of falls in elderly people. Certain medications are well known to be associated with the risk of falls, including cardiovascular and psychoactive medications. There is some evidence that the risk of falls increases cumulatively with the number of medications. Although often not practical to achieve, withdrawing all medicines associated with falls risk can halve an individual's risk of future falls.

Every medication has potential adverse side-effects, and with every drug added there is an additive risk of side-effects. Some medications also interact with other substances, including foods, other medications, and herbal supplements. An estimated 15% of older adults are potentially at risk for a major drug-drug interaction. Older adults are at higher risk for drug-drug interactions because of the increased number of medications prescribed and the metabolic changes that occur with aging. With each new drug prescribed, the number of potential interactions rises sharply. Doctors and pharmacists aim to avoid prescribing medications that interact; often, doses need to be adjusted to avoid interactions. For example, warfarin interacts with many medications and supplements, which can cause it to lose its effect.
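As a purely arithmetic illustration of why the potential for interactions climbs so quickly as medicines are added, the sketch below counts the distinct drug pairs in a regimen of a given size; the figures are combinatorial, not clinical risk estimates.

```python
# Purely arithmetic illustration: the number of possible drug pairs (each a
# potential interaction to check) grows quadratically as medicines are added.
def possible_pairs(n_medicines: int) -> int:
    """Number of distinct pairs among n medicines: n * (n - 1) / 2."""
    return n_medicines * (n_medicines - 1) // 2

for n in (2, 5, 10, 15):
    print(f"{n} medicines -> {possible_pairs(n)} possible pairs to screen")
# 2 -> 1, 5 -> 10, 10 -> 45, 15 -> 105
```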

Pill burden

Pill burden is the number of pills (tablets or capsules, the most common dosage forms) that a person takes on a regular basis, along with the associated effort that grows with that number, such as storing, organizing, consuming, and understanding the various medications in one's regimen. The use of individual medications is growing faster than pill burden. A recent study found that older adults in long-term care take an average of 14 to 15 tablets every day.

Poor medication adherence is a common challenge among individuals who have an increased pill burden and are subject to polypharmacy. High pill burden also increases the possibility of adverse medication reactions (side effects) and drug-drug interactions, and has been associated with an increased risk of hospitalization, medication errors, and higher costs both for the pharmaceuticals themselves and for the treatment of adverse events. Finally, pill burden is a source of dissatisfaction for many patients and family carers.

High pill burden was commonly associated with antiretroviral drug regimens to control HIV, and is also seen in other patient populations. For instance, adults with multiple common chronic conditions such as diabetes, hypertension, lymphedema, hypercholesterolemia, osteoporosis, constipation, inflammatory bowel disease, and clinical depression may be prescribed more than a dozen different medications daily. The combination of multiple drugs has been associated with an increased risk of adverse drug events.

Reducing pill burden is recognized as a way to improve medication compliance, also referred to as adherence. This is done through "deprescribing", in which the risks and benefits are weighed when considering whether to continue a medication. This includes drugs such as bisphosphonates (for osteoporosis), which are often taken indefinitely although there is only evidence to support their use for five to ten years. Patient education programs, reminder messages, medication packaging, and the use of memory tricks have also been seen to improve adherence and reduce pill burden in several countries. These include associating medications with mealtimes, recording the dosage on the box, storing the medication in a special place, leaving it in plain sight in the living room, or putting the prescription sheet on the refrigerator. The development of applications has also shown some benefit in this regard. The use of a polypill regimen, such as a combination pill for HIV treatment, as opposed to a multi-pill regimen, also alleviates pill burden and increases adherence.

The selection of long-acting active ingredients over short-acting ones may also reduce pill burden. For instance, ACE inhibitors are used in the management of hypertension. Both captopril and lisinopril are examples of ACE inhibitors. However, lisinopril is dosed once a day, whereas captopril may be dosed 2-3 times a day. Assuming that there are no contraindications or potential for drug interactions, using lisinopril instead of captopril may be an appropriate way to limit pill burden.
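A minimal sketch of the arithmetic behind that choice, using the captopril and lisinopril dosing from the paragraph above and assuming one tablet per dose (an illustrative simplification):

```python
# Hypothetical comparison of daily pill burden for the two regimens named above,
# assuming one tablet per dose (an illustrative simplification).
def daily_pill_burden(doses_per_day: dict[str, int]) -> int:
    """Total tablets per day across a regimen of {drug: doses per day}."""
    return sum(doses_per_day.values())

short_acting = {"captopril": 3}   # dosed up to three times a day
long_acting = {"lisinopril": 1}   # dosed once a day

print(daily_pill_burden(short_acting))  # 3
print(daily_pill_burden(long_acting))   # 1
```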

Interventions

The most common intervention to help people who are struggling with polypharmacy is deprescribing. Deprescribing can be confused with medication simplification, which does not attempt to reduce the number of medicines but rather the number of dose forms and administration times. Deprescribing refers to reducing the number of medications that a person is prescribed and includes identifying and discontinuing medications when the benefit no longer outweighs the harm. In elderly patients, this is commonly done as a patient becomes more frail and the focus of treatment needs to shift from preventative to palliative. Deprescribing is feasible and effective in many settings, including residential care, communities, and hospitals. Deprescribing should be considered when: (1) a new symptom or adverse event arises; (2) the person develops an end-stage disease; (3) the combination of drugs is risky; or (4) stopping the drug does not alter the disease trajectory.

Several tools exist to help physicians decide when to deprescribe and what medications can be added to a pharmaceutical regimen. The Beers Criteria and the STOPP/START criteria help identify medications that have the highest risk of adverse drug events (ADE) and drug-drug interactions. The Medication appropriateness tool for comorbid health conditions during dementia (MATCH-D) is the only tool available specifically for people with dementia, and also cautions against polypharmacy and complex medication regimens.
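To make the idea of criteria-based screening concrete, here is a minimal sketch of checking a medication list against a flagged set. The two entries in FLAGGED are hypothetical illustrations (echoing the anticholinergic example earlier); they are not the actual Beers, STOPP/START, or MATCH-D criteria.

```python
# Minimal screening sketch. FLAGGED is a tiny hypothetical subset for
# illustration only; it is not the Beers Criteria, STOPP/START, or MATCH-D.
FLAGGED = {
    "oxybutynin": "urinary anticholinergic: constipation, blurred vision, falls",
    "diphenhydramine": "sedating antihistamine: impaired cognition, falls",
}

def screen_medications(medications: list[str]) -> list[tuple[str, str]]:
    """Return (medication, concern) pairs for items found in the flagged set."""
    return [(m, FLAGGED[m.lower()]) for m in medications if m.lower() in FLAGGED]

print(screen_medications(["Lisinopril", "Oxybutynin", "Metformin"]))
# [('Oxybutynin', 'urinary anticholinergic: constipation, blurred vision, falls')]
```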

Barriers faced by both physicians and people taking the medications have made it challenging to apply deprescribing strategies in practice. For physicians, these include fear of consequences of deprescribing, the prescriber's own confidence in their skills and knowledge to deprescribe, reluctance to alter medications that are prescribed by specialists, the feasibility of deprescribing, lack of access to all of patients' clinical notes, and the complexity of having multiple providers. For patients who are prescribed or require the medication, barriers include attitudes or beliefs about the medications, inability to communicate with physicians, fears and uncertainties surrounding deprescribing, and influence of physicians, family, and the media. Barriers can include other health professionals or carers, such as in residential care, believing that the medicines are required.

In people with multiple long-term conditions (multimorbidity) and polypharmacy, deprescribing represents a complex challenge, as clinical guidelines are usually developed for single conditions. In these cases, tools and guidelines like the Beers Criteria and STOPP/START can be used safely by clinicians, but not all patients will necessarily benefit from stopping their medication. Clarity about how far clinicians can go beyond the guidelines, and about the responsibility they need to take, could help them prescribe and deprescribe for complex cases. Further factors that can help clinicians tailor their decisions to the individual are: access to detailed data on the people in their care (including their backgrounds and personal medical goals), discussing plans to stop a medicine at the time it is first prescribed, and a good relationship that involves mutual trust and regular discussions of progress. Furthermore, longer appointments for prescribing and deprescribing would allow time to explain the process of deprescribing, explore related concerns, and support making the right decisions.

The effectiveness of specific interventions to improve the appropriate use of polypharmacy, such as pharmaceutical care and computerised decision support, is unclear, owing to the low quality of current evidence surrounding these interventions. High-quality evidence is needed to draw any conclusions about the effects of such interventions in any environment, including in care homes. Deprescribing is not influenced by whether medicines are prescribed through a paper-based or an electronic system. Deprescribing rounds have been proposed as a potentially successful method of reducing polypharmacy. Sharing positive outcomes from physicians who have implemented deprescribing, increased communication between all practitioners involved in patient care, higher compensation for time spent deprescribing, and clear deprescribing guidelines can help enable the practice. Despite the difficulties, a recent blinded study of deprescribing reported that participants used an average of two fewer medicines each after 12 months, showing again that deprescribing is feasible.

Adverse drug reaction

From Wikipedia, the free encyclopedia
A rash due to a drug reaction

An adverse drug reaction (ADR) is a harmful, unintended result caused by taking medication. ADRs may occur following a single dose or prolonged administration of a drug, or may result from the combination of two or more drugs. The term differs from "side effect" because side effects can be beneficial as well as detrimental. The study of ADRs is the concern of the field known as pharmacovigilance. An adverse event (AE) refers to any unexpected and inappropriate occurrence at the time a drug is used, whether or not the event is associated with the administration of the drug. An ADR is a special type of AE in which a causative relationship can be shown. ADRs are only one type of medication-related harm; another is not taking prescribed medications, known as non-adherence, which can lead to death and other negative outcomes. By definition, an adverse drug reaction requires the use of a medication.

Classification

Traditional Classification

  • Type A: augmented pharmacological effects, which are dose-dependent and predictable. Type A reactions, which constitute approximately 80% of adverse drug reactions, are usually a consequence of the drug's primary pharmacological effect (e.g., bleeding when using the anticoagulant warfarin) or a low therapeutic index of the drug (e.g., nausea from digoxin), and they are therefore predictable. They are dose-related and usually mild, although they may be serious or even fatal (e.g., intracranial bleeding from warfarin). Such reactions are usually due to inappropriate dosage, especially when drug elimination is impaired. The term "side effects" may be applied to minor type A reactions.
  • Type B: reactions that are not dose-dependent and not predictable, and so may be called idiosyncratic. These reactions can be due to particular elements within the person or the environment.

Types A and B were proposed in the 1970s, and the other types were proposed subsequently when the first two proved insufficient to classify ADRs.

Other types of adverse drug reactions are Type C, Type D, Type E, and Type F. Type C was categorized for chronic adverse drug reactions, Type D for delayed adverse drug reactions, Type E for withdrawal adverse drug reactions, and Type F for failure of therapy as an adverse drug reaction. Adverse drug reactions can also be categorized using time-relatedness, dose-relatedness, and susceptibility, which collectively are called the DoTS classification.
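Since the lettered scheme above is essentially a small lookup, here is a minimal sketch of it; the descriptions are paraphrased from the text, and real classification of course requires clinical judgement (schemes such as DoTS add further axes).

```python
# The lettered ADR categories described above, expressed as a simple lookup.
# Descriptions are paraphrased from the text; this is illustrative only.
ADR_TYPES = {
    "A": "augmented: dose-dependent, predictable extension of the drug's pharmacology",
    "B": "idiosyncratic: not dose-dependent, not predictable",
    "C": "chronic: associated with long-term use",
    "D": "delayed: appears some time after exposure",
    "E": "end of use: withdrawal reactions",
    "F": "failure of therapy",
}

def describe_adr_type(letter: str) -> str:
    return ADR_TYPES.get(letter.upper(), "unknown type")

print(describe_adr_type("a"))  # augmented: dose-dependent, predictable ...
```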

Seriousness

The U.S. Food and Drug Administration defines a serious adverse event as one in which the patient outcome is one of the following (a minimal check along these criteria is sketched after the list):

  • Death
  • Life-threatening
  • Hospitalization (initial or prolonged)
  • Disability - significant, persistent, or permanent change, impairment, damage or disruption in the patient's body function/structure, physical activities or quality of life.
  • Congenital abnormality
  • Requires intervention to prevent permanent impairment or damage
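As a rough illustration, the sketch below applies the seriousness criteria above to a hypothetical adverse-event record; the outcome labels are assumptions chosen to mirror the list, not an official coding scheme.

```python
# Seriousness check based on the FDA outcome criteria listed above.
# The outcome labels are illustrative assumptions, not an official coding scheme.
SERIOUS_OUTCOMES = {
    "death",
    "life_threatening",
    "hospitalization",
    "disability",
    "congenital_anomaly",
    "intervention_to_prevent_permanent_impairment",
}

def is_serious(reported_outcomes: set[str]) -> bool:
    """An adverse event is serious if any reported outcome matches the criteria."""
    return bool(reported_outcomes & SERIOUS_OUTCOMES)

print(is_serious({"hospitalization"}))  # True
print(is_serious({"transient_rash"}))   # False
```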

Severity is a measure of the intensity of the adverse event in question. The terms "severe" and "serious", when applied to adverse events, are technically very different. They are easily confused but cannot be used interchangeably, so care is required in their usage. Seriousness usually refers to the patient outcome (negative outcomes such as disability, long-term effects, and death).

A headache is severe if it causes intense pain, and scales such as the visual analog scale help clinicians assess severity. On the other hand, a headache is not usually serious (though it may be in the case of a subarachnoid hemorrhage or subdural bleed, and even a migraine may temporarily fit the criteria), unless it also satisfies the criteria for seriousness listed above.

In adverse drug reactions, the seriousness of the reaction is important for reporting.

Location

Adverse effects may be local, i.e. limited to a certain location, or systemic, where medication has caused adverse effects throughout the systemic circulation.

For instance, some ocular antihypertensives cause systemic effects, although they are administered locally as eye drops, since a fraction escapes to the systemic circulation.

Mechanisms

Adverse drug reaction leading to hepatitis (drug-induced hepatitis) with granulomata. Other causes were excluded with extensive investigations. Liver biopsy. H&E stain.

As research better explains the biochemistry of drug use, fewer ADRs are Type B and more are Type A. Common mechanisms are:

  • Abnormal pharmacokinetics due to:
    • comorbid disease states
    • genetic factors
  • Synergistic effects between either:
    • a drug and a disease
    • two drugs
  • Antagonism effects between either:
    • a drug and a disease
    • two drugs

Abnormal pharmacokinetics

Comorbid disease states

Various diseases, especially those that cause renal or hepatic insufficiency, may alter drug metabolism. Resources are available that report changes in a drug's metabolism due to disease states.

The Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D) criteria warn that people with dementia are more likely to experience adverse effects and less likely to be able to reliably report symptoms.

Genetic factors

Pharmacogenomics includes how genes can predict potential adverse drug reactions. However, pharmacogenomics is not limited to adverse events (of any type), but also looks at how genes may impact other responses to medications, such as low/no effect or expected/normal responses (especially based on drug metabolism).

Abnormal drug metabolism may be due to inherited factors of either Phase I oxidation or Phase II conjugation.

Phase I reactions

Phase I reactions include metabolism by cytochrome P450. Patients may have abnormal metabolism by cytochrome P450, either because they have inherited abnormal alleles or because of drug interactions. Tables are available to check for drug interactions due to P450 interactions.

Inheriting abnormal butyrylcholinesterase (pseudocholinesterase) may affect metabolism of drugs such as succinylcholine.

Phase II reactions

Inheriting abnormal N-acetyltransferase, which conjugates some drugs to facilitate excretion, may affect the metabolism of drugs such as isoniazid, hydralazine, and procainamide.

Inheriting abnormal thiopurine S-methyltransferase may affect the metabolism of the thiopurine drugs mercaptopurine and azathioprine.

Protein binding

Protein binding interactions are usually transient and mild until a new steady state is achieved. These are mainly for drugs without much first-pass liver metabolism. The principal plasma proteins for drug binding are:

  1. albumin
  2. α1-acid glycoprotein
  3. lipoproteins

Some drug interactions with warfarin are due to changes in protein binding.

Drug Interactions

The risk of drug interactions is increased with polypharmacy, especially in older adults.

Additive drug effects

Two or more drugs that contribute to the same mechanism in the body can have additive toxic or adverse effects. One example of this is multiple medications administered concurrently that prolong the QT interval, such as antiarrhythmics like sotalol and some macrolide antibiotics, such as systemic azithromycin. Another example of additive effects for adverse drug reactions is in serotonin toxicity (serotonin syndrome). If medications that cause increased serotonin levels are combined, they can cause serotonin toxicity (though therapeutic doses of one agent that increases serotonin levels can cause serotonin toxicity in certain cases and individuals). Some of the medications that can contribute to serotonin toxicity include MAO inhibitors, SSRIs, and tricyclic antidepressants.
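A minimal sketch of flagging such an additive combination, using only the two QT-prolonging examples named above; a real check would rely on a maintained interaction database.

```python
# Illustrative check for co-prescription of QT-prolonging drugs.
# QT_PROLONGING contains only the two examples from the text; it is not an
# authoritative or complete interaction list.
QT_PROLONGING = {"sotalol", "azithromycin"}

def additive_qt_risk(medications: list[str]) -> list[str]:
    """Return the QT-prolonging drugs if two or more appear in the list."""
    hits = [m for m in medications if m.lower() in QT_PROLONGING]
    return hits if len(hits) >= 2 else []

print(additive_qt_risk(["Sotalol", "Azithromycin", "Metformin"]))
# ['Sotalol', 'Azithromycin']
```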

Altered Metabolism

Some medications can inhibit or induce key drug-metabolizing enzymes or drug transporters; when these are combined with other medications that rely on the same proteins, the result can be toxic effects or therapeutic failure. One example is a patient taking a cytochrome P450 3A4 (CYP3A4) inhibitor such as the antibiotic clarithromycin together with another medication metabolized by CYP3A4, such as the anticoagulant apixaban, which results in elevated blood concentrations of apixaban and a greater risk of serious bleeds. Additionally, clarithromycin is a permeability glycoprotein (P-gp) efflux pump inhibitor, which, when given with apixaban (a P-gp substrate), leads to increased absorption of apixaban, resulting in the same adverse effects as with CYP3A4 inhibition.

Assessing causality

Causality assessment is used to determine the likelihood that a drug caused a suspected ADR. There are a number of different methods used to judge causation, including the Naranjo algorithm, the Venulet algorithm, and the WHO causality term assessment criteria. Each has pros and cons associated with its use, and most require some level of expert judgement to apply. An ADR should not be labeled as 'certain' unless the ADR abates with a challenge-dechallenge-rechallenge protocol (stopping and restarting the agent in question). The chronology of the onset of the suspected ADR is important, as another substance or factor may be implicated as a cause; co-prescribed medications and underlying psychiatric conditions may be factors in the ADR.
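As an illustration of how such algorithms turn structured answers into a causality category, here is a minimal sketch that interprets a Naranjo-style total score using the commonly cited cut-offs; the questionnaire items and their point values are not reproduced here.

```python
# Interpreting a Naranjo-style total score with the commonly cited cut-offs
# (>= 9 definite, 5-8 probable, 1-4 possible, <= 0 doubtful). The individual
# questionnaire items and their point values are not reproduced here.
def naranjo_category(total_score: int) -> str:
    if total_score >= 9:
        return "definite"
    if total_score >= 5:
        return "probable"
    if total_score >= 1:
        return "possible"
    return "doubtful"

print(naranjo_category(7))  # probable
```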

Assigning causality to a specific agent often proves difficult, unless the event is found during a clinical study or large databases are used. Both methods have difficulties and can be fraught with error. Even in clinical studies, some ADRs may be missed as large numbers of test individuals are required to find a specific adverse drug reaction, especially for rare ADRs. Psychiatric ADRs are often missed as they are grouped together in the questionnaires used to assess the population.

Monitoring bodies

Many countries have official bodies that monitor drug safety and reactions. On an international level, the WHO runs the Uppsala Monitoring Centre, and the European Union runs the European Medicines Agency (EMA). In the United States, the Food and Drug Administration (FDA) is responsible for monitoring post-marketing studies; it operates the FDA Adverse Event Reporting System, to which healthcare professionals, consumers, and the pharmaceutical industry can all submit reports of adverse drug events. For health products marketed in Canada, a branch of Health Canada called the Canada Vigilance Program is responsible for surveillance; both healthcare professionals and consumers can report to this program. In Australia, the Therapeutic Goods Administration (TGA) conducts postmarket monitoring of therapeutic products. In the UK, the Yellow Card Scheme was established in 1964 to monitor medications and other health products.

Epidemiology

A study by the Agency for Healthcare Research and Quality (AHRQ) found that in 2011, sedatives and hypnotics were a leading source for adverse drug events seen in the hospital setting. Approximately 2.8% of all ADEs present on admission and 4.4% of ADEs that originated during a hospital stay were caused by a sedative or hypnotic drug. A second study by AHRQ found that in 2011, the most common specifically identified causes of adverse drug events that originated during hospital stays in the U.S. were steroids, antibiotics, opiates/narcotics, and anticoagulants. Patients treated in urban teaching hospitals had higher rates of ADEs involving antibiotics and opiates/narcotics compared to those treated in urban nonteaching hospitals. Those treated in private, nonprofit hospitals had higher rates of most ADE causes compared to patients treated in public or private, for-profit hospitals.

Medication-related harm (MRH) is common after hospital discharge in older adults, but methodological inconsistencies between studies and a paucity of data on risk factors limit clear understanding of the epidemiology. Reported incidence ranged widely, from 0.4% to 51.2% of participants, and 35% to 59% of the harm was preventable. The incidence of medication-related harm within 30 days after discharge ranged from 167 to 500 events per 1,000 individuals discharged (17-51% of individuals).

In the U.S., females had a higher rate of ADEs involving opiates and narcotics than males in 2011, while male patients had a higher rate of anticoagulant ADEs. Nearly 8 in 1,000 adults aged 65 years or older experienced one of the four most common ADEs (steroids, antibiotics, opiates/narcotics, and anticoagulants) during hospitalization. A study showed that 48% of patients had an adverse drug reaction to at least one drug, and pharmacist involvement helps to pick up adverse drug reactions.

In 2012, McKinsey & Company estimated that the cost of the 50-100 million preventable error-related adverse drug events would be between US$18 billion and US$115 billion.

An article published in The Journal of the American Medical Association (JAMA) in 2016 reported adverse drug event statistics from emergency departments around the United States in 2013-2014. The estimated prevalence of adverse drug events presenting to the emergency department (ED) was 4 events per 1,000 people, and 57.1% of these events occurred in females. Of all the adverse drug events presenting to the ED documented in the article, 17.6% were from anticoagulants, 16.1% from antibiotics, and 13.3% from diabetic agents.

Death drive

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Death_drive

In classical Freudian psychoanalytic theory, the death drive (German: Todestrieb) is the drive toward death and destruction, often expressed through behaviors such as aggression, repetition compulsion, and self-destructiveness. It was originally proposed by Sabina Spielrein in her paper "Destruction as the Cause of Coming Into Being" (Die Destruktion als Ursache des Werdens) in 1912, which was then taken up by Sigmund Freud in 1920 in Beyond the Pleasure Principle. This concept has been translated as "opposition between the ego or death instincts and the sexual or life instincts". In Beyond the Pleasure Principle, Freud used the plural "death drives" (Todestriebe) much more frequently than the singular.

The death drive opposes Eros, the tendency toward survival, propagation, sex, and other creative, life-producing drives. The death drive is sometimes referred to as "Thanatos" in post-Freudian thought, complementing "Eros", although this term was not used in Freud's own work, being rather introduced by Wilhelm Stekel in 1909 and then by Paul Federn in the present context. Subsequent psychoanalysts such as Jacques Lacan and Melanie Klein have defended the concept.

Terminology

The standard edition of Freud's works in English confuses two terms that are different in German, Instinkt (instinct) and Trieb (drive), often translating both as instinct; for example, "the hypothesis of a death instinct, the task of which is to lead organic life back into the inanimate state". "This equating of Instinkt and Trieb has created serious misunderstandings". Freud actually refers to the term "Instinkt" in explicit use elsewhere, and so while the concept of "instinct" can loosely be referred to as a "drive," any essentialist or naturalist connotations of the term should be put in abeyance. In a sense, the death drive is a force that is not essential to the life of an organism (unlike an "instinct") and tends to denature it or make it behave in ways that are sometimes counter-intuitive. In other words, the term death "instinct" is simply a false representation of death drive. The term is almost universally known in scholarly literature on Freud as the "death drive", and Lacanian psychoanalysts often shorten it to simply "drive" (although Freud posited the existence of other drives as well, and Lacan explicitly states in Seminar XI that all drives are partial to the death drive). The contemporary Penguin translations of Freud translate Trieb and Instinkt as "drive" and "instinct" respectively.

Origin of the theory: Beyond the Pleasure Principle

It was a basic premise of Freud's that "the course taken by mental events is automatically regulated by the pleasure principle...[associated] with an avoidance of unpleasure or a production of pleasure". Three main types of conflictual evidence, difficult to explain satisfactorily in such terms, led Freud late in his career to look for another principle in mental life beyond the pleasure principle—a search that would ultimately lead him to the concept of the death drive.

The first problem Freud encountered was the phenomenon of repetition in (war) trauma. When Freud worked with people with trauma (particularly the trauma experienced by soldiers returning from World War I), he observed that subjects often tended to repeat or re-enact these traumatic experiences: "dreams occurring in traumatic patients have the characteristic of repeatedly bringing the patient back into the situation of his accident", contrary to the expectations of the pleasure principle.

A second problematic area was found by Freud in children's play (such as the Fort/Da ("gone/there") game played by Freud's grandson, who would stage and re-stage the disappearance of his mother and even himself). "How then does his repetition of this distressing experience as a game fit in with the pleasure principle?"

The third problem came from clinical practice. Freud found his patients, dealing with painful experiences that had been repressed, regularly "obliged to repeat the repressed material as a contemporary experience instead of ... remembering it as something belonging to the past". Combined with what he called "the compulsion of destiny ... come across [in] people all of whose human relationships have the same outcome", such evidence led Freud "to justify the hypothesis of a compulsion to repeat—something that would seem more primitive, more elementary, more instinctual than the pleasure principle which it over-rides".

He then set out to find an explanation of such a compulsion, an explanation that some scholars have labeled as "metaphysical biology". In Freud's own words, "What follows is speculation, often far-fetched speculation, which the reader will consider or dismiss according to his individual predilection". Seeking a new instinctual paradigm for such problematic repetition, he found it ultimately in "an urge in organic life to restore an earlier state of things"—the inorganic state from which life originally emerged. From the conservative, restorative character of instinctual life, Freud derived his death drive, with its "pressure towards death", and the resulting "separation of the death instincts from the life instincts" seen in Eros. The death drive then manifested itself in the individual creature as a force "whose function is to assure that the organism shall follow its own path to death".

Seeking further potential clinical support for the existence of such a self-destructive force, Freud found it through a reconsideration of his views of masochism—previously "regarded as sadism that has been turned round upon the subject's own ego"—so as to allow that "there might be such a thing as primary masochism—a possibility which I had contested" before. Even with such support, however, he remained, right up to the book's close, very tentative about the provisional nature of his theoretical construct: what he called "the whole of our artificial structure of hypotheses".

Although Spielrein's paper was published in 1912, Freud initially resisted the concept as he considered it to be too Jungian. Nevertheless, Freud eventually adopted the concept, and in later years would build extensively upon the tentative foundations he had set out in Beyond the Pleasure Principle. In The Ego and the Id (1923) he would develop his argument to state that "the death instinct would thus seem to express itself—though probably only in part—as an instinct of destruction directed against the external world". The following year he would spell out more clearly that the "libido has the task of making the destroying instinct innocuous, and it fulfils the task by diverting that instinct to a great extent outwards .... The instinct is then called the destructive instinct, the instinct for mastery, or the will to power", a perhaps much more recognisable set of manifestations.

At the close of the decade, in Civilization and Its Discontents (1930), Freud acknowledged that "To begin with it was only tentatively that I put forward the views I have developed here, but in the course of time they have gained such a hold upon me that I can no longer think in any other way".

Philosophy

From a philosophical perspective, the death drive may be viewed in relation to the work of the German philosopher Arthur Schopenhauer. His philosophy, expounded in The World as Will and Representation (1818), postulates that all that exists is an expression of a metaphysical "will" (more precisely, a will to live), and that pleasure affirms this will. Schopenhauer's pessimism led him to believe that the affirmation of the "will" was a negative and immoral thing, because of his belief that life produces more suffering than happiness. The death drive would seem to manifest as a natural and psychological negation of the "will".

Freud was well aware of such possible linkages. In a letter of 1919, he wrote that regarding "the theme of death, [that I] have stumbled onto an odd idea via the drives and must now read all sorts of things that belong to it, for instance Schopenhauer". Ernest Jones (who like many analysts was not convinced of the need for the death drive, over and above an instinct of aggression) considered that "Freud seemed to have landed in the position of Schopenhauer, who taught that 'death is the goal of life'".

However, as Freud put it to the imagined auditors of his New Introductory Lectures (1932), "You may perhaps shrug your shoulders and say: "That isn't natural science, it's Schopenhauer's philosophy!" But, ladies and gentlemen, why should not a bold thinker have guessed something that is afterwards confirmed by sober and painstaking detailed research?" He then went on to add that "what we are saying is not even genuine Schopenhauer....we are not overlooking the fact that there is life as well as death. We recognise two basic instincts and give each of them its own aim".

Cultural application: Civilization and Its Discontents

Freud applied his new theoretical construct in Civilization and Its Discontents (1930) to the difficulties inherent in Western civilization—indeed, in civilization and in social life as a whole. In particular, given that "a portion of the [death] instinct is diverted towards the external world and comes to light as an instinct of aggressiveness", he saw "the inclination to aggression ... [as] the greatest impediment to civilization". The need to overcome such aggression entailed the formation of the [cultural] superego: "We have even been guilty of the heresy of attributing the origin of conscience to this diversion inwards of aggressiveness". The presence thereafter in the individual of the superego and a related sense of guilt—"Civilization, therefore, obtains mastery over the individual's dangerous desire for aggression by ... setting up an agency within him to watch over it"—leaves an abiding sense of uneasiness inherent in civilized life, thereby providing a structural explanation for "the suffering of civilized man".

Freud made a further connection between group life and innate aggression, where the former comes together more closely by directing aggression to other groups, an idea later picked up by group analysts like Wilfred Bion.

Continuing development of Freud's views

In the closing decade of Freud's life, it has been suggested, his view of the death drive changed somewhat, with "the stress much more upon the death instinct's manifestations outwards". Given "the ubiquity of non-erotic aggressivity and destructiveness", he wrote in 1930, "I adopt the standpoint, therefore, that the inclination to aggression is an original, self-subsisting instinctual disposition in man".

In 1933, he acknowledged of his original formulation of the death drive 'the improbability of our speculations. A queer instinct, indeed, directed to the destruction of its own organic home!'. He wrote moreover that "Our hypothesis is that there are two essentially different classes of instincts: the sexual instincts, understood in the widest sense—Eros, if you prefer that name—and the aggressive instincts, whose aim is destruction". In 1937, he went so far as to suggest privately that 'We should have a neat schematic picture if we supposed that originally, at the beginning of life, all libido was directed to the inside and all aggressiveness to the outside'. In his last writings, it was the contrast of "two basic instincts, Eros and the destructive instinct ... our two primal instincts, Eros and destructiveness", on which he laid stress. Nevertheless, his belief in "the death instinct ... [as] a return to an earlier state ... into an inorganic state" continued to the end.

Mortido and Destrudo

The terms mortido and destrudo, formed analogously to libido, refer to the energy of the death instinct. In the early 21st century their use amongst Freudian psychoanalysts has been waning, but they still designate destructive energy. The importance of integrating mortido into an individual's life, as opposed to splitting it off and disowning it, has been taken up by figures like Robert Bly in the men's movement.

Paul Federn used the term mortido for the new energy source, and has generally been followed in that by other analytic writers. His disciple and collaborator Weiss, however, chose destrudo, which was later taken up by Charles Brenner.

Mortido has also been applied in contemporary expositions of the Cabbala.

Whereas Freud himself never named the aggressive and destructive energy of the death drive (as he had done with the life drive, "libido"), the next generation of psychoanalysts vied to find suitable names for it.

Literary criticism has been almost more prepared than psychoanalysis to make at least metaphorical use of the term 'Destrudo'. Artistic images were seen by Joseph Campbell in terms of "incestuous 'libido' and patricidal 'destrudo'"; while literary descriptions of the conflict between destrudo and libido are still fairly widespread in the 21st century.

Destrudo as an evocative name also appears in rock music and video games.

Paul Federn

Mortido was introduced by Freud's pupil Paul Federn to cover the psychic energy of the death instinct, something left open by Freud himself. Providing what he saw as clinical proof of the reality of the death instinct in 1930, Federn reported on the self-destructive tendencies of severely melancholic patients as evidence of what he would later call inwardly directed mortido.

However, Freud himself favoured neither term – mortido or destrudo. This worked against either of them gaining widespread popularity in the psychoanalytic literature.

Edoardo Weiss

Destrudo is a term introduced by Italian psychoanalyst Edoardo Weiss in 1935 to denote the energy of the death instinct, on the analogy of libido—and thus to cover the energy of the destructive impulse in Freudian psychology.

Destrudo is the opposite of libido—the urge to create, an energy that arises from the Eros (or "life") drive—and is the urge to destroy arising from Thanatos (death), and thus an aspect of what Sigmund Freud termed "the aggressive instincts, whose aim is destruction".

Weiss related aggression/destrudo to secondary narcissism, something generally only described in terms of the libido turning towards the self.

Eric Berne

Eric Berne, who was a pupil of Federn's, made extensive use of the term mortido in his pre-transactional analysis study, The Mind in Action (1947). As he wrote in the foreword to the third edition of 1967, "the historical events of the last thirty years...become much clearer by introducing Paul Federn's concept of mortido".

Berne saw mortido as activating such forces as hate and cruelty, blinding anger and social hostilities; and considered that inwardly directed mortido underlay the phenomena of guilt and self-punishment, as well as their clinical exacerbations in the form of depression or melancholia.

Berne saw sexual acts as gratifying mortido at the same time as libido; and recognised that on occasion the former becomes more important sexually than the latter, as in sadomasochism and destructive emotional relationships.

Berne's concern with the role of mortido in individuals and groups, social formations and nations, arguably continued throughout all his later writings.

Jean Laplanche

Jean Laplanche has explored repeatedly the question of mortido, and of how far a distinctive instinct of destruction can be identified in parallel to the forces of libido.

Analytic reception

As Freud wryly commented in 1930, "The assumption of the existence of an instinct of death or destruction has met with resistance even in analytic circles". Indeed, Ernest Jones would comment of Beyond the Pleasure Principle that the book not only "displayed a boldness of speculation that was unique in all his writings" but was "further noteworthy in being the only one of Freud's which has received little acceptance on the part of his followers".

Otto Fenichel in his compendious survey of the first Freudian half-century concluded that "the facts on which Freud based his concept of a death instinct in no way necessitate the assumption ... of a genuine self-destructive instinct". Heinz Hartmann set the tone for ego psychology when he "chose to ... do without 'Freud's other, mainly biologically oriented set of hypotheses of the "life" and "death instincts"'". In the object relations theory, among the independent group 'the most common repudiation was the loathsome notion of the death instinct'. Indeed, "for most analysts Freud's idea of a primitive urge towards death, of a primary masochism, was ... bedevilled by problems".

Nevertheless, the concept has been defended, extended, and carried forward by some analysts, generally those tangential to the psychoanalytic mainstream; while among the more orthodox, arguably of "those who, in contrast to most other analysts, take Freud's doctrine of the death drive seriously, K. R. Eissler has been the most persuasive—or least unpersuasive".

Melanie Klein and her immediate followers considered that "the infant is exposed from birth to the anxiety stirred up by the inborn polarity of instincts—the immediate conflict between the life instinct and the death instinct"; and Kleinians indeed built much of their theory of early childhood around the outward deflection of the latter. "This deflection of the death instinct, described by Freud, in Melanie Klein's view consists partly of a projection, partly of the conversion of the death instinct into aggression".

French psychoanalyst Jacques Lacan, for his part, castigated the "refusal to accept this culminating point of Freud's doctrine ... by those who conduct their analysis on the basis of a conception of the ego ... that death instinct whose enigma Freud propounded for us at the height of his experience". Characteristically, he stressed the linguistic aspects of the death drive: "the symbol is substituted for death in order to take possession of the first swelling of life .... There is therefore no further need to have recourse to the outworn notion of primordial masochism in order to understand the reason for the repetitive games in ... his Fort! and in his Da!."

Eric Berne too would proudly proclaim that he, "besides having repeated and confirmed the conventional observations of Freud, also believes right down the line with him concerning the death instinct, and the pervasiveness of the repetition compulsion".

For the twenty-first century, "the death drive today ... remains a highly controversial theory for many psychoanalysts ... [almost] as many opinions as there are psychoanalysts".

Freud's conceptual opposition of the death and eros drives in the human psyche was applied by Walter A. Davis in Deracination: Historicity, Hiroshima, and the Tragic Imperative and Death's Dream Kingdom: The American Psyche since 9/11. Davis described social reactions to both Hiroshima and 9/11 from the Freudian viewpoint of the death force. He claims that unless Americans consciously take responsibility for the damage of those reactions, they will repeat them.

Domestication of vertebrates

From Wikipedia, the free encyclopedia
Dogs and sheep were among the first animals to be domesticated.

The domestication of vertebrates is the mutual relationship between vertebrate animals, including birds and mammals, and the humans who influence their care and reproduction.

Charles Darwin recognized a small number of traits that made domesticated species different from their wild ancestors. He was also the first to recognize the difference between conscious selective breeding (i.e. artificial selection) in which humans directly select for desirable traits, and unconscious selection where traits evolve as a by-product of natural selection or from selection on other traits. There is a genetic difference between domestic and wild populations. There is also a genetic difference between the domestication traits that researchers believe to have been essential at the early stages of domestication, and the improvement traits that have appeared since the split between wild and domestic populations. Domestication traits are generally fixed within all domesticates, and were selected during the initial episode of domestication of that animal or plant, whereas improvement traits are present only in a portion of domesticates, though they may be fixed in individual breeds or regional populations.

Domestication should not be confused with taming. Taming is the conditioned behavioral modification of a wild-born animal when its natural avoidance of humans is reduced and it accepts the presence of humans, but domestication is the permanent genetic modification of a bred lineage that leads to an inherited predisposition toward humans. Certain animal species, and certain individuals within those species, make better candidates for domestication than others because they exhibit certain behavioral characteristics: (1) the size and organization of their social structure; (2) the availability and the degree of selectivity in their choice of mates; (3) the ease and speed with which the parents bond with their young, and the maturity and mobility of the young at birth; (4) the degree of flexibility in diet and habitat tolerance; and (5) responses to humans and new environments, including flight responses and reactivity to external stimuli.

It is proposed that there were three major pathways that most animal domesticates followed into domestication: (1) commensals, adapted to a human niche (e.g., dogs, cats, fowl, possibly pigs); (2) animals sought for food and other byproducts (e.g., sheep, goats, cattle, water buffalo, yak, pig, reindeer, llama, alpaca, and turkey); and (3) targeted animals for draft and nonfood resources (e.g., horse, donkey, camel). The dog was the first to be domesticated, and was established across Eurasia before the end of the Late Pleistocene era, well before cultivation and before the domestication of other animals. Unlike other domestic species which were primarily selected for production-related traits, dogs were initially selected for their behaviors. The archaeological and genetic data suggest that long-term bidirectional gene flow between wild and domestic stocks – including donkeys, horses, New and Old World camelids, goats, sheep, and pigs – was common. One study has concluded that human selection for domestic traits likely counteracted the homogenizing effect of gene flow from wild boars into pigs and created domestication islands in the genome. The same process may also apply to other domesticated animals. Some of the most commonly domesticated animals are cats and dogs.

Definitions

Domestication

Domestication has been defined as "a sustained multi-generational, mutualistic relationship in which one organism assumes a significant degree of influence over the reproduction and care of another organism in order to secure a more predictable supply of a resource of interest, and through which the partner organism gains advantage over individuals that remain outside this relationship, thereby benefitting and often increasing the fitness of both the domesticator and the target domesticate." This definition recognizes both the biological and the cultural components of the domestication process and the effects on both humans and the domesticated animals and plants. All past definitions of domestication have included a relationship between humans with plants and animals, but their differences lay in who was considered as the lead partner in the relationship. This new definition recognizes a mutualistic relationship in which both partners gain benefits. Domestication has vastly enhanced the reproductive output of crop plants, livestock, and pets far beyond that of their wild progenitors. Domesticates have provided humans with resources that they could more predictably and securely control, move, and redistribute, which has been the advantage that had fueled a population explosion of the agro-pastoralists and their spread to all corners of the planet.

Domestication syndrome

Traits used to define the animal domestication syndrome

Domestication syndrome is a term often used to describe the suite of phenotypic traits arising during domestication that distinguish crops from their wild ancestors. The term is also applied to animals and includes increased docility and tameness, coat color changes, reductions in tooth size, changes in craniofacial morphology, alterations in ear and tail form (e.g., floppy ears), more frequent and nonseasonal estrus cycles, alterations in adrenocorticotropic hormone levels, changed concentrations of several neurotransmitters, prolongations in juvenile behavior, and reductions in both total brain size and of particular brain regions. The set of traits used to define the animal domestication syndrome is inconsistent.

Difference from taming

Domestication should not be confused with taming. Taming is the conditioned behavioral modification of a wild-born animal when its natural avoidance of humans is reduced and it accepts the presence of humans, but domestication is the permanent genetic modification of a bred lineage that leads to an inherited predisposition toward humans. Human selection included tameness, but without a suitable evolutionary response, domestication was not achieved. Domestic animals need not be tame in the behavioral sense, such as the Spanish fighting bull. Wild animals can be tame, such as a hand-raised cheetah. A domestic animal's breeding is controlled by humans, and its tameness and tolerance of humans are genetically determined. However, an animal merely bred in captivity is not necessarily domesticated. Tigers, gorillas, and polar bears breed readily in captivity but are not domesticated. Asian elephants are wild animals that, with taming, manifest outward signs of domestication, yet their breeding is not human-controlled and thus they are not true domesticates.

History, cause and timing

Evolution of temperatures in the postglacial period, after the Last Glacial Maximum, showing very low temperatures during most of the Younger Dryas, rising rapidly afterwards to reach the level of the warm Holocene, based on Greenland ice cores.

The domestication of animals and plants was triggered by the climatic and environmental changes that occurred after the peak of the Last Glacial Maximum around 21,000 years ago and which continue to the present day. These changes made obtaining food difficult. The first domesticate was the domestic dog (Canis lupus familiaris) from a wolf ancestor (Canis lupus) at least 15,000 years ago. The Younger Dryas that occurred 12,900 years ago was a period of intense cold and aridity that put pressure on humans to intensify their foraging strategies. By the beginning of the Holocene, from 11,700 years ago, favorable climatic conditions and increasing human populations led to small-scale animal and plant domestication, which allowed humans to augment the food that they were obtaining through hunter-gathering.

The increased use of agriculture and continued domestication of species during the Neolithic transition marked the beginning of a rapid shift in the evolution, ecology, and demography of both humans and numerous species of animals and plants. Areas with increasing agriculture underwent urbanization, developed higher-density populations and expanded economies, and became centers of livestock and crop domestication. Such agricultural societies emerged across Eurasia, North Africa, and South and Central America.

In the Fertile Crescent 10,000–11,000 years ago, zooarchaeology indicates that goats, pigs, sheep, and taurine cattle were the first livestock to be domesticated. Archaeologists working in Cyprus found an early burial, approximately 9,500 years old, of an adult human with a feline skeleton. Two thousand years later, humped zebu cattle were domesticated in what is today Baluchistan in Pakistan. In East Asia 8,000 years ago, pigs were domesticated from wild boar that were genetically different from those found in the Fertile Crescent. The horse was domesticated on the Central Asian steppe 5,500 years ago. The chicken was domesticated in Southeast Asia 4,000 years ago.

Universal features

The biomass of wild vertebrates is now increasingly small compared to the biomass of domestic animals, with the calculated biomass of domestic cattle alone being greater than that of all wild mammals. Because the evolution of domestic animals is ongoing, the process of domestication has a beginning but not an end. Various criteria have been established to provide a definition of domestic animals, but all decisions about exactly when an animal can be labelled "domesticated" in the zoological sense are arbitrary, although potentially useful. Domestication is a fluid and nonlinear process that may start, stop, reverse, or go down unexpected paths with no clear or universal threshold that separates the wild from the domestic. However, there are universal features held in common by all domesticated animals.

Behavioral preadaption

Certain animal species, and certain individuals within those species, make better candidates for domestication than others because they exhibit certain behavioral characteristics: (1) the size and organization of their social structure; (2) the availability and the degree of selectivity in their choice of mates; (3) the ease and speed with which the parents bond with their young, and the maturity and mobility of the young at birth; (4) the degree of flexibility in diet and habitat tolerance; and (5) responses to humans and new environments, including flight responses and reactivity to external stimuli. Reduced wariness to humans and low reactivity to both humans and other external stimuli are key pre-adaptations for domestication, and these behaviors are also the primary target of the selective pressures experienced by the animal undergoing domestication. This implies that not all animals can be domesticated; the zebra, a wild member of the horse family, is one example.

Jared Diamond, in his book Guns, Germs, and Steel, asked why, among the world's 148 large wild terrestrial herbivorous mammals, only 14 were domesticated, and proposed that their wild ancestors must have possessed six characteristics before they could be considered for domestication:

Hereford cattle, domesticated for beef production.
  1. Efficient diet – Animals that can efficiently process what they eat and live off plants are less expensive to keep in captivity. Carnivores feed on flesh, which would require the domesticators to raise additional animals to feed the carnivores and therefore increase the consumption of plants further.
  2. Quick growth rate – Fast maturity rate compared to the human life span allows breeding intervention and makes the animal useful within an acceptable duration of caretaking. Some large animals require many years before they reach a useful size.
  3. Ability to breed in captivity – Animals that will not breed in captivity are limited to acquisition through capture in the wild.
  4. Pleasant disposition – Animals with nasty dispositions are dangerous to keep around humans.
  5. Tendency not to panic – Some species are nervous, fast, and prone to flight when they perceive a threat.
  6. Social structure – All species of domesticated large mammals had wild ancestors that lived in herds with a dominance hierarchy amongst the herd members, and the herds had overlapping home territories rather than mutually exclusive home territories. This arrangement allows humans to take control of the dominance hierarchy.

Brain size and function

Reduction in skull size with neoteny - grey wolf and chihuahua skulls

The sustained selection for lowered reactivity among mammal domesticates has resulted in profound changes in brain form and function. The larger the size of the brain to begin with and the greater its degree of folding, the greater the degree of brain-size reduction under domestication. Foxes that had been selectively bred for tameness over 40 years had experienced a significant reduction in cranial height and width and, by inference, in brain size, which supports the hypothesis that brain-size reduction is an early response to the selective pressure for tameness and lowered reactivity that is the universal feature of animal domestication. The most affected portion of the brain in domestic mammals is the limbic system, which in domestic dogs, pigs, and sheep shows a 40% reduction in size compared with that of the corresponding wild species. This portion of the brain regulates endocrine function that influences behaviors such as aggression, wariness, and responses to environmentally induced stress, all attributes that are dramatically affected by domestication.

Pleiotropy

A putative cause for the broad changes seen in domestication syndrome is pleiotropy. Pleiotropy occurs when one gene influences two or more seemingly unrelated phenotypic traits. Certain physiological changes characterize domestic animals of many species. These changes include extensive white markings (particularly on the head), floppy ears, and curly tails. These arise even when tameness is the only trait under selective pressure. The genes involved in tameness are largely unknown, so it is not known how or to what extent pleiotropy contributes to domestication syndrome. Tameness may be caused by the downregulation of fear and stress responses via reduced adrenal gland function. Based on this, the pleiotropy hypotheses can be separated into two theories. The Neural Crest Hypothesis relates adrenal gland function to deficits in neural crest cells during development. The Single Genetic Regulatory Network Hypothesis claims that genetic changes in upstream regulators affect downstream systems.

Neural crest cells (NCC) are vertebrate embryonic stem cells that function directly and indirectly during early embryogenesis to produce many tissue types. Because the traits commonly affected by domestication syndrome are all derived from NCC in development, the neural crest hypothesis suggests that deficits in these cells cause the range of phenotypes seen in domestication syndrome. These deficits could cause the changes seen in many domestic mammals, such as lopped ears (seen in rabbits, dogs, foxes, pigs, sheep, goats, cattle, and donkeys) as well as curly tails (pigs, foxes, and dogs). Although they do not affect the development of the adrenal cortex directly, the neural crest cells may be involved in relevant upstream embryological interactions. Furthermore, artificial selection targeting tameness may affect genes that control the concentration or movement of NCCs in the embryo, leading to a variety of phenotypes.

The single genetic regulatory network hypothesis proposes that domestication syndrome results from mutations in genes that regulate the expression pattern of more downstream genes. For example, piebald, or spotted, coat coloration may be caused by a linkage in the biochemical pathways of melanins involved in coat coloration and neurotransmitters such as dopamine that help shape behavior and cognition. These linked traits may arise from mutations in a few key regulatory genes. A problem with this hypothesis is that it proposes mutations in gene networks that cause dramatic effects without being lethal; however, no currently known genetic regulatory network causes such dramatic change in so many different traits.

Limited reversion

Feral mammals such as dogs, cats, goats, donkeys, pigs, and ferrets that have lived apart from humans for generations show no sign of regaining the brain mass of their wild progenitors. Dingoes have lived apart from humans for thousands of years but still have the same brain size as that of a domestic dog. Feral dogs that actively avoid human contact are still dependent on human waste for survival and have not reverted to the self-sustaining behaviors of their wolf ancestors.

Categories

Domestication can be considered as the final phase of intensification in the relationship between animal or plant sub-populations and human societies, but it is divided into several grades of intensification. For studies in animal domestication, researchers have proposed five distinct categories: wild, captive wild, domestic, cross-breeds and feral.

Wild animals
Subject to natural selection, although the action of past demographic events and artificial selection induced by game management or habitat destruction cannot be excluded.
Captive wild animals
Directly affected by a relaxation of natural selection associated with feeding, breeding and protection/confinement by humans, and an intensification of artificial selection through passive selection for animals that are more suited to captivity.
Domestic animals
Subject to intensified artificial selection through husbandry practices with relaxation of natural selection associated with captivity and management.
Cross-breed animals
Genetic hybrids of wild and domestic parents. They may be forms intermediate between both parents, forms more similar to one parent than the other, or unique forms distinct from both parents. Hybrids can be intentionally bred for specific characteristics or can arise unintentionally as the result of contact with wild individuals.
Feral animals
Domesticates that have returned to a wild state. As such, they experience relaxed artificial selection induced by the captive environment paired with intensified natural selection induced by the wild habitat.

In 2015, a study compared the diversity of dental size, shape and allometry across the proposed domestication categories of modern pigs (genus Sus). The study showed clear differences between the dental phenotypes of wild, captive wild, domestic, and hybrid pig populations, which supported the proposed categories through physical evidence. The study did not cover feral pig populations but called for further research to be undertaken on them, and on their genetic differences from hybrid pigs.

Pathways

Since 2012, a multi-stage model of animal domestication has been accepted by two groups. The first group proposed that animal domestication proceeded along a continuum of stages – from anthropophily, through commensalism, control in the wild, control of captive animals, extensive breeding, and intensive breeding, and finally to pets – in a slow, gradually intensifying relationship between humans and animals.

The second group proposed that there were three major pathways that most animal domesticates followed into domestication: (1) commensals, adapted to a human niche (e.g., dogs, cats, fowl, possibly pigs); (2) prey animals sought for food (e.g., sheep, goats, cattle, water buffalo, yak, pig, reindeer, llama and alpaca); and (3) targeted animals for draft and nonfood resources (e.g., horse, donkey, camel). The beginnings of animal domestication involved a protracted coevolutionary process with multiple stages along different pathways. Humans did not intend to domesticate animals from, or at least they did not envision a domesticated animal resulting from, either the commensal or prey pathways. In both of these cases, humans became entangled with these species as the relationship between them, and the human role in their survival and reproduction, intensified. Although the directed pathway proceeded from capture to taming, the other two pathways are not as goal-oriented and archaeological records suggest that they take place over much longer time frames.

The pathways that animals may have followed are not mutually exclusive. Pigs, for example, may have been domesticated as their populations became accustomed to the human niche, which would suggest a commensal pathway, or they may have been hunted and followed a prey pathway, or both.

Commensal

The commensal pathway was traveled by vertebrates that fed on refuse around human habitats or by animals that preyed on other animals drawn to human camps. Those animals established a commensal relationship with humans in which the animals benefited while the humans were not harmed but gained little benefit. Those animals that were most capable of taking advantage of the resources associated with human camps would have been the tamer, less aggressive individuals with shorter fight or flight distances. Later, these animals developed closer social or economic bonds with humans that led to a domestic relationship. The leap from a synanthropic population to a domestic one could only have taken place after the animals had progressed from anthropophily to habituation, to commensalism and partnership, when the relationship between animal and human would have laid the foundation for domestication, including captivity and human-controlled breeding. From this perspective, animal domestication is a coevolutionary process in which a population responds to selective pressure while adapting to a novel niche that includes another species with evolving behaviors. Commensal pathway animals include dogs, cats, fowl, and possibly pigs.

The domestication of animals commenced over 15,000 years before present (YBP), beginning with the grey wolf (Canis lupus) by nomadic hunter-gatherers. It was not until 11,000 YBP that people living in the Near East entered into relationships with wild populations of aurochs, boar, sheep, and goats; a domestication process then began to develop. The grey wolf most likely followed the commensal pathway to domestication. When, where, and how many times wolves may have been domesticated remains debated because only a small number of ancient specimens have been found, and both archaeology and genetics continue to provide conflicting evidence. The most widely accepted, earliest dog remains date back 15,000 YBP to the Bonn–Oberkassel dog. Earlier remains dating back to 30,000 YBP have been described as Paleolithic dogs; however, their status as dogs or wolves remains debated. Recent studies indicate that a genetic divergence occurred between dogs and wolves 20,000–40,000 YBP; however, this is the upper time limit for domestication because it represents the time of divergence, not the time of domestication.

The chicken is one of the most widespread domesticated species and one of the human world's largest sources of protein. Although the chicken was domesticated in South-East Asia, archaeological evidence suggests that it was not kept as a livestock species until 400 BCE in the Levant. Prior to this, chickens had been associated with humans for thousands of years and kept for cock-fighting, rituals, and royal zoos, so they were not originally a prey species. The chicken was not a popular food in Europe until about one thousand years ago.

Prey

Humped cattle serving as dairy cows in North India

The prey pathway was the way in which most major livestock species entered into domestication as these were once hunted by humans for their meat. Domestication was likely initiated when humans began to experiment with hunting strategies designed to increase the availability of these prey, perhaps as a response to localized pressure on the supply of the animal. Over time and with the more responsive species, these game-management strategies developed into herd-management strategies that included the sustained multi-generational control over the animals’ movement, feeding, and reproduction. As human interference in the life-cycles of prey animals intensified, the evolutionary pressures for a lack of aggression would have led to an acquisition of the same domestication syndrome traits found in the commensal domesticates.

Prey pathway animals include sheep, goats, cattle, water buffalo, yak, pig, reindeer, llama and alpaca. The right conditions for the domestication of some of them appear to have been in place in the central and eastern Fertile Crescent at the end of the Younger Dryas climatic downturn and the beginning of the Early Holocene, about 11,700 YBP; by 10,000 YBP people were preferentially killing the young males of a variety of species and allowing the females to live in order to produce more offspring. By measuring the size, sex ratios, and mortality profiles of zooarchaeological specimens, archeologists have been able to document changes in the management strategies of hunted sheep, goats, pigs, and cows in the Fertile Crescent starting 11,700 YBP. A recent demographic and metrical study of cow and pig remains at Sha’ar Hagolan, Israel, demonstrated that both species were severely overhunted before domestication, suggesting that intensive exploitation led to management strategies being adopted throughout the region that ultimately led to the domestication of these populations along the prey pathway. This pattern of overhunting before domestication suggests that the prey pathway was as accidental and unintentional as the commensal pathway.

Directed

Kazakh shepherd with horse and dogs. Their job is to guard the sheep from predators.

The directed pathway was a more deliberate and directed process, initiated by humans with the goal of domesticating a free-living animal. It probably only came into being once people were familiar with either commensal or prey-pathway domesticated animals. These animals likely did not possess many of the behavioral preadaptations that some species show before domestication. Therefore, the domestication of these animals requires more deliberate effort by humans to work around behaviors that do not assist domestication, with increased technological assistance needed.

Humans were already reliant on domestic plants and animals when they imagined the domestic versions of wild animals. Although horses, donkeys, and Old World camels were sometimes hunted as prey species, they were each deliberately brought into the human niche as sources of transport. Domestication was still a multi-generational adaptation to human selection pressures, including tameness, but without a suitable evolutionary response, domestication was not achieved. For example, despite the fact that hunters of the Near Eastern gazelle in the Epipaleolithic avoided culling reproductive females to promote population balance, neither gazelles nor zebras possessed the necessary prerequisites and were never domesticated. There is no clear evidence for the domestication of any herded prey animal in Africa, with the notable exception of the donkey, which was domesticated in Northeast Africa sometime in the 4th millennium BCE.

Post-domestication gene flow

As agricultural societies migrated away from the domestication centers taking their domestic partners with them, they encountered populations of wild animals of the same or sister species. Because domestics often shared a recent common ancestor with the wild populations, they were capable of producing fertile offspring. Domestic populations were small relative to the surrounding wild populations, and repeated hybridizations between the two eventually led to the domestic population becoming more genetically divergent from its original domestic source population.

Advances in DNA sequencing technology allow the nuclear genome to be accessed and analyzed in a population genetics framework. The increased resolution of nuclear sequences has demonstrated that gene flow is common, not only between geographically diverse domestic populations of the same species but also between domestic populations and wild species that never gave rise to a domestic population.

  • The yellow leg trait possessed by numerous modern commercial chicken breeds was acquired via introgression from the grey junglefowl indigenous to South Asia.
  • African cattle are hybrids that possess both a European Taurine cattle maternal mitochondrial signal and an Asian Indicine cattle paternal Y-chromosome signature.
  • Numerous other bovid species, including bison, yak, banteng, and gaur, hybridize with ease.
  • Cats and horses have been shown to hybridize with many closely related species.

The archaeological and genetic data suggest that long-term bidirectional gene flow between wild and domestic stocks – including canids, donkeys, horses, New and Old World camelids, goats, sheep, and pigs – was common. Bidirectional gene flow between domestic and wild reindeer continues today.
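
The genetic side of such conclusions rests on admixture statistics computed from genome sequences. As a purely illustrative sketch (the population labels, sample layout, and allele frequencies below are invented for the example and are not taken from any study cited here), Patterson's D, often called the ABBA-BABA test, asks whether one domestic population shares more derived alleles with a wild population than a second domestic population does, which is the expected signature of gene flow:

# Illustrative sketch only: Patterson's D ("ABBA-BABA") statistic for detecting
# gene flow between a domestic population (P2) and a wild population (P3),
# relative to a second domestic population (P1), with an outgroup (O) used to
# identify the ancestral allele. All data below are hypothetical.
from typing import Sequence

def patterson_d(p1: Sequence[float], p2: Sequence[float],
                p3: Sequence[float], out: Sequence[float]) -> float:
    """Return Patterson's D over a set of biallelic sites, where each argument
    holds one population's derived-allele frequency at each site.
    D > 0 indicates excess allele sharing between P2 and P3, i.e. gene flow."""
    abba = baba = 0.0
    for a, b, c, d in zip(p1, p2, p3, out):
        abba += (1 - a) * b * c * (1 - d)  # weight of the ABBA site pattern
        baba += a * (1 - b) * c * (1 - d)  # weight of the BABA site pattern
    return 0.0 if abba + baba == 0 else (abba - baba) / (abba + baba)

# Toy data: five sites, with the wild population (p3) sharing derived alleles
# preferentially with the second domestic population (p2).
p1 = [0.1, 0.0, 0.2, 0.1, 0.0]   # domestic population 1
p2 = [0.6, 0.5, 0.7, 0.4, 0.5]   # domestic population 2
p3 = [0.8, 0.7, 0.9, 0.6, 0.7]   # sympatric wild population
out = [0.0, 0.0, 0.0, 0.0, 0.0]  # outgroup carrying the ancestral allele
print(f"D = {patterson_d(p1, p2, p3, out):.3f}")  # positive D suggests gene flow

In practice such statistics are computed over millions of sites and assessed with a block-jackknife for significance; the sketch only shows the core arithmetic.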

The consequence of this introgression is that modern domestic populations can often appear to have much greater genomic affinity to wild populations that were never involved in the original domestication process. Therefore, it is proposed that the term "domestication" should be reserved solely for the initial process of domestication of a discrete population in time and space. Subsequent admixture between introduced domestic populations and local wild populations that were never domesticated should be referred to as "introgressive capture". Conflating these two processes muddles understanding of the original process and can lead to an artificial inflation of the number of times domestication took place. This introgression can, in some cases, be regarded as adaptive introgression, as observed in domestic sheep due to gene flow with the wild European Mouflon.

The sustained admixture between dog and wolf populations across the Old and New Worlds over at least the last 10,000 years has blurred the genetic signatures and confounded researchers' efforts to pinpoint the origins of domestic dogs. None of the modern wolf populations are related to the Pleistocene wolves that were first domesticated, and the extinction of the wolves that were the direct ancestors of dogs has muddied efforts to pinpoint the time and place of dog domestication.

Positive selection

Charles Darwin recognized the small number of traits that made domestic species different from their wild ancestors. He was also the first to recognize the difference between conscious selective breeding in which humans directly select for desirable traits, and unconscious selection where traits evolve as a by-product of natural selection or from selection on other traits.

Domestic animals vary in coat color, craniofacial morphology, reduced brain size, floppy ears, and changes in the endocrine system and reproductive cycle. The domesticated silver fox experiment demonstrated that selection for tameness within a few generations can result in modified behavioral, morphological, and physiological traits. The experiment demonstrated that domestic phenotypic traits could arise through selection for a behavioral trait, and that domestic behavioral traits could arise through the selection for a phenotypic trait. In addition, the experiment provided a mechanism for the start of the animal domestication process that did not depend on deliberate human forethought and action. In the 1980s, a researcher used a set of behavioral, cognitive, and visible phenotypic markers, such as coat color, to produce domesticated fallow deer within a few generations. Similar results for tameness and fear have been found for mink and Japanese quail.

Pig herding in fog, Armenia. Human selection for domestic traits is not affected by later gene flow from wild boar.

The genetic difference between domestic and wild populations can be framed within two considerations. The first distinguishes between domestication traits that are presumed to have been essential at the early stages of domestication, and improvement traits that have appeared since the split between wild and domestic populations. Domestication traits are generally fixed within all domesticates and were selected during the initial episode of domestication, whereas improvement traits are present only in a proportion of domesticates, though they may be fixed in individual breeds or regional populations. A second issue is whether traits associated with the domestication syndrome resulted from a relaxation of selection as animals exited the wild environment or from positive selection resulting from intentional and unintentional human preference. Some recent genomic studies on the genetic basis of traits associated with the domestication syndrome have shed light on both of these issues.

Geneticists have identified more than 300 genetic loci and 150 genes associated with coat color variability. Knowing the mutations associated with different colors has allowed some correlation between the timing of the appearance of variable coat colors in horses and the timing of their domestication. Other studies have shown how human-induced selection is responsible for the allelic variation in pigs. Together, these insights suggest that, although natural selection had kept variation to a minimum before domestication, humans have actively selected for novel coat colors as soon as they appeared in managed populations.

In 2015, a study looked at over 100 pig genome sequences to ascertain their process of domestication. The process of domestication had been assumed to have been initiated by humans, to have involved few individuals, and to have relied on reproductive isolation between wild and domestic forms, but the study found that the assumption of reproductive isolation with population bottlenecks was not supported. The study indicated that pigs were domesticated separately in Western Asia and China, with Western Asian pigs introduced into Europe, where they crossed with wild boar. A model that fitted the data included admixture with a now-extinct ghost population of wild pigs during the Pleistocene. The study also found that, despite back-crossing with wild pigs, the genomes of domestic pigs have strong signatures of selection at genetic loci that affect behavior and morphology. Human selection for domestic traits likely counteracted the homogenizing effect of gene flow from wild boars and created domestication islands in the genome.
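
The idea of a "domestication island" can be made concrete with a small sketch. Assuming allele frequencies estimated separately in domestic and wild samples (the estimator choice, window size, sample sizes, and toy data below are illustrative assumptions, not the cited study's actual pipeline), divergence can be computed in windows along a chromosome; windows whose divergence stands out against a background homogenized by gene flow are candidate islands of selection:

# Illustrative sketch only: a sliding-window divergence scan between domestic
# and wild allele frequencies using Hudson's FST estimator. Windows with
# unusually high mean FST are candidate "domestication islands".
from statistics import mean
from typing import List, Tuple

def hudson_fst(p_dom: float, p_wild: float, n_dom: int, n_wild: int) -> float:
    """Hudson's FST estimator for one biallelic site, given sample allele
    frequencies and sample sizes (numbers of sampled chromosomes). Slightly
    negative values can occur at undifferentiated sites; that is expected."""
    num = ((p_dom - p_wild) ** 2
           - p_dom * (1 - p_dom) / (n_dom - 1)
           - p_wild * (1 - p_wild) / (n_wild - 1))
    den = p_dom * (1 - p_wild) + p_wild * (1 - p_dom)
    return num / den if den > 0 else 0.0

def window_scan(sites: List[Tuple[float, float]], n_dom: int, n_wild: int,
                window: int = 50) -> List[float]:
    """sites holds (domestic, wild) allele frequencies in genomic order;
    returns the mean FST of each non-overlapping window of `window` sites."""
    return [mean(hudson_fst(pd, pw, n_dom, n_wild)
                 for pd, pw in sites[i:i + window])
            for i in range(0, len(sites), window)]

# Toy usage: 150 sites, with the middle 50 strongly differentiated.
background = [(0.50, 0.52)] * 50   # similar frequencies: homogenized by gene flow
island = [(0.95, 0.10)] * 50       # divergent frequencies: candidate island
sites = background + island + background
print(window_scan(sites, n_dom=40, n_wild=40))  # the middle window stands out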

Unlike other domestic species which were primarily selected for production-related traits, dogs were initially selected for their behaviors. In 2016, a study found that there were only 11 fixed genes that showed variation between wolves and dogs. These gene variations were unlikely to have been the result of natural evolution, and indicate selection on both morphology and behavior during dog domestication. These genes have been shown to affect the catecholamine synthesis pathway, with the majority of the genes affecting the fight-or-flight response (i.e. selection for tameness), and emotional processing. Dogs generally show reduced fear and aggression compared to wolves. Some of these genes have been associated with aggression in some dog breeds, indicating their importance in both the initial domestication and then later in breed formation.

Authorship of the Bible

From Wikipedia, the free encyclopedia ...