Wednesday, July 12, 2023

Personality test

From Wikipedia, the free encyclopedia
The four temperaments as illustrated by Johann Kaspar Lavater

A personality test is a method of assessing human personality constructs. Most personality assessment instruments (despite being loosely referred to as "personality tests") are in fact introspective (i.e., subjective) self-report questionnaire measures (Q-data, in terms of LOTS data) or reports from life records (L-data), such as rating scales. Attempts to construct actual performance tests of personality have been very limited, even though Raymond Cattell and his colleague Frank Warburton compiled a list of over 2,000 separate objective tests that could be used in constructing objective personality tests. One exception, however, was the Objective-Analytic Test Battery, a performance test designed to quantitatively measure 10 factor-analytically discerned personality trait dimensions. A major problem with both L-data and Q-data methods is item transparency: rating scales and self-report questionnaires are highly susceptible to motivational and response distortion, ranging from lack of adequate self-insight (or biased perceptions of others) to outright dissimulation ("faking good" or "faking bad"), depending on the reason the assessment is being undertaken.

The first personality assessment measures were developed in the 1920s and were intended to ease the process of personnel selection, particularly in the armed forces. Since these early efforts, a wide variety of personality scales and questionnaires have been developed, including the Minnesota Multiphasic Personality Inventory (MMPI), the Sixteen Personality Factor Questionnaire (16PF), and the Comrey Personality Scales (CPS), among many others. Although popular, especially among personnel consultants, the Myers–Briggs Type Indicator (MBTI) has numerous psychometric deficiencies. More recently, a number of instruments based on the Five Factor Model of personality have been constructed, such as the Revised NEO Personality Inventory. However, the Big Five and related Five Factor Model have been challenged for accounting for less than two-thirds of the known trait variance in the normal personality sphere alone.

Estimates of how much the personality assessment industry in the US is worth range anywhere from $2 billion to $4 billion a year (as of 2013). Personality assessment is used in a wide range of contexts, including individual and relationship counseling, clinical psychology, forensic psychology, school psychology, career counseling, employment testing, occupational health and safety, and customer relationship management.

History

Illustration in a 19th-century book depicting physiognomy

The origins of personality assessment date back to the 18th and 19th centuries, when personality was assessed through phrenology, the measurement of bumps on the human skull, and physiognomy, which assessed personality based on a person's outer appearance. Sir Francis Galton took another approach to assessing personality late in the 19th century. Based on the lexical hypothesis, Galton estimated the number of adjectives that described personality in the English dictionary. Galton's list was eventually refined by Louis Leon Thurstone to 60 words that were commonly used for describing personality at the time. By factor analyzing responses from 1,300 participants, Thurstone was able to reduce this severely restricted pool of 60 adjectives to seven common factors. This procedure of factor analyzing common adjectives was later utilized by Raymond Cattell (the 7th most highly cited psychologist of the 20th century, based on the peer-reviewed journal literature), who subsequently used a data set of over 4,000 affect terms from the English dictionary; this eventually resulted in the construction of the Sixteen Personality Factor Questionnaire (16PF), which also measured up to eight second-stratum personality factors. Of the many introspective (i.e., subjective) self-report instruments constructed to measure the putative Big Five personality dimensions, perhaps the most popular has been the Revised NEO Personality Inventory (NEO-PI-R). However, the psychometric properties of the NEO-PI-R (including its factor analytic/construct validity) have been severely criticized.

Another early personality instrument was the Woodworth Personal Data Sheet, a self-report inventory developed for World War I and used for the psychiatric screening of new draftees.

Overview

There are many different types of personality assessment measures. The self-report inventory involves administration of many items requiring respondents to introspectively assess their own personality characteristics. This is highly subjective, and because of item transparency, such Q-data measures are highly susceptible to motivational and response distortion. Respondents are required to indicate their level of agreement with each item using a Likert scale or, more accurately, a Likert-type scale. An item on a personality questionnaire, for example, might ask respondents to rate the degree to which they agree with the statement "I talk to a lot of different people at parties" on a scale from 1 ("strongly disagree") to 5 ("strongly agree").

Historically, the most widely used multidimensional personality instrument is the Minnesota Multiphasic Personality Inventory (MMPI), a psychopathology instrument originally designed to assess archaic psychiatric nosology.

In addition to subjective/introspective self-report inventories, there are several other methods for assessing human personality, including observational measures, ratings of others, projective tests (e.g., the TAT and Ink Blots), and actual objective performance tests (T-data).

Topics

Norms

Personality test scores are difficult to interpret in a direct sense. For this reason, producers of personality tests make substantial efforts to produce norms that provide a comparative basis for interpreting a respondent's test scores. Common formats for these norms include percentile ranks, z scores, sten scores, and other forms of standardized scores.
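As a concrete illustration of these conversions, the sketch below turns a raw scale score into a z score and a sten score. The norm mean and standard deviation are invented values, not norms from any published test; only the sten formula itself (sten = 2z + 5.5, bounded to 1–10) is standard.

```python
# A minimal sketch of norm-based score conversion. The norm-group mean
# and standard deviation below are hypothetical illustration values.

def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Standardize a raw score against the norm group."""
    return (raw - norm_mean) / norm_sd

def sten_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Convert to the sten scale (mean 5.5, SD 2), bounded to 1-10."""
    sten = 5.5 + 2.0 * z_score(raw, norm_mean, norm_sd)
    return max(1.0, min(10.0, sten))

print(z_score(38, norm_mean=30, norm_sd=8))     # 1.0 (one SD above mean)
print(sten_score(38, norm_mean=30, norm_sd=8))  # 7.5
```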

Test development

A substantial amount of research and thinking has gone into the topic of personality test development. Development of personality tests tends to be an iterative process whereby a test is progressively refined. Test development can proceed on theoretical or statistical grounds. There are three commonly used general strategies: inductive, deductive, and empirical. Scales created today will often incorporate elements of all three methods.

Deductive assessment construction begins by selecting a domain or construct to measure. The construct is thoroughly defined by experts, and items are created which fully represent all the attributes of the construct definition. Test items are then selected or eliminated based upon which will result in the strongest internal validity for the scale. Measures created through deductive methodology can be just as valid as, and take significantly less time to construct than, inductive and empirical measures. The clearly defined and face-valid questions that result from this process make them easy for the person taking the assessment to understand. Although subtle items can be created through the deductive process, these measures often are not as capable of detecting lying as other methods of personality assessment construction.

Inductive assessment construction begins with the creation of a multitude of diverse items. The items created for an inductive measure are not intended to represent any particular theory or construct. Once the items have been created, they are administered to a large group of participants. This allows researchers to analyze natural relationships among the questions and to label components of the scale based upon how the questions group together. Several statistical techniques can be used to determine the constructs assessed by the measure. Exploratory factor analysis and confirmatory factor analysis are two of the most common data reduction techniques that allow researchers to create scales from responses to the initial items.

The Five Factor Model of personality was developed using this method. Advanced statistical methods offer the opportunity to discover previously unidentified or unexpected relationships between items or constructs. They may also allow for the development of subtle items that prevent test takers from knowing what is being measured, and may represent the actual structure of a construct better than a pre-developed theory. Criticisms include a vulnerability to finding item relationships that do not apply to a broader population, difficulty identifying what is measured by each component because of confusing item relationships, and constructs that are not fully addressed by the originally created questions.
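A minimal sketch of the factor-analytic step in inductive construction is shown below, using scikit-learn's FactorAnalysis. The response matrix here is randomly simulated stand-in data (real item responses would show meaningful factor structure), so everything is illustrative rather than an analysis of any real questionnaire.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated 1-5 Likert-type responses: 500 respondents x 20 items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(500, 20)).astype(float)

# Extract five latent factors from the inter-item relationships.
fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(responses)

# Each column of loadings shows how strongly each item relates to one
# factor; researchers inspect these groupings to label scale components.
loadings = fa.components_.T  # shape: (20 items, 5 factors)
print(loadings.shape)
```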

Empirically derived personality assessments require statistical techniques. One of the central goals of empirical personality assessment is to create a test that validly discriminates between two distinct dimensions of personality. Empirical tests can take a great deal of time to construct. To ensure that the test measures what it purports to measure, psychologists first collect data through self- or observer reports, ideally from a large number of participants.

Self- vs. observer-reports

A personality test can be administered directly to the person being evaluated or to an observer. In a self-report, the individual responds to personality items as they pertain to the person himself/herself. Self-reports are commonly used. In an observer-report, a person responds to the personality items as those items pertain to someone else. To produce the most accurate results, the observer needs to know the individual being evaluated. Combining the scores of a self-report and an observer report can reduce error, providing a more accurate depiction of the person being evaluated. Self- and observer-reports tend to yield similar results, supporting their validity.

Direct observation reports

Direct observation involves a second party directly observing and evaluating someone else. The second party observes how the target of the observation behaves in certain situations (e.g., how a child behaves in a schoolyard during recess). The observations can take place in a natural setting (e.g., a schoolyard) or an artificial setting (e.g., a social psychology laboratory). Direct observation can help identify job applicants who are likely to be successful (e.g., through work samples) or assess maternal attachment in young children (e.g., Mary Ainsworth's strange situation). The object of the method is to directly observe genuine behaviors in the target. One limitation of direct observation is that target persons may change their behavior because they know they are being observed. A second limitation is that some behavioral traits are more difficult to observe (e.g., sincerity) than others (e.g., sociability). A third limitation is that direct observation is more expensive and time-consuming than a number of other methods (e.g., self-report).

Personality tests in the workplace

Though personality tests date back to the early 20th century, employers did not begin to use them broadly until 1988, when it became illegal in the United States for employers to use polygraphs. The idea behind these personality tests is that employers can reduce their turnover rates and prevent economic losses from people prone to theft, drug abuse, emotional disorders, or violence in the workplace. There is a chance that applicants may fake responses to personality test items in order to appear more attractive to the employing organization than they actually are.

Personality tests are often part of management consulting services, as having a certification to conduct a particular test is a way for a consultant to offer an additional service and demonstrate their qualifications. The tests are used in narrowing down potential job applicants, as well as which employees are more suitable for promotion. The United States federal government is a notable customer of personality test services outside the private sector with approximately 200 federal agencies, including the military, using personality assessment services.

Despite evidence showing personality tests as one of the least reliable metrics in assessing job applicants, they remain popular as a way to screen candidates.

Test evaluation

There are several criteria for evaluating a personality test. For a test to be successful, users need to be sure that (a) test results are replicable and (b) the test measures what its creators purport it to measure. Fundamentally, a personality test is expected to demonstrate reliability and validity. Reliability refers to the extent to which test scores, if a test were administered to a sample twice within a short period of time, would be similar in both administrations. Test validity refers to evidence that a test measures the construct (e.g., neuroticism) that it is supposed to measure.
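As a minimal illustration of the reliability criterion, the sketch below estimates test-retest reliability as the Pearson correlation between two administrations of the same test; the score vectors are invented example data.

```python
import numpy as np

# Scores from the same eight respondents at two administrations
# a short time apart (illustrative values).
time1 = np.array([12, 25, 31, 18, 22, 27, 15, 29], dtype=float)
time2 = np.array([14, 24, 30, 20, 21, 28, 16, 27], dtype=float)

# The off-diagonal entry of the correlation matrix is the test-retest
# reliability coefficient; values near 1 indicate stable scores.
reliability = np.corrcoef(time1, time2)[0, 1]
print(round(reliability, 3))
```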

Analysis

Respondents' answers form the raw data for the analysis, which is a long process. Two major theories are used here: classical test theory (CTT), used for the observed score, and item response theory (IRT), "a family of models for persons' responses to items". The two theories focus upon different 'levels' of response, and researchers are urged to use both in order to fully appreciate their results.
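To make the IRT side concrete, the sketch below implements one common member of that family, the two-parameter logistic (2PL) model, in which the probability of endorsing an item depends on the respondent's latent trait level theta, the item's difficulty b, and its discrimination a. The parameter values are arbitrary illustrations.

```python
import math

def p_endorse(theta: float, a: float, b: float) -> float:
    """2PL item response function: P(endorse | theta, a, b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(p_endorse(theta=0.0, a=1.2, b=0.0))  # 0.5 when trait == difficulty
print(p_endorse(theta=1.0, a=1.2, b=0.0))  # ~0.77 at a higher trait level
```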

Non-response

Firstly, item non-response needs to be addressed. Non-response can be either unit non-response, where a person gave no response to any of the n items, or item non-response, where individual questions were left unanswered. Unit non-response is generally dealt with by exclusion. Item non-response should be handled by imputation; the method used can vary between test and questionnaire items.
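The sketch below shows one simple imputation scheme for item non-response, person-mean imputation. It is only one of several possible methods and, as noted above, the appropriate choice varies between test and questionnaire items.

```python
import numpy as np

# One respondent's answers on a six-item scale; np.nan marks
# unanswered items (item non-response).
responses = np.array([4.0, 2.0, np.nan, 5.0, 3.0, np.nan])

# Replace each missing item with the mean of the items the
# respondent did answer.
filled = np.where(np.isnan(responses), np.nanmean(responses), responses)
print(filled)  # missing entries become 3.5
```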

Scoring

The conventional method of scoring items is to assign '0' for an incorrect answer and '1' for a correct answer. When tests have more response options (e.g., multiple choice items), items may be scored '0' when incorrect, '1' for being partly correct, and '2' for being correct. Personality tests can also be scored using a dimensional (normative) or a typological (ipsative) approach. Dimensional approaches such as the Big Five describe personality as a set of continuous dimensions on which individuals differ. From the item scores, an 'observed' score is computed. This is generally found by summing the un-weighted item scores.
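The sketch below walks through dimensional scoring as just described: negatively worded items are reverse-keyed, then the un-weighted item scores are summed into an 'observed' score. The items and keying are invented for illustration.

```python
MAX_RATING = 5  # 5-point Likert-type scale

ratings = [5, 4, 2, 5, 1]  # one respondent's answers to items 1-5
reverse_keyed = [False, False, True, False, True]  # items 3 and 5 are
                                                   # negatively worded

def score_item(rating: int, reverse: bool) -> int:
    # Reverse-keying maps 1<->5 and 2<->4 on a 5-point scale.
    return (MAX_RATING + 1 - rating) if reverse else rating

observed = sum(score_item(r, rev) for r, rev in zip(ratings, reverse_keyed))
print(observed)  # 5 + 4 + 4 + 5 + 5 = 23
```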

Criticism and controversy

Personality versus social factors

In the 1960s and 1970s some psychologists dismissed the whole idea of personality, considering much behaviour to be context-specific. This idea was supported by the fact that personality often does not predict behaviour in specific contexts. However, more extensive research has shown that when behaviour is aggregated across contexts, personality can be a good predictor of behaviour. Almost all psychologists now acknowledge that both social and individual difference factors (i.e., personality) influence behaviour. The debate is currently more about the relative importance of each of these factors and how these factors interact.

Respondent faking

One problem with self-report measures of personality is that respondents are often able to distort their responses.

Several meta-analyses show that people are able to substantially change their scores on personality tests when such tests are taken under high-stakes conditions, such as part of a job selection procedure.

Work in experimental settings has also shown that when student samples are asked to deliberately fake on a personality test, they clearly demonstrate that they are capable of doing so. Hogan, Barrett and Hogan (2007) analyzed data from 5,266 applicants who completed a personality test based on the Big Five. The applicants had been rejected at their first application. After six months the applicants reapplied and completed the same personality test. The answers on the two personality tests were compared, and there was no significant difference between them.

So in practice, most people do not significantly distort their responses. Nevertheless, a researcher has to be prepared for such possibilities. Also, participants sometimes think that test results are more valid than they really are because they like the results they get. People want to believe that the positive traits the test results say they possess are in fact present in their personality. This distorts people's judgments of the validity of such tests.

Several strategies have been adopted for reducing respondent faking. One strategy involves providing a warning on the test that methods exist for detecting faking and that detection will result in negative consequences for the respondent (e.g., not being considered for the job). Forced-choice item formats (ipsative testing) have been adopted, which require respondents to choose between alternatives of equal social desirability. Social desirability and lie scales, which detect certain patterns of responses, are often included, although these are often confounded by true variability in social desirability.

More recently, Item Response Theory approaches have been adopted with some success in identifying item response profiles that flag fakers. Other researchers are looking at the timing of responses on electronically administered tests to assess faking. While people can fake, in practice they seldom do so to any significant level. To successfully fake means knowing what the ideal answer would be. Even with something as simple as assertiveness, people who are unassertive and try to appear assertive often endorse the wrong items. This is because unassertive people confuse assertion with aggression, anger, oppositional behavior, etc.

Psychological research

Research on the importance of personality and intelligence in education shows evidence that personality ratings provided by others, rather than self-ratings, are nearly four times more accurate for predicting grades.

Additional applications

The MBTI questionnaire is a popular tool for people to use as part of self-examination or to find a shorthand to describe how they relate to others in society. It is well known from its widespread adoption in hiring practices, but popular among individuals for its focus exclusively on positive traits and "types" with memorable names. Some users of the questionnaire self-identify by their personality type on social media and dating profiles. Due to the publisher's strict copyright enforcement, many assessments come from free websites which provide modified tests based on the framework.

Unscientific personality type quizzes are also a common form of entertainment. In particular, BuzzFeed became well known for publishing user-created quizzes, with personality-style tests often based on deciding which pop culture character or celebrity the user most resembles.

Dangers

One danger is the issue of privacy: applicants are forced to reveal private thoughts and feelings through their responses, which can seem to become a condition of employment. Another danger is illegal discrimination against certain groups under the guise of a personality test.

In addition to the risks of personality test results being used outside of an appropriate context, they can give inaccurate results when conducted incorrectly. In particular, ipsative personality tests are often misused in recruitment and selection, where they are mistakenly treated as if they were normative measures.

Effects of technological advancements on the field

New technological advancements are increasing the possible ways that data can be collected and analyzed, and broadening the types of data that can be used to reliably assess personality. Although qualitative assessments of job applicants' social media have existed for nearly as long as social media itself, many scientific studies have successfully quantified patterns in social media usage into various metrics to assess personality quantitatively. Smart devices, such as smartphones and smartwatches, are also now being used to collect data in new ways and in unprecedented quantities. Brain scan technology has also improved dramatically and is now being developed to analyze the personalities of individuals more accurately.

Aside from advancing data collection methods, data processing methods are also improving rapidly. Strides in big data and pattern recognition in enormous databases (data mining) have allowed for better data analysis than ever before, including analysis of large amounts of data that were previously difficult or impossible to interpret reliably (for example, data from the internet). There are other areas of current work too, such as the gamification of personality tests to make them more engaging and to lessen the effects of psychological phenomena that skew personality assessment data.

With new data collection methods come new ethical concerns, such as the analysis of a person's public data to make assessments about their personality, and the question of when consent is needed.

Examples of personality tests

  • The first modern personality test was the Woodworth Personal Data Sheet, which was first used in 1919. It was designed to help the United States Army screen out recruits who might be susceptible to shell shock.
  • The Rorschach inkblot test was introduced in 1921 as a way to determine personality by the interpretation of inkblots.
  • The Thematic Apperception Test was commissioned by the Office of Strategic Services (O.S.S.) in the 1930s to identify personalities that might be susceptible to being turned by enemy intelligence.
  • The Minnesota Multiphasic Personality Inventory was published in 1942 as a way to aid in assessing psychopathology in a clinical setting. It can also be used to assess the Personality Psychopathology Five (PSY-5), which are similar to the Five Factor Model (FFM; or Big Five personality traits). These five scales on the MMPI-2 include aggressiveness, psychoticism, disconstraint, negative emotionality/neuroticism, and introversion/low positive emotionality.
  • Myers–Briggs Type Indicator (MBTI) is a questionnaire designed to measure psychological preferences in how people perceive the world and make decisions. This 16-type indicator is based on Carl Jung's Psychological Types and was developed during World War II by Isabel Myers and Katharine Briggs. The 16 types combine Extroversion-Introversion, Sensing-Intuition, Thinking-Feeling and Judging-Perceiving. The MBTI utilizes two opposing behavioral divisions on each of four scales, which together yield a "personality type".
  • OAD Survey is an adjective word list designed to measure seven work-related personality traits and job behaviors: Assertiveness-Compliance, Extroversion-Introversion, Patience-Impatience, Detail-Broad, High Versatility-Low Versatility, Low Emotional IQ-High Emotional IQ, Low Creativity-High Creativity. It was first published in 1990, with periodic norm revisions to assure scale validity, reliability, and non-bias.
  • Keirsey Temperament Sorter, developed by David Keirsey, is influenced by Isabel Myers's sixteen types and Ernst Kretschmer's four types.
  • The True Colors Test, developed by Don Lowry in 1978, is based on the work of David Keirsey in his book, Please Understand Me, as well as the Myers-Briggs Type Indicator, and provides a model for understanding personality types using the colors blue, gold, orange and green to represent four basic personality temperaments.
  • The 16PF Questionnaire (16PF) was developed by Raymond Cattell and his colleagues in the 1940s and 1950s in a search to try to discover the basic traits of human personality using scientific methodology. The test was first published in 1949, and is now in its 5th edition, published in 1994. It is used in a wide variety of settings for individual and marital counseling, career counseling and employee development, in educational settings, and for basic research.
  • The EQSQ Test, developed by Simon Baron-Cohen and Sally Wheelwright, centers on the empathizing-systemizing theory of male versus female brain types.
  • The Personality and Preference Inventory (PAPI), originally designed by Dr Max Kostick, Professor of Industrial Psychology at Boston State College, in Massachusetts, USA, in the early 1960s, evaluates the behaviour and preferred work styles of individuals.
  • The Strength Deployment Inventory, developed by Elias Porter in 1971, is based on his theory of Relationship Awareness. Porter was the first known psychometrician to use colors (Red, Green and Blue) as shortcuts to communicate the results of a personality test.
  • The Newcastle Personality Assessor (NPA), created by Daniel Nettle, is a short questionnaire designed to quantify personality on five dimensions: Extraversion, Neuroticism, Conscientiousness, Agreeableness, and Openness.
  • The DISC assessment is based on the research of William Moulton Marston and later work by John Grier, and identifies four personality types: Dominance, Influence, Steadiness and Conscientiousness. It is used widely in Fortune 500 companies and in for-profit and non-profit organizations.
  • The Winslow Personality Profile measures 24 traits on a decile scale. It has been used in the National Football League, the National Basketball Association, the National Hockey League and every draft choice for Major League Baseball for the last 30 years and can be taken online for personal development.
  • Other personality tests include Forté Profile, Millon Clinical Multiaxial Inventory, Eysenck Personality Questionnaire, Swedish Universities Scales of Personality, Edwin E. Wagner's The Hand Test, and Enneagram of Personality.
  • The HEXACO Personality Inventory – Revised (HEXACO PI-R) is based on the HEXACO model of personality structure, which consists of six domains: the five domains of the Big Five model plus the domain of Honesty-Humility.
  • The Personality Inventory for DSM-5 (PID-5) was developed in September 2012 by the DSM-5 Personality and Personality Disorders Workgroup with regard to a personality trait model proposed for DSM-5. The PID-5 includes 25 maladaptive personality traits as determined by Krueger, Derringer, Markon, Watson, and Skodol.
  • The Process Communication Model (PCM), developed by Taibi Kahler with NASA funding, was used to assist with shuttle astronaut selection. It is now a non-clinical personality assessment, communication and management methodology applied to corporate management, interpersonal communications, education, and real-time analysis of call centre interactions, among other uses.
  • The Birkman Method (TBM) was developed by Roger W. Birkman in the late 1940s. The instrument consists of ten scales describing "occupational preferences" (Interests), 11 scales describing "effective behaviors" (Usual behavior) and 11 scales describing interpersonal and environmental expectations (Needs). A corresponding set of 11 scale values was derived to describe "less than effective behaviors" (Stress behavior). TBM was created empirically. The psychological model is most closely associated with the work of Kurt Lewin. Occupational profiling consists of 22 job families with over 200 associated job titles connected to O*Net.
  • The International Personality Item Pool (IPIP) is a public domain set of more than 2000 personality items which can be used to measure many personality variables, including the Five Factor Model.
  • The Guilford-Zimmerman Temperament Survey examined 10 factors representing normal personality, and was used both in longitudinal studies and to examine the personality profiles of Italian pilots.

Personality tests of the five factor model

Different measures based on the Big Five personality traits include:

  • The NEO PI-R, or the Revised NEO Personality Inventory, is one of the most significant measures of the Five Factor Model (FFM). The measure was created by Costa and McCrae and contains 240 items in the form of sentences. Costa and McCrae divided each of the five domains into six facets each, 30 facets in total, and changed the way the FFM is measured.
  • The Five-Factor Model Rating Form (FFMRF) was developed by Lynam and Widiger in 2001 as a shorter alternative to the NEO PI-R. The form consists of 30 facets, 6 facets for each of the Big Five factors.
  • The Ten-Item Personality Inventory (TIPI) and the Five Item Personality Inventory (FIPI) are very abbreviated rating forms of the Big Five personality traits.
  • The Five Factor Personality Inventory – Children (FFPI-C) was developed to measure personality traits in children based upon the Five Factor Model (FFM).
  • The Big Five Inventory (BFI), developed by John, Donahue, and Kentle, is a 44-item self-report questionnaire consisting of adjectives that assess the domains of the Five Factor Model (FFM). The 10-Item Big Five Inventory (BFI-10) is a simplified version of the well-established BFI, developed to provide a personality inventory usable under time constraints. The BFI-10 assesses the five dimensions of the BFI using only two items each, cutting down the length of the BFI.
  • The Semi-structured Interview for the Assessment of the Five-Factor Model (SIFFM) is the only semi-structured interview intended to measure a personality model or personality disorder. The interview assesses the five domains and 30 facets as presented by the NEO PI-R, and it additionally assesses both normal and abnormal extremes of each facet.

Human chorionic gonadotropin

From Wikipedia, the free encyclopedia

Human chorionic gonadotropin (hCG) is a hormone for the maternal recognition of pregnancy produced by trophoblast cells that surround a growing embryo (syncytiotrophoblast initially), which eventually forms the placenta after implantation. The presence of hCG is detected in some pregnancy tests (hCG pregnancy strip tests). Some cancerous tumors produce this hormone; therefore, elevated levels measured when the patient is not pregnant may lead to a cancer diagnosis and, if high enough, paraneoplastic syndromes. However, it is not known whether this production is a contributing cause or an effect of carcinogenesis. The pituitary analog of hCG, known as luteinizing hormone (LH), is produced in the pituitary gland of males and females of all ages.

Various endogenous forms of hCG exist. The measurement of these diverse forms is used in the diagnosis of pregnancy and a variety of disease states. Preparations of hCG from various sources have also been used therapeutically, by both medicine and quackery. As of December 6, 2011, the United States Food and Drug Administration has prohibited the sale of "homeopathic" and over-the-counter hCG diet products and declared them fraudulent and illegal.

Beta-hCG is initially secreted by the syncytiotrophoblast.

Structure

Human chorionic gonadotropin is a glycoprotein composed of 237 amino acids with a molecular mass of 36.7 kDa: approximately 14.5 kDa for αhCG and 22.2 kDa for βhCG.

It is heterodimeric, with an α (alpha) subunit identical to that of luteinizing hormone (LH), follicle-stimulating hormone (FSH), thyroid-stimulating hormone (TSH), and a β (beta) subunit that is unique to hCG.

  • The α (alpha) subunit is 92 amino acids long.
  • The β-subunit of hCG (beta-hCG) contains 145 amino acids, encoded by six highly homologous genes that are arranged in tandem and inverted pairs on chromosome 19q13.3: CGB (1, 2, 3, 5, 7, 8). CGB7 is known to have a sequence slightly different from that of the others.

The two subunits create a small hydrophobic core surrounded by a high surface area-to-volume ratio: 2.8 times that of a sphere. The vast majority of the outer amino acids are hydrophilic.

Beta-hCG is mostly similar to beta-LH, with the exception of a carboxy-terminal peptide (beta-CTP) containing four glycosylated serine residues, which is responsible for hCG's longer half-life.

Function

Human chorionic gonadotropin interacts with the LHCG receptor of the ovary and promotes the maintenance of the corpus luteum for the maternal recognition of pregnancy at the beginning of pregnancy. This allows the corpus luteum to secrete the hormone progesterone during the first trimester. Progesterone enriches the uterus with a thick lining of blood vessels and capillaries so that it can sustain the growing fetus.

It has been hypothesized that hCG may be a placental link for the development of local maternal immunotolerance. For example, hCG-treated endometrial cells induce an increase in T cell apoptosis (dissolution of T cells). These results suggest that hCG may be a link in the development of peritrophoblastic immune tolerance, and may facilitate the trophoblast invasion, which is known to expedite fetal development in the endometrium. It has also been suggested that hCG levels are linked to the severity of morning sickness or Hyperemesis gravidarum in pregnant women.

Because of its similarity to LH, hCG can also be used clinically to induce ovulation in the ovaries as well as testosterone production in the testes. As the most abundant biological source is in women who are presently pregnant, some organizations collect urine from pregnant women to extract hCG for use in fertility treatment.

Human chorionic gonadotropin also plays a role in cellular differentiation/proliferation and may activate apoptosis.

Production

Naturally, it is produced in the human placenta by the syncytiotrophoblast.

Like other gonadotropins, it can be extracted from the urine of pregnant women or produced from cultures of genetically modified cells using recombinant DNA technology.

In Pubergen, Pregnyl, Follutein, Profasi, Choragon and Novarel, it is extracted from the urine of pregnant women. In Ovidrel, it is produced with recombinant DNA technology.

hCG forms

Three major forms of hCG are produced by humans, with each having distinct physiological roles. These include regular hCG, hyperglycosylated hCG, and the free beta-subunit of hCG. Degradation products of hCG have also been detected, including nicked hCG, hCG missing the C-terminal peptide from the beta-subunit, and free alpha-subunit, which has no known biological function. Some hCG is also made by the pituitary gland with a pattern of glycosylation that differs from placental forms of hCG.

Regular hCG is the main form of hCG throughout the majority of pregnancy and in non-invasive molar pregnancies. It is produced in the trophoblast cells of the placental tissue. Hyperglycosylated hCG is the main form of hCG during the implantation phase of pregnancy, in invasive molar pregnancies, and in choriocarcinoma.

Gonadotropin preparations of hCG can be produced for pharmaceutical use from animal or synthetic sources.

Testing

Blood or urine tests measure hCG; these can serve as pregnancy tests. A positive hCG result can indicate an implanted blastocyst and mammalian embryogenesis, or hCG can be detected for a short time following childbirth or pregnancy loss. Tests can also be done to diagnose and monitor germ cell tumors and gestational trophoblastic diseases.

Concentrations are commonly reported in thousandths of an international unit per milliliter (mIU/mL). The international unit of hCG was originally established in 1938 and was redefined in 1964 and in 1980. At present, 1 international unit is equal to approximately 2.35×10−12 moles, or about 6×10−8 grams.
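A worked conversion using the figures above (and an arbitrary example serum level) is shown below; note that 1 mIU/mL is numerically equal to 1 IU/L.

```python
MOL_PER_IU = 2.35e-12   # approximate moles per international unit of hCG
GRAM_PER_IU = 6e-8      # approximate grams per international unit of hCG

level_miu_per_ml = 100.0            # example serum level in mIU/mL
level_iu_per_l = level_miu_per_ml   # 1 mIU/mL == 1 IU/L

print(level_iu_per_l * MOL_PER_IU)   # ~2.35e-10 mol/L
print(level_iu_per_l * GRAM_PER_IU)  # ~6.0e-06 g/L
```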

It is also possible to test hCG levels to obtain an approximation of the gestational age.

Methodology

Most tests employ a monoclonal antibody, which is specific to the β-subunit of hCG (β-hCG). This procedure is employed to ensure that tests do not return false positives by confusing hCG with LH and FSH. (The latter two are always present at varying levels in the body, whereas the presence of hCG almost always indicates pregnancy.)

Many hCG immunoassays are based on the sandwich principle, which uses antibodies to hCG labeled with an enzyme or a conventional or luminescent dye. Pregnancy urine dipstick tests are based on the lateral flow technique.

  • The urine test may be a chromatographic immunoassay or any of several other test formats, home-, physician's office-, or laboratory-based. Published detection thresholds range from 20 to 100 mIU/mL, depending on the brand of test. Early in pregnancy, more accurate results may be obtained by using the first urine of the morning (when urine is most concentrated). When the urine is dilute (specific gravity less than 1.015), the hCG concentration may not be representative of the blood concentration, and the test may be falsely negative.
  • The serum test, using 2-4 mL of venous blood, is typically a chemiluminescent or fluorimetric immunoassay that can detect βhCG levels as low as 5 mIU/mL and allows quantification of the βhCG concentration.

Reference levels in normal pregnancy

The following is a list of serum hCG levels. (Weeks are dated from the first day of the last menstrual period, LMP.) The levels grow exponentially after conception and implantation.

Weeks since LMP         hCG range (mIU/mL)
3                       5 – 50
4                       5 – 428
5                       18 – 7,340
6                       1,080 – 56,500
7 – 8                   7,650 – 229,000
9 – 12                  25,700 – 288,000
13 – 16                 13,300 – 254,000
17 – 24                 4,060 – 165,400
25 – 40                 3,640 – 117,000
Non-pregnant females    <5.0
Postmenopausal females  <9.5

If a pregnant woman has serum hCG levels that are higher than expected, they may be experiencing a multiple pregnancy or an abnormal uterine growth. Falling hCG levels may indicate the possibility of a miscarriage. hCG levels which are rising at a slower rate than expected may indicate an ectopic pregnancy.

Interpretation

The ability to quantitate the βhCG level is useful in monitoring germ cell and trophoblastic tumors, follow-up care after miscarriage, and diagnosis of and follow-up care after treatment of ectopic pregnancy. The lack of a visible fetus on vaginal ultrasound after βhCG levels reach 1500 mIU/mL is strongly indicative of an ectopic pregnancy. Still, even an hCG over 2000 IU/L does not necessarily exclude the presence of a viable intrauterine pregnancy in such cases.

As pregnancy tests, quantitative blood tests and the most sensitive urine tests usually detect hCG between 6 and 12 days after ovulation. It must be taken into account, however, that total hCG levels may vary in a very wide range within the first 4 weeks of gestation, leading to false results during this period. A rise of 35% over 48 hours is proposed as the minimal rise consistent with a viable intrauterine pregnancy.
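The sketch below applies that proposed criterion to a hypothetical pair of measurements taken 48 hours apart; the values are invented, and the 35% threshold is the minimal rise cited above, not a diagnostic rule on its own.

```python
def rise_percent(first: float, second: float) -> float:
    """Percentage rise between two hCG levels (mIU/mL)."""
    return (second - first) / first * 100.0

first, second = 1200.0, 1750.0   # example levels drawn 48 hours apart
rise = rise_percent(first, second)
print(round(rise, 1))            # 45.8 (% rise over 48 hours)
print(rise >= 35.0)              # True: meets the proposed minimal rise
```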

Gestational trophoblastic disease like hydatidiform moles ("molar pregnancy") or choriocarcinoma may produce high levels of βhCG (due to the presence of syncytiotrophoblasts - part of the villi that make up the placenta) despite the absence of an embryo. This, as well as several other conditions, can lead to elevated hCG readings in the absence of pregnancy.

hCG levels are also a component of the triple test, a screening test for certain fetal chromosomal abnormalities/birth defects.

A study of 32 normal pregnancies came to the result that a gestational sac of 1–3 mm was detected at a mean hCG level of 1150 IU/L (range 800–1500), a yolk sac was detected at a mean level of 6000 IU/L (range 4500–7500) and fetal heartbeat was visible at a mean hCG level of 10,000 IU/L (range 8650–12,200).

Uses

Tumor marker

Human chorionic gonadotropin can be used as a tumor marker, as its β subunit is secreted by some cancers including seminoma, choriocarcinoma, teratoma with elements of choriocarcinoma, other germ cell tumors, hydatidiform mole, and islet cell tumor. For this reason, a positive result in males can be a test for testicular cancer. The normal range for men is between 0 and 5 mIU/mL. Combined with alpha-fetoprotein, β-hCG is an excellent tumor marker for the monitoring of germ cell tumors.

Fertility

Human chorionic gonadotropin injection is extensively used for final maturation induction in lieu of luteinizing hormone. In the presence of one or more mature ovarian follicles, ovulation can be triggered by the administration of HCG. As ovulation will happen between 38 and 40 hours after a single HCG injection, procedures can be scheduled to take advantage of this time sequence, such as intrauterine insemination or sexual intercourse. Also, patients who undergo IVF generally receive HCG to trigger the ovulation process, but have an oocyte retrieval performed about 34 to 36 hours after injection, a few hours before the eggs would actually be released from the ovary.

As HCG supports the corpus luteum, administration of HCG is used in certain circumstances to enhance the production of progesterone.

In the male, HCG injections are used to stimulate the Leydig cells to synthesize testosterone. This intratesticular testosterone is necessary for spermatogenesis, which is supported by the Sertoli cells. Typical uses for HCG in men include hypogonadism and fertility treatment, including during testosterone replacement therapy to restore or maintain fertility and prevent testicular atrophy.

Several vaccines against human chorionic gonadotropin (hCG) for the prevention of pregnancy are currently in clinical trials.

HCG Pubergen, Pregnyl warnings

In the case of female patients who want to be treated with HCG Pubergen, Pregnyl: a) Since infertile female patients who undergo medically assisted reproduction (especially those who need in vitro fertilization) are known to often suffer from tubal abnormalities, they might experience many more ectopic pregnancies after treatment with this drug. This is why early ultrasound confirmation at the beginning of a pregnancy (to see whether the pregnancy is intrauterine or not) is crucial. Pregnancies that occur after treatment with this drug also carry a higher risk of multiple pregnancy. Female patients who have thrombosis, severe obesity, or thrombophilia should not be prescribed this medicine, as they have a higher risk of arterial or venous thromboembolic events during or after treatment with HCG Pubergen, Pregnyl. b) Female patients who have been treated with this medicine are usually more prone to pregnancy losses.

In the case of male patients: a prolonged treatment with HCG Pubergen, Pregnyl is known to regularly lead to increased production of androgen. Therefore, patients who have overt or latent cardiac failure, hypertension, renal dysfunction, migraines, or epilepsy might not be allowed to start using this medicine, or may require a lower dose. This drug should also be used with extreme caution in the treatment of prepubescent teenagers in order to reduce the risk of precocious sexual development or premature epiphyseal closure. The skeletal maturation of such patients should be closely and regularly monitored.

Both male and female patients who have the following medical conditions must not start a treatment with HCG Pubergen, Pregnyl: (1) hypersensitivity to this drug or to any of its main ingredients; (2) known or possible androgen-dependent tumors, for example male breast carcinoma or prostatic carcinoma.

Anabolic steroid adjunct

In the world of performance-enhancing drugs, HCG is increasingly used in combination with various anabolic-androgenic steroid (AAS) cycles. As a result, HCG is included in some sports' illegal drug lists.

When exogenous AAS are put into the male body, natural negative-feedback loops cause the body to shut down its own production of testosterone via shutdown of the hypothalamic-pituitary-gonadal axis (HPGA). This causes testicular atrophy, among other things. HCG is commonly used during and after steroid cycles to maintain and restore testicular size as well as normal testosterone production.

High levels of AASs, which mimic the body's natural testosterone, trigger the hypothalamus to shut down its production of gonadotropin-releasing hormone (GnRH). Without GnRH, the pituitary gland stops releasing luteinizing hormone (LH). LH normally travels from the pituitary via the blood stream to the testes, where it triggers the production and release of testosterone. Without LH, the testes shut down their production of testosterone. In males, HCG helps restore and maintain testosterone production in the testes by mimicking LH and triggering the production and release of testosterone.

If HCG is used for too long and in too high a dose, the resulting rise in natural testosterone and estrogen would eventually inhibit endogenous production of luteinizing hormone via negative feedback on the hypothalamus and pituitary gland.

Professional athletes who have tested positive for HCG have been temporarily banned from their sport, including a 50-game ban from MLB for Manny Ramirez in 2009 and a 4-game ban from the NFL for Brian Cushing for a positive urine test for HCG. Mixed martial arts fighter Dennis Siver was fined $19,800 and suspended for 9 months after testing positive following his bout at UFC 168.

HCG diet

British endocrinologist Albert T. W. Simeons proposed HCG as an adjunct to an ultra-low-calorie weight-loss diet (fewer than 500 calories). Simeons, while studying pregnant women in India on a calorie-deficient diet, and "fat boys" with pituitary problems (Frölich's syndrome) treated with low-dose HCG, observed that both lost fat rather than lean (muscle) tissue. He reasoned that HCG must be programming the hypothalamus to do this in the former cases in order to protect the developing fetus by promoting mobilization and consumption of abnormal, excessive adipose deposits. Simeons in 1954 published a book entitled Pounds and Inches, designed to combat obesity. Simeons, practicing at Salvator Mundi International Hospital in Rome, Italy, recommended low-dose daily HCG injections (125 IU) in combination with a customized ultra-low-calorie (500 cal/day, high-protein, low-carbohydrate/fat) diet, which was supposed to result in a loss of adipose tissue without loss of lean tissue.

Other researchers did not find the same results when attempting experiments to confirm Simeons' conclusions, and in 1976 in response to complaints the FDA required Simeons and others to include the following disclaimer on all advertisements:

These weight reduction treatments include the injection of HCG, a drug which has not been approved by the Food and Drug Administration as safe and effective in the treatment of obesity or weight control. There is no substantial evidence that HCG increases weight loss beyond that resulting from caloric restriction, that it causes a more attractive or "normal" distribution of fat, or that it decreases the hunger and discomfort associated with calorie-restrictive diets.

— 1976 FDA-mandated disclaimer for HCG diet advertisements

There was a resurgence of interest in the "HCG diet" following promotion by Kevin Trudeau, who was banned from making HCG diet weight-loss claims by the U.S. Federal Trade Commission in 2008, and eventually jailed over such claims.

A 1976 study in the American Journal of Clinical Nutrition concluded that HCG is not more effective as a weight-loss aid than dietary restriction alone.

A 1995 meta-analysis found that studies supporting HCG for weight loss were of poor methodological quality and concluded that "there is no scientific evidence that HCG is effective in the treatment of obesity; it does not bring about weight-loss or fat-redistribution, nor does it reduce hunger or induce a feeling of well-being".

On November 15, 2016, the American Medical Association (AMA) passed policy that "The use of human chorionic gonadotropin (HCG) for weight loss is inappropriate."

There is no scientific evidence that HCG is effective in the treatment of obesity. The meta-analysis found insufficient evidence supporting the claims that HCG is effective in altering fat distribution, reducing hunger, or inducing a feeling of well-being. The authors stated "…the use of HCG should be regarded as an inappropriate therapy for weight reduction…" In the authors' opinion, "Pharmacists and physicians should be alert on the use of HCG for Simeons therapy. The results of this meta-analysis support a firm standpoint against this improper indication. Restraints on physicians practicing this therapy can be based on our findings."

— American Society of Bariatric Physicians' commentary on Lijesen et al. (1995)

According to the American Society of Bariatric Physicians, no new clinical trials have been published since the definitive 1995 meta-analysis.

The scientific consensus is that any weight loss reported by individuals on an "HCG diet" may be attributed entirely to the fact that such diets prescribe calorie intake of between 500 and 1,000 calories per day, substantially below recommended levels for an adult, to the point that this may risk health effects associated with malnutrition.

Homeopathic HCG for weight control

Controversy about, and shortages of, injected HCG for weight loss have led to substantial Internet promotion of "homeopathic HCG" for weight control. The ingredients in these products are often obscure but, if prepared from true HCG via homeopathic dilution, they contain either no HCG at all or only trace amounts. Moreover, it is highly unlikely that oral HCG is bioavailable, because digestive protease enzymes and hepatic metabolism render peptide-based molecules (such as insulin and human growth hormone) biologically inert. HCG can likely only enter the bloodstream through injection.

The United States Food and Drug Administration has stated that over-the-counter products containing HCG are fraudulent and ineffective for weight loss. They are also not protected as homeopathic drugs and have been deemed illegal substances. HCG is classified as a prescription drug in the United States and it has not been approved for over-the-counter sales by the FDA as a weight loss product or for any other purposes, and therefore neither HCG in its pure form nor any preparations containing HCG may be sold legally in the country except by prescription. In December 2011, FDA and FTC started to take actions to pull unapproved HCG products from the market. In the aftermath, some suppliers started to switch to "hormone-free" versions of their weight loss products, where the hormone is replaced with an unproven mixture of free amino acids or where radionics is used to transfer the "energy" to the final product.

Tetanus vaccine conspiracy theory

Catholic Bishops in Kenya are among those who have spread a conspiracy theory asserting that HCG forms part of a covert sterilization program, forcing denials from the Kenyan government.

In order to induce a stronger immune response, some versions of human chorionic gonadotropin-based anti-fertility vaccines were designed as conjugates of the β subunit of HCG covalently linked to tetanus toxoid. It was alleged that a non-conjugated tetanus vaccine used in developing countries was laced with a human chorionic gonadotropin-based anti-fertility drug and was distributed as a means of mass sterilization. This charge has been vigorously denied by the World Health Organization (WHO) and UNICEF. Others have argued that a hCG-laced vaccine could not possibly be used for sterilization, since the effects of the anti-fertility vaccines are reversible (requiring booster doses to maintain infertility) and a non-conjugated vaccine is likely to be ineffective. Finally, independent testing of the tetanus vaccine by Kenya's health authorities revealed no traces of the human chorionic gonadotropin hormone.

Evolution

From Wikipedia, the free encyclopedia
 
In biology, evolution is the change in heritable characteristics of biological populations over successive generations. These characteristics are the expressions of genes, which are passed on from parent to offspring during reproduction. Genetic variation tends to exist within any given population as a result of genetic mutation and recombination. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on this variation, resulting in certain characteristics becoming more or less common within a population over successive generations. It is this process of evolution that has given rise to biodiversity at every level of biological organisation.

The theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century and was set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour (phenotypic variation); (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics. In the early 20th century, other competing ideas of evolution such as mutationism and orthogenesis were refuted as the modern synthesis concluded Darwinian evolution acts on Mendelian genetic variation.

All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits are more similar among species that share a more recent common ancestor, and these traits can be used to reconstruct phylogenetic trees.
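As a toy illustration of that last point, the sketch below clusters four hypothetical species using a small, invented matrix of trait distances; average-linkage clustering (the UPGMA approach) groups the most similar species first, mirroring how shared recent ancestry yields similar traits. Real phylogenetic reconstruction uses far richer data and models.

```python
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

species = ["A", "B", "C", "D"]
# Pairwise trait distances: smaller = more similar (invented values).
distances = [[0.0, 0.2, 0.6, 0.7],
             [0.2, 0.0, 0.5, 0.6],
             [0.6, 0.5, 0.0, 0.3],
             [0.7, 0.6, 0.3, 0.0]]

# Average linkage (UPGMA) on the condensed distance matrix.
tree = linkage(squareform(distances), method="average")

# Leaf order of the resulting tree: A and B pair up, as do C and D.
print(dendrogram(tree, labels=species, no_plot=True)["ivl"])
```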

Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but numerous other scientific and industrial fields, including agriculture, medicine, and computer science.

Heredity

DNA structure. Bases are in the centre, surrounded by phosphate–sugar chains in a double helix.

Evolution in organisms occurs through changes in heritable traits—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.

The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in genotype; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.

Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by quantitative trait loci (multiple interacting genes).

Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.

Sources of variation

Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is identical in all individuals of that species. However, even relatively small differences in genotype can lead to dramatic differences in phenotype: for example, chimpanzees and humans differ in only about 5% of their genomes.

An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. A substantial part of the phenotypic variation in a population is caused by genotypic variation. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. One particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.

Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variation would be rapidly lost, making evolution by natural selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. The frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift.
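
The principle is easy to verify directly. The following is a minimal sketch in Python; the starting allele frequency of 0.7 is an arbitrary illustrative choice.

    # Hardy-Weinberg: allele frequencies remain constant under random
    # mating in the absence of selection, mutation, migration and drift.
    p = 0.7          # frequency of allele A (arbitrary example value)
    q = 1.0 - p      # frequency of allele a

    # Genotype frequencies after one round of random mating:
    AA, Aa, aa = p * p, 2 * p * q, q * q
    print(AA, Aa, aa)        # 0.49 0.42 0.09 (sums to 1)

    # Allele frequency in the next generation: each AA individual carries
    # two A alleles and each Aa carries one, so p' = AA + Aa/2.
    p_next = AA + Aa / 2
    print(p_next)            # 0.7 -- unchanged, as the principle states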

Mutation

Duplication of part of a chromosome

Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. Based on studies in the fly Drosophila melanogaster, it has been suggested that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70% of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.

Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.

New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth.

The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line.

An example of the phenotypic effect of mutation is seen in wild boar piglets, which are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. Mutations in the melanocortin 1 receptor (MC1R), however, disrupt this pattern. The majority of pig breeds carry MC1R mutations that disrupt the wild-type colour, with different mutations causing the dominant black colour of domestic pigs.

Sex and recombination

In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.

This diagram illustrates the twofold cost of sex. If each individual were to contribute the same number of offspring (two), (a) the sexual population remains the same size each generation, while (b) the asexual population doubles in size each generation.

The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial.
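
The arithmetic of the twofold cost can be sketched directly; in this toy Python model the brood size of two and the starting counts are assumptions for illustration only.

    # Toy model of the twofold cost of sex. Assume every female produces
    # two offspring; in the sexual population half of them are males who
    # bear no young, while the asexual (all-female) lineage doubles.
    sexual_females, asexual_females = 100, 100
    for generation in range(1, 4):
        sexual_females = (sexual_females * 2) // 2   # half the brood are males
        asexual_females = asexual_females * 2        # every offspring reproduces
        print(generation, sexual_females, asexual_females)
    # generation 1: 100 vs 200; generation 2: 100 vs 400; generation 3: 100 vs 800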

Gene flow

Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.

Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfer is the bdelloid rotifers, eukaryotes which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.

Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.

Evolutionary processes

Mutation followed by natural selection results in a population with darker colouration.

From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, gene flow and mutation bias.

Natural selection

Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It embodies three principles:

  • Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation).
  • Different traits confer different rates of survival and reproduction (differential fitness).
  • These traits can be passed from generation to generation (heritability of fitness).

More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking.

The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.

If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. Such traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele becoming rarer—it is "selected against." Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. Even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. However, the re-activation of dormant genes that have not been eliminated from the genome, only suppressed for perhaps hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms.
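
How an allele that is "selected for" spreads can be made concrete with the standard one-locus, two-allele model. The sketch below is illustrative: the fitness values and starting frequency are arbitrary assumptions, not measurements.

    # One-locus selection model: allele A rises in frequency because
    # genotypes carrying it are fitter. Fitness values are invented.
    w_AA, w_Aa, w_aa = 1.00, 0.95, 0.90

    p = 0.01                  # initial frequency of allele A
    for generation in range(100):
        q = 1.0 - p
        w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # mean fitness
        p = (p*p*w_AA + p*q*w_Aa) / w_bar          # standard recursion
    print(round(p, 2))        # ~0.6: A has gone from 1% to a majority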

These charts depict the different types of genetic selection. On each graph, the x-axis variable is the type of phenotypic trait and the y-axis variable is the number of organisms. Group A is the original population and Group B is the population after selection.
  • Graph 1 shows directional selection, in which a single extreme phenotype is favoured.
  • Graph 2 depicts stabilizing selection, where the intermediate phenotype is favoured over the extreme traits.
  • Graph 3 shows disruptive selection, in which the extreme phenotypes are favoured over the intermediate.

Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height.

Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.

Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation.

Genetic hitchhiking

Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft, the effect on neutral alleles of being genetically linked to others that are under selection, can be partially captured by an appropriate effective population size.
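
Linkage disequilibrium has a simple operational definition: D = p(AB) − p(A)p(B), the excess with which two alleles co-occur on one chromosome over the random expectation. A small Python sketch, using made-up haplotype counts:

    # Linkage disequilibrium from haplotype counts (invented numbers).
    counts = {"AB": 50, "Ab": 10, "aB": 10, "ab": 30}
    n = sum(counts.values())

    p_AB = counts["AB"] / n                  # frequency of the AB haplotype
    p_A = (counts["AB"] + counts["Ab"]) / n  # frequency of allele A
    p_B = (counts["AB"] + counts["aB"]) / n  # frequency of allele B

    D = p_AB - p_A * p_B
    print(D)   # ~0.14: A and B co-occur more often than chance predicts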

Sexual selection

Male moor frogs become blue during the height of mating season. Blue reflectance may be a form of intersexual communication. It is hypothesized that males with brighter blue coloration may signal greater sexual and genetic fitness.

A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.

Genetic drift

Simulation of genetic drift of 20 unlinked alleles in populations of 10 (top) and 100 (bottom). Drift to fixation is more rapid in the smaller population.

Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles.
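
A simplified, single-allele version of the simulations shown in the figure can be written with the standard Wright–Fisher sampling scheme; the population sizes and random seed below are illustrative choices.

    # Wright-Fisher drift: the allele frequency wanders by sampling
    # error alone and reaches fixation faster in smaller populations.
    import random

    def generations_to_fixation(N, p=0.5, seed=1):
        rng = random.Random(seed)
        generations = 0
        while 0.0 < p < 1.0:
            # Each of the 2N gene copies in the next generation is an
            # independent draw from the current allele frequency.
            copies = sum(rng.random() < p for _ in range(2 * N))
            p = copies / (2 * N)
            generations += 1
        return generations

    print(generations_to_fixation(10))    # small population: fixes quickly
    print(generations_to_fixation(100))   # large population: much slower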

According to the neutral theory of molecular evolution, most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
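
The neutral theory's central quantitative claim follows from a one-line calculation: in a diploid population of size N with neutral mutation rate μ per generation, about 2Nμ new neutral mutations arise each generation and each fixes by drift with probability 1/(2N), so the substitution rate k is independent of population size:

    k = 2N\mu \times \frac{1}{2N} = \mu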

The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. What matters is not the raw number of individuals in the population, but a measure known as the effective population size. The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.
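
One standard way to see why bottlenecks matter: over t generations of fluctuating census size, the effective size is approximately the harmonic mean of the per-generation sizes, which is dominated by the smallest values. A sketch with invented census counts:

    # Effective population size as the harmonic mean of census sizes;
    # a single bottleneck dominates. Census numbers are invented.
    census = [1000, 1000, 10, 1000, 1000]
    t = len(census)
    N_e = t / sum(1.0 / N for N in census)
    print(round(N_e, 1))   # ~48.1, far below the arithmetic mean of 802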

It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.

Gene flow

Gene flow involves the exchange of genes between populations and between species. The presence or absence of gene flow fundamentally changes the course of evolution. Due to the complexity of organisms, any two completely isolated populations will eventually evolve genetic incompatibilities through neutral processes, as in the Bateson-Dobzhansky-Muller model, even if both populations remain essentially identical in terms of their adaptation to the environment.

If genetic differentiation between populations develops, gene flow between populations can introduce traits or alleles which are disadvantageous in the local population and this may lead to organisms within these populations evolving mechanisms that prevent mating with genetically distant populations, eventually resulting in the appearance of new species. Thus, exchange of genetic information between individuals is fundamentally important for the development of the Biological Species Concept.

During the development of the modern synthesis, Sewall Wright developed his shifting balance theory, which regarded gene flow between partially isolated populations as an important aspect of adaptive evolution. However, recently there has been substantial criticism of the importance of the shifting balance theory.

Mutation bias

Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution.

Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
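
Sueoka's argument can be phrased as a short calculation: if u is the per-site rate of AT→GC mutation and v the rate of GC→AT, then under mutation pressure alone the genomic GC content settles at u/(u + v). The rates below are illustrative assumptions, not measured values.

    # Equilibrium GC content under mutation pressure alone (Sueoka).
    u = 2e-9   # AT -> GC mutations per site per generation (assumed)
    v = 6e-9   # GC -> AT mutations per site per generation (assumed)
    print(u / (u + v))   # 0.25: AT-biased mutation pushes GC content down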

For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.

However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation.

Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates.

Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation.

Applications

Concepts and models used in evolutionary biology, such as natural selection, have many applications.

Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.

Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.

Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.

In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.
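
As a sketch of the loop these methods share, variation followed by selection and reproduction, here is a minimal genetic algorithm in Python; the all-ones target, population size and mutation rate are arbitrary choices for illustration, not any particular published system.

    # Minimal genetic algorithm: evolve bit strings toward all ones.
    import random

    rng = random.Random(0)
    GENOME_LEN, POP_SIZE, MUT_RATE = 20, 30, 0.02
    fitness = lambda genome: sum(genome)       # number of 1-bits

    population = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for generation in range(100):
        # Selection: keep the fitter half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # Reproduction: one-point crossover plus per-bit mutation.
        population = []
        while len(population) < POP_SIZE:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(GENOME_LEN)
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < MUT_RATE) for bit in child]
            population.append(child)
    print(fitness(max(population, key=fitness)))   # approaches 20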

Natural outcomes

Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction; whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.

A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size, and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.

Adaptation

Homologous bones in the limbs of tetrapods. The bones of these animals have the same basic structure, but have been adapted for specific uses.

Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival; an example is the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky:

  1. Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
  2. Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
  3. An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.

Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance both by modifying the target of the drug and by increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).

A baleen whale skeleton. Letters a and b label flipper bones, which were adapted from front leg bones, while c indicates vestigial leg bones, both suggesting an adaptation from land to sea.

Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mouse feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.

During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.

However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.

An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.

Coevolution

The common garter snake has evolved resistance to the defensive substance tetrodotoxin in its amphibian prey.

Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.

Cooperation

Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.

Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.

Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.

Speciation

The four geographic modes of speciation

Speciation is the process where a species diverges into two or more descendant species.

There are multiple ways to define the concept of "species." The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of various species concepts, these various concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy: for example, these concepts cannot be applied to prokaryotes; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.

Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations as well. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.

Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.

The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect can cause rapid speciation: increased inbreeding raises selection on homozygotes, leading to rapid genetic change.

The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.

Geographical isolation of finches on the Galápagos Islands produced over a dozen new species.

Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.

One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.

Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.

Extinction

Tyrannosaurus rex. Non-avian dinosaurs died out in the Cretaceous–Paleogene extinction event at the end of the Cretaceous period.

Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently with only one-thousandth of 1% described.

The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.

Evolutionary history of life

Origin of life

The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote, "If life arose relatively quickly on Earth, then it could be common in the universe." In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.

More than 99% of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.

Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.

Common descent

All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.

The hominoids are descendants of a common ancestor.

Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.

Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.

More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.
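
The molecular-clock dating mentioned here reduces to a short formula: if substitutions accumulate at rate μ per site per year along each lineage, two lineages diverge at a combined rate of 2μ, so the time since their common ancestor is t = d/(2μ). The numbers below are illustrative assumptions, not the published human–chimpanzee estimates.

    # Molecular clock: t = d / (2 * mu), since mutations accumulate
    # along both diverging lineages. Values are illustrative only.
    d = 0.012    # observed fraction of sites that differ (assumed)
    mu = 1e-9    # substitutions per site per year (assumed rate)
    t = d / (2 * mu)
    print(t)     # ~6.0 million years since the common ancestor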

Evolution of life

Evolutionary tree showing the divergence of modern species from their common ancestor in the centre. The three domains are coloured, with bacteria blue, archaea green and eukaryotes red.

Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. The eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.

The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single-celled organism to one of many cells.

Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.

About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.

History of evolutionary thought

Lucretius
Alfred Russel Wallace
Thomas Robert Malthus
In 1842, Charles Darwin penned his first sketch of On the Origin of Species.

Classical antiquity

The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things).

Middle Ages

In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.

A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".

Pre-Darwinian

The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.

Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.

Darwinian revolution

The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which Charles Darwin and Alfred Russel Wallace formulated in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished: in each generation, many offspring fail to survive to reproductive age because resources are limited. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes share a common ancestry. Some were disturbed by this since it implied that humans do not have a special place in the universe.
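A rough sketch of the Malthusian arithmetic behind the "struggle for existence" (the numbers here are hypothetical, chosen only for illustration): if a population multiplies by a factor $r$ each generation, it grows geometrically as

$$N_t = N_0 \, r^t,$$

so a starting population of $N_0 = 100$ doubling each generation ($r = 2$) would reach $N_{10} = 100 \times 2^{10} = 102{,}400$ individuals after only ten generations. Since resources cannot grow at anything like this rate, most offspring in every generation must fail to survive or reproduce, and any heritable variation that improves the odds of survival will tend to spread.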

Pangenesis and heredity

The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and, when expressed, could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory, which led to a temporary rift between the Mendelians, who allied with de Vries, and the biometricians, who defended Darwinian gradual selection. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, placed evolutionary theory on a robust statistical foundation. The apparent contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved.
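A minimal sketch of the kind of one-locus calculation with which Fisher, Wright and Haldane connected Mendelian inheritance to Darwinian selection (the notation is illustrative, not taken from their papers): for alleles $A$ and $a$ at frequencies $p$ and $q = 1 - p$, random mating gives genotype frequencies $p^2$, $2pq$ and $q^2$. If the genotypes have relative fitnesses $w_{AA}$, $w_{Aa}$ and $w_{aa}$, the frequency of $A$ in the next generation is

$$p' = \frac{p^2 w_{AA} + pq\, w_{Aa}}{\bar{w}}, \qquad \bar{w} = p^2 w_{AA} + 2pq\, w_{Aa} + q^2 w_{aa}.$$

Even though inheritance is particulate (Mendelian), allele frequencies shift smoothly and gradually under selection, which is exactly the reconciliation the modern synthesis rested on.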

The 'modern synthesis'

In the 1920s and 1930s, the so-called modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that applied generally to any branch of biology. It explained patterns observed across species, both in living populations and in the fossil transitions studied in palaeontology.

Further syntheses

Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.

The publication of the structure of DNA by James Watson and Francis Crick, with contributions from Rosalind Franklin, in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution," because it brings what at first seem disjointed facts of natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.

One extension, known as evolutionary developmental biology and informally called "evo-devo," emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.

Social and cultural responses

As evolution became widely accepted in the 1870s, caricatures of Charles Darwin with an ape or monkey body symbolised evolution.

In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists.[259] However, evolution remains a contentious concept for some theists.

While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.

The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism has been legally disallowed in secondary school curricula by various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), only to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.
