Saturday, May 12, 2018

Intelligence quotient

From Wikipedia, the free encyclopedia

[Image: An example of one kind of IQ test item, modeled after items in the Raven's Progressive Matrices test]
ICD-10-PCS: Z01.8
ICD-9-CM: 94.01

An intelligence quotient (IQ) is a total score derived from several standardized tests designed to assess human intelligence. The abbreviation "IQ" was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests at the University of Breslau which he advocated in a 1912 book.[1] Historically, IQ is a score obtained by dividing a person's mental age score, obtained by administering an intelligence test, by the person's chronological age, both expressed in terms of years and months. The resulting fraction is multiplied by 100 to obtain the IQ score.[2] In current IQ tests, the median raw score of the norming sample is defined as IQ 100, and each standard deviation (SD) above or below corresponds to 15 IQ points more or less,[3] although this was not always so historically. By this definition, approximately two-thirds of the population scores are between IQ 85 and IQ 115. About 2.5 percent of the population scores above 130, and 2.5 percent below 70.[4][5]
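
The quoted proportions follow directly from a normal model with mean 100 and SD 15. As a rough illustrative check (not part of the cited sources), the cumulative normal distribution gives the share of the population expected in each range:

```python
# Illustrative sketch: population shares implied by an IQ scale with
# mean 100 and SD 15 under a normal model (theoretical values, not
# empirical norming data).
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

share_85_to_115 = iq.cdf(115) - iq.cdf(85)   # within one SD of the mean
share_above_130 = 1 - iq.cdf(130)            # more than two SDs above
share_below_70 = iq.cdf(70)                  # more than two SDs below

print(f"IQ 85-115: {share_85_to_115:.1%}")   # ~68.3%, i.e. about two-thirds
print(f"IQ > 130 : {share_above_130:.1%}")   # ~2.3%, often rounded to 2.5%
print(f"IQ < 70  : {share_below_70:.1%}")    # ~2.3%
```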

Scores from intelligence tests are estimates of intelligence. Unlike, for example, distance and mass, a concrete measure of intelligence cannot be achieved given the abstract nature of the concept of "intelligence".[6] IQ scores have been shown to be associated with such factors as morbidity and mortality,[7][8] parental social status,[9] and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[10][11] and the mechanisms of inheritance.[12]

IQ scores are used for educational placement, assessment of intellectual disability, and evaluating job applicants. Even when students improve their scores on standardized tests, they do not always improve their cognitive abilities, such as memory, attention, and speed.[13] In research contexts they have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate of about three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.

History

Precursors to IQ testing

Historically, even before IQ tests were devised, there were attempts to classify people into intelligence categories by observing their behavior in daily life.[14][15] Those other forms of behavioral observation are still important for validating classifications based primarily on IQ test scores. Both intelligence classification by observation of behavior outside the testing room and classification by IQ testing depend on the definition of "intelligence" used in a particular case and on the reliability and error of estimation in the classification procedure.[citation needed]

The English statistician Francis Galton made the first attempt at creating a standardized test for rating a person's intelligence. A pioneer of psychometrics and the application of statistical methods to the study of human diversity and the study of inheritance of human traits, he believed that intelligence was largely a product of heredity (by which he did not mean genes, although he did develop several pre-Mendelian theories of particulate inheritance).[16][17][18] He hypothesized that there should exist a correlation between intelligence and other observable traits such as reflexes, muscle grip, and head size.[19] He set up the first mental testing centre in the world in 1882 and he published "Inquiries into Human Faculty and Its Development" in 1883, in which he set out his theories. After gathering data on a variety of physical variables, he was unable to show any such correlation, and he eventually abandoned this research.[20][21]

French psychologist Alfred Binet was one of the key developers of what later became known as the Stanford–Binet test.

French psychologist Alfred Binet, together with Victor Henri and Théodore Simon, had more success in 1905, when they published the Binet-Simon test, which focused on verbal abilities. It was intended to identify mental retardation in school children,[22] in specific contradistinction to claims made by psychiatrists that these children were "sick" (not "slow") and should therefore be removed from school and cared for in asylums.[23] The score on the Binet-Simon scale would reveal the child's mental age. For example, a six-year-old child who passed all the tasks usually passed by six-year-olds—but nothing beyond—would have a mental age that matched his chronological age, 6.0 (Fancher, 1985). Binet thought that intelligence was multifaceted, but that it came under the control of practical judgment.

In Binet's view, there were limitations with the scale and he stressed what he saw as the remarkable diversity of intelligence and the subsequent need to study it using qualitative, as opposed to quantitative, measures (White, 2000). American psychologist Henry H. Goddard published a translation of it in 1910. American psychologist Lewis Terman at Stanford University revised the Binet-Simon scale, which resulted in the Stanford-Binet Intelligence Scales (1916). It became the most popular test in the United States for decades.[22][24][25][26]

General factor (g)

The many different kinds of IQ tests include a wide variety of item content. Some test items are visual, while many are verbal. Test items vary from being based on abstract-reasoning problems to concentrating on arithmetic, vocabulary, or general knowledge.
The British psychologist Charles Spearman in 1904 made the first formal factor analysis of correlations between the tests. He observed that children's school grades across seemingly unrelated school subjects were positively correlated, and reasoned that these correlations reflected the influence of an underlying general mental ability that entered into performance on all kinds of mental tests. He suggested that all mental performance could be conceptualized in terms of a single general ability factor and a large number of narrow task-specific ability factors. Spearman named it g for "general factor" and labeled the specific factors or abilities for specific tasks s. In any collection of test items that make up an IQ test, the score that best measures g is the composite score that has the highest correlations with all the item scores. Typically, the "g-loaded" composite score of an IQ test battery appears to involve a common strength in abstract reasoning across the test's item content. Therefore, Spearman and others have regarded g as closely related to the essence of human intelligence.[citation needed]
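
A common way to see numerically what Spearman's g captures is to factor-analyze the correlation matrix of subtest scores. The following sketch uses a principal-component shortcut (the first eigenvector of a hypothetical correlation matrix) as a stand-in for a full factor analysis; the matrix values and subtest names are illustrative assumptions, not data from any published battery:

```python
# Minimal sketch of extracting a "general factor" from subtest correlations.
# The correlation matrix below is invented for illustration; a real analysis
# would apply a proper factor-analysis routine to observed test data.
import numpy as np

subtests = ["vocabulary", "arithmetic", "matrices", "digit span"]
R = np.array([
    [1.00, 0.55, 0.50, 0.40],
    [0.55, 1.00, 0.45, 0.45],
    [0.50, 0.45, 1.00, 0.35],
    [0.40, 0.45, 0.35, 1.00],
])

# First principal component of the correlation matrix: the eigenvector with
# the largest eigenvalue plays the role of g in this simplified picture.
eigenvalues, eigenvectors = np.linalg.eigh(R)
first = eigenvectors[:, np.argmax(eigenvalues)]
loadings = np.sign(first.sum()) * first * np.sqrt(eigenvalues.max())

for name, loading in zip(subtests, loadings):
    print(f"{name:>10}: g-loading ~ {loading:.2f}")

# The composite that weights subtests by these loadings is, roughly, the
# score most highly correlated with every individual subtest score.
```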

Spearman's argument proposing a general factor of human intelligence is still accepted in principle by many psychometricians. Today's factor models of intelligence typically represent cognitive abilities as a three-level hierarchy, where there are a large number of narrow factors at the bottom of the hierarchy, a handful of broad, more general factors at the intermediate level, and at the apex a single factor, referred to as the g factor, which represents the variance common to all cognitive tasks. However, this view is not universally accepted; other factor analyses of the data, with different results, are possible. Some psychometricians regard g as a statistical artifact.[citation needed]

United States military selection in World War I

During World War I, a way was needed to evaluate and assign Army recruits to appropriate tasks. This led to the development of several mental tests by Robert Yerkes, who worked with major hereditarians of American psychometrics, including Terman and Goddard, to write the tests.[27] The testing generated controversy and much public debate in the United States. Nonverbal or "performance" tests were developed for those who could not speak English or were suspected of malingering.[22] Based on Goddard's translation of the Binet-Simon test, the tests had an impact in screening men for officer training:
...the tests did have a strong impact in some areas, particularly in screening men for officer training. At the start of the war, the army and national guard maintained nine thousand officers. By the end, two hundred thousand officers presided, and two-thirds of them had started their careers in training camps where the tests were applied. In some camps, no man scoring below C could be considered for officer training.[27]
In total, 1.75 million men were tested, making these the first mass-administered written tests of intelligence, though the results are considered dubious and non-usable, for reasons including high variability in how the tests were administered across camps and questions that tested familiarity with American culture rather than intelligence.[27] After the war, positive publicity promoted by army psychologists helped to make psychology a respected field.[28] Subsequently, there was an increase in jobs and funding in psychology in the United States.[29] Group intelligence tests were developed and became widely used in schools and industry.[30]

The results of these tests, which at the time reaffirmed contemporary racism and nationalism, are considered controversial and dubious, having rested on certain contested assumptions: that intelligence was heritable, innate, and reducible to a single number; that the tests were administered systematically; and that test questions actually tested for innate intelligence rather than reflecting environmental factors.[27] The tests also bolstered jingoist narratives in the context of increased immigration, which may have influenced the passing of the Immigration Restriction Act of 1924.[27]

L.L. Thurstone argued for a model of intelligence comprising seven unrelated factors (verbal comprehension, word fluency, number facility, spatial visualization, associative memory, perceptual speed, and inductive reasoning). While not widely used, Thurstone's model influenced later theories.[22]

David Wechsler produced the first version of his test in 1939. It gradually became more popular and overtook the Stanford-Binet in the 1960s. It has been revised several times, as is common for IQ tests, to incorporate new research. One explanation is that psychologists and educators wanted more information than the single score from the Binet. Wechsler's ten or more subtests provided this. Another is that the Stanford-Binet test reflected mostly verbal abilities, while the Wechsler test also reflected nonverbal abilities. The Stanford-Binet has also been revised several times and is now similar to the Wechsler in several aspects, but the Wechsler continues to be the most popular test in the United States.[22]

IQ testing and the Eugenics movement in the United States

Eugenics refers to the application of principles of heredity to the supposed improvement of the human race. Francis Galton first used the term in the late 1800s.[31] The eugenics movement was popular in the US in the 1920s and 1930s.

Goddard was a eugenicist. In 1908, he published his own version, "The Binet and Simon Test of Intellectual Capacity", and actively promoted the test. He quickly extended the use of the scale to the public schools (1913), to immigration (Ellis Island, 1914) and to a court of law (1914).[32]

Unlike Galton, who promoted eugenics through selective breeding for positive traits, Goddard aligned himself with the US eugenics movement's goal of eliminating "undesirable" traits.[33] Goddard used the term "feeblemindedness" to refer to people who performed poorly on the test and whom he regarded as intellectually inferior. He argued that "feeblemindedness" was caused by heredity, and thus that feebleminded people should be prevented from giving birth, either by institutional isolation or by sterilization.[32] At first sterilization targeted the disabled, but it was later extended to poor people. Goddard's intelligence test was endorsed by eugenicists to push for laws mandating forced sterilization. Different states adopted sterilization laws at different paces, and these laws ultimately led to the forced sterilization of over 64,000 people in the United States.[34]

Notably, California's sterilization program was so effective that the Nazis turned to the state's government for advice on preventing the birth of the "unfit".[35] The US eugenics movement lost its momentum in the 1940s and was halted by the horrors of Nazi Germany.

Cattell–Horn–Carroll theory

Psychologist Raymond Cattell defined fluid and crystallized intelligence and authored the Cattell Culture Fair III IQ test.

Raymond Cattell (1941) proposed two types of cognitive abilities in a revision of Spearman's concept of general intelligence. Fluid intelligence (Gf) was hypothesized as the ability to solve novel problems by using reasoning, and crystallized intelligence (Gc) was hypothesized as a knowledge-based ability that was very dependent on education and experience. In addition, fluid intelligence was hypothesized to decline with age, while crystallized intelligence was largely resistant to the effects of aging. The theory was almost forgotten, but was revived by his student John L. Horn (1966) who later argued Gf and Gc were only two among several factors, and who eventually identified nine or ten broad abilities. The theory continued to be called Gf-Gc theory.[22]

John B. Carroll (1993), after a comprehensive reanalysis of earlier data, proposed the three stratum theory, which is a hierarchical model with three levels. The bottom stratum consists of narrow abilities that are highly specialized (e.g., induction, spelling ability). The second stratum consists of broad abilities. Carroll identified eight second-stratum abilities. Carroll accepted Spearman's concept of general intelligence, for the most part, as a representation of the uppermost, third stratum.[36][37]

In 1999, a merging of the Gf-Gc theory of Cattell and Horn with Carroll's three-stratum theory led to the Cattell–Horn–Carroll theory (CHC theory). It has greatly influenced many of the current broad IQ tests.[22]

In CHC theory, a hierarchy of factors is used; g is at the top. Under it are ten broad abilities that in turn are subdivided into seventy narrow abilities. The broad abilities are:[22]
  • Fluid intelligence (Gf) includes the broad ability to reason, form concepts, and solve problems using unfamiliar information or novel procedures.
  • Crystallized intelligence (Gc) includes the breadth and depth of a person's acquired knowledge, the ability to communicate one's knowledge, and the ability to reason using previously learned experiences or procedures.
  • Quantitative reasoning (Gq) is the ability to comprehend quantitative concepts and relationships and to manipulate numerical symbols.
  • Reading and writing ability (Grw) includes basic reading and writing skills.
  • Short-term memory (Gsm) is the ability to apprehend and hold information in immediate awareness, and then use it within a few seconds.
  • Long-term storage and retrieval (Glr) is the ability to store information and fluently retrieve it later in the process of thinking.
  • Visual processing (Gv) is the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual representations.
  • Auditory processing (Ga) is the ability to analyze, synthesize, and discriminate auditory stimuli, including the ability to process and discriminate speech sounds that may be presented under distorted conditions.
  • Processing speed (Gs) is the ability to perform automatic cognitive tasks, particularly when measured under pressure to maintain focused attention.
  • Decision/reaction time/speed (Gt) reflects the immediacy with which an individual can react to stimuli or a task (typically measured in seconds or fractions of seconds; it is not to be confused with Gs, which typically is measured in intervals of 2–3 minutes). See Mental chronometry.
Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement and not IQ.[22] Gt may be difficult to measure without special equipment. g was earlier often subdivided into only Gf and Gc, which were thought to correspond to the nonverbal or performance subtests and verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.[22] Modern comprehensive IQ tests do not stop at reporting a single IQ score. Although they still give an overall score, they now also give scores for many of these more restricted abilities, identifying particular strengths and weaknesses of an individual.[22]
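
One way to picture the CHC model described above is as a tree with g at the root, the broad abilities beneath it, and narrow abilities under each broad one. The sketch below encodes a small, illustrative slice of that tree; the narrow-ability examples are placeholders chosen for illustration, not the official CHC taxonomy:

```python
# Illustrative slice of the CHC hierarchy: g -> broad abilities -> narrow abilities.
# Narrow-ability examples are placeholders, not an authoritative CHC listing.
chc_hierarchy = {
    "g": {
        "Gf (fluid intelligence)": ["induction", "sequential reasoning"],
        "Gc (crystallized intelligence)": ["vocabulary knowledge", "general information"],
        "Gsm (short-term memory)": ["memory span"],
        "Gv (visual processing)": ["visualization", "spatial relations"],
        "Gs (processing speed)": ["perceptual speed"],
    }
}

def print_tree(node, indent=0):
    """Walk the nested dict/list structure and print it as an indented tree."""
    if isinstance(node, dict):
        for key, child in node.items():
            print(" " * indent + key)
            print_tree(child, indent + 2)
    else:  # a list of narrow abilities at the leaves
        for leaf in node:
            print(" " * indent + "- " + leaf)

print_tree(chc_hierarchy)
```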

Other theories

J.P. Guilford's Structure of Intellect (1967) model used three dimensions which when combined yielded a total of 120 types of intelligence. It was popular in the 1970s and early 1980s, but faded owing to both practical problems and theoretical criticisms.[22]

Alexander Luria's earlier work on neuropsychological processes led to the PASS theory (1997). It argued that only looking at one general factor was inadequate for researchers and clinicians who worked with learning disabilities, attention disorders, intellectual disability, and interventions for such disabilities. The PASS model covers four kinds of processes (planning process, attention/arousal process, simultaneous processing, and successive processing). The planning process involves decision making, problem solving, and performing activities, and requires goal setting and self-monitoring.

The attention/arousal process involves selectively attending to a particular stimulus, ignoring distractions, and maintaining vigilance. Simultaneous processing involves the integration of stimuli into a group and requires the observation of relationships. Successive processing involves the integration of stimuli into serial order. The planning and attention/arousal components come from structures located in the frontal lobe, and the simultaneous and successive processes come from structures located in the posterior region of the cortex.[38][39][40] The PASS theory has influenced some recent IQ tests, and has been seen as a complement to the Cattell-Horn-Carroll theory described above.[22]

Current tests

Normalized IQ distribution with mean 100 and standard deviation 15.

There are a variety of individually administered IQ tests in use in the English-speaking world.[41][42] The most commonly used individual IQ test series is the Wechsler Adult Intelligence Scale for adults and the Wechsler Intelligence Scale for Children for school-age test-takers. Other commonly used individual IQ tests (some of which do not label their standard scores as "IQ" scores) include the current versions of the Stanford-Binet Intelligence Scales, Woodcock-Johnson Tests of Cognitive Abilities, the Kaufman Assessment Battery for Children, the Cognitive Assessment System, and the Differential Ability Scales.

Other IQ tests include:
  1. Raven's Progressive Matrices
  2. Cattell Culture Fair III
  3. Reynolds Intellectual Assessment Scales
  4. Thurstone's Primary Mental Abilities [43][44]
  5. Kaufman Brief Intelligence Test[45]
  6. Multidimensional Aptitude Battery II
  7. Das–Naglieri cognitive assessment system
IQ scales are ordinally scaled.[46][47][48][49][50] While one standard deviation is 15 points, and two SDs are 30 points, and so on, this does not imply that mental ability is linearly related to IQ, such that IQ 50 means half the cognitive ability of IQ 100. In particular, IQ points are not percentage points.

On a related note, this fixed standard deviation means that the proportion of the population who have IQs in a particular range is theoretically fixed, and current Wechsler tests only give Full Scale IQs between 40 and 160. This should be borne in mind when considering reports of people with much higher IQs.[51][52]

Test bias or differential item functioning

Differential item functioning (DIF), sometimes referred to as measurement bias, is a phenomenon in which participants from different groups (e.g. gender, race, disability) with the same latent abilities give different answers to specific questions on the same IQ test.[53] DIF analysis examines such specific items while also measuring participants' latent abilities on other, similar questions. A consistently different response by one group to a specific question, among questions of a similar type, can indicate an effect of DIF. It does not count as differential item functioning if both groups have an equal chance of giving different responses to the same questions. Such bias can be a result of culture, educational level, and other factors that are independent of group traits. DIF is only considered if test-takers from different groups with the same underlying latent ability level have a different chance of giving specific responses.[54] Such questions are usually removed in order to make the test equally fair for both groups. Common techniques for analyzing DIF are item response theory (IRT) based methods, Mantel-Haenszel, and logistic regression.[54]
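
As a concrete, simplified illustration of the Mantel-Haenszel approach mentioned above, one can stratify test-takers by total score (a proxy for latent ability) and compare, within each stratum, how often a reference group and a focal group answer a single item correctly. The counts below are invented for illustration:

```python
# Toy Mantel-Haenszel DIF check for one item, stratified by ability level.
# Counts are fabricated for illustration only.
# Each stratum: (ref_correct, ref_wrong, focal_correct, focal_wrong)
strata = [
    (40, 60, 30, 70),   # low total-score stratum
    (70, 30, 55, 45),   # middle stratum
    (90, 10, 80, 20),   # high stratum
]

num = 0.0  # sum of A*D / N over strata
den = 0.0  # sum of B*C / N over strata
for a, b, c, d in strata:
    n = a + b + c + d
    num += a * d / n
    den += b * c / n

or_mh = num / den
print(f"Mantel-Haenszel common odds ratio: {or_mh:.2f}")
# An odds ratio near 1 suggests no DIF for this item; values well above or
# below 1 suggest the item functions differently for the two groups even
# at matched ability levels.
```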

Reliability and validity

Psychometricians generally regard IQ tests as having high statistical reliability.[9][55] A high reliability implies that – although test-takers may have varying scores when taking the same test on differing occasions, and although they may have varying scores when taking different IQ tests at the same age – the scores generally agree with one another and across time. Like all statistical quantities, any particular estimate of IQ has an associated standard error that measures uncertainty about the estimate. For modern tests, the standard error of measurement is about three points[citation needed]. Clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes.[22][56][57] In a survey of 661 randomly sampled psychologists and educational researchers, published in 1988, Mark Snyderman and Stanley Rothman reported a general consensus supporting the validity of IQ testing. "On the whole, scholars with any expertise in the area of intelligence and intelligence testing (defined very broadly) share a common view of the most important components of intelligence, and are convinced that it can be measured with some degree of accuracy." Almost all respondents picked out abstract reasoning, ability to solve problems and ability to acquire knowledge as the most important elements.[58]
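
The standard error of measurement mentioned above translates directly into a confidence band around an observed score. A minimal sketch, assuming the conventional relation SEM = SD * sqrt(1 - reliability), a roughly normal error distribution, and an illustrative reliability value:

```python
# Sketch: confidence interval around an observed IQ score given test reliability.
# The reliability and observed score are assumptions for illustration.
import math

sd = 15             # standard deviation of the IQ scale
reliability = 0.95  # assumed reliability coefficient of the test
observed = 112      # hypothetical observed score

sem = sd * math.sqrt(1 - reliability)   # ~3.4 points, close to the
                                        # "about three points" cited above
low, high = observed - 1.96 * sem, observed + 1.96 * sem
print(f"SEM = {sem:.1f}; 95% interval ~ {low:.0f}-{high:.0f}")
```
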
IQ scores can differ to some degree for the same person on different IQ tests, so a person does not always belong to the same IQ score range each time the person is tested. (IQ score table data and pupil pseudonyms adapted from description of KABC-II norming study cited in Kaufman 2009.[59][60])
Pupil KABC-II WISC-III WJ-III
Asher 90 95 111
Brianna 125 110 105
Colin 100 93 101
Danica 116 127 118
Elpha 93 105 93
Fritz 106 105 105
Georgi 95 100 90
Hector 112 113 103
Imelda 104 96 97
Jose 101 99 86
Keoku 81 78 75
Leo 116 124 102
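
Using the pupil scores in the table above, a short script can make the same point numerically by showing how far each pupil's scores spread across the three batteries:

```python
# Scores transcribed from the table above (KABC-II, WISC-III, WJ-III).
scores = {
    "Asher":   (90, 95, 111),
    "Brianna": (125, 110, 105),
    "Colin":   (100, 93, 101),
    "Danica":  (116, 127, 118),
    "Elpha":   (93, 105, 93),
    "Fritz":   (106, 105, 105),
    "Georgi":  (95, 100, 90),
    "Hector":  (112, 113, 103),
    "Imelda":  (104, 96, 97),
    "Jose":    (101, 99, 86),
    "Keoku":   (81, 78, 75),
    "Leo":     (116, 124, 102),
}

for pupil, s in scores.items():
    spread = max(s) - min(s)
    print(f"{pupil:>8}: scores {s}, spread {spread} points")
# Spreads of 10-22 points for several pupils illustrate why a single test
# score should not be treated as a fixed property of the person.
```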

Flynn effect

Since the early 20th century, raw scores on IQ tests have increased in most parts of the world.[61][62][63] When a new version of an IQ test is normed, the standard scoring is set so performance at the population median results in a score of IQ 100. The phenomenon of rising raw score performance means if test-takers are scored by a constant standard scoring rule, IQ test scores have been rising at an average rate of around three IQ points per decade. This phenomenon was named the Flynn effect in the book The Bell Curve after James R. Flynn, the author who did the most to bring this phenomenon to the attention of psychologists.[64][65]
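
A rough way to see what the three-points-per-decade figure implies is to project how mean scores drift against an old test norm over time. The sketch below assumes a constant linear trend, which is a simplification of the actual, uneven Flynn effect:

```python
# Sketch: expected inflation of mean scores against outdated norms,
# assuming a constant Flynn effect of about 3 IQ points per decade.
FLYNN_POINTS_PER_DECADE = 3.0

def expected_mean_on_old_norms(norm_year: int, test_year: int) -> float:
    """Mean score the current population would obtain on a test normed earlier."""
    decades = (test_year - norm_year) / 10
    return 100 + FLYNN_POINTS_PER_DECADE * decades

for norm_year in (1950, 1980, 2000):
    print(norm_year, "->", expected_mean_on_old_norms(norm_year, 2018))
# e.g. a population tested in 2018 on 1950 norms would be expected to average
# roughly 120 rather than 100, which is why tests are periodically re-normed.
```
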
Researchers have been exploring the issue of whether the Flynn effect is equally strong on performance of all kinds of IQ test items, whether the effect may have ended in some developed nations, whether there are social subgroup differences in the effect, and what possible causes of the effect might be.[66] A 2011 textbook, IQ and Human Intelligence, by N. J. Mackintosh, noted that the Flynn effect demolishes the fears that IQ would decrease. He also asks whether it represents a real increase in intelligence beyond IQ scores.[67] A 2011 psychology textbook, lead-authored by Harvard psychologist Daniel Schacter, noted that humans' inherited intelligence could be going down while acquired intelligence goes up.[68]

Age

IQ can change to some degree over the course of childhood.[69] However, in one longitudinal study, the mean IQ scores of tests at ages 17 and 18 were correlated at r=0.86 with the mean scores of tests at ages five, six, and seven and at r=0.96 with the mean scores of tests at ages 11, 12, and 13.[9]

For decades, practitioners' handbooks and textbooks on IQ testing have reported IQ declines with age after the beginning of adulthood. However, later researchers pointed out this phenomenon is related to the Flynn effect and is in part a cohort effect rather than a true aging effect. A variety of studies of IQ and aging have been conducted since the norming of the first Wechsler Intelligence Scale drew attention to IQ differences in different age groups of adults. Current consensus is that fluid intelligence generally declines with age after early adulthood, while crystallized intelligence remains intact. Both cohort effects (the birth year of the test-takers) and practice effects (test-takers taking the same form of IQ test more than once) must be controlled to gain accurate data. It is unclear whether any lifestyle intervention can preserve fluid intelligence into older ages.[70]

The exact peak age of fluid intelligence or crystallized intelligence remains elusive. Cross-sectional studies usually show that fluid intelligence in particular peaks at a relatively young age (often in early adulthood), while longitudinal data mostly show that intelligence is stable until mid-adulthood or later. Subsequently, intelligence seems to decline slowly.[71]

Genetics and environment

Environmental and genetic factors play a role in determining IQ. Their relative importance has been the subject of much research and debate.[72]

Heritability

Heritability is defined as the proportion of variance in a trait which is attributable to genotype within a defined population in a specific environment. A number of points must be considered when interpreting heritability.[73] Heritability, as a term, applies to populations, and in populations there are variations in traits between individuals. Heritability measures how much of that variation is caused by genetics. The value of heritability can change if the impact of environment (or of genes) in the population is substantially altered. A high heritability of a trait does not mean environmental effects, such as learning, are not involved. Since heritability increases during childhood and adolescence, one should be cautious when drawing conclusions regarding the role of genetics and environment from studies in which the participants are not followed until they are adults.[citation needed]
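
The heritability figures discussed in this section are variance ratios. One classical and much-simplified way such estimates are obtained is Falconer's twin comparison, which contrasts the IQ correlations of identical and fraternal twin pairs; the correlations below are illustrative assumptions, not results from any particular study:

```python
# Sketch of Falconer's approximation for heritability from twin correlations.
# r_mz and r_dz are illustrative values, not data from a specific study.
r_mz = 0.85  # IQ correlation between identical (monozygotic) twin pairs
r_dz = 0.55  # IQ correlation between fraternal (dizygotic) twin pairs

h2 = 2 * (r_mz - r_dz)   # heritability: variance attributed to genes
c2 = 2 * r_dz - r_mz     # shared (family) environment
e2 = 1 - r_mz            # non-shared environment plus measurement error

print(f"h^2 ~ {h2:.2f}, shared environment ~ {c2:.2f}, non-shared ~ {e2:.2f}")
# Like any heritability estimate, these values describe variance in a
# particular population and environment, not a fixed property of individuals.
```
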
The general figure for the heritability of IQ, according to an authoritative American Psychological Association report, is 0.45 for children, and rises to around 0.75 for late adolescents and adults.[74][75] Heritability measures in infancy are as low as 0.2, around 0.4 in middle childhood, and as high as 0.9 in adulthood.[76][77] One proposed explanation is that people with different genes tend to reinforce the effects of those genes, for example by seeking out different environments.[9][78]

Shared family environment

Family members have aspects of environments in common (for example, characteristics of the home). This shared family environment accounts for 0.25–0.35 of the variation in IQ in childhood. By late adolescence, it is quite low (zero in some studies). The effect for several other psychological traits is similar. These studies have not looked at the effects of extreme environments, such as in abusive families.[9][79][80][81]

Non-shared family environment and environment outside the family

Although parents treat their children differently, such differential treatment explains only a small amount of nonshared environmental influence. One suggestion is that children react differently to the same environment because of different genes. More likely influences may be the impact of peers and other experiences outside the family.[9][80]

Individual genes

A very large proportion of the over 17,000 human genes are thought to have an effect on the development and functionality of the brain.[82] While a number of individual genes have been reported to be associated with IQ, none have a strong effect. Deary and colleagues (2009) reported that no finding of a strong single gene effect on IQ has been replicated.[83] Recent findings of gene associations with normally varying intelligence differences in adults continue to show weak effects for any one gene;[84] likewise in children,[85] but see [86].

Gene-environment interaction

David Rowe reported an interaction of genetic effects with socioeconomic status, such that the heritability was high in high-SES families, but much lower in low-SES families.[87] In the US, this has been replicated in infants,[88] children,[89] adolescents,[90] and adults.[91] Outside the US, studies show no link between heritability and SES.[92] Some effects may even reverse sign outside the US.[92][93]

Dickens and Flynn (2001) have argued that genes for high IQ initiate an environment-shaping feedback cycle, with genetic effects causing bright children to seek out more stimulating environments that then further increase their IQ. In Dickens' model, environment effects are modeled as decaying over time. In this model, the Flynn effect can be explained by an increase in environmental stimulation independent of it being sought out by individuals. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they enduringly raised children's drive to seek out cognitively demanding experiences.[94][95]
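
The reciprocal-causation idea can be made concrete with a toy simulation in which a small initial (genetic) advantage attracts a better-matched environment, which in turn feeds back into ability while environmental effects decay. This is only a loose sketch inspired by the Dickens and Flynn argument, not their published model, and every parameter below is an arbitrary assumption:

```python
# Toy feedback loop loosely inspired by the Dickens & Flynn argument:
# ability attracts a better environment, the environment boosts ability,
# and environmental effects decay unless renewed. All parameters are invented.
def simulate(genetic_advantage: float, steps: int = 20) -> float:
    ability = genetic_advantage   # initial edge, in arbitrary units
    environment = 0.0
    for _ in range(steps):
        environment = 0.5 * environment + 0.6 * ability  # decay + matching
        ability = genetic_advantage + 0.4 * environment  # genes + environment
    return ability

print(f"no genetic edge   : {simulate(0.0):.2f}")
print(f"small genetic edge: {simulate(1.0):.2f}")
# The small initial advantage is multiplied through the feedback loop, which
# is how modest genetic differences could masquerade as large "environmental"
# ones (and vice versa) in this kind of model.
```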

Interventions

In general, educational interventions, such as those described below, have shown short-term effects on IQ, but long-term follow-up is often missing. For example, in the US very large intervention programs, such as the Head Start Program, have not produced lasting gains in IQ scores. More intensive but much smaller projects, such as the Abecedarian Project, have reported lasting effects, often on socioeconomic status variables, rather than IQ.[9]

Recent studies have shown that training in using one's working memory may increase IQ. A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility of the transfer of fluid intelligence from specifically designed working memory training.[96] Further research will be needed to determine the nature, extent, and duration of the proposed transfer. Among other questions, it remains to be seen whether the results extend to other kinds of fluid intelligence tests than the matrix test used in the study, and, if so, whether, after training, fluid intelligence measures retain their correlation with educational and occupational achievement or whether the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the training is durable over extended periods of time.[97]

Music

Musical training in childhood has been found to correlate with higher than average IQ.[98][99] It is popularly thought that listening to classical music raises IQ. However, multiple attempted replications (e.g.[100]) have shown that this is at best a short-term effect (lasting no longer than 10 to 15 minutes), and is not related to any increase in IQ.[101]

Brain anatomy

Several neurophysiological factors have been correlated with intelligence in humans, including the ratio of brain weight to body weight and the size, shape, and activity level of different parts of the brain. Specific features that may affect IQ include the size and shape of the frontal lobes, the amount of blood and chemical activity in the frontal lobes, the total amount of gray matter in the brain, the overall thickness of the cortex, and the glucose metabolic rate.[102]

Health

Health is important in understanding differences in IQ test scores and other measures of cognitive ability. Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood when the brain is growing and the blood–brain barrier is less effective. Such impairment may sometimes be permanent, or it may be partially or wholly compensated for by later growth.[citation needed]

Since about 2010, researchers such as Eppig, Hassel, and MacKenzie have found a very close and consistent link between IQ scores and infectious diseases, especially in the infant and preschool populations and the mothers of these children.[103] They have postulated that fighting infectious diseases strains the child's metabolism and prevents full brain development. Hassel postulated that it is by far the most important factor in determining population IQ. However, they also found that subsequent factors such as good nutrition and regular quality schooling can offset early negative effects to some extent.

Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Improvements in nutrition, and in public policy in general, have been implicated in worldwide IQ increases.[citation needed]

Cognitive epidemiology is a field of research that examines the associations between intelligence test scores and health. Researchers in the field argue that intelligence measured at an early age is an important predictor of later health and mortality differences.

Social correlations

School performance

The American Psychological Association's report "Intelligence: Knowns and Unknowns" states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50. This means that the explained variance is 25%. Achieving good grades depends on many factors other than IQ, such as "persistence, interest in school, and willingness to study" (p. 81).[9]
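
The step from the correlation to the "25% explained variance" figure is simply the squared correlation; the same conversion applies to the other correlations quoted later in this article:

```latex
% Shared (explained) variance is the square of the correlation coefficient:
r = 0.50 \quad\Longrightarrow\quad r^{2} = 0.50^{2} = 0.25 \quad (25\% \text{ of the variance in grades})
```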

It has been found that the correlation of IQ scores with school performance depends on the IQ measurement used. For undergraduate students, the Verbal IQ as measured by WAIS-R has been found to correlate significantly (0.53) with the grade point average (GPA) of the last 60 hours. In contrast, Performance IQ correlation with the same GPA was only 0.22 in the same study.[104]

Some measures of educational aptitude correlate highly with IQ tests – for instance, Frey and Detterman (2004) reported a correlation of 0.82 between g (general intelligence factor) and SAT scores;[105] another study found a correlation of 0.81 between g and GCSE scores, with the explained variance ranging "from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design".[106]

Job performance

According to Schmidt and Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability."[107] The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6.[108] The correlations were higher when the unreliability of measurement methods was controlled for.[9] While IQ is more strongly correlated with reasoning and less so with motor function,[109] IQ-test scores predict performance ratings in all occupations.[107] That said, for highly qualified activities (research, management) low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally-skilled activities, athletic strength (manual strength, speed, stamina, and coordination) is more likely to influence performance.[107] The prevailing view among academics is that it is largely through the quicker acquisition of job-relevant knowledge that higher IQ mediates job performance. This view has been challenged by Byington & Felps (2010), who argued that "the current applications of IQ-reflective tests allow individuals with high IQ scores to receive greater access to developmental resources, enabling them to acquire additional capabilities over time, and ultimately perform their jobs better."[110]
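
The sentence above about controlling for the unreliability of measurement refers to the classical correction for attenuation, which estimates what the IQ-performance correlation would be if both variables were measured without error. The observed correlation and reliabilities below are illustrative assumptions:

```python
# Sketch of the classical correction for attenuation.
# Observed correlation and reliabilities are illustrative assumptions.
import math

r_observed = 0.30        # observed IQ vs. job-performance correlation
rel_iq = 0.90            # assumed reliability of the IQ measure
rel_performance = 0.55   # assumed reliability of supervisor ratings

r_corrected = r_observed / math.sqrt(rel_iq * rel_performance)
print(f"corrected correlation ~ {r_corrected:.2f}")
# Unreliable criterion measures (such as ratings) pull observed correlations
# toward zero; correcting for that unreliability raises the estimate.
```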

In establishing a causal direction to the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[111] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability, but not specific ability scores, predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability.[112]

The US military has minimum enlistment standards at about the IQ 85 level. There have been two experiments with lowering this to 80 but in both cases these men could not master soldiering well enough to justify their costs.[113]

Income

While it has been suggested that "in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much",[114][115] large-scale longitudinal studies indicate an increase in IQ translates into an increase in performance at all levels of IQ: i.e. ability and job performance are monotonically linked at all IQ levels.[116][117] Charles Murray, coauthor of The Bell Curve, found that IQ has a substantial effect on income independent of family background.[118]

The link from IQ to wealth is much less strong than that from IQ to job performance. Some studies indicate that IQ is unrelated to net worth.[119][120]

The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns stated that IQ scores accounted for (explained variance) about a quarter of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes.[9]

In a meta-analysis, Strenze (2006) reviewed much of the literature and estimated the correlation between IQ and income to be about 0.23.[121]

Some studies claim that IQ only accounts for (explains) a sixth of the variation in income because many studies are based on young adults, many of whom have not yet reached their peak earning capacity, or even completed their education. On page 568 of The g Factor, Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4 (one sixth or 16% of the variance), the relationship increases with age and peaks at middle age, when people have reached their maximum career potential. In the book A Question of Intelligence, Daniel Seligman cites an IQ-income correlation of 0.5 (25% of the variance).

A 2002 study[122] further examined the impact of non-IQ factors on income and concluded that an individual's location, inherited wealth, race, and schooling are more important as factors in determining income than IQ.

Crime

The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns stated that the correlation between IQ and crime was −0.2. It was −0.19 between IQ scores and number of juvenile offenses in a large Danish sample; with social class controlled, the correlation dropped to −0.17. A correlation of 0.20 means that the explained variance is 4%. The causal links between psychometric ability and social outcomes may be indirect. Children with poor scholastic performance may feel alienated. Consequently, they may be more likely to engage in delinquent behavior, compared to other children who do well.[9]

In his book The g Factor (1998), Arthur Jensen cited data which showed that, regardless of race, people with IQs between 70 and 90 have higher crime rates than people with IQs below or above this range, with the peak range being between 80 and 90.

The 2009 Handbook of Crime Correlates stated that reviews have found that around eight IQ points, or 0.5 SD, separate criminals from the general population, especially for persistent serious offenders. It has been suggested that this simply reflects that "only dumb ones get caught" but there is similarly a negative relation between IQ and self-reported offending. That children with conduct disorder have lower IQ than their peers "strongly argues" for the theory.[123]

A study of the relationship between US county-level IQ and US county-level crime rates found that higher average IQs were associated with lower levels of property crime, burglary, larceny rate, motor vehicle theft, violent crime, robbery, and aggravated assault. These results were not "confounded by a measure of concentrated disadvantage that captures the effects of race, poverty, and other social disadvantages of the county."[124][125]

The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns stated that the correlations for most "negative outcome" variables are typically smaller than 0.20, which means that the explained variance is less than 4%.[9]

Tambs et al.[126][better source needed] found that occupational status, educational attainment, and IQ are individually heritable; and further found that "genetic variance influencing educational attainment ... contributed approximately one-fourth of the genetic variance for occupational status and nearly half the genetic variance for IQ." In a sample of U.S. siblings, Rowe et al.[127] report that the inequality in education and income was predominantly due to genes, with shared environmental factors playing a subordinate role.

Health and mortality

Multiple studies conducted in Scotland have found that higher IQs in early life are associated with lower mortality and morbidity rates later in life.[128][129]

Other accomplishments

Average adult combined IQs associated with real-life accomplishments by various tests:[130][131]
  • MDs, JDs, and PhDs: 125 (WAIS-R, 1987)
  • College graduates: 112 (KAIT, 2000; K-BIT, 1992); 115 (WAIS-R)
  • 1–3 years of college: 104 (KAIT; K-BIT); 105–110 (WAIS-R)
  • Clerical and sales workers: 100–105
  • High school graduates, skilled workers (e.g., electricians, cabinetmakers): 100 (KAIT; WAIS-R); 97 (K-BIT)
  • 1–3 years of high school (completed 9–11 years of school): 94 (KAIT); 90 (K-BIT); 95 (WAIS-R)
  • Semi-skilled workers (e.g., truck drivers, factory workers): 90–95
  • Elementary school graduates (completed eighth grade): 90
  • Elementary school dropouts (completed 0–7 years of school): 80–85
  • Have a 50/50 chance of reaching high school: 75

Average IQ of various occupational groups:[132]
  • Professional and technical: 112
  • Managers and administrators: 104
  • Clerical workers, sales workers, skilled workers, craftsmen, and foremen: 101
  • Semi-skilled workers (operatives, service workers, including private household): 92
  • Unskilled workers: 87

Type of work that can be accomplished:[130]
  • Adults can harvest vegetables, repair furniture: 60
  • Adults can do domestic work: 50


There is considerable variation within and overlap among these categories. People with high IQs are found at all levels of education and occupational categories. The biggest difference occurs for low IQs with only an occasional college graduate or professional scoring below 90.[22]

Group-IQ or the collective intelligence factor c

With operationalization and methodology derived from the general intelligence factor g, a new scientific understanding of collective intelligence, defined as a group's general ability to perform a wide range of tasks,[133] aims to explain the intelligent behavior of groups. The goal is to detect and explain a general intelligence factor c for groups, parallel to the g factor for individuals. As g is highly interrelated with the concept of IQ,[134][135] this measurement of collective intelligence can be interpreted as an intelligence quotient for groups (Group-IQ), even though the score is not a quotient per se. Current evidence suggests that this Group-IQ is only moderately correlated with group members' individual IQs, and is related to other correlates such as group members' Theory of Mind.[133]

Group differences

Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between ethnic and racial groups and sexes. While there is little scholarly debate about the existence of some of these differences, their causes remain highly controversial both within academia and in the public sphere.[136]

Sex

Most IQ tests are constructed so that there are no overall score differences between females and males.[9][137] Popular IQ batteries such as the WAIS and the WISC-R are also constructed in order to eliminate sex differences.[138] In a paper presented at the International Society for Intelligence Research in 2002, it was pointed out that because test constructors and the United States' Educational Testing Service (which developed the US SAT test) often eliminate items showing marked sex differences in order to reduce the perception of bias, the "true" sex difference is masked. Items like the Mental Rotations Test and reaction time tests,[jargon] which show a male advantage in IQ, are often removed.[139] A meta-analysis focusing on gender differences in math performance found nearly identical performance for boys and girls,[140] and the subject of mathematical intelligence and gender has been controversial.[141]

Race and intelligence

Race and intelligence in the United States of America

The 1996 Task Force investigation on Intelligence sponsored by the American Psychological Association concluded that there are significant variations in IQ across races.[9] The problem of determining the causes underlying this variation relates to the question of the contributions of "nature and nurture" to IQ. Psychologists such as Alan S. Kaufman[142] and Nathan Brody[143] and statisticians such as Bernie Devlin[144] argue that there are insufficient data to conclude that this is because of genetic influences. A review article published in 2012 by leading scholars on human intelligence concluded, after reviewing the prior research literature, that group differences in IQ are best understood as environmental in origin.[145]

In considering disparities between test results of different ethnic groups, one might investigate the effects of stereotype threat (a situational predicament in which a person feels at risk of confirming negative stereotypes about the group(s) he identifies with),[146] as well as culture and acculturation.[147] This phenomenon has been criticized as a fiction of publication bias.[148]

Public policy

In the United States, certain public policies and laws regarding military service,[149][150] education, public benefits,[151] capital punishment,[152] and employment incorporate an individual's IQ into their decisions. However, in the case of Griggs v. Duke Power Co. in 1971, for the purpose of minimizing employment practices that disparately impacted racial minorities, the U.S. Supreme Court banned the use of IQ tests in employment, except when linked to job performance via a job analysis. Internationally, certain public policies, such as improving nutrition and prohibiting neurotoxins, have as one of their goals raising, or preventing a decline in, intelligence.
A diagnosis of intellectual disability is in part based on the results of IQ testing. Borderline intellectual functioning is a categorization where a person has below average cognitive ability (an IQ of 71–85), but the deficit is not as severe as intellectual disability (70 or below).

In the United Kingdom, the eleven-plus exam, which incorporated an intelligence test, has been used since 1945 to decide, at eleven years of age, which type of school a child should attend. Such tests have been much less used since the widespread introduction of comprehensive schools.

Criticism and views

Relationship to intelligence

IQ is the most thoroughly researched means of measuring intelligence, and by far the most widely used in practical settings. However, while IQ strives to measure some concepts of intelligence, it may fail to serve as an accurate measure of broader definitions of intelligence. IQ tests examine some areas of intelligence, while neglecting to account for other areas, such as creativity and social intelligence.
Critics such as Keith Stanovich do not dispute the reliability of IQ test scores or their capacity to predict some kinds of achievement, but argue that basing a concept of intelligence on IQ test scores alone neglects other important aspects of mental ability.[9][153]

Criticism of IQ

Some scientists dispute the worthiness of IQ entirely. In The Mismeasure of Man (1996), paleontologist Stephen Jay Gould criticized IQ tests and argued that they were used for scientific racism. He argued that g was a mathematical artifact and criticized:
...the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.[154]
Arthur Jensen responded:
...what Gould has mistaken for "reification" is neither more nor less than the common practice in every science of hypothesizing explanatory models to account for the observed relationships within a given domain. Well known examples include the heliocentric theory of planetary motion, the Bohr atom, the electromagnetic field, the kinetic theory of gases, gravitation, quarks, Mendelian genes, mass, velocity, etc. None of these constructs exists as a palpable entity occupying physical space.[155]
Jensen also argued that even if g were replaced by a model with several intelligences this would change the situation less than expected. He argues that all tests of cognitive ability would continue to be highly correlated with one another and there would still be a black-white gap on cognitive tests.[156] Hans Eysenck responded to Gould by stating that no psychologist had said that intelligence was an area located in the brain.[157] Eysenck also argued IQ tests were not racist, pointing out that Northeast Asians and Jews both scored higher than non-Jewish Europeans on IQ tests, and this would not please European racists.[158]

Psychologist Peter Schönemann persistently criticized IQ, calling it "the IQ myth". He argued that g is a flawed theory and that the high heritability estimates of IQ are based on false assumptions.[159][160] Robert Sternberg, another significant critic of g as the main measure of human cognitive abilities, argued that reducing the concept of intelligence to the measure of g does not fully account for the different skills and knowledge types that produce success in human society.[161]

Test bias

The American Psychological Association's report Intelligence: Knowns and Unknowns stated that in the United States IQ tests as predictors of social achievement are not biased against African Americans since they predict future performance, such as school achievement, similarly to the way they predict future performance for Caucasians.[9] While agreeing that IQ tests predict performance equally well for all racial groups, Nicholas Mackintosh also points out that there may still be a bias inherent in IQ testing if the education system is also systematically biased against African Americans, in which case educational performance may in fact also be an underestimation of African American children's cognitive abilities.[162] Earl Hunt points out that while this may be the case that would not be a bias of the test, but of society.[163]
However, IQ tests may well be biased when used in other situations. A 2005 study stated that "differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American students,"[164] indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa.[165][166] Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for autistic children; the alternative of using developmental or adaptive skills measures is problematic because such measures are relatively poor measures of intelligence in autistic children, and may have resulted in incorrect claims that a majority of autistic children are mentally retarded.[167]

Outdated methodology

According to a 2006 article by the National Center for Biotechnology Information, contemporary psychological research often did not reflect substantial recent developments in psychometrics and "bears an uncanny resemblance to the psychometric state of the art as it existed in the 1950s."[168]

"Intelligence: Knowns and Unknowns"

In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a task force in 1995 to write a report on the state of intelligence research which could be used by all sides as a basis for discussion, "Intelligence: Knowns and Unknowns". The full text of the report is available through several websites.[9]

In this paper, the representatives of the association regret that IQ-related works are frequently written with a view to their political consequences: "research findings were often assessed not so much on their merits or their scientific standing as on their supposed political implications".

The task force concluded that IQ scores do have high predictive validity for individual differences in school achievement. They confirm the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They stated that individual differences in intelligence are substantially influenced by both genetics and environment.

The report stated that a number of biological factors, including malnutrition, exposure to toxic substances, and various prenatal and perinatal stressors, result in lowered psychometric intelligence under at least some conditions. The task force agrees that large differences do exist between the average IQ scores of blacks and whites, saying:
The cause of that differential is not known; it is apparently not due to any simple form of bias in the content or administration of the tests themselves. The Flynn effect shows that environmental factors can produce differences of at least this magnitude, but that effect is mysterious in its own right. Several culturally based explanations of the Black/White IQ differential have been proposed; some are plausible, but so far none has been conclusively supported. There is even less empirical support for a genetic interpretation. In short, no adequate explanation of the differential between the IQ means of Blacks and Whites is presently available.
The APA journal that published the statement, American Psychologist, subsequently published eleven critical responses in January 1997, several of them arguing that the report failed to examine adequately the evidence for partly genetic explanations.

Dynamic assessment

An alternative to standard IQ tests originated in the writings of psychologist Lev Vygotsky (1896–1934) during his last two years of work.[169][170] The notion of the zone of proximal development that he introduced in 1933, roughly a year before his death, served as the banner for his proposal to diagnose development as the level of actual development, which can be measured by the child's independent problem solving, together with the level of proximal, or potential, development, which is measured in situations of moderately assisted problem solving by the child.[171] The maximum level of complexity and difficulty of the problem that the child is capable of solving under some guidance indicates the level of potential development. The difference between this higher level of potential development and the lower level of actual development indicates the zone of proximal development. The combination of the two indices, the level of actual development and the zone of proximal development, according to Vygotsky, provides a significantly more informative indicator of psychological development than the assessment of the level of actual development alone.[172][173]

The ideas on the zone of development were later developed in a number of psychological and educational theories and practices, most notably under the banner of dynamic assessment, which focuses on testing learning and developmental potential[174][175][176] (for instance, in the work of Reuven Feuerstein and his associates,[177] who have criticized standard IQ testing for its putative assumption or acceptance of "fixed and immutable" characteristics of intelligence or cognitive functioning). Grounded in the developmental theories of Vygotsky and Feuerstein, who maintained that human beings are not static entities but are always in states of transition and in transactional relationships with the world, dynamic assessment has also received considerable support in recent revisions of cognitive developmental theory by Joseph Campione, Ann Brown, and John D. Bransford and in the theories of multiple intelligences of Howard Gardner and Robert Sternberg.[178] Still, dynamic assessment has not been implemented in education on a large scale and is, up to now, by the admission of one of its notable proponents, "in search of its identity".[179]

Classification

IQ classification is the practice used by IQ test publishers for designating IQ score ranges into various categories with labels such as "superior" or "average."[180] IQ classification was preceded historically by attempts to classify human beings by general ability based on other forms of behavioral observation. Those other forms of behavioral observation are still important for validating classifications based on IQ tests.

High IQ societies

There are social organizations, some international, which limit membership to people who score at or above the 98th percentile (roughly two standard deviations above the mean) on some IQ test or equivalent. Mensa International is perhaps the best known of these. The largest society restricted to scores at or above the 99.9th percentile (roughly three standard deviations above the mean) is the Triple Nine Society.
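As a rough illustration of the arithmetic behind these cutoffs, the sketch below converts a percentile threshold into an IQ score, assuming the conventional deviation-IQ scale with a mean of 100 and a standard deviation of 15; the function name and the use of scipy are illustrative choices, not part of any society's admission procedure.

    # Minimal sketch: convert a percentile threshold into an IQ cutoff,
    # assuming scores are normally distributed with mean 100 and SD 15.
    from scipy.stats import norm

    def iq_cutoff(percentile, mean=100.0, sd=15.0):
        """Return the IQ score at the given percentile (0-100)."""
        return norm.ppf(percentile / 100.0, loc=mean, scale=sd)

    print(round(iq_cutoff(98)))    # ~131, roughly a 98th-percentile (Mensa-style) cutoff
    print(round(iq_cutoff(99.9)))  # ~146, roughly a 99.9th-percentile (Triple Nine-style) cutoff

By the same arithmetic, a score exactly two standard deviations above the mean (IQ 130) sits at about the 97.7th percentile, which is why the parentheticals above are only approximate.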

Charles Murray (political scientist)

From Wikipedia, the free encyclopedia
 
Charles Murray
[picture of Murray speaking at FreedomFest, 2013]
Born: Charles Alan Murray, January 8, 1943 (age 75), Newton, Iowa, U.S.
Alma mater: Harvard University (AB); Massachusetts Institute of Technology (SM, PhD)
Known for: The Bell Curve, Losing Ground, Human Accomplishment, Coming Apart
Spouse(s): Suchart Dej-Udom (1966–1980); Catherine Bly Cox (1983–present)
Awards: Irving Kristol Award (2009); Kistler Prize (2011)
Fields: Political science, sociology, race and intelligence
Thesis: Investment and Tithing in Thai Villages: A Behavioral Study of Rural Modernization (1974)
Doctoral advisor: Lucian Pye

Charles Alan Murray (/ˈmɜːri/; born January 8, 1943) is an American political scientist, author, and columnist. His book Losing Ground: American Social Policy 1950–1980 (1984), which discussed the American welfare system, was widely read and discussed, and influenced subsequent government policy.[3] He became well known for his controversial book The Bell Curve (1994), written with Richard Herrnstein, in which he argues that intelligence is a better predictor of many individual outcomes, including income, job performance, out-of-wedlock pregnancy, and crime, than parental socio-economic status or education level, and that social welfare programs and education efforts to improve social outcomes for the disadvantaged are largely wasted.

Murray's most successful subsequent books have been Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 B.C. to 1950 (2003) and Coming Apart: The State of White America, 1960–2010 (2012).[3] Over his career he has published dozens of books and articles. His work has drawn accusations of scientific racism.

Murray is a fellow at the American Enterprise Institute, a conservative think tank in Washington, D.C.[3]

Early life

Of Scotch-Irish ancestry,[5][6] Murray was born in Newton, Iowa, and raised in a Republican, "Norman Rockwell kind of family" that stressed moral responsibility. He is the son of Frances B. (née Patrick) and Alan B. Murray, a Maytag Company executive.[7] His youth was marked by a rebellious and pranksterish sensibility.[8] As a teen, he played pool at a hangout for juvenile delinquents, developed debating skills, espoused labor unionism (to his parents' annoyance), and on one occasion lit fireworks that were attached to a cross that he put next to a police station.[9]

Murray credits the SAT with helping him get out of Newton and into Harvard. "Back in 1961, the test helped get me into Harvard from a small Iowa town by giving me a way to show that I could compete with applicants from Exeter and Andover," wrote Murray. "Ever since, I have seen the SAT as the friend of the little guy, just as James Bryant Conant, president of Harvard, said it would be when he urged the SAT upon the nation in the 1940s."[10] However, in an op-ed published in the New York Times on March 8, 2012, Murray suggested removing the SAT's role in college admissions, noting that the SAT "has become a symbol of new-upper-class privilege, as people assume (albeit wrongly) that high scores are purchased through the resources of private schools and expensive test preparation programs".[11]

Murray obtained an A.B. in history from Harvard in 1965 and a Ph.D. in political science from the Massachusetts Institute of Technology in 1974.[3]

Peace Corps

Murray left for the Peace Corps in Thailand in 1965, staying abroad for a formative six years.[12] At the beginning of this period, the young Murray kindled a romance with Suchart Dej-Udom, his Thai Buddhist language instructor in Hawaii and the daughter of a wealthy Thai businessman, who was "born with one hand and a mind sharp enough to outscore the rest of the country on the college entrance exam." Murray subsequently proposed by mail from Thailand, and they married the following year, a move that Murray now considers youthful rebellion. "I'm getting married to a one-handed Thai Buddhist," he said. "This was not the daughter-in-law that would have normally presented itself to an Iowa couple."[13]

Murray credits his time in the Peace Corps in Thailand with his lifelong interest in Asia. "There are aspects of Asian culture as it is lived that I still prefer to Western culture, 30 years after I last lived in Thailand," says Murray. "Two of my children are half-Asian. Apart from those personal aspects, I have always thought that the Chinese and Japanese civilizations had elements that represented the apex of human accomplishment in certain domains."[14]

His tenure with the Peace Corps ended in 1968, and during the remainder of his time in Thailand he worked on an American Institutes for Research (AIR) covert counter-insurgency program for the US military in cooperation with the CIA.[15][16][17]

Recalling his time in Thailand in a 2014 episode of "Conversations with Bill Kristol," Murray noted that his worldview was fundamentally shaped by his time there. "Essentially, most of what you read in my books I learned in Thai villages." He went on, "I suddenly was struck first by the enormous discrepancy between what Bangkok thought was important to the villagers and what the villagers wanted out of government. And the second thing I got out of it was that when the government change agent showed up, the village went to hell in terms of its internal governance."[18]

Murray's work in the Peace Corps and subsequent social research in Thailand for research firms associated with the US government led to the subject of his statistical doctoral thesis in political science at M.I.T., in which he argued against bureaucratic intervention in the lives of the Thai villagers.[19][20]

Divorce and remarriage

By the 1980s, his marriage to Suchart Dej-Udom had been unhappy for years, but "his childhood lessons on the importance of responsibility brought him slowly to the idea that divorce was an honorable alternative, especially with young children involved."[21]

Murray divorced Dej-Udom after fourteen years of marriage[8] and three years later married Catherine Bly Cox (born 1949, Newton, Iowa),[22] an English literature instructor at Rutgers University. Cox was initially dubious when she saw his conservative reading choices, and she spent long hours "trying to reconcile his shocking views with what she saw as his deep decency."[8] In 1989, Murray and Cox co-authored a book on the Apollo program, Apollo: Race to the Moon.[23] Murray attends, and Cox is a member of, a Quaker meeting in Virginia; they live in Frederick County, Maryland, near Washington, D.C.[24]

Murray has four children, two by each wife.[25] As of 2014, Cox had converted to Quakerism, while Murray considered himself an agnostic.[26]

Research and views

Upon his return to the US, Murray continued research work at AIR, one of the largest private social science research organizations. From 1974 to 1981 he worked at AIR, eventually becoming chief political scientist. While there, Murray supervised evaluations in the fields of urban education, welfare services, daycare, adolescent pregnancy, services for the elderly, and criminal justice.[citation needed]

From 1981 to 1990, he was a fellow with the conservative Manhattan Institute where he wrote Losing Ground, which heavily influenced the welfare reform debate in 1996, and In Pursuit.[citation needed]

He has been a fellow of the American Enterprise Institute since 1990 and was a frequent contributor to The Public Interest, a journal of conservative politics and culture. In March 2009, he received AEI's highest honor, the Irving Kristol Award. He has also received a doctorate honoris causa from Universidad Francisco Marroquín.[27]

Murray has received grants from the conservative Bradley Foundation to support his scholarship, including the writing of The Bell Curve.

Murray identifies as a libertarian;[28] he has also been described as conservative[29][30][31][32] and far-right.[33][34][35][36]

Murray's Law

Murray's law is a set of conclusions derived by Charles Murray in his book Losing Ground: American Social Policy, 1950–1980. Essentially, it states that all social welfare programs are doomed to effect a net harm on society, and actually hurt the very people those programs are trying to help. In the end, he concludes that social welfare programs cannot be successful and should ultimately be eliminated altogether.

Murray's Law:
  1. The Law of Imperfect Selection: Any objective rule that defines eligibility for a social transfer program will irrationally exclude some persons.
  2. The Law of Unintended Rewards: Any social transfer increases the net value of being in the condition that prompted the transfer.
  3. The Law of Net Harm: The less likely it is that the unwanted behavior will change voluntarily, the more likely it is that a program to induce change will cause net harm.

The Bell Curve

External video: Booknotes interview with Murray on The Bell Curve, December 4, 1994, C-SPAN
The Bell Curve: Intelligence and Class Structure in American Life (1994) is a controversial bestseller that Charles Murray wrote with Harvard professor Richard J. Herrnstein. Its central thesis is that intelligence is a better predictor of many factors, including financial income, job performance, unwed pregnancy, and crime, than one's parents' socio-economic status or education level. The book also argued that those with high intelligence (the "cognitive elite") are becoming separated from the general population of those with average and below-average intelligence, and that this was a dangerous social trend. Murray expanded on this theme in his 2012 book Coming Apart.[citation needed]

Of the book's origins, Murray has said,
I got interested in IQ and its relationship to social problems. And by 1989, I had decided I was going to write a book about it, but then Dick Herrnstein, a professor at Harvard who had written on IQ in the past had an article in the Atlantic Monthly which led me to think, "Ah, Herrnstein is already doing this." So I called him up. I had met him before. We'd been friendly. And I said, "If you’re doing a book on this, I'm not going to try to compete with you." And Dick said to me, "No, I'm not." And he paused and he said, "Why don't we do it together?"[37]
Much of the controversy stemmed from Chapters 13 and 14, where the authors write about the enduring differences in race and intelligence and discuss implications of that difference. They write in the introduction to Chapter 13 that "The debate about whether and how much genes and environment have to do with ethnic differences remains unresolved,"[38] and "It seems highly likely to us that both genes and the environment have something to do with racial differences."[39]

The book's title comes from the bell-shaped normal distribution of IQ scores.
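For reference, the "bell" is the density curve of the normal distribution; on the conventional IQ scale (mean 100, standard deviation 15) it takes the form below. This is the standard normal-density formula, given here only as an illustration, not as anything drawn from the book itself.

    f(x) = \frac{1}{15\sqrt{2\pi}} \exp\!\left( -\frac{(x - 100)^2}{2 \cdot 15^2} \right)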

After its publication, various commentators criticized and defended the book. Some critics said it supported scientific racism[40][41][42][43][44][45] and a number of books were written to rebut The Bell Curve. Those works included a 1996 edition of evolutionary biologist Stephen Jay Gould's The Mismeasure of Man; a collection of essays, The Bell Curve Wars (1995), reacting to Murray and Herrnstein's commentary; and The Bell Curve Debate (1995), whose essays similarly respond to issues raised in The Bell Curve. Arthur S. Goldberger and Charles F. Manski critique the empirical methods supporting the book's hypotheses.[46]

Citing assertions made by Murray in The Bell Curve, the Southern Poverty Law Center labeled him a "white nationalist," charging that his ideas were rooted in eugenics.[47][48][49] Murray eventually responded in a point-by-point rebuttal.[50]

In 2000, Murray authored a policy study for AEI on the same subject matter as The Bell Curve in which he wrote:
Try to imagine a GOP presidential candidate saying in front of the cameras, "One reason that we still have poverty in the United States is that a lot of poor people are born lazy." You cannot imagine it because that kind of thing cannot be said. And yet this unimaginable statement merely implies that when we know the complete genetic story, it will turn out that the population below the poverty line in the United States has a configuration of the relevant genetic makeup that is significantly different from the configuration of the population above the poverty line. This is not unimaginable. It is almost certainly true.[51]

Education

Murray has been critical of the No Child Left Behind law, arguing that it "set a goal that was devoid of any contact with reality.... The United States Congress, acting with large bipartisan majorities, at the urging of the President, enacted as the law of the land that all children are to be above average." He sees the law as an example of "Educational romanticism [which] asks too much from students at the bottom of the intellectual pile, asks the wrong things from those in the middle, and asks too little from those at the top."[52]

Challenging "educational romanticism," he wrote Real Education: Four Simple Truths for Bringing America's Schools Back to Reality. His "four simple truths" are as follows:
  1. Ability varies.
  2. Half of all children are below average.
  3. Too many people are going to college.
  4. America's future depends on how we educate the academically gifted.[53]

Human group differences

Murray has attracted controversy for his views on differences between gender and racial groups. In a paper published in 2005 titled "Where Are the Female Einsteins?", Murray stated, among other things, that "no woman has been a significant original thinker in any of the world's great philosophical traditions. In the sciences, the most abstract field is mathematics, where the number of great female mathematicians is approximately two (Emmy Noether definitely, Sonya Kovalevskaya maybe). In the other hard sciences, the contributions of great women have usually been empirical rather than theoretical, with leading cases in point being Henrietta Leavitt, Dorothy Hodgkin, Lise Meitner, Irene Joliot-Curie and Marie Curie herself."[54] Asked about this in 2014, he stated he could only recall one important female philosopher, "and she was not a significant thinker in the estimation of historians of philosophy," adding "So, yeah, I still stick with that. Until somebody gives me evidence to the contrary, I’ll stick with that statement."[55]

In 2007, Murray wrote a back cover blurb for James R. Flynn's book What Is Intelligence?: "This book is a gold mine of pointers to interesting work, much of which was new to me. All of us who wrestle with the extraordinarily difficult questions about intelligence that Flynn discusses are in his debt."[56]

In 2014, a speech that Murray was scheduled to give at Azusa Pacific University was "postponed" because of his research on human group differences.[57] Murray responded to the institution by arguing that it was a disservice to students and faculty to dismiss research because of its controversial nature rather than on the evidence. He also urged the university to consider his works as they are and to reach its own conclusions, rather than relying on sources that "specialize in libeling people."[58][59]

Op-ed writings

Murray has published opinion pieces in The New Republic, Commentary, The Public Interest, The New York Times, The Wall Street Journal, National Review, and The Washington Post. He has been a witness before United States House and Senate committees and a consultant to senior Republican government officials in the United States and other conservative officials in the United Kingdom, Eastern Europe, and the Organization for Economic Co-operation and Development.[60][citation needed]

In the April 2007 issue of Commentary magazine, Murray wrote on the disproportionate representation of Jews in the ranks of outstanding achievers and says that one of the reasons is that they "have been found to have an unusually high mean intelligence as measured by IQ tests since the first Jewish samples were tested." His article concludes with the assertion: "At this point, I take sanctuary in my remaining hypothesis, uniquely parsimonious and happily irrefutable. The Jews are God's chosen people."[61]

In the July/August 2007 issue of The American, a magazine published by the American Enterprise Institute, Murray says he has changed his mind about SAT tests and says they should be scrapped: "Perhaps the SAT had made an important independent contribution to predicting college performance in earlier years, but by the time research was conducted in the last half of the 1990s, the test had already been ruined by political correctness." Murray advocates replacing the traditional SAT with the College Board's subject achievement tests: "The surprising empirical reality is that the SAT is redundant if students are required to take achievement tests."[10]

Incident at Middlebury College

On March 2, 2017, Murray was shouted down at Middlebury College (Middlebury, Vermont) by students and others not connected with the school, and prevented from speaking at the original location on campus. The speech was moved to another location and a closed circuit broadcast showed him being interviewed by professor Allison Stanger. After the interview, there was a violent confrontation between protesters and Murray, Vice President for Communications Bill Burger, and Stanger (who was hospitalized with a neck injury and concussion) as they left the McCullough Student Center. Middlebury students claimed that Middlebury Public Safety officers instigated and escalated violence against nonviolent protesters and that administrator Bill Burger assaulted protesters with a car.[62] Middlebury President Laurie L. Patton responded after the event, saying the school would respond to "the clear violations of Middlebury College policy that occurred inside and outside Wilson Hall."[63][64][65][66] The school took disciplinary action against 67 students for their involvement in the incident.[67][68]

Selected bibliography

In addition to his books, Murray has published articles in Commentary magazine, The New Criterion, The Weekly Standard, The Washington Post, The Wall Street Journal, and The New York Times.[3]

Cartesian coordinate system

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cartesian_coordinate_system ...