Wednesday, December 9, 2020

Mental chronometry

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Mental_chronometry

Mental chronometry is the study of reaction time (RT; also referred to as "response time") in perceptual-motor tasks to infer the content, duration, and temporal sequencing of mental operations. Mental chronometry is one of the core methodological paradigms of human experimental and cognitive psychology, but is also commonly analyzed in psychophysiology, cognitive neuroscience, and behavioral neuroscience to help elucidate the biological mechanisms underlying perception, attention, and decision-making across species.

Mental chronometry uses measurements of elapsed time between sensory stimulus onsets and subsequent behavioral responses. It is considered an index of processing speed and efficiency indicating how fast an individual can execute task-relevant mental operations. Behavioral responses are typically button presses, but eye movements, vocal responses, and other observable behaviors can be used. RT is constrained by the speed of signal transmission in white matter as well as the processing efficiency of neocortical gray matter. Conclusions about information processing drawn from RT are often made with consideration of task experimental design, limitations in measurement technology, and mathematical modeling.

Types

Reaction time ("RT") is the time that elapses between a person being presented with a stimulus and the person initiating a motor response to the stimulus. It is usually on the order of 200 ms. The processes that occur during this brief time enable the brain to perceive the surrounding environment, identify an object of interest, decide an action in response to the object, and issue a motor command to execute the movement. These processes span the domains of perception and movement, and involve perceptual decision making and motor planning.

There are several commonly used paradigms for measuring RT:

  • Simple RT is the time required for an observer to respond to the presence of a stimulus. For example, a subject might be asked to press a button as soon as a light or sound appears (a minimal timing sketch follows this list). Mean RT for college-age individuals is about 160 milliseconds to detect an auditory stimulus, and approximately 190 milliseconds to detect a visual stimulus. The mean RTs for sprinters at the Beijing Olympics were 166 ms for males and 169 ms for females, but in one out of 1,000 starts they can achieve 109 ms and 121 ms, respectively. This study also concluded that longer female RTs can be an artifact of the measurement method used, suggesting that the starting block sensor system might overlook a female false-start due to insufficient pressure on the pads. The authors suggested that compensating for this threshold would improve false-start detection accuracy with female runners.
  • Recognition or go/no-go RT tasks require that the subject press a button when one stimulus type appears and withhold a response when another stimulus type appears. For example, the subject may have to press the button when a green light appears and not respond when a blue light appears.
  • Choice reaction time (CRT) tasks require distinct responses for each possible class of stimulus. For example, the subject might be asked to press one button if a red light appears and a different button if a yellow light appears. The Jensen box is an example of an instrument designed to measure choice RT.
  • Discrimination RT involves comparing pairs of simultaneously presented visual displays and then pressing one of two buttons according to which display appears brighter, longer, heavier, or greater in magnitude on some dimension of interest.
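
A minimal sketch of how a simple RT measurement works in practice is shown below. It is a hypothetical console illustration in Python rather than a calibrated experiment: it waits an unpredictable interval, displays a "go" prompt, and times the keypress with a high-resolution clock (keyboard and terminal latency would add tens of milliseconds in real measurements).

    import random
    import time

    def simple_rt_trial():
        """Run one simple-RT trial: random foreperiod, then time a keypress."""
        input("Press Enter to start the trial, then wait for the signal...")
        time.sleep(random.uniform(1.0, 3.0))   # unpredictable foreperiod
        t0 = time.perf_counter()
        input("*** GO! Press Enter as fast as you can ***")
        return time.perf_counter() - t0

    if __name__ == "__main__":
        rts = [simple_rt_trial() for _ in range(5)]
        print("RTs (ms):", [round(rt * 1000) for rt in rts])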

Due to momentary attentional lapses, there is a considerable amount of variability in an individual's response time, which does not tend to follow a normal (Gaussian) distribution. To control for this, researchers typically require a subject to perform multiple trials, from which a measure of the 'typical' or baseline response time can be calculated. Taking the mean of the raw response time is rarely an effective method of characterizing the typical response time, and alternative approaches (such as modeling the entire response time distribution) are often more appropriate.
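
To illustrate why the raw mean can mislead, the sketch below simulates a positively skewed RT distribution (an ex-Gaussian shape is a common modeling assumption) and compares the mean with two more robust summaries; the parameter values are arbitrary and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated skewed RTs (ms): Gaussian component plus an exponential tail (ex-Gaussian assumption)
    rts = rng.normal(300, 30, size=500) + rng.exponential(100, size=500)

    def trimmed_mean(x, prop=0.1):
        """Mean after dropping the fastest and slowest `prop` of trials."""
        x = np.sort(x)
        k = int(len(x) * prop)
        return x[k:len(x) - k].mean()

    print(f"mean         = {rts.mean():.0f} ms")   # pulled upward by the slow tail
    print(f"median       = {np.median(rts):.0f} ms")
    print(f"trimmed mean = {trimmed_mean(rts):.0f} ms")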

Evolution of methodology

Car rigged with two pistols to measure a driver's reaction time. The pistols fire when the brake pedal is depressed

Galton and differential psychology

Sir Francis Galton is typically credited as the founder of differential psychology, which seeks to determine and explain the mental differences between individuals. He was the first to use rigorous RT tests with the express intention of determining averages and ranges of individual differences in mental and behavioral traits in humans. Galton hypothesized that differences in intelligence would be reflected in variation of sensory discrimination and speed of response to stimuli, and he built various machines to test different measures of this, including RT to visual and auditory stimuli. His tests involved a selection of over 10,000 men, women and children from the London public.

Donders' experiment

The first scientist to measure RT in the laboratory was Franciscus Donders (1869). Donders found that simple RT is shorter than recognition RT, and that choice RT is longer than both.

Donders also devised a subtraction method to analyze the time it took for mental operations to take place. By subtracting simple RT from choice RT, for example, it is possible to estimate how much additional time the stimulus-discrimination and response-selection stages require.

This method provides a way to investigate the cognitive processes underlying simple perceptual-motor tasks, and formed the basis of subsequent developments.
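
As a worked illustration of the subtraction logic, the sketch below uses invented mean RTs (not Donders' data) to estimate the duration of the inserted stages; such estimates are only meaningful under the pure-insertion assumption discussed below.

    # Hypothetical mean RTs in milliseconds (illustrative values only)
    simple_rt      = 220.0   # detect a stimulus, single known response
    recognition_rt = 290.0   # detect + discriminate the stimulus (go/no-go)
    choice_rt      = 350.0   # detect + discriminate + select a response

    stimulus_discrimination = recognition_rt - simple_rt     # estimated discrimination stage
    response_selection      = choice_rt - recognition_rt     # estimated response-selection stage

    print(f"discrimination stage:     {stimulus_discrimination:.0f} ms")
    print(f"response-selection stage: {response_selection:.0f} ms")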

Although Donders' work paved the way for future research in mental chronometry tests, it was not without its drawbacks. His subtraction method rested on an assumption often referred to as "pure insertion": that inserting a particular complicating requirement into an RT paradigm would not affect the other components of the test. This assumption, that the incremental effect on RT was strictly additive, did not hold up to later experimental tests, which showed that the insertions were able to interact with other portions of the RT paradigm. Despite this, Donders' theories are still of interest and his ideas are still used in certain areas of psychology, which now have the statistical tools to use them more accurately.

Hick's law

W. E. Hick (1952) devised a CRT experiment which presented a series of nine tests in which there are n equally possible choices. The experiment measured the subject's RT based on the number of possible choices during any given trial. Hick showed that the individual's RT increased by a constant amount as a function of available choices, or the "uncertainty" involved in which reaction stimulus would appear next. Uncertainty is measured in "bits", which are defined as the quantity of information that reduces uncertainty by half in information theory. In Hick's experiment, the RT is found to be a function of the binary logarithm of the number of available choices (n). This phenomenon is called "Hick's law" and is said to be a measure of the "rate of gain of information". The law is usually expressed by the formula RT = a + b log2(n), where a and b are constants representing the intercept and slope of the function, and n is the number of alternatives. The Jensen Box is a more recent application of Hick's law. Hick's law has interesting modern applications in marketing, where restaurant menus and web interfaces (among other things) take advantage of its principles in striving to achieve speed and ease of use for the consumer.
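
A short sketch of the law as stated above: predicted RT grows linearly with the binary logarithm of the number of alternatives. The intercept and slope used here are arbitrary placeholder constants, not empirically fitted values.

    import math

    def hicks_law_rt(n_alternatives, a=200.0, b=150.0):
        """Predicted RT (ms) = a + b * log2(n); a and b are illustrative constants."""
        return a + b * math.log2(n_alternatives)

    for n in (1, 2, 4, 8):
        print(f"{n} alternatives -> predicted RT {hicks_law_rt(n):.0f} ms")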

Sternberg's memory-scanning task

Saul Sternberg (1966) devised an experiment wherein subjects were told to remember a set of unique digits in short-term memory. Subjects were then given a probe stimulus in the form of a digit from 0–9. The subject then answered as quickly as possible whether the probe was in the previous set of digits or not. The size of the initial set of digits determined the RT of the subject. The idea is that as the size of the set of digits increases the number of processes that need to be completed before a decision can be made increases as well. So if the subject has 4 items in short-term memory (STM), then after encoding the information from the probe stimulus the subject needs to compare the probe to each of the 4 items in memory and then make a decision. If there were only 2 items in the initial set of digits, then only 2 processes would be needed. The data from this study found that for each additional item added to the set of digits, about 38 milliseconds were added to the response time of the subject. This supported the idea that a subject did a serial exhaustive search through memory rather than a serial self-terminating search. Sternberg (1969) developed a much-improved method for dividing RT into successive or serial stages, called the additive factor method.
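
The serial exhaustive search account implies a straight-line relation between memory set size and mean RT, with a slope of roughly 38 ms per item. The sketch below simply evaluates that linear model with an assumed intercept; it is not Sternberg's original analysis.

    def sternberg_rt(set_size, intercept=400.0, slope_per_item=38.0):
        """Predicted mean RT (ms) for a serial exhaustive scan of `set_size` items.
        The intercept (encoding + decision + response time) is an assumed value."""
        return intercept + slope_per_item * set_size

    for k in (1, 2, 4, 6):
        print(f"set size {k}: predicted RT {sternberg_rt(k):.0f} ms")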

Shepard and Metzler's mental rotation task

Shepard and Metzler (1971) presented a pair of three-dimensional shapes that were identical or mirror-image versions of one another. RT to determine whether they were identical or not was a linear function of the angular difference between their orientations, whether in the picture plane or in depth. They concluded that the observers performed a constant-rate mental rotation to align the two objects so they could be compared. Cooper and Shepard (1973) presented a letter or digit that was either normal or mirror-reversed, and presented either upright or at angles of rotation in units of 60 degrees. The subject had to identify whether the stimulus was normal or mirror-reversed. Response time increased roughly linearly as the orientation of the letter deviated from upright (0 degrees) to inverted (180 degrees), and then decreased again as the orientation approached 360 degrees. The authors concluded that the subjects mentally rotate the image the shortest distance to upright, and then judge whether it is normal or mirror-reversed.
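
Cooper and Shepard's pattern can be captured by a small model in which the effective rotation is the shortest angular distance to upright (so 300 degrees behaves like 60 degrees) and RT grows roughly linearly with that distance. The intercept and rate below are illustrative assumptions, not fitted values.

    def mental_rotation_rt(orientation_deg, intercept=500.0, ms_per_degree=3.0):
        """Predicted RT (ms) assuming rotation through the shortest distance to upright."""
        angle = orientation_deg % 360
        shortest = min(angle, 360 - angle)
        return intercept + ms_per_degree * shortest

    for angle in (0, 60, 120, 180, 240, 300):
        print(f"{angle:3d} deg: predicted RT {mental_rotation_rt(angle):.0f} ms")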

Sentence-picture verification

Mental chronometry has been used in identifying some of the processes associated with understanding a sentence. This type of research typically revolves around the differences in processing 4 types of sentences: true affirmative (TA), false affirmative (FA), false negative (FN), and true negative (TN). A picture can be presented with an associated sentence that falls into one of these 4 categories. The subject then decides if the sentence matches the picture or does not. The type of sentence determines how many processes need to be performed before a decision can be made. According to the data from Clark and Chase (1972) and Just and Carpenter (1971), the TA sentences are the simplest and take the least time, followed by FA, FN, and TN sentences.

Models of memory

Hierarchical network models of memory were largely discarded due to some findings related to mental chronometry. The TLC model proposed by Collins and Quillian (1969) had a hierarchical structure indicating that recall speed in memory should be based on the number of levels in memory traversed in order to find the necessary information. But the experimental results did not agree. For example, a subject will reliably answer that a robin is a bird more quickly than he will answer that an ostrich is a bird despite these questions accessing the same two levels in memory. This led to the development of spreading activation models of memory (e.g., Collins & Loftus, 1975), wherein links in memory are not organized hierarchically but by importance instead.

Posner's letter matching studies

Michael Posner (1978) used a series of letter-matching studies to measure the mental processing time of several tasks associated with recognition of a pair of letters. The simplest task was the physical match task, in which subjects were shown a pair of letters and had to identify whether the two letters were physically identical or not. The next task was the name match task, where subjects had to identify whether two letters had the same name. The task involving the most cognitive processes was the rule match task, in which subjects had to determine whether the two letters presented were both vowels or not.

The physical match task was the simplest; subjects had to encode the letters, compare them to each other, and make a decision. When doing the name match task subjects were forced to add a cognitive step before making a decision: they had to search memory for the names of the letters, and then compare those before deciding. In the rule-based task they also had to categorize the letters as either vowels or consonants before making their choice. The time taken to perform the rule match task was longer than that for the name match task, which in turn was longer than that for the physical match task. Using the subtraction method, experimenters were able to determine the approximate amount of time that it took for subjects to perform each of the cognitive processes associated with each of these tasks.

Predictive validity

Cognitive development

There is extensive recent research using mental chronometry for the study of cognitive development. Specifically, various measures of speed of processing were used to examine changes in the speed of information processing as a function of age. Kail (1991) showed that speed of processing increases exponentially from early childhood to early adulthood. Studies of RTs in young children of various ages are consistent with common observations of children engaged in activities not typically associated with chronometry. This includes speed of counting, reaching for things, repeating words, and other vocal and motor skills that develop quickly in growing children. Once early maturity is reached, there is a long period of stability until speed of processing begins declining from middle age to senility (Salthouse, 2000). In fact, cognitive slowing is considered a good index of broader changes in the functioning of the brain and intelligence. Demetriou and colleagues, using various methods of measuring speed of processing, showed that it is closely associated with changes in working memory and thought (Demetriou, Mouyi, & Spanoudis, 2009). These relations are extensively discussed in the neo-Piagetian theories of cognitive development.
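
Kail's exponential developmental trend can be illustrated with a toy curve in which RT falls exponentially toward an adult asymptote (equivalently, processing speed rises toward an adult level). The parameter values below are invented for illustration and are not Kail's fitted estimates.

    import math

    def developmental_rt(age_years, adult_rt=350.0, extra_at_birth=900.0, decay_rate=0.25):
        """Hypothetical mean RT (ms) declining exponentially toward an adult asymptote."""
        return adult_rt + extra_at_birth * math.exp(-decay_rate * age_years)

    for age in (4, 8, 12, 16, 20):
        print(f"age {age:2d}: predicted RT {developmental_rt(age):.0f} ms")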

During senescence, RT deteriorates (as does fluid intelligence), and this deterioration is systematically associated with changes in many other cognitive processes, such as executive functions, working memory, and inferential processes. In the theory of Andreas Demetriou, one of the neo-Piagetian theories of cognitive development, change in speed of processing with age, as indicated by decreasing RT, is one of the pivotal factors of cognitive development.

Cognitive ability

Researchers have reported medium-sized correlations between RT and measures of intelligence, with a tendency for individuals with higher IQ to be faster on RT tests.

Research into this link between mental speed and general intelligence (perhaps first proposed by Charles Spearman) was re-popularised by Arthur Jensen, and the "choice reaction apparatus" associated with his name became a common standard tool in RT-IQ research.

The strength of the RT-IQ association is a subject of research. Several studies have reported associations between simple RT and intelligence of around r = −.31, with a tendency for larger associations between choice RT and intelligence (around r = −.49). Much of the theoretical interest in RT was driven by Hick's law, relating the slope of RT increases to the complexity of decision required (measured in units of uncertainty popularized by Claude Shannon as the basis of information theory). This promised to link intelligence directly to the resolution of information even in very basic information tasks. There is some support for a link between the slope of the RT curve and intelligence, as long as reaction time is tightly controlled.

Standard deviations of RTs have been found to be more strongly correlated with measures of general intelligence (g) than mean RTs. The RTs of low-g individuals are more spread-out than those of high-g individuals.
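
One way such analyses are set up: given trial-level RTs for a sample of subjects plus a separate ability score, compute each subject's mean RT and RT standard deviation and correlate both with the score. The sketch below does this on simulated placeholder data (which builds in the assumed pattern), so the resulting correlations illustrate the procedure rather than any empirical finding.

    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_trials = 50, 100

    # Simulated data: higher "g" is assumed (for illustration) to mean slightly faster and less variable RTs
    g = rng.normal(100, 15, n_subjects)
    true_mean = 600 - 1.5 * (g - 100) + rng.normal(0, 40, n_subjects)
    true_sd   = np.abs(120 - 2.0 * (g - 100) + rng.normal(0, 20, n_subjects))
    trials    = rng.normal(true_mean[:, None], true_sd[:, None], (n_subjects, n_trials))

    subj_mean = trials.mean(axis=1)
    subj_sd   = trials.std(axis=1)

    print("r(g, mean RT)  =", round(np.corrcoef(g, subj_mean)[0, 1], 2))
    print("r(g, SD of RT) =", round(np.corrcoef(g, subj_sd)[0, 1], 2))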

The cause of the relationship is unclear. It may reflect more efficient information processing, better attentional control, or the integrity of neuronal processes.

Health and mortality

Performance on simple and choice reaction time tasks is associated with a variety of health-related outcomes, including general, objective health composites as well as specific measures like cardio-respiratory integrity. The association between IQ and earlier all-cause mortality has been found to be chiefly mediated by a measure of reaction time. These studies generally find that faster and more accurate responses to reaction time tasks are associated with better health outcomes and longer lifespan.

Drift-diffusion model

Graphical representation of drift-diffusion rate used to model reaction times in two-choice tasks.

The drift-diffusion model (DDM) is a well-defined mathematical formulation to explain observed variance in response times and accuracy across trials in a (typically two-choice) reaction time task. This model and its variants account for these distributional features by partitioning a reaction time trial into a non-decision residual stage and a stochastic "diffusion" stage, where the actual response decision is generated. The distribution of reaction times across trials is determined by the rate at which evidence accumulates in neurons with an underlying "random walk" component. The drift rate (v) is the average rate at which this evidence accumulates in the presence of this random noise. The decision threshold (a) represents the width of the decision boundary, or the amount of evidence needed before a response is made. The trial terminates when the accumulating evidence reaches either the correct or the incorrect boundary.
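
The description above can be made concrete with a bare-bones simulation: each trial adds a non-decision time to a noisy accumulation process (drift rate v plus Gaussian noise) that runs until it crosses the upper boundary a or the lower boundary at 0. The parameter values are illustrative assumptions, and this sketch omits refinements found in full DDM implementations (e.g., across-trial variability in drift and starting point).

    import random

    def ddm_trial(v=0.3, a=1.0, z=0.5, t_nondecision=0.3, dt=0.001, noise_sd=1.0):
        """Simulate one two-choice trial; returns (choice, RT in seconds).
        v = drift rate, a = boundary separation, z = starting point as a fraction of a."""
        evidence = z * a
        t = 0.0
        while 0.0 < evidence < a:
            evidence += v * dt + noise_sd * random.gauss(0.0, dt ** 0.5)
            t += dt
        choice = "upper" if evidence >= a else "lower"
        return choice, t_nondecision + t

    trials = [ddm_trial() for _ in range(1000)]
    rts = [rt for _, rt in trials]
    upper_rate = sum(choice == "upper" for choice, _ in trials) / len(trials)
    print(f"mean RT = {sum(rts) / len(rts):.3f} s, proportion of 'upper' responses = {upper_rate:.2f}")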

Application in biological psychology/cognitive neuroscience

Regions of the Brain Involved in a Number Comparison Task Derived from EEG and fMRI Studies. The regions represented correspond to those showing effects of notation used for the numbers (pink and hatched), distance from the test number (orange), choice of hand (red), and errors (purple). Picture from the article: 'Timing the Brain: Mental Chronometry as a Tool in Neuroscience'.

With the advent of the functional neuroimaging techniques of PET and fMRI, psychologists started to modify their mental chronometry paradigms for functional imaging. Although psycho(physio)logists have been using electroencephalographic measurements for decades, the images obtained with PET have attracted great interest from other branches of neuroscience, popularizing mental chronometry among a wider range of scientists in recent years. In this setting, mental chronometry is used by having subjects perform RT-based tasks while neuroimaging reveals the parts of the brain that are involved in the underlying cognitive processes.

With the invention of functional magnetic resonance imaging (fMRI), techniques were used to measure activity through electrical event-related potentials in a study in which subjects were asked to identify whether a presented digit was above or below five. According to Sternberg's additive theory, the stages involved in performing this task include encoding the digit, comparing it against the stored representation of five, selecting a response, and then checking for errors in the response. The fMRI image shows the specific locations where these stages occur in the brain while this simple mental chronometry task is performed.

In the 1980s, neuroimaging experiments allowed researchers to detect the activity in localized brain areas by injecting radionuclides and using positron emission tomography (PET) to detect them. fMRI has also been used to detect the precise brain areas that are active during mental chronometry tasks. Many studies have shown that a small number of widely distributed brain areas are involved in performing these cognitive tasks.

Current medical reviews indicate that signaling through the dopamine pathways originating in the ventral tegmental area is strongly positively correlated with improved (shortened) RT; e.g., dopaminergic pharmaceuticals like amphetamine have been shown to expedite responses during interval timing, while dopamine antagonists (specifically, for D2-type receptors) produce the opposite effect. Similarly, age-related loss of dopamine from the striatum, as measured by SPECT imaging of the dopamine transporter, strongly correlates with slowed RT.

 

Polymath

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Polymath

A polymath (Greek: πολυμαθής, polymathēs, "having learned much"; Latin: homo universalis, "universal man") is an individual whose knowledge spans a significant number of subjects. The earliest recorded use of the term in English is from 1624, in the second edition of The Anatomy of Melancholy by Robert Burton; the form polymathist is slightly older, first appearing in the Diatribae upon the first part of the late History of Tithes of Richard Montagu in 1621. Use in English of the similar term polyhistor dates from the late sixteenth century.

In Western Europe, the first work to use polymathy in its title (De Polymathia tractatio: integri operis de studiis veterum) was published in 1603 by Johann von Wowern, a Hamburg philosopher. Von Wowern defined polymathy as "knowledge of various matters, drawn from all kinds of studies [...] ranging freely through all the fields of the disciplines, as far as the human mind, with unwearied industry, is able to pursue them". Von Wowern lists erudition, literature, philology, philomathy and polyhistory as synonyms.

Polymaths include the great scholars and thinkers of the Islamic Golden Age, the period of Renaissance and the Enlightenment, who excelled at several fields in science, technology, engineering, mathematics, and the arts. In the Italian Renaissance, the idea of the polymath was expressed by Leon Battista Alberti (1404–1472) in the statement that "a man can do all things if he will".

Embodying a basic tenet of Renaissance humanism that humans are limitless in their capacity for development, the concept led to the notion that people should embrace all knowledge and develop their capacities as fully as possible. This is expressed in the term Renaissance man, often applied to the gifted people of that age who sought to develop their abilities in all areas of accomplishment: intellectual, artistic, social, physical, and spiritual.

Renaissance man

"Renaissance man" was first recorded in written English in the early 20th century. It is now used to refer to great thinkers living before, during, or after the Renaissance. Leonardo da Vinci has often been described as the archetype of the Renaissance man, a man of "unquenchable curiosity" and "feverishly inventive imagination". Many notable polymaths lived during the Renaissance period, a cultural movement that spanned roughly the 14th through to the 17th century that began in Italy in the Late Middle Ages and later spread to the rest of Europe. These polymaths had a rounded approach to education that reflected the ideals of the humanists of the time. A gentleman or courtier of that era was expected to speak several languages, play a musical instrument, write poetry and so on, thus fulfilling the Renaissance ideal.

The idea of a universal education was essential to achieving polymath ability, hence the word university was used to describe a seat of learning. At this time, universities did not specialize in specific areas, but rather trained students in a broad array of science, philosophy and theology. This universal education gave them a grounding from which they could continue into apprenticeship toward becoming a master of a specific field.

When someone is called a "Renaissance man" today, it is meant that rather than simply having broad interests or superficial knowledge in several fields, the individual possesses a more profound knowledge and a proficiency, or even an expertise, in at least some of those fields.

Some dictionaries use the term "Renaissance man" to describe someone with many interests or talents, while others give a meaning restricted to the Renaissance and more closely related to Renaissance ideals.

In academia

Robert Root-Bernstein and colleagues

Robert Root-Bernstein is considered the person principally responsible for rekindling interest in polymathy within the scientific community. He is a professor of physiology at Michigan State University and has been awarded the MacArthur Fellowship. He and colleagues, especially Michèle Root-Bernstein, authored many important works spearheading the modern field of polymathy studies.

Root-Bernstein's works emphasize the contrast between the polymath and two other types: the specialist and the dilettante. The specialist demonstrates depth but lacks breadth of knowledge. The dilettante demonstrates superficial breadth but tends to acquire skills merely "for their own sake without regard to understanding the broader applications or implications and without integrating it" (R. Root-Bernstein, 2009, p. 857). Conversely, the polymath is a person with a level of expertise who is able to "put a significant amount of time and effort into their avocations and find ways to use their multiple interests to inform their vocations" (R. Root-Bernstein, 2009, p. 857).

A key point in the work of Root-Bernstein and colleagues is the argument in favor of the universality of the creative process. That is, although creative products, such as a painting, a mathematical model or a poem, can be domain-specific, at the level of the creative process, the mental tools that lead to the generation of creative ideas are the same, be it in the arts or science. These mental tools are sometimes called intuitive tools of thinking. It is therefore not surprising that many of the most innovative scientists have serious hobbies or interests in artistic activities, and that some of the most innovative artists have an interest or hobbies in the sciences.

Root-Bernstein and colleagues' research is an important counterpoint to the claim by some psychologists that creativity is a domain-specific phenomenon. Through their research, Root-Bernstein and colleagues conclude that there are certain comprehensive thinking skills and tools that cross the barrier of different domains and can foster creative thinking: "[creativity researchers] who discuss integrating ideas from diverse fields as the basis of creative giftedness ask not 'who is creative?' but 'what is the basis of creative thinking?' From the polymathy perspective, giftedness is the ability to combine disparate (or even apparently contradictory) ideas, sets of problems, skills, talents, and knowledge in novel and useful ways. Polymathy is therefore the main source of any individual's creative potential" (R. Root-Bernstein, 2009, p. 854). In "Life Stages of Creativity", Robert and Michèle Root-Bernstein suggest six typologies of creative life stages. These typologies are based on real creative production records first published by Root-Bernstein, Bernstein, and Garnier (1993).

  • Type 1 represents people who specialize in developing one major talent early in life (e.g., prodigies) and successfully exploit that talent exclusively for the rest of their lives.
  • Type 2 individuals explore a range of different creative activities (e.g., through worldplay or a variety of hobbies) and then settle on exploiting one of these for the rest of their lives.
  • Type 3 people are polymathic from the outset and manage to juggle multiple careers simultaneously so that their creativity pattern is constantly varied.
  • Type 4 creators are recognized early for one major talent (e.g., math or music) but go on to explore additional creative outlets, diversifying their productivity with age.
  • Type 5 creators devote themselves serially to one creative field after another.
  • Type 6 people develop diversified creative skills early and then, like Type 5 individuals, explore these serially, one at a time.

Finally, his studies suggest that understanding polymathy and learning from polymathic exemplars can help structure a new model of education that better promotes creativity and innovation: "we must focus education on principles, methods, and skills that will serve them [students] in learning and creating across many disciplines, multiple careers, and succeeding life stages" (R. Root-Bernstein & M. Root-Bernstein, 2017, p. 161).

Peter Burke

Peter Burke, Professor Emeritus of Cultural History and Fellow of Emmanuel College at Cambridge, discussed the theme of polymathy in some of his works. He has presented a comprehensive historical overview of the ascension and decline of the polymath as what he calls an "intellectual species" (see Burke, 2010, 2012, 2020).

He observes that in ancient and medieval times, scholars did not have to specialize. However, from the 17th century on, the rapid rise of new knowledge in the Western world—both from the systematic investigation of the natural world and from the flow of information coming from other parts of the world—was making it increasingly difficult for individual scholars to master as many disciplines as before. Thus, an intellectual retreat of the polymath species occurred: "from knowledge in every [academic] field to knowledge in several fields, and from making original contributions in many fields to a more passive consumption of what has been contributed by others" (Burke, 2010, p. 72).

Given this change in the intellectual climate, it has since then been more common to find "passive polymaths", who consume knowledge in various domains but make their reputation in one single discipline, than "proper polymaths", who—through a feat of "intellectual heroism"—manage to make serious contributions to several disciplines.

However, Burke warns that in the age of specialization, polymathic people are more necessary than ever, both for synthesis—to paint the big picture—and for analysis. He says: "It takes a polymath to 'mind the gap' and draw attention to the knowledges that may otherwise disappear into the spaces between disciplines, as they are currently defined and organized" (Burke, 2012, p. 183).

Finally, he suggests that governments and universities should nurture a habitat in which this "endangered species" can survive, offering students and scholars the possibility of interdisciplinary work.

Kaufman, Beghetto and colleagues

James C. Kaufman, from the Neag School of Education at the University of Connecticut, and Ronald A. Beghetto, from the same university, investigated the possibility that everyone could have the potential for polymathy as well as the issue of the domain-generality or domain-specificity of creativity.

Based on their earlier four-c model of creativity, Beghetto and Kaufman proposed a typology of polymathy, ranging from the ubiquitous mini-c polymathy to the eminent but rare Big-C polymathy, as well as a model with some requirements for a person (polymath or not) to be able to reach the highest levels of creative accomplishment. They account for three general requirements—intelligence, motivation to be creative and an environment that allows creative expression—that are needed for any attempt at creativity to succeed. Then, depending on the domain of choice, more specific abilities will be required. The more that one's abilities and interests match the requirements of a domain, the better. While some will develop their specific skills and motivations for specific domains, polymathic people will display intrinsic motivation (and the ability) to pursue a variety of subject matters across different domains.

Regarding the interplay of polymathy and education, they suggest that rather than asking whether every student has multicreative potential, educators might more actively nurture the multicreative potential of their students. As an example, the authors suggest that teachers should encourage students to make connections across disciplines and to use different forms of media to express their reasoning and understanding (e.g., drawings, movies, and other forms of visual media).

Bharath Sriraman

Bharath Sriraman, of the University of Montana, also investigated the role of polymathy in education. He posits that an ideal education should nurture talent in the classroom and enable individuals to pursue multiple fields of research and appreciate both the aesthetic and structural/scientific connections between mathematics, arts and the sciences.

In 2009, Sriraman published a paper reporting a 3-year study with 120 pre-service mathematics teachers and derived several implications for mathematics pre-service education as well as interdisciplinary education. He utilized a hermeneutic-phenomenological approach to recreate the emotions, voices and struggles of students as they tried to unravel Russell's paradox presented in its linguistic form. He found that those more engaged in solving the paradox also displayed more polymathic thinking traits. He concludes by suggesting that fostering polymathy in the classroom may help students change beliefs, discover structures and open new avenues for interdisciplinary pedagogy.

Michael Araki

The Developmental Model of Polymathy (DMP)

Michael Araki is a professor at Universidade Federal Fluminense in Brazil. He sought to formalize in a general model how the development of polymathy takes place. His Developmental Model of Polymathy (DMP) is presented in a 2018 article with two main objectives: (i) organize the elements involved in the process of polymathy development into a structure of relationships that is wed to the approach of polymathy as a life project, and (ii) provide an articulation with other well-developed constructs, theories and models, especially from the fields of giftedness and education. The model, which was designed to reflect a structural model, has five major components: (1) polymathic antecedents, (2) polymathic mediators, (3) polymathic achievements, (4) intrapersonal moderators, and (5) environmental moderators.

Regarding the definition of the term polymathy, the researcher, through an analysis of the extant literature, concluded that although there are a multitude of perspectives on polymathy, most of them agree that polymathy entails three core elements: breadth, depth and integration.

Breadth refers to comprehensiveness, extension and diversity of knowledge. It is contrasted with the idea of narrowness, specialization, and the restriction of one's expertise to a limited domain. The possession of comprehensive knowledge in very disparate areas is a hallmark of the greatest polymaths.

Depth refers to the vertical accumulation of knowledge and the degree of elaboration or sophistication of one's conceptual networks. Like Robert Root-Bernstein, Araki uses the concept of dilettantism as a contrast to the idea of profound learning that polymathy entails.

Integration, although not explicit in most definitions of polymathy, is also a core component of polymathy according to the author. Integration involves the capacity of connecting, articulating, concatenating or synthesizing different conceptual networks, which in non-polymathic persons might be segregated. In addition, integration can happen at the personality level, when the person is able to integrate his or her diverse activities in a synergic whole, which can also mean a psychic (motivational, emotional and cognitive) integration.

Finally, the author also suggests that, via a psychoeconomic approach, polymathy can be seen as a "life project". That is, depending on a person's temperament, endowments, personality, social situation and opportunities (or lack thereof), the project of a polymathic self-formation may present itself to the person as more or less alluring and more or less feasible to be pursued.

Related terms

Aside from "Renaissance man" as mentioned above, similar terms in use are homo universalis (Latin) and uomo universale (Italian), which translate to "universal man". The related term "generalist"—contrasted with a "specialist"—is used to describe a person with a general approach to knowledge.

The term "universal genius" or "versatile genius" is also used, with Leonardo da Vinci as the prime example again. The term is used especially for people who made lasting contributions in at least one of the fields in which they were actively involved and when they took a universality of approach.

When a person is described as having encyclopedic knowledge, they exhibit a vast scope of knowledge. However, this designation may be anachronistic in the case of persons such as Eratosthenes, whose reputation for having encyclopedic knowledge predates the existence of any encyclopedic object.

 

Genius

From Wikipedia, the free encyclopedia

A genius is a person who displays exceptional intellectual ability, creative productivity, universality in genres or originality, typically to a degree that is associated with the achievement of new advances in a domain of knowledge. Despite the presence of scholars in many subjects throughout history, many geniuses have shown high achievements in only a single kind of activity.

There is no scientifically precise definition of a genius. Sometimes genius is associated with talent, but several authors such as Cesare Lombroso and Arthur Schopenhauer systematically distinguish these terms.

Etymology

Srinivasa Ramanujan, mathematician who is widely regarded as a genius. He made substantial contributions to mathematics despite little formal training.
 
Confucius, one of the most influential thinkers of the ancient world and the most famous Chinese philosopher, is often considered a genius.

In ancient Rome, the genius (plural in Latin genii) was the guiding spirit or tutelary deity of a person, family (gens), or place (genius loci). The noun is related to the Latin verbs "gignere" (to beget, to give birth to) and "generare" (to beget, to generate, to procreate), and derives directly from the Indo-European stem thereof: "ǵenh" (to produce, to beget, to give birth). Because the achievements of exceptional individuals seemed to indicate the presence of a particularly powerful genius, by the time of Augustus, the word began to acquire its secondary meaning of "inspiration, talent". The term genius acquired its modern sense in the eighteenth century, and is a conflation of two Latin terms: genius, as above, and Ingenium, a related noun referring to our innate dispositions, talents, and inborn nature. Beginning to blend the concepts of the divine and the talented, the Encyclopédie article on genius (génie) describes such a person as "he whose soul is more expansive and struck by the feelings of all others; interested by all that is in nature never to receive an idea unless it evokes a feeling; everything excites him and on which nothing is lost."

Historical development

Galton

Miguel de Cervantes, novelist who is acknowledged as a literary genius
 
Bobby Fischer, considered a chess genius

The assessment of intelligence was initiated by Francis Galton (1822–1911) and James McKeen Cattell. They had advocated the analysis of reaction time and sensory acuity as measures of "neurophysiological efficiency" and the analysis of sensory acuity as a measure of intelligence.

Galton is regarded as the founder of psychometry. He studied the work of his older half-cousin Charles Darwin on biological evolution. Hypothesizing that eminence is inherited from ancestors, Galton did a study of families of eminent people in Britain, publishing it in 1869 as Hereditary Genius. Galton's ideas were elaborated from the work of two early 19th-century pioneers in statistics: Carl Friedrich Gauss and Adolphe Quetelet. Gauss discovered the normal distribution (bell-shaped curve): given a large number of measurements of the same variable under the same conditions, they vary at random around a most frequent value, the "average", with the least frequent values lying at the greatest distances above and below it. Quetelet discovered that the bell-shaped curve applied to social statistics gathered by the French government in the course of its normal processes on large numbers of people passing through the courts and the military. His initial work in criminology led him to observe "the greater the number of individuals observed the more do peculiarities become effaced...". This ideal from which the peculiarities were effaced became "the average man".

Galton was inspired by Quetelet to define the average man as "an entire normal scheme"; that is, if one combines the normal curves of every measurable human characteristic, one will, in theory, perceive a syndrome straddled by "the average man" and flanked by persons that are different. In contrast to Quetelet, Galton's average man was not statistical but was theoretical only. There was no measure of general averageness, only a large number of very specific averages. Setting out to discover a general measure of the average, Galton looked at educational statistics and found bell-curves in test results of all sorts; initially in mathematics grades for the final honors examination and in entrance examination scores for Sandhurst.

Galton's method in Hereditary Genius was to count and assess the eminent relatives of eminent men. He found that the number of eminent relatives was greater with a closer degree of kinship. This work is considered the first example of historiometry, an analytical study of historical human progress. The work is controversial and has been criticized for several reasons. Galton then departed from Gauss in a way that became crucial to the history of the 20th century AD. The bell-shaped curve was not random, he concluded. The differences between the average and the upper end were due to a non-random factor, "natural ability", which he defined as "those qualities of intellect and disposition, which urge and qualify men to perform acts that lead to reputation…a nature which, when left to itself, will, urged by an inherent stimulus, climb the path that leads to eminence." The apparent randomness of the scores was due to the randomness of this natural ability in the population as a whole, in theory.

Criticisms include that Galton's study fails to account for the impact of social status and the associated availability of resources in the form of economic inheritance, meaning that inherited "eminence" or "genius" can be gained through the enriched environment provided by wealthy families. Galton went on to develop the field of eugenics. Galton attempted to control for economic inheritance by comparing the adopted nephews of popes, who would have the advantage of wealth without being as closely related to popes as sons are to their fathers, to the biological children of eminent individuals. 

Psychology

Stanley Kubrick, deemed a filmmaking genius
 
Marie Curie, physicist and chemist cited as a genius

Genius is expressed in a variety of forms (e.g., mathematical, literary, musical performance). Persons with genius tend to have strong intuitions about their domains, and they build on these insights with tremendous energy. Carl Rogers, a founder of the Humanistic Approach to Psychology, expands on the idea of a genius trusting his or her intuition in a given field, writing: "El Greco, for example, must have realized as he looked at some of his early work, that 'good artists do not paint like that.' But somehow he trusted his own experiencing of life, the process of himself, sufficiently that he could go on expressing his own unique perceptions. It was as though he could say, 'Good artists don't paint like this, but I paint like this.' Or to move to another field, Ernest Hemingway was surely aware that 'good writers do not write like this.' But fortunately he moved toward being Hemingway, being himself, rather than toward someone else's conception of a good writer."

A number of people commonly regarded as geniuses have been or were diagnosed with mental disorders, for example Vincent van Gogh, Virginia Woolf, John Forbes Nash Jr., and Ernest Hemingway.

It has been suggested that there exists a connection between mental illness, in particular schizophrenia and bipolar disorder, and genius. Individuals with bipolar disorder and schizotypal personality disorder, the latter of which is more common amongst relatives of schizophrenics, tend to show elevated creativity.

In a 2010 study done at the Karolinska Institute it was observed that highly creative individuals and schizophrenics have a lower density of thalamic dopamine D2 receptors. One of the investigators explained that "Fewer D2 receptors in the thalamus probably means a lower degree of signal filtering, and thus a higher flow of information from the thalamus." This could be a possible mechanism behind the ability of healthy, highly creative people to see numerous uncommon connections in a problem-solving situation and the bizarre associations found in schizophrenics.

IQ and genius

Albert Einstein, theoretical physicist who is considered a genius

Galton was a pioneer in investigating both eminent human achievement and mental testing. In his book Hereditary Genius, written before the development of IQ testing, he proposed that hereditary influences on eminent achievement are strong, and that eminence is rare in the general population. Lewis Terman chose "'near' genius or genius" as the classification label for the highest classification on his 1916 version of the Stanford–Binet test. By 1926, Terman began publishing about a longitudinal study of California schoolchildren who were referred for IQ testing by their schoolteachers, called Genetic Studies of Genius, which he conducted for the rest of his life. Catherine M. Cox, a colleague of Terman's, wrote a whole book, The Early Mental Traits of 300 Geniuses, published as volume 2 of The Genetic Studies of Genius book series, in which she analyzed biographical data about historic geniuses. Although her estimates of childhood IQ scores of historical figures who never took IQ tests have been criticized on methodological grounds, Cox's study was thorough in finding out what else matters besides IQ in becoming a genius. By the 1937 second revision of the Stanford–Binet test, Terman no longer used the term "genius" as an IQ classification, nor has any subsequent IQ test. In 1939, David Wechsler specifically commented that "we are rather hesitant about calling a person a genius on the basis of a single intelligence test score".

The Terman longitudinal study in California eventually provided historical evidence regarding how genius is related to IQ scores. Many California pupils were recommended for the study by schoolteachers. Two pupils who were tested but rejected for inclusion in the study (because their IQ scores were too low) grew up to be Nobel Prize winners in physics: William Shockley and Luis Walter Alvarez. Based on the historical findings of the Terman study and on biographical examples such as Richard Feynman, who had a self-reported IQ of 125 and went on to win the Nobel Prize in physics and become widely known as a genius, the current view of psychologists and other scholars of genius is that a minimum level of IQ (approximately 125) is necessary for genius but not sufficient, and must be combined with personality characteristics such as drive and persistence, plus the necessary opportunities for talent development. For instance, in a chapter in an edited volume on achievement, IQ researcher Arthur Jensen proposed a multiplicative model of genius consisting of high ability, high productivity, and high creativity. Jensen's model was motivated by the finding that eminent achievement is highly positively skewed, a finding known as Price's law, and related to Lotka's law.

Some high IQ individuals join a High IQ society. The most famous is Mensa International, but others exist, including The International High IQ Society, the Prometheus Society, the Triple Nine Society, and Magnus.

Philosophy

Leonardo da Vinci is widely acknowledged as having been a genius and a polymath.
 
Wolfgang Amadeus Mozart, considered a prodigy and musical genius

Various philosophers have proposed definitions of what genius is and what that implies in the context of their philosophical theories.

In the philosophy of David Hume, the way society perceives genius is similar to the way society perceives the ignorant. Hume states that a person with the characteristics of a genius is looked at as a person disconnected from society, as well as a person who works remotely, at a distance, away from the rest of the world.

On the other hand, the mere ignorant is still more despised; nor is any thing deemed a surer sign of an illiberal genius in an age and nation where the sciences flourish, than to be entirely destitute of all relish for those noble entertainments. The most perfect character is supposed to lie between those extremes; retaining an equal ability and taste for books, company, and business; preserving in conversation that discernment and delicacy which arise from polite letters; and in business, that probity and accuracy which are the natural result of a just philosophy.

In the philosophy of Immanuel Kant, genius is the ability to independently arrive at and understand concepts that would normally have to be taught by another person. For Kant, originality was the essential character of genius. This genius is a talent for producing ideas which can be described as non-imitative. Kant's discussion of the characteristics of genius is largely contained within the Critique of Judgment and was well received by the Romantics of the early 19th century. In addition, much of Schopenhauer's theory of genius, particularly regarding talent and freedom from constraint, is directly derived from paragraphs of Part I of Kant's Critique of Judgment.

Genius is a talent for producing something for which no determinate rule can be given, not a predisposition consisting of a skill for something that can be learned by following some rule or other.

In the philosophy of Arthur Schopenhauer, a genius is someone in whom intellect predominates over "will" much more than within the average person. In Schopenhauer's aesthetics, this predominance of the intellect over the will allows the genius to create artistic or academic works that are objects of pure, disinterested contemplation, the chief criterion of the aesthetic experience for Schopenhauer. Their remoteness from mundane concerns means that Schopenhauer's geniuses often display maladaptive traits in more mundane concerns; in Schopenhauer's words, they fall into the mire while gazing at the stars, an allusion to Plato's dialogue Theætetus, in which Socrates tells of Thales (the first philosopher) being ridiculed for falling in such circumstances. As he says in Volume 2 of The World as Will and Representation:

Talent hits a target no one else can hit; Genius hits a target no one else can see.

In the philosophy of Bertrand Russell, genius entails that an individual possesses unique qualities and talents that make the genius especially valuable to the society in which he or she operates, once given the chance to contribute to society. Russell's philosophy further maintains, however, that it is possible for such geniuses to be crushed in their youth and lost forever when the environment around them is unsympathetic to their potential maladaptive traits. Russell rejected the notion he believed was popular during his lifetime that, "genius will out".

 

Theory of multiple intelligences

From Wikipedia, the free encyclopedia

The theory of multiple intelligences proposes the differentiation of human intelligence into specific “modalities of intelligence”, rather than defining intelligence as a single, general ability. The theory has been criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement.

Separation criteria

According to the theory, an intelligence 'modality' must fulfill eight criteria:

  1. potential for brain isolation by brain damage
  2. place in evolutionary history
  3. presence of core operations
  4. susceptibility to encoding (symbolic expression)
  5. a distinct developmental progression
  6. the existence of savants, prodigies and other exceptional people
  7. support from experimental psychology
  8. support from psychometric findings

The intelligence modalities

In Frames of Mind: The Theory of Multiple Intelligences (1983), Howard Gardner proposed seven abilities that manifest multiple intelligences; an eighth, naturalistic intelligence, was added later.

Musical-rhythmic and harmonic

This area of intelligence involves sensitivity to the sounds, rhythms, and tones of music. People with musical intelligence normally have good pitch or might possess absolute pitch, and are able to sing, play musical instruments, and compose music. They have sensitivity to rhythm, pitch, meter, tone, melody or timbre.

Visual-spatial

This area deals with spatial judgment and the ability to visualize with the mind's eye. Spatial ability is one of the three factors beneath g in the hierarchical model of intelligence.

Verbal-linguistic

People with high verbal-linguistic intelligence display a facility with words and languages. They are typically good at reading, writing, telling stories and memorizing words along with dates. Verbal ability is one of the most g-loaded abilities. This type of intelligence is measured with the Verbal IQ in WAIS-IV.

Logical-mathematical

This area has to do with logic, abstractions, reasoning, numbers and critical thinking. This also has to do with having the capacity to understand the underlying principles of some kind of causal system. Logical reasoning is closely linked to fluid intelligence and to general intelligence (g factor).

Bodily-kinesthetic

The core elements of the bodily-kinesthetic intelligence are control of one's bodily motions and the capacity to handle objects skillfully. Gardner elaborates to say that this also includes a sense of timing, a clear sense of the goal of a physical action, along with the ability to train responses.

People who have high bodily-kinesthetic intelligence should be generally good at physical activities such as sports, dance and making things.

Gardner believes that careers that suit those with high bodily-kinesthetic intelligence include: athletes, dancers, musicians, actors, builders, police officers, and soldiers. Although these careers can be duplicated through virtual simulation, they will not produce the actual physical learning that is needed in this intelligence.

Interpersonal

In theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others' moods, feelings, temperaments, motivations, and their ability to cooperate to work as part of a group. According to Gardner in How Are Kids Smart: Multiple Intelligences in the Classroom, "Inter- and Intra- personal intelligence is often misunderstood with being extroverted or liking other people..." Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate. Gardner has equated this with the emotional intelligence of Goleman.

Gardner believes that careers that suit those with high interpersonal intelligence include sales persons, politicians, managers, teachers, lecturers, counselors and social workers.

Intrapersonal

This area has to do with introspective and self-reflective capacities. This refers to having a deep understanding of the self: knowing what one's strengths and weaknesses are, what makes one unique, and being able to predict one's own reactions and emotions.

Naturalistic

Not part of Gardner's original seven, naturalistic intelligence was proposed by him in 1995. "If I were to rewrite Frames of Mind today, I would probably add an eighth intelligence – the intelligence of the naturalist. It seems to me that the individual who is readily able to recognize flora and fauna, to make other consequential distinctions in the natural world, and to use this ability productively (in hunting, in farming, in biological science) is exercising an important intelligence and one that is not adequately encompassed in the current list." This area has to do with nurturing and relating information to one's natural surroundings. Examples include classifying natural forms such as animal and plant species and rocks and mountain types. This ability was clearly of value in our evolutionary past as hunters, gatherers, and farmers; it continues to be central in such roles as botanist or chef.

This sort of ecological receptiveness is deeply rooted in a "sensitive, ethical, and holistic understanding" of the world and its complexities – including the role of humanity within the greater ecosphere.

Existential

Gardner did not want to commit to a spiritual intelligence, but suggested that an "existential" intelligence may be a useful construct; it, too, was proposed after the original seven, in his 1999 book. The hypothesis of an existential intelligence has been further explored by educational researchers.

Additional intelligences

In January 2016, Gardner mentioned in an interview with BigThink that he is considering adding the teaching-pedagogical intelligence, "which allows us to be able to teach successfully to other people". In the same interview, he explicitly rejected some other suggested intelligences, such as humour, cooking and sexual intelligence. Professor Nan B. Adams (2004) argues that, based on Gardner's definition of Multiple Intelligences, digital intelligence – a meta-intelligence composed of many other identified intelligences and stemming from human interactions with digital computers – now exists.

Physical intelligence

Physical intelligence, also known as bodily-kinesthetic intelligence, is any intelligence derived through physical and practiced learning, such as sports, dance, or craftsmanship. It may refer to the ability to use one's hands to create, to express oneself with one's body, a reliance on tactile mechanisms and movement, and accuracy in controlling body movement. An individual with high physical intelligence is someone who is adept at using their physical body to solve problems and express ideas and emotions. The ability to control the physical body and the mind-body connection is part of a much broader range of human potential, as set out in Howard Gardner's theory of multiple intelligences.

Characteristics

Well-developed bodily-kinesthetic intelligence is reflected in a person's movements and in how they use their physical body. People with high physical intelligence often have excellent hand-eye coordination and are very agile; they are precise and accurate in movement and can express themselves using their body. Gardner referred to the idea of natural skill and innate physical intelligence in his discussion of the autobiographical story of Babe Ruth – a legendary baseball player who, at 15, felt that he had been 'born' on the pitcher's mound. Individuals with high bodily-kinesthetic, or physical, intelligence are likely to be successful in physical careers, including as athletes, dancers, musicians, police officers, and soldiers.

Theory

Howard Gardner, a developmental psychologist and professor of education at Harvard University, outlined nine types of intelligence, including spatial intelligence and linguistic intelligence among others. His seminal work, Frames of Mind, was published in 1983 and was influenced by the work of Alfred Binet and the German psychologist William Stern, who originally coined the term 'intelligence quotient' (IQ). Within his paradigm, Gardner defines intelligence as "the ability to learn" or "to solve problems," referring to it as a "bio-psychological potential to process information".

Gardner suggested that each individual may possess all of the various forms of intelligence to some extent, but that there is always a dominant, or primary, form. Gardner granted each of the different forms of intelligence equal importance, and he proposed that they have the potential to be nurtured and so strengthened, or ignored and weakened. There have been various critiques of Gardner's work, however, predominantly due to the lack of empirical evidence used to support his thinking. Furthermore, some have suggested that the 'intelligences' refer to talents, personality, or ability rather than a distinct form of intelligence.

Impact on education

Within his Theory of Multiple Intelligences, Gardner stated that our "educational system is heavily biased towards linguistic modes of instruction and assessment and, to a somewhat lesser degree, toward logical-quantitative modes as well". His work went on to shape educational pedagogy and to influence relevant policy and legislation across the world, with particular reference to how teachers must assess students' progress in order to establish the most effective teaching methods for the individual learner. Gardner's research into learning and bodily-kinesthetic intelligence has resulted in the use of activities that require physical movement and exertion, with students who exhibit a high level of physical intelligence reported to benefit from 'learning through movement' in the classroom environment.

Although the distinctions between intelligences have been set out in great detail, Gardner opposes the idea of labelling learners with a specific intelligence. He maintains that his theory should "empower learners", not restrict them to one modality of learning. According to Gardner, an intelligence is "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." According to a 2006 study, each of the domains proposed by Gardner involves a blend of the general g factor, cognitive abilities other than g, and, in some cases, non-cognitive abilities or personality characteristics.

Critical reception

Gardner argues that there is a wide range of cognitive abilities, but that there are only very weak correlations among them. For example, the theory postulates that a child who learns to multiply easily is not necessarily more intelligent than a child who has more difficulty on this task. The child who takes more time to master multiplication may best learn to multiply through a different approach, may excel in a field outside mathematics, or may be looking at and understanding the multiplication process at a fundamentally deeper level.

Intelligence tests and psychometrics have generally found high correlations between different aspects of intelligence, rather than the low correlations which Gardner's theory predicts, supporting the prevailing theory of general intelligence rather than multiple intelligences (MI). The theory has been criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement.
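
The psychometric claim can be made concrete with a small worked example. The sketch below is purely illustrative, with invented subtest names and correlation values rather than figures from any study cited here: it builds a correlation matrix in which every subtest correlates positively with every other (the "positive manifold" that intelligence tests typically show) and extracts the first principal component, a rough stand-in for a general factor.

import numpy as np

# Hypothetical correlation matrix for four subtests. All off-diagonal
# entries are positive -- the "positive manifold" that psychometric
# studies report and that MI theory would not predict.
subtests = ["verbal", "spatial", "mathematical", "memory"]
R = np.array([
    [1.00, 0.55, 0.60, 0.45],
    [0.55, 1.00, 0.50, 0.40],
    [0.60, 0.50, 1.00, 0.42],
    [0.45, 0.40, 0.42, 1.00],
])

# The first principal component (largest eigenvalue) of the correlation
# matrix serves as a crude stand-in for a general factor.
eigenvalues, eigenvectors = np.linalg.eigh(R)   # eigenvalues in ascending order
g_loading = eigenvectors[:, -1]                 # eigenvector of the largest eigenvalue
g_loading *= np.sign(g_loading.sum())           # fix sign so loadings are positive
variance_explained = eigenvalues[-1] / eigenvalues.sum()

for name, loading in zip(subtests, g_loading):
    print(f"{name:>12s} loads {loading:.2f} on the first component")
print(f"first component explains {variance_explained:.0%} of the total variance")

With all correlations positive, as here, the single first component accounts for well over half of the total variance and every subtest loads substantially on it; if Gardner's prediction of near-zero correlations held instead, the loadings would be small and the shared variance low.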

Definition of intelligence

One major criticism of the theory is that it is ad hoc: that Gardner is not expanding the definition of the word "intelligence", but rather denies the existence of intelligence as traditionally understood, and instead uses the word "intelligence" where other people have traditionally used words like "ability" and "aptitude". This practice has been criticized by Robert J. Sternberg, Eysenck, and Scarr. White (2006) points out that Gardner's selection and application of criteria for his "intelligences" is subjective and arbitrary, and that a different researcher would likely have come up with different criteria.

Defenders of MI theory argue that the traditional definition of intelligence is too narrow, and thus a broader definition more accurately reflects the differing ways in which humans think and learn.

Some criticisms arise from the fact that Gardner has not provided a test of his multiple intelligences. He originally defined it as the ability to solve problems that have value in at least one culture, or as something that a student is interested in. He then added a disclaimer that he has no fixed definition, and his classification is more of an artistic judgment than fact:

Ultimately, it would certainly be desirable to have an algorithm for the selection of intelligence, such that any trained researcher could determine whether a candidate's intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate's intelligence is reminiscent more of an artistic judgment than of a scientific assessment.

Generally, linguistic and logical-mathematical abilities are called intelligence, but artistic, musical, athletic, etc. abilities are not. Gardner argues this causes the former to be needlessly aggrandized. Certain critics are wary of this widening of the definition, saying that it ignores "the connotation of intelligence ... [which] has always connoted the kind of thinking skills that makes one successful in school."

Gardner writes "I balk at the unwarranted assumption that certain human abilities can be arbitrarily singled out as intelligence while others cannot." Critics hold that, given this statement, any interest or ability can be redefined as "intelligence". Thus, studying intelligence becomes difficult, because it diffuses into the broader concept of ability or talent. Gardner's addition of the naturalistic intelligence and his conceptions of the existential and moral intelligences are seen as the fruits of this diffusion. Defenders of the MI theory would argue that this is simply a recognition of the broad scope of inherent mental abilities and that such an exhaustive scope by nature defies a one-dimensional classification such as an IQ value.

The theory and its definitions have been critiqued by Perry D. Klein as being so unclear as to be tautologous and thus unfalsifiable: having high musical ability means being good at music, while at the same time being good at music is explained by having high musical ability.

Henri Wallon argues that "we cannot distinguish intelligence from its operations". Yves Richez distinguishes ten Natural Operating Modes (Modes Opératoires Naturels – MoON). Richez's studies are premised on a gap between Chinese thought and Western thought: he claims that in Chinese thought the notions of "being" (self) and "intelligence" do not exist, and that these are Graeco-Roman inventions derived from Plato. Instead of intelligence, Chinese thought refers to "operating modes", which is why Richez does not speak of "intelligence" but of "natural operating modes" (MoON).

Neo-Piagetian criticism

Andreas Demetriou suggests that theories which overemphasize the autonomy of the domains are as simplistic as the theories that overemphasize the role of general intelligence and ignore the domains. He agrees with Gardner that there are indeed domains of intelligence that are relatively autonomous of each other. Some of the domains, such as verbal, spatial, mathematical, and social intelligence, are identified by most lines of research in psychology. In Demetriou's theory, one of the neo-Piagetian theories of cognitive development, Gardner is criticized for underestimating the effects exerted on the various domains of intelligence by the subprocesses that define overall processing efficiency, such as speed of processing, executive functions, working memory, and the meta-cognitive processes underlying self-awareness and self-regulation. All of these processes are integral components of general intelligence that regulate the functioning and development of different domains of intelligence.

The domains are to a large extent expressions of the condition of the general processes, and may vary because of constitutional differences as well as differences in individual preferences and inclinations. Their functioning both channels and influences the operation of the general processes. Thus, one cannot satisfactorily specify the intelligence of an individual or design effective intervention programs unless both the general processes and the domains of interest are evaluated.

Human adaptation to multiple environments

The premise of the multiple intelligences hypothesis, that human intelligence is a collection of specialist abilities, has been criticized for being unable to explain human adaptation to most, if not all, environments in the world. In this context, humans are contrasted with social insects, which indeed have a distributed "intelligence" of specialists; such insects may spread to climates resembling that of their origin, but the same species never adapts to a wide range of climates, from tropical to temperate, by building different types of nests and learning what is edible and what is poisonous. While some, such as the leafcutter ant, grow fungi on leaves, they do not cultivate different species in different environments with different farming techniques as human agriculture does. It is therefore argued that human adaptability stems from a general ability to falsify hypotheses, make more generally accurate predictions, and adapt behavior thereafter, not from a set of specialized abilities that would only work under specific environmental conditions.

IQ tests

Gardner argues that IQ tests only measure linguistic and logical-mathematical abilities. He argues the importance of assessing in an "intelligence-fair" manner. While traditional paper-and-pen examinations favor linguistic and logical skills, there is a need for intelligence-fair measures that value the distinct modalities of thinking and learning that uniquely define each intelligence.

Psychologist Alan S. Kaufman points out that IQ tests have measured spatial abilities for 70 years. Modern IQ tests are greatly influenced by the Cattell-Horn-Carroll theory which incorporates a general intelligence but also many more narrow abilities. While IQ tests do give an overall IQ score, they now also give scores for many more narrow abilities.

Lack of empirical evidence

According to a 2006 study, many of Gardner's "intelligences" correlate with the g factor, supporting the idea of a single dominant type of intelligence. According to the study, each of the domains proposed by Gardner involved a blend of g, of cognitive abilities other than g, and, in some cases, of non-cognitive abilities or of personality characteristics.

The Johnson O'Connor Research Foundation has tested hundreds of thousands of people to determine their "aptitudes" ("intelligences"), such as manual dexterity, musical ability, spatial visualization, and memory for numbers. There is correlation of these aptitudes with the g factor, but not all are strongly correlated; correlation between the g factor and "inductive speed" ("quickness in seeing relationships among separate facts, ideas, or observations") is only 0.5, considered a moderate correlation.
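
As a quick illustration of what a correlation of roughly 0.5 implies, the sketch below computes a Pearson correlation and the corresponding shared variance r². The scores are invented for illustration (they are not Johnson O'Connor data), chosen only so that r comes out near 0.5.

import numpy as np

# Invented scores for ten people: a composite test score standing in for g,
# and an "inductive speed" measure. Values are made up for illustration.
g_proxy         = np.array([95, 100, 104, 108, 112, 116, 120, 124, 128, 133])
inductive_speed = np.array([52,  61,  47,  66,  58,  71,  51,  74,  63,  69])

r = np.corrcoef(g_proxy, inductive_speed)[0, 1]
print(f"Pearson r = {r:.2f}")                   # about 0.55 for these made-up values
print(f"shared variance r^2 = {r**2:.2f}")      # about 0.30
print(f"for r = 0.5 exactly, r^2 = {0.5**2:.2f}")  # 0.25

A correlation of 0.5 therefore means that only about a quarter of the variance in one measure is accounted for by the other, which is why it is described as moderate rather than strong.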

Linda Gottfredson (2006) has argued that thousands of studies support the importance of intelligence quotient (IQ) in predicting school and job performance, and numerous other life outcomes. In contrast, empirical support for non-g intelligences is either lacking or very poor. She argued that despite this, the ideas of multiple non-g intelligences are very attractive to many due to the suggestion that everyone can be smart in some way.

A critical review of MI theory argues that there is little empirical evidence to support it:

To date, there have been no published studies that offer evidence of the validity of the multiple intelligences. In 1994 Sternberg reported finding no empirical studies. In 2000 Allix reported finding no empirical validating studies, and at that time Gardner and Connell conceded that there was "little hard evidence for MI theory" (2000, p. 292). In 2004 Sternberg and Grigorenko stated that there were no validating studies for multiple intelligences, and in 2004 Gardner asserted that he would be "delighted were such evidence to accrue", and admitted that "MI theory has few enthusiasts among psychometricians or others of a traditional psychological background" because they require "psychometric or experimental evidence that allows one to prove the existence of the several intelligences."

The same review presents evidence to demonstrate that cognitive neuroscience research does not support the theory of multiple intelligences:

... the human brain is unlikely to function via Gardner's multiple intelligences. Taken together the evidence for the intercorrelations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping "what is it?" and "where is it?" neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner's intelligences could operate "via a different set of neural mechanisms" (1999, p. 99). Equally important, the evidence for the "what is it?" and "where is it?" processing pathways, for Kahneman's two decision-making systems, and for adapted cognition modules suggests that these cognitive brain specializations have evolved to address very specific problems in our environment. Because Gardner claimed that the intelligences are innate potentialities related to a general content area, MI theory lacks a rationale for the phylogenetic emergence of the intelligences.

The theory of multiple intelligences is sometimes cited as an example of pseudoscience because it lacks empirical evidence or falsifiability, though Gardner has argued otherwise.

Use in education

Gardner defines intelligence as "bio-psychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." According to Gardner, there are more ways to do this than just through logical and linguistic intelligence. Gardner believes that the purpose of schooling "should be to develop intelligence and to help people reach vocational and avocational goals that are appropriate to their particular spectrum of intelligence. People who are helped to do so, [he] believe[s], feel more engaged and competent and therefore more inclined to serve the society in a constructive way."

Gardner contends that IQ tests focus mostly on logical and linguistic intelligence. Doing well on these tests increases the chances of attending a prestigious college or university, which in turn creates contributing members of society. While many students function well in this environment, there are those who do not. Gardner's theory argues that students will be better served by a broader vision of education, wherein teachers use different methodologies, exercises and activities to reach all students, not just those who excel at linguistic and logical intelligence. It challenges educators to find "ways that will work for this student learning this topic".

James Traub's article in The New Republic notes that Gardner's system has not been accepted by most academics in intelligence or teaching. Gardner states that "while Multiple Intelligences theory is consistent with much empirical evidence, it has not been subjected to strong experimental tests ... Within the area of education, the applications of the theory are currently being examined in many projects. Our hunches will have to be revised many times in light of actual classroom experience."

Jerome Bruner agreed with Gardner that the intelligences were "useful fictions," and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

George Miller, a prominent cognitive psychologist, wrote in The New York Times Book Review that Gardner's argument consisted of "hunch and opinion" and Charles Murray and Richard J. Herrnstein in The Bell Curve (1994) called Gardner's theory "uniquely devoid of psychometric or other quantitative evidence."

In spite of its lack of general acceptance in the psychological community, Gardner's theory has been adopted by many schools, where it is often conflated with learning styles, and hundreds of books have been written about its applications in education. Some of the applications of Gardner's theory have been described as "simplistic" and Gardner himself has said he is "uneasy" with the way his theory has been used in schools. Gardner has denied that multiple intelligences are learning styles and agrees that the idea of learning styles is incoherent and lacking in empirical evidence. Gardner summarizes his approach with three recommendations for educators: individualize the teaching style (to suit the most effective method for each student), pluralize the teaching (teach important materials in multiple ways), and avoid the term "styles" as being confusing.

 

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...