Monday, February 9, 2026

Theory of multiple intelligences

The intelligence modalities

The theory of multiple intelligences (MI) posits that human intelligence is not a single general ability but comprises various distinct modalities, such as linguistic, logical-mathematical, musical, and spatial intelligences. Introduced in Howard Gardner's book Frames of Mind: The Theory of Multiple Intelligences (1983), this framework has gained popularity among educators who accordingly develop varied teaching strategies purported to cater to different student strengths.

Despite its educational impact, MI has faced criticism from the psychological and scientific communities. A primary point of contention is Gardner's use of the term "intelligences" to describe these modalities. Critics argue that labeling these abilities as separate intelligences expands the definition of intelligence beyond its traditional scope, leading to debates over its scientific validity.

While empirical research often supports a general intelligence factor (g-factor), Gardner contends that his model offers a more nuanced understanding of human cognitive abilities. This difference in defining and interpreting "intelligence" has fueled ongoing discussions about the theory's scientific robustness.

Separation criteria

Beginning in the late 1970s, using a pragmatic definition, Howard Gardner surveyed several disciplines and cultures around the world to determine skills and abilities essential to human development and culture building. He subjected candidate abilities to evaluation using eight criteria that must be substantively met to warrant their identification as an intelligence. Furthermore, the intelligences need to be relatively autonomous from each other, and composed of subsets of skills that are highly correlated and coherently organized.

In 1983 the field of cognitive neuroscience was embryonic, but Gardner was one of the first psychological theorists to describe direct links between brain systems and intelligence. Likewise, the field of educational neuroscience had yet to be conceived. Since Frames of Mind was published in 1983, the terms cognitive science and cognitive neuroscience have become standard in the field, with extensive libraries of scholarly and scientific papers and textbooks. It is therefore essential to examine the neuroscience evidence as it pertains to MI's validity.

Gardner defined intelligence as "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture."

This definition is unique for several reasons that account for MI theory's broad appeal to educators as well as its rejection by mainstream psychologists rooted in the traditional conception of intelligence as an abstract, logical capacity. A fundamental element of each intelligence is a framework of clearly defined levels of skill, complexity and accomplishment. One model that fits the MI framework is Bloom's taxonomy, in which each intelligence can be delineated along different levels, ranging from basic knowledge up to the highest levels of analysis and synthesis.

MI is also unique in its full appreciation of the impact of, and interactions between, an individual's cognitions and their particular culture, mediated by symbol systems. As Gardner states,

The multiple intelligences commence as a set of uncommitted neurobiological potentials. They become crystallized and mobilized by the communication that takes place among human beings and, especially, by the systems of meaning-making that already exist in a given culture.

Unlike traditional practices beginning in the 19th century, MI theory is not built on the statistical analyses of psychometric test data searching for factors that account for academic achievement. Instead, Gardner employs a multi-disciplinary, cross-cultural methodology to evaluate which human capacities fit into a comprehensive model of intelligence. Eight criteria accounting for advances in neuroscience and the influence of cultural factors are used to qualify a capacity as an intelligence. These criteria are drawn from a more extensive database than what was acceptable and available to researchers in the late 19th and 20th centuries. Evidence is gathered from a variety of disciplines including psychology, neurology, biology, sociology, and anthropology as well as the arts and humanities. If a candidate faculty meets this set of criteria reasonably well then it can qualify as an intelligence. If it does not, then it is set aside or reconceptualized.

Criteria for each type of intelligence

The eight criteria can be grouped into four general categories:

  1. biology (neuroscience and evolution)
  2. analysis (core operations and symbol systems)
  3. psychology (skill development, individual differences)
  4. psychometrics (psychological experiments and test evidence)

The criteria briefly described are:

  • potential for brain isolation by brain damage
  • place in evolutionary history
  • presence of core operations
  • susceptibility to encoding (symbolic expression)
  • a distinct developmental progression
  • the existence of savants, prodigies and other exceptional people
  • support from experimental psychology
  • support from psychometric findings

This scientific method resembles the process used by astronomers to determine which celestial bodies to classify as a planet versus dwarf planet, star, comet, etc.
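To make the qualification procedure concrete, the classification process described above can be sketched as a simple rubric. This is an illustration only, not Gardner's own procedure: the criterion labels follow the list above, and the numeric threshold is a hypothetical stand-in for meeting the criteria "reasonably well", since Gardner specifies no fixed cut-off.

```python
# Hypothetical sketch of the qualification process: a candidate faculty
# is checked against the eight criteria and qualifies as an intelligence
# only if it meets enough of them. The threshold value is an assumption.

CRITERIA = [
    "isolation by brain damage",
    "place in evolutionary history",
    "presence of core operations",
    "susceptibility to encoding",
    "distinct developmental progression",
    "existence of savants and prodigies",
    "support from experimental psychology",
    "support from psychometric findings",
]

def qualifies(evidence, threshold=6):
    """Count how many criteria the candidate meets and compare the
    count against the (assumed) threshold."""
    met = sum(1 for criterion in CRITERIA if evidence.get(criterion, False))
    return met >= threshold

# Example: a hypothetical candidate faculty meeting 7 of the 8 criteria.
candidate = {c: True for c in CRITERIA}
candidate["support from psychometric findings"] = False
print(qualifies(candidate))  # True
```

A faculty that meets too few criteria would, in Gardner's terms, be set aside or reconceptualized rather than admitted as an intelligence.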

Forms of intelligences

In Frames of Mind and its sequels, Howard Gardner describes eight intelligences that can be expressed in everyday life in a variety of ways referred to as domains, skills, competencies, or talents. As with a multi-layer cake, the apparent complexity depends on how you slice it. One model integrates the eight intelligences with Sternberg's triarchic theory, so that each intelligence is actively expressed in three ways: (1) creative, (2) academic/analytical and (3) practical thinking. In this analogy, each of the eight cake layers is divided into three segments whose different expressions share a central core. Exemplar professions and adult roles requiring specific intelligences are described along with their core skills and potential deficits. Several references to exemplar neuroscientific studies are also provided for each of the eight intelligences. Furthermore, some have suggested that the 'intelligences' refer to talents, personality, or ability rather than a distinct form of intelligence.

The two intelligences that are most associated with the traditional I.Q. or general intelligence are the linguistic and logical-mathematical intelligences. Some intelligence models and tests also include visual-spatial intelligence as a third element.

Musical

This area of intelligence includes sensitivity to the sounds, rhythms, pitch, and tones of music. People with musical intelligence are normally able to sing, play musical instruments, or compose music, and have high sensitivity to pitch, meter, melody and timbre. Musical intelligence includes cognitive elements that contribute to a person's success and quality of life. There is a strong relationship between music and emotions, as evidenced in both popular and classical music. Neuroscience investigators continue to examine the interaction between music and cognitive performance. Music is deeply rooted in human evolutionary history (the Paleolithic bone flute), in culture (every country on Earth has a national anthem) and in our personal lives (many important life events are associated with particular types of music, such as birthday songs, wedding songs and funeral dirges).

Deficits in musical processing and abilities include congenital amusia, tone deafness, musical hallucinations, musical anhedonia, acquired music agnosia, and arrhythmia (beat deafness).

Professions requiring essential musical skills include vocalist, instrumentalist, lyricist, dancer, sound engineer and composer. Musical intelligence combines with kinesthetic intelligence in instrumentalists and dancers, and with linguistic intelligence in music critics and lyricists. Music combined with interpersonal intelligence is required for success as a music therapist or teacher.

Visual-spatial

This area deals with spatial awareness / judgment and the ability to visualize with the mind's eye.[17] It is composed of two main dimensions: A) mental visualization and B) perception of the physical world (spatial arrangements and objects). It includes both practical problem-solving as well as artistic creations. Spatial ability is one of the three factors beneath g (general intelligence) in the hierarchical model of intelligence. Many I.Q. tests include a measure of spatial problem-solving skills, e.g., block design and mental rotation of objects.

Visual-spatial intelligence can be expressed in practical (e.g., drafting and building) or artistic (e.g., fine art, crafts, floral arrangement) ways, or the two can be combined in fields such as architecture, industrial design, landscape design, and fashion design. Visual-spatial processing is often combined with kinesthetic intelligence, referred to as eye-hand or visual-motor integration, for tasks such as hitting a baseball (see the Babe Ruth example under Bodily-kinesthetic), sewing, golf or skiing.

Professions that emphasize visual-spatial processing include carpenter, engineer, designer, pilot, firefighter, surgeon, and commercial or fine artist and craftsperson. Spatial intelligence combined with linguistic intelligence is required for success as an art critic or textbook graphic designer; spatial artistic skill combined with naturalist sensitivity produces a pet groomer, clothing designer or costumer.

Linguistic

The core linguistic ability is sensitivity to words and their meanings. People with high verbal-linguistic intelligence display a facility with expressive language and verbal comprehension. They are typically good at reading, writing, telling stories, rhetoric, and memorizing words and dates. Verbal ability is one of the most g-loaded abilities. The academic aspect of linguistic intelligence is measured with the Verbal Intelligence Quotient (IQ) of the Wechsler Adult Intelligence Scale (WAIS-IV).

Deficits in linguistic abilities include expressive and receptive aphasia, agraphia, specific language impairment, written language disorder and word recognition deficit (dyslexia).

Linguistic ability can be expressed according to Triarchic theory in three main ways: analytical-academic (reading, writing, definitions); practical (verbal or written directions, explanations, narration); and creative (story telling, poetry, lyrics, imaginative word play, science fiction).

Professions that require linguistic skills include teacher, salesperson, manager, counselor, leader, childcare worker, journalist, academic and politician (debating and building support for particular sets of values). Linguistic intelligence combines with all other intelligences to facilitate communication via the spoken or written word. It is frequently highly correlated with interpersonal intelligence, facilitating social interactions for education, business and human relations. Successful sports coaches combine three intelligences: kinesthetic, interpersonal and linguistic. Corporate managers require skills in the interpersonal, linguistic and logical-mathematical intelligences.

Logical-mathematical

This area has to do with logic, abstractions, reasoning, calculations, strategic and critical thinking. This intelligence includes the capacity to understand underlying principles of some kind of causal system. Logical reasoning is closely linked to fluid intelligence as well as to general intelligence (g factor).  This capacity is most often associated with convergent problem-solving but it also includes divergent thinking associated with “problem-finding”.

This intelligence is most closely associated with the cognitive development theory described by Jean Piaget (1983). The four main types of logical-mathematical intelligence include logical reasoning, calculations, practical thinking (common sense) and discovery.

Deficits in logical-mathematical thinking include acalculia, dyscalculia, mild cognitive impairment, dementia and intellectual disability.

Some critics believe that the logical and mathematical domains should be separate entities. However, Gardner argues that both spring from the same source: abstractions taken from real-world elements, e.g., logic from words and calculations from the manipulation of objects. This is not dissimilar to the relationship between musical intelligence and vocal or instrumental skills, which are very different expressions springing from a shared musical source.

Professions most closely associated with this intelligence include accounting, bookkeeping, banking, finance, engineering and the sciences. Logic-mathematical skills combine with all the other intelligences to facilitate complex problem solving and creation such as environmental engineering and scientists (naturalist); symphonies (music); public sculptures (visual-spatial) and choreography/ movement analysis (kinesthetic).

Bodily-kinesthetic

The core elements of the bodily-kinesthetic intelligence are control of one's bodily movements and fine motor control to handle objects skillfully. Gardner elaborates to say that this also includes a sense of timing, a clear sense of the goal of a physical action, along with the ability to train responses. Kinesthetic ability can be displayed in goal-directed activities (athletics, handcrafts, etc.) as well as in more expressive movements (drama, dance, mime and gestures). Expressive movements can be for either concepts or feelings. For example, saluting, shaking hands or facial expressions can convey both ideas and emotions. Two major kinesthetic categories are gross and fine motor skills.

Deficits in kinesthetic ability are described as proprioception disorders affecting body awareness, coordination, balance, dexterity and motor control.

Gardner believes that careers that suit those with high bodily-kinesthetic intelligence include: athletes, dancers, musicians, actors, craftspeople, builders, technicians, and firefighters. Although these activities can be duplicated through virtual simulation, simulation does not produce the actual physical learning that this intelligence requires.

Often people with high physical intelligence combined with visual motion acuity will have excellent hand-eye coordination and be very agile; they are precise and accurate in movement (surgeons) and can express themselves using their body (actors and dancers). Gardner referred to the idea of natural skill and innate kinesthetic intelligence within his discussion of the autobiographical story of Babe Ruth – a legendary baseball player who, at 15, felt that he had been 'born' on the pitcher's mound. Seeing the pitched ball and coordinating one’s swing to meet it over the plate requires highly developed visual-motor integration. Each sport requires its own distinctive combination of specific skills associated with the kinesthetic and visual-spatial intelligences.

Interpersonal

In MI theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others' moods, feelings, temperaments, motivations, and their ability to cooperate or to lead a group. According to Thomas Armstrong in How Are Kids Smart: Multiple Intelligences in the Classroom, "Interpersonal intelligence is often misunderstood with being extroverted or liking other people. Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate." They have an insightful understanding of other people's points of view. Daniel Goleman based his concept of emotional intelligence in part on the feeling aspects of the intrapersonal and interpersonal intelligences. Interpersonal skill can be displayed in both one-on-one and group interactions.

Deficits in interpersonal understanding include egocentrism, narcissism, sociopathy, Asperger syndrome and autism.

Gardner believes that careers that suit those with high interpersonal intelligence include leaders, politicians, managers, teachers, clergy, counselors, social workers and salespersons. Mother Teresa, Martin Luther King and Lyndon Johnson are cited as historical leaders with exceptional interpersonal intelligence. Interpersonal combined with intrapersonal intelligence is required for successful leaders, psychologists, life coaches and conflict negotiators. Team sports require specific combinations of the interpersonal and kinesthetic intelligences, while individual sports emphasize the kinesthetic and intrapersonal intelligences (e.g., Tiger Woods and gymnasts).


Intrapersonal

This refers to having a deep and accurate understanding of the self: what one's strengths and weaknesses are, what makes one unique, and being able to predict and manage one's own reactions, emotions and behaviors. Activities associated with this intelligence include introspection and self-reflection. Intrapersonal skills can be categorized into at least four areas: metacognition (awareness of one's thoughts), management of feelings and emotions, self-management of behavior, and decision-making and judgment.

Deficits in intrapersonal understanding include anosognosia, depersonalization, dissociation and self-dysregulation (as in ADHD).

Leaders and people in high stress occupations need well developed intrapersonal skills, e.g., pilots, police and firefighters, entrepreneurs, middle managers, first responders and health care providers. Mahatma Gandhi, Jesus and Martin Luther King Jr. are all noted for their strong self-awareness. Deficits in intrapersonal understanding may be correlated with ADHD, substance abuse and emotional disturbances (mid-life crisis, etc.).

Intrapersonal intelligence may be correlated with concepts such as self-confidence, introspection and self-efficacy but it should not be confused with personality styles/preferences such as narcissism, self-esteem, introversion or shyness. High level performance in many demanding professions and roles requires exceptional intrapersonal intelligence: Olympic athletes, professional golfers, stage performers, CEOs, crisis managers.

Naturalistic

Not part of Gardner's original seven, naturalistic intelligence was proposed by him in 1995. "If I were to rewrite Frames of Mind today, I would probably add an eighth intelligence – the intelligence of the naturalist. It seems to me that the individual who is readily able to recognize flora and fauna, to make other consequential distinctions in the natural world, and to use this ability productively (in hunting, in farming, in biological science) is exercising an important intelligence and one that is not adequately encompassed in the current list." This area has to do with nurturing and relating information to one's natural surroundings. Examples include classifying natural forms such as animal and plant species and rock and mountain types. Essential cognitive skills include pattern recognition, taxonomy and empathy for living beings. Nature deficit disorder is a recent hypothesis that mental health is negatively impacted by a lack of attention to and understanding of nature.

This sort of ecological receptiveness is deeply rooted in a "sensitive, ethical, and holistic understanding" of the world and its complexities – including the role of humanity within the greater ecosphere.

This ability remains central to roles such as veterinarian, ecological scientist and botanist.

Proposed additional intelligences

From the beginning, Howard Gardner has stated that there may be more intelligences beyond the original seven identified in 1983; that is why the naturalist intelligence was added to the list in 1999. Several other human capacities were rejected because they did not meet enough of the criteria, including personality characteristics such as humor, sexuality and extroversion.

Pedagogical and digital

In January 2016, Gardner mentioned in an interview with Big Think that he was considering adding the teaching–pedagogical intelligence, "which allows us to be able to teach successfully to other people". In the same interview, he explicitly rejected some other suggested intelligences, such as humour, cooking and sexual intelligence. Professor Nan B. Adams argues that, based on Gardner's definition of multiple intelligences, digital intelligence – a meta-intelligence composed of many other identified intelligences and stemming from human interactions with digital computers – now exists.

Use in education

Within his theory of multiple intelligences, Gardner stated that our "educational system is heavily biased toward linguistic modes of instruction and assessment and, to a somewhat lesser degree, toward logical-quantitative modes as well". His work went on to shape educational pedagogy and influence relevant policy and legislation across the world, with particular reference to how teachers must assess students' progress to establish the most effective teaching methods for the individual learner. Gardner's research on bodily-kinesthetic intelligence has resulted in the use of activities that require physical movement and exertion, with students who exhibit a high level of physical intelligence reported to benefit from 'learning through movement' in the classroom environment.

Although the distinction between intelligences has been set out in great detail, Gardner opposes the idea of labelling learners to a specific intelligence. Gardner maintains that his theory should "empower learners", not restrict them to one modality of learning. According to Gardner, an intelligence is "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture". According to a 2006 study, each of the domains proposed by Gardner involves a blend of the general g factor, cognitive abilities other than g, and, in some cases, non-cognitive abilities or personality characteristics.

According to Gardner, there are more ways to solve problems and create products of value than through logical and linguistic intelligence alone. Gardner believes that the purpose of schooling "should be to develop intelligences and to help people reach vocational and avocational goals that are appropriate to their particular spectrum of intelligences. People who are helped to do so, [he] believe[s], feel more engaged and competent and therefore more inclined to serve society in a constructive way."

Gardner contends that Intelligence Quotient (IQ) tests focus mostly on logical and linguistic intelligence. Students who do well on these tests have a better chance of attending a prestigious college or university, which in turn helps produce contributing members of society. While many students function well in this environment, there are those who do not. Gardner's theory argues that students will be better served by a broader vision of education, wherein teachers use different methodologies, exercises and activities to reach all students, not just those who excel at linguistic and logical intelligence. It challenges educators to find "ways that will work for this student learning this topic".

James Traub's article in The New Republic notes that Gardner's system has not been accepted by most academics in intelligence or teaching. Gardner states that "while Multiple Intelligences theory is consistent with much empirical evidence, it has not been subjected to strong experimental tests ... Within the area of education, the applications of the theory are currently being examined in many projects. Our hunches will have to be revised many times in light of actual classroom experience."

Jerome Bruner agreed with Gardner that the intelligences were "useful fictions", and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

George Miller, a prominent cognitive psychologist, wrote in The New York Times Book Review that Gardner's argument consisted of "hunch and opinion" and Charles Murray and Richard J. Herrnstein in The Bell Curve (1994) called Gardner's theory "uniquely devoid of psychometric or other quantitative evidence".

Distinction to learning styles

The notion of learning styles is problematic, and its educational use is suspect. Gardner has regularly explained the distinction between the theory of multiple intelligences and various learning-style models. One major problem is that more than 80 different learning-styles models exist, so it is difficult to know which model is being referred to when making a comparison or planning instruction. A key difference is that learning styles typically refer to sensory modalities, preferences, personality characteristics, attitudes, and interests, while the multiple intelligences are cognitive abilities with defined levels of skill. It is easy to see why they are confused, given the popularity of the VAK (visual, auditory and kinesthetic) and introversion/extroversion models: their names sound alike and they share sensory systems (vision, hearing, physicality), but the eight intelligences are much more than senses or personal preferences.

While learning style theories are fundamentally different from the eight intelligences, there is a model proposed by Richard Strong and others that integrates a person’s preference with the eight intelligences to produce a descriptive tapestry of a person’s intellectual dispositions. The four styles are Mastery, Understanding, Interpersonal, and Self-Expressive. For the visual-spatial intelligence expressed artistically, a person may have a distinct pattern of preferences for realistic imagery (Mastery), conceptual art (Understanding), portraiture (Interpersonal) or abstract expression (Self-Expressive). This model has not been tested empirically.
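To make the structure of Strong's integration concrete, the "tapestry" can be pictured as a grid crossing the eight intelligences with the four styles. The sketch below is illustrative only and not taken from Strong's materials; the numeric preference scale is an assumption.

```python
# Illustrative sketch: crossing the eight intelligences with the four
# styles yields a 32-cell grid, and a person's "tapestry" can be
# represented as preference scores over that grid. The scoring scale
# is hypothetical.

INTELLIGENCES = [
    "linguistic", "logical-mathematical", "musical", "visual-spatial",
    "bodily-kinesthetic", "interpersonal", "intrapersonal", "naturalistic",
]
STYLES = ["Mastery", "Understanding", "Interpersonal", "Self-Expressive"]

def empty_profile():
    """Start every (intelligence, style) cell at a neutral score of 0."""
    return {(i, s): 0 for i in INTELLIGENCES for s in STYLES}

profile = empty_profile()
# Hypothetical entry: a person whose visual-spatial intelligence leans
# toward abstract expression (the Self-Expressive style).
profile[("visual-spatial", "Self-Expressive")] = 3
print(len(profile))  # 32
```

The point of the representation is that style and intelligence vary independently: the same artistic intelligence can be expressed through any of the four stylistic preferences.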

Talents and aptitudes

Intelligences not typically associated with academic achievement, e.g., musical, visual-spatial, kinesthetic and naturalist, have traditionally been relegated to the status of talents or aptitudes. Gardner takes issue with this hierarchy because it lowers the importance of these "non-academic" intelligences and devalues their contribution to human thought, individual development and culture. Gardner is content to call them all talents (or aptitudes), including logical-mathematical and linguistic, so long as they are seen as being of equal value.

In spite of its lack of general acceptance in the psychological community, Gardner's theory has been adopted by many schools, where it is often conflated with learning styles, and hundreds of books have been written about its applications in education. Some of the applications of Gardner's theory have been described as "simplistic" and Gardner himself has said he is "uneasy" with the way his theory has been used in schools. Gardner has denied that multiple intelligences are learning styles and agrees that the idea of learning styles is incoherent and lacking in empirical evidence. Gardner summarizes his approach with three recommendations for educators: individualize the teaching style (to suit the most effective method for each student), pluralize the teaching (teach important materials in multiple ways), and avoid the term "styles" as being confusing.

Criticism

Gardner argues that there is a wide range of cognitive abilities, but that there are only very weak correlations among them. For example, the theory postulates that a child who learns to multiply easily is not necessarily more intelligent than a child who has more difficulty on this task. The child who takes more time to master multiplication may best learn to multiply through a different approach, may excel in a field outside mathematics, or may be looking at and understanding the multiplication process at a fundamentally deeper level.

Intelligence tests and psychometrics have generally found high correlations between different aspects of intelligence, rather than the low correlations which Gardner's theory predicts, supporting the prevailing theory of general intelligence rather than multiple intelligences (MI). The theory has been criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement.

Definition of intelligence

A major criticism of the theory is that it is ad hoc: that Gardner is not expanding the definition of the word "intelligence", but rather denies the existence of intelligence as traditionally understood, and instead uses the word "intelligence" where other people have traditionally used words like "ability" and "aptitude". This practice has been criticized by Robert J. Sternberg, Michael Eysenck, and Sandra Scarr. White (2006) points out that Gardner's selection and application of criteria for his "intelligences" is subjective and arbitrary, and that a different researcher would likely have come up with different criteria.

Defenders of MI theory argue that the traditional definition of intelligence is too narrow, and thus a broader definition more accurately reflects the differing ways in which humans think and learn.

Some criticisms arise from the fact that Gardner has not provided a test of his multiple intelligences. He originally defined it as the ability to solve problems that have value in at least one culture, or as something that a student is interested in. He then added a disclaimer that he has no fixed definition, and his classification is more of an artistic judgment than fact:

Ultimately, it would certainly be desirable to have an algorithm for the selection of intelligence, such that any trained researcher could determine whether a candidate's intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate's intelligence is reminiscent more of an artistic judgment than of a scientific assessment.

Generally, linguistic and logical-mathematical abilities are called intelligence, but artistic, musical, athletic, etc. abilities are not. Gardner argues this causes the former to be needlessly aggrandized. Certain critics are wary of this widening of the definition, saying that it ignores "the connotation of intelligence ... [which] has always connoted the kind of thinking skills that makes one successful in school."

Gardner writes "I balk at the unwarranted assumption that certain human abilities can be arbitrarily singled out as intelligence while others cannot." Critics hold that, given this statement, any interest or ability can be redefined as an "intelligence". Thus, studying intelligence becomes difficult, because it diffuses into the broader concept of ability or talent. Gardner's addition of naturalistic intelligence, and his conceptions of existential and moral intelligences, are seen as the fruits of this diffusion. Defenders of MI theory would argue that this is simply a recognition of the broad scope of inherent mental abilities, and that such an exhaustive scope by nature defies a one-dimensional classification such as an IQ value.

The theory and its definitions have been critiqued by Perry D. Klein as being so unclear as to be tautologous and thus unfalsifiable: having a high musical ability means being good at music, while at the same time being good at music is explained by having a high musical ability.

Henri Wallon argues that "we cannot distinguish intelligence from its operations". Yves Richez distinguishes 10 Natural Operating Modes (Modes Opératoires Naturels – MoON). Richez's studies are premised on a gap between Chinese and Western thought: in China, he claims, the notions of "being" (self) and "intelligence" do not exist, being Graeco-Roman inventions derived from Plato. Instead of intelligence, Chinese thought refers to "operating modes", which is why Richez speaks not of "intelligence" but of "natural operating modes" (MoON).

Validity

Critics argue that MI cannot be taken seriously as a scientific theory of intelligence for a number of reasons; the most common are given below:

  • It is not scientific in the sense of a body of knowledge acquired by performing replicated experiments in the laboratory.
  • There is conceptual confusion about what intelligence is and what it isn't; e.g., MI conflates personality, talent, and learning styles with intelligence, and does not value reasoning and academic skills.
  • There are no empirical, experimental studies using psychometrics to establish validity. The proposed intelligences are not proven to be sufficiently independent to warrant separate identification.
  • There is no evidence for educational efficacy and its use may undermine school effectiveness.

Neo-Piagetian criticism

Andreas Demetriou suggests that theories which overemphasize the autonomy of the domains are as simplistic as the theories that overemphasize the role of general intelligence and ignore the domains. He agrees with Gardner that there are indeed domains of intelligence that are relevantly autonomous of each other. Some of the domains, such as verbal, spatial, mathematical, and social intelligence are identified by most lines of research in psychology. In Demetriou's theory, one of the neo-Piagetian theories of cognitive development, Gardner is criticized for underestimating the effects exerted on the various domains of intelligences by the various subprocesses that define overall processing efficiency, such as speed of processing, executive functions, working memory, and meta-cognitive processes underlying self-awareness and self-regulation. All of these processes are integral components of general intelligence that regulate the functioning and development of different domains of intelligence.

The domains are to a large extent expressions of the condition of the general processes, and may vary because of their constitutional differences but also differences in individual preferences and inclinations. Their functioning both channels and influences the operation of the general processes. Thus, one cannot satisfactorily specify the intelligence of an individual or design effective intervention programs unless both the general processes and the domains of interest are evaluated.

Human adaptation to multiple environments

The premise of the multiple intelligences hypothesis, that human intelligence is a collection of specialist abilities, has been criticized for failing to explain human adaptation to most if not all environments in the world. In this context, humans are contrasted with social insects, which do have a distributed "intelligence" of specialists; such insects may spread to climates resembling that of their origin, but the same species never adapts to a wide range of climates, from tropical to temperate, by building different types of nests and learning what is edible and what is poisonous. While some, such as the leafcutter ant, grow fungi on leaves, they do not cultivate different species in different environments with different farming techniques as human agriculture does. It is therefore argued that human adaptability stems from a general ability to falsify hypotheses, make more generally accurate predictions and adapt behavior thereafter, and not from a set of specialized abilities that would only work under specific environmental conditions.

IQ tests

Gardner argues that IQ tests measure only linguistic and logical-mathematical abilities, and he stresses the importance of assessing in an "intelligence-fair" manner. While traditional paper-and-pencil examinations favor linguistic and logical skills, there is a need for intelligence-fair measures that value the distinct modalities of thinking and learning that uniquely define each intelligence.

Psychologist Alan S. Kaufman points out that IQ tests have measured spatial abilities for 70 years. Modern IQ tests are greatly influenced by the Cattell–Horn–Carroll theory which incorporates a general intelligence but also many more narrow abilities. While IQ tests do give an overall IQ score, they now also give scores for many more narrow abilities.

Lack of empirical evidence

Many of Gardner's "intelligences" correlate with the g factor, supporting the idea of a single dominant type of intelligence. Each of the domains proposed by Gardner involved a blend of g, of cognitive abilities other than g, and, in some cases, of non-cognitive abilities or of personality characteristics.

The Johnson O'Connor Research Foundation has tested hundreds of thousands of people to determine their "aptitudes" ("intelligences"), such as manual dexterity, musical ability, spatial visualization, and memory for numbers. There is correlation of these aptitudes with the g factor, but not all are strongly correlated; correlation between the g factor and "inductive speed" ("quickness in seeing relationships among separate facts, ideas, or observations") is only 0.5, considered a moderate correlation.

A critical review of MI theory argues that there is little empirical evidence to support it:

To date, there have been no published studies that offer evidence of the validity of the multiple intelligences. In 1994 Sternberg reported finding no empirical studies. In 2000 Allix reported finding no empirical validating studies, and at that time Gardner and Connell conceded that there was "little hard evidence for MI theory" (2000, p. 292). In 2004 Sternberg and Grigorenko stated that there were no validating studies for multiple intelligences, and in 2004 Gardner asserted that he would be "delighted were such evidence to accrue", and admitted that "MI theory has few enthusiasts among psychometricians or others of a traditional psychological background" because they require "psychometric or experimental evidence that allows one to prove the existence of the several intelligences".

The same review presents evidence to demonstrate that cognitive neuroscience research does not support the theory of multiple intelligences:

... the human brain is unlikely to function via Gardner's multiple intelligences. Taken together the evidence for the intercorrelations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping "what is it?" and "where is it?" neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner's intelligences could operate "via a different set of neural mechanisms" (1999, p. 99). Equally important, the evidence for the "what is it?" and "where is it?" processing pathways, for Kahneman's two decision-making systems, and for adapted cognition modules suggests that these cognitive brain specializations have evolved to address very specific problems in our environment. Because Gardner claimed that the intelligences are innate potentialities related to a general content area, MI theory lacks a rationale for the phylogenetic emergence of the intelligences.

However, more recent research by Branton Shearer in 2017 identified brain structures that activate both in common and separately across Gardner's eight intelligences.

Counter-Enlightenment

From Wikipedia, the free encyclopedia
Divine Justice smites Jean-Baptiste Pigalle's statue of Voltaire. Anonymous, 1773

The Counter-Enlightenment refers to a loose collection of intellectual stances that arose during the European Enlightenment in opposition to its mainstream attitudes and ideals. The Counter-Enlightenment is generally seen to have continued from the 18th century into the early 19th century, especially with the rise of Romanticism. Its thinkers did not necessarily agree to a set of counter-doctrines but instead each challenged specific elements of Enlightenment thinking, such as the belief in progress, the rationality of all humans, liberal democracy, and the increasing secularisation of European society.

Scholars differ on who is to be included among the major figures of the Counter-Enlightenment. In Italy, Giambattista Vico criticised the spread of reductionism and the Cartesian method, which he saw as unimaginative and stifling creative thinking. Decades later, Joseph de Maistre in Sardinia and Edmund Burke in Britain both criticised the anti-religious ideas of the Enlightenment for leading to the Reign of Terror and a totalitarian police state following the French Revolution. The ideas of Jean-Jacques Rousseau and Johann Georg Hamann were also significant to the rise of the Counter-Enlightenment with French and German Romanticism respectively.

In the late 20th century, the concept of the Counter-Enlightenment was popularised by pro-Enlightenment historian Isaiah Berlin as a tradition of relativist, anti-rationalist, vitalist, and organic thinkers stemming largely from Hamann and subsequent German Romantics. While Berlin is largely credited with having refined and promoted the concept, the first known use of the term in English occurred in 1949 and there were several earlier uses of it across other European languages, including by German philosopher Friedrich Nietzsche.

Term usage

Joseph-Marie, Comte de Maistre was one of the more prominent altar-and-throne counter-revolutionaries who vehemently opposed Enlightenment ideas.

Early usage

Despite criticism of the Enlightenment being a widely discussed topic in twentieth- and twenty-first century thought, the term "Counter-Enlightenment" was slow to enter general usage. It was first mentioned briefly in English in William Barrett's 1949 article "Art, Aristocracy and Reason" in Partisan Review. He used the term again in his 1958 book on existentialism, Irrational Man; however, his comment on Enlightenment criticism was very limited. In Germany, the expression "Gegen-Aufklärung" has a longer history. It was probably coined by Friedrich Nietzsche in "Nachgelassene Fragmente" in 1877.

Lewis White Beck used the term in his Early German Philosophy (1969), a book about the Counter-Enlightenment in Germany. Beck claims that a counter-movement arose in Germany in reaction to Frederick II's secular authoritarian state. Johann Georg Hamann and his fellow philosophers believed that a more organic conception of social and political life, a more vitalistic view of nature, and an appreciation for beauty and the spiritual life of man had been neglected by the eighteenth century.

Isaiah Berlin

Isaiah Berlin established this term's place in the history of ideas. He used it to refer to a movement that arose primarily in late 18th- and early 19th-century Germany against the rationalism, universalism and empiricism that are commonly associated with the Enlightenment. Berlin's essay "The Counter-Enlightenment" was first published in 1973, and later reprinted in a collection of his works, Against the Current, in 1981. The term has been more widely used since.

Isaiah Berlin traces the Counter-Enlightenment back to J. G. Hamann (shown).

Berlin argues that, while there were opponents of the Enlightenment outside of Germany (e.g. Joseph de Maistre) and before the 1770s (e.g. Giambattista Vico), Counter-Enlightenment thought did not take hold until the Germans "rebelled against the dead hand of France in the realms of culture, art and philosophy, and avenged themselves by launching the great counter-attack against the Enlightenment." This German reaction to the imperialistic universalism of the French Enlightenment and Revolution, which had been forced on them first by the francophile Frederick II of Prussia, then by the armies of Revolutionary France and finally by Napoleon, was crucial to the shift of consciousness that occurred in Europe at this time, leading eventually to Romanticism. The consequence of this revolt against the Enlightenment was pluralism. In Berlin's view, the opponents of the Enlightenment played a more crucial role than its proponents, some of whom were monists whose political, intellectual and ideological offspring have been terreur and totalitarianism.

Darrin McMahon

In his book Enemies of the Enlightenment (2001), historian Darrin McMahon extends the Counter-Enlightenment back to pre-Revolutionary France and down to the level of "Grub Street". McMahon focuses on the early opponents of the Enlightenment in France, unearthing a long-forgotten "Grub Street" literature in the late 18th and early 19th centuries aimed at the philosophes. He delves into the obscure world of the "low Counter-Enlightenment" that attacked the encyclopédistes and fought to prevent the dissemination of Enlightenment ideas in the second half of the century. Long before the Revolution, many had attacked the Enlightenment for undermining religion and the social and political order, and this later became a major theme of conservative criticism of the Enlightenment. The French Revolution appeared to vindicate the warnings of the anti-philosophes in the decades prior to 1789.

Graeme Garrard

Rousseau is identified by Graeme Garrard as the originator of the Counter-Enlightenment.

Cardiff University professor Graeme Garrard claims that historian William R. Everdell was the first to situate Rousseau as the "founder of the Counter-Enlightenment" in his 1971 dissertation and in his 1987 book, Christian Apologetics in France, 1730–1790: The Roots of Romantic Religion. In his 1996 article, "the Origin of the Counter-Enlightenment: Rousseau and the New Religion of Sincerity", in the American Political Science Review (Vol. 90, No. 2), Arthur M. Melzer corroborates Everdell's view in placing the origin of the Counter-Enlightenment in the religious writings of Jean-Jacques Rousseau, further showing Rousseau as the man who fired the first shot in the war between the Enlightenment and its opponents. Graeme Garrard follows Melzer in his "Rousseau's Counter-Enlightenment" (2003). This contradicts Berlin's depiction of Rousseau as a philosophe (albeit an erratic one) who shared the basic beliefs of his Enlightenment contemporaries. But similar to McMahon, Garrard traces the beginning of Counter-Enlightenment thought back to France and prior to the German Sturm und Drang movement of the 1770s. Garrard's book Counter-Enlightenments (2006) broadens the term even further, arguing against Berlin that there was no single "movement" called "The Counter-Enlightenment". Rather, there have been many Counter-Enlightenments, from the middle of the 18th century to the 20th century among critical theorists, postmodernists and feminists. The Enlightenment has opponents on all points of its ideological compass, from the far left to the far right, and all points in between. Each of the Enlightenment's challengers depicted it as they saw it or wanted others to see it, resulting in a vast range of portraits, many of which are not only different but incompatible.

James Schmidt

The idea of a Counter-Enlightenment has evolved in the following years. The historian James Schmidt questioned the idea of "Enlightenment" and therefore of the existence of a movement opposing it. As the conception of "Enlightenment" has become more complex and difficult to maintain, so has the idea of the "Counter-Enlightenment". Advances in Enlightenment scholarship in the last quarter-century have challenged the stereotypical view of the 18th century as an "Age of Reason", leading Schmidt to speculate that the "Enlightenment" might actually be a creation of its opponents, rather than the reverse. The fact that the term "Enlightenment" was first used in English to refer to a historical period in 1894 supports the argument that it was a late construction projected back onto the 18th century.

The French Revolution

Political thinker Edmund Burke opposed the French Revolution in his Reflections on the Revolution in France.

By the mid-1790s, the Reign of Terror during the French Revolution fueled a major reaction against the Enlightenment. Many leaders of the French Revolution and their supporters made the ideas of Voltaire, Rousseau, and the Marquis de Condorcet, namely reason, progress, anti-clericalism, and emancipation, central themes of their movement. This made a backlash against the Enlightenment unavoidable among opponents of the Revolution. Many counter-revolutionary writers, such as Edmund Burke, Joseph de Maistre and Augustin Barruel, asserted an intrinsic link between the Enlightenment and the Revolution. They blamed the Enlightenment for undermining the traditional beliefs that sustained the ancien régime. As the Revolution became increasingly bloody, the idea of "Enlightenment" was discredited, too. Hence, the French Revolution and its aftermath contributed to the development of Counter-Enlightenment thought.

Edmund Burke was among the first of the Revolution's opponents to relate the philosophes to the instability in France in the 1790s. His Reflections on the Revolution in France (1790) identifies the Enlightenment as the principal cause of the French revolution. In Burke's opinion, the philosophes provided the revolutionary leaders with the theories on which their political schemes were based.

Augustin Barruel's Counter-Enlightenment ideas were well developed before the revolution. He worked as an editor for the anti-philosophes literary journal, L'Année Littéraire. Barruel argues in his Memoirs Illustrating the History of Jacobinism (1797) that the Revolution was the consequence of a conspiracy of philosophes and freemasons.

In Considerations on France (1797), Joseph de Maistre interprets the Revolution as divine punishment for the sins of the Enlightenment. According to him, "the revolutionary storm is an overwhelming force of nature unleashed on Europe by God that mocked human pretensions."

Romanticism

In the 1770s, the "Sturm und Drang" movement started in Germany. It questioned some key assumptions and implications of the Aufklärung, and it was in this period that the term "Romanticism" was first coined. Many early Romantic writers such as Chateaubriand, Friedrich von Hardenberg (Novalis) and Samuel Taylor Coleridge inherited the Counter-Revolutionary antipathy towards the philosophes. All three directly blamed the philosophes in France and the Aufklärer in Germany for devaluing beauty, spirit and history in favour of a view of man as a soulless machine and a view of the universe as a meaningless, disenchanted void lacking richness and beauty. One particular concern of early Romantic writers was the allegedly anti-religious nature of the Enlightenment, since the philosophes and Aufklärer were generally deists, opposed to revealed religion. Some historians nevertheless contend that this view of the Enlightenment as an age hostile to religion is common ground between these Romantic writers and many of their conservative Counter-Revolutionary predecessors. However, few Romantics apart from Chateaubriand, Novalis and Coleridge commented directly on the Enlightenment, since the term itself did not exist at the time and most of their contemporaries ignored it.

The Sleep of Reason Produces Monsters, c. 1797, 21.5 cm × 15 cm. One of the most famous prints of Spaniard Francisco Goya

The historian Jacques Barzun argues that Romanticism has its roots in the Enlightenment. It was not anti-rational, but rather balanced rationality against the competing claims of intuition and the sense of justice. This view is expressed in Goya's Sleep of Reason, in which the nightmarish owl offers the dozing social critic of Los Caprichos a piece of drawing chalk. Even the rational critic is inspired by irrational dream-content under the gaze of the sharp-eyed lynx. Marshall Brown makes much the same argument as Barzun in Romanticism and Enlightenment, questioning the stark opposition between these two periods.

By the middle of the 19th century, the memory of the French Revolution was fading and so was the influence of Romanticism. In this optimistic age of science and industry, there were few critics of the Enlightenment, and few explicit defenders. Friedrich Nietzsche is a notable and highly influential exception. After an initial defence of the Enlightenment in his so-called "middle period" (late 1870s to early 1880s), Nietzsche turned vehemently against it.

Totalitarianism and Fascism

Totalitarianism as a product of the Enlightenment

After World War II, the Enlightenment re-emerged as a key organizing concept in social and political thought and the history of ideas, often suggesting links between counter-enlightenment ideas and fascism.

There was also, conversely, new counter-enlightenment literature, blaming the 18th-century Age of Reason for totalitarianism. The locus classicus of this view is Max Horkheimer and Theodor Adorno's Dialectic of Enlightenment (1947). Adorno and Horkheimer take "enlightenment" in a broad sense as their target, including its specifically 18th-century form, i.e. "the Enlightenment". Dialectic of Enlightenment traces the degeneration of the general concept of enlightenment from ancient Greece (epitomized by the cunning "bourgeois" hero Odysseus) to 20th-century fascism. Adorno and Horkheimer claim that the Enlightenment is epitomized by the Marquis de Sade. However, some philosophers have rejected Adorno and Horkheimer's claims that Sade's moral skepticism is coherent and that it reflects Enlightenment thought.

Nazism and Fascism as products of the Counter-Enlightenment

Many historians and other scholars have argued that fascism was a product of the Counter-Enlightenment itself. For example, Ze'ev Sternhell called fascism "an exacerbated form of the tradition of counter-Enlightenment": with fascism, "Europe created for the first time a set of political movements and regimes whose project was nothing but the destruction of Enlightenment culture." Similar opinions were expressed by such historians as Georges Bensoussan and Enzo Traverso, who noted "Counter-Enlightenment tendencies, combined with industrial and technical progress, a state monopoly over violence, and the rationalisation of methods of domination" and "Counter-Enlightenment (Gegenaufklärung) and the cult of modern technology, a synthesis of Teutonic mythologies and biological nationalism" in Nazism, thus recognizing it as grounded in intellectual traditions of the Counter-Enlightenment, mixed with an "instrumental reason" through which "the methods of industrial production and scientific management were employed" for such irrational goals as racial extermination. Various philosophers and scholars, namely Umberto Eco, Bertrand Russell, Richard Wolin and Jason Stanley, have likewise described fascism as a "revolt against reason" and a force hostile to scientific objectivity and rational inquiry.

Chaos theory

A plot of the Lorenz attractor for values r = 28, σ = 10, b = 8/3
A plot of the 3D Lorenz attractor
An animation of a double-rod pendulum at an intermediate energy showing chaotic behavior. Starting the pendulum from a slightly different initial condition would result in a vastly different trajectory. The double-rod pendulum is one of the simplest dynamical systems with chaotic solutions.

Chaos theory is an interdisciplinary area of scientific study and branch of mathematics. It focuses on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions. Such systems were once thought to exhibit completely random disorder and irregularity. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause or prevent a tornado in Texas.

Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamic systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:

Chaos: When the present determines the future but the approximate present does not approximately determine the future.

Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes.

Introduction

Chaos theory concerns deterministic systems which are predictable for some amount of time and then appear to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: How much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: Chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
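The exponential growth of uncertainty described above can be illustrated with a short calculation. In the sketch below the Lyapunov exponent value is a hypothetical illustration, not one taken from the text; the point is the logarithmic relationship between measurement precision and forecast horizon:

```python
import math

# If error grows like err(t) = err0 * exp(lam * t), the forecast stays
# meaningful until err(t) reaches the tolerated uncertainty.
# lam is a hypothetical Lyapunov exponent chosen purely for illustration.
def prediction_horizon(err0, tolerance, lam):
    return math.log(tolerance / err0) / lam

lam = 0.7
coarse = prediction_horizon(1e-3, 1.0, lam)  # modest measurement error
fine = prediction_horizon(1e-6, 1.0, lam)    # a thousand times better
# A 1000-fold gain in precision extends the horizon only by ln(1000)/lam:
print(coarse, fine, fine - coarse)
```

Because the horizon depends on the logarithm of the initial error, even enormous improvements in measurement buy only a modest amount of extra prediction time, which is why "two or three Lyapunov times" is a practical ceiling.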

Chaotic dynamics

The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 displays sensitivity to initial x positions. Here, two series of x and y values diverge markedly over time from a tiny initial difference.
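The divergence this caption describes can be reproduced in a few lines of code. The starting values and iteration count below are illustrative assumptions chosen to make the separation visible:

```python
# One iteration of the caption's map: x -> 4x(1-x), y -> (x + y) mod 1.
def step(x, y):
    return 4 * x * (1 - x), (x + y) % 1

def trajectory(x0, y0, n):
    points = [(x0, y0)]
    for _ in range(n):
        points.append(step(*points[-1]))
    return points

a = trajectory(0.300000, 0.5, 30)
b = trajectory(0.300001, 0.5, 30)   # initial x differs by only 1e-6
gap = max(abs(p[0] - q[0]) for p, q in zip(a, b))
print(gap)  # far larger than the 1e-6 initial difference
```

After a few dozen iterations the two x-series bear no resemblance to each other, even though the rule generating them is completely deterministic.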

In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:

  1. it must be sensitive to initial conditions,
  2. it must be topologically transitive,
  3. it must have dense periodic orbits.

In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.

If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.

Sensitivity to initial conditions

Lorenz equations used to generate plots for the y variable. The initial conditions for x and z were kept the same but those for y were changed between 1.001, 1.0001 and 1.00001. The values for ρ, σ and β were 45.91, 16 and 4 respectively. As can be seen from the graph, even the slightest difference in initial values causes significant changes after about 12 seconds of evolution in the three cases. This is an example of sensitive dependence on initial conditions.
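The experiment in this caption can be sketched numerically. The code below assumes that the caption's values 45.91, 16 and 4 are the standard Lorenz parameters ρ, σ and β, uses illustrative initial values of 1.0 for x and z, and integrates with a simple fixed-step fourth-order Runge–Kutta scheme:

```python
# Lorenz system with the caption's parameters (assumed to be the
# standard rho, sigma, beta of the Lorenz equations).
def lorenz(s, sigma=16.0, rho=45.91, beta=4.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, s, dt):
    # classic fourth-order Runge-Kutta step
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def run(y0, t_end=20.0, dt=0.001):
    s = (1.0, y0, 1.0)   # x and z held fixed across runs, as in the caption
    states = [s]
    for _ in range(int(t_end / dt)):
        s = rk4_step(lorenz, s, dt)
        states.append(s)
    return states

a = run(1.001)
b = run(1.00001)
sep = [abs(p[1] - q[1]) for p, q in zip(a, b)]
print(sep[0], max(sep))  # a ~1e-3 gap in y grows enormously
```

Comparing the y-components of the two runs shows the trajectories tracking each other closely at first and then separating completely, mirroring the divergence the figure displayed.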

Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.

Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.

As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).

A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time the system is no longer predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future, only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) during the current geologic era, but we cannot predict exactly which day will have the hottest temperature of the year.
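As a minimal numerical sketch (not taken from the text above), the one-dimensional logistic map x → 4 x (1 – x) shows the same stretching of tiny perturbations: two orbits started a distance of 10⁻¹⁰ apart quickly become macroscopically different.

```python
# Sensitive dependence on initial conditions in the logistic map x -> 4x(1 - x):
# two orbits started 1e-10 apart become macroscopically different.

def logistic(x, r=4.0):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

def separation_history(x0, delta, steps):
    """Track |xa - xb| for two orbits started delta apart."""
    xa, xb = x0, x0 + delta
    seps = []
    for _ in range(steps):
        xa, xb = logistic(xa), logistic(xb)
        seps.append(abs(xa - xb))
    return seps

# The starting point 0.2 and the perturbation 1e-10 are arbitrary choices.
seps = separation_history(0.2, 1e-10, 100)
```

The separation starts microscopic (bounded by 4 × 10⁻¹⁰ after one step, since the map's slope never exceeds 4) but within a few dozen iterations it is of order one.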

In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ0, the two trajectories end up diverging at a rate given by

|δZ(t)| ≈ e^(λ t) |δZ0|,

where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE, coupled with the solution's boundedness, is usually taken as an indication that the system is chaotic.
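For a one-dimensional map x → f(x), the Lyapunov exponent reduces to the orbit average of log |f′(x)|; for the logistic map at r = 4 the exact value is known to be ln 2 ≈ 0.693. A rough numerical estimate (the seed x0 = 0.123 and the iteration counts are arbitrary choices):

```python
import math

# Estimate the maximal Lyapunov exponent of the logistic map x -> 4x(1 - x)
# by averaging log|f'(x)| = log|r(1 - 2x)| along a long orbit.
# The exact value for r = 4 is ln 2 ~= 0.693.

def lyapunov_logistic(x0=0.123, r=4.0, n=200_000, transient=1_000):
    x = x0
    for _ in range(transient):                       # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # log of local stretching
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()
```

The estimate converges slowly because log |f′| has large negative spikes whenever the orbit passes near x = 0.5, but the average settles close to ln 2.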

In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.

Non-periodicity

A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.

Topological mixing

Six iterations of a set of states passed through the logistic map. The first iterate (blue) is the initial condition, which essentially forms a circle. The animation shows the first to the sixth iteration of the circular initial conditions. It can be seen that mixing occurs as the iterations progress. The sixth iteration shows that the points are almost completely scattered in the phase space. Had we progressed further, the mixing would have been homogeneous and irreversible. The logistic map is x → 4 x (1 – x). To expand its state-space into two dimensions, a second state y was created as y → x + y if x + y < 1, and y → x + y − 1 otherwise.
The map defined by x → 4 x (1 – x) and y → (x + y) mod 1 also displays topological mixing. Here, the blue region is transformed by the dynamics first to the purple region, then to the pink and red regions, and eventually to a cloud of vertical lines scattered across the space.

Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
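A rough illustration of this spreading, using the two-dimensional map quoted above, x → 4 x (1 – x) and y → (x + y) mod 1; the grid size and iteration count are arbitrary choices:

```python
# A small square of initial conditions spreads over phase space under
# (x, y) -> (4x(1 - x), (x + y) mod 1), illustrating topological mixing.

def step(x, y):
    return 4.0 * x * (1.0 - x), (x + y) % 1.0

# 20 x 20 grid of points inside a small square near (0.2, 0.3)
points = [(0.2 + 0.001 * i, 0.3 + 0.001 * j)
          for i in range(20) for j in range(20)]

for _ in range(12):                      # a handful of iterations suffices
    points = [step(x, y) for x, y in points]

xs = [p[0] for p in points]
ys = [p[1] for p in points]
spread_x = max(xs) - min(xs)             # initially 0.019
spread_y = max(ys) - min(ys)
```

After a dozen iterations the square, initially 0.02 on a side, has been stretched and folded across a large part of the unit square.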

Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
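The doubling example can be made concrete; because multiplying by 2 is exact in binary floating point, the exponential growth of the separation is exact too, while both orbits simply march off to infinity instead of mixing.

```python
# The doubling map x -> 2x on the real line: sensitive dependence
# (separations double every step) but no topological mixing --
# every nonzero point just runs off toward +/- infinity.

def double(x):
    return 2.0 * x

xa, xb = 1.0, 1.0 + 2.0 ** -20   # two nearby starting points
for _ in range(20):
    xa, xb = double(xa), double(xb)

separation = xb - xa             # grew from 2**-20 by a factor of 2**20
```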

Topological transitivity

A map f is said to be topologically transitive if, for any pair of non-empty open sets U and V, there exists k > 0 such that f^k(U) ∩ V ≠ ∅. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two disjoint open invariant sets.

An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.

Density of periodic orbits

For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
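The period-2 orbit quoted above can be verified directly: the two points are exactly (5 − √5)/8 ≈ 0.3454915 and (5 + √5)/8 ≈ 0.9045085, and the map swaps them.

```python
import math

# Verify the exact period-2 orbit of the logistic map f(x) = 4x(1 - x):
# the points (5 - sqrt 5)/8 and (5 + sqrt 5)/8 map onto each other.

def f(x):
    return 4.0 * x * (1.0 - x)

a = (5.0 - math.sqrt(5.0)) / 8.0   # ~ 0.3454915
b = (5.0 + math.sqrt(5.0)) / 8.0   # ~ 0.9045085
```

A quick algebraic check: 4a(1 − a) = 4(5 − √5)(3 + √5)/64 = (10 + 2√5)/16 = (5 + √5)/8, which is b, and symmetrically f(b) = a.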

Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.

Strange attractors

The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.

Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.

An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.

Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.

Coexisting attractors

Coexisting chaotic and non-chaotic attractors within the generalized Lorenz model. There are 128 orbits in different colors, beginning with different initial conditions for dimensionless time between 0.625 and 5 and a heating parameter r = 680. Chaotic orbits recurrently return close to the saddle point at the origin. Nonchaotic orbits eventually approach one of two stable critical points, as shown with large blue dots. Chaotic and nonchaotic orbits occupy different regions of attraction within the phase space.

In contrast to single-type chaotic solutions, studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic attractors may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view that "weather is chaotic".

Minimum complexity of a chaotic system

Bifurcation diagram of the logistic map x → r x (1 – x). Each vertical slice shows the attractor for a specific value of r. The diagram displays period-doubling as r increases, eventually producing chaos. Darker points are visited more frequently.
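The period-doubling visible in such a diagram can be checked by counting how many distinct values an orbit settles onto; the parameter values 3.2 (period 2) and 3.5 (period 4) and the sampling choices below are illustrative, not taken from the text.

```python
# Count the points of the logistic-map attractor x -> r x (1 - x)
# on either side of a period-doubling bifurcation.

def attractor_points(r, x0=0.2, transient=10_000, sample=64, digits=6):
    x = x0
    for _ in range(transient):        # let the orbit settle onto the attractor
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1.0 - x)
        seen.add(round(x, digits))    # collect the distinct values visited
    return sorted(seen)

p2 = attractor_points(3.2)            # expect a 2-cycle
p4 = attractor_points(3.5)            # expect a 4-cycle
```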

Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.

The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:

dx/dt = σ (y − x)
dy/dt = x (ρ − z) − y
dz/dt = x y − β z

where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms and only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface, and therefore solutions are well behaved.
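A minimal integration sketch of the Lorenz system, assuming the classic textbook parameter values σ = 10, ρ = 28, β = 8/3 (these particular values are not fixed by the text above): two almost-identical initial states drift apart on the attractor while both remain bounded.

```python
# RK4 integration of the Lorenz system, showing two nearly identical
# initial states diverging while both stay on the bounded attractor.
# Parameters sigma = 10, rho = 28, beta = 8/3 are the common textbook choice.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x),       # two linear terms
            x * (rho - z) - y,     # two linear terms, one quadratic (xz)
            x * y - beta * z)      # one linear term, one quadratic (xy)

def rk4_step(state, dt):
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)         # perturbed by one part in 10^8
dt, max_sep = 0.01, 0.0
for _ in range(4000):              # integrate to t = 40
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    max_sep = max(max_sep, abs(a[0] - b[0]))
```

The separation in x grows roughly like e^(λt) with λ ≈ 0.9 until it saturates at the size of the attractor, while each trajectory individually stays bounded.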

While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can still exhibit some chaotic properties.

The above set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability.

Chaos and linear systems

Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite-dimensional. A theory of linear chaos is being developed in functional analysis. Quantum mechanics is often considered a prime example of a linear, non-chaotic theory that damps out chaotic behaviour in the same way that viscosity damps out turbulence; however, this is not the case for quantum-mechanical systems with infinite degrees of freedom, such as strongly correlated systems, which do exhibit forms of nanoscale turbulence.

Other characteristics of chaos

Infinite dimensional maps

The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates the interaction between spatially distributed maps:

ψ_(n+1)(r, t) = ∫ K(r − r′, t) f[ψ_n(r′, t)] dr′,

where the kernel K(r − r′, t) is a propagator derived as a Green's function of a relevant physical system, and f[ψ] might be a logistic-map-like nonlinearity or a complex map. For examples of complex maps, the Julia set f[ψ] = ψ² or the Ikeda map may serve. When wave propagation problems at a distance L = ct with wavelength λ = 2π/k are considered, the kernel K may take the form of the Green's function for the Schrödinger equation:

K(r − r′, L) = (ik/2πL) exp[ik(r − r′)²/(2L)].

Spontaneous order

Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.

Moreover, from the theoretical physics standpoint, dynamical chaos itself, in its most general manifestation, is a spontaneous order. The essence here is that most orders in nature arise from the spontaneous breakdown of various symmetries. This large family of phenomena includes elasticity, superconductivity, ferromagnetism, and many others. According to the supersymmetric theory of stochastic dynamics, chaos, or more precisely, its stochastic generalization, is also part of this family. The corresponding symmetry being broken is the topological supersymmetry which is hidden in all stochastic (partial) differential equations, and the corresponding order parameter is a field-theoretic embodiment of the butterfly effect.

Combinatorial (or complex) chaos

There are also definitions of chaos that do not require the sensitive-dependence property, such as combinatorial chaos (i.e., chaos obtained by recursively applying a discrete combinatorial action). This is comparable to the chaos generated by cellular automata. It is important because this type of chaos can be computationally universal, i.e., equivalent to a Turing machine: one can execute computations with such dynamical systems, and since the halting problem is undecidable, some of those computations may never end. This is ultimately a very different way for a system to be unpredictable.

History

Barnsley fern created using the chaos game. Natural forms (ferns, clouds, mountains, etc.) may be recreated through an iterated function system (IFS).

James Clerk Maxwell was the first scientist to emphasize the importance of initial conditions, and he is seen as being one of the earliest to discuss chaos theory, with work in the 1860s and 1870s. In the 1880s, while studying the three-body problem, Henri Poincaré found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.

Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Experimentalists and mathematicians had encountered turbulence in fluid motion, chaotic behaviour in society and economy, nonperiodic oscillation in radio circuits, and fractal patterns in nature without the benefit of a theory to explain what they were seeing.

Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, which is smooth and continuous and was the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments, such as the jumpy, erratic behavior of the logistic map. Such observations point to stochastic or nonlinear dynamical systems, whose time evolution can be non-differentiable or even discontinuous.

What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. In 1959, Boris Valerianovich Chirikov proposed a criterion for the emergence of classical chaos in Hamiltonian systems (the Chirikov criterion). He applied this criterion to explain some experimental results on plasma confinement in open mirror traps. This is regarded as the very first physical theory of chaos to succeed in explaining a concrete experiment, and Boris Chirikov himself is considered a pioneer in classical and quantum chaos.

The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.

Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.

Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborators Ellen Fetter and Margaret Hamilton were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to 3 digits, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.

In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.

In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.

In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.

In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.

In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.

Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.

Also in 1987 James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.

The availability of cheaper, more powerful computers has broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, and pandemic crisis management.

The sensitive dependence on initial conditions (i.e., butterfly effect) has been illustrated using the following folklore:

For want of a nail, the shoe was lost.
For want of a shoe, the horse was lost.
For want of a horse, the rider was lost.
For want of a rider, the battle was lost.
For want of a battle, the kingdom was lost.
And all for the want of a horseshoe nail.

Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability and that the verse implicitly suggests that subsequent small events will not reverse the outcome. Based on the analysis, the verse only indicates divergence, not boundedness. Boundedness is important for the finite size of a butterfly pattern. The characteristic of the aforementioned verse was described as "finite-time sensitive dependence".

Applications

A conus textile shell, similar in appearance to Rule 30, a cellular automaton with chaotic behaviour

Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list, as new applications are appearing.

Cryptography

Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives, including image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on unimodal chaotic maps, and a large portion of them use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret-key or symmetric-key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. However, many of these DNA–chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient.
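The key idea can be illustrated with a deliberately toy stream cipher: the initial condition and control parameter of a logistic map serve as the secret key, and the orbit supplies a keystream XORed with the message bytes. This is a sketch of the general scheme only; real chaos-based ciphers are far more elaborate, and this toy version is not secure.

```python
# Toy chaos-based stream cipher (NOT secure -- illustration only).
# The secret key is the pair (x0, r) of the logistic map x -> r x (1 - x).

def keystream(n, x0, r=3.99):
    """Derive n bytes from a logistic-map orbit keyed by (x0, r)."""
    x, out = x0, []
    for _ in range(100):             # discard a transient, hiding the seed
        x = r * x * (1.0 - x)
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data: bytes, x0: float) -> bytes:
    """XOR data with the keystream; applying it twice restores the data."""
    ks = keystream(len(data), x0)
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"attack at dawn"
ct = xor_cipher(msg, 0.31415926)     # encrypt with the secret seed
pt = xor_cipher(ct, 0.31415926)      # decrypt with the same seed
```

Because XOR with the same keystream is an involution, decryption is just re-encryption with the same key, mirroring the symmetric-key structure described above.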

Robotics

Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model. Chaotic dynamics have been exhibited by passive walking biped robots.

Biology

For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.

As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that exists only in the model. Hence both constraints in the model and duplicate time-series data for comparison are helpful in constraining the model to something close to reality, as in Perry & Wall (1984). Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies. Adding variables exaggerates this: chaos is more common in models incorporating additional variables that reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, which in turn helped shape the entire field. Even in a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in the pathogen population.

Economics

It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.

Chaos can be found in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. The same technique was then employed to detect transitions from laminar (regular) to turbulent (chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates, as well as in embedding shocks due to external events such as COVID-19.

Finite predictability in weather and climate

Due to the sensitive dependence of solutions on initial conditions (SDIC), also known as the butterfly effect, chaotic systems like the Lorenz 1963 model imply a finite predictability horizon. This means that while accurate predictions are possible over a finite time period, they are not feasible over an infinite time span. Considering the nature of Lorenz's chaotic solutions, the committee led by Charney et al. in 1966 extrapolated a doubling time of five days from a general circulation model, suggesting a predictability limit of two weeks. This connection between the five-day doubling time and the two-week predictability limit was also recorded in a 1969 report by the Global Atmospheric Research Program (GARP). To acknowledge the combined direct and indirect influences from the Mintz and Arakawa model and Lorenz's models, as well as the leadership of Charney et al., Shen et al. refer to the two-week predictability limit as the "Predictability Limit Hypothesis," drawing an analogy to Moore's Law.

AI-extended modeling framework

In AI-driven large language models, responses can exhibit sensitivities to factors like alterations in formatting and variations in prompts. These sensitivities are akin to butterfly effects. Although classifying AI-powered large language models as classical deterministic chaotic systems poses challenges, chaos-inspired approaches and techniques (such as ensemble modeling) may be employed to extract reliable information from these expansive language models (see also "Butterfly Effect in Popular Culture").
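As a minimal illustration of the ensemble idea, one can pose the same question in several phrasings and take a majority vote, which damps sensitivity to any single formatting choice. The "model" below is a toy stand-in whose answer flips on a trivial formatting change, mimicking prompt sensitivity; it is not a real language-model API:

```python
from collections import Counter

def ensemble_answer(model, prompt_variants):
    """Majority vote over responses to several phrasings of the same
    question; `model` is any callable mapping a prompt to an answer."""
    answers = [model(p) for p in prompt_variants]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

# Toy stand-in that is sensitive to a minor formatting detail.
def toy_model(prompt):
    return "yes" if prompt.endswith("?") else "no"

answer, agreement = ensemble_answer(
    toy_model, ["Is 17 prime?", "Is 17 prime", "17 is prime, right?"]
)
```

The agreement score doubles as a crude reliability signal: answers that survive rephrasing are more likely to reflect the model's stable behavior than a formatting artifact.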

Other areas

In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
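The chaos-injection trick mentioned for PSO can be sketched by replacing the algorithm's uniform random coefficients with iterates of the logistic map at r = 4, whose orbit is chaotic but densely fills (0, 1). This is an illustrative sketch of the general idea, not the specific published variant:

```python
import numpy as np

def chaotic_pso(f, lo, hi, n=20, iters=200, seed=1):
    """Minimize f on [lo, hi] with particle swarm optimization whose
    stochastic coefficients come from logistic maps (r = 4) instead of
    a uniform RNG -- the chaos-injection idea, in sketch form."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)          # particle positions
    v = np.zeros(n)                     # particle velocities
    z1 = rng.uniform(0.1, 0.9, n)       # chaotic driver, cognitive term
    z2 = rng.uniform(0.1, 0.9, n)       # chaotic driver, social term
    pbest, pval = x.copy(), f(x)
    g = pbest[np.argmin(pval)]          # global best position
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        z1 = 4.0 * z1 * (1.0 - z1)      # logistic-map update in (0, 1)
        z2 = 4.0 * z2 * (1.0 - z2)
        v = w * v + c1 * z1 * (pbest - x) + c2 * z2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)]
    return g

# Minimum of (x - 3)^2 on [-10, 10] is at x = 3.
best = chaotic_pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

The intuition is that chaotic sequences are less prone to the repetitive sampling patterns that let a swarm stagnate at a premature consensus point.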

Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from a lack of reproducibility, poor external validity, and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass, as well as Mandell and Selz, found that no EEG study has yet indicated the presence of strange attractors or other signs of chaotic behavior.

Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
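For a system whose governing equations are known, the Lyapunov exponent that eluded Redington and Reidbord is easy to estimate; the difficulty lies in recovering it from short, noisy observational data like heartbeat intervals. A sketch for the logistic map, where the derivative is available analytically:

```python
import math

def lyapunov_logistic(r, x0=0.2, transient=1000, n=50_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1 - x) by averaging log|f'(x)| = log|r*(1 - 2x)| along
    the orbit. A positive value is the signature of chaos."""
    x = x0
    for _ in range(transient):      # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam_chaotic = lyapunov_logistic(3.9)   # chaotic regime: positive
lam_periodic = lyapunov_logistic(3.2)  # stable 2-cycle: negative
```

Time-series methods must reconstruct this quantity from the data alone, via delay embedding and the divergence of nearby trajectories, which is where ambiguities like those in the Redington and Reidbord study arise.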

In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.

Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.
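The period-doubling route to chaos that Metcalf and Allen claimed to observe can be demonstrated unambiguously in the logistic map, where the attractor's period is easy to measure as the control parameter increases (a toy illustration of the phenomenon, not a model of their data):

```python
def attractor_period(r, x0=0.3, transient=2000, max_period=16, tol=1e-6):
    """Period of the logistic-map attractor at parameter r: iterate
    past the transient, then find the smallest p with
    |x_{n+p} - x_n| < tol (0 means no short period was found)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(max_period + 1):
        orbit.append(x)
        x = r * x * (1.0 - x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return 0

# Successive parameter values pick up the period-doubling cascade.
periods = [attractor_period(r) for r in (2.8, 3.2, 3.5)]
```

In a laboratory analogue, the inter-feeding interval plays the role of r; the dispute is over whether the behavioral data actually trace out this cascade or merely become irregular.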

By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open, complex adaptive systems with fundamentally natural nonlinear structures, subject to internal and external forces that may introduce chaos. For instance, team building and group development are increasingly being researched as inherently unpredictable systems, since the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.

BML traffic model: red and blue cars take turns to move; red cars move only upwards, and blue cars only rightwards. On each turn, every car of the active colour advances one step if there is no car in the cell in front of it. Here the model has self-organized into a somewhat geometric pattern, with some traffic jams and some areas where cars can move at top speed.

Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it forms. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the BML traffic model above).
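The BML model described above is easy to simulate. A minimal sketch, assuming periodic boundaries (a common choice), with 1 for red cars (moving up) and 2 for blue cars (moving right):

```python
import numpy as np

RED, BLUE = 1, 2  # red moves up (row - 1); blue moves right (col + 1)

def bml_step(grid, color):
    """Advance every car of one colour by one cell on a periodic grid,
    but only where the destination cell is currently empty."""
    axis, d = (0, -1) if color == RED else (1, 1)
    dest = np.roll(grid, -d, axis=axis)      # contents of each target cell
    movers = (grid == color) & (dest == 0)   # cars free to move
    new = grid.copy()
    new[movers] = 0
    new[np.roll(movers, d, axis=axis)] = color
    return new

# Two cars on a 4x4 grid: one red, one blue.
g = np.zeros((4, 4), dtype=int)
g[2, 1] = RED
g[1, 3] = BLUE
g = bml_step(g, RED)    # the red car moves from row 2 up to row 1
g = bml_step(g, BLUE)   # the blue car wraps from column 3 to column 0
```

Iterating the two half-steps at higher car densities reproduces the free-flow, jammed, and mixed geometric phases that make the model a favorite example of self-organization.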

Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
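A standard, and easily misapplied, chaotic-signature test behind many such studies is the Grassberger–Procaccia correlation sum C(ε), whose log–log slope against ε estimates the correlation dimension; a low, non-integer slope on an embedded series is what gets read as low-dimensional chaos. A sketch on a one-dimensional uniform sample, where the slope should come out close to 1:

```python
import numpy as np

def correlation_sum(x, eps):
    """Grassberger-Procaccia correlation sum C(eps): the fraction of
    distinct point pairs in the series closer than eps."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    iu = np.triu_indices(len(x), k=1)   # count each pair once
    return (d[iu] < eps).mean()

# For a 1-D uniform sample, C(eps) scales like eps^1, so the log-log
# slope (the correlation dimension estimate) should be close to 1.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 2000)
e1, e2 = 0.01, 0.1
slope = np.log(correlation_sum(u, e2) / correlation_sum(u, e1)) / np.log(e2 / e1)
```

The subjectivity criticized above enters in the choices this sketch hides: the embedding dimension and delay, the range of ε used for the fit, and the record length needed for the estimate to converge.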
