
Sunday, February 8, 2026

Theory of everything

From Wikipedia, the free encyclopedia

A theory of everything (TOE) or final theory is a hypothetical coherent theoretical framework of physics containing all physical principles. The scope of the concept of a "theory of everything" varies. The original technical concept referred to unification of the four fundamental interactions: electromagnetism, strong and weak nuclear forces, and gravity. Finding such a theory of everything is one of the major unsolved problems in physics. Numerous popular books apply the words "theory of everything" to more expansive concepts such as predicting everything in the universe from logic alone, complete with discussions on how this is not possible.

Starting with Isaac Newton's unification of terrestrial gravity, responsible for weight, with celestial gravity, responsible for planetary orbits, concepts in fundamental physics have been successively unified. The phenomena of electricity and magnetism were combined by James Clerk Maxwell's theory of electromagnetism, and Albert Einstein's theory of relativity explained how they are connected. By the 1930s, Paul Dirac had combined relativity and quantum mechanics and, working with other physicists, developed quantum electrodynamics, which unites quantum mechanics and electromagnetism. Work on nuclear and particle physics led to the discovery of the strong and weak nuclear forces, which were combined within quantum field theory into the Standard Model of particle physics, a unification of all the fundamental forces except gravity. General relativity provides a theoretical framework for understanding gravity across scales from the laboratory to planets to the universe as a whole, but it has not been successfully unified with quantum mechanics.

General relativity and quantum mechanics have been repeatedly validated in their separate fields of relevance. Since the usual domains of applicability of general relativity and quantum mechanics are so different, most situations require that only one of the two theories be used. The two theories are considered incompatible in regions of extremely small scale – the Planck scale – such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve the incompatibility, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of general relativity and quantum mechanics into a seamless whole: a theory of everything may be defined as a comprehensive theory that, in principle, would be capable of describing all physical phenomena in the universe.

In pursuit of this goal, quantum gravity has become one area of active research. One example is string theory, which evolved into a candidate for the theory of everything, but not without drawbacks (most notably, its apparent lack of currently testable predictions) and controversy. String theory posits that at the beginning of the universe (up to 10^-43 seconds after the Big Bang), the four fundamental forces were once a single fundamental force. According to string theory, every particle in the universe, at its most ultramicroscopic level (the Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory further claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up quark is a type of string vibrating another way, and so forth). String theory/M-theory proposes six or seven dimensions of spacetime in addition to the four common dimensions, for a ten- or eleven-dimensional spacetime.

Name

The scientific use of the term theory of everything appeared in the title of a 1986 article by physicist John Ellis, though it had already been mentioned by John Henry Schwarz in conference proceedings in 1985.

Historical antecedents

Antiquity to 19th century

Archimedes was possibly the first philosopher to have described nature with axioms (or principles) and then to have deduced new results from them. Once Isaac Newton proposed his universal law of gravitation, mathematician Pierre-Simon Laplace suggested that such laws could in principle allow deterministic prediction of the future state of the universe. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them.

In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Mathematical Principles of Natural Philosophy dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation. Newton achieved the first great unification in physics, and he further is credited with laying the foundations of future endeavors for a grand unified theory.

An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

— Essai philosophique sur les probabilités by Pierre-Simon Laplace, Introduction. 1814

Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics. Even ignoring quantum mechanics, chaos theory is sufficient to guarantee that the future of any sufficiently complex mechanical or astronomical system is unpredictable.

In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism, which achieved the second great unification in physics. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter.

In his experiments of 1849–1850, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism. However, he found no connection.

Early 20th century

In the late 1920s, the then new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known".

After 1915, when Albert Einstein published his theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with renewed interest. In Einstein's day, the strong and weak forces had not yet been discovered, yet he found the potential existence of two other distinct forces, gravity and electromagnetism, far more alluring. This launched his 40-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand, underlying principle. During the last few decades of his life, this ambition alienated Einstein from the mainstream of physics, which was instead far more excited about the emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, David Hilbert, Theodor Kaluza, Oskar Klein (see Kaluza–Klein theory), and most notably, Albert Einstein and his collaborators. Einstein searched in earnest for, but ultimately failed to find, a unifying theory (see Einstein–Maxwell–Dirac equations).

Late 20th century and the nuclear interactions

In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.

Gravity and electromagnetism are able to coexist as entries in a list of classical forces, but for many years it seemed that gravity could not be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the 20th century, focused on understanding the three forces described by quantum mechanics: electromagnetism and the weak and strong forces. The first two were combined in 1967–1968 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the electroweak force. Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses (80.4 GeV/c^2 and 91.2 GeV/c^2, respectively), whereas the photon, which carries the electromagnetic force, is massless. At higher energies W bosons and Z bosons can be created easily and the unified nature of the force becomes apparent.
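As a rough sketch of how the broken symmetry shows up in the masses (standard textbook tree-level relations, quoted here only for orientation, not a result specific to this article):

  m_W = \tfrac{1}{2} g v, \qquad m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\, v = \frac{m_W}{\cos\theta_W}, \qquad m_\gamma = 0,

where v ≈ 246 GeV is the Higgs vacuum expectation value, g and g' are the SU(2) and U(1)_Y gauge couplings, and θ_W is the weak mixing angle. Above energies of order v, the distinction between the electromagnetic and weak interactions effectively disappears.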

While the strong and electroweak forces coexist under the Standard Model of particle physics, they remain distinct. Thus, the pursuit of a theory of everything remained unsuccessful: neither a unification of the strong and electroweak forces – which Laplace would have called 'contact forces' – nor a unification of these forces with gravitation had been achieved.

Modern physics

[Figures: a depiction of the cGh cube; the same relationships depicted as a Venn diagram.]

Conventional sequence of theories

A theory of everything would unify all the fundamental interactions of nature: gravitation, the strong interaction, the weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the theory of everything should also predict all the different kinds of particles possible. The usual assumed path of theories is given in the following graph, where each unification step leads one level up on the graph.





Theory of everything
    Quantum gravity
        Space curvature (general relativity)
    Electronuclear force (Grand Unified Theory)
        Standard Model of particle physics
            Strong interaction: SU(3)
            Electroweak interaction: SU(2) × U(1)_Y
                Weak interaction: SU(2)
                Electromagnetism: U(1)_EM
                    Electricity
                    Magnetism

(In this tree, each theory unifies the entries indented beneath it; the standard model of cosmology, also part of the original diagram, combines the Standard Model of particle physics with space curvature, i.e. general relativity.)
In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10^16 GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10^19 GeV.
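For orientation, the Planck energy quoted above can be recovered directly from the fundamental constants. The short Python sketch below is an illustrative calculation (not part of the original article): it evaluates E_P = sqrt(ħ c^5 / G) and compares the result with the electroweak and GUT scales.

  import math

  # Fundamental constants in SI units
  hbar = 1.054571817e-34   # reduced Planck constant, J*s
  c = 2.99792458e8         # speed of light, m/s
  G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
  GeV = 1.602176634e-10    # one GeV expressed in joules

  # Planck energy: E_P = sqrt(hbar * c^5 / G)
  E_planck_GeV = math.sqrt(hbar * c**5 / G) / GeV

  print(f"Planck energy  ~ {E_planck_GeV:.2e} GeV")   # about 1.2e19 GeV
  print("GUT scale      ~ 1e16 GeV (predicted)")
  print("Electroweak    ~ 1e2  GeV (observed)")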

Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10^16 GeV, far greater than could be reached by any currently feasible particle accelerator. Although the simplest grand unified theories have been experimentally ruled out, the idea of a grand unified theory, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric grand unified theories seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to grand unified theory physics (although it does not seem to form an inevitable part of the theory). Yet grand unified theories are clearly not the final answer; both the current Standard Model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies.

The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity. Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity, and thus no accepted theory of everything, has emerged with observational evidence. It is usually assumed that the theory of everything will also solve the remaining problems of grand unified theories.

In addition to explaining the forces listed in the graph, a theory of everything may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the Standard Model. However, the existence of these forces and particles has not been proven.

String theory and M-theory

Unsolved problem in physics
Is string theory, superstring theory, or M-theory, or some other variant on this theme, a step on the road to a "theory of everything", or just a blind alley?

Since the 1990s, some physicists such as Edward Witten believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric eleven-dimensional supergravity, is the theory of everything. There is no widespread consensus on this issue.

One remarkable property of string/M-theory is that seven extra dimensions are required for the theory's consistency, on top of the four dimensions in our universe. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a 5-dimensional universe, with one space dimension small and curled up, looks from the 4-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry, which together with extra dimensions are the two main proposals for resolving the hierarchy problem of the Standard Model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a 4-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms.
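Schematically (one common textbook parametrization, with convention-dependent factors suppressed), the Kaluza–Klein idea is that a five-dimensional metric decomposes, from the four-dimensional point of view, into the familiar fields:

  g^{(5)}_{MN} \;\sim\; \begin{pmatrix} g_{\mu\nu} + \phi\, A_\mu A_\nu & \phi\, A_\mu \\ \phi\, A_\nu & \phi \end{pmatrix},

with g_{\mu\nu} the ordinary four-dimensional metric (gravity), A_\mu a vector field obeying Maxwell-like equations, and \phi a scalar (the radion) measuring the size of the small curled-up dimension.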

Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the Standard Model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory or (sometimes equivalently) in F-theory. String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations. On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes and allowing for topology-changing processes. It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality.

In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible 4-dimensional universes is incredibly large. The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10^500), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape.

One proposed solution is that many or all of these possibilities are realized in one or another of a huge number of universes, but that only a small number of them are habitable. Hence what we normally conceive as the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory, arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience/philosophy. Others disagree, and string theory remains an active topic of investigation in theoretical physics.

Loop quantum gravity

Current research on loop quantum gravity may eventually play a fundamental role in a theory of everything, but that is not its primary aim. Loop quantum gravity also introduces a lower bound on the possible length scales.

There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks), with correct parity properties, has been modelled by Sundance Bilson-Thompson using preons constituted of braids of spacetime as the building blocks. However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Use of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations.

This model leads to an interpretation of electric and color charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge).

Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, color, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin.

Present status

At present, there is no candidate theory of everything that includes the Standard Model of particle physics and general relativity and that, at the same time, is able to calculate the fine-structure constant or the mass of the electron. Most particle physicists expect that the outcomes of ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – will be needed in order to provide further input for a theory of everything.

Other proposals

The search for a Theory of Everything is hindered by fundamental incompatibility between the noncommutative and discrete operator algebra structures underlying quantum mechanics and the commutative continuous geometric nature of classical spacetime in general relativity. Reconciling the background-independent, diffeomorphism-invariant formulation of gravity with the fixed-background, time-ordered framework of quantum theory raises profound conceptual issues such as the problem of time and quantum measurement. While a fully successful and experimentally confirmed unified field theory remains elusive, several recent proposals have been advanced, each employing distinct mathematical structures and physical assumptions.

Twistor theory, developed by Roger Penrose, reinterprets the structure of spacetime and fundamental particles through complex geometric objects called twistors. Instead of treating spacetime points as fundamental, twistor theory encodes physical fields and particles into complex projective spaces, aiming to unify quantum theory and general relativity in a geometric framework. Twistors provide potential descriptions of massless fields and scattering amplitudes and have influenced modern approaches in mathematical physics and quantum field theory, including advances in scattering amplitude calculations. Twistor theory has not yet yielded a complete unified field theory.

Alain Connes developed a geometric framework known as noncommutative geometry in which spacetime is extended via noncommutative operator algebras. When combined with spectral triples, this approach can reproduce features of the Standard Model, including the Higgs field, from purely geometric data.

Asymptotic safety, a concept developed by Steven Weinberg in 1976 and also known as Quantum Einstein Gravity or nonperturbative renormalizability, suggests that gravity could find a role in quantum theory if its behavior at very high energies stabilizes at a nontrivial ultraviolet (UV) fixed point. The idea has been studied through functional renormalization group methods and on the lattice, and has been applied in cosmology, particle physics, black hole physics, and quantum gravity. Although substantial numerical evidence exists that such a fixed point occurs in lower-dimensional constructions and in numerical studies, a rigorous proof, even for four-dimensional spacetime, remains to be found.
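Stated schematically (a sketch of the standard definition rather than a result from the proposals above), asymptotic safety in four dimensions requires that the dimensionless Newton coupling g(k) = G(k) k^2 flow, as the renormalization scale k is raised, to a finite, nontrivial ultraviolet fixed point:

  k \frac{\partial g}{\partial k} = \beta_g(g), \qquad \beta_g(g_*) = 0, \qquad 0 < g_* < \infty,

so that G(k) \approx g_*/k^2 remains well behaved as k \to \infty and the theory stays predictive at arbitrarily high energies.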

Arguments against

In parallel to the intense search for a theory of everything, various scholars have debated the possibility of its discovery.

Gödel's incompleteness theorem

A number of scholars claim that Gödel's incompleteness theorem suggests that attempts to construct a theory of everything are bound to fail. Gödel's theorem, informally stated, asserts that any formal theory sufficient to express elementary arithmetical facts and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory.

The Benedictine priest and science writer Stanley Jaki, in his 1966 book The Relevance of Physics, suggested that Gödel's theorem dooms searches for a deterministic "theory of everything" at least as a consistent non-trivial mathematical theory.

Freeman Dyson has stated that "Gödel's theorem implies that pure mathematics is inexhaustible. No matter how many problems we solve, there will always be other problems that cannot be solved within the existing rules. […] Because of Gödel's theorem, physics is inexhaustible too. The laws of physics are a finite set of rules, and include the rules for doing mathematics, so that Gödel's theorem applies to them."

Stephen Hawking originally believed that a theory of everything could be found, but after considering Gödel's Theorem, he concluded that one was not obtainable: "Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind."

Jürgen Schmidhuber (1997) has argued against this view; he asserts that Gödel's theorems are irrelevant for computable physics. In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not prevent formal theories of everything describable by very few bits of information.

A related critique was offered by Solomon Feferman and others. Douglas S. Robertson offers Conway's Game of Life as an example: the underlying rules are simple and complete, but there are formally undecidable questions about the game's behavior. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws.
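To make Robertson's example concrete, here is a minimal Python sketch of Conway's Game of Life (an illustration written for this post, not code from any cited source): the complete rule set fits in a few lines, yet general questions about the long-term fate of patterns are formally undecidable.

  from collections import Counter

  def step(live):
      """Advance one generation; `live` is the set of (x, y) cells currently alive."""
      # Count live neighbours of every cell adjacent to a live cell.
      counts = Counter((x + dx, y + dy)
                       for (x, y) in live
                       for dx in (-1, 0, 1)
                       for dy in (-1, 0, 1)
                       if (dx, dy) != (0, 0))
      # Birth on exactly 3 neighbours; survival on 2 or 3.
      return {cell for cell, n in counts.items()
              if n == 3 or (n == 2 and cell in live)}

  # A "glider" keeps translating itself across the grid indefinitely.
  glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
  for _ in range(4):
      glider = step(glider)
  print(sorted(glider))   # the same shape, shifted by one cell diagonally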

Fundamental limits in accuracy

No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasion.

Definition of fundamental laws

There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe. One view is the hard reductionist position that the theory of everything is the fundamental law and that all other theories that apply within the universe are a consequence of the theory of everything. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems, are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a theory of everything.

Impossibility of calculation

Weinberg points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a theory of everything must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics. Difficulties in creating a theory of everything often begin to appear when combining quantum mechanics with the theory of general relativity, as the equations of quantum mechanics begin to falter when the force of gravity is applied to them.

Human intelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_intelligence

Human intelligence is the intellectual capability of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness. Using their intelligence, humans are able to learn, form concepts, understand, and apply logic and reason. Human intelligence is also thought to encompass their capacities to recognize patterns, plan, innovate, solve problems, make decisions, retain information, and use language to communicate.

There are conflicting ideas about how intelligence should be conceptualized and measured. In psychometrics, human intelligence is commonly assessed by intelligence quotient (IQ) tests, although the validity of these tests is disputed. Several subcategories of intelligence, such as emotional intelligence and social intelligence, have been proposed, and there remains significant debate as to whether these represent distinct forms of intelligence.

There is also ongoing debate regarding how an individual's level of intelligence is formed, ranging from the idea that intelligence is fixed at birth to the idea that it is malleable and can change depending on a person's mindset and efforts.

History

Psychologists such as Thomas Suddendorf argue that we can learn about human intelligence by studying close relatives such as primates. Comparing the human brain with the brains of other organisms can also offer insights into the evolution of the human brain, and in turn into the evolution of human intelligence.

Correlates

As a construct and as measured by intelligence tests, intelligence is one of the most useful concepts in psychology, because it correlates with many relevant variables, for instance the probability of suffering an accident, or the amount of one's salary. Other examples include:

Education
According to a 2018 metastudy of educational effects on intelligence, education appears to be the "most consistent, robust, and durable method" known for raising intelligence.
Personality
A landmark set of meta-analyses synthesizing thousands of studies including millions of people from over 50 countries found that many personality traits are intricately related to cognitive abilities. Neuroticism-related traits display the most negative relations, whereas traits like activity, industriousness, compassion, and openness are positively related to various abilities.
Myopia
A number of studies have shown a correlation between IQ and myopia. Some suggest that the reason for the correlation is environmental: either people with a higher IQ are more likely to damage their eyesight with prolonged reading, or people who read more are more likely to attain a higher IQ; others contend that a genetic link exists.
Aging
There is evidence that aging causes a decline in cognitive functions. In one cross-sectional study, the various cognitive functions measured declined by about 0.8 in z-score between ages 20 and 50; the functions included speed of processing, working memory, and long-term memory (a rough conversion of this figure is sketched after this list).
Genes
A number of single-nucleotide polymorphisms in human DNA are correlated with higher IQ scores.
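As a rough aid to interpreting the aging figure above, the toy calculation below (an illustration only, assuming the conventional IQ scaling of mean 100 and standard deviation 15) converts a 0.8 z-score decline into IQ-style points.

  # Convert a decline expressed in z-score units into IQ-style points,
  # assuming the conventional IQ scale (mean 100, standard deviation 15).
  IQ_SD = 15
  decline_z = 0.8                      # reported decline from age 20 to age 50
  decline_points = decline_z * IQ_SD   # 0.8 * 15 = 12 points

  print(f"A {decline_z} z-score decline is roughly {decline_points:.0f} points on an IQ-style scale.")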

Theories

Relevance of IQ tests

In psychology, human intelligence is commonly assessed by IQ scores that are determined by IQ tests. In general, higher IQ scores are associated with better outcomes in life. However, while IQ test scores show a high degree of inter-test reliability, and predict certain forms of achievement effectively, their construct validity as a holistic measure of human intelligence is considered dubious. While IQ tests are generally understood to measure some forms of intelligence, they may fail to serve as an accurate measure of broader definitions of human intelligence inclusive of creativity and social intelligence. According to psychologist Wayne Weiten, "IQ tests are valid measures of the kind of intelligence necessary to do well in academic work. But if the purpose is to assess intelligence in a broader sense, the validity of IQ tests is questionable."

Theory of multiple intelligences

Howard Gardner's theory of multiple intelligences is based on studies of normal children and adults, of gifted individuals (including so-called "savants"), of persons who have suffered brain damage, of experts and virtuosos, and of individuals from diverse cultures. Gardner breaks intelligence down into components. In the first edition of his book Frames of Mind (1983), he described seven distinct types of intelligence: logical-mathematical, linguistic, spatial, musical, kinesthetic, interpersonal, and intrapersonal. In a second edition, he added two more types of intelligence: naturalist and existential intelligences. He argues that psychometric (IQ) tests address only linguistic and logical-mathematical intelligence, plus some aspects of spatial intelligence. A criticism of Gardner's theory is that it has never been tested, or subjected to peer review, by Gardner or anyone else, and indeed that it is unfalsifiable. Others (e.g. Locke, 2005) suggest that recognizing many specific forms of intelligence (specific aptitude theory) implies a political—rather than scientific—agenda, intended to appreciate the uniqueness in all individuals, rather than recognizing potentially true and meaningful differences in individual capacities. Schmidt and Hunter suggest that the predictive validity of specific aptitudes over and above that of general mental ability, or "g", has not received empirical support. On the other hand, Jerome Bruner agreed with Gardner that the intelligences were "useful fictions", and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

Triarchic theory of intelligence

Robert Sternberg proposed the triarchic theory of intelligence to provide a more comprehensive description of intellectual competence than traditional differential or cognitive theories of human ability. The triarchic theory describes three fundamental aspects of intelligence:

  1. Analytic intelligence comprises the mental processes through which intelligence is expressed.
  2. Creative intelligence is necessary when an individual is confronted with a challenge that is nearly, but not entirely, novel or when an individual is engaged in automatizing the performance of a task.
  3. Practical intelligence is bound to a sociocultural milieu and involves adaptation to, selection of, and shaping of the environment to maximize fit in the context.

The triarchic theory does not argue against the validity of a general intelligence factor; instead, the theory posits that general intelligence is part of analytic intelligence, and only by considering all three aspects of intelligence can the full range of intellectual functioning be understood.

Sternberg updated the triarchic theory and renamed it to the Theory of Successful Intelligence. He now defines intelligence as an individual's assessment of success in life by the individual's own (idiographic) standards and within the individual's sociocultural context. Success is achieved by using combinations of analytical, creative, and practical intelligence. The three aspects of intelligence are referred to as processing skills. The processing skills are applied to the pursuit of success through what were the three elements of practical intelligence: adapting to, shaping of, and selecting of one's environments. The mechanisms that employ the processing skills to achieve success include utilizing one's strengths and compensating or correcting for one's weaknesses.

Sternberg's theories and research on intelligence remain contentious within the scientific community.

PASS theory of intelligence

Based on A. R. Luria's (1966) seminal work on the modularization of brain function, and supported by decades of neuroimaging research, the PASS Theory of Intelligence (Planning/Attention/Simultaneous/Successive) proposes that cognition is organized in three systems and the following four processes:

  1. Planning involves executive functions responsible for controlling and organizing behavior, selecting and constructing strategies, and monitoring performance.
  2. Attention is responsible for maintaining arousal levels and alertness, and ensuring focus on relevant stimuli.
  3. Simultaneous processing is engaged when the relationship between items and their integration into whole units of information is required. Examples of this include recognizing figures, such as a triangle within a circle vs. a circle within a triangle, or the difference between "he had a shower before breakfast" and "he had breakfast before a shower."
  4. Successive processing is required for organizing separate items in a sequence such as remembering a sequence of words or actions exactly in the order in which they had just been presented.

These four processes are functions of four areas of the brain. Planning is broadly located in the front part of the brain, the frontal lobe. Attention and arousal are combined functions of the frontal lobe and the lower parts of the cortex, although the parietal lobes are also involved in attention. Simultaneous processing and successive processing occur in the posterior region, or the back of the brain. Simultaneous processing is broadly associated with the occipital and parietal lobes, while successive processing is broadly associated with the frontal-temporal lobes. The PASS theory is heavily indebted both to Luria and to studies in cognitive psychology aimed at a fuller understanding of intelligence.

Piaget's theory and Neo-Piagetian theories

In Piaget's theory of cognitive development the focus is not on mental abilities but rather on a child's mental models of the world. As a child develops, the child creates increasingly more accurate models of the world which enable the child to interact with the world more effectively. One example is object permanence with which the child develops a model in which objects continue to exist even when they cannot be seen, heard, or touched.

Piaget's theory described four main stages and many sub-stages in the development. These four main stages are:

  1. sensorimotor stage (birth–2 years)
  2. pre-operational stage (2–7 years)
  3. concrete operational stage (7–11 years)
  4. formal operations stage (11–16 years)

Progress through these stages is correlated with, but not identical to psychometric IQ. Piaget conceptualizes intelligence as an activity more than as a capacity.

One of Piaget's most famous studies focused purely on the discriminative abilities of children between the ages of two and a half and four and a half years. He began the study by taking children of different ages and placing two lines of sweets, one with the sweets in a line spread further apart, and one with the same number of sweets in a line placed more closely together. He found that, "Children between 2 years, 6 months old and 3 years, 2 months old correctly discriminate the relative number of objects in two rows; between 3 years, 2 months and 4 years, 6 months they indicate a longer row with fewer objects to have 'more'; after 4 years, 6 months they again discriminate correctly". Initially younger children were not studied, because if at the age of four years a child could not conserve quantity, then a younger child presumably could not either. The results show, however, that children younger than three years and two months have quantity conservation, but as they get older they lose this quality, and do not recover it until about four and a half years old. This attribute may be lost temporarily because of an overdependence on perceptual strategies, which equate more candy with a longer line of candy, or because of a four-year-old's inability to reverse situations.

This experiment demonstrated several results. First, younger children have a discriminative ability that shows the logical capacity for cognitive operations exists earlier than previously acknowledged. Also, young children can be equipped with certain qualities for cognitive operations, depending on how logical the structure of the task is. Research also shows that children develop explicit understanding at age five and as a result, the child will count the sweets to decide which has more. Finally the study found that overall quantity conservation is not a basic characteristic of humans' native inheritance.

Piaget's theory has been criticized on the grounds that the age of appearance of a new model of the world, such as object permanence, is dependent on how the testing is done (see the article on object permanence). More generally, the theory may be very difficult to test empirically because of the difficulty of proving or disproving that a mental model is the explanation for the results of the testing.

Neo-Piagetian theories of cognitive development expand Piaget's theory in various ways such as also considering psychometric-like factors such as processing speed and working memory, "hypercognitive" factors like self-monitoring, more stages, and more consideration on how progress may vary in different domains such as spatial or social.

Parieto-frontal integration theory of intelligence

Based on a review of 37 neuroimaging studies, Jung and Haier proposed that the biological basis of intelligence stems from how well the frontal and parietal regions of the brain communicate and exchange information with each other. Subsequent neuroimaging and lesion studies report general consensus with the theory. A review of the neuroscience and intelligence literature concludes that the parieto-frontal integration theory is the best available explanation for human intelligence differences.

Investment theory

Based on the Cattell–Horn–Carroll theory, the tests of intelligence most often used in the relevant studies include measures of fluid ability (gf) and crystallized ability (gc), which differ in their developmental trajectories. The "investment theory" by Cattell states that the individual differences observed in the procurement of skills and knowledge (gc) are partially attributed to the "investment" of gf, thus suggesting the involvement of fluid intelligence in every aspect of the learning process. The investment theory suggests that personality traits affect "actual" ability, and not scores on an IQ test.

Hebb's theory of intelligence suggested a similar bifurcation: Intelligence A (physiological), which can be seen as resembling fluid intelligence, and Intelligence B (experiential), similar to crystallized intelligence.

Intelligence compensation theory (ICT)

The intelligence compensation theory states that individuals who are comparatively less intelligent work harder and more methodically, and become more resolute and thorough (more conscientious), in order to compensate for their "lack of intelligence", whereas more intelligent individuals do not require the traits and behaviours associated with conscientiousness to progress, since they can rely on the strength of their cognitive abilities rather than on structure or effort. The theory suggests a causal relationship between intelligence and conscientiousness, such that the development of the personality trait of conscientiousness is influenced by intelligence. This assumption is deemed plausible because the reverse causal relationship is unlikely, and it implies that the negative correlation should be higher between fluid intelligence (gf) and conscientiousness than between crystallized intelligence (gc) and conscientiousness. This is justified by the timeline of development of gf, gc, and personality, as crystallized intelligence has not developed completely by the time personality traits develop. Subsequently, during the school-going years, more conscientious children would be expected to gain more crystallized intelligence (knowledge) through education, as they would be more efficient, thorough, hard-working, and dutiful.

This theory has recently been challenged by evidence of compensatory sample selection, which attributes the findings to the bias introduced by selecting samples containing only people above a certain threshold of achievement.

Bandura's theory of self-efficacy and cognition

The understanding of cognitive ability has evolved over the years; it is no longer viewed as a fixed property held by an individual. Instead, the current perspective describes it as a general capacity comprising not only cognitive but also motivational, social, and behavioural aspects. These facets work together to perform numerous tasks. An essential skill often overlooked is that of managing emotions and aversive experiences that can compromise one's quality of thought and activity. Bandura bridges the link between intelligence and success by crediting individual differences in self-efficacy. Bandura's theory identifies the difference between possessing skills and being able to apply them in challenging situations. The theory suggests that individuals with the same level of knowledge and skill may perform badly, averagely, or excellently based on differences in self-efficacy.

A key role of cognition is to allow for one to predict events and in turn devise methods to deal with these events effectively. These skills are dependent on processing of unclear and ambiguous stimuli. People must be able to rely on their reserve of knowledge to identify, develop, and execute options. They must be able to apply the learning acquired from previous experiences. Thus, a stable sense of self-efficacy is essential to stay focused on tasks in the face of challenging situations.

Bandura's theory of self-efficacy and intelligence suggests that individuals with a relatively low sense of self-efficacy in any field will avoid challenges. This effect is heightened when they perceive the situations as personal threats. When failure occurs, they recover from it more slowly than others, and credit the failure to an insufficient aptitude. On the other hand, persons with high levels of self-efficacy hold a task-diagnostic aim that leads to effective performance.

Process, personality, intelligence and knowledge theory (PPIK)

[Figure: predicted growth curves for intelligence as process, crystallized intelligence, occupational knowledge, and avocational knowledge, based on Ackerman's PPIK theory.]

Developed by Ackerman, the PPIK (process, personality, intelligence, and knowledge) theory builds on the approaches to intelligence proposed by Cattell (the investment theory) and Hebb, suggesting a distinction between intelligence as knowledge and intelligence as process (two concepts that are comparable and related to gc and gf respectively, but broader and closer to Hebb's notions of "Intelligence A" and "Intelligence B") and integrating these factors with elements such as personality, motivation, and interests.

Ackerman describes the difficulty of distinguishing process from knowledge, as content cannot be eliminated from any ability test.

Personality traits are not significantly correlated with the intelligence as process aspect except in the context of psychopathology. One exception to this generalization has been the finding of sex differences in cognitive abilities, specifically in mathematical and spatial abilities.

On the other hand, the intelligence as knowledge factor has been associated with personality traits of Openness and Typical Intellectual Engagement, which also strongly correlate with verbal abilities (associated with crystallized intelligence).

Latent inhibition

It appears that latent inhibition, the phenomenon in which familiar stimuli elicit a delayed reaction time compared with unfamiliar stimuli, has a positive correlation with creativity.

Improving

Genetic engineering

Because intelligence appears to be at least partly dependent on brain structure and the genes shaping brain development, it has been proposed that genetic engineering could be used to enhance intelligence, a process sometimes called biological uplift in science fiction. Genetic enhancement experiments on mice have demonstrated superior ability in learning and memory in various behavioral tasks.

Education

Higher IQ leads to greater success in education, but independently, education raises IQ scores. A 2017 meta-analysis suggests education increases IQ by 1–5 points per year of education, or at least increases IQ test-taking ability.

Nutrition and chemicals

Substances which actually or purportedly improve intelligence or other mental functions are called nootropics. A meta-analysis shows that omega-3 fatty acids improve cognitive performance among those with cognitive deficits, but not among healthy subjects. A meta-regression shows that omega-3 fatty acids improve the moods of patients with major depression (major depression is associated with cognitive nutrient deficits).

Activities and adult neural development

Digital tools

Digital media

There is ongoing research and development concerning the cognitive impacts of smartphones and digital technology.

Some educators and experts have raised concerns about how technology may negatively affect students' thinking abilities and academic performance.


Brain training

Attempts to raise IQ with brain training have led to gains on abilities related to the training tasks – for instance working memory – but it is not yet clear whether these gains generalize to increased intelligence per se.

A 2008 research paper claimed that practicing a dual n-back task can increase fluid intelligence (gf), as measured in several different standard tests. This finding received some attention from popular media, including an article in Wired. However, a subsequent criticism of the paper's methodology questioned the experiment's validity and took issue with the lack of uniformity in the tests used to evaluate the control and test groups. For example, the progressive nature of Raven's Advanced Progressive Matrices (APM) test may have been compromised by modifications of time restrictions (i.e., 10 minutes were allowed to complete a normally 45-minute test).
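For readers unfamiliar with the task, the sketch below is a hypothetical Python illustration of the scoring logic of a single-stream n-back (the dual version used in the 2008 study runs two such streams, one auditory and one visual, simultaneously); it is not the procedure used in the study itself.

  import random

  def n_back_targets(stream, n):
      """Return the positions where the current item matches the item shown n steps earlier."""
      return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

  # Generate a random letter stream and list the 2-back targets a participant should detect.
  letters = "CHKLQRST"
  stream = [random.choice(letters) for _ in range(20)]
  print("Stream:", " ".join(stream))
  print("2-back targets at positions:", n_back_targets(stream, n=2))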

Philosophy

Efforts to influence intelligence raise ethical issues. Neuroethics considers the ethical, legal, and social implications of neuroscience, and deals with issues such as the difference between treating a human neurological disease and enhancing the human brain, and how wealth impacts access to neurotechnology. Neuroethical issues interact with the ethics of human genetic engineering.

Transhumanist theorists study the possibilities and consequences of developing and using techniques to enhance human abilities and aptitudes.

Eugenics is a social philosophy that advocates the improvement of human hereditary traits through various forms of intervention. Eugenics has variously been regarded as meritorious or deplorable in different periods of history, falling greatly into disrepute after the defeat of Nazi Germany in World War II.

Measuring

[Figure: chart of the IQ score distribution for a sample of 905 children tested on the 1916 Stanford-Binet Test.]

The approach to understanding intelligence with the most supporters and published research over the longest period of time is based on psychometric testing. It is also by far the most widely used in practical settings. Intelligence quotient (IQ) tests include the Stanford-Binet, Raven's Progressive Matrices, the Wechsler Adult Intelligence Scale and the Kaufman Assessment Battery for Children. There are also psychometric tests that are not intended to measure intelligence itself but some closely related construct such as scholastic aptitude. In the United States examples include the SSAT, the SAT, the ACT, the GRE, the MCAT, the LSAT, and the GMAT. Regardless of the method used, almost any test that requires examinees to reason and has a wide range of question difficulty will produce intelligence scores that are approximately normally distributed in the general population.

Intelligence tests are widely used in educational, business, and military settings because of their efficacy in predicting behavior. IQ and g (discussed in the next section) are correlated with many important social outcomes – individuals with low IQs are more likely to be divorced, have a child out of marriage, be incarcerated, and need long-term welfare support, while individuals with high IQs are associated with more years of education, higher-status jobs, and higher income. Intelligence as measured by psychometric tests has been found to be highly correlated with successful training and performance outcomes (e.g., adaptive performance), and IQ/g is the single best predictor of successful job performance. However, some researchers, while largely concurring with this finding, have advised caution about the strength of the claim, citing several factors: the statistical assumptions underlying some of these studies, studies done prior to 1970 that appear inconsistent with more recent ones, and ongoing debates within the psychology literature as to the validity of current IQ measurement tools.

General intelligence factor or g

There are many different kinds of IQ tests using a wide variety of test tasks. Some tests consist of a single type of task, while others rely on a broad collection of tasks with different contents (visual-spatial, verbal, numerical) that call for different cognitive processes (e.g., reasoning, memory, rapid decisions, visual comparisons, spatial imagery, reading, and retrieval of general knowledge). The psychologist Charles Spearman early in the 20th century carried out the first formal factor analysis of correlations between various test tasks. He found a trend for all such tests to correlate positively with each other, which is called a positive manifold. Spearman found that a single common factor explained the positive correlations among tests. Spearman named it g for "general intelligence factor". He interpreted it as the core of human intelligence that, to a larger or smaller degree, influences success in all cognitive tasks and thereby creates the positive manifold. This interpretation of g as a common cause of test performance is still dominant in psychometrics. (An alternative interpretation was recently advanced by van der Maas and colleagues. Their mutualism model assumes that intelligence depends on several independent mechanisms, none of which influences performance on all cognitive tests. These mechanisms support each other so that efficient operation of one of them makes efficient operation of the others more likely, thereby creating the positive manifold.)
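As a toy illustration of how a single common factor can produce a positive manifold, the sketch below (illustrative only; the loadings are invented) simulates test scores that all depend on one latent ability and then recovers that factor as the first principal component of the correlation matrix, in the spirit of Spearman's analysis.

  import numpy as np

  rng = np.random.default_rng(0)
  n_people, n_tests = 1000, 6

  # Each test score = (hypothetical g-loading) * latent ability + independent noise.
  g_latent = rng.standard_normal(n_people)
  loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])
  noise = rng.standard_normal((n_people, n_tests)) * np.sqrt(1 - loadings**2)
  scores = g_latent[:, None] * loadings + noise

  # All pairwise correlations come out positive: the "positive manifold".
  R = np.corrcoef(scores, rowvar=False)
  off_diag = R[~np.eye(n_tests, dtype=bool)]
  print("Smallest inter-test correlation:", off_diag.min().round(2))

  # The leading principal component of R plays the role of the extracted g factor.
  eigvals, eigvecs = np.linalg.eigh(R)
  estimated_loadings = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
  print("Estimated g-loadings:", estimated_loadings.round(2))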

IQ tests can be ranked by how highly they load on the g factor. Tests with high g-loadings are those that correlate highly with most other tests. One comprehensive study investigating the correlations between a large collection of tests and tasks has found that the Raven's Progressive Matrices have a particularly high correlation with most other tests and tasks. The Raven's is a test of inductive reasoning with abstract visual material. It consists of a series of problems, sorted approximately by increasing difficulty. Each problem presents a 3 × 3 matrix of abstract designs with one empty cell; the matrix is constructed according to a rule, and the person must find out the rule to determine which of 8 alternatives fits into the empty cell. Because of its high correlation with other tests, the Raven's Progressive Matrices are generally acknowledged as a good indicator of general intelligence. This is problematic, however, because there are substantial gender differences on the Raven's, which are not found when g is measured directly by computing the general factor from a broad collection of tests.

Several critics, such as Stephen Jay Gould, have been critical of g, seeing it as a statistical artifact and arguing that IQ tests instead measure a number of unrelated abilities. The 1995 American Psychological Association report "Intelligence: Knowns and Unknowns" stated that IQ tests do correlate and that the view that g is a statistical artifact was a minority one.

General collective intelligence factor or c

A recent scientific understanding of collective intelligence, defined as a group's general ability to perform a wide range of tasks, expands the areas of human intelligence research by applying similar methods and concepts to groups. Definition, operationalization, and methods are similar to the psychometric approach to general individual intelligence, where an individual's performance on a given set of cognitive tasks is used to measure intelligence, as indicated by the general intelligence factor g extracted via factor analysis. In the same vein, collective intelligence research aims to discover a 'c factor' explaining between-group differences in performance, as well as structural and group-compositional causes for it.

Historical psychometric theories

Several different theories of intelligence have historically been important for psychometrics. They often emphasized multiple factors rather than a single general factor like g.

Cattell–Horn–Carroll theory

Many of the broad, recent IQ tests have been greatly influenced by the Cattell–Horn–Carroll theory. It is argued to reflect much of what is known about intelligence from research. A hierarchy of factors for human intelligence is used. g is at the top. Under it there are 10 broad abilities that in turn are subdivided into 70 narrow abilities. The broad abilities are:

  • Fluid intelligence (Gf): includes the broad ability to reason, form concepts, and solve problems using unfamiliar information or novel procedures.
  • Crystallized intelligence (Gc): includes the breadth and depth of a person's acquired knowledge, the ability to communicate one's knowledge, and the ability to reason using previously learned experiences or procedures.
  • Quantitative reasoning (Gq): the ability to comprehend quantitative concepts and relationships and to manipulate numerical symbols.
  • Reading & writing ability (Grw): includes basic reading and writing skills.
  • Short-term memory (Gsm): is the ability to apprehend and hold information in immediate awareness and then use it within a few seconds.
  • Long-term storage and retrieval (Glr): is the ability to store information and fluently retrieve it later in the process of thinking.
  • Visual processing (Gv): is the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual representations.
  • Auditory processing (Ga): is the ability to analyze, synthesize, and discriminate auditory stimuli, including the ability to process and discriminate speech sounds that may be presented under distorted conditions.
  • Processing speed (Gs): is the ability to perform automatic cognitive tasks, particularly when measured under pressure to maintain focused attention.
  • Decision/reaction time/speed (Gt): reflects the immediacy with which an individual can react to stimuli or a task (typically measured in seconds or fractions of seconds; not to be confused with Gs, which is typically measured over intervals of 2–3 minutes). See Mental chronometry.
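
The three-level structure can be sketched as a simple nested mapping, as referenced above. The broad-ability codes come from the list; the narrow abilities shown under each are only illustrative examples added here, not the theory's full set of 70:

```python
# A minimal sketch of the Cattell–Horn–Carroll hierarchy as a nested mapping:
# g at the top, broad abilities beneath it, and a few illustrative (assumed,
# non-exhaustive) narrow abilities under each broad ability.
chc_hierarchy = {
    "g": {
        "Gf":  ["induction", "sequential reasoning"],        # fluid intelligence
        "Gc":  ["lexical knowledge", "general information"],  # crystallized intelligence
        "Gq":  ["mathematical knowledge"],                    # quantitative reasoning
        "Grw": ["reading decoding", "spelling"],              # reading & writing
        "Gsm": ["memory span"],                               # short-term memory
        "Glr": ["associative memory", "ideational fluency"],  # long-term storage/retrieval
        "Gv":  ["visualization", "spatial relations"],        # visual processing
        "Ga":  ["phonetic coding"],                           # auditory processing
        "Gs":  ["perceptual speed"],                          # processing speed
        "Gt":  ["simple reaction time"],                      # decision/reaction time
    }
}

# Walk the hierarchy from the top down.
for broad, narrow in chc_hierarchy["g"].items():
    print(f"g -> {broad}: {', '.join(narrow)}")
```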

Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement rather than of IQ, and Gt may be difficult to measure without special equipment.

g was earlier often subdivided into only Gf and Gc, which were thought to correspond to the nonverbal (performance) subtests and the verbal subtests of earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.

Insufficiency of measurement via IQ

Reliability and validity are very different concepts. While reliability reflects reproducibility, validity refers to whether the test measures what it purports to measure. While IQ tests are generally considered to measure some forms of intelligence, they may fail to serve as an accurate measure of broader definitions of human intelligence inclusive of, for example, creativity and social intelligence. For this reason, psychologist Wayne Weiten argues that their construct validity must be carefully qualified, and not be overstated. According to Weiten, "IQ tests are valid measures of the kind of intelligence necessary to do well in academic work. But if the purpose is to assess intelligence in a broader sense, the validity of IQ tests is questionable."
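The distinction can be made concrete with a small simulated example (a test-retest correlation stands in for reliability and a correlation with an external criterion stands in for validity; all data and numbers below are invented):

```python
# Hypothetical sketch distinguishing reliability from validity.
# Reliability: correlation between two administrations of the same test.
# Validity: correlation between the test and an external criterion it is
# supposed to predict (e.g., academic performance).  All data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
trait = rng.standard_normal(n)                       # attribute being measured

test_time1 = trait + 0.4 * rng.standard_normal(n)    # same test, occasion 1
test_time2 = trait + 0.4 * rng.standard_normal(n)    # same test, occasion 2
criterion  = 0.5 * trait + rng.standard_normal(n)    # external criterion

reliability = np.corrcoef(test_time1, test_time2)[0, 1]
validity    = np.corrcoef(test_time1, criterion)[0, 1]

print(f"test-retest reliability ~ {reliability:.2f}")  # high: reproducible scores
print(f"criterion validity      ~ {validity:.2f}")     # lower: partial coverage
```

The sketch shows how a test can be highly reliable (scores reproduce well across administrations) while being only moderately valid for a given criterion, which is the gap critics of IQ-as-intelligence point to.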

Along these same lines, critics such as Keith Stanovich do not dispute the capacity of IQ test scores to predict some kinds of achievement, but argue that basing a concept of intelligence on IQ test scores alone neglects other important aspects of mental ability.  Robert Sternberg, another significant critic of IQ as the main measure of human cognitive abilities, argued that reducing the concept of intelligence to the measure of g does not fully account for the different skills and knowledge types that produce success in human society.

Despite these criticisms, clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes, such as diagnosing intellectual disability, tracking cognitive decline, and informing personnel decisions, because they provide well-normed, easily interpretable indices with known standard errors.

A study suggested that intelligence is composed of distinct cognitive systems, each of which has its own capacity and is, to some degree, independent of the others, with the overall cognitive profile emerging from anatomically distinct systems (such as brain regions or neural networks). For example, IQ and reading-/language-related traits and skills appear to be influenced "at least partly [by] distinct genetic factors".

Various types of potential measures relate to some definitions of intelligence but are not part of IQ measurement; examples mentioned above include creativity and social intelligence.

Intelligence across cultures

Psychologists have shown that the definition of human intelligence varies with the culture being studied. Robert Sternberg is among the researchers who have discussed how one's culture affects one's interpretation of intelligence; he further argues that defining intelligence in only one way, without considering different meanings in cultural contexts, may impose an unintentionally egocentric view on the world. To address this, psychologists offer the following definitions of intelligence:

  1. Successful intelligence is the skills and knowledge needed for success in life, according to one's own definition of success, within one's sociocultural context.
  2. Analytical intelligence is the result of intelligence's components applied to fairly abstract but familiar kinds of problems.
  3. Creative intelligence is the result of intelligence's components applied to relatively novel tasks and situations.
  4. Practical intelligence is the result of intelligence's components applied to experience for purposes of adaptation, shaping, and selection.

Although typically identified by its Western definition, multiple studies support the idea that human intelligence carries different meanings across cultures around the world. In many Eastern cultures, intelligence is mainly related to one's social roles and responsibilities. A Chinese conception of intelligence would define it as the ability to empathize with and understand others, although this is by no means the only way that intelligence is defined in China. In several African communities, intelligence is likewise viewed through a social lens, but it is exemplified through social responsibilities rather than through social roles as in many Eastern cultures. For example, in the Chi-Chewa language, spoken by some ten million people across central Africa, the equivalent term for intelligence implies not only cleverness but also the ability to take on responsibility. Within American culture, too, there are a variety of interpretations of intelligence. One of the most common views defines it as a combination of problem-solving skills, deductive reasoning skills, and intelligence quotient (IQ), while others hold that intelligent people should have a social conscience, accept others for who they are, and be able to give advice or wisdom.

Motivational intelligence

Motivational intelligence refers to an individual's capacity to comprehend and utilize various motivations, such as the need for achievement, affiliation, or power. It involves understanding tacit knowledge related to these motivations. This concept encompasses the ability to recognize and appreciate the diverse values, behaviors, and cultural differences of others, driven by intrinsic interest rather than solely to enhance interaction effectiveness.

Research suggests a relationship between motivational intelligence, international experiences, and leadership. Individuals with higher levels of motivational intelligence tend to exhibit greater enthusiasm for learning about other cultures, thereby contributing to their effectiveness in cross-cultural settings. However, studies have also revealed variations in motivational intelligence across ethnicities, with Asian students demonstrating higher cognitive cultural intelligence but lower motivational intelligence compared to other groups.

Investigations have explored the impact of motivational intelligence on job motivation. A study of employees of the Isfahan Gas Company found that motivational intelligence, and two of its indicators in particular (adaptability and social relationship), were positively and significantly related to job motivation. These findings highlight the potential influence of motivational intelligence on individuals' motivation levels within work contexts.

Motivational intelligence has been identified as a stronger predictor than knowledge intelligence, behavioral intelligence, and strategic intelligence. It plays a crucial role in promoting cooperation, which is considered the ideal and essential element of motivational intelligence. Therapeutic approaches grounded in motivational intelligence emphasize a collaborative partnership between therapist and client: the therapist creates an environment conducive to change without imposing their views or attempting to force awareness or acceptance of reality onto the client.

Motivational intelligence encompasses the understanding of motivations, such as achievement, affiliation, and power, as well as the appreciation of cultural differences and values. It has been found to impact areas such as international experiences, leadership, job motivation, and cooperative therapeutic interventions.
