Wednesday, December 9, 2020

Theory of multiple intelligences

From Wikipedia, the free encyclopedia

The theory of multiple intelligences proposes the differentiation of human intelligence into specific “modalities of intelligence”, rather than defining intelligence as a single, general ability. The theory has been criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement.

Separation criteria

According to the theory, an intelligence 'modality' must fulfill eight criteria:

  1. potential isolation by brain damage
  2. place in evolutionary history
  3. presence of core operations
  4. susceptibility to encoding (symbolic expression)
  5. a distinct developmental progression
  6. the existence of savants, prodigies and other exceptional people
  7. support from experimental psychology
  8. support from psychometric findings

The intelligence modalities

In Frames of Mind: The Theory of Multiple Intelligences (1983), Howard Gardner proposed seven abilities that manifest multiple intelligences; he later added an eighth (naturalistic) and suggested a ninth (existential).

Musical-rhythmic and harmonic

This area of intelligence has to do with sensitivity to the sounds, rhythms, and tones of music. People with musical intelligence normally have good pitch or might possess absolute pitch, and are able to sing, play musical instruments, and compose music. They have sensitivity to rhythm, pitch, meter, tone, melody or timbre.

Visual-spatial

This area deals with spatial judgment and the ability to visualize with the mind's eye. Spatial ability is one of the three factors beneath g in the hierarchical model of intelligence.

Verbal-linguistic

People with high verbal-linguistic intelligence display a facility with words and languages. They are typically good at reading, writing, telling stories, and memorizing words and dates. Verbal ability is one of the most g-loaded abilities. This type of intelligence is measured with the Verbal Comprehension Index of the WAIS-IV.

Logical-mathematical

This area has to do with logic, abstractions, reasoning, numbers and critical thinking. This also has to do with having the capacity to understand the underlying principles of some kind of causal system. Logical reasoning is closely linked to fluid intelligence and to general intelligence (g factor).

Bodily-kinesthetic

The core elements of the bodily-kinesthetic intelligence are control of one's bodily motions and the capacity to handle objects skillfully. Gardner elaborates to say that this also includes a sense of timing, a clear sense of the goal of a physical action, along with the ability to train responses.

People who have high bodily-kinesthetic intelligence should be generally good at physical activities such as sports, dance and making things.

Gardner believes that careers that suit those with high bodily-kinesthetic intelligence include athletes, dancers, musicians, actors, builders, police officers, and soldiers. Although these careers can be simulated virtually, simulation will not produce the actual physical learning that this intelligence requires.

Interpersonal

In theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others' moods, feelings, temperaments, and motivations, and by their ability to cooperate as part of a group. According to Gardner in How Are Kids Smart: Multiple Intelligences in the Classroom, "Inter- and intra-personal intelligence is often misunderstood with being extroverted or liking other people..." Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate. Gardner has equated this with the emotional intelligence of Goleman.

Gardner believes that careers that suit those with high interpersonal intelligence include salespeople, politicians, managers, teachers, lecturers, counselors and social workers.

Intrapersonal

This area has to do with introspective and self-reflective capacities. It refers to having a deep understanding of the self: knowing one's strengths and weaknesses, knowing what makes one unique, and being able to predict one's own reactions and emotions.

Naturalistic

Not part of Gardner's original seven, naturalistic intelligence was proposed by him in 1995. "If I were to rewrite Frames of Mind today, I would probably add an eighth intelligence – the intelligence of the naturalist. It seems to me that the individual who is readily able to recognize flora and fauna, to make other consequential distinctions in the natural world, and to use this ability productively (in hunting, in farming, in biological science) is exercising an important intelligence and one that is not adequately encompassed in the current list." This area has to do with nurturing and relating information to one's natural surroundings. Examples include classifying natural forms such as animal and plant species and rocks and mountain types. This ability was clearly of value in our evolutionary past as hunters, gatherers, and farmers; it continues to be central in such roles as botanist or chef.

This sort of ecological receptiveness is deeply rooted in a "sensitive, ethical, and holistic understanding" of the world and its complexities – including the role of humanity within the greater ecosphere.

Existential

Gardner did not want to commit to a spiritual intelligence, but suggested in his 1999 book, after the original seven, that an "existential" intelligence may be a useful construct. The hypothesis of an existential intelligence has been further explored by educational researchers.

Additional intelligences

In January 2016, Gardner mentioned in an interview with Big Think that he is considering adding the teaching-pedagogical intelligence, "which allows us to be able to teach successfully to other people". In the same interview, he explicitly rejected some other suggested intelligences, such as humour, cooking, and sexual intelligence. Professor Nan B. Adams (2004) argues that, based on Gardner's definition of multiple intelligences, digital intelligence – a meta-intelligence composed of many other identified intelligences and stemming from human interactions with digital computers – now exists.

Physical intelligence

Physical intelligence, also known as bodily-kinesthetic intelligence, is any intelligence derived through physical and practiced learning, such as sports, dance, or craftsmanship. It may refer to the ability to use one's hands to create, to express oneself with one's body, a reliance on tactile mechanisms and movement, and accuracy in controlling body movement. An individual with high physical intelligence is someone who is adept at using their physical body to solve problems and express ideas and emotions. The ability to control the physical body and the mind-body connection is part of a much broader range of human potential, as set out in Howard Gardner's theory of multiple intelligences.

Characteristics

American baseball player, Babe Ruth

Well-developed bodily-kinesthetic intelligence is reflected in a person's movements and in how they use their physical body. People with high physical intelligence often have excellent hand-eye coordination and are very agile; they are precise and accurate in movement and can express themselves using their body. Gardner referred to the idea of natural skill and innate physical intelligence in his discussion of the autobiographical story of Babe Ruth – a legendary baseball player who, at 15, felt that he had been 'born' on the pitcher's mound. Individuals with a high bodily-kinesthetic, or physical, intelligence are likely to be successful in physical careers, including as athletes, dancers, musicians, police officers, and soldiers.

Theory

Howard Gardner, a developmental psychologist and professor of education at Harvard University, outlined nine types of intelligence, including spatial intelligence and linguistic intelligence among others. His seminal work, Frames of Mind, was published in 1983 and was influenced by the works of Alfred Binet and the German psychologist William Stern, who originally coined the term 'intelligence quotient' (IQ). Within his paradigm of intelligence, Gardner defines it as "the ability to learn" or "to solve problems", referring to intelligence as a "bio-psychological potential to process information".

Gardner suggested that each individual may possess all of the various forms of intelligence to some extent, but that there is always a dominant, or primary, form. Gardner granted each of the different forms of intelligence equal importance, and he proposed that they have the potential to be nurtured and so strengthened, or ignored and weakened. There have been various critiques of Gardner's work, however, predominantly due to the lack of empirical evidence used to support his thinking. Furthermore, some have suggested that the 'intelligences' refer to talents, personality, or ability rather than a distinct form of intelligence.

Impact on education

Within his theory of multiple intelligences, Gardner stated that our "educational system is heavily biased toward linguistic modes of instruction and assessment and, to a somewhat lesser degree, toward logical-quantitative modes as well". His work went on to shape educational pedagogy and influence relevant policy and legislation across the world, with particular reference to how teachers must assess students' progress to establish the most effective teaching methods for the individual learner. Gardner's research into learning and bodily-kinesthetic intelligence has resulted in the use of activities that require physical movement and exertion, with students exhibiting a high level of physical intelligence reportedly benefiting from 'learning through movement' in the classroom environment.

Although the distinction between intelligences has been set out in great detail, Gardner opposes the idea of labelling learners with a specific intelligence. Gardner maintains that his theory should "empower learners", not restrict them to one modality of learning. According to Gardner, an intelligence is "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." According to a 2006 study, each of the domains proposed by Gardner involves a blend of the general g factor, cognitive abilities other than g, and, in some cases, non-cognitive abilities or personality characteristics.

Critical reception

Gardner argues that there is a wide range of cognitive abilities, but that there are only very weak correlations among them. For example, the theory postulates that a child who learns to multiply easily is not necessarily more intelligent than a child who has more difficulty on this task. The child who takes more time to master multiplication may best learn to multiply through a different approach, may excel in a field outside mathematics, or may be looking at and understanding the multiplication process at a fundamentally deeper level.

Intelligence tests and psychometrics have generally found high correlations between different aspects of intelligence, rather than the low correlations which Gardner's theory predicts, supporting the prevailing theory of general intelligence rather than multiple intelligences (MI). The theory has been criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement.

Definition of intelligence

One major criticism of the theory is that it is ad hoc: that Gardner is not expanding the definition of the word "intelligence", but rather denies the existence of intelligence as traditionally understood, and instead uses the word "intelligence" where other people have traditionally used words like "ability" and "aptitude". This practice has been criticized by Robert J. Sternberg, Eysenck, and Scarr. White (2006) points out that Gardner's selection and application of criteria for his "intelligences" is subjective and arbitrary, and that a different researcher would likely have come up with different criteria.

Defenders of MI theory argue that the traditional definition of intelligence is too narrow, and thus a broader definition more accurately reflects the differing ways in which humans think and learn.

Some criticisms arise from the fact that Gardner has not provided a test of his multiple intelligences. He originally defined it as the ability to solve problems that have value in at least one culture, or as something that a student is interested in. He then added a disclaimer that he has no fixed definition, and his classification is more of an artistic judgment than fact:

Ultimately, it would certainly be desirable to have an algorithm for the selection of intelligence, such that any trained researcher could determine whether a candidate's intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate's intelligence is reminiscent more of an artistic judgment than of a scientific assessment.

Generally, linguistic and logical-mathematical abilities are called intelligence, but artistic, musical, athletic, etc. abilities are not. Gardner argues this causes the former to be needlessly aggrandized. Certain critics are wary of this widening of the definition, saying that it ignores "the connotation of intelligence ... [which] has always connoted the kind of thinking skills that makes one successful in school."

Gardner writes, "I balk at the unwarranted assumption that certain human abilities can be arbitrarily singled out as intelligence while others cannot." Critics hold that, given this statement, any interest or ability can be redefined as "intelligence". Thus, studying intelligence becomes difficult, because it diffuses into the broader concept of ability or talent. Gardner's addition of the naturalistic intelligence and his conceptions of the existential and moral intelligences are seen as the fruits of this diffusion. Defenders of the MI theory would argue that this is simply a recognition of the broad scope of inherent mental abilities and that such an exhaustive scope by nature defies a one-dimensional classification such as an IQ value.

The theory and definitions have been critiqued by Perry D. Klein as being so unclear as to be tautologous and thus unfalsifiable: having a high musical ability means being good at music, while at the same time being good at music is explained by having a high musical ability.

Henri Wallon argues that "we cannot distinguish intelligence from its operations". Yves Richez distinguishes ten Natural Operating Modes (Modes Opératoires Naturels – MoON). Richez's studies are premised on a gap between Chinese thought and Western thought: in China, the notion of "being" (self) and the notion of "intelligence" do not exist; these are claimed to be Graeco-Roman inventions derived from Plato. Instead of intelligence, Chinese thought refers to "operating modes", which is why Richez does not speak of "intelligence" but of "natural operating modes" (MoON).

Neo-Piagetian criticism

Andreas Demetriou suggests that theories which overemphasize the autonomy of the domains are as simplistic as the theories that overemphasize the role of general intelligence and ignore the domains. He agrees with Gardner that there are indeed domains of intelligence that are relevantly autonomous of each other. Some of the domains, such as verbal, spatial, mathematical, and social intelligence are identified by most lines of research in psychology. In Demetriou's theory, one of the neo-Piagetian theories of cognitive development, Gardner is criticized for underestimating the effects exerted on the various domains of intelligences by the various subprocesses that define overall processing efficiency, such as speed of processing, executive functions, working memory, and meta-cognitive processes underlying self-awareness and self-regulation. All of these processes are integral components of general intelligence that regulate the functioning and development of different domains of intelligence.

The domains are to a large extent expressions of the condition of the general processes, and may vary because of their constitutional differences but also differences in individual preferences and inclinations. Their functioning both channels and influences the operation of the general processes. Thus, one cannot satisfactorily specify the intelligence of an individual or design effective intervention programs unless both the general processes and the domains of interest are evaluated.

Human adaptation to multiple environments

The premise of the multiple intelligences hypothesis, that human intelligence is a collection of specialist abilities, has been criticized for failing to explain human adaptation to most, if not all, environments in the world. In this context, humans are contrasted with social insects, which indeed have a distributed "intelligence" of specialists: such insects may spread to climates resembling that of their origin, but the same species never adapts to a wide range of climates from tropical to temperate by building different types of nests and learning what is edible and what is poisonous. While some, such as the leafcutter ant, grow fungi on leaves, they do not cultivate different species in different environments with different farming techniques as human agriculture does. It is therefore argued that human adaptability stems from a general ability to falsify hypotheses, make more generally accurate predictions, and adapt behavior thereafter, and not from a set of specialized abilities that would only work under specific environmental conditions.

IQ tests

Gardner argues that IQ tests only measure linguistic and logical-mathematical abilities. He argues the importance of assessing in an "intelligence-fair" manner. While traditional paper-and-pen examinations favor linguistic and logical skills, there is a need for intelligence-fair measures that value the distinct modalities of thinking and learning that uniquely define each intelligence.

Psychologist Alan S. Kaufman points out that IQ tests have measured spatial abilities for 70 years. Modern IQ tests are greatly influenced by the Cattell–Horn–Carroll theory, which incorporates both a general intelligence and many narrower abilities. While IQ tests do give an overall IQ score, they now also give scores for the narrower abilities.

Lack of empirical evidence

According to a 2006 study, many of Gardner's "intelligences" correlate with the g factor, supporting the idea of a single dominant type of intelligence. According to the study, each of the domains proposed by Gardner involved a blend of g, of cognitive abilities other than g, and, in some cases, of non-cognitive abilities or of personality characteristics.

The Johnson O'Connor Research Foundation has tested hundreds of thousands of people to determine their "aptitudes" ("intelligences"), such as manual dexterity, musical ability, spatial visualization, and memory for numbers. There is correlation of these aptitudes with the g factor, but not all are strongly correlated; correlation between the g factor and "inductive speed" ("quickness in seeing relationships among separate facts, ideas, or observations") is only 0.5, considered a moderate correlation.
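For scale: a correlation coefficient runs from −1 to 1, and an r of 0.5 means the two measures share only a quarter of their variance (r^2 = 0.25). A minimal sketch of the computation, using invented paired scores rather than the Foundation's data:

    import numpy as np

    # Invented paired scores for two aptitudes measured in the same people;
    # illustrative only, not Johnson O'Connor Research Foundation data.
    g_scores = np.array([95, 110, 102, 88, 120, 105, 98, 115])
    inductive_speed = np.array([90, 118, 99, 95, 112, 100, 104, 108])

    # Pearson r = covariance / (sd_x * sd_y); corrcoef returns a 2x2 matrix.
    r = np.corrcoef(g_scores, inductive_speed)[0, 1]
    print(f"r = {r:.2f}, shared variance = {r**2:.0%}")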

Linda Gottfredson (2006) has argued that thousands of studies support the importance of intelligence quotient (IQ) in predicting school and job performance, and numerous other life outcomes. In contrast, empirical support for non-g intelligences is either lacking or very poor. She argued that despite this, the ideas of multiple non-g intelligences are very attractive to many due to the suggestion that everyone can be smart in some way.

A critical review of MI theory argues that there is little empirical evidence to support it:

To date, there have been no published studies that offer evidence of the validity of the multiple intelligences. In 1994 Sternberg reported finding no empirical studies. In 2000 Allix reported finding no empirical validating studies, and at that time Gardner and Connell conceded that there was "little hard evidence for MI theory" (2000, p. 292). In 2004 Sternberg and Grigorenko stated that there were no validating studies for multiple intelligences, and in 2004 Gardner asserted that he would be "delighted were such evidence to accrue", and admitted that "MI theory has few enthusiasts among psychometricians or others of a traditional psychological background" because they require "psychometric or experimental evidence that allows one to prove the existence of the several intelligences."

The same review presents evidence to demonstrate that cognitive neuroscience research does not support the theory of multiple intelligences:

... the human brain is unlikely to function via Gardner's multiple intelligences. Taken together the evidence for the intercorrelations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping "what is it?" and "where is it?" neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner's intelligences could operate "via a different set of neural mechanisms" (1999, p. 99). Equally important, the evidence for the "what is it?" and "where is it?" processing pathways, for Kahneman's two decision-making systems, and for adapted cognition modules suggests that these cognitive brain specializations have evolved to address very specific problems in our environment. Because Gardner claimed that the intelligences are innate potentialities related to a general content area, MI theory lacks a rationale for the phylogenetic emergence of the intelligences.

The theory of multiple intelligences is sometimes cited as an example of pseudoscience because it lacks empirical evidence or falsifiability, though Gardner has argued otherwise.

Use in education

Gardner defines intelligence as "bio-psychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture." According to Gardner, there are more ways to do this than just through logical and linguistic intelligence. Gardner believes that the purpose of schooling "should be to develop intelligence and to help people reach vocational and avocational goals that are appropriate to their particular spectrum of intelligence. People who are helped to do so, [he] believe[s], feel more engaged and competent and therefore more inclined to serve the society in a constructive way."

Gardner contends that IQ tests focus mostly on logical and linguistic intelligence. Students who do well on these tests have a better chance of attending a prestigious college or university, which in turn creates contributing members of society. While many students function well in this environment, there are those who do not. Gardner's theory argues that students will be better served by a broader vision of education, wherein teachers use different methodologies, exercises and activities to reach all students, not just those who excel at linguistic and logical intelligence. It challenges educators to find "ways that will work for this student learning this topic".

James Traub's article in The New Republic notes that Gardner's system has not been accepted by most academics in intelligence or teaching. Gardner states that "while Multiple Intelligences theory is consistent with much empirical evidence, it has not been subjected to strong experimental tests ... Within the area of education, the applications of the theory are currently being examined in many projects. Our hunches will have to be revised many times in light of actual classroom experience."

Jerome Bruner agreed with Gardner that the intelligences were "useful fictions," and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

George Miller, a prominent cognitive psychologist, wrote in The New York Times Book Review that Gardner's argument consisted of "hunch and opinion" and Charles Murray and Richard J. Herrnstein in The Bell Curve (1994) called Gardner's theory "uniquely devoid of psychometric or other quantitative evidence."

In spite of its lack of general acceptance in the psychological community, Gardner's theory has been adopted by many schools, where it is often conflated with learning styles, and hundreds of books have been written about its applications in education. Some of the applications of Gardner's theory have been described as "simplistic" and Gardner himself has said he is "uneasy" with the way his theory has been used in schools. Gardner has denied that multiple intelligences are learning styles and agrees that the idea of learning styles is incoherent and lacking in empirical evidence. Gardner summarizes his approach with three recommendations for educators: individualize the teaching style (to suit the most effective method for each student), pluralize the teaching (teach important materials in multiple ways), and avoid the term "styles" as being confusing.

 

Human intelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_intelligence

Human intelligence is the intellectual capability of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness.

Through intelligence, humans possess the cognitive abilities to learn, form concepts, understand, apply logic, and reason, including the capacities to recognize patterns, plan, innovate, solve problems, make decisions, retain information, and use language to communicate.

Correlates

As a construct, and as measured by intelligence tests, intelligence is considered one of the most useful concepts in psychology, because it correlates with many relevant variables, such as the probability of suffering an accident, salary, and more.

Education

According to a 2018 metastudy of educational effects on intelligence, education appears to be the "most consistent, robust, and durable method" known for raising intelligence.

Myopia

A number of studies have shown a correlation between IQ and myopia. Some suggest that the reason for the correlation is environmental: people with a higher IQ are more likely to damage their eyesight with prolonged reading, or, the other way around, people who read more are more likely to reach a higher IQ. Others contend that a genetic link exists.

Aging

There is evidence that aging causes a decline in cognitive functions. In one cross-sectional study, various measured cognitive functions declined by about 0.8 in z-score from age 20 to age 50; the functions included speed of processing, working memory, and long-term memory.
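To put that figure in familiar units: a z-score expresses change in standard deviations, so on an IQ-style scale (assuming the conventional standard deviation of 15) a 0.8 decline corresponds to roughly 12 points:

    z = \frac{x - \mu}{\sigma}, \qquad \Delta x = \Delta z \cdot \sigma = 0.8 \times 15 = 12.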

Genes

A number of single-nucleotide polymorphisms in human DNA are correlated with intelligence.

Theories

Relevance of IQ tests

In psychology, human intelligence is commonly assessed by IQ scores that are determined by IQ tests. However, while IQ test scores show a high degree of inter-test reliability, and predict certain forms of achievement rather effectively, their construct validity as a holistic measure of human intelligence is considered dubious. While IQ tests are generally understood to measure some forms of intelligence, they may fail to serve as an accurate measure of broader definitions of human intelligence inclusive of creativity and social intelligence. According to psychologist Wayne Weiten, "IQ tests are valid measures of the kind of intelligence necessary to do well in academic work. But if the purpose is to assess intelligence in a broader sense, the validity of IQ tests is questionable."

Theory of multiple intelligences

Howard Gardner's theory of multiple intelligences is based on studies not only of normal children and adults, but also of gifted individuals (including so-called "savants"), of persons who have suffered brain damage, of experts and virtuosos, and of individuals from diverse cultures. Gardner breaks intelligence down into a number of distinct components. In the first edition of his book Frames of Mind (1983), he described seven distinct types of intelligence—logical-mathematical, linguistic, spatial, musical, kinesthetic, interpersonal, and intrapersonal. In a second edition of this book, he added two more types of intelligence—naturalist and existential intelligences. He argues that psychometric (IQ) tests address only linguistic and logical intelligence, plus some aspects of spatial intelligence. A major criticism of Gardner's theory is that it has never been tested, or subjected to peer review, by Gardner or anyone else, and indeed that it is unfalsifiable. Others (e.g. Locke, 2005) have suggested that recognizing many specific forms of intelligence (specific aptitude theory) implies a political—rather than scientific—agenda, intended to appreciate the uniqueness in all individuals, rather than recognizing potentially true and meaningful differences in individual capacities. Schmidt and Hunter (2004) suggest that the predictive validity of specific aptitudes over and above that of general mental ability, or "g", has not received empirical support. On the other hand, Jerome Bruner agreed with Gardner that the intelligences were "useful fictions", and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

Howard Gardner describes his first seven intelligences as follows:

  • Linguistic intelligence: People high in linguistic intelligence have an affinity for words, both spoken and written.
  • Logical-mathematical intelligence: It implies logical and mathematical abilities.
  • Spatial intelligence: The ability to form a mental model of a spatial world and to be able to maneuver and operate using that model.
  • Musical intelligence: Those with musical intelligence have excellent pitch, and may even have absolute pitch.
  • Bodily-kinesthetic intelligence: The ability to solve problems or to fashion products using one's whole body, or parts of the body. Gifted people in this intelligence may be good dancers, athletes, surgeons, craftspeople, and others.
  • Interpersonal intelligence: The ability to see things from the perspective of others, or to understand people in the sense of empathy. Strong interpersonal intelligence would be an asset in those who are teachers, politicians, clinicians, religious leaders, etc.
  • Intrapersonal intelligence: It is a capacity to form an accurate, veridical model of oneself and to be able to use that model to operate effectively in life.

Triarchic theory of intelligence

Robert Sternberg proposed the triarchic theory of intelligence to provide a more comprehensive description of intellectual competence than traditional differential or cognitive theories of human ability. The triarchic theory describes three fundamental aspects of intelligence:

  • Analytic intelligence comprises the mental processes through which intelligence is expressed.
  • Creative intelligence is necessary when an individual is confronted with a challenge that is nearly, but not entirely, novel or when an individual is engaged in automatizing the performance of a task.
  • Practical intelligence is bound in a sociocultural milieu and involves adaptation to, selection of, and shaping of the environment to maximize fit in the context.

The triarchic theory does not argue against the validity of a general intelligence factor; instead, the theory posits that general intelligence is part of analytic intelligence, and only by considering all three aspects of intelligence can the full range of intellectual functioning be fully understood.

More recently, the triarchic theory has been updated and renamed as the Theory of Successful Intelligence by Sternberg. Intelligence is now defined as an individual's assessment of success in life by the individual's own (idiographic) standards and within the individual's sociocultural context. Success is achieved by using combinations of analytical, creative, and practical intelligence. The three aspects of intelligence are referred to as processing skills. The processing skills are applied to the pursuit of success through what were the three elements of practical intelligence: adapting to, shaping of, and selecting of one's environments. The mechanisms that employ the processing skills to achieve success include utilizing one's strengths and compensating or correcting for one's weaknesses.

Sternberg's theories and research on intelligence remain contentious within the scientific community.

PASS theory of intelligence

Based on A. R. Luria's (1966) seminal work on the modularization of brain function, and supported by decades of neuroimaging research, the PASS Theory of Intelligence proposes that cognition is organized in three systems and four processes. The first process is Planning, which involves executive functions responsible for controlling and organizing behavior, selecting and constructing strategies, and monitoring performance. The second is Attention, which is responsible for maintaining arousal levels and alertness, and for ensuring focus on relevant stimuli. The next two are called Simultaneous and Successive processing, and they involve encoding, transforming, and retaining information. Simultaneous processing is engaged when the relationship between items and their integration into whole units of information is required; examples include recognizing figures, such as a triangle within a circle versus a circle within a triangle, or the difference between 'he had a shower before breakfast' and 'he had breakfast before a shower'. Successive processing is required for organizing separate items in a sequence, such as remembering a sequence of words or actions exactly in the order in which they were just presented.

These four processes are functions of four areas of the brain. Planning is broadly located in the front part of the brain, the frontal lobe. Attention and arousal are combined functions of the frontal lobe and the lower parts of the cortex, although the parietal lobes are also involved in attention. Simultaneous and Successive processing occur in the posterior region, or back, of the brain: Simultaneous processing is broadly associated with the occipital and parietal lobes, while Successive processing is broadly associated with the frontal-temporal lobes. The PASS (Planning/Attention/Simultaneous/Successive) theory is heavily indebted both to Luria (1966, 1973) and to studies in cognitive psychology aimed at a better understanding of intelligence.

Piaget's theory and Neo-Piagetian theories

In Piaget's theory of cognitive development, the focus is not on mental abilities but rather on a child's mental models of the world. As a child develops, increasingly accurate models of the world are formed, which enable the child to interact with the world better. One example is object permanence, where the child develops a model in which objects continue to exist even when they cannot be seen, heard, or touched.

Piaget's theory described four main stages, and many sub-stages, of development. The four main stages are:

  • sensorimotor stage (birth–2 yrs);
  • pre-operational stage (2–7 yrs);
  • concrete operational stage (7–11 yrs); and
  • formal operations stage (11–16 yrs)

Degree of progress through these stages is correlated with, but not identical to, psychometric IQ. Piaget conceptualizes intelligence as an activity more than a capacity.

One of Piaget's most famous studies focused purely on the discriminative abilities of children between the ages of two and a half and four and a half years. He began the study by taking children of different ages and placing two lines of sweets, one with the sweets spread further apart, and one with the same number of sweets placed more closely together. He found that, "Children between 2 years, 6 months old and 3 years, 2 months old correctly discriminate the relative number of objects in two rows; between 3 years, 2 months and 4 years, 6 months they indicate a longer row with fewer objects to have "more"; after 4 years, 6 months they again discriminate correctly". Initially, younger children were not studied, because if a four-year-old could not conserve quantity, then a younger child presumably could not either. The results show, however, that children younger than three years and two months have quantity conservation, but that as they get older they lose this quality, and do not recover it until four and a half years old. This attribute may be lost temporarily because of an overdependence on perceptual strategies, which equate more candy with a longer line of candy, or because of the inability of a four-year-old to reverse situations. By the end of this experiment several results were found. First, younger children have a discriminative ability that shows the logical capacity for cognitive operations exists earlier than previously acknowledged. The study also reveals that young children can be equipped with certain qualities for cognitive operations, depending on how logical the structure of the task is. Research further shows that children develop explicit understanding at age 5 and, as a result, will count the sweets to decide which has more. Finally, the study found that overall quantity conservation is not a basic characteristic of humans' native inheritance.

Piaget's theory has been criticized for the age of appearance of a new model of the world, such as object permanence, being dependent on how the testing is done (see the article on object permanence). More generally, the theory may be very difficult to test empirically because of the difficulty of proving or disproving that a mental model is the explanation for the results of the testing.

Neo-Piagetian theories of cognitive development expand Piaget's theory in various ways, such as by also considering psychometric-like factors such as processing speed and working memory, "hypercognitive" factors like self-monitoring, more stages, and more consideration of how progress may vary in different domains, such as the spatial or the social.

Parieto-frontal integration theory of intelligence

Based on a review of 37 neuroimaging studies, Jung and Haier (2007) proposed that the biological basis of intelligence stems from how well the frontal and parietal regions of the brain communicate and exchange information with each other. Subsequent neuroimaging and lesion studies report general consensus with the theory. A review of the neuroscience and intelligence literature concludes that the parieto-frontal integration theory is the best available explanation for human intelligence differences.

Investment theory

Based on the Cattell–Horn–Carroll theory, the tests of intelligence most often used in the relevant studies include measures of fluid ability (Gf) and crystallized ability (Gc), which differ in their trajectory of development in individuals. Cattell's 'investment theory' states that the individual differences observed in the procurement of skills and knowledge (Gc) are partially attributable to the 'investment' of Gf, thus suggesting the involvement of fluid intelligence in every aspect of the learning process. It is essential to highlight that the investment theory suggests that personality traits affect 'actual' ability, not scores on an IQ test. Relatedly, Hebb's theory of intelligence suggested a similar bifurcation: Intelligence A (physiological), which can be seen as a counterpart of fluid intelligence, and Intelligence B (experiential), similar to crystallized intelligence.

Intelligence compensation theory (ICT)

The intelligence compensation theory (a term first coined by Wood and Englert, 2009) states that comparatively less intelligent individuals work harder and more methodically, and become more resolute and thorough (more conscientious), in order to achieve goals and to compensate for their 'lack of intelligence', whereas more intelligent individuals do not require the traits and behaviours associated with conscientiousness to progress, as they can rely on the strength of their cognitive abilities rather than on structure or effort. The theory suggests a causal relationship between intelligence and conscientiousness, such that the development of conscientiousness is influenced by intelligence. This assumption is deemed plausible because the reverse causal relationship is unlikely, and it implies that the negative correlation should be higher between fluid intelligence (Gf) and conscientiousness. The justification is the developmental timeline of Gf, Gc, and personality: crystallized intelligence would not yet have developed completely when personality traits develop. Subsequently, during school-going ages, more conscientious children would be expected to gain more crystallized intelligence (knowledge) through education, as they would be more efficient, thorough, hard-working and dutiful.

This theory has recently been contradicted by evidence identifying compensatory sample selection, attributing the previous findings to a bias toward selecting samples of individuals above a certain threshold of achievement.

Bandura's theory of self-efficacy and cognition

The conception of cognitive ability has evolved over the years; it is no longer viewed as a fixed property held by an individual. Instead, the current perspective describes it as a general capacity comprising not only cognitive, but also motivational, social and behavioural aspects. These facets work together to perform numerous tasks. An essential skill often overlooked is that of managing emotions and aversive experiences that can compromise one's quality of thought and activity. The link between intelligence and success has been bridged by crediting individual differences in self-efficacy. Bandura's theory identifies the difference between possessing skills and being able to apply them in challenging situations. Thus, the theory suggests that individuals with the same level of knowledge and skill may perform badly, averagely or excellently based on differences in self-efficacy.

A key role of cognition is to allow one to predict events and, in turn, devise methods to deal with those events effectively. These skills depend on the processing of unclear and ambiguous stimuli. To learn the relevant concepts, individuals must be able to draw on their reserve of knowledge to identify, develop and execute options, and to apply the learning acquired from previous experiences. Thus, a stable sense of self-efficacy is essential to stay focused on tasks in the face of challenging situations.

To summarize, Bandura's theory of self-efficacy and intelligence suggests that individuals with a relatively low sense of self-efficacy in any field will avoid challenges. This effect is heightened when they perceive the situations as personal threats. When failure occurs, they recover from it more slowly than others, and credit it to an insufficient aptitude. On the other hand, persons with high levels of self-efficacy hold a task-diagnostic aim that leads to effective performance.

Process, personality, intelligence and knowledge theory (PPIK)

Predicted growth curves for Intelligence as process, crystallized intelligence, occupational knowledge and avocational knowledge based on Ackerman's PPIK Theory.

Developed by Ackerman, the PPIK (process, personality, intelligence and knowledge) theory further develops the approach to intelligence proposed by Cattell's investment theory and by Hebb. It suggests a distinction between intelligence as knowledge and intelligence as process (two concepts that are comparable and related to Gc and Gf respectively, but broader and closer to Hebb's notions of "Intelligence A" and "Intelligence B"), and integrates these factors with elements such as personality, motivation and interests.

Ackerman describes the difficulty of distinguishing process from knowledge, as content cannot be entirely eliminated from any ability test. Personality traits have not been shown to be significantly correlated with the intelligence-as-process aspect except in the context of psychopathology. One exception to this generalization has been the finding of sex differences in cognitive abilities, specifically in mathematical and spatial abilities. On the other hand, the intelligence-as-knowledge factor has been associated with the personality traits of Openness and Typical Intellectual Engagement, which also strongly correlate with verbal abilities (associated with crystallized intelligence).

Latent inhibition

It appears that latent inhibition can influence one's creativity.

Improving

Because intelligence appears to be at least partly dependent on brain structure and on the genes shaping brain development, it has been proposed that genetic engineering could be used to enhance intelligence, a process sometimes called biological uplift in science fiction. Experiments on genetically modified mice have demonstrated superior learning and memory in various behavioral tasks.

IQ leads to greater success in education, but, independently, education raises IQ scores. A 2017 meta-analysis suggests education increases IQ by 1–5 points per year of education, or at least increases IQ test-taking ability.

Attempts to raise IQ with brain training have led to increases on aspects related to the training tasks – for instance, working memory – but it is not yet clear whether these increases generalize to increased intelligence per se.

A 2008 research paper claimed that practicing a dual n-back task can increase fluid intelligence (Gf), as measured in several different standard tests. This finding received some attention from popular media, including an article in Wired. However, a subsequent criticism of the paper's methodology questioned the experiment's validity and took issue with the lack of uniformity in the tests used to evaluate the control and test groups. For example, the progressive nature of Raven's Advanced Progressive Matrices (APM) test may have been compromised by modifications of time restrictions (i.e., 10 minutes were allowed to complete a normally 45-minute test).
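For context on the task itself (independent of the disputed result): in an n-back task, stimuli arrive one at a time and the participant signals whenever the current stimulus matches the one presented n steps earlier; the dual version runs an auditory stream and a visual stream simultaneously. A minimal single-stream checker, as an illustrative sketch:

    def n_back_targets(stream, n):
        """Indices where the current item matches the item n steps back."""
        return [i for i in range(n, len(stream))
                if stream[i] == stream[i - n]]

    # 2-back over a letter stream: positions 3 ('B' two steps after 'B')
    # and 4 ('C' two steps after 'C') are the targets to flag.
    print(n_back_targets(["A", "B", "C", "B", "C", "C"], 2))  # [3, 4]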

Substances which actually or purportedly improve intelligence or other mental functions are called nootropics. A meta-analysis shows that omega-3 fatty acids improve cognitive performance among those with cognitive deficits, but not among healthy subjects. A meta-regression shows that omega-3 fatty acids improve the moods of patients with major depression (major depression is associated with mental deficits). Moreover, exercise, not just performance-enhancing drugs, enhances cognition, in unhealthy and healthy subjects alike.

On the philosophical front, conscious efforts to influence intelligence raise ethical issues. Neuroethics considers the ethical, legal and social implications of neuroscience, and deals with issues such as the difference between treating a human neurological disease and enhancing the human brain, and how wealth impacts access to neurotechnology. Neuroethical issues interact with the ethics of human genetic engineering.

Transhumanist theorists study the possibilities and consequences of developing and using techniques to enhance human abilities and aptitudes.

Eugenics is a social philosophy which advocates the improvement of human hereditary traits through various forms of intervention. Eugenics has variously been regarded as meritorious or deplorable in different periods of history, falling greatly into disrepute after the defeat of Nazi Germany in World War II.

Measuring

Score distribution chart for a sample of 905 children tested on the 1916 Stanford-Binet Test

The approach to understanding intelligence with the most supporters and published research over the longest period of time is based on psychometric testing. It is also by far the most widely used in practical settings. Intelligence quotient (IQ) tests include the Stanford-Binet, Raven's Progressive Matrices, the Wechsler Adult Intelligence Scale and the Kaufman Assessment Battery for Children. There are also psychometric tests that are not intended to measure intelligence itself but some closely related construct such as scholastic aptitude. In the United States examples include the SSAT, the SAT, the ACT, the GRE, the MCAT, the LSAT, and the GMAT. Regardless of the method used, almost any test that requires examinees to reason and has a wide range of question difficulty will produce intelligence scores that are approximately normally distributed in the general population.
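The approximate normality of such scores is a consequence of basic statistics: a total score that sums many roughly independent item scores tends toward a bell curve by the central limit theorem. A toy simulation, with invented item difficulties:

    import numpy as np

    rng = np.random.default_rng(seed=0)
    n_examinees, n_items = 10_000, 60

    # Invented pass-probabilities spanning hard to easy items.
    p_correct = np.linspace(0.2, 0.9, n_items)

    # Each raw score counts the items an examinee answers correctly;
    # a histogram of these scores is approximately bell-shaped.
    scores = (rng.random((n_examinees, n_items)) < p_correct).sum(axis=1)
    print(f"mean = {scores.mean():.1f}, sd = {scores.std():.1f}")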

Intelligence tests are widely used in educational, business, and military settings because of their efficacy in predicting behavior. IQ and g (discussed in the next section) are correlated with many important social outcomes: individuals with low IQs are more likely to be divorced, to have a child out of marriage, to be incarcerated, and to need long-term welfare support, while individuals with high IQs tend to have more years of education, higher-status jobs, and higher incomes. Intelligence is significantly correlated with successful training and performance outcomes, and IQ/g is the single best predictor of successful job performance.

General intelligence factor or g

There are many different kinds of IQ tests, using a wide variety of test tasks. Some tests consist of a single type of task; others rely on a broad collection of tasks with different contents (visual-spatial, verbal, numerical) calling for different cognitive processes (e.g., reasoning, memory, rapid decisions, visual comparisons, spatial imagery, reading, and retrieval of general knowledge). Early in the 20th century, the psychologist Charles Spearman carried out the first formal factor analysis of correlations between various test tasks. He found a trend for all such tests to correlate positively with each other, which is called a positive manifold, and he found that a single common factor explained the positive correlations among tests. Spearman named it g, for "general intelligence factor", and interpreted it as the core of human intelligence that, to a larger or smaller degree, influences success in all cognitive tasks, thereby creating the positive manifold. This interpretation of g as a common cause of test performance is still dominant in psychometrics. An alternative interpretation was recently advanced by van der Maas and colleagues: their mutualism model assumes that intelligence depends on several independent mechanisms, none of which influences performance on all cognitive tests. These mechanisms support each other, so that efficient operation of one of them makes efficient operation of the others more likely, thereby creating the positive manifold.
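As a minimal illustration of Spearman's procedure (with an invented correlation matrix, not real test data), the first principal component of a positive manifold can stand in for g, and each test's entry in that component approximates its g-loading, the quantity the next paragraph uses to rank tests:

    import numpy as np

    # Invented correlations among four tests (vocabulary, matrices,
    # arithmetic, digit span); all positive, i.e., a positive manifold.
    R = np.array([
        [1.00, 0.55, 0.50, 0.40],
        [0.55, 1.00, 0.60, 0.45],
        [0.50, 0.60, 1.00, 0.50],
        [0.40, 0.45, 0.50, 1.00],
    ])

    # eigh returns eigenvalues in ascending order, so the last column
    # of eigenvectors is the first (largest) principal component.
    eigenvalues, eigenvectors = np.linalg.eigh(R)
    g_loadings = eigenvectors[:, -1] * np.sign(eigenvectors[:, -1].sum())

    print("approximate g-loadings:", np.round(g_loadings, 2))
    print("variance explained:", round(eigenvalues[-1] / eigenvalues.sum(), 2))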

IQ tasks and tests can be ranked by how highly they load on the g factor. Tests with high g-loadings are those that correlate highly with most other tests. One comprehensive study investigating the correlations between a large collection of tests and tasks has found that the Raven's Progressive Matrices have a particularly high correlation with most other tests and tasks. The Raven's is a test of inductive reasoning with abstract visual material. It consists of a series of problems, sorted approximately by increasing difficulty. Each problem presents a 3 x 3 matrix of abstract designs with one empty cell; the matrix is constructed according to a rule, and the person must find out the rule to determine which of 8 alternatives fits into the empty cell. Because of its high correlation with other tests, the Raven's Progressive Matrices are generally acknowledged as a good indicator of general intelligence. This is problematic, however, because there are substantial gender differences on the Raven's, which are not found when g is measured directly by computing the general factor from a broad collection of tests.

General collective intelligence factor or c

A recent scientific understanding of collective intelligence, defined as a group's general ability to perform a wide range of tasks, expands human intelligence research by applying similar methods and concepts to groups. The definition, operationalization, and methods are similar to the psychometric approach to general individual intelligence, where an individual's performance on a given set of cognitive tasks is used to measure intelligence, as indicated by the general intelligence factor g extracted via factor analysis. In the same vein, collective intelligence research aims to discover a 'c factor' explaining between-group differences in performance, as well as its structural and group-compositional causes.

Historical psychometric theories

Several different theories of intelligence have historically been important for psychometrics. They often emphasized more factors than the single g factor.

Cattell–Horn–Carroll theory

Many recent, broad IQ tests have been greatly influenced by the Cattell–Horn–Carroll theory, which is argued to reflect much of what is known about intelligence from research. The theory uses a hierarchy of factors for human intelligence: g is at the top, and under it are 10 broad abilities that are in turn subdivided into 70 narrow abilities. The broad abilities are:

  • Fluid intelligence (Gf): includes the broad ability to reason, form concepts, and solve problems using unfamiliar information or novel procedures.
  • Crystallized intelligence (Gc): includes the breadth and depth of a person's acquired knowledge, the ability to communicate one's knowledge, and the ability to reason using previously learned experiences or procedures.
  • Quantitative reasoning (Gq): the ability to comprehend quantitative concepts and relationships and to manipulate numerical symbols.
  • Reading & writing ability (Grw): includes basic reading and writing skills.
  • Short-term memory (Gsm): is the ability to apprehend and hold information in immediate awareness and then use it within a few seconds.
  • Long-term storage and retrieval (Glr): is the ability to store information and fluently retrieve it later in the process of thinking.
  • Visual processing (Gv): is the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual representations.
  • Auditory processing (Ga): is the ability to analyze, synthesize, and discriminate auditory stimuli, including the ability to process and discriminate speech sounds that may be presented under distorted conditions.
  • Processing speed (Gs): is the ability to perform automatic cognitive tasks, particularly when measured under pressure to maintain focused attention.
  • Decision/reaction time/speed (Gt): reflects the immediacy with which an individual can react to stimuli or a task (typically measured in seconds or fractions of seconds; not to be confused with Gs, which is typically measured in intervals of 2–3 minutes).

Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement rather than of IQ, and Gt may be difficult to measure without special equipment.

g was earlier often subdivided into only Gf and Gc, which were thought to correspond to the nonverbal or performance subtests and the verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.
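The hierarchy this section describes (g over broad abilities over narrow abilities) maps naturally onto a nested dictionary; the narrow abilities below are a few illustrative examples drawn from common CHC summaries, not the full set of 70:

    # Sketch of the CHC hierarchy; narrow abilities are examples only.
    chc = {
        "g": {
            "Gf (fluid intelligence)": ["induction", "sequential reasoning"],
            "Gc (crystallized intelligence)": ["language development",
                                               "general verbal information"],
            "Gsm (short-term memory)": ["memory span"],
            "Gs (processing speed)": ["perceptual speed"],
            # ... remaining broad abilities (Gq, Grw, Glr, Gv, Ga, Gt)
        }
    }

    for broad, narrow in chc["g"].items():
        print(broad, "->", ", ".join(narrow))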

Controversies

While not necessarily a dispute about the psychometric approach itself, there are several controversies regarding the results from psychometric research.

One criticism has been against the early research such as craniometry. A reply has been that drawing conclusions from early intelligence research is like condemning the auto industry by criticizing the performance of the Model T.

Several critics, such as Stephen Jay Gould, have been critical of g, seeing it as a statistical artifact and arguing that IQ tests instead measure a number of unrelated abilities. The American Psychological Association's report "Intelligence: Knowns and Unknowns" stated that IQ tests do correlate, and that the view that g is a statistical artifact is a minority one.

Intelligence across cultures

Psychologists have shown that the definition of human intelligence is unique to the culture being studied. Robert Sternberg is among the researchers who have discussed how one's culture affects one's interpretation of intelligence, and he further believes that defining intelligence in only one way, without considering different meanings in cultural contexts, may cast an investigative and unintentionally egocentric view on the world. To counter this, psychologists offer the following definitions of intelligence:

  1. Successful intelligence is the skills and knowledge needed for success in life, according to one's own definition of success, within one's sociocultural context.
  2. Analytical intelligence is the result of intelligence's components applied to fairly abstract but familiar kinds of problems.
  3. Creative intelligence is the result of intelligence's components applied to relatively novel tasks and situations.
  4. Practical intelligence is the result of intelligence's components applied to experience for purposes of adaptation, shaping and selection.

Although typically identified by its Western definition, multiple studies support the idea that human intelligence carries different meanings across cultures around the world. In many Eastern cultures, intelligence is mainly related to one's social roles and responsibilities. A Chinese conception of intelligence would define it as the ability to empathize with and understand others, although this is by no means the only way that intelligence is defined in China. In several African communities, intelligence is likewise viewed through a social lens, although it is exemplified through social responsibilities rather than through social roles as in many Eastern cultures. For example, in the language of Chi-Chewa, which is spoken by some ten million people across central Africa, the equivalent term for intelligence implies not only cleverness but also the ability to take on responsibility. American culture likewise contains a variety of interpretations of intelligence. One of the most common views within American societies defines intelligence as a combination of problem-solving skills, deductive reasoning skills, and intelligence quotient (IQ), while other American societies hold that intelligent people should have a social conscience, accept others for who they are, and be able to give advice or wisdom.

Future of an expanding universe

From Wikipedia, the free encyclopedia

Observations suggest that the expansion of the universe will continue forever. If so, then a popular theory is that the universe will cool as it expands, eventually becoming too cold to sustain life. For this reason, this future scenario, once popularly called "Heat Death", is now known as the "Big Chill" or "Big Freeze".

If dark energy—represented by the cosmological constant, a constant energy density filling space homogeneously, or scalar fields, such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space—accelerates the expansion of the universe, then the space between clusters of galaxies will grow at an increasing rate. Redshift will stretch ancient, incoming photons (even gamma rays) to undetectably long wavelengths and low energies. Stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. According to theories that predict proton decay, the stellar remnants left behind will disappear, leaving behind only black holes, which themselves eventually disappear as they emit Hawking radiation. Ultimately, if the universe reaches a state in which the temperature approaches a uniform value, no further work will be possible, resulting in a final heat death of the universe.

Cosmology

Infinite expansion does not determine the overall spatial curvature of the universe. It can be open (with negative spatial curvature), flat, or closed (with positive spatial curvature), although if it is closed, sufficient dark energy must be present to counteract the gravitational forces or else the universe will end in a Big Crunch.

Observations of the cosmic background radiation by the Wilkinson Microwave Anisotropy Probe and the Planck mission suggest that the universe is spatially flat and has a significant amount of dark energy. In this case, the universe should continue to expand at an accelerating rate. The acceleration of the universe's expansion has also been confirmed by observations of distant supernovae. If, as in the concordance model of physical cosmology (Lambda-cold dark matter or ΛCDM), dark energy is in the form of a cosmological constant, the expansion will eventually become exponential, with the size of the universe doubling at a constant rate.
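
To illustrate what exponential expansion with a constant doubling rate means in practice, here is a minimal sketch, assuming a present-day Hubble constant of about 70 km/s/Mpc and a dark-energy fraction of about 0.7 (values assumed for illustration; the article itself quotes neither):

  import math

  # Assumed values, for illustration only: H0 ≈ 70 km/s/Mpc, Omega_Lambda ≈ 0.7
  H0_km_s_mpc = 70.0
  omega_lambda = 0.7

  # Convert H0 to inverse years (1 Mpc ≈ 3.086e19 km; 1 year ≈ 3.156e7 s)
  H0_per_year = H0_km_s_mpc / 3.086e19 * 3.156e7

  # Under a cosmological constant, the expansion rate tends to
  # H_inf = H0 * sqrt(Omega_Lambda), so a(t) grows like exp(H_inf * t)
  # and the scale factor doubles every ln(2) / H_inf years.
  H_inf = H0_per_year * math.sqrt(omega_lambda)
  print(f"doubling time ≈ {math.log(2) / H_inf:.1e} years")  # ≈ 1.2e10 years

On these assumptions, once dark energy fully dominates, the universe would double in size roughly every 12 billion years.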

If the theory of inflation is true, the universe went through an episode dominated by a different form of dark energy in the first moments of the Big Bang; but inflation ended, indicating an equation of state much more complicated than those assumed so far for present-day dark energy. It is possible that the dark energy equation of state could change again, resulting in an event whose consequences are extremely difficult to parametrize or predict.

Future history

In the 1970s, the future of an expanding universe was studied by the astrophysicist Jamal Islam and the physicist Freeman Dyson. Then, in their 1999 book The Five Ages of the Universe, the astrophysicists Fred Adams and Gregory Laughlin divided the past and future history of an expanding universe into five eras. The first, the Primordial Era, is the time in the past just after the Big Bang when stars had not yet formed. The second, the Stelliferous Era, includes the present day and all of the stars and galaxies now seen. It is the time during which stars form from collapsing clouds of gas. In the subsequent Degenerate Era, the stars will have burnt out, leaving all stellar-mass objects as stellar remnants: white dwarfs, neutron stars, and black holes. In the Black Hole Era, white dwarfs, neutron stars, and other smaller astronomical objects have been destroyed by proton decay, leaving only black holes. Finally, in the Dark Era, even black holes have disappeared, leaving only a dilute gas of photons and leptons.

This future history and the timeline below assume the continued expansion of the universe. If space in the universe begins to contract, subsequent events in the timeline may not occur because the Big Crunch, the collapse of the universe into a hot, dense state similar to that after the Big Bang, will supervene.

Timeline

The Stelliferous Era

From the present to about 10^14 (100 trillion) years after the Big Bang

The observable universe is currently 1.38×10^10 (13.8 billion) years old. This time is in the Stelliferous Era. About 155 million years after the Big Bang, the first star formed. Since then, stars have formed by the collapse of small, dense core regions in large, cold molecular clouds of hydrogen gas. At first, this produces a protostar, which is hot and bright because of energy generated by gravitational contraction. After the protostar contracts for a while, its center will become hot enough to fuse hydrogen and its lifetime as a star will properly begin.

Stars of very low mass will eventually exhaust all their fusible hydrogen and then become helium white dwarfs. Stars of low to medium mass, such as our own Sun, will expel some of their mass as a planetary nebula and eventually become white dwarfs; more massive stars will explode in a core-collapse supernova, leaving behind neutron stars or black holes. In any case, although some of the star's matter may be returned to the interstellar medium, a degenerate remnant will be left behind whose mass is never returned. Therefore, the supply of gas available for star formation is steadily being exhausted.

Milky Way Galaxy and the Andromeda Galaxy merge into one

4–8 billion years from now (17.8–21.8 billion years after the Big Bang)

The Andromeda Galaxy is currently approximately 2.5 million light years away from our galaxy, the Milky Way Galaxy, and the two are moving towards each other at approximately 300 kilometers (186 miles) per second. Based on current evidence, approximately five billion years from now, or 19 billion years after the Big Bang, the Milky Way and the Andromeda Galaxy will collide and merge into one large galaxy. Until 2012, there was no way to confirm whether the collision would happen. In 2012, after using the Hubble Space Telescope between 2002 and 2010 to track the motion of Andromeda, researchers concluded that the collision is definite. The merger will result in the formation of Milkdromeda (also known as Milkomeda).

Coalescence of the Local Group; galaxies outside the Local Supercluster are no longer accessible

10^11 (100 billion) to 10^12 (1 trillion) years

The galaxies in the Local Group, the cluster of galaxies which includes the Milky Way and the Andromeda Galaxy, are gravitationally bound to each other. It is expected that between 10^11 (100 billion) and 10^12 (1 trillion) years from now, their orbits will decay and the entire Local Group will merge into one large galaxy.

Assuming that dark energy continues to make the universe expand at an accelerating rate, in about 150 billion years all galaxies outside the Local Supercluster will pass behind the cosmological horizon. It will then be impossible for events in the Local Group to affect other galaxies. Similarly, it will be impossible for events after 150 billion years, as seen by observers in distant galaxies, to affect events in the Local Group. However, an observer in the Local Supercluster will continue to see distant galaxies, but the events they observe will become exponentially more redshifted as each galaxy approaches the horizon, until time in the distant galaxy seems to stop. The observer in the Local Supercluster never observes events after 150 billion years in their local time, and eventually all light and background radiation lying outside the Local Supercluster will appear to blink out as the light becomes so redshifted that its wavelength becomes longer than the physical diameter of the horizon.

Technically, it will take an infinitely long time for all causal interaction between our Local Supercluster and this light to cease; however, due to the redshifting explained above, the light will not necessarily be observed for an infinite amount of time, and after 150 billion years, no new causal interaction will be observed.

Therefore, after 150 billion years, intergalactic transportation and communication beyond the Local Supercluster becomes causally impossible, unless faster-than-light communication, warp drives, or traversable artificial wormholes are developed.

Luminosities of galaxies begin to diminish

8×10^11 (800 billion) years

8×10^11 (800 billion) years from now, the luminosities of galaxies, which until then will remain roughly similar to present-day values thanks to the increasing luminosity of the remaining stars as they age, will begin to decrease as the less massive red dwarf stars start to die off as white dwarfs.

Galaxies outside the Local Supercluster are no longer detectable

2×10^12 (2 trillion) years

2×10^12 (2 trillion) years from now, all galaxies outside the Local Supercluster will be redshifted to such an extent that even the gamma rays they emit will have wavelengths longer than the size of the observable universe of the time. Therefore, these galaxies will no longer be detectable in any way.
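
As a rough consistency check of the 2×10^12-year figure, the sketch below estimates how long exponential expansion must run before a gamma ray is stretched beyond the horizon scale; both input scales are illustrative assumptions, not values from the article:

  import math

  gamma_wavelength_m = 1e-12   # assumed typical gamma-ray wavelength
  horizon_scale_m = 1e27       # assumed rough size of the observable universe
  doubling_time_yr = 1.2e10    # doubling time from the earlier sketch

  # Each doubling of the scale factor doubles the photon's wavelength,
  # so we need log2(horizon / wavelength) doublings.
  doublings = math.log2(horizon_scale_m / gamma_wavelength_m)
  print(f"{doublings:.0f} doublings ≈ {doublings * doubling_time_yr:.1e} years")
  # ≈ 130 doublings, i.e. about 1.6e12 years, the same order as 2×10^12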

Degenerate Era

From 10^14 (100 trillion) to 10^40 (10 duodecillion) years

By 10^14 (100 trillion) years from now, star formation will end, leaving all stellar objects in the form of degenerate remnants. If protons do not decay, stellar-mass objects will disappear more slowly, making this era last longer.

Star formation ceases

10^12 to 10^14 (1–100 trillion) years

By 10^14 (100 trillion) years from now, star formation will end. This period, known as the "Degenerate Era", will last until the degenerate remnants finally decay. The least massive stars take the longest to exhaust their hydrogen fuel (see stellar evolution). Thus, the longest living stars in the universe are low-mass red dwarfs, with a mass of about 0.08 solar masses (M☉), which have a lifetime of order 10^13 (10 trillion) years. Coincidentally, this is comparable to the length of time over which star formation takes place. Once star formation ends and the least massive red dwarfs exhaust their fuel, nuclear fusion will cease. The low-mass red dwarfs will cool and become black dwarfs. The only objects remaining with more than planetary mass will be brown dwarfs, with mass less than 0.08 M☉, and degenerate remnants: white dwarfs, produced by stars with initial masses between about 0.08 and 8 solar masses, and neutron stars and black holes, produced by stars with initial masses over 8 M☉. Most of the mass of this collection, approximately 90%, will be in the form of white dwarfs. In the absence of any energy source, all of these formerly luminous bodies will cool and become faint.
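
The quoted 10^13-year red dwarf lifetime can be roughly reproduced with a common rule-of-thumb scaling for main-sequence lifetimes, t ≈ 10^10 years × (M/M☉)^-2.5; the scaling law is an assumed textbook approximation, not something stated in this article:

  # Rule-of-thumb main-sequence lifetime (an assumed approximation):
  #   t ≈ 1e10 years * (M / M_sun) ** -2.5
  def lifetime_years(mass_solar: float) -> float:
      return 1e10 * mass_solar ** -2.5

  print(f"{lifetime_years(1.0):.1e} years")   # ≈ 1.0e10 for a Sun-like star
  print(f"{lifetime_years(0.08):.1e} years")  # ≈ 5.5e12, of order the quoted 10^13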

The universe will become extremely dark after the last stars burn out. Even so, there can still be occasional light in the universe. One of the ways the universe can be illuminated is if two carbon-oxygen white dwarfs with a combined mass of more than the Chandrasekhar limit of about 1.4 solar masses happen to merge. The resulting object will then undergo runaway thermonuclear fusion, producing a Type Ia supernova and dispelling the darkness of the Degenerate Era for a few weeks. Neutron stars could also collide, forming even brighter supernovae and expelling up to 6 solar masses of degenerate gas into the interstellar medium. The resulting matter from these supernovae could potentially create new stars. If the combined mass is not above the Chandrasekhar limit but is larger than the minimum mass to fuse carbon (about 0.9 M☉), a carbon star could be produced, with a lifetime of around 10^6 (1 million) years. Also, if two helium white dwarfs with a combined mass of at least 0.3 M☉ collide, a helium star may be produced, with a lifetime of a few hundred million years. Finally, brown dwarfs can form new stars by colliding with each other to form a red dwarf star, which can survive for 10^13 (10 trillion) years, or by accreting gas at very slow rates from the remaining interstellar medium until they have enough mass to start hydrogen burning as red dwarfs as well. This process, at least in the case of white dwarfs, could induce Type Ia supernovae too.

Planets fall or are flung from orbits by a close encounter with another star

10^15 (1 quadrillion) years

Over time, the orbits of planets will decay due to gravitational radiation, or planets will be ejected from their local systems by gravitational perturbations caused by encounters with another stellar remnant.

Stellar remnants escape galaxies or fall into black holes

10^19 to 10^20 (10 to 100 quintillion) years

Over time, objects in a galaxy exchange kinetic energy in a process called dynamical relaxation, making their velocity distribution approach the Maxwell–Boltzmann distribution. Dynamical relaxation can proceed either by close encounters of two stars or by less violent but more frequent distant encounters. In the case of a close encounter, two brown dwarfs or stellar remnants will pass close to each other. When this happens, the trajectories of the objects involved change slightly, in such a way that their kinetic energies become more nearly equal than before. After a large number of encounters, lighter objects tend to gain speed while heavier objects lose it.

Because of dynamical relaxation, some objects in the universe will gain just enough energy to reach galactic escape velocity and depart the galaxy, leaving behind a smaller, denser galaxy. Since encounters are more frequent in the denser galaxy, the process then accelerates. The end result is that most objects (90% to 99%) are ejected from the galaxy, leaving a small fraction (perhaps 1% to 10%) which fall into the central supermassive black hole. It has been suggested that the matter of the fallen remnants will form an accretion disk around the black hole, creating a quasar, as long as enough matter is present there.
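
For a sense of where the 10^19 to 10^20-year figure comes from, a standard order-of-magnitude estimate for two-body relaxation (an assumed textbook formula, not given in this article) is t_relax ≈ (0.1 N / ln N) × t_cross, with galactic "evaporation" taking roughly 100 relaxation times:

  import math

  # Assumed illustrative inputs for a large galaxy:
  N = 1e11            # number of stellar-mass objects
  t_cross_yr = 1e8    # crossing time in years

  t_relax = 0.1 * N / math.log(N) * t_cross_yr   # two-body relaxation time
  t_evaporate = 100 * t_relax                    # rough galactic evaporation time
  print(f"t_relax ≈ {t_relax:.0e} yr, t_evaporate ≈ {t_evaporate:.0e} yr")
  # ≈ 4e16 and 4e18 years, within an order of magnitude of the range above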

Possible ionization of matter

>10^23 years from now

In an expanding universe with decreasing density and a non-zero cosmological constant, matter density would approach zero, and most matter, except black dwarfs, neutron stars, black holes, and planets, would ionize and dissipate at thermal equilibrium.

Future with proton decay

The following timeline assumes that protons do decay.

Chance: 10^34 (10 decillion) to 10^39 (1 duodecillion) years

The subsequent evolution of the universe depends on the possibility and rate of proton decay. Experimental evidence shows that if the proton is unstable, it has a half-life of at least 10^34 years. Some of the Grand Unified Theories (GUTs) predict long-term proton instability between 10^31 and 10^36 years, with the upper bound on standard (non-supersymmetric) proton decay at 1.4×10^36 years and an overall upper limit for any proton decay (including supersymmetric models) at 6×10^39 years. Recent research showing the proton lifetime (if the proton is unstable) to be at or beyond the 10^34–10^35 year range rules out simpler GUTs and most non-supersymmetric models.

Nucleons start to decay

Neutrons bound into nuclei are also suspected to decay with a half-life comparable to that of protons. Planets (substellar objects) would decay in a simple cascade process from heavier elements to pure hydrogen while radiating energy.

In the event that the proton does not decay at all, stellar objects would still disappear, but more slowly. See Future without proton decay below.

Shorter or longer proton half-lives will accelerate or decelerate the process. This means that after 10^37 years (the maximum proton half-life used by Adams & Laughlin (1997)), one-half of all baryonic matter will have been converted into gamma ray photons and leptons through proton decay.

All nucleons decay

10^40 (10 duodecillion) years

Given our assumed half-life of the proton, nucleons (protons and bound neutrons) will have undergone roughly 1,000 half-lives by the time the universe is 10^40 years old. To put this into perspective, there are an estimated 10^80 protons currently in the universe. This means that the number of nucleons will be halved 1,000 times by the time the universe is 10^40 years old. Hence, there will be roughly 0.5^1,000 (approximately 10^-301) times as many nucleons remaining as there are today; that is, zero nucleons remaining in the universe at the end of the Degenerate Era. Effectively, all baryonic matter will have been changed into photons and leptons. Some models predict the formation of stable positronium atoms with diameters greater than the observable universe's current diameter (roughly 6×10^34 metres) in 10^85 years, and that these will in turn decay to gamma radiation in 10^141 years.
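
Because 0.5^1,000 underflows ordinary floating-point arithmetic, the calculation above is easiest to verify in logarithms; the sketch below redoes it with the half-life (10^37 years) and proton count (10^80) quoted in the text:

  import math

  half_life_yr = 1e37        # maximum proton half-life assumed in the text
  age_yr = 1e40              # age of the universe at the end of the era
  log10_protons_now = 80.0   # ~1e80 protons in the universe today

  n_half_lives = age_yr / half_life_yr              # 1,000 half-lives
  log10_fraction = n_half_lives * math.log10(0.5)   # ≈ -301
  log10_left = log10_protons_now + log10_fraction   # ≈ -221
  print(f"{n_half_lives:.0f} half-lives; fraction ≈ 10^{log10_fraction:.0f}; "
        f"expected nucleons ≈ 10^{log10_left:.0f} (effectively zero)")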

The supermassive black holes are all that remain of galaxies once all protons decay, but even these giants are not immortal.

If protons decay via higher-order nuclear processes

Chance: 10^65 to 10^200 years

In the event that the proton does not decay according to the theories described above, the Degenerate Era will last longer, and will overlap or surpass the Black Hole Era. On a time scale of 10^65 years, solid matter will behave like a liquid and become smooth spheres due to diffusion and gravity. Degenerate stellar objects can still experience proton decay, for example via processes involving the Adler–Bell–Jackiw anomaly, virtual black holes, or higher-dimensional supersymmetry, possibly with a half-life of under 10^200 years.

>10^150 years from now

Although protons are stable in standard model physics, a quantum anomaly may exist on the electroweak level, which can cause groups of baryons (protons and neutrons) to annihilate into antileptons via the sphaleron transition. Such baryon/lepton-number violations occur in units of three, meaning they can only take place in groups of three baryons at a time, which can restrict or prohibit such events. No experimental evidence of sphalerons has yet been observed at low energy levels, though they are believed to occur regularly at high energies and temperatures.

The photon, electron, positron, and neutrino are now the final remnants of the universe as the last of the supermassive black holes evaporate.

Black Hole Era

10^40 (10 duodecillion) years to approximately 10^100 (1 googol) years, up to 10^108 years for the largest supermassive black holes

After 10^40 years, black holes will dominate the universe. They will slowly evaporate via Hawking radiation. A black hole with a mass of around 1 M☉ will vanish in around 2×10^66 years. As the lifetime of a black hole is proportional to the cube of its mass, more massive black holes take longer to decay. A supermassive black hole with a mass of 10^11 (100 billion) M☉ will evaporate in around 2×10^99 years.

The largest black holes in the universe are predicted to continue to grow. Larger black holes of up to 10^14 (100 trillion) M☉ may form during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of 10^106 to 10^108 years.
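
Since the evaporation time scales with the cube of the mass, the figures above can be checked against each other by calibrating the cubic law to the quoted 1 M☉ value; a minimal sketch:

  # Hawking evaporation time, calibrated to the quoted 2e66 years for 1 M_sun.
  # The cubic scaling and the calibration point are both stated in the text.
  def evaporation_time_years(mass_solar: float) -> float:
      return 2e66 * mass_solar ** 3

  print(f"{evaporation_time_years(1.0):.0e}")   # 2e66 years for 1 M_sun
  print(f"{evaporation_time_years(1e11):.0e}")  # 2e99 years, matching the text
  print(f"{evaporation_time_years(1e14):.0e}")  # 2e108 years, near the top of
                                                # the quoted 10^106–10^108 range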

Hawking radiation has a thermal spectrum. During most of a black hole's lifetime, the radiation has a low temperature and is mainly in the form of massless particles such as photons and hypothetical gravitons. As the black hole's mass decreases, its temperature increases, becoming comparable to the Sun's by the time the black hole mass has decreased to 10^19 kilograms. The hole then provides a temporary source of light during the general darkness of the Black Hole Era. During the last stages of its evaporation, a black hole will emit not only massless particles, but also heavier particles, such as electrons, positrons, protons, and antiprotons.

Dark Era and Photon Age

From 10^100 years (10 duotrigintillion or 1 googol years)

After all the black holes have evaporated (and after all the ordinary matter made of protons has disintegrated, if protons are unstable), the universe will be nearly empty. Photons, neutrinos, electrons, and positrons will fly from place to place, hardly ever encountering each other. Gravitationally, the universe will be dominated by dark matter, electrons, and positrons (not protons).

By this era, with only very diffuse matter remaining, activity in the universe will have tailed off dramatically (compared with previous eras), with very low energy levels and very large time scales. Electrons and positrons drifting through space will encounter one another and occasionally form positronium atoms. These structures are unstable, however, and their constituent particles must eventually annihilate. Other low-level annihilation events will also take place, albeit very slowly. The universe now reaches an extremely low-energy state.

Future without proton decay

If the protons do not decay, stellar-mass objects will still become black holes, but more slowly. The following timeline assumes that proton decay does not take place.

Degenerate Era

Matter decays into iron

10^1100 to 10^32000 years from now

In 10^1500 years, cold fusion occurring via quantum tunneling should make the light nuclei in stellar-mass objects fuse into iron-56 nuclei. Fission and alpha particle emission should make heavy nuclei also decay to iron, leaving stellar-mass objects as cold spheres of iron, called iron stars. Before this happens, the process is expected to lower the Chandrasekhar limit of some black dwarfs, resulting in supernovae after 10^1100 years. Non-degenerate silicon has been calculated to tunnel to iron in approximately 10^32000 years.

Black Hole Era

Collapse of iron stars to black holes

10^10^26 to 10^10^76 years from now

Quantum tunneling should also turn large objects into black holes, which (on these timescales) will instantaneously evaporate into subatomic particles. Depending on the assumptions made, the time this takes to happen can be calculated as from 10^10^26 years to 10^10^76 years. Quantum tunneling may also make iron stars collapse into neutron stars in around 10^10^76 years.

Dark Era (without proton decay)

10^10^76 years from now

With the black holes evaporated, virtually no matter remains; the universe has become an almost pure vacuum (possibly accompanied by a false vacuum). The expansion of the universe slowly cools it toward absolute zero.

Beyond

Beyond 10^2500 years if proton decay occurs, or 10^10^76 years without proton decay

It is possible that a Big Rip event may occur far in the future. This singularity would take place at a finite scale factor.

If the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.

Presumably, extreme low-energy states imply that localized quantum events become major macroscopic phenomena rather than negligible microscopic ones, because the smallest perturbations make the biggest difference in this era; there is no telling what may happen to space or time. It is perceived that the laws of "macro-physics" will break down and the laws of quantum physics will prevail.

The universe could possibly avoid eternal heat death through random quantum tunneling and quantum fluctuations, given the non-zero probability of producing a new Big Bang in roughly 10^10^10^56 years.

Over an infinite amount of time, there could be a spontaneous entropy decrease, by a Poincaré recurrence or through thermal fluctuations (see also fluctuation theorem).

Massive black dwarfs could also potentially explode into supernovae after up to 10^32000 years, assuming protons do not decay.

The possibilities above are based on a simple form of dark energy. However, the physics of dark energy is still a very active area of research, and the actual form of dark energy could be much more complex. For example, during inflation dark energy affected the universe very differently than it does today, so it is possible that dark energy could trigger another inflationary period in the future. Until dark energy is better understood, its possible effects are extremely difficult to predict or parametrize.
