
Chronology of the universe

From Wikipedia, the free encyclopedia

The chronology of the universe describes the history and future of the universe according to Big Bang cosmology.

Research published in 2015 estimates the earliest stages of the universe's existence as taking place 13.8 billion years ago, with an uncertainty of around 21 million years at the 68% confidence level.

Background

Expansion

The current accepted model of the history of the universe is based on the concept of the Big Bang: the universe started hot and dense then expanded and cooled. Different particles interact during each major stage in the expansion; as the universe expands the density falls and some particle interactions cease to be important. The character of the universe changes. Moreover, the rate of the expansion itself depends upon the nature of the existing particles, creating an interplay between cosmology and particle physics.

Time

The lookback time of extragalactic observations as a function of their cosmological redshift, up to z = 20.

In cosmology, time and space are connected: space expands as time increases. Time at each point in space (for example, a galaxy) can be uniquely defined in terms of an imaginary clock at that point. These clocks move with their points in space as the universe expands; they are synchronized to a single point in the distant past. Light from distant galaxies is emitted in the past and then travels at the speed of light, so knowledge about a distant galaxy is limited to one point in time, called the lookback time. During the journey from a distant point, the universe continues to expand, stretching the wavelength of the light along the way, an effect called cosmological redshift. The redshift can be measured by comparing incoming light to known spectroscopic lines, and the resulting value can be related to the comoving distance to the emitter. Consequently, experimental knowledge about the chronology of the universe is derived by observing distant light.
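In a flat Lambda-CDM model the lookback time follows from the expansion rate, t_lb(z) = ∫₀^z dz′ / [(1+z′) H(z′)] with H(z) = H₀ √(Ω_m (1+z)³ + Ω_Λ). A minimal numerical sketch of this relation (the parameter values are illustrative, roughly Planck-like, and are not taken from this article):

```python
import math

# Illustrative flat Lambda-CDM parameters (assumptions, not from the article)
H0 = 67.7                      # Hubble constant, km/s/Mpc
OMEGA_M = 0.31                 # present-day matter density parameter
OMEGA_L = 0.69                 # present-day dark-energy density parameter
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr (977.8 converts km/s/Mpc to 1/Gyr)

def lookback_time_gyr(z, steps=100_000):
    """Lookback time t_lb(z) = (1/H0) * integral_0^z dz' / ((1+z') E(z'))."""
    def integrand(zp):
        e = math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
        return 1.0 / ((1 + zp) * e)
    dz = z / steps
    # midpoint-rule numerical integration
    total = sum(integrand((i + 0.5) * dz) for i in range(steps)) * dz
    return HUBBLE_TIME_GYR * total

print(round(lookback_time_gyr(20), 2))   # ~13.6 Gyr, close to the full 13.8 Gyr age
```

At z = 20 the lookback time is within about 180 million years of the age of the universe, which is why that redshift corresponds to the era of the first stars discussed later in the article.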

Overview

The NASA diagram shows the history of the universe from inflation until the present.

The chronology of the universe can be divided into five parts:

  • Inflation, the first era supported by experimental evidence: a period of exponential expansion that ends with the conversion of energy into particles.
  • Quark soup: the initial particles cool and coalesce, and dark matter forms.
  • Big Bang nucleosynthesis: combining nucleons creates the cores of the first atoms.
  • Gravity builds cosmic structure: reduced density allows matter to dominate over radiation in controlling the expansion, photons decouple to form the cosmic background radiation, and gravitational attraction builds stars, galaxies, and clusters of galaxies.
  • Cosmic acceleration: continued expansion allows dark energy to overcome gravitational attraction, inhibiting larger structures.

Within these large subdivisions are many events and transitions. Older models divided the chronology differently, using different terminology or emphasis.

Tabular summary

Modern cosmological chronologies begin with inflation, the earliest time period supported by solid observational evidence. Anything earlier is considered non-standard cosmology, the subject of a great deal of as-yet-unconfirmed research.

Each entry lists the article subsection, cosmic time, redshift, and temperature (with the equivalent thermal energy kB T where applicable), followed by a description.

  • Inflation. Cosmic time: unknown. Redshift and temperature: not applicable. Cosmic inflation expands space by a factor of the order of 10^26 over a time of the order of 10^−36 to 10^−32 seconds.
  • Reheating. Cosmic time: unknown. Redshift and temperature: unknown. Converts the energy in the inflaton field into a thermal bath of Standard Model particles, initiating the Hot Big Bang. Many mechanisms have been proposed.
  • Baryogenesis. Cosmic time: unknown. Redshift and temperature: unknown. Matter and antimatter are created with one extra particle of matter for every 10^10 pairs. The pairs annihilate, producing photons and leaving the excess matter particles. Many mechanisms have been proposed, but no observations select one.
  • Electroweak phase transition. Cosmic time: 20×10^−12 s. Redshift: ~10^15. Temperature: >10^15 K (150 GeV/kB). The weak interaction becomes distinct from the electromagnetic interaction, and matter particles acquire mass. The sphere of space that will become the observable universe is approximately 300 light-seconds (~0.6 au) in radius at this time.
  • Quantum chromodynamics phase transition. Cosmic time: 20×10^−6 s. Redshift: ~10^12. Temperature: 10^15 K – 10^12 K (150 GeV/kB – 150 MeV/kB). The quark–gluon plasma coalesces into hadrons: mostly protons, neutrons, and pions.
  • Neutrino decoupling. Cosmic time: 1 s. Redshift: 6×10^9. Temperature: 10^10 K (1 MeV/kB). Neutrinos cease interacting with baryonic matter and form the cosmic neutrino background. The sphere of space that will become the observable universe is approximately 10 light-years in radius at this time.
  • Electron–positron annihilation. Cosmic time: 6 s. Redshift: 2×10^9. Temperature: 10^10 K – 10^9 K (1 MeV/kB – 100 keV/kB). As the temperature falls, photons no longer have sufficient energy to produce electron–positron pairs. Electrons and positrons annihilate, leaving photons.
  • Big Bang nucleosynthesis. Cosmic time: 10 s – 1000 s. Redshift: 4×10^8. Temperature: 10^9 K – 10^7 K (0.1 MeV/kB – 1 keV/kB). Protons and neutrons are bound into primordial atomic nuclei: hydrogen and helium-4. Trace amounts of deuterium, helium-3, and lithium-7 also form. At the end of this epoch, the spherical volume of space that will become the observable universe is about 300 light-years in radius, and the baryonic matter density is on the order of 4 grams per m^3 (about 0.3% of sea-level air density); however, most energy at this time is in electromagnetic radiation.
  • Recombination. Cosmic time: 290 ka – 370 ka. Redshift: 1090 – 1270. Temperature: 4000 K (0.4 eV/kB). Electrons and atomic nuclei first become bound to form neutral atoms. Photons are no longer in thermal equilibrium with matter, and the universe first becomes transparent. Recombination lasts for about 100 ka, during which the universe becomes more and more transparent to photons. The photons of the cosmic microwave background radiation originate at this time. The spherical volume of space that will become the observable universe is 42 million light-years in radius at this time. The baryonic matter density at this time is about 500 million hydrogen and helium atoms per cubic metre, approximately a billion times higher than today. This density corresponds to a pressure on the order of 10^−17 atm.
  • Dark Ages. Cosmic time: 370 ka – about 150 Ma (only fully ending by about 1 Ga). Redshift: 1100 – 20. Temperature: 4000 K – 60 K. The time between recombination and the formation of the first stars. During this time, the only source of photons was hydrogen emitting radio waves at the hydrogen line. Freely propagating CMB photons quickly (within about 3 million years) red-shifted to infrared, and the universe was devoid of visible light.
  • Star and galaxy formation and evolution. Cosmic time: earliest galaxies from about 300–400 Ma (first stars: similar or earlier); modern galaxies from 1 Ga – 10 Ga (exact timings being researched). Redshift: from about 20. Temperature: from about 60 K. The earliest known galaxies existed by about 280 Ma. Galaxies coalesce into "proto-clusters" from about 1 Ga (redshift z = 6), into galaxy clusters beginning at 3 Ga (z = 2.1), and into superclusters from about 5 Ga (z = 1.2). See: list of galaxy groups and clusters, list of superclusters.
  • Reionization. Cosmic time: 200 Ma – 1 Ga (exact timings being researched). Redshift: 20 – 6. Temperature: 60 K – 19 K. The most distant astronomical objects observable with telescopes date to this period; as of June 2025, the most remote galaxy observed is MoM-z14, at a redshift of 14.44. The earliest "modern" Population I stars form in this period.
  • Present time. Cosmic time: 13.8 Ga. Redshift: 0. Temperature: 2.7 K. The farthest observable photons at this moment are CMB photons. They arrive from a sphere with a radius of 46 billion light-years. The spherical volume inside it is commonly referred to as the observable universe.

Alternative subdivisions of the chronology (overlapping several of the above periods):

  • Radiation-dominated era. Cosmic time: from inflation (~10^−32 s) – 47 ka. Redshift: >3600. Temperature: >10^4 K. During this time, the energy density of massless and near-massless relativistic components such as photons and neutrinos, which move at or close to the speed of light, dominates both matter density and dark energy.
  • Matter-dominated era. Cosmic time: 47 ka – 9.8 Ga. Redshift: 3600 – 0.4. Temperature: 10^4 K – 4 K. During this time, the energy density of matter dominates both radiation density and dark energy, resulting in a decelerated expansion of the universe.
  • Dark-energy-dominated era. Cosmic time: >9.8 Ga. Redshift: <0.4. Temperature: <4 K. Matter density falls below the dark energy density (vacuum energy), and the expansion of space begins to accelerate. This time happens to correspond roughly to the time of the formation of the Solar System and the evolutionary history of life.
  • Stelliferous Era. Cosmic time: 150 Ma – 100 Ta. Redshift: 20 – −0.99. Temperature: 60 K – 0.03 K. The time between the first formation of Population III stars and the cessation of star formation, which leaves all stars in the form of degenerate remnants.
  • Far future. Cosmic time: >100 Ta. Redshift: <−0.99. Temperature: <0.1 K. The Stelliferous Era will end as stars eventually die and fewer are born to replace them, leading to a darkening universe. Various theories suggest a number of subsequent possibilities. Assuming proton decay, matter may eventually evaporate into a Dark Era (heat death). Alternatively, the universe may collapse in a Big Crunch. Other suggested ends include a false vacuum catastrophe and a Big Rip.

Inflation

Before c. 10^−32 seconds after the Big Bang

At this point of the very early universe, the universe is thought to have expanded by at least a factor of 10^26 in a time on the order of 10^−36 seconds. All of the mass-energy in all of the galaxies currently visible started in a sphere with a radius around 4×10^−29 m, which grew to a sphere with a radius around 0.09 m by the end of inflation. This phase of the cosmic expansion history is known as inflation or sometimes as the inflationary epoch.
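As a back-of-envelope consistency check on these figures (the arithmetic is ours, not the article's): growing from ~4×10^−29 m to ~0.09 m implies an expansion factor of about 2×10^27, consistent with the statement that space expanded by at least a factor of 10^26.

```python
# Back-of-envelope check of the inflation numbers quoted above (illustrative only)
r_initial = 4e-29   # metres, radius of the region before inflation
r_final = 0.09      # metres, radius at the end of inflation

factor = r_final / r_initial
print(f"{factor:.2e}")   # ~2.25e+27, i.e. at least the quoted factor of 1e26
```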

Inflation explains how today's universe has concentrations of matter, like galaxies and clusters of galaxies, rather than matter spread uniformly throughout the universe. Tiny quantum fluctuations in the universe, amplified by inflation, are believed to be the basis of large-scale structures that formed much later.

The mechanism that drove inflation remains unknown, although many models have been put forward. In several of the more prominent models, it is thought to have been triggered by the separation of the strong and electroweak interactions which ended the grand unification epoch. One of the theoretical products of this phase transition was a scalar field called the inflaton field. As this field settled into its lowest-energy state throughout the universe, it generated an enormous repulsive force that led to a rapid expansion of the universe.

The rapid expansion meant that any potential particles (or other "unwanted" artifacts, such as topological defects) remaining from the time before inflation were now distributed very thinly across the universe.

Reheating

It is not known exactly when the inflationary epoch ended, but it is thought to have been between 10^−33 and 10^−32 seconds after the Big Bang. The rapid expansion of space meant that any elementary particles remaining from the grand unification epoch were distributed very thinly across the universe, to the point where no physical temperature can be associated with them. However, the large potential energy of the inflaton field was released at the end of the inflationary epoch as the inflaton field decayed into other particles, a process known as reheating. This heating effect led to the universe being repopulated with a dense, hot mixture of Standard Model particles.

After inflation ended, the universe continued to expand. A region the size of a melon at that time has since grown to be our entire observable universe.

Hot Big Bang

The physical model for the chronology of the universe with strong observational and theoretical support is called the hot Big Bang model. The concept includes an early state of extreme temperature and density followed by expansion of the universe continuing to this day. A high-precision version of the Big Bang model using conventional physics, known as Lambda-CDM, agrees with a wide array of astrophysical observations. The concept is not extrapolated back to zero time. Within the standard model of cosmology the initial state is set by a process called inflation. The relative timeline for the earliest phenomena is unclear. Speculation on processes occurring before inflation involves physics considered outside of standard cosmology.

Electroweak phase transition

10^−12 seconds after the Big Bang

As the universe's temperature continued to fall below 159.5±1.5 GeV/kB, electroweak symmetry breaking happened. So far as we know, it was the penultimate symmetry breaking event in the formation of the universe, the final one being chiral symmetry breaking in the quark sector. This has two related effects:

  1. Via the Higgs mechanism, all elementary particles interacting with the Higgs field became massive, having been massless at higher energy levels.
  2. As a side-effect, the weak nuclear force and electromagnetic force, and their respective bosons (the W and Z bosons and the photon), began to manifest differently. Before electroweak symmetry breaking, these bosons were all massless particles and interacted over long distances, but at this point the W and Z bosons abruptly became massive particles that interact only over distances smaller than the size of an atom, while the photon remained massless and continued to mediate a long-range interaction.

After electroweak symmetry breaking, the fundamental interactions we know of—gravitation, electromagnetic, weak and strong interactions—all took their present forms, and fundamental particles had their expected masses, but the temperature of the universe was still too high to allow the stable formation of many of the particles we now see in the universe, so there were no protons or neutrons, and therefore no atoms, atomic nuclei, or molecules. (More precisely, any composite particles that formed by chance almost immediately broke up again due to the extreme energies.)

Quantum chromodynamics phase transition

Between 10^−12 seconds and 10^−5 seconds after the Big Bang

After cosmic inflation ended, the universe was filled with a hot quark–gluon plasma, the remains of reheating. From this point onwards the physics of the early universe is much better understood, and the energies involved in the quark epoch are directly accessible in particle physics experiments and other detectors.

The quark epoch began approximately 10^−12 seconds after the Big Bang. This was the period in the evolution of the early universe immediately after electroweak symmetry breaking when the fundamental interactions of gravitation, electromagnetism, the strong interaction, and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons. The quark epoch ended when the universe was about 10^−5 seconds old; two non-equilibrium events must have occurred next: the formation of baryons and of dark matter.

Neutrino decoupling and cosmic neutrino background (CνB)

Around 1 second after the Big Bang

At approximately 1 second after the Big Bang, neutrinos decouple and begin travelling freely through space. As neutrinos rarely interact with matter, these neutrinos still exist today, analogous to the much later cosmic microwave background emitted during recombination, around 370,000 years after the Big Bang. The neutrinos from this event have a very low energy, around 10^−10 times that of neutrinos observable with present-day direct detection. Even high-energy neutrinos are notoriously difficult to detect, so this cosmic neutrino background (CνB) may not be directly observed in detail for many years, if at all.

However, Big Bang cosmology makes many predictions about the CνB, and there is very strong indirect evidence that the CνB exists, both from Big Bang nucleosynthesis predictions of the helium abundance, and from anisotropies in the cosmic microwave background (CMB). One of these predictions is that neutrinos will have left a subtle imprint on the CMB. It is well known that the CMB has irregularities. Some of the CMB fluctuations were roughly regularly spaced, because of the effect of baryonic acoustic oscillations. In theory, the decoupled neutrinos should have had a very slight effect on the phase of the various CMB fluctuations.

In 2015, it was reported that such shifts had been detected in the CMB. Moreover, the fluctuations corresponded to neutrinos of almost exactly the temperature predicted by Big Bang theory (1.96±0.02 K compared to a prediction of 1.95 K), and exactly three types of neutrino, the same number of neutrino flavors predicted by the Standard Model.

Cosmological models of this early time remain unsettled. The Standard Model of particle physics has only been tested up to temperatures of order 10^17 K (around 10 TeV) in particle colliders such as the Large Hadron Collider. Moreover, new physical phenomena not yet covered by the Standard Model could have been important before the time of neutrino decoupling, when the temperature of the universe was about 10^10 K (1 MeV).
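The temperature–energy correspondences used throughout this article follow E = kB T, with kB ≈ 8.617×10^−5 eV/K. A quick check of the two figures just quoted:

```python
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant in eV/K

def energy_to_temperature(e_ev):
    """Temperature in kelvin corresponding to a thermal energy in eV."""
    return e_ev / K_B_EV_PER_K

# 10 TeV (the collider-tested scale) corresponds to ~1.2e17 K
print(f"{energy_to_temperature(10e12):.1e} K")
# 1 MeV (neutrino decoupling) corresponds to ~1.2e10 K
print(f"{energy_to_temperature(1e6):.1e} K")
```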

Electron-positron annihilation

Between 1 second and 10 seconds after the Big Bang

The majority of hadrons and anti-hadrons annihilate each other, leaving leptons (such as electrons, muons, and certain neutrinos) and antileptons to dominate the mass of the universe. Initially, leptons and antileptons are produced in pairs. About 10 seconds after the Big Bang, the temperature of the universe falls to the point at which new lepton–antilepton pairs are no longer created, and most remaining leptons and antileptons quickly annihilate each other, giving rise to pairs of high-energy photons and leaving a small residue of non-annihilated leptons. After most leptons and antileptons have annihilated, most of the mass–energy in the universe is left in the form of photons.

Baryogenesis

Around 3 minutes after the Big Bang

Baryons are subatomic particles such as protons and neutrons that are composed of three quarks. It would be expected that both baryons, and particles known as antibaryons would have formed in equal numbers. However, almost no antibaryons are observed in nature. It is not clear how this came about. Any explanation for this phenomenon must allow the Sakharov conditions related to baryogenesis to have been satisfied at some time after the end of cosmological inflation. Current particle physics suggests asymmetries under which these conditions would be met, but these asymmetries appear to be too small to account for the observed baryon-antibaryon asymmetry of the universe.

Theory predicts that about 1 neutron remained for every 6 protons, with the ratio falling to 1:7 over time due to neutron decay. This is believed to be correct because, at a later stage, the neutrons and some of the protons fused, leaving hydrogen, a hydrogen isotope called deuterium, helium, and other elements, which can be measured. A 1:7 neutron-to-proton ratio would indeed produce the observed element ratios in the early and current universe.
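The observed ~25% helium mass fraction follows directly from the 1:7 neutron-to-proton ratio, since essentially every neutron ends up inside a helium-4 nucleus (two neutrons plus two protons). A worked version of that arithmetic:

```python
# Helium-4 mass fraction implied by the 1:7 neutron-to-proton ratio.
neutrons, protons = 1, 7

# All neutrons end up in helium-4 (2 neutrons + 2 protons per nucleus), so
# N neutrons give N/2 nuclei containing 4 * (N/2) = 2N nucleons in total.
nucleons_in_helium = 2 * neutrons
helium_mass_fraction = nucleons_in_helium / (neutrons + protons)
print(helium_mass_fraction)   # 0.25
```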

Nucleosynthesis of light elements

Between 3 minutes and 20 minutes after the Big Bang

Between about 3 and 20 minutes after the Big Bang, nuclear fusion reactions convert a 1:7 mixture of neutrons and protons into a mix of protons, deuterium (a proton fused with a neutron), 3He, and 4He, with trace amounts of 7Li and 7Be. These reactions end when the temperature falls below the 0.07 MeV needed for nuclear fusion. The final mixture depends upon the reaction rates, the temperature, and the density of the components. The reaction rates can be measured in nuclear physics laboratories, while the temperature and densities can be calculated from models of the expansion of the universe.

About 25% of the protons and all of the neutrons fuse to form deuterium, a hydrogen isotope, and almost all of the deuterium quickly fuses to form helium-4. Helium-4 has a much higher binding energy than nuclei with 5 to 8 nucleons, so only trace amounts of those nuclei are created. Heavier nuclei of the kind produced in stars do not appear, because they require the combination of three helium-4 nuclei, and the density of helium-4 is too low for many three-way collisions to occur before the expansion cools the universe below the fusion temperature. Small amounts of tritium (another hydrogen isotope) and beryllium-7 and -8 are formed, but these are unstable and quickly decay. A small amount of deuterium is left unfused.

The abundances of the light elements in the early universe can be estimated from observations of old galaxies and provide strong evidence for the Big Bang. For example, the Big Bang should produce about 1 neutron for every 7 protons, allowing 25% of all nucleons to be fused into helium-4 (2 protons and 2 neutrons out of every 16 nucleons), and this is the amount we find today, far more than can be explained by production in stars. Similarly, deuterium fuses extremely easily; any alternative explanation must account for conditions that allowed deuterium to form but also left some of that deuterium unfused rather than immediately fused again into helium. Any alternative must also explain the proportions of the various light elements and their isotopes. A few isotopes, such as lithium-7, have been found to be present in amounts that differ from theory.

Matter-radiation equality

47,000 years after the Big Bang

Until now, the universe's large-scale dynamics and behavior have been determined mainly by radiation, meaning those constituents that move relativistically (at or near the speed of light), such as photons and neutrinos. As the universe cools, from around 47,000 years (redshift z = 3600) the universe's large-scale behavior becomes dominated by matter instead. This occurs because the energy density of matter begins to exceed both the energy density of radiation and the vacuum energy density. Around or shortly after 47,000 years, the densities of non-relativistic matter (atomic nuclei) and relativistic radiation (photons) become equal. The Jeans length, which determines the smallest structures that can form (due to competition between gravitational attraction and pressure effects), begins to fall, and perturbations, instead of being wiped out by free-streaming radiation, can begin to grow in amplitude.
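The equality redshift follows from how the two densities dilute with expansion: matter as (1+z)^3 and radiation as (1+z)^4, so they match when 1 + z_eq = Ω_m / Ω_r. With illustrative present-day density parameters (assumed values, not from this article):

```python
# Matter density dilutes as (1+z)^3, radiation as (1+z)^4, so equality occurs
# where Omega_m * (1+z)^3 == Omega_r * (1+z)^4, i.e. 1 + z_eq = Omega_m / Omega_r.
OMEGA_M = 0.315    # present-day matter density parameter (illustrative)
OMEGA_R = 9.2e-5   # present-day radiation density, photons + neutrinos (illustrative)

z_eq = OMEGA_M / OMEGA_R - 1
print(round(z_eq))   # ~3400, the same order as the z = 3600 quoted above
```

The small difference from the quoted z = 3600 reflects the choice of density parameters, not the scaling argument itself.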

According to the Lambda-CDM model, by this stage, the matter in the universe is around 84.5% cold dark matter and 15.5% "ordinary" matter. There is overwhelming evidence that dark matter exists and dominates the universe, but since the exact nature of dark matter is still not understood, the Big Bang theory does not presently cover any stages in its formation.

From this point on, and for several billion years to come, the presence of dark matter accelerates the formation of structure in the universe. In the early universe, dark matter gradually gathers in huge filaments under the effects of gravity, collapsing faster than ordinary (baryonic) matter because its collapse is not slowed by radiation pressure. This amplifies the tiny inhomogeneities (irregularities) in the density of the universe which were left by cosmic inflation. Over time, slightly denser regions become denser and slightly rarefied (emptier) regions become more rarefied. Ordinary matter eventually gathers together faster than it would otherwise do, because of the presence of these concentrations of dark matter.

The properties of dark matter that allow it to collapse quickly without radiation pressure also mean that it cannot lose energy by radiation. Losing energy is necessary for particles to collapse into dense structures beyond a certain point. Therefore, dark matter collapses into huge but diffuse filaments and haloes, and not into stars or planets. Ordinary matter, which can lose energy by radiation, forms dense objects and also gas clouds when it collapses.

Recombination, photon decoupling, and the cosmic microwave background (CMB)

9-year WMAP image of the cosmic microwave background radiation (2012). The radiation is isotropic to roughly one part in 100,000.

About 370,000 years after the Big Bang, two connected events occurred: the ending of recombination and photon decoupling. Recombination describes the ionized particles combining to form the first neutral atoms, and decoupling refers to the photons released ("decoupled") as the newly formed atoms settle into more stable energy states.

Just before recombination, the baryonic matter in the universe was at a temperature where it formed a hot ionized plasma. Most of the photons in the universe interacted with electrons and protons, and could not travel significant distances without interacting with ionized particles. As a result, the universe was opaque or "foggy". Although there was light, it could not travel freely over long distances, and we cannot observe that light through telescopes today.

Starting around 18,000 years, the universe had cooled to a point where free electrons could combine with helium nuclei to form He+ ions. After around 50,000 years, as the universe cools, its behavior begins to be dominated by matter rather than radiation. At around 100,000 years, after neutral helium atoms form, helium hydride, the first molecule, appears. Much later, hydrogen and helium hydride react to form molecular hydrogen (H2), the fuel needed for the first stars. At about 370,000 years, neutral hydrogen atoms finish forming ("recombination" of hydrogen ions and electrons), greatly reducing the Thomson scattering of photons. No longer scattered by free electrons, the photons were "decoupled" from the earlier plasma and propagated freely. The majority of these photons still exist today as the cosmic microwave background (CMB). This is the oldest era of the universe that we can directly observe.

Directly combining into the low-energy (ground) state is less efficient, so these hydrogen atoms generally form with the electrons still in a high-energy state; once combined, the electrons quickly release energy in the form of one or more photons as they transition to a low-energy state. This release of photons is known as photon decoupling. Some of these decoupled photons are captured by other hydrogen atoms; the remainder stay free. By the end of recombination, most of the protons in the universe have formed neutral atoms. This change from charged to neutral particles means that the mean free path photons can travel before capture becomes, in effect, infinite, so any decoupled photons that have not been captured can travel freely over long distances (see Thomson scattering). The universe has become transparent to visible light, radio waves, and other electromagnetic radiation for the first time in its history.
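The lifting of the "fog" can be quantified with the photon mean free path against Thomson scattering, λ = 1/(n_e σ_T): as recombination removes free electrons, λ jumps by orders of magnitude. A rough sketch using the ~500 million particles per cubic metre figure quoted earlier (the residual ionization fraction below is an assumed illustrative value):

```python
# Photon mean free path against Thomson scattering: lambda = 1 / (n_e * sigma_T).
SIGMA_T = 6.652e-29      # Thomson cross-section, m^2
N_BARYON = 5e8           # electron density if fully ionized, m^-3 (from ~500 million atoms/m^3)
LIGHT_YEAR = 9.461e15    # metres

def mean_free_path_ly(ionization_fraction):
    """Mean free path in light-years for a given free-electron fraction."""
    n_e = ionization_fraction * N_BARYON
    return 1.0 / (n_e * SIGMA_T) / LIGHT_YEAR

print(f"{mean_free_path_ly(1.0):.0f} ly")    # fully ionized: ~3,000 light-years
print(f"{mean_free_path_ly(1e-4):.2e} ly")   # residual ionization (assumed): ~3e7 light-years
```

Even a modest drop in the free-electron fraction multiplies the mean free path proportionally, which is why the universe becomes transparent over the ~100 ka span of recombination rather than instantaneously.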

The photons released by these newly formed hydrogen atoms initially had a temperature of around 4000 K. This would have been visible to the eye as a pale yellow/orange-tinted, or "soft", white color. Over the billions of years since decoupling, as the universe has expanded, the photons have been red-shifted from visible light to radio waves (microwave radiation corresponding to a temperature of about 2.7 K); that is, they have acquired longer wavelengths and lower frequencies as the universe expanded. These same photons can still be detected as radio waves today. They form the cosmic microwave background, and they provide crucial evidence of the early universe and how it developed.
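The cooling of these photons is pure redshift: the CMB temperature scales as T(z) = T0 (1 + z), with T0 ≈ 2.725 K today. A short check (note that scaling from z ≈ 1090 gives roughly 3000 K, toward the low end of the 3000–4000 K range associated with recombination):

```python
T0 = 2.725   # present-day CMB temperature, K

def cmb_temperature(z):
    """CMB temperature at redshift z: T(z) = T0 * (1 + z)."""
    return T0 * (1 + z)

print(round(cmb_temperature(1089)))   # ~2970 K near decoupling
print(round(cmb_temperature(0), 1))   # 2.7 K today
```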

Around the same time as recombination, existing pressure waves within the electron-baryon plasma—known as baryon acoustic oscillations—became embedded in the distribution of matter as it condensed, giving rise to a very slight preference in distribution of large-scale objects. Therefore, the cosmic microwave background is a picture of the universe at the end of this epoch including the tiny fluctuations generated during inflation (see 9-year WMAP image), and the spread of objects such as galaxies in the universe is an indication of the scale and size of the universe as it developed over time.

Gravity builds cosmic structure

370 thousand to about 1 billion years after the Big Bang

Even before recombination and decoupling, matter began to accumulate around clumps of dark matter. Clouds of hydrogen collapsed very slowly to form stars and galaxies.

Hubble Space Telescope Ultra Deep Field galaxies to Legacy Field zoom-out (video, 00:50; 2 May 2019)

Dark Ages

After recombination and decoupling, the universe was transparent and had cooled enough to allow light to travel long distances, but there were no light-producing structures such as stars and galaxies. Stars and galaxies form when dense regions of gas collapse under the action of gravity, and within a near-uniform gas this takes a long time on the scales required, so it is estimated that stars did not exist for perhaps hundreds of millions of years after recombination.

This period, known as the Dark Ages, began at photon decoupling around 370,000 years after the Big Bang and ends over a long period of time called reionization. During the Dark Ages, the temperature of the universe cooled from some 4000 K to about 60 K (3727 °C to about −213 °C), and only two sources of photons existed: the photons released during recombination/decoupling (as neutral hydrogen atoms formed), which we can still detect today as the cosmic microwave background (CMB), and photons occasionally released by neutral hydrogen atoms, known as the 21 cm spin line of neutral hydrogen. The hydrogen spin line is in the microwave range of frequencies, and within 3 million years, the CMB photons had redshifted out of visible light to infrared; from that time until the first stars, there were no visible light photons. Other than perhaps some rare statistical anomalies, the universe was truly dark.
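The 21 cm signal from the Dark Ages is observed today at a redshifted frequency ν_obs = ν_rest / (1 + z), with ν_rest ≈ 1420.4 MHz, which is why Dark Ages and reionization experiments work at low radio frequencies:

```python
NU_REST_MHZ = 1420.4   # rest frequency of the neutral-hydrogen spin line, MHz

def observed_frequency_mhz(z):
    """Observed frequency today of the 21 cm line emitted at redshift z."""
    return NU_REST_MHZ / (1 + z)

print(round(observed_frequency_mhz(20), 1))    # ~67.6 MHz, end of the Dark Ages
print(round(observed_frequency_mhz(100), 1))   # ~14.1 MHz, deep in the Dark Ages
```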

The first generation of stars, known as Population III stars, formed within a few hundred million years after the Big Bang. These stars were the first source of visible light in the universe after recombination. Structures may have begun to emerge from around 150 million years, and early galaxies emerged from around 180 to 700 million years. As they emerged, the Dark Ages gradually ended. Because this process was gradual, the Dark Ages only ended fully at around 1 billion years, as the universe took on its present appearance.

Artist's impression of the first stars, 400 million years after the Big Bang

Oldest observations of stars and galaxies

At present, the oldest observations of stars and galaxies are from shortly after the start of reionization, with galaxies such as GN-z11 (Hubble Space Telescope, 2016) at about z ≈ 11.1 (about 400 million years cosmic time). Hubble's successor, the James Webb Space Telescope, launched in December 2021, is designed to detect objects up to 100 times fainter than Hubble can, and much earlier in the history of the universe, back to redshift z ≈ 20 (about 180 million years cosmic time). This is believed to be earlier than the first galaxies, and around the era of the first stars.

There is also an observational effort underway to detect the faint 21 cm spin line radiation, as it is in principle an even more powerful tool than the cosmic microwave background for studying the early universe.

Earliest structures and stars emerge

Around 150 million to 1 billion years after the Big Bang
The Hubble Ultra Deep Fields often feature galaxies that are examples of what the early Stelliferous Era was like.
Another Hubble image shows an infant galaxy forming nearby, which means this happened very recently on the cosmological timescale. This shows that new galaxy formation in the universe is still occurring.

The matter in the universe is around 84.5% cold dark matter and 15.5% "ordinary" matter. Since the start of the matter-dominated era, dark matter has gradually been gathering in huge spread-out (diffuse) filaments under the effects of gravity. Ordinary matter eventually gathers together faster than it would otherwise, because of the presence of these concentrations of dark matter. It is also slightly denser at regular separations because of early baryon acoustic oscillations (BAO), which became embedded in the distribution of matter when photons decoupled. Unlike dark matter, ordinary matter can lose energy by many routes, which means that as it collapses, it can shed the energy that would otherwise hold it apart, and so collapse more quickly and into denser forms. Ordinary matter gathers where dark matter is denser, and in those places it collapses into clouds of mainly hydrogen gas. The first stars and galaxies form from these clouds. Where numerous galaxies have formed, galaxy clusters and superclusters will eventually arise. Large voids with few stars will develop between them, marking where dark matter became less common.

The exact timings of the first stars, galaxies, supermassive black holes, and quasars, and the start, end, and progression of the period known as reionization, are still being actively researched, with new findings published periodically. As of 2019, the earliest confirmed galaxies (for example GN-z11) dated from around 380–400 million years of cosmic time, suggesting surprisingly fast gas-cloud condensation and stellar birth rates; observations of the Lyman-alpha forest, and of other changes to the light from ancient objects, allow the timing of reionization and its eventual end to be narrowed down.

Structure formation in the Big Bang model proceeds hierarchically, due to gravitational collapse, with smaller structures forming before larger ones. The earliest structures to form are the first stars (known as Population III stars), dwarf galaxies, and quasars (which are thought to be bright, early active galaxies containing a supermassive black hole surrounded by an inward-spiraling accretion disk of gas). Before this epoch, the evolution of the universe could be understood through linear cosmological perturbation theory: that is, all structures could be understood as small deviations from a perfect homogeneous universe. This is computationally relatively easy to study. At this point non-linear structures begin to form, and the computational problem becomes much more difficult, involving, for example, N-body simulations with billions of particles. The Bolshoi cosmological simulation is a high precision simulation of this era.

These Population III stars are also responsible for turning the few light elements that were formed in the Big Bang (hydrogen, helium and small amounts of lithium) into many heavier elements. They are thought to have been very large, and possibly also small, and metal-free (containing no elements except hydrogen and helium). The larger stars had very short lifetimes compared to most main-sequence stars seen today, so they commonly finished burning their hydrogen fuel and exploded as supernovae after mere millions of years, seeding the universe with heavier elements over repeated generations. They mark the start of the Stelliferous Era.

As yet, no Population III stars have been found, so the understanding of them is based on computational models of their formation and evolution. Fortunately, observations of the cosmic microwave background radiation can be used to date when star formation began in earnest. Analysis of such observations made by the Planck microwave space telescope in 2016 concluded that the first generation of stars may have formed from around 300 million years after the Big Bang.

Quasars provide some additional evidence of early structure formation. Their light shows evidence of elements such as carbon, magnesium, iron and oxygen. This is evidence that by the time quasars formed, a massive phase of star formation had already taken place, including sufficient generations of Population III stars to give rise to these elements.

Reionization

Phases of the reionization

As the first stars, dwarf galaxies and quasars gradually form, the intense radiation they emit reionizes much of the surrounding universe, splitting the neutral hydrogen atoms back into a plasma of free electrons and protons for the first time since recombination and decoupling.

Reionization is evidenced from observations of quasars. Quasars are a form of active galaxy, and the most luminous objects observed in the universe. Electrons in neutral hydrogen have specific patterns of absorbing ultraviolet photons, related to electron energy levels and called the Lyman series. Ionized hydrogen does not have electron energy levels of this kind. Therefore, light travelling through ionized hydrogen and neutral hydrogen shows different absorption lines. Ionized hydrogen in the intergalactic medium (particularly electrons) can scatter light through Thomson scattering as it did before recombination, but the expansion of the universe and clumping of gas into galaxies resulted in a concentration too low to make the universe fully opaque by the time of reionization. Because of the immense distance travelled by light (billions of light years) to reach Earth from structures existing during reionization, any absorption by neutral hydrogen is redshifted by various amounts, rather than by one specific amount, indicating when the absorption of then-ultraviolet light happened. These features make it possible to study the state of ionization at many different times in the past.
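As a small illustration of the redshifted-absorption argument above: the observed wavelength of a rest-frame spectral line scales as (1 + z), so Lyman-alpha absorption (rest wavelength 121.567 nm) from reionization-era hydrogen arrives stretched into the red and near infrared, with the stretch recording when the absorption happened:

```python
LYMAN_ALPHA_NM = 121.567  # rest wavelength of the Lyman-alpha line, nm

def observed_wavelength(rest_nm, z):
    """Wavelength after cosmological redshift z."""
    return rest_nm * (1 + z)

# Absorption at different epochs arrives at different observed
# wavelengths, tracing the ionization history along the line of sight.
for z in (6, 9, 16):
    print(z, round(observed_wavelength(LYMAN_ALPHA_NM, z), 1))
```

For example, Lyman-alpha absorbed at z = 6 is observed near 851 nm rather than in the ultraviolet.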

Reionization began as "bubbles" of ionized hydrogen which became larger over time until the entire intergalactic medium was ionized, when absorption lines from neutral hydrogen became rare. The absorption was due to the general state of the universe (the intergalactic medium) and not due to passing through galaxies or other dense areas. Reionization might have started to happen as early as z = 16 (250 million years of cosmic time) and was mostly complete by around z = 9 or 10 (500 million years), with the remaining neutral hydrogen becoming fully ionized by z = 5 or 6 (1 billion years), when the Gunn–Peterson troughs that show the presence of large amounts of neutral hydrogen disappear. The intergalactic medium remains predominantly ionized to the present day, the exception being some remaining neutral hydrogen clouds, which cause Lyman-alpha forests to appear in spectra.

These observations have narrowed down the period of time during which reionization took place, but the source of the photons that caused reionization is still not completely certain. To ionize neutral hydrogen, an energy larger than 13.6 eV is required, which corresponds to ultraviolet photons with a wavelength of 91.2 nm or shorter, implying that the sources must have produced significant amounts of ultraviolet and higher-energy radiation. Protons and electrons will recombine if energy is not continuously provided to keep them apart, which also sets limits on how numerous the sources were and on their longevity. With these constraints, it is expected that quasars and the first generations of stars and galaxies were the main sources of energy. The leading candidates, from most to least significant, are believed to be Population III stars (the earliest stars; possibly 70%), dwarf galaxies (very early, small, high-energy galaxies; possibly 30%), and a contribution from quasars (a class of active galactic nuclei).
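The 13.6 eV / 91.2 nm correspondence above is the Lyman limit, and follows directly from the photon energy relation E = hc/λ; a minimal check:

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def photon_wavelength_nm(energy_ev):
    """Wavelength (nm) of a photon with the given energy (eV)."""
    return HC_EV_NM / energy_ev

# Ionizing ground-state hydrogen takes 13.6 eV, i.e. photons at or
# below the Lyman limit wavelength:
print(round(photon_wavelength_nm(13.6), 1))  # 91.2 nm
```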

However, by this time, matter had become far more spread out due to the ongoing expansion of the universe. Although the neutral hydrogen atoms were again ionized, the plasma was much thinner and more diffuse, and photons were much less likely to be scattered. Despite being reionized, the universe remained largely transparent during reionization because of how sparse the intergalactic medium was. Reionization gradually ended as the intergalactic medium became virtually completely ionized, although some regions of neutral hydrogen do exist, creating Lyman-alpha forests.

In August 2023, images of black holes and related matter in the very early universe by the James Webb Space Telescope were reported and discussed.

Galaxies, clusters and superclusters

Computer simulated view of the large-scale structure of a part of the universe about 50 million light-years across

Matter continues to draw together under the influence of gravity, to form galaxies. The stars from this time period, known as Population II stars, are formed early on in this process, with more recent Population I stars formed later. Gravitational attraction also gradually pulls galaxies towards each other to form groups, clusters and superclusters. Hubble Ultra Deep Field observations have identified a number of small galaxies merging to form larger ones, at 800 million years of cosmic time (13 billion years ago). (This age estimate is now believed to be slightly overstated.)

Present and future

[Timeline diagram: cosmic time in billions of years, from −13 (earliest quasar / black hole) to 0 (the universe as it appears today).]

From 1 billion years, and for about 12.8 billion years, the universe has looked much as it does today and it will continue to appear very similar for many billions of years into the future. The thin disk of the Milky Way began to form when the universe was about 5 billion years old or 9 ± 2 Gya. The Solar System formed at about 9.2 billion years (4.6 Gya); the oldest organic matter consistent with life processes dates back 4 billion years.

The thinning of matter over time reduces the ability of the matter to gravitationally decelerate the expansion of the universe; in contrast, dark energy is a constant factor tending to accelerate the expansion of the universe. The universe's expansion passed an inflection point about five or six billion years ago when the universe entered the modern "dark-energy-dominated era" where the universe's expansion is now accelerating rather than decelerating. The present-day universe is quite well understood, but beyond about 100 billion years of cosmic time (about 86 billion years in the future), scientists are less sure which path the universe will take.

Dark energy-dominated era

From about 9.8 billion years after the Big Bang

From about 9.8 billion years of cosmic time, the universe's large-scale behavior is believed to have gradually changed for the third time in its history. Its behavior had originally been dominated by radiation (relativistic constituents such as photons and neutrinos) for the first 47,000 years, and from about 370,000 years of cosmic time it had been dominated by matter. During the matter-dominated era, the expansion of the universe had begun to slow down, as gravity reined in the initial outward expansion. But from about 9.8 billion years of cosmic time, observations show that the expansion of the universe slowly stopped decelerating and gradually began to accelerate instead.

While the precise cause is not known, the observation is accepted as correct by the cosmological community. By far the most accepted understanding is that this is due to an unknown form of energy which has been given the name "dark energy". "Dark" in this context means that it is not directly observed, but its existence can be deduced by examining the gravitational effect it has on the universe. Research is ongoing to understand this dark energy. Dark energy is now believed to be the single largest component of the universe, constituting about 68.3% of the entire mass–energy of the physical universe.

Dark energy is believed to act like a cosmological constant—a scalar field that exists throughout space. Unlike gravity, the effects of such a field do not diminish (or only diminish slowly) as the universe grows. While matter and gravity have a greater effect initially, their effect quickly diminishes as the universe continues to expand. Objects in the universe, which are initially seen to be moving apart as the universe expands, continue to move apart, but their outward motion gradually slows down. This slowing effect becomes smaller as the universe becomes more spread out. Eventually, the outward and repulsive effect of dark energy begins to dominate over the inward pull of gravity. Instead of slowing down and perhaps beginning to move inward under the influence of gravity, from about 9.8 billion years of cosmic time, the expansion of space starts to slowly accelerate outward at a gradually increasing rate.
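The crossover described above can be made concrete: matter density scales as (1 + z)^3 while a cosmological constant stays fixed, so the redshift at which they balance, and the slightly earlier redshift at which acceleration begins, follow from the density parameters alone. The values below are assumed, Planck-like figures for illustration, not taken from the article:

```python
# Assumed, Planck-like density parameters (illustrative values).
OMEGA_M = 0.31  # matter
OMEGA_L = 0.69  # dark energy (cosmological constant)

# Matter and dark-energy densities are equal when
# (1+z)^3 * OMEGA_M = OMEGA_L:
z_equality = (OMEGA_L / OMEGA_M) ** (1 / 3) - 1

# Acceleration begins earlier, when the dark-energy term exceeds half
# the matter term (the rho + 3p combination changing sign in the
# second Friedmann equation):
z_accel = (2 * OMEGA_L / OMEGA_M) ** (1 / 3) - 1

print(round(z_equality, 2))  # ~0.31
print(round(z_accel, 2))     # ~0.65
```

Both redshifts correspond to a few billion years before the present, consistent with the transition era described in the text.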

Beyond standard cosmology

Cosmogenesis

Cosmological models extrapolated back to 10^−43 seconds, combined with particle physics models both within and beyond the Standard Model, allow well-informed speculation on the character and properties of the early universe.

Singularity

Approaching infinite temperature, a scale factor of zero, or time at zero is known to be outside of our physical models. Speculating about an initial gravitational singularity is not sensible: the conditions are outside of the range of the theory.

Planck epoch

Times within 10^−43 seconds of the Big Bang

Since the standard model of cosmology predicts expansion of the universe from a very hot state in the distant past, it can be followed back to smaller and smaller scales. However, it cannot be followed back to zero size. Below a distance known as the Planck length, the basis for the equations breaks down. The energy of particles at this time is so large that quantum effects take over from the classical equations of gravity. The Planck time, 10^−43 seconds, is therefore the beginning time for the Big Bang model of cosmology.
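The rounded figure of 10^−43 s can be reproduced from the fundamental constants: the Planck time is the unique combination of ħ, G and c with units of time, and the Planck length is the distance light crosses in that time. A minimal sketch:

```python
import math

# CODATA values of the fundamental constants (SI units).
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8        # speed of light, m/s

planck_time = math.sqrt(HBAR * G / C**5)  # ~5.39e-44 s
planck_length = C * planck_time           # ~1.62e-35 m

print(f"{planck_time:.2e} s")
print(f"{planck_length:.2e} m")
```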

Grand unification epoch

Between 10^−43 seconds and 10^−36 seconds after the Big Bang

After the Planck era, the universe could, in principle, be modeled by extensions of the Standard Model of particle physics, for example those called grand unified theories (GUTs). Many such theories have been proposed, but none has been successful in producing quantitative agreement with modern astrophysical observations. Nevertheless, the time between 10^−43 and 10^−36 seconds has been called the grand unification epoch.

Before the GUT epoch, the temperature of the universe exceeded 10^15 GeV. As the universe expanded and cooled, it may have crossed a cosmological phase transition, which may have resulted in the large ratio of matter to antimatter we observe today. This phase transition is a thermodynamic effect similar to condensation of a gas or freezing of a liquid. While the transition in the GUT epoch is speculative, electroweak and quark-hadron transitions which happen later are supported by theoretical models with some successful predictions.

Electroweak epoch

Starting anywhere between 10^−22 and 10^−15 seconds after the Big Bang, until 10^−12 seconds after the Big Bang

Sometime after inflation, the newly created particles went through thermalization, in which mutual interactions led to thermal equilibrium. Before the electroweak symmetry breaking, at a temperature of around 10^15 K, approximately 10^−15 seconds after the Big Bang, the electromagnetic and weak interactions had not yet separated, and the gauge bosons and fermions had not yet gained mass through the Higgs mechanism. This epoch ended with electroweak symmetry breaking, potentially through a phase transition. In some extensions of the Standard Model of particle physics, baryogenesis also happened at this stage, creating an imbalance between matter and antimatter (though in some extensions this may have happened earlier). Little is known about the details of these processes.

Far future and ultimate fate

There are several competing scenarios for the long-term evolution of the universe. Which of them will happen, if any, depends on the precise values of physical constants such as the cosmological constant, the possibility of proton decay, the energy of the vacuum (meaning, the energy of "empty" space itself), and the natural laws beyond the Standard Model.

If the expansion of the universe continues and it stays in its present form, eventually all but the nearest galaxies will be carried away from us by the expansion of space at such a velocity that the observable universe will be limited to our own gravitationally bound local galaxy cluster. In the very long term (after many trillions of years of cosmic time), the Stelliferous Era will end, as stars cease to be born and even the longest-lived stars gradually die. Beyond this, all objects in the universe will cool and (with the possible exception of protons) gradually decompose back into their constituent particles, then into subatomic particles, very low-energy photons, and other fundamental particles, by a variety of possible processes.

The following scenarios have been proposed for the ultimate fate of the universe:

Scenario Description
Heat death As expansion continues, the universe becomes larger, colder, and more dilute; in time, all structures eventually decompose to subatomic particles and photons. In the case of indefinitely continuing cosmic expansion, the energy density in the universe will decrease until, after an estimated time of 10^1000 years, it reaches thermodynamic equilibrium and no more structure will be possible. This will happen only after an extremely long time because first, some matter (less than 0.1%) will collapse into black holes, which will then evaporate extremely slowly via Hawking radiation. The universe in this scenario will cease to be able to support life much earlier than this, after some 10^14 years or so, when star formation ceases. In some grand unified theories, proton decay after at least 10^34 years will convert the remaining interstellar gas and stellar remnants into leptons (such as positrons and electrons) and photons. Some positrons and electrons will then recombine into photons. In this case, the universe has reached a high-entropy state consisting of a bath of particles and low-energy radiation. It is not known, however, whether it eventually achieves thermodynamic equilibrium. The hypothesis of a universal heat death stems from the 1850s ideas of William Thomson (Lord Kelvin), who extrapolated the classical theory of heat and irreversibility (as embodied in the first two laws of thermodynamics) to the universe as a whole.
Big Rip Expansion of space accelerates and at some point becomes so extreme that even subatomic particles and the fabric of spacetime are pulled apart and unable to exist. For any dark energy content of the universe in which the equation-of-state parameter w (the ratio of pressure to energy density) is less than −1, the expansion rate of the universe will continue to increase without limit. Gravitationally bound systems, such as clusters of galaxies, galaxies, and ultimately the Solar System, will be torn apart. Eventually the expansion will become so rapid as to overcome the electromagnetic forces holding molecules and atoms together. Even atomic nuclei will be torn apart. Finally, forces and interactions even on the Planck scale (the smallest size for which the notion of "space" currently has a meaning) will no longer be able to occur as the fabric of spacetime itself is pulled apart and the universe as we know it ends in an unusual kind of singularity.
Big Crunch Expansion eventually slows and halts, then reverses as all matter accelerates towards its common centre. Currently considered to be likely incorrect. In the opposite of the "Big Rip" scenario, the expansion of the universe would at some point be reversed and the universe would contract towards a hot, dense state. This is a required element of oscillatory universe scenarios, such as the cyclic model, although a Big Crunch does not necessarily imply an oscillatory universe. Current observations suggest that this model of the universe is unlikely to be correct, and the expansion will continue or even accelerate.
Vacuum instability Collapse of the quantum fields that underpin all forces, particles and structures, to a different form. Cosmology traditionally has assumed a stable or at least metastable universe, but the possibility of a false vacuum in quantum field theory implies that the universe at any point in spacetime might spontaneously collapse into a lower-energy state (see Bubble nucleation), a more stable or "true vacuum", which would then expand outward from that point with the speed of light.
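The Big Rip row above notes that an equation-of-state parameter w < −1 drives the scale factor to infinity in finite time. A rough sketch of the remaining time, in the style of the estimate by Caldwell and collaborators, using assumed, illustrative parameter values (a Hubble time of about 14.4 Gyr, matter fraction 0.31, and a hypothetical phantom value w = −1.5):

```python
import math

# Assumed, illustrative parameters (not taken from the article).
HUBBLE_TIME_GYR = 14.4  # 1/H0 in Gyr
OMEGA_M = 0.31          # matter density parameter

def time_to_rip_gyr(w):
    """Approximate remaining time (Gyr) before a Big Rip for w < -1:
    (2/3) / (|1+w| * H0 * sqrt(1 - Omega_m))."""
    return (2 / 3) * HUBBLE_TIME_GYR / (
        abs(1 + w) * math.sqrt(1 - OMEGA_M)
    )

print(round(time_to_rip_gyr(-1.5), 1))  # roughly 23 Gyr from now
```

The closer w is to −1, the further away the rip; for w exactly −1 (a cosmological constant) it never happens, consistent with the heat-death scenario instead.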

On timescales this protracted, extremely rare quantum phenomena may also occur that are unlikely to be seen on timescales smaller than trillions of years. These may lead to unpredictable changes in the state of the universe that would not be significant on any smaller timescale. For example, on a timescale of millions of trillions of years, black holes might appear to evaporate almost instantly, uncommon quantum tunnelling phenomena would appear to be common, and quantum (or other) phenomena so unlikely that they might occur just once in a trillion years may occur many times.

Theory of multiple intelligences

The intelligence modalities

The theory of multiple intelligences (MI) posits that human intelligence is not a single general ability but comprises various distinct modalities, such as linguistic, logical-mathematical, musical, and spatial intelligences. Introduced in Howard Gardner's book Frames of Mind: The Theory of Multiple Intelligences (1983), this framework has gained popularity among educators who accordingly develop varied teaching strategies purported to cater to different student strengths.

Despite its educational impact, MI has faced criticism from the psychological and scientific communities. A primary point of contention is Gardner's use of the term "intelligences" to describe these modalities. Critics argue that labeling these abilities as separate intelligences expands the definition of intelligence beyond its traditional scope, leading to debates over its scientific validity.

While empirical research often supports a general intelligence factor (g-factor), Gardner contends that his model offers a more nuanced understanding of human cognitive abilities. This difference in defining and interpreting "intelligence" has fueled ongoing discussions about the theory's scientific robustness.

Separation criteria

Beginning in the late 1970s, using a pragmatic definition, Howard Gardner surveyed several disciplines and cultures around the world to determine skills and abilities essential to human development and culture building. He subjected candidate abilities to evaluation using eight criteria that must be substantively met to warrant their identification as an intelligence. Furthermore, the intelligences need to be relatively autonomous from each other, and composed of subsets of skills that are highly correlated and coherently organized.

In 1983, the field of cognitive neuroscience was embryonic, but Gardner was one of the early psychological theorists to describe direct links between brain systems and intelligence. Likewise, the field of educational neuroscience was yet to be conceived. Since Frames of Mind was published in 1983, the terms cognitive science and cognitive neuroscience have become standard in the field, with extensive libraries of scholarly and scientific papers and textbooks. It is therefore essential to examine the neuroscience evidence as it pertains to MI's validity.

Gardner defined intelligence as "a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture."

This definition is unique for several reasons, which account for MI theory's broad appeal to educators as well as its rejection by mainstream psychologists rooted in the traditional conception of intelligence as an abstract, logical capacity. A fundamental element of each intelligence is a framework of clearly defined levels of skill, complexity and accomplishment. One model that fits the MI framework is Bloom's taxonomy, in which each intelligence can be delineated along different levels, ranging from basic knowledge up to the highest levels of analysis and synthesis.

MI is also unique because it gives full appreciation to the impact of, and interactions between, an individual's cognitions and their particular culture, mediated via symbol systems. As Gardner states,

The multiple intelligences commence as a set of uncommitted neurobiological potentials. They become crystallized and mobilized by the communication that takes place among human beings and, especially, by the systems of meaning-making that already exist in a given culture.

Unlike traditional practices beginning in the 19th century, MI theory is not built on the statistical analyses of psychometric test data searching for factors that account for academic achievement. Instead, Gardner employs a multi-disciplinary, cross-cultural methodology to evaluate which human capacities fit into a comprehensive model of intelligence. Eight criteria accounting for advances in neuroscience and the influence of cultural factors are used to qualify a capacity as an intelligence. These criteria are drawn from a more extensive database than what was acceptable and available to researchers in the late 19th and 20th centuries. Evidence is gathered from a variety of disciplines including psychology, neurology, biology, sociology, and anthropology as well as the arts and humanities. If a candidate faculty meets this set of criteria reasonably well then it can qualify as an intelligence. If it does not, then it is set aside or reconceptualized.

Criteria for each type of intelligence

The eight criteria can be grouped into four general categories:

  1. biology (neuroscience and evolution)
  2. analysis (core operations and symbol systems)
  3. psychology (skill development, individual differences)
  4. psychometrics (psychological experiments and test evidence)

The criteria briefly described are:

  • potential for brain isolation by brain damage
  • place in evolutionary history
  • presence of core operations
  • susceptibility to encoding (symbolic expression)
  • a distinct developmental progression
  • the existence of savants, prodigies and other exceptional people
  • support from experimental psychology
  • support from psychometric findings

This scientific method resembles the process used by astronomers to determine which celestial bodies to classify as a planet versus dwarf planet, star, comet, etc.

Forms of intelligences

In Frames of Mind and its sequels, Howard Gardner describes eight intelligences that can be expressed in everyday life in a variety of ways, referred to as domains, skills, competencies, or talents. As with describing a multi-layer cake, the complexity depends upon how the cake is sliced. One model integrates the eight intelligences with Sternberg's triarchic theory, so that each intelligence is actively expressed in three ways: (1) creative, (2) academic/analytical and (3) practical thinking. In this analogy, each of the eight cake layers is divided into three segments, with the different expressions sharing a central core. Exemplar professions and adult roles requiring specific intelligences are described along with their core skills and potential deficits. Several references to exemplar neuroscientific studies are also provided for each of the eight intelligences. Furthermore, some have suggested that the 'intelligences' refer to talents, personality, or ability rather than distinct forms of intelligence.

The two intelligences that are most associated with the traditional I.Q. or general intelligence are the linguistic and logical-mathematical intelligences. Some intelligence models and tests also include visual-spatial intelligence as a third element.

Musical

This area of intelligence includes sensitivity to the sounds, rhythms, pitch, and tones of music. People with musical intelligence are typically able to sing, play musical instruments, or compose music. They have high sensitivity to pitch, meter, melody and timbre. Musical intelligence includes cognitive elements that contribute to a person's success and quality of life. There is a strong relationship between music and emotions, as evidenced in both the popular and classical music spheres. Neuroscience investigators continue to study the interaction between music and cognitive performance. Music is deeply rooted in human evolutionary history (the Paleolithic bone flute), culture (every country on Earth has a national anthem) and our personal lives (many important life events are associated with particular types of music, such as birthday songs, wedding songs and funeral dirges).

Deficits in musical processing and abilities include congenital amusia, tone deafness, musical hallucinations, musical anhedonia, acquired music agnosia, and arrhythmia (beat deafness).

Professions requiring essential musical skills include vocalist, instrumentalist, lyricist, dancer, sound engineer and composer. Musical intelligence is combined with kinesthetic to produce instrumentalists, dancers and, combined with a linguistic intelligence, for music critics and lyricists. Music combined with interpersonal intelligence is required for success as a music therapist or teacher.

Visual-spatial

This area deals with spatial awareness / judgment and the ability to visualize with the mind's eye.[17] It is composed of two main dimensions: A) mental visualization and B) perception of the physical world (spatial arrangements and objects). It includes both practical problem-solving as well as artistic creations. Spatial ability is one of the three factors beneath g (general intelligence) in the hierarchical model of intelligence. Many I.Q. tests include a measure of spatial problem-solving skills, e.g., block design and mental rotation of objects.

Visual-spatial intelligence can be expressed in both practical (e.g., drafting and building) or artistic (e.g., fine art, crafts, floral arrangements) ways. Or they can be combined in fields such as architecture, industrial design, landscape design, and fashion design. Visual-spatial processing is often combined with the kinesthetic intelligence and referred to as eye-hand or visual-motor integration for tasks such as hitting a baseball (see Babe Ruth example for Kinesthetic), sewing, golf or skiing.

Professions that emphasize visual-spatial processing include carpenters, engineers, designers, pilots, firefighters, surgeons, and commercial and fine artists and craftspeople. Spatial intelligence combined with linguistic intelligence is required for success as an art critic or textbook graphic designer. Spatial artistic skills combined with naturalist sensitivity produce pet groomers, clothing designers and costumers.

Linguistic

The core linguistic ability is sensitivity to words and their meanings. People with high verbal-linguistic intelligence display a facility with expressive language and verbal comprehension. They are typically good at reading, writing, telling stories, rhetoric, and memorizing words and dates. Verbal ability is one of the most g-loaded abilities. Linguistic (academic) intelligence is measured with the Verbal Intelligence Quotient (IQ) in the Wechsler Adult Intelligence Scale (WAIS-IV).

Deficits in linguistic abilities include expressive and receptive aphasia, agraphia, specific language impairment, written language disorder and word recognition deficit (dyslexia).

Linguistic ability can be expressed according to Triarchic theory in three main ways: analytical-academic (reading, writing, definitions); practical (verbal or written directions, explanations, narration); and creative (story telling, poetry, lyrics, imaginative word play, science fiction).

Professions that require linguistic skills include teaching, sales, management, counseling, leadership, childcare, journalism, academia and politics (debating and creating support for particular sets of values). Linguistic intelligence combines with all other intelligences to facilitate communication via the spoken or written word. It is frequently highly correlated with interpersonal intelligence, facilitating social interactions in education, business and human relations. Successful sports coaches combine three intelligences: kinesthetic, interpersonal and linguistic. Corporate managers require skills in the interpersonal, linguistic and logical-mathematical intelligences.

Logical-mathematical

This area has to do with logic, abstractions, reasoning, calculations, strategic and critical thinking. This intelligence includes the capacity to understand underlying principles of some kind of causal system. Logical reasoning is closely linked to fluid intelligence as well as to general intelligence (g factor).  This capacity is most often associated with convergent problem-solving but it also includes divergent thinking associated with “problem-finding”.

This intelligence is most closely associated with the cognitive development theory described by Jean Piaget (1983). The four main types of logical-mathematical intelligence include logical reasoning, calculations, practical thinking (common sense) and discovery.

Deficits in logical-mathematical thinking include acalculia, dyscalculia, mild cognitive impairment, dementia and intellectual disability.

Some critics believe that the logical and mathematical domains should be separate entities. However, Gardner argues that they both spring from the same source: abstractions taken from real-world elements, e.g., logic from words and calculations from the manipulation of objects. This is not dissimilar to the relationship between musical intelligence and vocal or instrumental skills, which are very different expressions springing from a shared musical source.

Professions most closely associated with this intelligence include accounting, bookkeeping, banking, finance, engineering and the sciences. Logical-mathematical skills combine with all the other intelligences to facilitate complex problem solving and creation, such as environmental engineering (naturalist), symphonies (musical), public sculpture (visual-spatial) and choreography or movement analysis (kinesthetic).

Bodily-kinesthetic

The core elements of the bodily-kinesthetic intelligence are control of one's bodily movements and fine motor control to handle objects skillfully. Gardner elaborates to say that this also includes a sense of timing, a clear sense of the goal of a physical action, along with the ability to train responses. Kinesthetic ability can be displayed in goal-directed activities (athletics, handcrafts, etc.) as well as in more expressive movements (drama, dance, mime and gestures). Expressive movements can be for either concepts or feelings. For example, saluting, shaking hands or facial expressions can convey both ideas and emotions. Two major kinesthetic categories are gross and fine motor skills.

Deficits in kinesthetic ability are described as proprioception disorders affecting body awareness, coordination, balance, dexterity and motor control.

Gardner believes that careers that suit those with high bodily-kinesthetic intelligence include: athletes, dancers, musicians, actors, craftspeople, builders, technicians, and firefighters. Although these careers can be duplicated through virtual simulation, they will not produce the actual physical learning that is needed in this intelligence.  

Often people with high physical intelligence combined with visual motion acuity will have excellent hand-eye coordination and be very agile; they are precise and accurate in movement (surgeons) and can express themselves using their body (actors and dancers). Gardner referred to the idea of natural skill and innate kinesthetic intelligence within his discussion of the autobiographical story of Babe Ruth – a legendary baseball player who, at 15, felt that he had been 'born' on the pitcher's mound. Seeing the pitched ball and coordinating one’s swing to meet it over the plate requires highly developed visual-motor integration. Each sport requires its own distinctive combination of specific skills associated with the kinesthetic and visual-spatial intelligences.

American baseball player Babe Ruth

Physical ability

Physical intelligence, also known as bodily-kinesthetic intelligence, is any intelligence derived through physical and practiced learning such as sports, dance, or craftsmanship. It may refer to the ability to use one's hands to create, to express oneself with one's body, a reliance on tactile mechanisms and movement, and accuracy in controlling body movement. An individual with high physical intelligence is someone who is adept at using their physical body to solve problems and express ideas and emotions. The ability to control the physical body and the mind-body connection is part of a much broader range of human potential as set out in Gardner's theory of multiple intelligences.

Characteristics

Well-developed bodily-kinesthetic intelligence is reflected in a person's movements and in how they use their physical body. Individuals with high bodily-kinesthetic, or physical, intelligence are likely to be successful in physical careers, including as athletes, dancers, musicians, police officers, and soldiers.

Interpersonal

In MI theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others' moods, feelings, temperaments and motivations, and by their ability to cooperate with or lead a group. According to Thomas Armstrong in How Are Kids Smart: Multiple Intelligences in the Classroom, "Interpersonal intelligence is often misunderstood with being extroverted or liking other people. Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate." They have an insightful understanding of other people's points of view. Daniel Goleman based his concept of emotional intelligence in part on the feeling aspects of the intrapersonal and interpersonal intelligences. Interpersonal skill can be displayed in either one-on-one or group interactions.

Deficits in interpersonal understanding are described as egocentrism, narcissism, sociopathy, Asperger syndrome and autism.

Gardner believes that careers that suit those with high interpersonal intelligence include leaders, politicians, managers, teachers, clergy, counselors, social workers and salespersons. Mother Teresa, Martin Luther King Jr. and Lyndon Johnson are cited as historical leaders with exceptional interpersonal intelligence. Combined interpersonal and intrapersonal skills are required of successful leaders, psychologists, life coaches and conflict negotiators. Team sports require specific combinations of the interpersonal and kinesthetic intelligences, while individual sports emphasize the kinesthetic and intrapersonal intelligences (e.g., Tiger Woods and gymnasts).


Intrapersonal

This refers to having a deep and accurate understanding of the self: what one's strengths and weaknesses are, what makes one unique, and being able to predict and manage one's own reactions, emotions and behaviors. Activities associated with this intelligence include introspection and self-reflection. Intrapersonal skills can be categorized in at least four areas: metacognition (awareness of one's thoughts), management of feelings and emotions, self-management of behavior, and decision-making and judgment.

Deficits in intrapersonal understanding are described as anosognosia, depersonalization, dissociation and self-dysregulation (ADHD).

Leaders and people in high stress occupations need well developed intrapersonal skills, e.g., pilots, police and firefighters, entrepreneurs, middle managers, first responders and health care providers. Mahatma Gandhi, Jesus and Martin Luther King Jr. are all noted for their strong self-awareness. Deficits in intrapersonal understanding may be correlated with ADHD, substance abuse and emotional disturbances (mid-life crisis, etc.).

Intrapersonal intelligence may be correlated with concepts such as self-confidence, introspection and self-efficacy but it should not be confused with personality styles/preferences such as narcissism, self-esteem, introversion or shyness. High level performance in many demanding professions and roles requires exceptional intrapersonal intelligence: Olympic athletes, professional golfers, stage performers, CEOs, crisis managers.

Naturalistic

Not part of Gardner's original seven, naturalistic intelligence was proposed by him in 1995. "If I were to rewrite Frames of Mind today, I would probably add an eighth intelligence – the intelligence of the naturalist. It seems to me that the individual who is readily able to recognize flora and fauna, to make other consequential distinctions in the natural world, and to use this ability productively (in hunting, in farming, in biological science) is exercising an important intelligence and one that is not adequately encompassed in the current list." This area has to do with nurturing and relating information to one's natural surroundings. Examples include classifying natural forms such as animal and plant species and types of rocks and mountains. Essential cognitive skills include pattern recognition, taxonomy and empathy for living beings. A recent hypothesis, nature deficit disorder, holds that mental health is negatively impacted by a lack of attention to and understanding of nature.

This sort of ecological receptiveness is deeply rooted in a "sensitive, ethical, and holistic understanding" of the world and its complexities – including the role of humanity within the greater ecosphere.

This ability remains central in roles such as veterinarian, ecological scientist and botanist.

Proposed additional intelligences

From the beginning, Gardner stated that there may be more intelligences beyond the original seven identified in 1983; this is why naturalistic intelligence was added to the list in 1999. Several other human capacities have been rejected because they do not meet enough of the criteria, including personality characteristics such as humor, sexuality and extroversion.

Pedagogical and digital

In January 2016, Gardner mentioned in an interview with Big Think that he was considering adding the teaching–pedagogical intelligence "which allows us to be able to teach successfully to other people". In the same interview, he explicitly refused some other suggested intelligences like humour, cooking and sexual intelligence. Professor Nan B. Adams argues that based on Gardner's definition of multiple intelligences, digital intelligence – a meta-intelligence composed of many other identified intelligences and stemmed from human interactions with digital computers – now exists.

Use in education

Within his Theory of Multiple Intelligences, Gardner stated that our "educational system is heavily biased toward linguistic modes of instruction and assessment and, to a somewhat lesser degree, toward logical-quantitative modes as well". His work went on to shape educational pedagogy and influence relevant policy and legislation across the world, with particular reference to how teachers must assess students' progress to establish the most effective teaching methods for the individual learner. Gardner's research into learning with respect to bodily-kinesthetic intelligence has resulted in the use of activities that require physical movement and exertion, with students exhibiting a high level of physical intelligence reported to benefit from 'learning through movement' in the classroom environment.

Although the distinction between intelligences has been set out in great detail, Gardner opposes labelling learners with a specific intelligence. He maintains that his theory should "empower learners", not restrict them to one modality of learning. According to a 2006 study, each of the domains proposed by Gardner involves a blend of the general g factor, cognitive abilities other than g, and, in some cases, non-cognitive abilities or personality characteristics.

Gardner defines an intelligence as "bio-psychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture". According to Gardner, there are more ways to do this than just through logical and linguistic intelligence. Gardner believes that the purpose of schooling "should be to develop intelligences and to help people reach vocational and avocational goals that are appropriate to their particular spectrum of intelligences. People who are helped to do so, [he] believe[s], feel more engaged and competent and therefore more inclined to serve society in a constructive way."

Gardner contends that Intelligence Quotient (IQ) tests focus mostly on logical and linguistic intelligence. Students who do well on these tests have a better chance of attending a prestigious college or university, which in turn tends to produce contributing members of society. While many students function well in this environment, there are those who do not. Gardner's theory argues that students will be better served by a broader vision of education, wherein teachers use different methodologies, exercises and activities to reach all students, not just those who excel at linguistic and logical intelligence. It challenges educators to find "ways that will work for this student learning this topic".

James Traub's article in The New Republic notes that Gardner's system has not been accepted by most academics in intelligence or teaching. Gardner states that "while Multiple Intelligences theory is consistent with much empirical evidence, it has not been subjected to strong experimental tests ... Within the area of education, the applications of the theory are currently being examined in many projects. Our hunches will have to be revised many times in light of actual classroom experience."

Jerome Bruner agreed with Gardner that the intelligences were "useful fictions", and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

George Miller, a prominent cognitive psychologist, wrote in The New York Times Book Review that Gardner's argument consisted of "hunch and opinion" and Charles Murray and Richard J. Herrnstein in The Bell Curve (1994) called Gardner's theory "uniquely devoid of psychometric or other quantitative evidence".

Distinction to learning styles

The notion of learning styles is problematic, and their educational use is suspect. Gardner has regularly explained the distinction between the theory of multiple intelligences and various learning-style models. One problem is that more than 80 different learning-style models exist, so it is difficult to know which model is meant when making a comparison or planning instruction. A key difference is that learning styles typically refer to sensory modalities, preferences, personality characteristics, attitudes, and interests, while the multiple intelligences are cognitive abilities with defined levels of skill. It is easy to see why they are confused, given the popularity of VAK (visual, auditory and kinesthetic) and introversion/extroversion models: their names sound alike and they share sensory systems (vision, hearing, physicality), but the eight intelligences are much more than the senses or personal preferences.

While learning style theories are fundamentally different from the eight intelligences, there is a model proposed by Richard Strong and others that integrates a person’s preference with the eight intelligences to produce a descriptive tapestry of a person’s intellectual dispositions. The four styles are Mastery, Understanding, Interpersonal, and Self-Expressive. For the visual-spatial intelligence expressed artistically, a person may have a distinct pattern of preferences for realistic imagery (Mastery), conceptual art (Understanding), portraiture (Interpersonal) or abstract expression (Self-Expressive). This model has not been tested empirically.

Talents and aptitudes

Intelligences not typically associated with academic achievement (e.g., musical, visual-spatial, kinesthetic and naturalist) have traditionally been relegated to the status of talents or aptitudes. Gardner takes issue with this hierarchy because it lowers the importance of these "non-academic" intelligences and devalues their contribution to human thought, individual development and culture. Gardner is fine with calling them all talents (or aptitudes), including logical-mathematical and linguistic, so long as they are seen to be of equal value.

In spite of its lack of general acceptance in the psychological community, Gardner's theory has been adopted by many schools, where it is often conflated with learning styles, and hundreds of books have been written about its applications in education. Some of the applications of Gardner's theory have been described as "simplistic" and Gardner himself has said he is "uneasy" with the way his theory has been used in schools. Gardner has denied that multiple intelligences are learning styles and agrees that the idea of learning styles is incoherent and lacking in empirical evidence. Gardner summarizes his approach with three recommendations for educators: individualize the teaching style (to suit the most effective method for each student), pluralize the teaching (teach important materials in multiple ways), and avoid the term "styles" as being confusing.

Criticism

Gardner argues that there is a wide range of cognitive abilities, but that there are only very weak correlations among them. For example, the theory postulates that a child who learns to multiply easily is not necessarily more intelligent than a child who has more difficulty on this task. The child who takes more time to master multiplication may best learn to multiply through a different approach, may excel in a field outside mathematics, or may be looking at and understanding the multiplication process at a fundamentally deeper level.

Intelligence tests and psychometrics have generally found high correlations between different aspects of intelligence, rather than the low correlations which Gardner's theory predicts, supporting the prevailing theory of general intelligence rather than multiple intelligences (MI). The theory has been criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement.

Definition of intelligence

A major criticism of the theory is that it is ad hoc: that Gardner is not expanding the definition of the word "intelligence", but rather denies the existence of intelligence as traditionally understood, and instead uses the word "intelligence" where other people have traditionally used words like "ability" and "aptitude". This practice has been criticized by Robert J. Sternberg, Michael Eysenck, and Sandra Scarr. White (2006) points out that Gardner's selection and application of criteria for his "intelligences" is subjective and arbitrary, and that a different researcher would likely have come up with different criteria.

Defenders of MI theory argue that the traditional definition of intelligence is too narrow, and thus a broader definition more accurately reflects the differing ways in which humans think and learn.

Some criticisms arise from the fact that Gardner has not provided a test of his multiple intelligences. He originally defined it as the ability to solve problems that have value in at least one culture, or as something that a student is interested in. He then added a disclaimer that he has no fixed definition, and his classification is more of an artistic judgment than fact:

Ultimately, it would certainly be desirable to have an algorithm for the selection of intelligence, such that any trained researcher could determine whether a candidate's intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate's intelligence is reminiscent more of an artistic judgment than of a scientific assessment.

Generally, linguistic and logical-mathematical abilities are called intelligence, but artistic, musical, athletic, etc. abilities are not. Gardner argues this causes the former to be needlessly aggrandized. Certain critics are wary of this widening of the definition, saying that it ignores "the connotation of intelligence ... [which] has always connoted the kind of thinking skills that makes one successful in school."

Gardner writes "I balk at the unwarranted assumption that certain human abilities can be arbitrarily singled out as intelligence while others cannot." Critics hold that given this statement, any interest or ability can be redefined as "intelligence". Thus, studying intelligence becomes difficult, because it diffuses into the broader concept of ability or talent. Gardner's addition of the naturalistic intelligence and conceptions of the existential and moral intelligence are seen as the fruits of this diffusion. Defenders of the MI theory would argue that this is simply a recognition of the broad scope of inherent mental abilities and that such an exhaustive scope by nature defies a one-dimensional classification such as an IQ value.

The theory and definitions have been critiqued by Perry D. Klein as being so unclear as to be tautologous and thus unfalsifiable: having high musical ability means being good at music, while at the same time being good at music is explained by having high musical ability.

Henri Wallon argues that "we cannot distinguish intelligence from its operations". Yves Richez distinguishes ten Natural Operating Modes (Modes Opératoires Naturels – MoON). Richez's studies are premised on a gap between Chinese and Western thought: in Chinese thought, the notions of "being" (self) and "intelligence" do not exist as such; these are claimed to be Graeco-Roman inventions derived from Plato. Instead of intelligence, Chinese thought refers to "operating modes", which is why Richez speaks not of "intelligence" but of "natural operating modes" (MoON).

Validity

Critics argue that MI cannot be taken seriously as a scientific theory of intelligence for a number of reasons; the most common are given below:

  • It is not scientific in the sense of a body of knowledge acquired by performing replicated experiments in the laboratory.
  • There is conceptual confusion in determining exactly what intelligence is and what it is not; e.g., MI conflates personality, talent and learning styles with intelligence, and does not value reasoning and academic skills.
  • There are no empirical, experimental studies using psychometrics to establish validity. The proposed intelligences are not proven to be sufficiently independent to warrant separate identification.
  • There is no evidence for educational efficacy and its use may undermine school effectiveness.

Neo-Piagetian criticism

Andreas Demetriou suggests that theories which overemphasize the autonomy of the domains are as simplistic as the theories that overemphasize the role of general intelligence and ignore the domains. He agrees with Gardner that there are indeed domains of intelligence that are relevantly autonomous of each other. Some of the domains, such as verbal, spatial, mathematical, and social intelligence are identified by most lines of research in psychology. In Demetriou's theory, one of the neo-Piagetian theories of cognitive development, Gardner is criticized for underestimating the effects exerted on the various domains of intelligences by the various subprocesses that define overall processing efficiency, such as speed of processing, executive functions, working memory, and meta-cognitive processes underlying self-awareness and self-regulation. All of these processes are integral components of general intelligence that regulate the functioning and development of different domains of intelligence.

The domains are to a large extent expressions of the condition of the general processes, and may vary because of their constitutional differences but also differences in individual preferences and inclinations. Their functioning both channels and influences the operation of the general processes. Thus, one cannot satisfactorily specify the intelligence of an individual or design effective intervention programs unless both the general processes and the domains of interest are evaluated.

Human adaptation to multiple environments

The premise of the multiple intelligences hypothesis, that human intelligence is a collection of specialist abilities, has been criticized for being unable to explain human adaptation to most if not all environments in the world. In this context, humans are contrasted with social insects, which indeed have a distributed "intelligence" of specialists; such insects may spread to climates resembling that of their origin, but the same species never adapt to a wide range of climates, from tropical to temperate, by building different types of nests and learning what is edible and what is poisonous. While some, such as leafcutter ants, grow fungi on leaves, they do not cultivate different species in different environments with different farming techniques, as human agriculture does. It is therefore argued that human adaptability stems from a general ability to falsify hypotheses, make more generally accurate predictions, and adapt behavior accordingly, not from a set of specialized abilities that would work only under specific environmental conditions.

IQ tests

Gardner argues that IQ tests measure only linguistic and logical-mathematical abilities. He argues for the importance of assessing in an "intelligence-fair" manner: while traditional paper-and-pencil examinations favor linguistic and logical skills, there is a need for intelligence-fair measures that value the distinct modalities of thinking and learning that uniquely define each intelligence.

Psychologist Alan S. Kaufman points out that IQ tests have measured spatial abilities for 70 years. Modern IQ tests are greatly influenced by the Cattell–Horn–Carroll theory which incorporates a general intelligence but also many more narrow abilities. While IQ tests do give an overall IQ score, they now also give scores for many more narrow abilities.

Lack of empirical evidence

Many of Gardner's "intelligences" correlate with the g factor, supporting the idea of a single dominant type of intelligence. Each of the domains proposed by Gardner involves a blend of g, of cognitive abilities other than g, and, in some cases, of non-cognitive abilities or personality characteristics.

The Johnson O'Connor Research Foundation has tested hundreds of thousands of people to determine their "aptitudes" ("intelligences"), such as manual dexterity, musical ability, spatial visualization, and memory for numbers. There is correlation of these aptitudes with the g factor, but not all are strongly correlated; correlation between the g factor and "inductive speed" ("quickness in seeing relationships among separate facts, ideas, or observations") is only 0.5, considered a moderate correlation.
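To make concrete what a correlation of about 0.5 between an aptitude and the g factor means, here is a minimal sketch using synthetic, made-up scores (the data are illustrative only and have nothing to do with the Johnson O'Connor Foundation's actual dataset). An "aptitude" score is simulated as loading 0.5 on a general factor, with the rest of its variance coming from independent noise; the resulting Pearson correlation comes out near 0.5, meaning the two scores share only about 25% of their variance.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Synthetic general factor: 5000 IQ-like scores (mean 100, sd 15).
g = [random.gauss(100, 15) for _ in range(5000)]
# Hypothetical aptitude loading 0.5 on g; the 0.866 factor (sqrt(1 - 0.5**2))
# keeps the total variance comparable, so the theoretical correlation is 0.5.
aptitude = [0.5 * gi + 0.866 * random.gauss(100, 15) for gi in g]

r = pearson_r(g, aptitude)
print(round(r, 2))  # typically close to 0.5 for a sample this size
```

With a loading of 0.5, knowing someone's g score explains only a quarter of the variation in the aptitude, which is why a 0.5 correlation is described as moderate rather than strong.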

A critical review of MI theory argues that there is little empirical evidence to support it:

To date, there have been no published studies that offer evidence of the validity of the multiple intelligences. In 1994 Sternberg reported finding no empirical studies. In 2000 Allix reported finding no empirical validating studies, and at that time Gardner and Connell conceded that there was "little hard evidence for MI theory" (2000, p. 292).[citation needed] In 2004 Sternberg and Grigorenko stated that there were no validating studies for multiple intelligences, and in 2004 Gardner asserted that he would be "delighted were such evidence to accrue", and admitted that "MI theory has few enthusiasts among psychometricians or others of a traditional psychological background" because they require "psychometric or experimental evidence that allows one to prove the existence of the several intelligences".

The same review presents evidence to demonstrate that cognitive neuroscience research does not support the theory of multiple intelligences:

... the human brain is unlikely to function via Gardner's multiple intelligences. Taken together the evidence for the intercorrelations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping "what is it?" and "where is it?" neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner's intelligences could operate "via a different set of neural mechanisms" (1999, p. 99). Equally important, the evidence for the "what is it?" and "where is it?" processing pathways, for Kahneman's two decision-making systems, and for adapted cognition modules suggests that these cognitive brain specializations have evolved to address very specific problems in our environment. Because Gardner claimed that the intelligences are innate potentialities related to a general content area, MI theory lacks a rationale for the phylogenetic emergence of the intelligences.

However, more recent research by Branton Shearer in 2017 identified neural structures that activate both in common and separately across Gardner's eight intelligences.

Counter-Enlightenment

Divine Justice smites Jean-Baptiste Pigalle's statue of Voltaire. Anonymous, 1773

The Counter-Enlightenment refers to a loose collection of intellectual stances that arose during the European Enlightenment in opposition to its mainstream attitudes and ideals. The Counter-Enlightenment is generally seen to have continued from the 18th century into the early 19th century, especially with the rise of Romanticism. Its thinkers did not necessarily agree on a set of counter-doctrines; instead, each challenged specific elements of Enlightenment thinking, such as the belief in progress, the rationality of all humans, liberal democracy, and the increasing secularisation of European society.

Scholars differ on who is to be included among the major figures of the Counter-Enlightenment. In Italy, Giambattista Vico criticised the spread of reductionism and the Cartesian method, which he saw as unimaginative and stifling creative thinking. Decades later, Joseph de Maistre in Sardinia and Edmund Burke in Britain both criticised the anti-religious ideas of the Enlightenment for leading to the Reign of Terror and a totalitarian police state following the French Revolution. The ideas of Jean-Jacques Rousseau and Johann Georg Hamann were also significant to the rise of the Counter-Enlightenment with French and German Romanticism respectively.

In the late 20th century, the concept of the Counter-Enlightenment was popularised by pro-Enlightenment historian Isaiah Berlin as a tradition of relativist, anti-rationalist, vitalist, and organic thinkers stemming largely from Hamann and subsequent German Romantics. While Berlin is largely credited with having refined and promoted the concept, the first known use of the term in English occurred in 1949 and there were several earlier uses of it across other European languages, including by German philosopher Friedrich Nietzsche.

Term usage

Joseph-Marie, Comte de Maistre was one of the more prominent altar-and-throne counter-revolutionaries who vehemently opposed Enlightenment ideas.

Early usage

Despite criticism of the Enlightenment being a widely discussed topic in twentieth- and twenty-first-century thought, the term "Counter-Enlightenment" was slow to enter general usage. It was first mentioned briefly in English in William Barrett's 1949 article "Art, Aristocracy and Reason" in Partisan Review. He used the term again in his 1958 book on existentialism, Irrational Man; however, his comments on Enlightenment criticism were very limited. In Germany, the expression "Gegen-Aufklärung" has a longer history; it was probably coined by Friedrich Nietzsche in "Nachgelassene Fragmente" in 1877.

Lewis White Beck used this term in his Early German Philosophy (1969), a book about the Counter-Enlightenment in Germany. Beck claims that a counter-movement arose in Germany in reaction to Frederick II's secular authoritarian state. Johann Georg Hamann and his fellow philosophers believed that the eighteenth century had neglected a more organic conception of social and political life, a more vitalistic view of nature, and an appreciation for beauty and the spiritual life of man.

Isaiah Berlin

Isaiah Berlin established this term's place in the history of ideas. He used it to refer to a movement that arose primarily in late 18th- and early 19th-century Germany against the rationalism, universalism and empiricism that are commonly associated with the Enlightenment. Berlin's essay "The Counter-Enlightenment" was first published in 1973, and later reprinted in a collection of his works, Against the Current, in 1981. The term has been more widely used since.

Isaiah Berlin traces the Counter-Enlightenment back to J. G. Hamann (shown).

Berlin argues that, while there were opponents of the Enlightenment outside of Germany (e.g. Joseph de Maistre) and before the 1770s (e.g. Giambattista Vico), Counter-Enlightenment thought did not take hold until the Germans "rebelled against the dead hand of France in the realms of culture, art and philosophy, and avenged themselves by launching the great counter-attack against the Enlightenment." This German reaction to the imperialistic universalism of the French Enlightenment and Revolution, which had been forced on them first by the francophile Frederick II of Prussia, then by the armies of Revolutionary France and finally by Napoleon, was crucial to the shift of consciousness that occurred in Europe at this time, leading eventually to Romanticism. In Berlin's account, the consequence of this revolt against the Enlightenment was pluralism. The opponents of the Enlightenment thus played a more crucial role in the history of thought than its proponents, some of whom were monists, whose political, intellectual and ideological offspring have been terreur and totalitarianism.

Darrin McMahon

In his book Enemies of the Enlightenment (2001), historian Darrin McMahon extends the Counter-Enlightenment back to pre-Revolutionary France and down to the level of "Grub Street". McMahon focuses on the early opponents of the Enlightenment in France, unearthing a long-forgotten "Grub Street" literature of the late 18th and early 19th centuries aimed at the philosophes. He delves into the obscure world of the "low Counter-Enlightenment" that attacked the encyclopédistes and fought to prevent the dissemination of Enlightenment ideas in the second half of the century. These early critics charged the Enlightenment with undermining religion and the social and political order, a charge that later became a major theme of conservative criticism of the Enlightenment. The French Revolution appeared to vindicate the warnings of the anti-philosophes issued in the decades prior to 1789.

Graeme Garrard

Rousseau is identified by Graeme Garrard as the originator of the Counter-Enlightenment.

Cardiff University professor Graeme Garrard claims that historian William R. Everdell was the first to situate Rousseau as the "founder of the Counter-Enlightenment" in his 1971 dissertation and in his 1987 book, Christian Apologetics in France, 1730–1790: The Roots of Romantic Religion. In his 1996 article, "The Origin of the Counter-Enlightenment: Rousseau and the New Religion of Sincerity", in the American Political Science Review (Vol. 90, No. 2), Arthur M. Melzer corroborates Everdell's view in placing the origin of the Counter-Enlightenment in the religious writings of Jean-Jacques Rousseau, presenting Rousseau as the man who fired the first shot in the war between the Enlightenment and its opponents. Graeme Garrard follows Melzer in his "Rousseau's Counter-Enlightenment" (2003). This contradicts Berlin's depiction of Rousseau as a philosophe (albeit an erratic one) who shared the basic beliefs of his Enlightenment contemporaries. But similarly to McMahon, Garrard traces the beginning of Counter-Enlightenment thought back to France and prior to the German Sturm und Drang movement of the 1770s. Garrard's book Counter-Enlightenments (2006) broadens the term even further, arguing against Berlin that there was no single "movement" called "The Counter-Enlightenment". Rather, there have been many Counter-Enlightenments, from the middle of the 18th century to the 20th century among critical theorists, postmodernists and feminists. The Enlightenment has had opponents on all points of its ideological compass, from the far left to the far right, and all points in between. Each of the Enlightenment's challengers depicted it as they saw it or wanted others to see it, resulting in a vast range of portraits, many of which are not only different but incompatible.

James Schmidt

The idea of the Counter-Enlightenment has continued to evolve. The historian James Schmidt questioned the idea of "Enlightenment" and therefore of the existence of a movement opposing it. As the conception of "Enlightenment" has become more complex and difficult to maintain, so has the idea of the "Counter-Enlightenment". Advances in Enlightenment scholarship in the last quarter-century have challenged the stereotypical view of the 18th century as an "Age of Reason", leading Schmidt to speculate on whether the Enlightenment might not actually be a creation of its opponents, rather than the other way around. The fact that the term "Enlightenment" was first used in English to refer to a historical period only in 1894 supports the argument that it was a late construction projected back onto the 18th century.

The French Revolution

Political thinker Edmund Burke opposed the French Revolution in his Reflections on the Revolution in France.

By the mid-1790s, the Reign of Terror during the French Revolution fueled a major reaction against the Enlightenment. Many leaders of the French Revolution and their supporters made the ideas of Voltaire and Rousseau, as well as Marquis de Condorcet's ideas of reason, progress, anti-clericalism, and emancipation, central themes of their movement. This made a backlash against the Enlightenment all but unavoidable among opponents of the Revolution. Many counter-revolutionary writers, such as Edmund Burke, Joseph de Maistre and Augustin Barruel, asserted an intrinsic link between the Enlightenment and the Revolution, blaming the Enlightenment for undermining the traditional beliefs that sustained the ancien régime. As the Revolution became increasingly bloody, the idea of "Enlightenment" was discredited along with it. Hence, the French Revolution and its aftermath contributed to the development of Counter-Enlightenment thought.

Edmund Burke was among the first of the Revolution's opponents to relate the philosophes to the instability in France in the 1790s. His Reflections on the Revolution in France (1790) identifies the Enlightenment as the principal cause of the French Revolution. In Burke's opinion, the philosophes provided the revolutionary leaders with the theories on which their political schemes were based.

Augustin Barruel's Counter-Enlightenment ideas were well developed before the revolution. He worked as an editor for the anti-philosophes literary journal, L'Année Littéraire. Barruel argues in his Memoirs Illustrating the History of Jacobinism (1797) that the Revolution was the consequence of a conspiracy of philosophes and freemasons.

In Considerations on France (1797), Joseph de Maistre interprets the Revolution as divine punishment for the sins of the Enlightenment. According to him, "the revolutionary storm is an overwhelming force of nature unleashed on Europe by God that mocked human pretensions."

Romanticism

In the 1770s, the "Sturm und Drang" movement started in Germany. It questioned some key assumptions and implications of the Aufklärung, and it was in this period that the term "Romanticism" was first coined. Many early Romantic writers such as Chateaubriand, Friedrich von Hardenberg (Novalis) and Samuel Taylor Coleridge inherited the Counter-Revolutionary antipathy towards the philosophes. All three directly blamed the philosophes in France and the Aufklärer in Germany for devaluing beauty, spirit and history in favour of a view of man as a soulless machine and a view of the universe as a meaningless, disenchanted void lacking richness and beauty. One particular concern of early Romantic writers was the allegedly anti-religious nature of the Enlightenment, since the philosophes and Aufklärer were generally deists opposed to revealed religion. Some historians nevertheless contend that this view of the Enlightenment as an age hostile to religion is common ground between these Romantic writers and many of their conservative Counter-Revolutionary predecessors. However, relatively few of the early Romantics apart from Chateaubriand, Novalis, and Coleridge commented directly on the Enlightenment, in part because the term itself did not exist at the time and most of their contemporaries ignored it.

The Sleep of Reason Produces Monsters, c. 1797, 21.5 cm × 15 cm. One of the most famous prints of Spaniard Francisco Goya

The historian Jacques Barzun argues that Romanticism has its roots in the Enlightenment. It was not anti-rational, but rather balanced rationality against the competing claims of intuition and the sense of justice. This view is expressed in Goya's Sleep of Reason, in which the nightmarish owl offers the dozing social critic of Los Caprichos a piece of drawing chalk. Even the rational critic is inspired by irrational dream-content under the gaze of the sharp-eyed lynx. Marshall Brown makes much the same argument as Barzun in Romanticism and Enlightenment, questioning the stark opposition between these two periods.

By the middle of the 19th century, the memory of the French Revolution was fading and so was the influence of Romanticism. In this optimistic age of science and industry, there were few critics of the Enlightenment, and few explicit defenders. Friedrich Nietzsche is a notable and highly influential exception. After an initial defence of the Enlightenment in his so-called "middle period" (late 1870s to early 1880s), Nietzsche turned vehemently against it.

Totalitarianism and Fascism

Totalitarianism as a product of the Enlightenment

After World War II, the Enlightenment re-emerged as a key organizing concept in social and political thought and the history of ideas, often with suggested links between Counter-Enlightenment ideas and fascism.

There was also, conversely, new Counter-Enlightenment literature blaming the 18th-century Age of Reason for totalitarianism. The locus classicus of this view is Max Horkheimer and Theodor Adorno's Dialectic of Enlightenment (1947). Adorno and Horkheimer take "enlightenment" in a broad sense as their target, including its specifically 18th-century form, i.e. "The Enlightenment". Dialectic of Enlightenment traces the degeneration of the general concept of enlightenment from ancient Greece (epitomized by the cunning "bourgeois" hero Odysseus) to 20th-century fascism. Adorno and Horkheimer claim that The Enlightenment is epitomized by the Marquis de Sade. However, some philosophers have rejected Adorno and Horkheimer's claims that Sade's moral skepticism is actually coherent, or that it reflects Enlightenment thought.

Nazism and Fascism as products of the Counter-Enlightenment

Many historians and other scholars have argued that fascism was a product of the Counter-Enlightenment itself. For example, Ze'ev Sternhell called fascism "an exacerbated form of the tradition of counter-Enlightenment": with fascism, "Europe created for the first time a set of political movements and regimes whose project was nothing but the destruction of Enlightenment culture." Similar opinions were expressed by such historians as Georges Bensoussan and Enzo Traverso, who noted in Nazism "Counter-Enlightenment tendencies, combined with industrial and technical progress, a state monopoly over violence, and the rationalisation of methods of domination" and "Counter-Enlightenment (Gegenaufklärung) and the cult of modern technology, a synthesis of Teutonic mythologies and biological nationalism". They thus recognized Nazism as grounded in the intellectual traditions of the Counter-Enlightenment, but mixed with an "instrumental reason" that allowed "the methods of industrial production and scientific management" to be employed for such irrational goals as racial extermination. Prior to these historians, various thinkers, including Umberto Eco, Bertrand Russell, Richard Wolin and Jason Stanley, described fascism as a "revolt against reason" and a force hostile to scientific objectivity and rational inquiry.
