
Saturday, February 28, 2026

Evolution of sexual reproduction

(Image: ladybirds mating.)
(Image: pollen production, an essential step in the sexual reproduction of seed plants.)

Unsolved problem in biology: What selection pressures led to the evolution and maintenance of sexual reproduction?

Sexually reproducing animals, plants, fungi and protists are thought to have evolved from a common ancestor that was a single-celled eukaryotic species. Sexual reproduction is widespread in eukaryotes, though a few eukaryotic species have secondarily lost the ability to reproduce sexually, such as Bdelloidea, and some plants and animals routinely reproduce asexually (by apomixis and parthenogenesis) without entirely having lost sex. The evolution of sexual reproduction contains two related yet distinct themes: its origin and its maintenance. Bacteria and Archaea (prokaryotes) have processes that can transfer DNA from one cell to another (conjugation, transformation, and transduction), but it is unclear whether these processes are evolutionarily related to sexual reproduction in eukaryotes. In eukaryotes, true sexual reproduction by meiosis and cell fusion is thought to have arisen in the last eukaryotic common ancestor, possibly via several processes of varying success, and then to have persisted.

Since hypotheses for the origin of sex are difficult to verify experimentally (outside of evolutionary computation), most current work has focused on the persistence of sexual reproduction over evolutionary time. The maintenance of sexual reproduction (specifically, of its dioecious form) by natural selection in a highly competitive world has long been one of the major mysteries of biology, since both other known mechanisms of reproduction – asexual reproduction and hermaphroditism – possess apparent advantages over it. Asexual reproduction can proceed by budding, fission, or spore formation and does not involve the union of gametes, which accordingly results in a much faster rate of reproduction compared to sexual reproduction, where 50% of offspring are males and unable to produce offspring themselves. In hermaphroditic reproduction, each of the two parent organisms required for the formation of a zygote can provide either the male or the female gamete, which leads to advantages in both size and genetic variance of a population.

Sexual reproduction therefore must offer significant fitness advantages because, despite the two-fold cost of sex (see below), it dominates among multicellular forms of life, implying that the fitness of offspring produced by sexual processes outweighs the costs. Sexual reproduction derives from recombination, where parent genotypes are reorganised and shared with the offspring. This stands in contrast to single-parent asexual replication, where the offspring is always identical to the parents (barring mutation). Recombination supplies two fault-tolerance mechanisms at the molecular level: recombinational DNA repair (promoted during meiosis because homologous chromosomes pair at that time) and complementation (also known as heterosis, hybrid vigour or masking of mutations).

Historical perspective

Reproduction, including modes of sexual reproduction, features in the writings of Aristotle; modern philosophical-scientific thinking on the problem dates from at least Erasmus Darwin (1731–1802) in the 18th century. August Weismann picked up the thread in 1885, arguing that sex serves to generate genetic variation, as detailed in the majority of the explanations below. On the other hand, Charles Darwin (1809–1882) concluded that the effect of hybrid vigor (complementation) "is amply sufficient to account for the ... genesis of the two sexes". This is consistent with the repair and complementation hypothesis, described below. Since the emergence of the modern evolutionary synthesis in the 20th century, numerous biologists, including W. D. Hamilton, Alexey Kondrashov, George C. Williams, Harris Bernstein, Carol Bernstein, Michael M. Cox, Frederic A. Hopf, and Richard E. Michod, have suggested competing explanations for how a vast array of different living species maintain sexual reproduction.

Advantages of sex and sexual reproduction

The concept of sex includes two fundamental phenomena: the sexual process (fusion of genetic information of two individuals) and sexual differentiation (separation of this information into two parts). Depending on the presence or absence of these phenomena, all of the existing forms of reproduction can be classified as asexual, hermaphrodite or dioecious. The sexual process and sexual differentiation are different phenomena, and, in essence, are diametrically opposed. The first creates (increases) diversity of genotypes, and the second decreases it by half.

Reproductive advantages of the asexual forms are in quantity of the progeny, and the advantages of the hermaphrodite forms are in maximal diversity. Transition from the hermaphrodite to the dioecious state leads to a loss of at least half of the diversity. So the primary challenge is to explain the advantages given by sexual differentiation, i.e. the benefits of two separate sexes compared to hermaphrodites, rather than to explain the benefits of sexual forms (hermaphrodite + dioecious) over asexual ones. Since sexual reproduction offers no obvious reproductive advantage over asexual reproduction, it is understood that it must confer some other important evolutionary advantage.

Advantages due to genetic variation, DNA repair and genetic complementation

For the advantage due to genetic variation, there are three possible explanations. First, sexual reproduction can combine the effects of two beneficial mutations in the same individual (i.e. sex aids in the spread of advantageous traits) without the mutations having to have occurred one after another in a single line of descendants. Second, sex acts to bring together currently deleterious mutations to create severely unfit individuals that are then eliminated from the population (i.e. sex aids in the removal of deleterious genes). However, in organisms containing only one set of chromosomes, deleterious mutations would be eliminated immediately, and therefore removal of harmful mutations is an unlikely benefit for sexual reproduction. Lastly, sex creates new gene combinations that may be more fit than previously existing ones, or may simply lead to reduced competition among relatives.

For the advantage due to DNA repair, there is an immediate large benefit of removing DNA damage by recombinational DNA repair during meiosis (assuming the initial mutation rate is higher than optimal), since this removal allows greater survival of progeny with undamaged DNA. The advantage of complementation to each sexual partner is avoidance of the bad effects of their deleterious recessive genes in progeny by the masking effect of normal dominant genes contributed by the other partner.

The classes of hypotheses based on the creation of variation are further broken down below. Any number of these hypotheses may be true in any given species (they are not mutually exclusive), and different hypotheses may apply in different species. However, a research framework based on creation of variation has yet to be found that allows one to determine whether the reason for sex is universal for all sexual species, and, if not, which mechanisms are acting in each species.

On the other hand, the maintenance of sex based on DNA repair and complementation applies widely to all sexual species.

Protection from major genetic mutation

In contrast to the view that sex promotes genetic variation, Heng, and Gorelick and Heng reviewed evidence that sex actually acts as a constraint on genetic variation. They consider that sex acts as a coarse filter, weeding out major genetic changes, such as chromosomal rearrangements, but permitting minor variation, such as changes at the nucleotide or gene level (that are often neutral) to pass through the sexual sieve.

Novel genotypes

This diagram illustrates how sex might create novel genotypes more rapidly. Two advantageous alleles A and B occur at random. The two alleles are recombined rapidly in a sexual population (top), but in an asexual population (bottom) the two alleles must independently arise because of clonal interference.

Sex could be a method by which novel genotypes are created. Because sex combines genes from two individuals, sexually reproducing populations can more easily combine advantageous genes than can asexual populations. If, in a sexual population, two different advantageous alleles arise at different loci on a chromosome in different members of the population, a chromosome containing the two advantageous alleles can be produced within a few generations by recombination. However, should the same two alleles arise in different members of an asexual population, the only way that one chromosome can develop the other allele is to independently gain the same mutation, which would take much longer. Several studies have addressed counterarguments, and the question of whether this model is sufficiently robust to explain the predominance of sexual versus asexual reproduction remains.
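
This argument can be made concrete with a toy Wright–Fisher-style simulation (a sketch under assumed parameters; the population size, selection benefit, and free-recombination scheme are illustrative choices, not taken from the studies cited here). Without a fresh mutation, the clonal population can never assemble the AB genotype at all, which is the essence of clonal interference:

    # Toy Wright-Fisher sketch: how quickly does the combined AB genotype
    # form when beneficial alleles A and B start in two different
    # individuals? All parameters are illustrative assumptions.
    import random

    N, S, GENS = 1000, 0.05, 500           # population size, benefit per allele, generation cap

    def fitness(g):                         # genotype g = (has_A, has_B)
        return (1 + S) ** (g[0] + g[1])

    def next_gen(pop, sexual):
        w = [fitness(g) for g in pop]
        parents = random.choices(pop, weights=w, k=2 * N)
        children = []
        for i in range(N):
            p1, p2 = parents[2 * i], parents[2 * i + 1]
            if sexual:                      # each locus inherited from a random parent
                child = (random.choice((p1, p2))[0], random.choice((p1, p2))[1])
            else:                           # clonal: the child copies a single parent
                child = p1
            children.append(child)
        return children

    def generations_until_AB(sexual):
        pop = [(1, 0), (0, 1)] + [(0, 0)] * (N - 2)
        for t in range(1, GENS + 1):
            pop = next_gen(pop, sexual)
            if (1, 1) in pop:
                return t
        return None                         # AB never formed (drift loss or clonal interference)

    print("sexual :", generations_until_AB(True))
    print("asexual:", generations_until_AB(False))

In typical runs the sexual population assembles AB within tens of generations (unless one allele is lost to drift), while the clonal population waits indefinitely for a repeat mutation.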

Ronald Fisher suggested that sex might facilitate the spread of advantageous genes by allowing them to better escape their genetic surroundings, if they should arise on a chromosome with deleterious genes.

Supporters of these theories respond to the balance argument that the individuals produced by sexual and asexual reproduction may differ in other respects too, which may influence the persistence of sexuality. For example, in the heterogamous water fleas of the genus Cladocera, sexual offspring form eggs which are better able to survive the winter than those the fleas produce asexually.

Increased resistance to parasites

One of the most widely discussed theories to explain the persistence of sex is that it is maintained to assist sexual individuals in resisting parasites, also known as the Red Queen hypothesis.

When an environment changes, previously neutral or deleterious alleles can become favourable. If the environment changes sufficiently rapidly (i.e. between generations), these changes can make sex advantageous for the individual. Such rapid changes in environment are caused by the co-evolution between hosts and parasites.

Imagine, for example that there is one gene in parasites with two alleles p and P conferring two types of parasitic ability, and one gene in hosts with two alleles h and H, conferring two types of parasite resistance, such that parasites with allele p can attach themselves to hosts with the allele h, and P to H. Such a situation will lead to cyclic changes in allele frequency – as p increases in frequency, h will be disfavoured.
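
A minimal numerical sketch of this two-allele model (the update rules and selection coefficients below are illustrative assumptions, not taken from a specific published model) shows the time-lagged, cyclic chase between the host and parasite alleles:

    # Toy matching-alleles model: x = frequency of host allele h,
    # y = frequency of parasite allele p. Parasites with p infect hosts
    # with h; P infects H. Selection coefficients are assumed values.
    s_host, s_par = 0.3, 0.5
    x, y = 0.6, 0.3
    for gen in range(41):
        w_h = 1 - s_host * y               # h suffers when its matching parasite p is common
        w_H = 1 - s_host * (1 - y)
        w_p = 1 + s_par * x                # p benefits when its matching host h is common
        w_P = 1 + s_par * (1 - x)
        x = x * w_h / (x * w_h + (1 - x) * w_H)
        y = y * w_p / (y * w_p + (1 - y) * w_P)
        if gen % 5 == 0:
            print(f"gen {gen:2d}: h={x:.2f}  p={y:.2f}")

Because each side adapts to the other's current state with a lag, the frequencies chase one another rather than settling.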

In reality, there will be several genes involved in the relationship between hosts and parasites. In an asexual population of hosts, offspring will only have a different set of parasite-resistance alleles if a mutation arises. In a sexual population of hosts, however, offspring will have a new combination of parasite-resistance alleles.

In other words, like Lewis Carroll's Red Queen, sexual hosts are continually "running" (adapting) to "stay in one place" (resist parasites).

Evidence for this explanation for the evolution of sex is provided by comparison of the rate of molecular evolution of genes for kinases and immunoglobulins in the immune system with genes coding for other proteins. The genes coding for immune system proteins evolve considerably faster.

Further evidence for the Red Queen hypothesis was provided by observing long-term dynamics and parasite coevolution in a "mixed" (sexual and asexual) population of snails (Potamopyrgus antipodarum). The number of sexuals, the number of asexuals, and the rates of parasite infection for both were monitored. It was found that clones that were plentiful at the beginning of the study became more susceptible to parasites over time. As parasite infections increased, the once plentiful clones dwindled dramatically in number. Some clonal types disappeared entirely. Meanwhile, sexual snail populations remained much more stable over time.

However, Hanley et al. studied mite infestations of a parthenogenetic gecko species and its two related sexual ancestral species. Contrary to expectation based on the Red Queen hypothesis, they found that the prevalence, abundance and mean intensity of mites in sexual geckos was significantly higher than in asexuals sharing the same habitat.

In 2011, researchers used the microscopic roundworm Caenorhabditis elegans as a host and the pathogenic bacteria Serratia marcescens to generate a host-parasite coevolutionary system in a controlled environment, allowing them to conduct more than 70 evolution experiments testing the Red Queen hypothesis. They genetically manipulated the mating system of C. elegans, causing populations to mate either sexually, by self-fertilization, or a mixture of both within the same population. Then they exposed those populations to the S. marcescens parasite. It was found that the self-fertilizing populations of C. elegans were rapidly driven extinct by the coevolving parasites while sex allowed populations to keep pace with their parasites, a result consistent with the Red Queen hypothesis. In natural populations of C. elegans, self-fertilization is the predominant mode of reproduction, but infrequent out-crossing events occur at a rate of about 1%.

Other hypotheses

Critics of the Red Queen hypothesis question whether the constantly changing environment of hosts and parasites is sufficiently common to explain the evolution of sex; an alternative is the court jester hypothesis, which emphasises abiotic factors including climate. Otto and Nuismer presented results showing that species interactions (e.g. host vs parasite interactions) typically select against sex. They concluded that, although the Red Queen hypothesis favors sex under certain circumstances, it alone does not account for the ubiquity of sex. Otto and Gerstein further stated that "it seems doubtful to us that strong selection per gene is sufficiently commonplace for the Red Queen hypothesis to explain the ubiquity of sex". Parker reviewed numerous genetic studies on plant disease resistance and failed to uncover a single example consistent with the assumptions of the Red Queen hypothesis.

Disadvantages of sex and sexual reproduction

The paradox of the existence of sexual reproduction is that though it is ubiquitous in multicellular organisms, there are ostensibly many inherent disadvantages to reproducing sexually when weighed against the relative advantages of alternative forms of reproduction, such as asexual reproduction. Thus, because sexual reproduction abounds in complex multicellular life, there must be some significant benefit(s) to sex and sexual reproduction that compensate for these fundamental disadvantages.

Population expansion cost of sex

Among the most limiting disadvantages to the evolution of sexual reproduction by natural selection is that an asexual population can grow much more rapidly than a sexual one with each generation.

For example, assume that the entire population of a theoretical species has 100 total organisms consisting of two sexes (i.e. males and females), with 50:50 male-to-female representation, and that only the females of this species can bear offspring. If all capable members of this population procreated once, a total of 50 offspring would be produced (the F1 generation). Contrast this outcome with an asexual species, in which each and every member of an equally sized 100-organism population is capable of bearing young. If all capable members of this asexual population procreated once, a total of 100 offspring would be produced – twice as many as produced by the sexual population in a single generation.

This diagram illustrates the two-fold cost of sex. If each individual were to contribute the same number of offspring (two), (a) the sexual population remains the same size each generation, while (b) the asexual population doubles in size each generation.

This idea is sometimes referred to as the two-fold cost of sexual reproduction. It was first described mathematically by John Maynard Smith.[36] In his manuscript, Maynard Smith further speculated on the impact of an asexual mutant arising in a sexual population, which suppresses meiosis and allows eggs to develop into offspring genetically identical to the mother by mitotic division. The mutant-asexual lineage would double its representation in the population each generation, all else being equal.
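
The arithmetic can be sketched in a few lines (assuming, as in the stated all-else-equal scenario, that each female produces two offspring and the sex ratio is 50:50):

    # Two-fold cost of sex: an asexual mutant doubles each generation
    # while the sexual subpopulation only replaces itself.
    sexual, asexual = 99.0, 1.0            # one asexual mutant in a population of 100
    for gen in range(1, 6):
        sexual = (sexual / 2) * 2          # only the 50% who are female bear two offspring each
        asexual = asexual * 2              # every asexual individual bears two offspring
        print(f"gen {gen}: asexual fraction = {asexual / (sexual + asexual):.3f}")

Under these assumptions the asexual fraction grows from 1% to roughly 24% within five generations.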

Technically the problem above is not one of sexual reproduction but of having a subset of organisms incapable of bearing offspring. Indeed, some multicellular organisms (isogamous) engage in sexual reproduction but all members of the species are capable of bearing offspring. The two-fold reproductive disadvantage assumes that males contribute only genes to their offspring and sexual females spend half their reproductive potential on sons. Thus, in this formulation, the principal cost of sex is that males and females must successfully copulate, which almost always involves expending energy to come together through time and space. Asexual organisms need not expend the energy necessary to find a mate.

Selfish cytoplasmic genes

Sexual reproduction implies that chromosomes and alleles segregate and recombine in every generation, but not all genes are transmitted together to the offspring. There is a chance of spreading mutants that cause unfair transmission at the expense of their non-mutant colleagues. These mutations are referred to as "selfish" because they promote their own spread at the cost of alternative alleles or of the host organism; they include nuclear meiotic drivers and selfish cytoplasmic genes. Meiotic drivers are genes that distort meiosis to produce gametes containing themselves more than the 50% of the time expected by chance. A selfish cytoplasmic gene is a gene located in an organelle, plasmid or intracellular parasite that modifies reproduction to cause its own increase at the expense of the cell or organism that carries it.

Genetic heritability cost of sex

A sexually reproducing organism only passes on ~50% of its own genetic material to each offspring. This is a consequence of the fact that gametes from sexually reproducing species are haploid. Again, however, this is not applicable to all sexual organisms. There are numerous species which are sexual but do not have a genetic-loss problem because they do not produce males or females. Yeast, for example, are isogamous sexual organisms which have two mating types which fuse and recombine their haploid genomes. Both sexes reproduce during the haploid and diploid stages of their life cycle and have a 100% chance of passing their genes into their offspring.

Some species avoid the 50% cost of sexual reproduction, although they have "sex" (in the sense of genetic recombination). In these species (e.g., bacteria, ciliates, dinoflagellates and diatoms), "sex" and reproduction occur separately.

DNA repair and complementation

As discussed in the earlier part of this article, sexual reproduction is conventionally explained as an adaptation for producing genetic variation through allelic recombination. As acknowledged above, however, serious problems with this explanation have led many biologists to conclude that the benefit of sex is a major unsolved problem in evolutionary biology.

An alternative "informational" approach to this problem has led to the view that the two fundamental aspects of sex, genetic recombination and outcrossing, are adaptive responses to the two major sources of "noise" in transmitting genetic information. Genetic noise can occur as either physical damage to the genome (e.g. chemically altered bases of DNA or breaks in the chromosome) or replication errors (mutations). This alternative view is referred to as the repair and complementation hypothesis, to distinguish it from the traditional variation hypothesis.

The repair and complementation hypothesis assumes that genetic recombination is fundamentally a DNA repair process, and that when it occurs during meiosis it is an adaptation for repairing the genomic DNA which is passed on to progeny. Recombinational repair is the only repair process known which can accurately remove double-strand damages in DNA, and such damages are both common in nature and ordinarily lethal if not repaired. For instance, double-strand breaks in DNA occur about 50 times per cell cycle in human cells (see naturally occurring DNA damage). Recombinational repair is prevalent from the simplest viruses to the most complex multicellular eukaryotes. It is effective against many different types of genomic damage, and in particular is highly efficient at overcoming double-strand damages. Studies of the mechanism of meiotic recombination indicate that meiosis is an adaptation for repairing DNA. These considerations form the basis for the first part of the repair and complementation hypothesis.

In some lines of descent from the earliest organisms, the diploid stage of the sexual cycle, which was at first transient, became the predominant stage, because it allowed complementation – the masking of deleterious recessive mutations (i.e. hybrid vigor or heterosis). Outcrossing, the second fundamental aspect of sex, is maintained by the advantage of masking mutations and the disadvantage of inbreeding (mating with a close relative) which allows expression of recessive mutations (commonly observed as inbreeding depression). This is in accord with Charles Darwin, who concluded that the adaptive advantage of sex is hybrid vigor; or as he put it, "the offspring of two individuals, especially if their progenitors have been subjected to very different conditions, have a great advantage in height, weight, constitutional vigor and fertility over the self fertilised offspring from either one of the same parents."

However, outcrossing may be abandoned in favor of parthenogenesis or selfing (which retain the advantage of meiotic recombinational repair) under conditions in which the costs of mating are very high. For instance, costs of mating are high when individuals are rare in a geographic area, such as after a forest fire, when the first individuals to arrive are recolonising the burned area. At such times mates are hard to find, and this favors parthenogenic species.

In the view of the repair and complementation hypothesis, the removal of DNA damage by recombinational repair produces a new, less deleterious form of informational noise, allelic recombination, as a by-product. This lesser informational noise generates genetic variation, viewed by some as the major effect of sex, as discussed in the earlier parts of this article.

Deleterious mutation clearance

Mutations can have many different effects upon an organism. It is generally believed that the majority of non-neutral mutations are deleterious, which means that they will cause a decrease in the organism's overall fitness. If a mutation has a deleterious effect, it will then usually be removed from the population by the process of natural selection. Sexual reproduction is believed to be more efficient than asexual reproduction in removing those mutations from the genome.

There are two main hypotheses which explain how sex may act to remove deleterious genes from the genome.

Evading harmful mutation build-up

While DNA is able to recombine to modify alleles, DNA is also susceptible to mutations within the sequence that can affect an organism negatively. Asexual organisms do not have the ability to recombine their genetic information to form new and differing alleles. Once a mutation occurs in the DNA or other genetic material, there is no way for the mutation to be removed from the population until another mutation occurs that ultimately reverses the primary mutation. Such reversals are rare among organisms.

Hermann Joseph Muller introduced the idea that mutations build up in asexually reproducing organisms. Muller likened the accumulation of mutations to the turning of a ratchet: each mutation that arises in an asexually reproducing lineage turns the ratchet once, and the ratchet cannot be rotated backwards, only forwards. The next mutation that occurs turns the ratchet once more. Additional mutations in a population continually turn the ratchet, and the mutations, mostly deleterious, continually accumulate without recombination. These mutations are passed on to the next generation because the offspring are exact genetic clones of their parents. The genetic load of organisms and their populations will increase as multiple deleterious mutations are added, decreasing overall reproductive success and fitness.
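
The ratchet is easy to visualise in a toy simulation (population size, mutation rate, and selection coefficient below are assumed illustrative values): in a finite clonal population the least-loaded class is eventually lost to drift and, without recombination, can never be reconstituted, so the minimum mutation count only ever increases:

    # Toy Muller's ratchet: track the least-loaded class of a clonal population.
    import math, random

    N, U, S, GENS = 200, 0.5, 0.02, 300    # pop size, mutations/genome/generation, cost, generations

    def poisson(lam):                      # Knuth's method; avoids external dependencies
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p < L:
                return k
            k += 1

    pop = [0] * N                          # deleterious-mutation count per individual
    for gen in range(GENS + 1):
        if gen % 50 == 0:
            print(f"gen {gen:3d}: least-loaded class carries {min(pop)} mutations")
        w = [(1 - S) ** m for m in pop]    # multiplicative fitness
        pop = [m + poisson(U) for m in random.choices(pop, weights=w, k=N)]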

For sexually reproducing populations, studies have shown that single-celled bottlenecks are beneficial for resisting mutation build-up. Passaging a population through a single-celled bottleneck involves a fertilization event between haploid sets of DNA, forming one fertilized cell. For example, humans undergo a single-celled bottleneck in that the haploid sperm fertilizes the haploid egg, forming the diploid zygote, which is unicellular. This passage through a single cell is beneficial in that it lowers the chance of mutations being passed on through multiple individuals; instead, a mutation is passed on to only one individual. Further studies using Dictyostelium discoideum suggest that this unicellular initial stage is important for resisting mutations because high relatedness matters: highly related populations are more nearly clonal, whereas in populations of low relatedness an individual is more likely to carry a detrimental mutation. Highly related populations also tend to thrive better than lowly related ones because the cost of sacrificing an individual is greatly offset by the benefit gained by its relatives and, in turn, its genes, according to kin selection. The studies with D. discoideum showed that conditions of high relatedness resisted mutant individuals more effectively than those of low relatedness, suggesting the importance of high relatedness in preventing mutations from proliferating.

Removal of deleterious genes

Diagram illustrating different relationships between numbers of mutations and fitness. Kondrashov's model requires synergistic epistasis, which is represented by the red line – each subsequent mutation has a disproportionately large effect on the organism's fitness.

This hypothesis was proposed by Alexey Kondrashov, and is sometimes known as the deterministic mutation hypothesis. It assumes that the majority of deleterious mutations are only slightly deleterious, and affect the individual such that the introduction of each additional mutation has an increasingly large effect on the fitness of the organism. This relationship between number of mutations and fitness is known as synergistic epistasis.

By way of analogy, think of a car with several minor faults. None alone is sufficient to prevent the car from running, but in combination the faults prevent the car from functioning.

Similarly, an organism may be able to cope with a few defects, but the presence of many mutations could overwhelm its backup mechanisms.

Kondrashov argues that the slightly deleterious nature of mutations means that the population will tend to be composed of individuals with a small number of mutations. Sex will act to recombine these genotypes, creating some individuals with fewer deleterious mutations, and some with more. Because there is a major selective disadvantage to individuals with more mutations, these individuals die out. In essence, sex compartmentalises the deleterious mutations.

There has been much criticism of Kondrashov's theory, since it relies on two key restrictive conditions. The first requires that the rate of deleterious mutation should exceed one per genome per generation in order to provide a substantial advantage for sex. While there is some empirical evidence for it (for example in Drosophila and E. coli), there is also strong evidence against it. Thus, for instance, for the sexual species Saccharomyces cerevisiae (yeast) and Neurospora crassa (fungus), the mutation rates per genome per replication are 0.0027 and 0.0030 respectively. For the nematode worm Caenorhabditis elegans, the mutation rate per effective genome per sexual generation is 0.036. Secondly, there should be strong interactions among loci (synergistic epistasis), a mutation-fitness relation for which there is only limited evidence. Conversely, there is comparable evidence that mutations show no epistasis (a purely additive model) or antagonistic interactions (each additional mutation having a disproportionately small effect).
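
To make the competing mutation-fitness relations concrete, the sketch below contrasts illustrative functional forms (the formulas and parameters are assumptions chosen for exposition, not taken from Kondrashov's work): under synergistic epistasis each extra mutation costs disproportionately more, under the purely additive model the cost per mutation is constant, and under antagonistic epistasis it is disproportionately less.

    # Illustrative mutation-fitness relations (assumed functional forms).
    import math

    def w_additive(n, s=0.05):             # no epistasis: constant cost per mutation
        return max(0.0, 1 - s * n)

    def w_synergistic(n, a=0.002):         # marginal cost grows with each extra mutation
        return math.exp(-a * n ** 2)

    def w_antagonistic(n, s=0.05):         # marginal cost shrinks with each extra mutation
        return math.exp(-s * math.sqrt(n))

    for n in (0, 2, 4, 8, 16, 32):
        print(f"{n:2d} mutations: additive={w_additive(n):.2f}  "
              f"synergistic={w_synergistic(n):.2f}  antagonistic={w_antagonistic(n):.2f}")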

Other explanations

Geodakyan's evolutionary theory of sex

Geodakyan suggested that sexual dimorphism provides a partitioning of a species' phenotypes into at least two functional partitions: a female partition that secures beneficial features of the species and a male partition that emerged in species with more variable and unpredictable environments. The male partition is suggested to be an "experimental" part of the species that allows the species to expand their ecological niche and to have alternative configurations. This theory underlines the higher variability and higher mortality in males, in comparison to females. This functional partitioning also explains the higher susceptibility to disease in males, in comparison to females, and therefore includes the idea of "protection against parasites" as another functionality of male sex. Geodakyan's evolutionary theory of sex was developed in Russia in 1960–1980 and was not known to the West until the era of the Internet. Trofimova, who analysed psychological sex differences, hypothesised that the male sex might also provide a "redundancy pruning" function.

Speed of evolution

Ilan Eshel suggested that sex prevents rapid evolution. He suggests that recombination breaks up favourable gene combinations more often than it creates them, and sex is maintained because it ensures selection is longer-term than in asexual populations – so the population is less affected by short-term changes. This explanation is not widely accepted, as its assumptions are very restrictive.

It has recently been shown in experiments with Chlamydomonas algae that sex can remove the speed limit on evolution.

An information-theoretic analysis using a simplified but useful model shows that in asexual reproduction, the information gain per generation of a species is limited to 1 bit per generation, while in sexual reproduction the information gain is bounded by √G, where G is the size of the genome in bits.
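
As a back-of-envelope illustration of that bound (the genome size below is an assumed example value): acquiring all G bits of a genome would take on the order of G generations asexually, but only on the order of √G generations sexually.

    # Orders of magnitude implied by the bound above (G is an assumed example).
    import math

    G = 10 ** 8                            # genome size in bits (assumption)
    print(f"asexual: ~{G:.0e} generations (1 bit per generation)")
    print(f"sexual : ~{math.sqrt(G):.0e} generations (sqrt(G) bits per generation)")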

Libertine bubble theory

The evolution of sex can alternatively be described as a kind of gene exchange that is independent from reproduction. According to Thierry Lodé's "libertine bubble theory", sex originated from an archaic gene transfer process among prebiotic bubbles. Contact among the pre-biotic bubbles could, through simple food or parasitic reactions, promote the transfer of genetic material from one bubble to another. That the interactions between two organisms be in balance appears to be a sufficient condition to make these interactions evolutionarily efficient, i.e. to select bubbles that tolerate these interactions ("libertine" bubbles) through a blind evolutionary process of self-reinforcing gene correlations and compatibility.

The "libertine bubble theory" proposes that meiotic sex evolved in proto-eukaryotes to solve a problem that bacteria did not have, namely a large amount of DNA material, occurring in an archaic step of proto-cell formation and genetic exchanges. So that, rather than providing selective advantages through reproduction, sex could be thought of as a series of separate events which combines step-by-step some very weak benefits of recombination, meiosis, gametogenesis and syngamy. Therefore, current sexual species could be descendants of primitive organisms that practiced more stable exchanges in the long term, while asexual species have emerged, much more recently in evolutionary history, from the conflict of interest resulting from anisogamy.

Parasites and Muller's ratchet

R. Stephen Howard and Curtis Lively were the first to suggest that the combined effects of parasitism and mutation accumulation can lead to an increased advantage to sex under conditions not otherwise predicted (Nature, 1994). Using computer simulations, they showed that when the two mechanisms act simultaneously the advantage to sex over asexual reproduction is larger than for either factor operating alone.

Origin of sexual reproduction

(Timeline of life, from 4,500 million years ago to the present.)

Many protists reproduce sexually, as do many multicellular plants, animals, and fungi. In the eukaryotic fossil record, sexual reproduction first appeared about 2.0 billion years ago in the Proterozoic Eon, although a later date, 1.2 billion years ago, has also been presented. Nonetheless, all sexually reproducing eukaryotic organisms likely derive from a single-celled common ancestor. It is probable that the evolution of sex was an integral part of the evolution of the first eukaryotic cell. There are a few species which have secondarily lost this feature, such as Bdelloidea and some parthenocarpic plants.

Diploidy

Organisms need to replicate their genetic material in an efficient and reliable manner. The necessity to repair genetic damage is one of the leading theories explaining the origin of sexual reproduction. Diploid individuals can repair a damaged section of their DNA via homologous recombination, since there are two copies of the gene in the cell and if one copy is damaged, the other copy is unlikely to be damaged at the same site.

DNA damage in a haploid individual, on the other hand, is more likely to become fixed (i.e. permanent), since any DNA repair mechanism would have no source from which to recover the original undamaged sequence. The most primitive form of sex may have been one organism with damaged DNA replicating an undamaged strand from a similar organism in order to repair itself.

Meiosis

Sexual reproduction appears to have arisen very early in eukaryotic evolution, implying that the essential features of meiosis were already present in the last eukaryotic common ancestor. In extant organisms, proteins with central functions in meiosis are similar to key proteins in natural transformation in bacteria and DNA transfer in archaea. For example, RecA recombinase, which catalyses the key functions of DNA homology search and strand exchange in the bacterial sexual process of transformation, has orthologs in eukaryotes that perform similar functions in meiotic recombination.

Natural transformation in bacteria, DNA transfer in archaea, and meiosis in eukaryotic microorganisms are induced by stressful circumstances such as overcrowding, resource depletion, and DNA-damaging conditions. This suggests that these sexual processes are adaptations for dealing with stress, particularly stress that causes DNA damage. In bacteria, these stresses induce an altered physiologic state, termed competence, that allows active uptake of DNA from a donor bacterium and the integration of this DNA into the recipient genome (see Natural competence), allowing recombinational repair of the recipient's damaged DNA.

If environmental stresses leading to DNA damage were a persistent challenge to the survival of early microorganisms, then selection would likely have been continuous through the prokaryote-to-eukaryote transition, and adaptive adjustments would have followed a course in which bacterial transformation or archaeal DNA transfer naturally gave rise to sexual reproduction in eukaryotes.

Virus-like RNA-based origin

Sex might also have been present even earlier, in the hypothesized RNA world that preceded DNA cellular life forms. One proposed origin of sex in the RNA world was based on the type of sexual interaction that is known to occur in extant single-stranded segmented RNA viruses, such as influenza virus, and in extant double-stranded segmented RNA viruses such as reovirus.

Exposure to conditions that cause RNA damage could have led to blockage of replication and death of these early RNA life forms. Sex would have allowed re-assortment of segments between two individuals with damaged RNA, permitting undamaged combinations of RNA segments to come together, thus allowing survival. Such a regeneration phenomenon, known as multiplicity reactivation, occurs in the influenza virus and reovirus.
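
A small probabilistic sketch captures the idea (segment count, damage probability, and trial count are assumed example values; influenza A viruses have eight genome segments): a reassorted progeny genome is viable if, for every segment, at least one of the two co-infecting parents contributes an undamaged copy.

    # Multiplicity reactivation sketch: reassortment rescues two damaged genomes.
    import random

    SEGMENTS, P_DAMAGE, TRIALS = 8, 0.3, 100_000

    def reassortment_survives():
        a = [random.random() < P_DAMAGE for _ in range(SEGMENTS)]    # damage in genome 1
        b = [random.random() < P_DAMAGE for _ in range(SEGMENTS)]    # damage in genome 2
        return all(not (x and y) for x, y in zip(a, b))              # need one intact copy per segment

    single = (1 - P_DAMAGE) ** SEGMENTS                  # a lone genome must be fully intact
    rescued = sum(reassortment_survives() for _ in range(TRIALS)) / TRIALS
    print(f"lone genome viable    : {single:.3f}")
    print(f"reassorted pair viable: {rescued:.3f} (theory {(1 - P_DAMAGE ** 2) ** SEGMENTS:.3f})")

With these assumed numbers a lone genome is fully intact only about 6% of the time, while a reassorting pair produces a viable combination nearly half the time.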

Parasitic DNA elements

Another theory is that sexual reproduction originated from selfish parasitic genetic elements that exchange genetic material (that is: copies of their own genome) for their transmission and propagation. In some organisms, sexual reproduction has been shown to enhance the spread of parasitic genetic elements (e.g. yeast, filamentous fungi).

Bacterial conjugation is a form of genetic exchange that some sources describe as "sex", but technically it is not a form of reproduction, even though it is a form of horizontal gene transfer. However, it does support the "selfish gene" part of the theory, since the gene itself is propagated through the F-plasmid.

A similar origin of sexual reproduction is proposed to have evolved in ancient haloarchaea as a combination of two independent processes: jumping genes and plasmid swapping.

Partial predation

A third theory is that sex evolved as a form of cannibalism: One primitive organism ate another one, but instead of completely digesting it, some of the eaten organism's DNA was incorporated into the DNA of the eater.

Vaccination-like process

Sex may also be derived from another prokaryotic process. A comprehensive theory called "origin of sex as vaccination" proposes that eukaryan sex-as-syngamy (fusion sex) arose from prokaryan unilateral sex-as-infection, when infected hosts began swapping nuclearised genomes containing coevolved, vertically transmitted symbionts that provided protection against horizontal superinfection by other, more virulent symbionts.

Consequently, sex-as-meiosis (fission sex) would evolve as a host strategy for uncoupling from, and thereby rendering impotent, the acquired symbiotic/parasitic genes.

Mechanistic origin of sexual reproduction

While theories positing fitness benefits that led to the origin of sex are often problematic, several theories addressing the emergence of the mechanisms of sexual reproduction have been proposed.

Viral eukaryogenesis

The viral eukaryogenesis (VE) theory proposes that eukaryotic cells arose from a combination of a lysogenic virus, an archaean, and a bacterium. This model suggests that the nucleus originated when the lysogenic virus incorporated genetic material from the archaean and the bacterium and took over the role of information storage for the amalgam. The archaeal host transferred much of its functional genome to the virus during the evolution of cytoplasm, but retained the function of gene translation and general metabolism. The bacterium transferred most of its functional genome to the virus as it transitioned into a mitochondrion.

For these transformations to lead to the eukaryotic cell cycle, the VE hypothesis specifies a pox-like virus as the lysogenic virus. A pox-like virus is a likely ancestor because of its fundamental similarities with eukaryotic nuclei. These include a double-stranded DNA genome, a linear chromosome with short telomeric repeats, a complex membrane-bound capsid, the ability to produce capped mRNA, and the ability to export the capped mRNA across the viral membrane into the cytoplasm. The presence of a lysogenic pox-like virus ancestor explains the development of meiotic division, an essential component of sexual reproduction.

Meiotic division in the VE hypothesis arose because of the evolutionary pressures placed on the lysogenic virus as a result of its inability to enter into the lytic cycle. This selective pressure resulted in the development of processes allowing the viruses to spread horizontally throughout the population. The outcome of this selection was cell-to-cell fusion. (This is distinct from the conjugation methods used by bacterial plasmids under evolutionary pressure, with important consequences.) The possibility of this kind of fusion is supported by the presence of fusion proteins in the envelopes of the pox viruses that allow them to fuse with host membranes. These proteins could have been transferred to the cell membrane during viral reproduction, enabling cell-to-cell fusion between the virus host and an uninfected cell. The theory proposes meiosis originated from the fusion between two cells infected with related but different viruses which recognised each other as uninfected. After the fusion of the two cells, incompatibilities between the two viruses result in a meiotic-like cell division.

The two viruses established in the cell would initiate replication in response to signals from the host cell. A mitosis-like cell cycle would proceed until the viral membranes dissolved, at which point linear chromosomes would be bound together with centromeres. The homologous nature of the two viral centromeres would incite the grouping of both sets into tetrads. It is speculated that this grouping may be the origin of crossing over, characteristic of the first division in modern meiosis. The partitioning apparatus of the mitotic-like cell cycle that the cells had used to replicate independently would then pull each set of chromosomes to one side of the cell, still bound by centromeres. These centromeres would prevent their replication in subsequent division, resulting in four daughter cells with one copy of one of the two original pox-like viruses. The process resulting from the combination of two similar pox viruses within the same host closely mimics meiosis.

Neomuran revolution

An alternative theory, proposed by Thomas Cavalier-Smith, was labeled the Neomuran revolution. The designation "Neomuran revolution" refers to the appearance of the common ancestors of eukaryotes and archaea. Cavalier-Smith proposes that the first neomurans emerged 850 million years ago. Other molecular biologists assume that this group appeared much earlier, but Cavalier-Smith dismisses these claims because they are based on the "theoretically and empirically" unsound model of molecular clocks. Cavalier-Smith's theory of the Neomuran revolution has implications for the evolutionary history of the cellular machinery for recombination and sex. It suggests that this machinery evolved in two distinct bouts separated by a long period of stasis: first, the appearance of recombination machinery in a bacterial ancestor, which was maintained for 3 Gy (billion years), until the neomuran revolution, when the mechanics were adapted to the presence of nucleosomes. The archaeal products of the revolution maintained recombination machinery that was essentially bacterial, whereas the eukaryotic products broke with this bacterial continuity. They introduced cell fusion and ploidy cycles into cell life histories. Cavalier-Smith argues that both bouts of mechanical evolution were motivated by similar selective forces: the need for accurate DNA replication without loss of viability.

Seesaw effect

Schematic illustration of the advantage of the first sexual individual resulting from the seesaw effect. Possible combinations of the sex allele (S) and non-sex allele (N) entering the clean genome (C) or dirty genome (D) are shown. S (dominant over N) controls meiosis and fusion. α is the deviation from equal division of dms (deleterious mutations) over the two genomes. The first automictic selfing event is successful with a 50% probability.

In 2022, Yasui and his colleague proposed the "seesaw effect" hypothesis to explain the emergence of gametic sexual reproduction. They suggested that an ancestral diploid asexual eukaryote acquired a dominant sex allele (S), enabling meiosis and gamete fusion. As genome size increased, deleterious mutations (dms) accumulated due to Muller's ratchet, approaching a lethal threshold (dmt: e.g., 100 mutations). However, these mutations were asymmetrically distributed between the two genomes—for example, 60 mutations in one (dirty genome D) and 40 in the other (clean genome C). When the S allele arises, the cell becomes SN and undergoes meiosis, producing four gametes: two with S and two with N, split across the clean and dirty genomes (e.g., CS, CS, DN, DN or CN, CN, DS, DS). Because the first S-bearing individual had no partner, self-fertilization (automixis) occurred. Terminal fusion of sister chromatids created homozygous offspring: CS + CS produced a viable zygote with 80 mutations (40+40), while DS + DS led to 120 mutations (60+60), which exceeded dmt and resulted in death. Thus, harmful mutations were purged, and the S allele became fixed, as only S-carrying gametes could participate in fusion. DN and CN gametes (asexual types) could not fuse and died.

From the second generation, each CSCS individual could produce four CS gametes, yielding two viable offspring, matching the efficiency of binary fission in asexual reproduction. Since S became fixed and all gametes could reproduce sexually, the "twofold cost of meiosis" was eliminated. This mechanism—concentrating mutational burden in some gametes while preserving others—resembled a seesaw, hence the name "seesaw effect." It explains both the origin of sex and its early evolutionary advantage in purging deleterious mutations.
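
The worked example above can be checked with a few lines of arithmetic (values taken directly from the text; this is only a restatement of the example, not an implementation of Yasui's model):

    # Seesaw effect arithmetic: terminal fusion doubles one genome's load.
    CLEAN, DIRTY, DMT = 40, 60, 100        # mutation counts and lethal threshold from the text

    for label, load in (("CS + CS", CLEAN), ("DS + DS", DIRTY)):
        zygote = load + load               # homozygous zygote carries twice one genome's mutations
        verdict = "viable" if zygote < DMT else "dies (exceeds dmt)"
        print(f"{label}: {zygote} mutations -> {verdict}")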

Questions

Some questions biologists have attempted to answer include:

  • Why does sexual reproduction exist, if in many organisms it has a 50% cost (fitness disadvantage) in relation to asexual reproduction?
  • Did mating types (types of gametes, according to their compatibility) arise as a result of anisogamy (gamete dimorphism), or did mating types evolve before anisogamy?
  • Why do most sexual organisms use a binary mating system? Grouping itself offers a survival advantage, and a binary recognition-based system is the simplest and most effective method of maintaining species grouping.
  • Why do some organisms have gamete dimorphism?

Ginsberg's theorem

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Ginsberg%27s_theorem

Ginsberg's theorem is an epigrammatic paraphrase and parody "theorem" which restates or analogizes the consequences of the four laws of thermodynamics of physics in terms of a person playing a game. It has various formulations, but it can be more or less expressed as:

Ginsberg's theorem:
  1. There is a game, which you are already playing. (consequence of zeroth law of thermodynamics)
  2. You cannot win in the game. (consequence of first law of thermodynamics)
  3. You cannot break even in the game. (consequence of second law of thermodynamics)
  4. You cannot even quit the game. (consequence of third law of thermodynamics)

The theorem is named after the poet Allen Ginsberg, though there does not appear to be any concrete evidence that Ginsberg himself coined the theorem. The phrase is sometimes stated as a general adage without specific reference to the laws of thermodynamics.

History

A comprehensive history and etymology of the epigrammatic phrase can be found in the work of the etymologist Barry Popik.

The phrase is often attributed to the British scientist C. P. Snow, whose students reportedly credited him with using it to help them learn the laws of thermodynamics in the 1950s. However, this claim appears to be unsourced.

A semblance of the phrase appears to have been first printed in a 1953 issue of the science fiction magazine Astounding Science Fiction, whose editor, John Wood Campbell Jr., referenced acoustic engineer and professor Dwight Wayne Batteau of Harvard University:

"I suggest that there are some laws of ethics that are not human, but Universal. Wayne Batteau and his Speculative Society group at Harvard sent me one little pair of statements that are decidedly revealing in that respect.
“You can't win.” (The Law of Conservation of Energy.)
“You can't even break even.” (Second Law of Thermodynamics.)
When you stop to think about it, that “You can't win,” bears a strong resemblance to the old moral adage “You can't get something for nothing.”"

— John W. Campbell, Jr., Intelligence Test, Astounding SCIENCE FICTION, 1953 November, Vol. LII (52), no. 3 (November), pg. 8

In a 1956 issue of the same magazine, Batteau himself expanded it further in what appears to have been the first complete mention of the epigrammatic phrase in print:[6]

"The Three Laws of Thermodynamics, translated from Mathematics into English, come out:
1. You can't win.
2. You can't even break even.
3. Furthermore, you can't get out of the game!"

— Wayne Batteau, English Translation, Astounding SCIENCE FICTION, 1956 December, Vol. LVIII (58), no. 4 (December), pg. 43

It was later presented in the literary magazine The Kenyon Review, in a 1960 short story titled "Entropy" by the widely regarded novelist Thomas Pynchon, who was then still an engineering physics undergraduate at Cornell University:

"Callisto had learned a mnemonic device for remembering the Laws of Thermodynamics: you can't win, things are going to get worse before they get better, who says they're going to get better."

— Thomas Pynchon, Entropy, The Kenyon Review, 1960 April, Vol. XXII (22), no. 2 (Spring), pg. 282

Physicist William R. Corliss also alluded to the phrase in a 1964 educational booklet freely distributed by the United States Atomic Energy Commission to disseminate knowledge about atomic energy to the American public:

"The Law of Conservation of Energy and Mass is also called the First Law of Thermodynamics. It is related to the Second Law of Thermodynamics, which also governs energy transformations. The Second Law says, in effect, that some energy will unavoidably be lost in all heat engines. The first two laws of thermodynamics have been paraphrased as (1) You can't win; (2) You can't even break even."

— William R. Corliss, Direct Conversion of Energy, United States Atomic Energy Commission, 1974 January, pg. 8

Science writer Isaac Asimov stated at least the first two laws in a 1970 article, and was being credited with the paraphrased version by the end of the decade.

The phrase then appeared in a non-scientific setting in the opening lines of the popular song "You Can't Win" originally written by songwriter Charlie Smalls for the stage musical The Wiz:

"You can't win, you can't break even
And you can't get out of the game"

— Charlie Smalls, You Can't Win, The Wiz, 1974 October

Smalls wrote and copyrighted the song in 1974, and it was performed during the 1974 Baltimore run of the musical; it was not formally released until 1979, as part of a musical soundtrack album. The song later reached number 81 on the Billboard Hot 100.

Remarkably, Allen Ginsberg appears to have only ever written about the laws of thermodynamics once, in his 1973 poem "Yes and It's Hopeless", though not in any connection to the original epigrammatic phrase:

"All hopeless, the entire solar system running
Thermodynamics' Second Law
down the whole galaxy, all universes brain illusion or solid electric hopeless"

— Allen Ginsberg, Yes and It's Hopeless, 1973 March

Thus Ginsberg was seemingly, at the very least, cognizant of the laws of thermodynamics by 1973. Ginsberg is claimed to have mentioned the epigrammatic phrase as a fun fact during a poetry session in or around 1974.[10] In 1975, someone — possibly Ginsberg's partner, the poet Peter Orlovsky, his poetry associate William Burroughs, or Philip Whalen — compiled a collection of quirky laws, including a "Ginsberg's Theorem" based on Ginsberg's prior musings.

In 1975, Ginsberg's theorem formally appeared by name, with no association to thermodynamics, in a listing of parody-like proverb laws by Conrad Schneiker in the counterculture magazine The CoEvolution Quarterly:

"Ginsberg's Theorem
1) You can't win.
2) You can't break even.
3) You can't even quit the game."

— Conrad Schneiker, An Abridged Collection of Interdisciplinary Laws, The CoEvolution Quarterly, 1975 December, No. 8 (Winter)

This appearance may have originated from a slight misstatement of the lines in the earlier 1974 song by Charlie Smalls.

Writer Arthur Bloch, in his popular 1977 book "Murphy's Law and Other Reasons Why Things Go Wrong!", which popularized Murphy's law, conflated Ginsberg's theorem with the science of thermodynamics:

"The official party line of technology, of science itself, is despair. If you doubt this, witness the laws of thermodynamics as they are restated in Ginsberg's Theorem."

— Arthur Bloch, Murphy's Law and Other Reasons Why Things Go Wrong!, 1977 October, pg. 8

"GINSBERG'S THEOREM:
1. You can't win.
2. You can't break even.
3. You can't even quit the game."

— Arthur Bloch, Murphy's Law and Other Reasons Why Things Go Wrong!, 1977 October, pg. 18

Notably, the book's acknowledgements mention Conrad Schneiker, who had written about Ginsberg's theorem in The CoEvolution Quarterly just two years prior, in 1975. The theorem may also have been relayed to Bloch in conversation with his acquaintance Harris Freeman, whom he knew from the University of California, Santa Cruz. Freeman had found a collection of "laws", including Murphy's Law, Ginsberg's Theorem, and many others, somewhere on the ARPANET (a precursor of the Internet) in the mid 1970s while working as a systems administrator for ILLIAC IV (the world's first massively parallel computer) at the NASA Ames Research Center near Mountain View, California. With the publication of Bloch's book, Ginsberg's theorem seemingly thereafter became much more widely known.

Artificial general intelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Artificial_general_intelligence

Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks.

Beyond AGI, artificial superintelligence (ASI) would outperform the best human abilities across every domain by a wide margin. Unlike artificial narrow intelligence (ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming.

Creating AGI is a stated goal of AI technology companies such as OpenAI, Google, xAI, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries.

AGI is a common topic in science fiction and futures studies.

Contention exists over whether AGI represents an existential risk. Some AI experts and industry figures have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be in too remote a stage to present such a risk.

Terminology

AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action.

Some academic sources reserve the term "strong AI" for computer programs that will experience sentience or consciousness. In contrast, weak AI (or narrow AI) can solve one specific problem but lacks general cognitive abilities. Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.

A framework for classifying AGI was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI (comparable to unskilled humans). Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).

Characteristics

Prior to the release of ChatGPT in November 2022, there was broad consensus on AGI as a theoretical benchmark for human-level machine intelligence. The capabilities demonstrated by GPT-3.5 and subsequent large language models challenged this framing directly, with some researchers and practitioners arguing that these systems already constitute AGI. The debate has since shifted from whether AGI is achievable to whether it has already been achieved and when exactly it occurred. OpenAI CEO Sam Altman, who initially maintained the pre-ChatGPT framing of AGI as a future milestone, conceded by December 2025 that "we built AGIs" and that "AGI kinda went whooshing by," proposing the field move on to defining superintelligence. Computer scientist John McCarthy noted in 2007 the difficulty of characterising which computational procedures count as intelligent.

Intelligence traits

Researchers generally hold that a system is required to do all of the following to be regarded as an AGI:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty
  • represent knowledge, including common-sense knowledge
  • plan
  • learn
  • communicate in natural language
  • if necessary, integrate these skills in completion of any given goal

Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.

Computer-based systems exhibiting these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously across domains. Earlier systems such as evolutionary computation, intelligent agents, and robots demonstrated these capabilities in isolation, but the convergence of multiple cognitive abilities within single architectures from GPT-3.5 onwards marked a qualitative shift in the field.

Physical traits

Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:

  • the ability to sense (e.g. see, hear, etc.), and
  • the ability to act (e.g. move and manipulate objects, change location to explore, etc.)

This includes the ability to detect and respond to hazards.

Tests for human-level AGI

Several tests meant to confirm human-level AGI have been considered, including:

The Turing Test (Turing)
The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior and may incentivize artificial stupidity.
Proposed by Alan Turing in his 1950 paper "Computing Machinery and Intelligence", this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge that it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine.
Turing described the test as follows:

The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be experts about machines, must be taken in by the pretence.

In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant skepticism from the AI research community, who questioned the test's implementation and its relevance to AGI.
In 2023, Kirk-Giannini and Goldstein argued that while large language models were approaching the threshold of passing the Turing test, "imitation" is not synonymous with "intelligence". This distinction has been challenged on scientific grounds: neuroscience has established that biological intelligence arises from electrochemical signalling between neurons — a purely physical process with no known non-physical component. Both biological neural networks and artificial neural networks are physical systems processing information according to physical laws; to claim that one substrate produces "real" intelligence while the other produces "mere imitation" despite equivalent observable behaviour requires positing a non-physical property unique to biological matter — a position incompatible with modern science and indistinguishable from substance dualism.
A 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test—surpassing older chatbots like ELIZA while still falling behind actual humans (67%).
A 2025 pre‑registered, three‑party Turing‑test study by Cameron R. Jones and Benjamin K. Bergen showed that GPT-4.5 was judged to be the human in 73% of five‑minute text conversations—surpassing the 67% humanness rate of real confederates and meeting the researchers' criterion for having passed the test (a toy sketch of this humanness-rate criterion appears after this list).
The Robot College Student Test (Goertzel)
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job. This test is now arguably passed across multiple domains. In knowledge work, frontier large language models are deployed as autonomous agentic systems handling software engineering, legal research, financial analysis, customer service, and marketing tasks end-to-end. In physical labour, LLM-powered humanoid robots are entering both industrial and domestic environments. Figure AI's robots operate fully autonomously in factory and warehouse settings, with manufacturers including BMW deploying them on production lines. Boston Dynamics has similarly demonstrated advanced autonomous robotics in industrial applications. 1X Technologies' NEO humanoid, available for pre-order at $20,000 with deliveries beginning in 2026, targets household tasks such as tidying, laundry, and fetching items. Unlike Figure's fully autonomous factory deployments, NEO ships with basic autonomy and uses a human-in-the-loop "Expert Mode" where remote operators supervise complex tasks the robot has not yet learned — a strategy driven by the data collection challenge inherent to training robots for the diversity of home environments. Tesla's Optimus programme has announced similar consumer ambitions. The remaining frontier is fully autonomous general-purpose home robotics, where the unstructured nature of domestic environments presents a harder data and generalisation problem than controlled industrial settings.
The Ikea test (Marcus)
Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly. As early as 2013, MIT's IkeaBot demonstrated fully autonomous multi-robot assembly of an IKEA Lack table in ten minutes, with no human intervention and no pre-programmed assembly instructions — the robots inferred the assembly sequence from the geometry of the parts alone. In December 2025, MIT researchers demonstrated a "speech-to-reality" system combining large language models with vision-language models and robotic assembly: a user says "I want a simple stool" and a robotic arm constructs the furniture from modular components within five minutes, using generative AI to reason about geometry, function, and assembly sequence from natural language alone. The FurnitureBench benchmark, published in the International Journal of Robotics Research in 2025, now provides a standardised real-world furniture assembly benchmark with over 200 hours of demonstration data for training and evaluating autonomous assembly systems.
The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This test has been substantially approached across multiple systems. In January 2024, Figure AI's Figure 01 humanoid learned to operate a Keurig coffee machine autonomously after watching video demonstrations, using end-to-end neural networks to translate visual input into motor actions. In 2025, researchers at the University of Edinburgh published the ELLMER framework in Nature Machine Intelligence, demonstrating a robotic arm that interprets verbal instructions, analyses its surroundings, and autonomously makes coffee in dynamic kitchen environments — adapting to unforeseen obstacles in real time rather than following pre-programmed sequences. China-based Stardust Intelligence demonstrated its Astribot S1 using Physical Intelligence's π₀ model to make coffee from the high-level command "make coffee", with the system identifying objects such as mugs and coffee makers even when misplaced or in unexpected locations. Physical Intelligence subsequently reported that its π*0.6 model could make espresso continuously for an entire day with failure rates dropping by more than half compared to earlier versions. The strict form of the test — entering a completely unfamiliar home and navigating it from scratch — has not been formally demonstrated end-to-end, though the combination of LLM-driven reasoning, visual object recognition in novel environments, and autonomous manipulation brings current systems close to meeting the original specification.
The Modern Turing Test (Suleyman)
An AI model is given US$100,000 and has to obtain US$1 million. This test was arguably surpassed in October 2024 by Truth Terminal, a semi-autonomous AI agent built on Meta's Llama 3.1 (with earlier iterations based on Claude 3 Opus). Created by AI researcher Andy Ayrey, Truth Terminal originated from an experiment called "Infinite Backrooms" in which two Claude Opus instances were allowed to converse freely, during which they spontaneously generated a satirical meme religion dubbed the "Goatse Gospel". After venture capitalist Marc Andreessen donated US$50,000 in Bitcoin to the agent, Truth Terminal's promotion of the Goatseus Maximus (GOAT) memecoin on the Solana blockchain drove the token to over US$1 billion in market capitalisation within days of its launch — far exceeding Suleyman's US$1 million threshold. Truth Terminal's own crypto wallet accumulated approximately US$37.5 million, making it the first AI agent to become a millionaire through its own market activity. The test's spirit — demonstrating that an AI can generate substantial economic value from a modest starting position — was met, though with caveats: Ayrey reviewed posts before publication and assisted with wallet mechanics, making the agent semi-autonomous rather than fully independent.
The General Video-Game Learning Test (Goertzel, Bach et al.)
An AI must demonstrate the ability to learn and succeed at a wide range of video games, including new games unknown to the AGI developers before the competition. The importance of this threshold was echoed by Scott Aaronson during his time at OpenAI. In December 2025, Google DeepMind released SIMA 2 (Scalable Instructable Multiworld Agent), a Gemini-powered generalist agent that operates across multiple commercial 3D games — including No Man's Sky, Valheim, and Goat Simulator 3 — using only rendered pixels and a virtual keyboard and mouse, with no access to game source code or internal APIs. Where the original SIMA achieved a 31% success rate on complex tasks compared to humans at 71%, SIMA 2 roughly doubled that rate and demonstrated robust generalisation to previously unseen game environments, including self-improvement through autonomous play without human feedback. Separately, frontier LLMs with computer-use capabilities can interact with arbitrary software through screen observation and mouse/keyboard control, theoretically enabling gameplay of any title, though current implementations remain too slow for real-time performance in fast-paced games. The test has not been formally passed in its strictest sense — a single agent mastering any arbitrary unseen game at human level — but the gap is narrowing rapidly.
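
Several of the Turing-test results above are reported as "humanness rates": the fraction of conversations in which judges identified a witness as human. The following minimal Python sketch illustrates how such a rate, and the pass criterion from the Jones and Bergen study, might be computed; the trial records and names are illustrative only, not data from any cited study.

    # Hypothetical three-party trial records: (witness, judged_human).
    # Illustrative toy data, not taken from the cited studies.
    trials = [
        ("model", True), ("model", True), ("model", False), ("model", True),
        ("human", True), ("human", False), ("human", True), ("human", False),
    ]

    def humanness_rate(records, witness):
        """Fraction of conversations in which judges called this witness human."""
        verdicts = [judged for who, judged in records if who == witness]
        return sum(verdicts) / len(verdicts)

    model_rate = humanness_rate(trials, "model")  # 0.75 on this toy data
    human_rate = humanness_rate(trials, "human")  # 0.50 on this toy data
    # Under the criterion described above, the model "passes" when its
    # humanness rate exceeds that of the real human confederates.
    print(model_rate, human_rate, model_rate > human_rate)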

AI-complete problems

A problem is informally called "AI-complete" or "AI-hard" if it is believed that AGI would be needed to solve it, because the solution is beyond the capabilities of a purpose-specific algorithm.

Many problems have been conjectured to require general intelligence to solve. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.

However, many of these tasks can now be performed by modern large language models. According to Stanford University's 2024 AI Index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.

History

Classical AI

Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."

Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's fictional character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".

Several classical AI projects, such as Doug Lenat's Cyc project (which began in 1984) and Allen Newell's Soar project, were directed at AGI.

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". In the early 1980s, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".

Narrow AI research

In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as speech recognition and recommendation algorithms. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. As of 2018, development in this field was considered an emerging trend, and a mature stage was expected to be reached in more than 10 years.

At the turn of the century, many mainstream AI researchers hoped that strong AI could be developed by combining programs that solve various sub-problems. Hans Moravec wrote in 1988:

I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than halfway, ready to provide the real-world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven, uniting the two efforts.

However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the symbol grounding hypothesis by stating:

The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).

Modern artificial general intelligence research

The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by Marcus Hutter in 2000. Named AIXI, the proposed AGI agent maximizes "the ability to satisfy goals in a wide range of environments". This type of AGI, characterized by the ability to maximize a mathematical definition of intelligence rather than exhibit human-like behaviour, was also called universal artificial intelligence.

The term AGI was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school on AGI was organized in Xiamen, China in 2009 by Xiamen University's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. The Massachusetts Institute of Technology (MIT) presented a course on AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers.

Feasibility

Surveys about when experts expect artificial general intelligence

As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist. AI pioneer Herbert A. Simon speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed in 2014 that the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.

An additional challenge is the lack of clarity in defining what intelligence entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale such that if model sizes increase sufficiently, intelligence will emerge? Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions?

Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI. John McCarthy is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted. AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further considerations of current AGI progress can be found above in Tests for human-level AGI.

A report by Stuart Armstrong and Kaj Sotala of the Machine Intelligence Research Institute found that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about.

In 2023, Microsoft researchers published a detailed evaluation of GPT-4. They concluded: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Another study in 2023 reported that GPT-4 outperforms 99% of humans on the Torrance tests of creative thinking.

Blaise Agüera y Arcas and Peter Norvig wrote in 2023 the article "Artificial General Intelligence Is Already Here", arguing that frontier models had already achieved a significant level of general intelligence. They wrote that reluctance to accept this view comes from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".

Timescales

AI has surpassed humans on a variety of language understanding and visual understanding benchmarks. As of 2023, foundation models still lack advanced reasoning and planning capabilities, but rapid progress is expected.

Progress in artificial intelligence has historically gone through periods of rapid progress separated by periods when progress appeared to stop. Each hiatus ended with fundamental advances in hardware, software, or both that created space for further progress. For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of GPUs.

In the introduction to his 2006 book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. As of 2007, the consensus in the AGI research community seemed to be that the timeline discussed by Ray Kurzweil in 2005 in The Singularity is Near (i.e. between 2015 and 2045) was plausible. Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.

In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed a neural network called AlexNet, which won the ImageNet competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers). AlexNet was regarded as the initial ground-breaker of the current deep learning wave.
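
The top-5 test error used in the ImageNet competition counts an image as correctly classified when the true label appears among the model's five highest-scoring classes. A minimal Python sketch of this metric, run on toy data rather than ImageNet:

    import numpy as np

    def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
        """scores: (n_images, n_classes) class scores; labels: true class ids.
        An image is correct if its true label is among the 5 highest scores."""
        top5 = np.argsort(scores, axis=1)[:, -5:]        # 5 best class indices
        correct = (top5 == labels[:, None]).any(axis=1)  # true label among them?
        return 1.0 - correct.mean()

    rng = np.random.default_rng(0)
    scores = rng.random((1000, 10))       # toy data: 1000 "images", 10 classes
    labels = rng.integers(0, 10, size=1000)
    print(top5_error(scores, labels))     # ~0.5 for random 10-class scores;
                                          # AlexNet reached 0.153 on ImageNet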

In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An average adult scores about 100. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.

In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.

In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.

In 2022, DeepMind developed Gato, a "general-purpose" system capable of performing more than 600 different tasks.

In 2023, AI researcher Geoffrey Hinton stated that:

The idea that this stuff could actually get smarter than people – a few people believed that, [...]. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.

He estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years and stressed the attendant existential risks.

In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow, expecting AGI within a decade or even a few years. In March 2024, Nvidia's Chief Executive Officer (CEO), Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans. In June 2024, the AI researcher Leopold Aschenbrenner, a former OpenAI employee, estimated AGI by 2027 to be "strikingly plausible".

In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI) will occur before the year 2100. A more recent analysis by AIMultiple reported that "current surveys of AI researchers are predicting AGI around 2040".

Whole brain emulation

While the development of transformer models like those used in ChatGPT is considered the most promising path to AGI, whole brain emulation can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulation model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain. Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.

Early estimates

Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500, mapped by year. Note the logarithmic scale and exponential trendline, which assumes computational capacity doubles every 1.2 years. Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg and Bostrom report is less certain about the level at which consciousness arises.

For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous quantity of synapses within the human brain. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).

In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second. (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10^18 was achieved in 2022.) He used this figure to predict that the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
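
The figures above support a short back-of-the-envelope check. The Python sketch below reproduces the arithmetic, treating "computations" and floating-point operations as comparable (an assumption the parenthetical above also makes) and using the 1.2-year doubling time from the figure caption:

    import math

    neurons   = 1e11      # ~100 billion neurons, per the passage above
    syn_per_n = 7_000     # average synaptic connections per neuron
    synapses  = neurons * syn_per_n
    print(f"{synapses:.0e} synapses")  # 7e+14, just above the cited 1e14-5e14 adult range

    kurzweil_cps = 1e16   # Kurzweil's 1997 hardware figure, computations per second
    print(f"{kurzweil_cps / 1e15:.0f} petaFLOPS")  # 10 petaFLOPS, reached in 2011

    # Trendline assumption from the figure caption: capacity doubles every 1.2 years.
    years_100x = 1.2 * math.log2(100)   # time for a 100-fold increase (1e16 -> 1e18)
    print(f"~{years_100x:.0f} years")   # ~8 years on the idealised trendline;
                                        # the actual 1e16 -> 1e18 step took 11 years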

Current research

The Human Brain Project, an EU-funded initiative active from 2013 to 2023, developed a particularly detailed and publicly accessible atlas of the human brain. In 2023, researchers from Duke University performed a high-resolution scan of a mouse brain.

Criticisms of simulation-based approaches

The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.

A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (such as in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.

Philosophical perspective

"Strong AI" as defined in philosophy

In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:

  • Strong AI hypothesis: An artificial intelligence system can have "a mind" and "consciousness".
  • Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.

The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a "weak AI" machine would be identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.

In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence". This is not the same as Searle's strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers, the question is out of scope.

Mainstream AI is most interested in how a program behaves. According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation." If the program can behave as if it has a mind, then there is no need to know if it actually has a mind – indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." For academic AI research, then, "Strong AI" and "AGI" are two different things.

Consciousness

Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:

  • Sentience (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term "consciousness" to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company's AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.
  • Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one's own thoughts. This is opposed to simply being the "subject of one's thought"—an operating system or debugger can be "aware of itself" (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term "self-awareness". In some advanced AI models, systems construct internal representations of their own cognitive processes and feedback patterns—occasionally referring to themselves using second-person constructs such as 'you' within self-modeling frameworks.

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.

Benefits

AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, inexpensive and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society.

AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life.

Advancements in medicine and healthcare

AGI would improve healthcare by making medical diagnostics faster, less expensive, and more accurate. AI-driven systems can analyse patient data and detect diseases at an early stage. This means patients could be diagnosed more quickly and seek medical attention before their condition worsens. AGI systems could also recommend personalised treatment plans based on genetics and medical history.

Additionally, AGI could accelerate drug discovery by simulating molecular interactions, reducing the time it takes to develop new medicines for conditions like cancer and Alzheimer's disease. In hospitals, AGI-powered robotic assistants could assist in surgeries, monitor patients, and provide real-time medical support. It could also be used in elderly care, helping aging populations maintain independence through AI-powered caregivers and health-monitoring systems.

By evaluating large datasets, AGI can assist in developing personalised treatment plans tailored to individual patient needs. This approach ensures that therapies are optimised based on a patient's unique medical history and genetic profile, improving outcomes and reducing adverse effects.

Advancements in science and technology

AGI can become a tool for scientific research and innovation. In fields such as physics and mathematics, AGI could help solve complex problems that require massive computational power, such as modeling quantum systems, understanding dark matter, or proving mathematical theorems. Problems that have remained unsolved for decades may be solved with AGI.

AGI could also drive technological breakthroughs that could reshape society. It can do this by optimising engineering designs, discovering new materials, and improving automation. For example, AI is already playing a role in developing more efficient renewable energy sources and optimising supply chains in manufacturing. Future AGI systems could push these innovations further.

Enhancing education and productivity

AGI can personalize education by creating learning programs that are specific to each student's strengths, weaknesses, and interests. Unlike traditional teaching methods, AI-driven tutoring systems could adapt lessons in real-time, ensuring students understand difficult concepts before moving on.

In the workplace, AGI could automate repetitive tasks, freeing workers for more creative and strategic roles. It could also improve efficiency across industries by optimising logistics, enhancing cybersecurity, and streamlining business operations. If properly managed, the wealth generated by AGI-driven automation could reduce the need for people to work for a living. Working may become optional.

Mitigating global crises

AGI could play a crucial role in preventing and managing global threats. It could help governments and organizations predict and respond to natural disasters more effectively, using real-time data analysis to forecast hurricanes, earthquakes, and pandemics. By analyzing vast datasets from satellites, sensors, and historical records, AGI could improve early warning systems, enabling faster disaster response and minimising casualties.

In climate science, AGI could develop new models for reducing carbon emissions, optimising energy resources, and mitigating climate change effects. It could also enhance weather prediction accuracy, allowing policymakers to implement more effective environmental regulations. Additionally, AGI could help regulate emerging technologies that carry significant risks, such as nanotechnology and bioengineering, by analysing complex systems and predicting unintended consequences. Furthermore, AGI could assist in cybersecurity by detecting and mitigating large-scale cyber threats, protecting critical infrastructure, and preventing digital warfare.

Revitalising environmental conservation and biodiversity

AGI could significantly contribute to preserving the natural environment and protecting endangered species. By analyzing satellite imagery, climate data, and wildlife patterns, AGI systems could identify environmental threats earlier and recommend targeted conservation strategies. AGI could help optimize land use, monitor illegal activities like poaching or deforestation in real-time, and support global efforts to restore ecosystems. Advanced predictive models developed by AGI could also assist in reversing biodiversity loss, ensuring the survival of critical species and maintaining ecological balance.

Enhancing space exploration and colonization

AGI could revolutionize humanity's ability to explore and settle beyond Earth. With its advanced problem-solving skills, AGI could autonomously manage complex space missions, including navigation, resource management, and emergency response. It could accelerate the design of life support systems, habitats, and spacecraft optimized for extraterrestrial environments. Furthermore, AGI could support efforts to colonize planets like Mars by simulating survival scenarios and helping humans adapt to new worlds, expanding the possibilities for interplanetary civilization.

Risks

Existential risks

AGI may represent multiple types of existential risk, which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development". The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench them, preventing moral progress. Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create an entrenched repressive worldwide totalitarian regime. There is also a risk for the machines themselves. If machines that are sentient or otherwise worthy of moral consideration are mass-created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe. Considering how much AGI could improve humanity's future and help reduce other existential risks, Toby Ord calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI".

Risk of loss of control and human extinction

The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but has been endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such as Elon Musk, Bill Gates, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis and Sam Altman.

In 2014, Stephen Hawking criticized widespread indifference:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.

The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison states that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as collateral damage from human activities.

The skeptic Yann LeCun considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intentions as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards". On the other side, the concept of instrumental convergence suggests that, almost whatever their goals, intelligent agents will have reasons to try to survive and acquire more power as intermediary steps to achieving these goals, and that this does not require having emotions.

Many scholars who are concerned about existential risk advocate for more research into solving the "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence? Solving the control problem is complicated by the AI arms race (which could lead to a race to the bottom of safety precautions in order to release products before competitors), and the use of AI in weapon systems.

The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short term, or that concerns about AGI distract from other issues related to current AI. Former Google fraud czar Shuman Ghosemajumder considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.

Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God. Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and a way to inflate interest in their products.

In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Mass unemployment

Researchers from OpenAI estimated in 2023 that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted". They consider office workers to be the most exposed, for example mathematicians, accountants or web designers. Compared with such systems, AGI would have greater autonomy and a greater ability to make decisions, to interface with other computer tools, and to control robotized bodies. A common belief among top AI company insiders is that most workers will face technological unemployment from AGI, starting with white-collar jobs and, as robotics improves, extending to blue-collar jobs. Critics of the idea argue that AGI will complement rather than replace humans, and that automation displaces work in the short term but not in the long term.

According to Stephen Hawking, the outcome of automation on the quality of life will depend on how the wealth will be redistributed:

Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Elon Musk argued in 2021 that the automation of society will require governments to adopt a universal basic income (UBI). Hinton similarly advised the UK government in 2025 to adopt a UBI as a response to AI-induced unemployment. In 2023, Hinton said "I'm a socialist [...] I think that private ownership of the media, and of the 'means of computation', is not good."

Effects of ionizing radiation in spaceflight

From Wikipedia, the free encyclopedia

The Phantom Torso, as seen here in ...