
Monday, September 10, 2018

Group selection

From Wikipedia, the free encyclopedia
 
Early explanations of social behavior, such as the lekking of blackcock, spoke of "the good of the species". Blackcocks at the Lek, watercolour and bodycolour by Archibald Thorburn, 1901.

Group selection is a proposed mechanism of evolution in which natural selection acts at the level of the group, instead of at the more conventional level of the individual.

Early authors such as V. C. Wynne-Edwards and Konrad Lorenz argued that the behavior of animals could affect their survival and reproduction as groups, speaking for instance of actions for the good of the species. From the mid 1960s, evolutionary biologists such as John Maynard Smith argued that natural selection acted primarily at the level of the individual. They argued on the basis of mathematical models that individuals would not altruistically sacrifice fitness for the sake of a group. They persuaded the majority of biologists that group selection did not occur, other than in special situations such as the haplodiploid social insects like honeybees (in the Hymenoptera), where kin selection was possible.

In 1994 David Sloan Wilson and Elliott Sober argued for multi-level selection, including group selection, on the grounds that groups, like individuals, could compete. In 2010 three authors including E. O. Wilson, known for his work on social insects especially ants, again revisited the arguments for group selection. They argued that group selection can occur when competition between two or more groups, some containing altruistic individuals who act cooperatively together, is more important for survival than competition between individuals within each group. Their proposals provoked a strong rebuttal from a large group of evolutionary biologists.

As yet, there is no clear consensus among biologists regarding the importance of group selection. Steven Pinker expressed skepticism about the theory: "Human beings live in groups, are affected by the fortunes of their groups, and sometimes make sacrifices that benefit their groups. Does this mean that the human brain has been shaped by natural selection to promote the welfare of the group in competition with other groups, even when it damages the welfare of the person and his or her kin?... I think that this reasonableness is an illusion. The more carefully you think about group selection, the less sense it makes, and the more poorly it fits the facts of human psychology and history." However, there is active debate among specialists in many fields of study, and it is possible that a theory of group selection can be modified to provide valuable explanations. Group selection could be useful for understanding the evolution of human culture, since humans form groups in a way that no other animal does; it may also be used to understand human history, and some researchers have used the framework to understand the development of human morality.

Early Developments

Charles Darwin developed the theory of evolution in his book On the Origin of Species. In The Descent of Man, he also made the first suggestion of group selection, proposing that the evolution of groups could affect the survival of individuals. He wrote, "If one man in a tribe... invented a new snare or weapon, the tribe would increase in number, spread, and supplant other tribes. In a tribe thus rendered more numerous there would always be a rather better chance of the birth of other superior and inventive members."

Once Darwinism had been accepted, animal behavior was glibly explained with unsubstantiated hypotheses about survival value, which was largely taken for granted. The naturalist Konrad Lorenz had argued loosely in books like On Aggression (1966) that animal behavior patterns were "for the good of the species", without actually studying survival value in the field; Richard Dawkins noted that Lorenz was a "'good of the species' man" so accustomed to group selection thinking that he did not realize his views "contravened orthodox Darwinian theory". The ethologist Niko Tinbergen praised Lorenz for his interest in the survival value of behavior, and naturalists enjoyed Lorenz's writings for the same reason. In 1962, group selection was used as a popular explanation for adaptation by the zoologist V. C. Wynne-Edwards. In 1976, Richard Dawkins wrote a well-known book on the importance of evolution at the level of the gene or the individual, The Selfish Gene.

Social behavior in honeybees is explained by kin selection: their haplodiploid inheritance system makes workers very closely related to their queen (centre).

From the mid 1960s, evolutionary biologists argued that natural selection acted primarily at the level of the individual. In 1964, John Maynard Smith and C. M. Perrins, and later George C. Williams in his 1966 book Adaptation and Natural Selection, cast serious doubt on group selection as a major mechanism of evolution; Williams's 1971 book Group Selection assembled writings from many authors on the same theme.

It was at that time generally agreed that the primary apparent exception, social behavior in the social insects, could be explained by the unique inheritance system (involving haplodiploidy) of the eusocial Hymenoptera such as honeybees, which encourages kin selection, since workers are closely related.

Kin selection and inclusive fitness theory

Early group selection models assumed that genes acted independently, for example a gene that coded for cooperation or altruism. Genetically-based reproduction of individuals implies that, in group formation, the altruistic genes would need a way to act for the benefit of members in the group to enhance the fitness of many individuals with the same gene. But on this model, individuals of the same species would be expected to compete against each other for the same resources, putting cooperating individuals at a disadvantage, so that genes for cooperation would tend to be eliminated. Group selection at the level of the species is flawed because it is difficult to see how selective pressures would be applied to competing or non-cooperating individuals.

Experiments from the late 1970s suggested that selection involving groups was possible. Kin selection between related individuals is accepted as an explanation of altruistic behavior: genetically related individuals cooperate because survival advantages to one individual also benefit kin who share some fraction of the same genes, providing a mechanism by which genes for altruism can be favored.

Inclusive fitness theory, first proposed by W. D. Hamilton in the early 1960s, gives a selection criterion for the evolution of social traits when social behavior is costly to an individual organism's survival and reproduction. Such behavior can evolve when there is a sufficient statistical likelihood that its benefits accrue to the survival and reproduction of other organisms that also carry the social trait. Inclusive fitness theory is a general treatment of the statistical probability of social traits benefiting any other organisms likely to propagate a copy of the same trait; kin selection theory treats the narrower but simpler case of benefits to close genetic relatives (or what biologists call 'kin') who may also carry and propagate the trait. A significant group of biologists support inclusive fitness as the explanation for social behavior in a wide range of species, as supported by experimental data; a defense of the theory co-signed by over a hundred authors was published in Nature.

One question about kin selection is the requirement for kin recognition: must individuals know whether other individuals are related to them, given that any altruistic act has to preferentially benefit carriers of similar genes? One argument given by Hamilton is that many individuals operate in "viscous" conditions, living in physical proximity to relatives, so that they can act altruistically toward any nearby individual, who is likely to be related. This population structure builds a continuum between individual selection, kin selection, kin group selection and group selection without a clear boundary for each level. However, early theoretical models by D. S. Wilson et al. and Taylor showed that pure population viscosity cannot lead to cooperation and altruism, because any benefit generated by kin cooperation is exactly cancelled out by kin competition: the additional offspring from cooperation are eliminated by local competition. Mitteldorf and D. S. Wilson later showed that if the population is allowed to fluctuate, then local populations can temporarily store the benefit of local cooperation and promote the evolution of cooperation and altruism. By assuming individual differences in adaptations, Yang further showed that the benefit of local altruism can be stored in the form of offspring quality and thus promote the evolution of altruism even if the population does not fluctuate, because local competition among the larger number of individuals produced by local altruism raises the average fitness of those that survive.

Another explanation for the recognition of genes for altruism is that a single trait, group reciprocal kindness, is capable of explaining the vast majority of altruism that is generally accepted as "good" by modern societies. The phenotype of altruism relies on recognition of the altruistic behavior itself: the trait of kindness will be recognized by sufficiently intelligent and undeceived organisms in other individuals with the same trait. Moreover, the existence of such a trait predicts a tendency for kindness toward unrelated organisms that are apparently kind, even organisms of a completely different species. The gene need not be exactly the same, so long as the effect or phenotype is similar; multiple versions of the gene, or even meme, would have virtually the same effect. This explanation was given by Richard Dawkins as an analogy of a man with a green beard: green-bearded men tend to cooperate with each other simply on seeing a green beard, where the green-beard trait is incidentally linked to the reciprocal-kindness trait.

Multilevel selection theory

Kin selection or inclusive fitness is accepted as an explanation for cooperative behavior in many species, but some behavior, including some human behavior, is difficult to explain with this approach alone. In particular, it does not seem to explain the (relatively) rapid rise of human civilization. David Sloan Wilson has argued that other factors must also be considered in evolution. Since the 1990s, group selection models have seen a resurgence and further refinement.

Early group selection models were flawed because they assumed that genes acted independently; but genetically-based interactions among individuals are ubiquitous in group formation, because genes must cooperate for the benefit of association in groups to enhance the fitness of group members. Additionally, group selection at the level of the species is flawed because it is difficult to see how selective pressures would be applied; selection in social species of groups against other groups, rather than of the species as a whole, seems to be the level at which selective pressures are plausible. On the other hand, kin selection is accepted as an explanation of altruistic behavior. Some biologists argue that kin selection and multilevel selection are both needed to "obtain a complete understanding of the evolution of a social behavior system".

In 1994 David Sloan Wilson and Elliott Sober argued that the case against group selection had been overstated. They considered whether groups can have functional organization in the same way as individuals, and consequently whether groups can be "vehicles" for selection. They do not posit evolution on the level of the species, but selective pressures that winnow out small groups within a species, e.g. groups of social insects or primates. Groups that cooperate better might survive and reproduce more than those that did not. Resurrected in this way, Wilson & Sober's new group selection is called multilevel selection theory.

In 2010, M. A. Nowak, C. E. Tarnita and E. O. Wilson argued for multi-level selection, including group selection, to correct what they saw as deficits in the explanatory power of inclusive fitness. The response was a backlash from 137 other evolutionary biologists who argued "that their arguments are based upon a misunderstanding of evolutionary theory and a misrepresentation of the empirical literature".

David Sloan Wilson and Elliott Sober's 1994 Multilevel Selection Model, illustrated by a nested set of Russian matryoshka dolls. Wilson himself compared his model to such a set.

Wilson compared the layers of competition and evolution to nested sets of Russian matryoshka dolls. The lowest level is the genes, next come the cells, then the organism level and finally the groups. The different levels function cohesively to maximize fitness, or reproductive success. The theory asserts that selection for the group level, involving competition between groups, must outweigh the individual level, involving individuals competing within a group, for a group-benefiting trait to spread.

Peter Turchin uses the same analogy of matryoshka dolls, but uses it to show the stepwise increase in size of social groups, from villages, tribes, regional societies, to nations, and finally to empires or civilizations. He points out that humans have the capability to consider themselves as a member of the entire range of affiliations, like the smallest doll inside the entire assembly of dolls.

Multilevel selection theory focuses on the phenotype because it looks at the levels that selection directly acts upon. For humans, social norms can be argued to reduce individual level variation and competition, thus shifting selection to the group level. The assumption is that variation between different groups is larger than variation within groups. Competition and selection can operate at all levels regardless of scale. Wilson wrote, "At all scales, there must be mechanisms that coordinate the right kinds of action and prevent disruptive forms of self-serving behavior at lower levels of social organization." E. O. Wilson summarized, "In a group, selfish individuals beat altruistic individuals. But, groups of altruistic individuals beat groups of selfish individuals."
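
Wilson's condition can be illustrated with a toy simulation. The sketch below (in Python) is not a model from the literature: the group count, group size, benefit B and cost C are all made-up assumptions. Within each group, altruists reproduce less than their selfish group-mates; between groups, altruist-rich groups found more daughter groups. Whether altruism spreads depends on the balance of the two levels, which can be explored by varying B and C.

import random

# Two-level selection sketch; all parameters are illustrative assumptions.
GROUPS, SIZE = 30, 40
B, C = 0.9, 0.1          # hypothetical group benefit / individual cost

def next_generation(groups):
    broods, outputs = [], []
    for altruists in groups:
        frac = altruists / SIZE
        kinds = [1] * altruists + [0] * (SIZE - altruists)
        # Within-group selection: altruists pay cost C; everyone gains B*frac.
        weights = [1 + B * frac - (C if k else 0) for k in kinds]
        broods.append(random.choices(kinds, weights=weights, k=SIZE))
        outputs.append(1 + B * frac)          # group-level productivity
    # Between-group selection: productive groups found more daughter groups.
    parents = random.choices(range(len(groups)), weights=outputs, k=GROUPS)
    return [sum(broods[p]) for p in parents]

groups = [random.randint(0, SIZE) for _ in range(GROUPS)]
for _ in range(100):
    groups = next_generation(groups)
print("final altruist fraction:", sum(groups) / (GROUPS * SIZE))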

Wilson ties the multilevel selection theory regarding humans to another theory, gene-culture coevolution, by acknowledging that culture seems to characterize a group-level mechanism for human groups to adapt to environmental changes.

MLS theory can be used to evaluate the balance between group selection and individual selection in specific cases. An experiment by William Muir compared egg productivity in hens, showing that a hyper-aggressive strain had been produced through individual selection, leading to many fatal attacks after only six generations; by implication, it could be argued that group selection must have been acting to prevent this in real life. Group selection has most often been postulated in humans and, notably, eusocial Hymenoptera that make cooperation a driving force of their adaptations over time and have a unique system of inheritance involving haplodiploidy that allows the colony to function as an individual while only the queen reproduces.

Wilson and Sober's work revived interest in multilevel selection. In a 2005 article, E. O. Wilson argued that kin selection could no longer be thought of as underlying the evolution of extreme sociality, for two reasons. First, he suggested, the argument that haplodiploid inheritance (as in the Hymenoptera) creates a strong selection pressure towards nonreproductive castes is mathematically flawed. Second, eusociality no longer seems to be confined to the hymenopterans; increasing numbers of highly social taxa have been found in the years since Wilson's foundational text Sociobiology: A New Synthesis was published in 1975. These include a variety of insect species, as well as two rodent species (the naked mole-rat and the Damaraland mole-rat). Wilson suggests that the equation for Hamilton's rule:
rb > c
(where b represents the benefit to the recipient of altruism, c the cost to the altruist, and r their degree of relatedness) should be replaced by the more general equation
rb_k + b_e > c
in which b_k is the benefit to kin (b in the original equation) and b_e is the benefit accruing to the group as a whole. He then argues that, in the present state of the evidence in relation to social insects, it appears that b_e > rb_k, so that altruism needs to be explained in terms of selection at the colony level rather than at the kin level. However, kin selection and group selection are not distinct processes, and the effects of multi-level selection are already accounted for in Hamilton's rule, rb > c, provided that an expanded definition of r, not requiring Hamilton's original assumption of direct genealogical relatedness, is used, as proposed by E. O. Wilson himself.
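
The two criteria can be compared numerically. The following sketch is illustrative only: the values of r, b_k, b_e and c are invented, chosen so that the classic rule fails while the extended rule holds.

def hamilton(r, b, c):
    # Classic Hamilton's rule: altruism is favored when r*b > c.
    return r * b > c

def wilson_extended(r, b_k, b_e, c):
    # E. O. Wilson's extension: r*b_k + b_e > c, where b_k is the benefit
    # to kin and b_e the benefit accruing to the whole group.
    return r * b_k + b_e > c

# Illustrative numbers (assumptions, not measured values): with low
# relatedness the classic rule fails, but a colony-level benefit b_e
# can still tip the balance.
print(hamilton(r=0.05, b=2.0, c=0.5))                      # False
print(wilson_extended(r=0.05, b_k=2.0, b_e=0.6, c=0.5))    # True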

Spatial populations of predators and prey show restraint of reproduction at equilibrium, both individually and through social communication, as originally proposed by Wynne-Edwards. While these spatial populations do not have well-defined groups for group selection, the local spatial interactions of organisms in transient groups are sufficient to lead to a kind of multi-level selection. There is however as yet no evidence that these processes operate in the situations where Wynne-Edwards posited them.

Rauch et al.'s analysis of a host-parasite situation, which was recognised as one where group selection was possible even by E. O. Wilson (1975), is broadly hostile to the whole idea of group selection. Specifically, the parasites do not individually moderate their transmission; rather, more transmissible variants "continually arise and grow rapidly for many generations but eventually go extinct before dominating the system."

Applications

Differing evolutionarily stable strategies

The problem with group selection is that for a whole group to get a single trait, it must spread through the whole group first by regular evolution. But, as J. L. Mackie suggested, when there are many different groups, each with a different evolutionarily stable strategy, there is selection between the different strategies, since some are worse than others. For example, a group where altruism was universal would indeed outcompete a group where every creature acted in its own interest, so group selection might seem feasible; but a mixed group of altruists and non-altruists would be vulnerable to cheating by non-altruists within the group, so group selection would collapse.

Implications in population biology

Social behaviors such as altruism and group relationships can impact many aspects of population dynamics, such as intraspecific competition and interspecific interactions. In 1871, Darwin argued that group selection occurs when the benefits of cooperation or altruism between subpopulations are greater than the individual benefits of egotism within a subpopulation. This supports the idea of multilevel selection, but kinship also plays an integral role because many subpopulations are composed of closely related individuals. An example of this can be found in lions, which are simultaneously cooperative and territorial. Within a pride, males protect the pride from outside males, and females, who are commonly sisters, communally raise cubs and hunt. However, this cooperation seems to be density dependent. When resources are limited, group selection favors prides that work together to hunt. When prey is abundant, cooperation is no longer beneficial enough to outweigh the disadvantages of altruism, and hunting is no longer cooperative.

Interactions between different species, for example predator-prey relationships, can also be affected by multilevel selection. Individuals of certain monkey species howl to warn the group of the approach of a predator. The evolution of this trait benefits the group by providing protection, but could be disadvantageous to the individual if the howling draws the predator's attention to them. By affecting these interspecific interactions, multilevel and kinship selection can change the population dynamics of an ecosystem.

Multilevel selection attempts to explain the evolution of altruistic behavior in terms of quantitative genetics. Increased frequency or fixation of altruistic alleles can be accomplished through kin selection, in which individuals engage in altruistic behavior to promote the fitness of genetically similar individuals such as siblings. However, this can lead to inbreeding depression, which typically lowers the overall fitness of a population. If altruism were instead selected for through an emphasis on benefit to the group, as opposed to relatedness and benefit to kin, both the altruistic trait and genetic diversity could be preserved. Even so, relatedness should remain a key consideration in studies of multilevel selection: experimentally imposed multilevel selection on Japanese quail was more effective by an order of magnitude on closely related kin groups than on randomized groups of individuals.

Gene-culture coevolution in humans

Humanity has developed extremely rapidly, arguably through gene-culture coevolution, leading to complex cultural artefacts like the gopuram of the Sri Mariamman Temple, Singapore.

Gene-culture coevolution (also called dual inheritance theory) is a modern hypothesis (applicable mostly to humans) that combines evolutionary biology and modern sociobiology to indicate group selection. It treats culture as a separate evolutionary system that acts in parallel to the usual genetic evolution to transform human traits. This combination of genetic influence with cultural influence over several generations is believed to be absent from other hypotheses such as reciprocal altruism and kin selection, making gene-culture coevolution one of the strongest realistic hypotheses for group selection. Fehr provides evidence of group selection taking place in humans presently, through experiments with logic games such as the prisoner's dilemma, which draw on types of thinking that humans developed many generations ago.

Gene-culture coevolution allows humans to develop highly distinct adaptations to local pressures and environments more quickly than with genetic evolution alone. Robert Boyd and Peter J. Richerson, two strong proponents of cultural evolution, postulate that social learning, the group-level learning invoked by group selection, allows human populations to accrue information over many generations. This leads to cultural evolution of behaviors and technology alongside genetic evolution. Boyd and Richerson believe that the ability to collaborate evolved during the Middle Pleistocene, a million years ago, in response to a rapidly changing climate.

In 2003, Herbert Gintis examined cultural evolution statistically, offering evidence that societies that promote pro-social norms have higher survival rates than societies that do not.

Gintis wrote that genetic and cultural evolution can work together. Genes transfer information in DNA, and cultures transfer information encoded in brains, artifacts, or documents. Language, tools, lethal weapons, fire, cooking, etc., have a long-term effect on genetics. For example, cooking led to a reduction of size of the human gut, since less digestion is needed for cooked food. Language led to a change in the human larynx and an increase in brain size. Projectile weapons led to changes in human hands and shoulders, such that humans are much better at throwing objects than the closest human relative, the chimpanzee.

Criticism

The use of the Price equation to support group selection was challenged by van Veelen in 2012, arguing that it is based on invalid mathematical assumptions.

Richard Dawkins and other advocates of the gene-centered view of evolution remain unconvinced about group selection. In particular, Dawkins suggests that group selection fails to make an appropriate distinction between replicators and vehicles.

The psychologist Steven Pinker concluded that "group selection has no useful role to play in psychology or social science", as it "is not a precise implementation of the theory of natural selection, as it is, say, in genetic algorithms or artificial life simulations. Instead it is a loose metaphor, more like the struggle among kinds of tires or telephones."

The evolutionary biologist Jerry Coyne summarized the arguments in The New York Times in non-technical terms as follows:
Group selection isn't widely accepted by evolutionists for several reasons. First, it's not an efficient way to select for traits, like altruistic behavior, that are supposed to be detrimental to the individual but good for the group. Groups divide to form other groups much less often than organisms reproduce to form other organisms, so group selection for altruism would be unlikely to override the tendency of each group to quickly lose its altruists through natural selection favoring cheaters. Further, little evidence exists that selection on groups has promoted the evolution of any trait. Finally, other, more plausible evolutionary forces, like direct selection on individuals for reciprocal support, could have made humans prosocial. These reasons explain why only a few biologists, like [David Sloan] Wilson and E. O. Wilson (no relation), advocate group selection as the evolutionary source of cooperation.

Evolutionary algorithm

From Wikipedia, the free encyclopedia
 
In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place through the repeated application of the above operators.

Evolutionary algorithms often perform well approximating solutions to all types of problems because they ideally do not make any assumptions about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor, and this cost is dominated by fitness function evaluation; fitness approximation is one way to overcome the difficulty. However, a seemingly simple EA can often solve quite complex problems, so there may be no direct link between algorithm complexity and problem complexity.

Implementation

Step One: Generate the initial population of individuals randomly. (First generation)

Step Two: Evaluate the fitness of each individual in that population.

Step Three: Repeat the following regenerational steps until termination (time limit, sufficient fitness achieved, etc.); a minimal code sketch follows the list:
  1. Select the best-fit individuals for reproduction. (Parents)
  2. Breed new individuals through crossover and mutation operations to give birth to offspring.
  3. Evaluate the individual fitness of new individuals.
  4. Replace least-fit population with new individuals.
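
A minimal, self-contained sketch of these steps follows, using a toy "one-max" fitness (counting 1-bits). The representation, parameters, and truncation-style parent selection are illustrative choices, not part of any canonical EA.

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 60
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)                       # toy objective: maximize 1s

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)    # one-point recombination
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Step One: generate the initial population randomly (first generation).
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):                 # Step Three: repeat until done
    population.sort(key=fitness, reverse=True)   # Step Two / 3: evaluate
    parents = population[:POP_SIZE // 2]         # 1. select the best-fit
    children = [mutate(crossover(*random.sample(parents, 2)))   # 2. breed
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children              # 4. replace the least fit

print("best fitness:", fitness(max(population, key=fitness)))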

Types

Similar techniques differ in genetic representation and other implementation details, and the nature of the particular applied problem.
  • Genetic algorithm – This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems.
  • Genetic programming – Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem.
  • Evolutionary programming – Similar to genetic programming, but the structure of the program is fixed and its numerical parameters are allowed to evolve.
  • Gene expression programming – Like genetic programming, GEP also evolves computer programs but it explores a genotype-phenotype system, where computer programs of different sizes are encoded in linear chromosomes of fixed length.
  • Evolution strategy – Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates.
  • Differential evolution – Based on vector differences and is therefore primarily suited for numerical optimization problems.
  • Neuroevolution – Similar to genetic programming but the genomes represent artificial neural networks by describing structure and connection weights. The genome encoding can be direct or indirect.
  • Learning classifier system – Here the solution is a set of classifiers (rules or conditions). A Michigan-LCS evolves at the level of individual classifiers whereas a Pittsburgh-LCS uses populations of classifier-sets. Initially, classifiers were only binary, but now include real, neural net, or S-expression types. Fitness is typically determined with either a strength or accuracy based reinforcement learning or supervised learning approach.

Comparison to biological processes

A possible limitation of many evolutionary algorithms is their lack of a clear genotype-phenotype distinction. In nature, the fertilized egg cell undergoes a complex process known as embryogenesis to become a mature phenotype. This indirect encoding is believed to make the genetic search more robust (i.e. to reduce the probability of fatal mutations), and it may also improve the evolvability of the organism. Such indirect (a.k.a. generative or developmental) encodings also enable evolution to exploit the regularity in the environment. Recent work in the field of artificial embryogeny, or artificial developmental systems, seeks to address these concerns. Gene expression programming successfully explores a genotype-phenotype system, where the genotype consists of linear multigenic chromosomes of fixed length and the phenotype consists of multiple expression trees or computer programs of different sizes and shapes.

Related techniques

Swarm algorithms include ant colony optimization and particle swarm optimization.

Other population-based metaheuristic methods

  • Hunting search – A method inspired by the group hunting of some animals, such as wolves, that organize their positions to surround the prey, each relative to the position of the others and especially that of the leader. It is a continuous optimization method, later adapted for combinatorial optimization.
  • Adaptive dimensional search – Unlike nature-inspired metaheuristic techniques, an adaptive dimensional search algorithm does not implement any metaphor as an underlying principle. Rather it uses a simple performance-oriented method, based on the update of the search dimensionality ratio (SDR) parameter at each iteration.
  • Firefly algorithm – Inspired by the behavior of fireflies, which attract each other with flashing light. This is especially useful for multimodal optimization.
  • Harmony search – Based on the ideas of musicians' behavior in searching for better harmonies. This algorithm is suitable for combinatorial optimization as well as parameter optimization.
  • Gaussian adaptation – Based on information theory. Used for maximization of manufacturing yield, mean fitness or average information. See for instance Entropy in thermodynamics and information theory.
  • Memetic algorithm – A hybrid method, inspired by Richard Dawkins' notion of a meme, it commonly takes the form of a population-based algorithm coupled with individual learning procedures capable of performing local refinements. Emphasizes the exploitation of problem-specific knowledge, and tries to orchestrate local and global search in a synergistic way.

Examples

The computer simulations Tierra and Avida attempt to model macroevolutionary dynamics.


Genetic algorithm

From Wikipedia, the free encyclopedia
 
The 2006 NASA ST5 spacecraft antenna. This complicated shape was found by an evolutionary computer design program to create the best radiation pattern. It is known as an evolved antenna.

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.

Methodology

Optimization problems

In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.

The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.

A typical genetic algorithm requires:
  1. a genetic representation of the solution domain,
  2. a fitness function to evaluate the solution domain.
A standard representation of each candidate solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
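
The alignment point can be made concrete with a short sketch: because the arrays have a fixed size, the same one-point crossover works unchanged for any element type. The lengths and values below are arbitrary examples.

import random

def one_point_crossover(a, b):
    # Fixed, equal lengths keep loci aligned, so one cut point suffices.
    assert len(a) == len(b)
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

# The same operator works for bit arrays and for arrays of other types.
print(one_point_crossover([0, 1, 1, 0, 1], [1, 1, 0, 0, 0]))
print(one_point_crossover([0.2, 1.5, 3.1, 0.9], [2.2, 0.4, 1.0, 5.5]))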

Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.

Initialization

The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

Selection

During each successive generation, a portion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
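
One common fitness-proportionate scheme is roulette-wheel selection, sketched below; the population and fitness values are made-up examples, and the scheme assumes non-negative fitnesses.

import random

def roulette_select(population, fitnesses, n_parents):
    # random.choices samples with probability proportional to the weights,
    # so fitter solutions are chosen more often, but not exclusively.
    return random.choices(population, weights=fitnesses, k=n_parents)

pop = ["A", "B", "C", "D"]
fit = [10.0, 5.0, 1.0, 0.5]          # made-up fitness ratings
print(roulette_select(pop, fit, n_parents=4))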

The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
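
The knapsack fitness just described translates directly into code. The item values, weights, and capacity below are invented example data.

values   = [60, 100, 120, 30]        # invented example data
weights  = [10, 20, 30, 5]
capacity = 50

def knapsack_fitness(bits):
    # Sum of packed values if the weight limit holds, otherwise 0 (invalid).
    total_weight = sum(w for w, b in zip(weights, bits) if b)
    if total_weight > capacity:
        return 0
    return sum(v for v, b in zip(values, bits) if b)

print(knapsack_fitness([1, 1, 1, 0]))   # weight 60 > 50: invalid, fitness 0
print(knapsack_fitness([1, 1, 0, 1]))   # weight 35 <= 50: fitness 190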

In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.

Genetic operators

The next step is to generate a second generation population of solutions from those selected through a combination of genetic operators: crossover (also called recombination), and mutation.

For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes.

These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.

Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.

Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.

It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed.

Heuristics

In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.

Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are listed below; a sketch of a combined check follows the list:
  • A solution is found that satisfies minimum criteria
  • Fixed number of generations reached
  • Allocated budget (computation time/money) reached
  • The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
  • Manual inspection
  • Combinations of the above
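
A sketch of such a combined check is shown below; all thresholds are illustrative assumptions, and history is taken to hold the best fitness seen in each generation.

import time

def should_terminate(history, generation, start_time,
                     target=1.0, max_generations=500,
                     budget_seconds=60.0, plateau=25):
    if history and history[-1] >= target:            # minimum criteria met
        return True
    if generation >= max_generations:                # generation cap
        return True
    if time.time() - start_time > budget_seconds:    # computation budget
        return True
    # Plateau: no improvement over the last `plateau` generations.
    if len(history) > plateau and history[-1] <= history[-1 - plateau]:
        return True
    return False

print(should_terminate([0.2, 0.4, 0.9], generation=3, start_time=time.time()))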

The building block hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
  1. A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness.
  2. A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.
Goldberg describes the heuristic as follows:
"Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.
"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."
Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as a reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.

Limitations

There are limitations of the use of a genetic algorithm compared to alternative optimization algorithms:
  • Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such problems. In such cases, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. Combining approximate models may thus be one of the most promising approaches to using GAs convincingly on complex real-life problems.
  • Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
  • The "better" solution is only in comparison to other solutions. As a result, the stop criterion is not clear in every problem.
  • In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (niche radius) have a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.
  • Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
  • GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
  • For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches.

Variants

Chromosome representation

The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered, but it is really a misnomer because it does not truly reflect the building block theory proposed by John Henry Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results. The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains. When bit-string representations of integers are used, Gray coding is often employed: small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
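
The Gray coding idea can be shown in a few lines: consecutive integers differ in exactly one bit under Gray code, so a single mutation can step the decoded value by one instead of having to cross a Hamming wall. The conversion routines below are the standard ones.

def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:          # undo the XOR cascade bit by bit
        n ^= g
        g >>= 1
    return n

# Consecutive integers differ in exactly one Gray-coded bit.
for i in range(8):
    print(i, format(to_gray(i), "03b"), from_gray(to_gray(i)))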

Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained by viewing the set of real values in a finite population of chromosomes as forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.

An expansion of the genetic algorithm's accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome. This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.

Elitism

A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.
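
A sketch of elitist replacement follows; the number of elites and the placeholder breeding step are illustrative assumptions.

import random

def next_population(population, fitness, breed, n_elite=2):
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:n_elite]                   # carried over unaltered
    children = [breed(ranked) for _ in range(len(population) - n_elite)]
    return elites + children                    # the best are never lost

# Toy demo: integers as "genomes", identity fitness, noisy copying.
pop = [random.randint(0, 100) for _ in range(10)]
noisy_copy = lambda ranked: random.choice(ranked) + random.randint(-5, 5)
pop = next_population(pop, fitness=lambda x: x, breed=noisy_copy)
print(sorted(pop, reverse=True))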

Parallel implementations

Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.

Adaptive GAs

Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. In CAGA (clustering-based adaptive genetic algorithm), clustering analysis is used to judge the optimization state of the population, and the adjustment of pc and pm depends on that state. It can be quite effective to combine GA with other optimization methods. GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations needed to reach the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA while overcoming the lack of robustness of hill climbing.
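
The flavor of the AGA idea can be sketched as below. The linear update rule and all constants are illustrative assumptions, not the published AGA or CAGA formulas: as the population converges (average fitness approaches the maximum), mutation is raised and crossover lowered to restore diversity.

def adaptive_rates(fitnesses, pc_range=(0.5, 0.9), pm_range=(0.001, 0.05)):
    f_max = max(fitnesses)
    f_avg = sum(fitnesses) / len(fitnesses)
    spread = (f_max - f_avg) / f_max if f_max else 1.0   # 0 when converged
    # Diverse population: favor crossover. Converged: favor mutation.
    pc = pc_range[0] + (pc_range[1] - pc_range[0]) * spread
    pm = pm_range[1] - (pm_range[1] - pm_range[0]) * spread
    return pc, pm

print(adaptive_rates([9.8, 9.9, 10.0]))   # converged: low pc, high pm
print(adaptive_rates([2.0, 5.0, 10.0]))   # diverse: higher pc, lower pm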

In the natural case, the rules of genetic variation may also have a different meaning. For instance, provided that steps are stored in consecutive order, crossing over may sum a number of steps from maternal DNA and a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order, or any other suitable order, in favour of survival or efficiency.

A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.

A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA and LLGA.

Problem domains

Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.

Genetic algorithms have also been applied to evolving neural networks. In particular recurrent neural networks which are difficult to train with back propagation can be evolved using GAs.

As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).

Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, walking methods for computer figures, optimal design of aerodynamic bodies in complex flowfields.
 
In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task:
[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.

[...]

I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs.
— Steven Skiena

History

In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).

Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game, artificial evolution became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when the First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.

Commercial products

In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes. In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version.

Related techniques

Parent fields

Genetic algorithms are a sub-field of:
  • Evolutionary algorithms
  • Evolutionary computing
  • Metaheuristics
  • Stochastic optimisation
  • Optimisation

Related fields

Evolutionary algorithms

Evolutionary algorithms are a sub-field of evolutionary computing.
  • Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust control parameters of the search (a minimal step-size-adaptive ES appears in the sketch after this list). De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
  • Evolutionary programming (EP) involves populations of solutions, with primarily mutation and selection and arbitrary representations. EP methods use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
  • Estimation of distribution algorithm (EDA) replaces traditional reproduction operators with model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as probabilistic graphical models, from which new solutions can be sampled or generated through guided crossover.
  • Gene expression programming (GEP) also uses populations of computer programs. These complex computer programs are encoded in simpler linear chromosomes of fixed length, which are afterwards expressed as expression trees. Expression trees or computer programs evolve because the chromosomes undergo mutation and recombination in a manner similar to the canonical GA. But thanks to the special organization of GEP chromosomes, these genetic modifications always result in valid computer programs.
  • Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms.
  • Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, as in classical GAs, to groups or subsets of items. The idea behind this GA evolution, proposed by Emanuel Falkenauer, is that solving some complex problems, such as clustering or partitioning problems where a set of items must be split into disjoint groups of items in an optimal way, is better achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth is arguably the best technique to date.
  • Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.
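As referenced in the evolution strategies entry above, the following is a minimal (1+1)-ES with Rechenberg's 1/5 success rule for step-size adaptation, applied to a toy sphere function (the objective, the adjustment factors 1.22/0.82 and the 50-trial window are illustrative assumptions):

    import random

    def sphere(x):
        """Toy real-valued objective: minimize the sum of squares."""
        return sum(v * v for v in x)

    random.seed(0)
    x = [random.uniform(-5, 5) for _ in range(5)]   # current parent
    sigma = 1.0                                     # mutation step size
    successes = 0

    for t in range(1, 1001):
        child = [v + random.gauss(0, sigma) for v in x]   # Gaussian mutation
        if sphere(child) < sphere(x):                     # (1+1) selection: keep the better
            x = child
            successes += 1
        if t % 50 == 0:                                   # 1/5 success rule: widen the
            rate = successes / 50                         # search when succeeding often,
            sigma *= 1.22 if rate > 0.2 else 0.82         # shrink it otherwise
            successes = 0

    print(round(sphere(x), 6))   # should be close to 0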

Swarm intelligence

Swarm intelligence is a sub-field of evolutionary computing.
  • Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas. Although it is sometimes considered a form of estimation of distribution algorithm, it is conventionally grouped with swarm intelligence methods.
  • Particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and by the swarm's global best known position (see the sketch below). Like genetic algorithms, the PSO method depends on information sharing among population members. For some problems PSO is more computationally efficient than GAs, especially unconstrained problems with continuous variables.
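A minimal PSO update loop on the same kind of toy objective (the inertia and acceleration coefficients W, C1, C2 below are common textbook defaults, assumed here for illustration):

    import random

    def sphere(x):
        return sum(v * v for v in x)

    random.seed(0)
    DIM, N = 3, 20
    W, C1, C2 = 0.7, 1.5, 1.5   # inertia, cognitive and social coefficients

    pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
    vel = [[0.0] * DIM for _ in range(N)]
    pbest = [p[:] for p in pos]               # each particle's best known position
    gbest = min(pbest, key=sphere)            # swarm's best known position

    for _ in range(200):
        for i in range(N):
            for d in range(DIM):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])   # pull toward own best
                             + C2 * r2 * (gbest[d] - pos[i][d]))     # pull toward swarm best
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sphere)

    print(round(sphere(gbest), 8))   # should approach 0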

Other evolutionary computing algorithms

Evolutionary computation is a sub-field of metaheuristic methods.
  • Memetic algorithm (MA), often called a hybrid genetic algorithm, among other names, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which, unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
  • Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is not one individual that fits the whole environment, so one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.
  • Cultural algorithm (CA) consists of the population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
  • Differential search algorithm (DS) is inspired by the migration of superorganisms.
  • Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency; its efficiency is defined as information divided by the work needed to get the information. Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian while simultaneously keeping the mean fitness constant.

Other metaheuristic methods

Metaheuristic methods broadly fall within stochastic optimisation methods.
  • Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically, based on the difference in fitness and a decreasing temperature parameter (see the acceptance-rule sketch after this list). In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
  • Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
  • Extremal optimization (EO): unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.
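The acceptance rule referred to in the simulated annealing entry above is the Metropolis criterion, which fits in a few lines (the initial temperature and the cooling factor below are illustrative assumptions):

    import math, random

    def accept(delta_energy, temperature):
        """Metropolis rule: always accept improvements; accept worsenings
        with probability exp(-delta_E / T), which shrinks as T cools."""
        if delta_energy <= 0:          # lower energy = better solution
            return True
        return random.random() < math.exp(-delta_energy / temperature)

    # Cooling schedule: high T early (near-random walk), low T late (greedy).
    T = 10.0
    for step in range(5):
        print(f"T={T:.2f}  P(accept worsening of 1) = {math.exp(-1 / T):.3f}")
        T *= 0.5

At high temperature almost any move is accepted; as the temperature falls, the rule approaches pure hill climbing.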

Other stochastic optimisation methods

  • The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration (see the sketch after this list).
  • Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics.
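As referenced in the cross-entropy entry above, a minimal CE loop for a continuous toy objective fits in a dozen lines (the Gaussian sampling distribution, the sample size of 100 and the 20% elite fraction are illustrative assumptions):

    import random, statistics

    def score(x):
        """Toy objective to maximize; peak at x = 3."""
        return -(x - 3.0) ** 2

    random.seed(0)
    mu, sigma = 0.0, 5.0                      # parameters of the sampling distribution
    for iteration in range(20):
        samples = [random.gauss(mu, sigma) for _ in range(100)]
        elites = sorted(samples, key=score, reverse=True)[:20]   # top 20%
        mu = statistics.mean(elites)          # refit the distribution to the elites
        sigma = statistics.stdev(elites) + 1e-6

    print(round(mu, 3))                       # should converge near 3.0

Refitting a Gaussian to the elite samples is exactly the cross-entropy-minimizing update for this family of distributions, which is where the method gets its name.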
