
Monday, April 28, 2025

Comparative genomics

From Wikipedia, the free encyclopedia
Whole genome alignment is a typical method in comparative genomics. This alignment of eight Yersinia bacteria genomes reveals 78 locally collinear blocks conserved among all eight taxa. Each chromosome has been laid out horizontally and homologous blocks in each genome are shown as identically colored regions linked across genomes. Regions that are inverted relative to Y. pestis KIM are shifted below a genome's center axis.

Comparative genomics is a branch of biological research that examines genome sequences across a spectrum of species, spanning from humans and mice to a diverse array of organisms from bacteria to chimpanzees. This large-scale holistic approach compares two or more genomes to discover the similarities and differences between the genomes and to study the biology of the individual genomes.[4] Comparison of whole genome sequences provides a highly detailed view of how organisms are related to each other at the gene level. By comparing whole genome sequences, researchers gain insights into genetic relationships between organisms and study evolutionary changes.[2] The major principle of comparative genomics is that common features of two organisms will often be encoded within the DNA that is evolutionarily conserved between them. Therefore, comparative genomics provides a powerful tool for studying evolutionary changes among organisms, helping to identify genes that are conserved or common among species, as well as genes that give each organism its unique characteristics. Moreover, these studies can be performed at different levels of the genomes to obtain multiple perspectives about the organisms.[4]

The comparative genomic analysis begins with a simple comparison of the general features of genomes such as genome size, number of genes, and chromosome number. Table 1 presents data on several fully sequenced model organisms and highlights some striking findings. For instance, while the tiny flowering plant Arabidopsis thaliana has a smaller genome than that of the fruit fly Drosophila melanogaster (157 million base pairs vs. 165 million base pairs, respectively), it possesses nearly twice as many genes (25,000 vs. 13,000). In fact, A. thaliana has approximately the same number of genes as humans (25,000). Thus, a very early lesson learned in the genomic era is that genome size does not correlate with evolutionary status, nor is the number of genes proportionate to genome size.[5]

Table 1: Comparative genome sizes of humans and other model organisms[2]
Organism | Estimated size (base pairs) | Chromosome number | Estimated gene number
Human (Homo sapiens) | 3.1 billion | 46 | 25,000
Mouse (Mus musculus) | 2.9 billion | 40 | 25,000
Bovine (Bos taurus) | 2.86 billion[6] | 60[7] | 22,000[8]
Fruit fly (Drosophila melanogaster) | 165 million | 8 | 13,000
Plant (Arabidopsis thaliana) | 157 million | 10 | 25,000
Roundworm (Caenorhabditis elegans) | 97 million | 12 | 19,000
Yeast (Saccharomyces cerevisiae) | 12 million | 32 | 6,000
Bacteria (Escherichia coli) | 4.6 million | 1 | 3,200
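To make the table's point concrete, here is a minimal Python sketch that computes gene density in genes per megabase from the table's estimates (the figures are rough and used only for illustration); it shows, for example, that E. coli packs hundreds of genes per Mb while the human genome carries fewer than ten:

```python
# Gene density (genes per Mb) computed from the table's estimates.
# Values are approximate and serve only to illustrate that genome
# size does not predict gene number.
genomes = {
    "Homo sapiens":             (3_100_000_000, 25_000),
    "Mus musculus":             (2_900_000_000, 25_000),
    "Drosophila melanogaster":  (165_000_000, 13_000),
    "Arabidopsis thaliana":     (157_000_000, 25_000),
    "Caenorhabditis elegans":   (97_000_000, 19_000),
    "Saccharomyces cerevisiae": (12_000_000, 6_000),
    "Escherichia coli":         (4_600_000, 3_200),
}

for species, (size_bp, n_genes) in genomes.items():
    density = n_genes / (size_bp / 1_000_000)  # genes per megabase
    print(f"{species:26s} {density:8.1f} genes/Mb")
```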

In comparative genomics, synteny is the preserved order of genes on chromosomes of related species indicating their descent from a common ancestor. Synteny provides a framework in which the conservation of homologous genes and gene order is identified between genomes of different species.[9] Synteny blocks are more formally defined as regions of chromosomes between genomes that share a common order of homologous genes derived from a common ancestor.[10][11] Alternative names such as conserved synteny or collinearity have been used interchangeably.[12] Comparisons of genome synteny between and within species have provided an opportunity to study evolutionary processes that lead to the diversity of chromosome number and structure in many lineages across the tree of life;[13][14] early discoveries using such approaches include chromosomal conserved regions in nematodes and yeast,[15][16] evolutionary history and phenotypic traits of extremely conserved Hox gene clusters across animals and MADS-box gene family in plants,[17][18] and karyotype evolution in mammals and plants.[19]

Furthermore, comparing two genomes not only reveals conserved domains or synteny but also aids in detecting copy number variations, single nucleotide polymorphisms (SNPs), indels, and other genomic structural variations.

Comparative genomics began virtually as soon as the complete genomes of two organisms became available (that is, the genomes of the bacteria Haemophilus influenzae and Mycoplasma genitalium) in 1995, and it is now a standard component of the analysis of every new genome sequence.[2][20] With the explosion in the number of genome projects due to advancements in DNA sequencing technologies, particularly the next-generation sequencing methods of the late 2000s, the field has become more sophisticated, making it possible to handle many genomes in a single study.[21] Comparative genomics has revealed high levels of similarity between closely related organisms, such as humans and chimpanzees, and, more surprisingly, similarity between seemingly distantly related organisms, such as humans and the yeast Saccharomyces cerevisiae.[22] It has also shown the extreme diversity of gene composition in different evolutionary lineages.[20]

History

See also: History of genomics

Comparative genomics has its roots in the comparison of virus genomes in the early 1980s.[20] For example, small RNA viruses infecting animals (picornaviruses) and those infecting plants (cowpea mosaic virus) were compared and turned out to share significant sequence similarity and, in part, the order of their genes.[23] In 1986, the first larger-scale comparative genomic study was published, comparing the genomes of varicella-zoster virus and Epstein-Barr virus, each of which contains more than 100 genes.[24]

The first complete genome sequence of a cellular organism, that of Haemophilus influenzae Rd, was published in 1995.[25] The second genome sequencing paper was of the small parasitic bacterium Mycoplasma genitalium published in the same year.[26] Starting from this paper, reports on new genomes inevitably became comparative-genomic studies.[20]

Microbial genomes. The first high-resolution whole-genome comparison system for microbial genomes of 10–15 kbp was developed in 1998 by Art Delcher, Simon Kasif and Steven Salzberg, who applied it, with their collaborators at The Institute for Genomic Research (TIGR), to the comparison of entire, highly related microbial genomes. The system, called MUMmer, was described in a publication in Nucleic Acids Research in 1999. MUMmer helps researchers identify large rearrangements, single-base mutations, reversals, tandem repeat expansions and other polymorphisms. In bacteria, it enables the identification of polymorphisms responsible for virulence, pathogenicity, and antibiotic resistance. The system was also applied to the Minimal Organism Project at TIGR and subsequently to many other comparative genomics projects.

Eukaryote genomes. Saccharomyces cerevisiae, the baker's yeast, was the first eukaryote to have its complete genome sequence published, in 1996.[27] After the publication of the roundworm Caenorhabditis elegans genome in 1998[15] and of the fruit fly Drosophila melanogaster genome in 2000,[28] Gerald M. Rubin and his team published a paper titled "Comparative Genomics of the Eukaryotes", in which they compared the genomes of the eukaryotes D. melanogaster, C. elegans, and S. cerevisiae, as well as the prokaryote H. influenzae.[29] At the same time, Bonnie Berger, Eric Lander, and their team published a paper on whole-genome comparison of human and mouse.[30]

With the publication of the large genomes of vertebrates in the 2000s, including human, the Japanese pufferfish Takifugu rubripes, and mouse, precomputed results of large genome comparisons have been released for downloading or for visualization in a genome browser. Instead of undertaking their own analyses, most biologists can access these large cross-species comparisons and avoid the impracticality caused by the size of the genomes.[31]

Next-generation sequencing methods, which were first introduced in 2007, have produced an enormous amount of genomic data and have allowed researchers to generate multiple (prokaryotic) draft genome sequences at once. These methods can also quickly uncover single-nucleotide polymorphisms, insertions and deletions by mapping unassembled reads against a well annotated reference genome, and thus provide a list of possible gene differences that may be the basis for any functional variation among strains.[21]
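The read-mapping idea described above can be illustrated with a small sketch. The following Python fragment is a deliberately simplified, hypothetical pipeline stage rather than any particular tool's algorithm: it assumes reads have already been mapped gaplessly to a reference (real pipelines work from BAM files and account for indels and base qualities) and flags positions where a non-reference base dominates the pileup:

```python
from collections import Counter, defaultdict

def call_snps(reference, mapped_reads, min_depth=10, min_fraction=0.8):
    """Flag candidate SNPs from reads mapped to a reference.

    mapped_reads: iterable of (start_position, read_sequence) pairs;
    gapless alignments fully contained in the reference are assumed.
    The depth and fraction thresholds are illustrative, not calibrated.
    """
    pileup = defaultdict(Counter)  # position -> observed base counts
    for start, seq in mapped_reads:
        for offset, base in enumerate(seq):
            pileup[start + offset][base] += 1

    snps = []
    for pos, counts in sorted(pileup.items()):
        depth = sum(counts.values())
        base, support = counts.most_common(1)[0]
        if (depth >= min_depth and base != reference[pos]
                and support / depth >= min_fraction):
            snps.append((pos, reference[pos], base))  # (pos, ref, alt)
    return snps
```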

Evolutionary principles

Evolution is a defining feature of biology; evolutionary theory is the theoretical foundation of comparative genomics, and at the same time the results of comparative genomics have enriched and developed evolutionary theory to an unprecedented degree. When two or more genome sequences are compared, one can deduce the evolutionary relationships of the sequences in a phylogenetic tree. Based on a variety of biological genome data and the study of vertical and horizontal evolutionary processes, one can understand vital parts of gene structure and its regulatory function.

Similarity of related genomes is the basis of comparative genomics. If two species share a recent common ancestor, the differences between their genomes evolved from the ancestral genome. The closer the relationship between two organisms, the higher the similarity between their genomes. If there is a close relationship between them, their genomes will display collinear behaviour (synteny); that is, some or all of the genetic sequences are conserved. Thus, genome sequences can be used to identify gene function by analyzing their homology (sequence similarity) to genes of known function.

The human FOXP2 gene and its evolutionary conservation are shown in a multiple alignment (at bottom of figure) in this image from the UCSC Genome Browser. Note that conservation tends to cluster around coding regions (exons).

Orthologous sequences are related sequences in different species: a gene exists in an ancestral species, the species divides into two species, and so the copies of that gene in the new species are orthologous to each other. Paralogous sequences are sequences separated by gene duplication: if a particular gene in the genome is duplicated, the two copies are paralogous to each other. A pair of orthologous sequences is called an orthologous pair (orthologs); a pair of paralogous sequences is called a paralogous pair (paralogs). Orthologous pairs usually have the same or a similar function, which is not necessarily the case for paralogous pairs. In paralogous pairs, the sequences tend to evolve different functions.

Comparative genomics exploits both similarities and differences in the proteins, RNA, and regulatory regions of different organisms to infer how selection has acted upon these elements. Those elements that are responsible for similarities between different species should be conserved through time (stabilizing selection), while those elements responsible for differences among species should be divergent (positive selection). Finally, those elements that are unimportant to the evolutionary success of the organism will be unconserved (selection is neutral).

One of the important goals of the field is the identification of the mechanisms of eukaryotic genome evolution. This is, however, often complicated by the multiplicity of events that have taken place throughout the history of individual lineages, leaving only distorted and superimposed traces in the genome of each living organism. For this reason, comparative genomics studies of small model organisms (for example, the model Caenorhabditis elegans and the closely related Caenorhabditis briggsae) are of great importance in advancing our understanding of general mechanisms of evolution.[32][33]

Role of CNVs in evolution

Comparative genomics plays a crucial role in identifying copy number variations (CNVs) and understanding their significance in evolution. CNVs, which involve deletions or duplications of large segments of DNA, are recognized as a major source of genetic diversity, influencing gene structure, dosage, and regulation. While single nucleotide polymorphisms (SNPs) are more common, CNVs impact larger genomic regions and can have profound effects on phenotype and diversity.[34] Recent studies suggest that CNVs constitute around 4.8–9.5% of the human genome and have a substantial functional and evolutionary impact. In mammals, CNVs contribute significantly to population diversity, influencing gene expression and various phenotypic traits.[35] Comparative genomic analyses of human and chimpanzee genomes have revealed that CNVs may play a greater role in evolutionary change than single nucleotide changes. Research indicates that CNVs affect more nucleotides than individual base-pair changes, with about 2.7% of the genome affected by CNVs compared to 1.2% by SNPs. Moreover, while many CNVs are shared between humans and chimpanzees, a significant portion is unique to each species. Additionally, CNVs have been associated with genetic diseases in humans, highlighting their importance in human health. Despite this, many questions about CNVs remain unanswered, including their origin and their contributions to evolutionary adaptation and disease. Ongoing research aims to address these questions using techniques like comparative genomic hybridization, which allows for a detailed examination of CNVs and their significance.[36]
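The read-depth intuition behind CNV detection can be sketched in a few lines of Python. This is a toy illustration of the general windowed depth-ratio approach, not the algorithm of any specific tool; the window size and log-ratio thresholds are assumptions chosen for clarity:

```python
import math

def cnv_windows(depth_sample, depth_reference, window=1000,
                gain=math.log2(1.5), loss=math.log2(0.5)):
    """Flag windows whose log2 read-depth ratio suggests a copy
    number change.  Both inputs are per-base depth lists of equal
    length; thresholds and window size are illustrative only."""
    calls = []
    for start in range(0, len(depth_sample), window):
        s = sum(depth_sample[start:start + window]) + 1  # +1 avoids log(0)
        r = sum(depth_reference[start:start + window]) + 1
        ratio = math.log2(s / r)
        if ratio >= gain:
            calls.append((start, start + window, "gain", round(ratio, 2)))
        elif ratio <= loss:
            calls.append((start, start + window, "loss", round(ratio, 2)))
    return calls
```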

Significance of comparative genomics

Comparative genomics holds profound significance across various fields, including medical research, basic biology, and biodiversity conservation. For instance, in medical research, the ability to predict which genomic variants lead to changes in organism-level phenotypes, such as increased disease risk in humans, remains limited due to the immense size of the genome, comprising about three billion nucleotides.[37][38][39]

To tackle this challenge, comparative genomics offers a solution by pinpointing nucleotide positions that have remained unchanged over millions of years of evolution. These conserved regions indicate potential sites where genetic alterations could have detrimental effects on an organism's fitness, thus guiding the search for disease-causing variants. Moreover, comparative genomics holds promise in unraveling the mechanisms of gene evolution, environmental adaptations, gender-specific differences, and population variations across vertebrate lineages.[40]

Furthermore, comparative studies enable the identification of genomic signatures of selection—regions in the genome that have undergone preferential increase and fixation in populations due to their functional significance in specific processes.[41] For instance, in animal genetics, indigenous cattle exhibit superior disease resistance and environmental adaptability but lower productivity compared to exotic breeds. Through comparative genomic analyses, the significant genomic signatures responsible for these unique traits can be identified. Using insights from these signatures, breeders can make informed decisions to enhance breeding strategies and promote breed development.[42]

Methods

Computational approaches are necessary for genome comparisons, given the large amount of data encoded in genomes. Many tools are now publicly available, ranging from whole genome comparison to gene expression analysis.[43] These include approaches from systems and control, information theory, string analysis and data mining.[44] Computational approaches will remain critical for research and teaching, especially when information science and genome biology are taught in conjunction.[45]

Phylogenetic tree of descendant species and reconstructed ancestors. The branch color represents breakpoint rates in RACFs (breakpoints per million years). Black branches represent nondetermined breakpoint rates. Tip colors depict assembly contiguity: black, scaffold-level genome assembly; green, chromosome-level genome assembly; yellow, chromosome-scale scaffold-level genome assembly. Numbers next to species names indicate diploid chromosome number (if known).[46]

Comparative genomics starts with basic comparisons of genome size and gene density. For instance, genome size is important for coding capacity and possibly for regulatory reasons. High gene density facilitates genome annotation and the analysis of environmental selection. By contrast, low gene density hampers the mapping of genetic disease, as in the human genome.

Sequence alignment

Alignments are used to capture information about similar sequences such as ancestry, common evolutionary descent, or common structure and function. Alignments can be done for both nucleotide and protein sequences.[47][48] Alignments consist of local or global pairwise alignments and multiple sequence alignments. One way to find global alignments is to use a dynamic programming algorithm known as the Needleman–Wunsch algorithm, whereas the Smith–Waterman algorithm is used to find local alignments. With the exponential growth of sequence databases and the emergence of longer sequences, there is heightened interest in faster, approximate, or heuristic alignment procedures. Among these, the FASTA and BLAST algorithms are prominent for local pairwise alignment. Recent years have witnessed the development of programs tailored to aligning lengthy sequences, such as MUMmer (1999), BLASTZ (2003), and AVID (2003). While BLASTZ adopts a local approach, MUMmer and AVID are geared towards global alignment. To harness the benefits of both local and global alignment approaches, one effective strategy involves integrating them. Initially, a rapid variant of BLAST known as BLAT is employed to identify homologous "anchor" regions. These anchors are subsequently scrutinized to identify sets exhibiting conserved order and orientation, and such sets of anchors are then aligned using a global strategy.
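As a concrete illustration of the dynamic programming idea, here is a minimal Needleman–Wunsch scorer in Python. The scoring parameters (match +1, mismatch −1, gap −1) are arbitrary illustrative choices, and traceback to recover the actual alignment is omitted for brevity; Smith–Waterman differs mainly by clamping cell scores at zero and taking the best-scoring cell anywhere in the matrix:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Optimal global alignment score by dynamic programming.

    Builds the (len(a)+1) x (len(b)+1) score matrix; traceback to
    recover the alignment itself is omitted for brevity.
    """
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):      # first column: all gaps in b
        score[i][0] = i * gap
    for j in range(1, cols):      # first row: all gaps in a
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[-1][-1]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # prints 0 with these scores
```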

Additionally, ongoing efforts focus on optimizing existing algorithms to handle the vast amount of genome sequence data by enhancing their speed. Furthermore, MAVID stands out as another noteworthy alignment program, specifically designed for aligning multiple genomes.

Pairwise comparison: Pairwise comparison of genomic sequence data is widely utilized in comparative gene prediction. Many studies in comparative functional genomics lean on pairwise comparisons, wherein traits of each gene are compared with traits of other genes across species. This method yields many more comparisons than unique observations, making each comparison dependent on others.[49][50]

Multiple comparisons: The comparison of multiple genomes is a natural extension of pairwise inter-specific comparisons. Such comparisons typically aim to identify conserved regions across two phylogenetic scales: (1) deep comparisons, often referred to as phylogenetic footprinting,[51] reveal conservation across higher taxonomic units like vertebrates;[52] (2) shallow comparisons, recently termed phylogenetic shadowing,[53] probe conservation across a group of closely related species.
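The footprinting idea, that conserved columns point to function, can be illustrated with a simple conservation profile over a multiple alignment. A minimal sketch, assuming a toy gapped alignment given as a list of equal-length strings; real analyses use phylogeny-aware conservation scores rather than simple majority fractions:

```python
from collections import Counter

def conservation_profile(alignment):
    """Fraction of sequences sharing the majority base at each column.

    Highly conserved columns are candidate functional elements (the
    intuition behind phylogenetic footprinting).  Gaps are ignored
    when picking the majority base but count against the fraction.
    """
    profile = []
    for column in zip(*alignment):
        bases = [b for b in column if b != "-"]
        if not bases:
            profile.append(0.0)
            continue
        _, count = Counter(bases).most_common(1)[0]
        profile.append(count / len(alignment))
    return profile

aln = ["ACGT--ACGT",
       "ACGTTTACGA",
       "ACCT--ACGT"]
print([round(x, 2) for x in conservation_profile(aln)])
```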

Chromosome-by-chromosome variation of indicine and taurine cattle. The genomic structural differences on chromosome X between indicine (Bos indicus Nelore cattle) and taurine cattle (Bos taurus Hereford cattle) were identified using the SyRI tool.

Whole-genome alignment

Whole-genome alignment (WGA) involves predicting evolutionary relationships at the nucleotide level between two or more genomes. It integrates elements of colinear sequence alignment and gene orthology prediction, presenting a greater challenge due to the vast size and intricate nature of whole genomes. Despite this complexity, numerous methods have emerged to tackle the problem because WGAs play a crucial role in various genome-wide analyses, such as phylogenetic inference, genome annotation, and function prediction.[54] SyRI (Synteny and Rearrangement Identifier) is one such method that utilizes whole-genome alignment: it is designed to identify both structural and sequence differences between two whole-genome assemblies. Taking WGAs as input, SyRI initially scans for disparities in genome structure, then identifies local sequence variations within both rearranged and non-rearranged (syntenic) regions.[55]
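To give a flavor of how syntenic regions can be recovered from a whole-genome comparison, the sketch below chains orthologous gene anchors into collinear blocks. This is a toy heuristic for illustration only, not SyRI's actual method; the anchor format and gap tolerance are assumptions:

```python
def synteny_blocks(anchors, max_gap=1):
    """Group ortholog anchors into collinear (syntenic) blocks.

    anchors: (gene_index_in_genome_A, gene_index_in_genome_B) pairs.
    Consecutive anchors advancing together in both genomes are merged;
    a jump in either genome starts a new block (a candidate breakpoint).
    """
    blocks, current = [], []
    for a, b in sorted(anchors):
        if current and (a - current[-1][0] <= max_gap
                        and 0 < b - current[-1][1] <= max_gap):
            current.append((a, b))
        else:
            if len(current) > 1:
                blocks.append(current)
            current = [(a, b)]
    if len(current) > 1:
        blocks.append(current)
    return blocks

# Two conserved runs separated by a rearrangement:
print(synteny_blocks([(1, 4), (2, 5), (3, 6), (7, 1), (8, 2)]))
# [[(1, 4), (2, 5), (3, 6)], [(7, 1), (8, 2)]]
```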

Example of a phylogenetic tree created from an alignment of 250 unique spike protein sequences from the Betacoronavirus family.

Phylogenetic reconstruction

Another computational method for comparative genomics is phylogenetic reconstruction. It is used to describe evolutionary relationships in terms of common ancestors. The relationships are usually represented in a tree called a phylogenetic tree. Similarly, coalescent theory is a retrospective model used to trace the alleles of a gene in a population to a single ancestral copy shared by members of the population, also known as the most recent common ancestor. Analysis based on coalescent theory tries to predict the amount of time between the introduction of a mutation and the emergence of a particular allele or gene distribution in a population. This time period is equal to how long ago the most recent common ancestor existed. The inheritance relationships are visualized in a form similar to a phylogenetic tree. Coalescence (or the gene genealogy) can be visualized using dendrograms.[56]
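The coalescent model described above lends itself to a short simulation. A minimal sketch under the standard neutral coalescent for a diploid population of size N: while k lineages remain, the waiting time to the next coalescence is exponential with mean 2N / (k choose 2) generations, so the expected time to the most recent common ancestor is 4N(1 − 1/n):

```python
import random

def simulate_tmrca(n_samples, pop_size, n_reps=10_000):
    """Monte Carlo estimate of the time (in generations) to the most
    recent common ancestor of n_samples lineages in a diploid
    population of size pop_size, under the standard coalescent."""
    total = 0.0
    for _ in range(n_reps):
        t, k = 0.0, n_samples
        while k > 1:
            pairs = k * (k - 1) / 2
            t += random.expovariate(pairs / (2 * pop_size))  # mean 2N/pairs
            k -= 1
        total += t
    return total / n_reps

# Theory predicts E[TMRCA] = 4N(1 - 1/n); for N=1000, n=10 that is 3600.
print(round(simulate_tmrca(10, 1000)))
```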

Example of synteny block and break. Genes located on chromosomes of two species are denoted by letters. Each gene is associated with a number representing the species it belongs to (species 1 or 2). Orthologous genes are connected by dashed lines, and genes without an orthologous relationship are treated as gaps in synteny programs.[57]

Genome maps

An additional method in comparative genomics is genetic mapping. In genetic mapping, visualizing synteny is one way to see the preserved order of genes on chromosomes. It is usually used for chromosomes of related species that descend from a common ancestor.[58] This and other methods can shed light on evolutionary history. A recent study used comparative genomics to reconstruct 16 ancestral karyotypes across the mammalian phylogeny. The computational reconstruction showed how chromosomes rearranged themselves during mammal evolution, gave insight into the conservation of select regions often associated with the control of developmental processes, and helped to provide an understanding of chromosome evolution and the genetic diseases associated with DNA rearrangements.

Image from the study Evolution of the ancestral mammalian karyotype and syntenic regions: a visualization of the evolutionary history of reconstructed mammalian chromosomes based on the human lineage.[46] Solid green squares indicate mammalian chromosomes maintained as a single synteny block (either as a single chromosome or fused with another MAM), with shades of the color indicating the fraction of the chromosome affected by intra-chromosomal rearrangements (the lightest shade is most affected). Split blocks demarcate mammalian chromosomes affected by inter-chromosomal rearrangements. Upper (green) triangles show the fraction of the chromosome affected by intra-chromosomal rearrangements, and lower (red) triangles show the fraction affected by inter-chromosomal rearrangements. Syntenic relationships of each MAM to the human genome are given at the right of the diagram. MAMX appears split in goat because its X chromosome is assembled as two separate fragments. BOR, boreoeutherian ancestor chromosome; EUA, Euarchontoglires ancestor chromosome; EUC, Euarchonta ancestor chromosome; EUT, eutherian ancestor chromosome; PMT, Primatomorpha ancestor chromosome; PRT, primates (Hominidae) ancestor chromosome; THE, therian ancestor chromosome.

Tools

Computational tools for analyzing sequences and complete genomes are developing quickly due to the availability of large amounts of genomic data. At the same time, comparative analysis tools have progressed and improved. A key challenge in these analyses is visualizing the comparative results.[59]

Visualization of sequence conservation is a difficult task in comparative sequence analysis, since examining alignments of long genomic regions manually is highly inefficient. Internet-based genome browsers provide many useful tools for investigating genomic sequences because they integrate all sequence-based biological information on genomic regions. They make extracting large amounts of relevant biological data easy and less time-consuming.[59]

  • UCSC Browser: This site contains the reference sequence and working draft assemblies for a large collection of genomes.[60]
  • Ensembl: The Ensembl project produces genome databases for vertebrates and other eukaryotic species, and makes this information freely available online.[61]
  • MapView: The Map Viewer provides a wide variety of genome mapping and sequencing data.[62]
  • VISTA is a comprehensive suite of programs and databases for comparative analysis of genomic sequences. It was built to visualize the results of comparative analysis based on DNA alignments. The presentation of comparative data generated by VISTA can easily suit both small and large scales of data.[63]
  • BlueJay Genome Browser: A stand-alone visualization tool for the multi-scale viewing of annotated genomes and other genomic elements.[64]
  • SyRI: SyRI stands for Synteny and Rearrangement Identifier and is a versatile tool for comparative genomics, offering functionalities for synteny analysis and visualization, aiding in the prediction of genomic differences between related genomes using whole-genome assemblies (WGA).[65]
  • Synmap2: Specifically designed for synteny mapping, Synmap2 efficiently compares genetic maps or assemblies, providing insights into genome evolution and rearrangements among related organisms.[66]
  • GSAlign: GSAlign facilitates accurate alignment of genomic sequences, particularly useful for large-scale comparative genomics studies, enabling researchers to identify similarities and differences across genomes.[67]
  • IGV (Integrative Genomics Viewer): A widely-used tool for visualizing and analyzing genomic data, IGV supports comparative genomics by enabling users to explore alignments, variants, and annotations across multiple genomes.[68]
  • Manta: Manta is a rapid structural variant caller, crucial for comparative genomics as it detects genomic rearrangements such as insertions, deletions, inversions, and duplications, aiding in understanding genetic variation among populations or species.[69]
  • CNVnator: CNVnator specializes in detecting copy number variations (CNVs), which are crucial in understanding genome evolution and population genetics, providing insights into genomic structural changes across different organisms.[70]
  • PIPMaker: PIPMaker facilitates the alignment and comparison of two genomic sequences, enabling the identification of conserved regions, duplications, and evolutionary breakpoints, aiding in comparative genomics analyses.[71]
  • GLASS (Genome-wide Location and Sequence Searcher): GLASS is a tool for identifying conserved regulatory elements across genomes, crucial for comparative genomics studies focusing on understanding gene regulation and evolution.[72]
  • PatternHunter: PatternHunter is a versatile tool for sequence analysis, offering functionalities for identifying conserved patterns, motifs, and repeats across genomic sequences, aiding in comparative genomics studies of gene families and regulatory elements.
  • MUMmer: MUMmer is a suite of tools for whole-genome alignment and comparison, widely used in comparative genomics for identifying similarities, differences, and evolutionary events among genomes at various scales.[73]

An advantage of using online tools is that these websites are developed and updated constantly. Many new settings and features become available online, improving efficiency.[59]

Selected applications

Agriculture

Agriculture is a field that reaps the benefits of comparative genomics. Identifying the loci of advantageous genes is a key step in breeding crops that are optimized for greater yield, cost-efficiency, quality, and disease resistance. For example, one genome-wide association study conducted on 517 rice landraces revealed 80 loci associated with several categories of agronomic performance, such as grain weight, amylose content, and drought tolerance. Many of the loci were previously uncharacterized.[74] Not only is this methodology powerful, it is also quick. Previous methods of identifying loci associated with agronomic performance required several generations of carefully monitored breeding of parent strains, a time-consuming effort that is unnecessary for comparative genomic studies.[75]

Medicine

Vaccine development

The medical field also benefits from the study of comparative genomics. In an approach known as reverse vaccinology, researchers can discover candidate antigens for vaccine development by analyzing the genome of a pathogen or a family of pathogens.[76] Applying a comparative genomics approach by analyzing the genomes of several related pathogens can lead to the development of vaccines that are multi-protective. A team of researchers employed such an approach to create a universal vaccine for Group B Streptococcus, a group of bacteria responsible for severe neonatal infection.[77] Comparative genomics can also be used to generate specificity for vaccines against pathogens that are closely related to commensal microorganisms. For example, researchers used comparative genomic analysis of commensal and pathogenic strains of E. coli to identify pathogen-specific genes as a basis for finding antigens that result in immune response against pathogenic strains but not commensal ones.[78] In May 2019, using the Global Genome Set, a team in the UK and Australia sequenced thousands of globally-collected isolates of Group A Streptococcus, providing potential targets for developing a vaccine against the pathogen, also known as S. pyogenes.[79]

Personalized Medicine

Personalized medicine, enabled by comparative genomics, represents a revolutionary approach in healthcare, tailoring medical treatment and disease prevention to the individual patient's genetic makeup.[80] By analyzing genetic variations across populations and comparing them with an individual's genome, clinicians can identify specific genetic markers associated with disease susceptibility, drug metabolism, and treatment response. By identifying genetic variants associated with drug metabolism pathways, drug targets, and adverse reactions, personalized medicine can optimize medication selection, dosage, and treatment regimens for individual patients. This approach minimizes the risk of adverse drug reactions, enhances treatment efficacy, and improves patient outcomes.

Cancer

Cancer genomics represents a cutting-edge field within oncology that leverages comparative genomics to revolutionize cancer diagnosis, treatment, and prevention strategies. Comparative genomics plays a crucial role in cancer research by identifying driver mutations and providing comprehensive analyses of mutations, copy number alterations, structural variants, gene expression, and DNA methylation profiles in large-scale studies across different cancer types. By analyzing the genomes of cancer cells and comparing them with healthy cells, researchers can uncover key genetic alterations driving tumorigenesis, tumor progression, and metastasis. This deep understanding of the genomic landscape of cancer has profound implications for precision oncology. Moreover, comparative genomics is instrumental in elucidating mechanisms of drug resistance, a major challenge in cancer treatment.

TCR loci from humans (H, top) and mice (M, bottom) are compared, with V segments in orange, other TCR elements in red, and non-TCR genes in purple. M6A, a putative methyltransferase; ZNF, a zinc-finger protein; OR, olfactory receptor genes; DAD1, defender against cell death. The sites of species-specific, processed pseudogenes are shown by gray triangles. See also GenBank accession numbers AE000658-62. Modified after Glusman et al. 2001.[81]

Mouse models in immunology

T cells (also known as T lymphocytes or thymocytes) are immune cells that grow from stem cells in the bone marrow. They help defend the body from infection and may aid in the fight against cancer. Because of their morphological, physiological, and genetic resemblance to humans, mice and rats have long been the preferred species for biomedical research animal models. Comparative medicine research is built on the ability to use information from one species to understand the same processes in another. By comparing human and mouse T cells and their effects on the immune system using comparative genomics, we can gain new insights into molecular pathways. In order to understand TCRs and their genes, Glusman conducted research on the sequencing of the human and mouse T cell receptor loci. TCR genes are well known and serve as a significant resource for supporting functional genomics and understanding how genes and intergenic regions of the genome contribute to biological processes.[81]

T-cell immune receptors are important for recognizing the world of pathogens in the cellular immune system. One of the reasons for sequencing the human and mouse TCR loci was to match the orthologous gene family sequences and discover conserved areas using comparative genomics. These, it was thought, would reflect two sorts of biological information: (1) exons and (2) regulatory sequences. In fact, the majority of V, D, J, and C exons could be identified in this way. The variable regions are encoded by multiple unique DNA elements that are rearranged and connected during T cell (TCR) differentiation: variable (V), diversity (D), and joining (J) elements for the β and δ polypeptides, and V and J elements for the α and γ polypeptides [Figure 1]. However, several short noncoding conserved blocks of the genome were also found. Both human and mouse motifs largely cluster within 200 bp [Figure 2]; the known 3′ enhancers in the TCR loci were identified, and a conserved region of 100 bp in the mouse J intron was subsequently shown to have a regulatory function.

[Figure 2] Gene structure of the human (top) and mouse (bottom) V, D, J, and C gene segments. The arrows represent the transcriptional direction of each TCR gene. The squares and circles represent elements in direct and reverse orientation, respectively. Modified after Glusman et al. 2001.[81]

Comparisons of the genomic sequences within each locus (the physical site of a specific gene on a chromosome) and across species allow for research on other mechanisms and other regulatory signals. Some suggest new hypotheses about the evolution of TCRs, to be tested (and improved) by comparison to the TCR gene complement of other vertebrate species. A comparative genomic investigation of humans and mice will obviously allow for the discovery and annotation of many other genes, as well as the identification of regulatory sequences in other species.[81]

Research

Comparative genomics also opens up new avenues in other areas of research. As DNA sequencing technology has become more accessible, the number of sequenced genomes has grown. With the increasing reservoir of available genomic data, the potency of comparative genomic inference has grown as well.

A notable case of this increased potency is found in recent primate research. Comparative genomic methods have allowed researchers to gather information about genetic variation, differential gene expression, and evolutionary dynamics in primates that were indiscernible using previous data and methods.[82]

Great Ape Genome Project

The Great Ape Genome Project used comparative genomic methods to investigate genetic variation with reference to the six great ape species, finding healthy levels of variation in their gene pool despite shrinking population sizes.[83] Another study showed that patterns of DNA methylation, a known mechanism for regulating gene expression, differ in the prefrontal cortex of humans versus chimpanzees, and implicated this difference in the evolutionary divergence of the two species.

Neuroepigenetics

From Wikipedia, the free encyclopedia

Neuroepigenetics is the study of how epigenetic changes to genes affect the nervous system. These changes may affect underlying conditions such as addiction, cognition, and neurological development.

Mechanisms

Neuroepigenetic mechanisms regulate gene expression in the neuron. Often, these changes take place due to recurring stimuli. Neuroepigenetic mechanisms involve proteins or protein pathways that regulate gene expression by adding, editing or reading epigenetic marks such as methylation or acetylation. Some of these mechanisms include ATP-dependent chromatin remodeling, LINE1, and prion protein-based modifications. Other silencing mechanisms include the recruitment of specialized proteins that methylate DNA such that the core promoter element is inaccessible to transcription factors and RNA polymerase. As a result, transcription is no longer possible. One such protein pathway is the REST co-repressor complex pathway. There are also several non-coding RNAs that regulate neural function at the epigenetic level. These mechanisms, along with neural histone methylation, affect the arrangement of synapses and neuroplasticity, and play a key role in learning and memory.

Methylation

DNA methyltransferases (DNMTs) are involved in regulating the electrophysiological landscape of the brain through methylation of CpGs. Several studies have shown that inhibition or depletion of DNMT1 activity during neural maturation leads to hypomethylation of the neurons by removing the cell's ability to maintain methylation marks in the chromatin. This gradual loss of methylation marks leads to changes in the expression of crucial developmental genes that may be dosage sensitive, leading to neural degeneration. This was observed in the mature neurons in the dorsal portion of the mouse prosencephalon, where there were significantly greater amounts of neural degeneration and poorer neural signaling in the absence of DNMT1. Despite poor survival rates amongst the DNMT1-depleted neurons, some of the cells persisted throughout the lifespan of the organism. The surviving cells reaffirmed that the loss of DNMT1 led to hypomethylation in the neural cell genome. These cells also exhibited poor neural functioning. In fact, a global loss of neural functioning was observed in these model organisms, with the greatest amount of neural degeneration occurring in the prosencephalon.

Other studies showed a similar trend for DNMT3a and DNMT3b. However, these DNMTs add new methyl marks on unmethylated DNA, unlike DNMT1. Like DNMT1, the loss of DNMT3a and 3b resulted in neuromuscular degeneration two months after birth, as well as poor survival rates amongst the progeny of the mutant cells, even though DNMT3a does not regularly function to maintain methylation marks. This conundrum was addressed by other studies, which recorded rare loci in mature neurons where DNMT3a acted as a maintenance DNMT. The Gfap locus, which codes for the formation and regulation of the cytoskeleton of astrocytes, is one such locus where this activity is observed. The gene is regularly methylated to downregulate glioma-related cancers.

DNMT inhibition leads to decreased methylation and increased synaptic activity. Several studies show that the methylation-related increase or decrease in synaptic activity occurs due to the upregulation or downregulation of receptors at the neurological synapse. Such receptor regulation plays a major role in many important mechanisms, such as the 'fight or flight' response. The glucocorticoid receptor (GR) is the most studied of these receptors. During stressful circumstances, there is a signaling cascade that begins from the pituitary gland and terminates due to a negative feedback loop from the adrenal gland. In this loop, an increase in the levels of the stress response hormone results in an increase in GR, and the increase in GR results in decreased cellular response to the hormone levels. It has been shown that methylation of the I7 exon within the GR locus leads to a lower level of basal GR expression in mice. These mice were more susceptible to high levels of stress than mice with lower levels of methylation at the I7 exon. Up-regulation or down-regulation of receptors through methylation leads to changes in the synaptic activity of the neuron.

Hypermethylation, CpG islands, and tumor suppressing genes

CpG islands (CGIs) are regulatory elements that can influence gene expression by allowing or interfering with transcription initiation or enhancer activity. CGIs are generally interspersed with the promoter regions of the genes they affect and may also affect more than one promoter region. In addition, they may include enhancer elements and be separate from the transcription start site. Hypermethylation at key CGIs can effectively silence expression of tumor suppressing genes and is common in gliomas. Tumor suppressing genes are those which inhibit a cell's progression towards cancer. These genes are commonly associated with important functions that regulate cell-cycle events. For example, the PI3K and p53 pathways are affected by CGI promoter hypermethylation; this includes the promoters of the genes CDKN2/p16, RB, PTEN, TP53 and p14ARF. Importantly, glioblastomas are known to have a high frequency of methylation at CGIs/promoter sites. For example, Epithelial Membrane Protein 3 (EMP3) is a gene involved in cell proliferation as well as cellular interactions. It is also thought to function as a tumor suppressor, and in glioblastomas it is silenced via hypermethylation. Furthermore, introduction of the gene into EMP3-silenced neuroblasts results in reduced colony formation as well as suppressed tumor growth. In contrast, hypermethylation of promoter sites can also inhibit the activity of oncogenes and prevent tumorigenesis. Oncogenic pathways such as the transforming growth factor (TGF)-beta signaling pathway stimulate cells to proliferate. In glioblastomas, the overactivity of this pathway is associated with aggressive forms of tumor growth. Hypermethylation of PDGF-B, the TGF-beta target, inhibits uncontrolled proliferation.

Hypomethylation and aberrant histone modification

Global reduction in methylation is implicated in tumorigenesis. More specifically, widespread CpG demethylation, contributing to global hypomethylation, is known to cause genomic instability, leading to the development of tumors. An important effect of this DNA modification is its transcriptional activation of oncogenes. For example, expression of MAGEA1 enhanced by hypomethylation interferes with p53 function.

Aberrant patterns of histone modifications can also take place at specific loci and ultimately manipulate gene activity. In terms of CGI promoter sites, methylation and loss of acetylation occur frequently at H3K9. Furthermore, H3K9 dimethylation and trimethylation are repressive marks which, along with bivalent differentially methylated domains, are hypothesized to make tumor suppressing genes more susceptible to silencing. The abnormal presence or lack of methylation in glioblastomas is strongly linked to genes which regulate apoptosis, DNA repair, cell proliferation, and tumor suppression. One of the best known examples of genes affected by aberrant methylation that contributes to the formation of glioblastomas is MGMT, a gene involved in DNA repair which encodes the protein O6-methylguanine-DNA methyltransferase. Methylation of the MGMT promoter is an important predictor of the effectiveness of alkylating agents to target glioblastomas. Hypermethylation of the MGMT promoter causes transcriptional silencing and is found in several cancer types, including glioma, lymphoma, breast cancer, prostate cancer, and retinoblastoma.

Neuroplasticity

Neuroplasticity refers to the ability of the brain to undergo synaptic rearrangement in response to recurring stimuli. Neurotrophin proteins play a major role in synaptic rearrangement, amongst other factors. Depletion of the neurotrophin BDNF or of BDNF signaling is one of the main factors in developing diseases such as Alzheimer's disease, Huntington's disease, and depression. Neuroplasticity can also occur as a consequence of targeted epigenetic modifications such as methylation and acetylation. Whereas histone readers recognize modifying marks, erasers and writers modify histones by removing and adding those marks, respectively. An eraser, neuroLSD1, is a modified version of the original Lysine Demethylase 1 (LSD1) that exists only in neurons and assists with neuronal maturation. Although both versions of LSD1 share the same target, their expression patterns are vastly different, and neuroLSD1 is a truncated version of LSD1. NeuroLSD1 increases the expression of immediate early genes (IEGs) involved in cell maturation. Recurring stimuli lead to differential expression of neuroLSD1, leading to rearrangement of loci. The eraser is also thought to play a major role in the learning of many complex behaviors and is one way through which genes interact with the environment.

Neurodegenerative diseases

Alzheimer's disease

Alzheimer's disease (AD) is a neurodegenerative disease known to progressively affect memory and incite cognitive degradation. Epigenetic modifications, both globally and on specific candidate genes, are thought to contribute to the etiology of this disease. Immunohistochemical analyses of post-mortem brain tissues across several studies have revealed global decreases in both 5-methylcytosine (5mC) and 5-hydroxymethylcytosine (5hmC) in AD patients compared with controls. However, conflicting evidence has shown elevated levels of these epigenetic markers in the same tissues. Furthermore, these modifications appear to be affected early on in tissues associated with the pathophysiology of AD. The presence of 5mC at the promoters of genes is generally associated with gene silencing. 5hmC, the product of 5mC oxidation by ten-eleven translocation (TET) enzymes, is thought to be associated with activation of gene expression, though the mechanisms underlying this activation are not fully understood.

Regardless of variations in the results of methylomic analyses across studies, it is known that the presence of 5hmC increases with differentiation and aging of cells in the brain. Furthermore, genes which have a high prevalence of 5hmC are also implicated in the pathology of other age-related neurodegenerative diseases and are key regulators of ion transport, neuronal development, and cell death. For example, over-expression of 5-Lipoxygenase (5-LOX), an enzyme which generates pro-inflammatory mediators from arachidonic acid, in AD brains is associated with a high prevalence of 5hmC at the 5-LOX gene promoter region.

Amyotrophic Lateral Sclerosis

DNA modifications at different transcriptional sites have been shown to contribute to neurodegenerative diseases. These include harmful transcriptional alterations, such as those found in motor neuron functionality associated with amyotrophic lateral sclerosis (ALS). Degeneration of the upper and lower motor neurons, which contributes to muscle atrophy in ALS patients, is linked to chromatin modifications among a group of key genes. One important site regulated by epigenetic events is the hexanucleotide repeat expansion in C9orf72 on chromosome 9p21. Hypermethylation of the C9orf72-related CpG islands is associated with repeat expansion in ALS-affected tissues. Overall, silencing of the C9orf72 gene may result in haploinsufficiency and may therefore influence the presentation of disease. The activity of chromatin modifiers is also linked to the prevalence of ALS. DNMT3A is an important methylating agent and has been shown to be present throughout the central nervous systems of those with ALS. Furthermore, over-expression of this de novo methyltransferase is also implicated in cell death of motor-neuron analogs.

Mutations in the FUS gene, that encodes an RNA/DNA binding protein, are causally linked to ALS. ALS patients with such mutations have increased levels of DNA damage. The protein encoded by the FUS gene is employed in the DNA damage response. It is recruited to DNA double-strand breaks and catalyzes recombinational repair of such breaks. In response to DNA damage, the FUS protein also interacts with histone deacetylase I, a protein employed in epigenetic alteration of histones. This interaction is necessary for efficient DNA repair. These findings suggest that defects in epigenetic signalling and DNA repair contribute to the pathogenesis of ALS.

Neuro-oncology

A multitude of genetic and epigenetic changes in the DNA profiles of brain cells are thought to be linked to tumorigenesis. These alterations, along with changes in protein functions, have been shown to induce uncontrolled cell proliferation, expansion, and metastasis. While genetic events such as deletions, translocations, and amplifications give rise to the activation of oncogenes and the deactivation of tumor suppressing genes, epigenetic changes silence or up-regulate these same genes through key chromatin modifications.

Neurotoxicity

Neurotoxicity refers to damage made to the central or peripheral nervous systems due to chemical, biological, or physical exposure to toxins. Neurotoxicity can occur at any age and its effects may be short-term or long-term, depending on the mechanism of action of the neurotoxin and degree of exposure.

Certain metals are considered essential due to their role in key biochemical and physiological pathways, while the remaining metals are characterized as nonessential. Nonessential metals do not serve a purpose in any biological pathway, and the presence and accumulation in the brain of most of them can lead to neurotoxicity. These nonessential metals, when found inside the body, compete with essential metals for binding sites and upset antioxidant balance, and their accumulation in the brain can lead to harmful side effects, such as depression and intellectual disability. An increase in nonessential heavy metal concentrations in air, water, food sources, and household products has increased the risk of chronic exposure.

Acetylation, methylation and histone modification are some of the most common epigenetic markers. While these changes do not directly affect the DNA sequence, they are able to alter the accessibility of genetic components, such as the promoter or enhancer regions, necessary for gene expression. Studies have shown that long-term maternal exposure to lead (Pb) contributes to decreased methylation in areas of the fetal epigenome, for example the interspersed repetitive sequences (IRSs) Alu1 and LINE-1. The hypomethylation of these IRSs has been linked to increased risk for cancers and autoimmune diseases later in life. Additionally, studies have found a relationship between chronic prenatal Pb exposure and neurological diseases, such as Alzheimer's and schizophrenia, as well as developmental issues. Furthermore, the acetylation and methylation changes induced by overexposure to lead result in decreased neurogenesis and neuron differentiation ability, and consequently interfere with early brain development.

Overexposure to essential metals can also have detrimental consequences on the epigenome. For example, when manganese, a metal normally used by the body as a cofactor, is present at high concentrations in the blood it can negatively affect the central nervous system. Studies have shown that accumulation of manganese leads to dopaminergic cell death and consequently plays a role in the onset of Parkinson's disease (PD). A hallmark of Parkinson's disease is the accumulation of α-Synuclein in the brain. Increased exposure to manganese leads to the downregulation of protein kinase C delta (PKCδ) through decreased acetylation and results in the misfolding of the α-Synuclein protein that allows aggregation and triggers apoptosis of dopaminergic cells.

Research

The field has only recently seen a growth in interest, as well as in research, due to technological advancements that facilitate better resolution of the minute modifications made to DNA. However, even with significant advances in technology, studying the biology of neurological phenomena, such as cognition and addiction, comes with its own set of challenges. Biological study of cognitive processes, especially in humans, has many ethical caveats. Some procedures, such as brain biopsies of people with Rett syndrome, usually call for a fresh tissue sample that can only be extricated from the brain of a deceased individual. In such cases, the researchers have no control over the age of the brain tissue sample, thereby limiting research options. In the case of addiction to substances such as alcohol, researchers utilize mouse models to mirror the human version of the disease. However, the mouse models are administered greater volumes of ethanol than humans normally consume, in order to obtain more prominent phenotypes. Therefore, while model organisms and tissue samples provide an accurate approximation of the biology of neurological phenomena, these approaches do not provide a complete and precise picture of the exact processes underlying a phenotype or a disease.

Neuroepigenetics has also remained underdeveloped due to the controversy surrounding the classification of genetic modifications in matured neurons as epigenetic phenomena. This discussion arises because neurons do not undergo mitosis after maturation, yet the conventional definition of epigenetic phenomena emphasizes heritable changes passed on from parent to offspring. However, various marks are placed on chromatin by epigenetic modifiers such as DNA methyltransferases (DNMTs) in neurons, and these marks regulate gene expression throughout the lifespan of the neuron. The modifications heavily influence gene expression and the arrangement of synapses within the brain. Finally, although not inherited, most of these marks are maintained throughout the life of the cell once they are placed on chromatin.

Spontaneous order

From Wikipedia, the free encyclopedia

Spontaneous order, also named self-organization in the hard sciences, is the spontaneous emergence of order out of seeming chaos. The term "self-organization" is more often used for physical changes and biological processes, while "spontaneous order" is typically used to describe the emergence of various kinds of social orders in human social networks from the behavior of a combination of self-interested individuals who are not intentionally trying to create order through planning. Proposed examples of systems which evolved through spontaneous order or self-organization include the evolution of life on Earth, language, crystal structure, the Internet, Wikipedia, and the free market economy.

In economics and the social sciences, spontaneous order has been defined by Hayek as "the result of human actions, not of human design".

In economics, spontaneous order has been defined as an equilibrium behavior among self-interested individuals, which is most likely to evolve and survive, obeying the natural selection process "survival of the likeliest".

History

According to Murray Rothbard, the philosopher Zhuangzi (c. 369–286 BC) was the first to propose the idea of spontaneous order. Zhuangzi rejected the authoritarianism of Confucianism, writing that there "has been such a thing as letting mankind alone; there has never been such a thing as governing mankind [with success]." He articulated an early form of spontaneous order, asserting that "good order results spontaneously when things are let alone", a concept later "developed particularly by Proudhon in the nineteenth [century]".

In 1767, the sociologist and historian Adam Ferguson, writing in the context of the Scottish Enlightenment, described society as the "result of human action, but not the execution of any human design".

Jacobs has suggested that the term "spontaneous order" was effectively coined by Michael Polanyi in his essay, "The Growth of Thought in Society," Economica 8 (November 1941): 428–56.

The Austrian School of Economics, led by Carl Menger, Ludwig von Mises and Friedrich Hayek, made spontaneous order a centerpiece of its social and economic thought. Hayek's theory of spontaneous order is the product of two related but distinct influences that do not always tend in the same direction. As an economic theorist, he gives his explanations a rational grounding. But as a legal and social theorist, he leans, by contrast, very heavily on a conservative and traditionalist approach which instructs us to submit blindly to a flow of events over which we can have little control.

Proposed examples

Markets

Many classical-liberal theorists, such as Hayek, have argued that market economies are a spontaneous order, and that they represent "a more efficient allocation of societal resources than any design could achieve." They claim this spontaneous order (referred to as the extended order in Hayek's The Fatal Conceit) is superior to any order a human mind can design due to the specifics of the information required. Centralized statistical data, they suppose, cannot convey this information because the statistics are created by abstracting away from the particulars of the situation.

According to Norman P. Barry, this is illustrated in the concept of the invisible hand proposed by Adam Smith in The Wealth of Nations.

Lawrence Reed, president of the Foundation for Economic Education, a libertarian think tank in the United States, argues that spontaneous order "is what happens when you leave people alone—when entrepreneurs... see the desires of people... and then provide for them." He further claims that "[entrepreneurs] respond to market signals, to prices. Prices tell them what's needed and how urgently and where. And it's infinitely better and more productive than relying on a handful of elites in some distant bureaucracy."

Anarchism

Anarchists argue that the state is in fact an artificial creation of the ruling elite, and that true spontaneous order would arise if it were eliminated. This is construed by some but not all as the ushering in of organization by anarchist law. In the anarchist view, such spontaneous order would involve the voluntary cooperation of individuals. According to the Oxford Dictionary of Sociology, "the work of many symbolic interactionists is largely compatible with the anarchist vision, since it harbours a view of society as spontaneous order."

Sobornost

The concept of spontaneous order can also be seen in the works of the Russian Slavophile movement and specifically in the works of Fyodor Dostoyevsky. In Russia, the concept of an organic social manifestation was expressed under the idea of sobornost. Sobornost was also used by Leo Tolstoy as an underpinning to the ideology of Christian anarchism. The concept was used to describe the uniting force behind the peasant or serf Obshchina in pre-Soviet Russia.

Other examples

Perhaps the most prominent exponent of spontaneous order is Friedrich Hayek. In addition to arguing that the economy is a spontaneous order, which he termed a catallaxy, he argued that common law and the brain are also types of spontaneous orders. In The Republic of Science, Michael Polanyi also argued that science is a spontaneous order, a theory further developed by Bill Butos and Thomas McQuade in a variety of papers. Gus DiZerega has argued that democracy is the spontaneous order form of government, David Emmanuel Andersson has argued that religion in places like the United States is a spontaneous order, and Troy Camplin argues that artistic and literary production are spontaneous orders. Paul Krugman has also contributed to spontaneous order theory in his book The Self-Organizing Economy, in which he claims that cities are self-organizing systems. The credibility thesis suggests that the credibility of social institutions is the driving factor behind the endogenous self-organization of institutions and their persistence.

Different rules of the game give rise to different types of spontaneous order. If an economic society obeys equal-opportunity rules, the resulting spontaneous order is reflected in an exponential income distribution; that is, for an equal-opportunity economic society, the exponential income distribution is the one most likely to evolve and survive. By analyzing datasets of household income from 66 countries and Hong Kong SAR, ranging from Europe to Latin America, North America and Asia, Tao et al. found that, for all of these countries, the income structure for the great majority of the population (the low and middle income classes) follows an exponential income distribution.
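
This claim can be probed directly on data. The sketch below is a minimal illustration, not the authors' analysis: it generates synthetic incomes with numpy, fits an exponential distribution by maximum likelihood (the fitted scale is simply the sample mean), and applies a Kolmogorov-Smirnov test to the bulk of the sample.

    # Illustrative sketch (synthetic data, not the Tao et al. datasets): testing
    # whether the bulk of an income sample is consistent with an exponential law.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    incomes = rng.exponential(scale=40_000, size=10_000)  # synthetic incomes

    # Keep the low and middle income classes; the claim excludes the top tail,
    # which in real data typically follows a power law instead.
    bulk = np.sort(incomes)[: int(0.97 * incomes.size)]

    # Maximum-likelihood exponential fit: scale = sample mean.
    scale_hat = bulk.mean()
    ks_stat, p_value = stats.kstest(bulk, "expon", args=(0, scale_hat))
    print(f"fitted mean income: {scale_hat:,.0f}, KS stat: {ks_stat:.4f}, p = {p_value:.3f}")

Because the scale parameter is estimated from the same sample, the reported p-value is only approximate; a Lilliefors-style correction would be needed for a strict test.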

Criticism

Roland Kley writes about Hayek's theory of spontaneous order that "the foundations of Hayek's liberalism are so incoherent" because the "idea of spontaneous order lacks distinctness and internal structure." The three components of Hayek's theory are lack of intentionality, the "primacy of tacit or practical knowledge", and the "natural selection of competitive traditions." While the first feature, that social institutions may arise in some unintended fashion, is indeed an essential element of spontaneous order, the second two are only implications, not essential elements.

Hayek's theory has also been criticized for not offering a moral argument, and his overall outlook contains "incompatible strands that he never seeks to reconcile in a systematic manner."

Abby Innes has criticised many of these economic ideas, describing a fatal confrontation between economic libertarianism and reality and arguing that such libertarianism represents a form of materialist utopia with much in common with Soviet Russia.

Herd behavior

From Wikipedia, the free encyclopedia

Herd behavior is the behavior of individuals in a group acting collectively without centralized direction. Herd behavior occurs in animals in herds, packs, bird flocks, fish schools, and so on, as well as in humans. Voting, demonstrations, riots, general strikes, sporting events, religious gatherings, everyday decision-making, judgement, and opinion-forming, are all forms of human-based herd behavior.

Raafat, Chater and Frith proposed an integrated approach to herding, describing two key issues, the mechanisms of transmission of thoughts or behavior between individuals and the patterns of connections between them. They suggested that bringing together diverse theoretical approaches of herding behavior illuminates the applicability of the concept to many domains, ranging from cognitive neuroscience to economics.

Animal behavior

A group of animals fleeing from a predator illustrates the nature of herd behavior. In 1971, in the oft-cited article "Geometry for the Selfish Herd", evolutionary biologist W. D. Hamilton asserted that each individual group member reduces the danger to itself by moving as close as possible to the center of the fleeing group. Thus the herd appears as a unit in moving together, but its function emerges from the uncoordinated behavior of self-serving individuals.

Symmetry-breaking

Asymmetric aggregation of animals under panic conditions has been observed in many species, including humans, mice, and ants. Theoretical models have demonstrated symmetry-breaking similar to observations in empirical studies. For example, when panicked individuals are confined to a room with two equal and equidistant exits, a majority will favor one exit while the minority will favor the other.

Possible mechanisms for this behavior include Hamilton's selfish herd theory, neighbor copying, byproducts of communication by social animals, or runaway positive feedback.
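
The copying and positive-feedback mechanisms are easy to make concrete. The following minimal sketch assumes a simple copy-the-majority rule rather than any published panic model, and shows how two identical exits end up unequally used:

    # Minimal sketch (illustrative, not from the cited studies): symmetry-breaking
    # between two identical exits when individuals sometimes copy the majority.
    import random

    def evacuate(n_agents=100, p_copy=0.8, seed=None):
        rng = random.Random(seed)
        counts = {"A": 0, "B": 0}
        for _ in range(n_agents):
            if rng.random() < p_copy and counts["A"] != counts["B"]:
                choice = "A" if counts["A"] > counts["B"] else "B"  # follow the crowd
            else:
                choice = rng.choice("AB")  # independent, unbiased choice
            counts[choice] += 1
        return counts

    # With strong copying, most runs end far from a 50/50 split even though the
    # exits are identical: early random differences are self-amplifying.
    for trial in range(5):
        print(evacuate(seed=trial))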

Characteristics of escape panic include:

  • Individuals attempt to move faster than normal.
  • Interactions between individuals become physical.
  • Exits become arched and clogged.
  • Escape is slowed by fallen individuals serving as obstacles.
  • Individuals display a tendency towards mass or copied behavior.
  • Alternative or less used exits are overlooked.

Human behavior

Early research

The philosophers Søren Kierkegaard and Friedrich Nietzsche were among the first to criticize what they referred to as "the crowd" (Kierkegaard) and "herd morality" and the "herd instinct" (Nietzsche) in human society. Modern psychological and economic research has identified herd behavior in humans to explain the phenomenon of large numbers of people acting in the same way at the same time. The British surgeon Wilfred Trotter popularized the "herd behavior" phrase in his book, Instincts of the Herd in Peace and War (1914). In The Theory of the Leisure Class, Thorstein Veblen explained economic behavior in terms of social influences such as "emulation", where some members of a group mimic other members of higher status. In "The Metropolis and Mental Life" (1903), early sociologist Georg Simmel referred to the "impulse to sociability in man", and sought to describe "the forms of association by which a mere sum of separate individuals are made into a 'society' ". Other social scientists explored behaviors related to herding, such as Sigmund Freud (crowd psychology), Carl Jung (collective unconscious), Everett Dean Martin (Behavior of Crowds) and Gustave Le Bon (the popular mind).

Swarm theory, observed in non-human societies, is a related concept that is also being explored as it occurs in human society. The Scottish journalist Charles Mackay identified multiple facets of herd behaviour in his 1841 work, Extraordinary Popular Delusions and the Madness of Crowds.

Everyday decision-making

"Benign" herding behaviors may occur frequently in everyday decisions based on learning from the information of others, as when a person on the street decides which of two restaurants to dine in. Suppose that both look appealing, but both are empty because it is early evening; so at random, this person chooses restaurant A. Soon a couple walks down the same street in search of a place to eat. They see that restaurant A has customers while B is empty, and choose A on the assumption that having customers makes it the better choice. Because other passersby do the same thing into the evening, restaurant A does more business that night than B. This phenomenon is also referred as an information cascade.

Crowds

Crowds that gather on behalf of a grievance can involve herding behavior that turns violent, particularly when confronted by an opposing ethnic or racial group. The Los Angeles riots of 1992, New York Draft Riots, and Tulsa race massacre are notorious in U.S. history. The idea of a "group mind" or "mob behavior" was put forward by the French social psychologists Gabriel Tarde and Gustave Le Bon.

Sheeple

Sheeple (/ˈʃiːpəl/; a portmanteau of "sheep" and "people") is a derogatory term that highlights the passive herd behavior of people easily controlled by a governing power or market fads, likening them to sheep, a herd animal that is "easily" led about. The term is used to describe those who voluntarily acquiesce to a suggestion without any significant critical analysis or research, in large part because the majority of a population shares a similar mindset. Word Spy defines it as "people who are meek, easily persuaded, and tend to follow the crowd (sheep + people)". Merriam-Webster defines the term as "people who are docile, compliant, or easily influenced: people likened to sheep". The word is a plurale tantum, which means it has no singular form.

While its origins are unclear, the word was used by W. R. Anderson in his column Round About Radio, published in London 1945, where he wrote:

The simple truth is that you can get away with anything, in government. That covers almost all the evils of the time. Once in, nobody, apparently, can turn you out. The People, as ever (I spell it "Sheeple"), will stand anything.

Another early use was from Ernest Rogers, whose 1949 book The Old Hokum Bucket contained a chapter entitled "We the Sheeple". The Wall Street Journal first reported the label in print in 1984; the reporter heard the word used by the proprietor of the American Opinion bookstore. In this usage, taxpayers were derided for their blind conformity as opposed to those who thought independently. The term was first popularized in the late 1980s and early 1990s by conspiracy theorist and broadcaster Bill Cooper on his radio program The Hour of the Time which was broadcast internationally via shortwave radio stations. The program gained a small, yet dedicated following, inspiring many individuals who would later broadcast their own radio programs critical of the United States government. This then led to its regular use on the radio program Coast to Coast AM by Art Bell throughout the 1990s and early 2000s. These combined factors significantly increased the popularity of the word and led to its widespread use.

The term can also be used for those who seem inordinately tolerant, or welcoming, of widespread policies. In a column entitled "A Nation of Sheeple", columnist Walter E. Williams writes, "Americans sheepishly accepted all sorts of Transportation Security Administration nonsense. In the name of security, we've allowed fingernail clippers, eyeglass screwdrivers, and toy soldiers to be taken from us prior to boarding a plane."

Economics and finance

Currency crises

Currency crises tend to display herding behavior when foreign and domestic investors convert a government's currency into physical assets (like gold) or foreign currencies once they realize the government is unable to repay its debts. This is called a speculative attack, and it will tend to cause moderate inflation in the short term. When consumers realize that the prices of needed commodities are rising, they begin to stockpile and hoard goods, which accelerates inflation even faster. This ultimately crashes the currency and is likely to lead to civil unrest.

Stock market bubbles

Large stock market trends often begin and end with periods of frenzied buying (bubbles) or selling (crashes). Many observers cite these episodes as clear examples of herding behavior that is irrational and driven by emotion—greed in the bubbles, fear in the crashes. Individual investors join the crowd of others in a rush to get in or out of the market.

Some followers of the technical analysis school of investing see the herding behavior of investors as an example of extreme market sentiment. The academic study of behavioral finance has identified herding in the collective irrationality of investors, particularly in work associated with Nobel laureates Vernon L. Smith, Daniel Kahneman, and Robert Shiller, and with Amos Tversky. Hey and Morone (2004) analyzed a model of herd behavior in a market context.

Some empirical works on methods for detecting and measuring the extent of herding include Christie and Huang (1995) and Chang, Cheng and Khorana (2000). These results refer to a market with a well-defined fundamental value. A notable incident of possible herding is the 2007 uranium bubble, which started with the flooding of the Cigar Lake Mine in Saskatchewan in 2006.
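
To illustrate how such measures work in practice, the sketch below runs a Chang, Cheng and Khorana style regression on synthetic returns (made-up data, not the original samples): the cross-sectional absolute deviation (CSAD) of individual stock returns is regressed on the absolute and squared market return, and a significantly negative coefficient on the squared term is conventionally read as evidence of herding.

    # Sketch of a CSAD herding regression in the style of Chang, Cheng and
    # Khorana (2000), on synthetic data. In real returns, gamma2 < 0 would be
    # the herding signature; here the data are built without herding.
    import numpy as np

    rng = np.random.default_rng(42)
    n_days, n_stocks = 500, 50
    market = rng.normal(0.0, 0.01, n_days)                 # market return R_m,t
    returns = market[:, None] + rng.normal(0.0, 0.02, (n_days, n_stocks))

    csad = np.abs(returns - market[:, None]).mean(axis=1)  # CSAD_t
    X = np.column_stack([np.ones(n_days), np.abs(market), market**2])
    gamma, *_ = np.linalg.lstsq(X, csad, rcond=None)       # OLS coefficients
    print(f"intercept={gamma[0]:.4f}, gamma1={gamma[1]:.4f}, gamma2={gamma[2]:.4f}")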

Economic theory of herding

There are two strands of work in economic theory that consider why herding occurs and provide frameworks for examining its causes and consequences.

The first of these strands concerns herd behavior in a non-market context. The seminal references are Banerjee (1992) and Bikhchandani, Hirshleifer and Welch (1992), both of which showed that herd behavior may result from private information not publicly shared. More specifically, both of these papers showed that individuals, acting sequentially on the basis of private information and public knowledge about the behavior of others, may end up choosing the socially undesirable option. A large subsequent literature has examined the causes and consequences of such "herds" and information cascades.
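
The core mechanism can be sketched in a few lines. The minimal version below is an illustrative reduction, not the papers' formal model: each agent receives a noisy binary signal about which of two options is better, and with symmetric signals Bayesian updating reduces to counting, so once the lead of observed choices exceeds what one private signal can overturn, everyone imitates, and a noticeable fraction of runs locks onto the inferior option.

    # Minimal information-cascade sketch in the spirit of Bikhchandani,
    # Hirshleifer and Welch (1992). With symmetric binary signals, Bayesian
    # updating reduces to counting past choices.
    import random

    def run_cascade(n_agents=100, signal_accuracy=0.7, seed=None):
        rng = random.Random(seed)
        good_option = 1                   # the truth, unknown to the agents
        lead = 0                          # (# chose 1) - (# chose 0) so far
        choices = []
        for _ in range(n_agents):
            signal = good_option if rng.random() < signal_accuracy else 1 - good_option
            if lead > 1:
                choice = 1                # up-cascade: history outweighs any signal
            elif lead < -1:
                choice = 0                # down-cascade
            else:
                choice = signal           # follow own signal (ties broken this way)
            choices.append(choice)
            lead += 1 if choice == 1 else -1
        return choices

    wrong = sum(run_cascade(seed=s)[-1] == 0 for s in range(1000))
    print(f"{wrong / 10:.1f}% of runs end herding on the inferior option")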

The second strand concerns information aggregation in market contexts. A very early reference is the classic paper by Grossman and Stiglitz (1976), which showed that uninformed traders in a market context can become informed through the price in such a way that private information is aggregated correctly and efficiently. Subsequent work has shown that markets may systematically overweight public information; it has also studied the role of strategic trading as an obstacle to efficient information aggregation.

Marketing

Herd behavior is often a useful tool in marketing and, if used properly, can lead to increases in sales and changes to the structure of society. Whilst it has been shown that financial incentives cause action in large numbers of people, herd mentality often wins out in a case of "Keeping up with the Joneses".

Brand and product success

Communications technologies have contributed to the proliferation of consumer choice and "the power of crowds". Consumers increasingly have access to opinions and information from both opinion leaders and opinion formers on platforms built largely on user-generated content, and thus have more tools with which to complete any decision-making process. Popularity is read as an indication of better quality, and consumers will use the opinions of others posted on these platforms as a powerful compass to guide them towards products and brands that align with their preconceptions and with the decisions of others in their peer groups. Taking into account differences in needs and position in the socialization process, Lessig and Park examined groups of students and housewives and the influence that these reference groups have on one another. By way of herd mentality, students tended to encourage each other towards beer, hamburgers and cigarettes, whilst housewives tended to encourage each other towards furniture and detergent. Whilst this particular study was done in 1977, its findings remain relevant in today's society. A study by Burke, Leykin, Li and Zhang in 2014 on social influence on shopper behavior showed that shoppers are influenced by direct interactions with companions, and that as group size grows, herd behaviour becomes more apparent. Discussions that create excitement and interest have a greater impact on touch frequency, and purchase likelihood grows with the greater involvement caused by a large group. Shoppers in the Midwestern American shopping outlet studied were monitored and their purchases noted, and it was found that, up to a point, potential customers preferred stores with moderate levels of traffic. The other people in a store not only serve as company, but also provide an inference point on which potential customers can model their behavior and make purchase decisions, as with any reference group or community.

Social media can also be a powerful tool in perpetuating herd behaviour. Its vast volume of user-generated content serves as a platform for opinion leaders to take the stage and influence purchase decisions, and recommendations from peers along with evidence of positive online experiences all help consumers make purchasing decisions. Gunawan and Huarng's 2015 study concluded that social influence is essential in framing attitudes towards brands, which in turn leads to purchase intention. Influencers form norms which their peers are found to follow, and targeting extroverted personalities increases the chances of purchase even further, because stronger personalities tend to be more engaged on consumer platforms and thus spread word-of-mouth information more efficiently. Many brands have begun to realise the importance of brand ambassadors and influencers, and these instances show increasingly clearly that herd behaviour can be used to drive sales and profits markedly in favour of a brand.

Social marketing

Marketing can easily extend beyond its commercial roots, in that it can be used to encourage action on health, environmentalism and general social issues. Herd mentality often takes a front seat in social marketing, paving the way for campaigns such as Earth Day and the variety of anti-smoking and anti-obesity campaigns seen in every country. Within cultures and communities, marketers must aim to influence opinion leaders who in turn influence each other, as it is the herd mentality of any group of people that ensures a social campaign's success. A campaign run by Som la Pera in Spain to combat teenage obesity found that campaigns run in schools are more effective because of the influence of teachers and peers, students' high visibility, and their interaction with one another. Opinion leaders in schools created the logo and branding for the campaign, built content for social media and led in-school presentations to engage audience interaction. It was thus concluded that the success of the campaign was rooted in the fact that its means of communication was the audience itself, giving the target audience a sense of ownership and empowerment. As mentioned previously, students exert a high level of influence over one another, and by encouraging stronger personalities to lead opinions, the organizers of the campaign were able to secure the attention of other students who identified with the reference group.

Herd behaviour applies not only to students in schools, where they are highly visible, but also to communities, where perceived action plays a strong role. Between 2003 and 2004, California State University carried out a study to measure household conservation of energy and the motivations for doing so. It was found that factors like saving the environment, saving money or social responsibility did not have as great an impact on each household as the perceived behaviour of their neighbours did. Although the financial incentive of saving money, closely followed by the moral incentive of protecting the environment, is often thought of as a community's greatest guiding compass, more households responded to the encouragement to save energy when they were told that 77% of their neighbours were using fans instead of air conditioning, suggesting that communities are more likely to engage in a behaviour if they think that everyone else is already taking part.

The herd behaviours shown in these two examples demonstrate that herding can be a powerful tool in social marketing and, if harnessed correctly, has the potential to achieve great change. It is clear that opinion leaders and their influence achieve huge reach among their reference groups and thus can be used as the loudest voices to encourage others in any collective direction.

Collective animal behavior

Sort sol. Starling flock at sunset in Denmark

Collective animal behaviour is a form of social behavior involving the coordinated behavior of large groups of similar animals as well as the emergent properties of these groups. This can include the costs and benefits of group membership, the transfer of information, decision-making processes, locomotion and synchronization of the group. Studying the principles of collective animal behavior has relevance to human engineering problems through the philosophy of biomimetics. For instance, determining the rules by which an individual animal navigates relative to its neighbors in a group can lead to advances in the deployment and control of groups of swimming or flying micro-robots such as UAVs (unmanned aerial vehicles).

Examples

Examples of collective animal behavior include the flocking of birds, the schooling of fish, the herding of ungulates, and the swarming of insects.

History

The basis of collective animal behaviour originated from the study of collective phenomena; that is, repeated interactions among individuals that produce large-scale patterns. The foundation of collective phenomena originates from the idea that collective systems can be understood using a common set of techniques. For example, Nicolis and Prigogine (1977) employed non-linear thermodynamics to help explain similarities between collective systems at different scales. Other studies aim to use physics, mathematics and chemistry to provide frameworks for studying collective phenomena.

Proposed functions

Many functions of animal aggregations have been proposed. These proposed functions may be grouped into the four following categories: social and genetic, anti-predator, enhanced foraging, and increased locomotion efficiency.

Social interaction

Support for the social and genetic function of aggregations, especially those formed by fish, can be seen in several aspects of their behavior. For instance, experiments have shown that individual fish removed from a school have a higher respiratory rate than those found in the school. This effect has been partly attributed to stress, although hydrodynamic factors were considered more important in this particular study. The calming effect of being with conspecifics may thus provide a social motivation for remaining in an aggregation. Herring, for instance, become very agitated if they are isolated from conspecifics. Fish schools have also been proposed to serve a reproductive function, since they provide increased access to potential mates. However, some scientists have demonstrated disadvantages to mating in aggregations using robotic male crabs: a female approaching a cluster is at higher risk, and her ability to compare males increases mate competition.

Protection from predators

School of goldband fusiliers

Several anti-predator functions of animal aggregations have been proposed. One potential method by which fish schools or bird flocks may thwart predators is the 'predator confusion effect' proposed and demonstrated by Milinski and Heller (1978). This theory is based on the idea that it becomes difficult for predators to pick out individual prey from groups because the many moving targets create a sensory overload of the predator's visual channel. Milinski and Heller's findings have been corroborated in both experiments and computer simulations.

A second potential anti-predator effect of animal aggregations is the "many eyes" hypothesis. This theory states that as the size of the group increases, the task of scanning the environment for predators can be spread out over many individuals. Not only does this mass collaboration presumably provide a higher level of vigilance, it could also allow more time for individual feeding.

A third hypothesis for an anti-predatory effect of animal aggregation is the "encounter dilution" effect. Hamilton, for instance, proposed that the aggregation of animals was due to a "selfish" avoidance of a predator and was thus a form of cover-seeking. Another formulation of the theory was given by Turner and Pitcher and was viewed as a combination of detection and attack probabilities. In the detection component of the theory, it was suggested that potential prey might benefit by living together since a predator is less likely to chance upon a single group than a scattered distribution. In the attack component, it was thought that an attacking predator is less likely to eat a particular animal when a greater number of individuals are present. In sum, an individual has an advantage if it is in the larger of two groups, assuming that the probability of detection and attack does not increase disproportionately with the size of the group.
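
The attack component lends itself to back-of-envelope arithmetic. The sketch below uses made-up numbers (a detection probability that grows sub-linearly with group size, and one victim per attack) to show how per-individual risk falls with group size under these assumptions:

    # Hypothetical illustration of the Turner-Pitcher combination:
    # per-individual risk = P(group is detected) x P(being the victim | attack).
    # The numbers are invented; the conclusion holds whenever detection grows
    # less than proportionally with group size.
    def risk_per_individual(group_size, p_detect_single=0.1):
        p_detect = min(1.0, p_detect_single * group_size**0.5)  # saturating detection
        p_victim = 1.0 / group_size      # predator takes one individual per attack
        return p_detect * p_victim

    for n in (1, 10, 100):
        print(f"group of {n:>3}: per-individual risk = {risk_per_individual(n):.4f}")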

Enhanced foraging

A third proposed benefit of animal groups is enhanced foraging. This capability was demonstrated by Pitcher and others in their study of foraging behavior in shoaling cyprinids. In this study, the time it took for groups of minnows and goldfish to find a patch of food was quantified. The number of fish in the groups was varied, and a statistically significant decrease in the amount of time necessary for larger groups to find food was established. Further support for an enhanced foraging capability of schools is seen in the structure of schools of predatory fish. Partridge and others analyzed the school structure of Atlantic bluefin tuna from aerial photographs and found that the school assumed a parabolic shape, a fact that is suggestive of cooperative hunting in this species (Partridge et al., 1983).

Increased locomotion efficiency

This theory states that groups of animals moving in a fluid environment may save energy when swimming or flying together, much in the way that bicyclists may draft one another in a peloton. Geese flying in a Vee formation are also thought to save energy by flying in the updraft of the wingtip vortex generated by the previous animal in the formation. Ducklings have also been shown to save energy by swimming in a line. Increased efficiencies in swimming in groups have also been proposed for schools of fish and Antarctic krill.

Another example can be seen in homing pigeons. When a homing pigeon is released with other individuals from its roost, the group shows increased efficiency in decision-making, shortening the route taken to return home and thus saving energy when flying between locations.

Costs of group living

Ectoparasitism and disease

Animals that form colonies incur a cost of living in groups. Colonies exhibit close physical proximity and increased contact between individuals, thus increasing the transmission of disease and ectoparasites, a universal hazard of animals living in groups.

For example, cliff swallows that are commonly parasitized by swallow bugs incur a cost when forming colonies, as these parasitic bugs increase the mortality rates of cliff swallow nestlings. One study shows that the number of swallow bugs found in cliff swallow nests increased with colony size, thus reducing the overall success of these colonies.

Larger groups of animals tend to harbour an increased number of pathogens and are at a higher risk of epidemics. This is particularly due to the large amount of waste material produced by larger groups, allowing for a favourable environment for pathogens to thrive.

Intraspecific competition

Another cost of group living is competition over food resources. As individuals group together, the nutritional requirements of the larger group increase compared to those of smaller groups. This causes an increased energetic cost, as individuals must now travel farther to visit resource patches.

An example of intraspecific competition can be seen within groups of whales and dolphins. Female bottlenose dolphins with similar home ranges tend to have varied foraging habits, in an effort to reduce and negate the intraspecific competition for resources. The benefit of group living for defence from predators is very evident in nature; however, where resource competition is high, it affects the mortality of certain individuals. This can be seen in species of shoaling fish, where the initial aggregation of individuals into a group allowed for protection from predators, but the limited resources available change over time and the mortality rates of these fish begin to increase, showing that resource competition is an important regulator of reef fish groups after the initial benefits of refuge grouping and predatory protection.

An interesting contrast to the benefit of increased group size on foraging efficiency can be seen in nature, particularly due to intraspecific interactions. A study conducted on Alaskan moose shows that with increasing group size there is a decrease in foraging efficiency. This is a result of increased social aggression in the groups: the individuals of the group spend most of their time in alert-alarm postures, thus spending less time foraging and feeding, which reduces their foraging efficiency.

Reproduction and development

With increasing colony size and competition for resources within individuals of a group, reproductive rates and the development of offspring may vary due to reduced resource availability. For example, a study conducted on groups of leaf monkeys shows that infant monkeys in larger groups developed more slowly than those in smaller groups. This staggered infant development in the larger groups was closely related to the reduced energetic gain of mothers with reduced available nutrition, negatively affecting infant developmental rates. It was also shown that females within the larger groups reproduced more slowly than females in smaller groups.

The Eurasian badger (Meles meles) is an example of a species that incurs a cost of group living in reduced reproductive success. Females present in larger groups of badgers have an increased rate of reproductive failure compared to solitary badgers. This is a result of increased reproductive competition among the females in the group.

Stress

Another cost of group living is the stress level of individuals within a group. Stress levels in group living vary depending on the size of the colony or group. A large group of animals may suffer greater stress arising from intraspecific food competition. In contrast, smaller groups may have increased stress levels arising from the lack of adequate defense from predators as well as reduced foraging efficiency.

An example can be seen in a study conducted on ring-tailed lemurs (Lemur catta). The study found that an optimum group size of around 10-20 individuals produces the lowest level of cortisol (an indicator of stress), while groups smaller or larger than 10-20 individuals showed an increased level of cortisol production, and thus increased stress within the individuals of those groups.

Inbreeding

Another proposed cost of group living is the cost incurred to avoid inbreeding. Individuals, whether male or female, may disperse from groups in an effort to avoid inbreeding. This poses a more detrimental effect on smaller, isolated groups of individuals, as they are at a greater risk of inbreeding, which suppresses the group's overall fitness.

Group structure

The structure of large animal groups has been difficult to study because of the large number of animals involved. The experimental approach is therefore often complemented by mathematical modeling of animal aggregations.

Experimental approach

The purpose of experiments investigating the structure of animal aggregations is to determine the 3D position of each animal within a volume at each point in time. It is important to know the internal structure of the group because that structure can be related to the proposed motivations for animal grouping. This capability requires the use of multiple cameras trained on the same volume in space, a technique known as stereophotogrammetry. When hundreds or thousands of animals occupy the study volume, it becomes difficult to identify each individual. In addition, animals may block one another in the camera views, a problem known as occlusion. Once the location of each animal at each point in time is known, various parameters describing the animal group can be extracted.

These parameters include:

Density: The density of an animal aggregation is the number of animals divided by the volume (or area) occupied by the aggregation. Density may not be a constant throughout the group. For instance, starling flocks have been shown to maintain higher densities on the edges than in the middle of the flock, a feature that is presumably related to defense from predators.

Polarity: The group polarity describes whether the animals in the group are all pointing in the same direction or not. In order to determine this parameter, the average orientation of all animals in the group is determined. For each animal, the angular difference between its orientation and the group orientation is then found. The group polarity is the average of these differences (Viscido 2004).

Nearest Neighbor Distance: The nearest neighbor distance (NND) describes the distance between the centroid of one animal (the focal animal) and the centroid of the animal nearest to the focal animal. This parameter can be found for each animal in an aggregation and then averaged. Care must be taken to account for the animals located at the edge of an animal aggregation. These animals have no neighbor in one direction.

Nearest Neighbor Position: In a polar coordinate system, the nearest neighbor position describes the angle and distance of the nearest neighbor to a focal animal.

Packing Fraction: Packing fraction is a parameter borrowed from physics to define the organization (or state, i.e. solid, liquid, or gas) of 3D animal groups. It is an alternative measure to density. In this parameter, the aggregation is idealized as an ensemble of solid spheres, with each animal at the center of a sphere. The packing fraction is defined as the ratio of the total volume occupied by all individual spheres to the global volume of the aggregation (Cavagna 2008). Values range from zero to one, where a small packing fraction represents a dilute system like a gas. Cavagna found that the packing fraction for groups of starlings was 0.012.

Integrated Conditional Density: This parameter measures the density at various length scales and therefore describes the homogeneity of density throughout an animal group.

Pair Distribution Function: This parameter is usually used in physics to characterize the degree of spatial order in a system of particles. It also describes density, but this measure describes the density at a given distance from a reference point. Cavagna et al. found that flocks of starlings exhibited more structure than a gas but less than a liquid.
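
To make these definitions concrete, the following sketch computes density, polarity, mean nearest-neighbour distance and packing fraction for a synthetic 3D group (random positions and headings generated with numpy; the body radius is an assumed value, not a measurement):

    # Illustrative computation of group-structure parameters on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200
    pos = rng.uniform(0, 10, (N, 3))                     # positions (metres)
    head = rng.normal(size=(N, 3))
    head /= np.linalg.norm(head, axis=1, keepdims=True)  # unit heading vectors

    # Density: animals per unit of occupied (bounding-box) volume.
    volume = np.prod(pos.max(axis=0) - pos.min(axis=0))
    density = N / volume

    # Polarity (Viscido 2004): mean angular deviation from the group heading.
    group_heading = head.mean(axis=0)
    group_heading /= np.linalg.norm(group_heading)
    angles = np.degrees(np.arccos(np.clip(head @ group_heading, -1, 1)))
    polarity = angles.mean()

    # Mean nearest-neighbour distance over all individuals.
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nnd = d.min(axis=1).mean()

    # Packing fraction: total sphere volume / group volume (assumed body radius).
    body_radius = 0.2
    packing = N * (4 / 3) * np.pi * body_radius**3 / volume

    print(f"density={density:.3f}/m^3  polarity={polarity:.1f} deg  "
          f"NND={nnd:.2f} m  packing fraction={packing:.4f}")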

Modeling approach

The simplest mathematical models of animal aggregations generally instruct the individual animals to follow three rules:

  1. Move in the same direction as your neighbor
  2. Remain close to your neighbors
  3. Avoid collisions with your neighbors
A diagram illustrating the difference between 'metric distance' and 'topological distance' in reference to fish schools

Two examples of this simulation are the Boids program created by Craig Reynolds in 1986 and the Self-Propelled Particle model. Many current models use variations on these rules. For instance, many models implement these three rules through layered zones around each animal. In the zone of repulsion, very close to the animal, the focal animal will seek to distance itself from its neighbors in order to avoid a collision. In the slightly further away zone of alignment, a focal animal will align its direction of motion with its neighbors. In the outermost zone of attraction, which extends as far away from the focal animal as it is able to sense, the focal animal will move towards a neighbor. The shape of these zones is affected by the sensory capabilities of the animal. For example, the visual field of a bird does not extend behind its body. Fish rely on both vision and hydrodynamic signals relayed through their lateral lines, while Antarctic krill rely on vision and hydrodynamic signals relayed through their antennae.
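
A minimal 2D implementation of these zoned rules might look like the following sketch; the zone radii, speed and other parameters are invented for illustration and are not taken from Reynolds' Boids or any published model:

    # Minimal zoned flocking sketch: repulsion, alignment and attraction zones.
    import numpy as np

    rng = np.random.default_rng(1)
    N, R_REP, R_ALI, R_ATT, SPEED, DT = 50, 1.0, 3.0, 6.0, 1.0, 0.1
    pos = rng.uniform(0, 20, (N, 2))
    vel = rng.normal(size=(N, 2))
    vel = SPEED * vel / np.linalg.norm(vel, axis=1, keepdims=True)

    def step(pos, vel):
        new_vel = vel.copy()
        for i in range(N):
            offsets = pos - pos[i]
            dist = np.linalg.norm(offsets, axis=1)
            dist[i] = np.inf                         # ignore self
            rep = dist < R_REP                       # rule 3: avoid collisions
            ali = (dist >= R_REP) & (dist < R_ALI)   # rule 1: align with neighbours
            att = (dist >= R_ALI) & (dist < R_ATT)   # rule 2: stay close to the group
            if rep.any():
                desired = -offsets[rep].sum(axis=0)  # repulsion overrides all else
            else:
                desired = np.zeros(2)
                if ali.any():
                    desired += vel[ali].mean(axis=0)
                if att.any():
                    desired += offsets[att].mean(axis=0)
                if not (ali.any() or att.any()):
                    desired = vel[i]                 # no neighbours: keep heading
            norm = np.linalg.norm(desired)
            if norm > 0:
                new_vel[i] = SPEED * desired / norm
        return pos + DT * new_vel, new_vel

    for _ in range(200):
        pos, vel = step(pos, vel)
    # Group polarity: 1.0 means all animals are moving in the same direction.
    print("final group polarity:", np.linalg.norm(vel.mean(axis=0)) / SPEED)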

Recent studies of starling flocks have shown, however, that each bird modifies its position relative to the six or seven animals directly surrounding it, no matter how close or how far away those animals are. Interactions between flocking starlings are thus based on a topological rule rather than a metric rule. It remains to be seen whether the same rule can be applied to other animals. Another recent study, based on an analysis of high speed camera footage of flocks above Rome and assuming minimal behavioural rules, has convincingly simulated a number of aspects of flock behaviour.
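
The distinction between the two neighbourhood rules is easy to state in code. In this illustrative numpy sketch, a metric rule selects every animal within a fixed radius, while a topological rule always selects the k nearest animals, however near or far they are:

    # Metric vs topological neighbourhoods (illustrative; k ~ 6-7 for starlings
    # according to the studies discussed above).
    import numpy as np

    def metric_neighbours(pos, i, radius):
        d = np.linalg.norm(pos - pos[i], axis=1)
        return np.flatnonzero((d > 0) & (d < radius))   # everyone within the radius

    def topological_neighbours(pos, i, k=7):
        d = np.linalg.norm(pos - pos[i], axis=1)
        return np.argsort(d)[1:k + 1]                   # k nearest, skipping self

    pos = np.random.default_rng(2).uniform(0, 10, (50, 3))
    print(metric_neighbours(pos, 0, radius=2.0))
    print(topological_neighbours(pos, 0))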

Collective decision making

Aggregations of animals are faced with decisions which they must make if they are to remain together. For a school of fish, an example of a typical decision might be which direction to swim when confronted by a predator. Social insects such as ants and bees must collectively decide where to build a new nest. A herd of elephants must decide when and where to migrate. How are these decisions made? Do stronger or more experienced 'leaders' exert more influence than other group members, or does the group make a decision by consensus? The answer probably depends on the species. While the role of a leading matriarch in an elephant herd is well known, studies have shown that some animal species use a consensus approach in their collective decision-making process.

A recent investigation showed that small groups of fish used consensus decision-making when deciding which fish model to follow. The fish did this by a simple quorum rule such that individuals watched the decisions of others before making their own decisions. This technique generally resulted in the 'correct' decision but occasionally cascaded into the 'incorrect' decision. In addition, as the group size increased, the fish made more accurate decisions in following the more attractive fish model. Consensus decision-making, a form of collective intelligence, thus effectively uses information from multiple sources to generally reach the correct conclusion.
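
A quorum rule of this general kind can be sketched as follows. The rule below is hypothetical rather than the one estimated in the study, and the assumption that the quorum threshold scales with group size is an illustrative choice: early fish rely on weak private information, and once enough of them commit to an option the rest tend to follow, so larger groups pool more independent information and choose correctly more often.

    # Hypothetical quorum-rule sketch: weak private signals plus a quorum response.
    import random

    def school_decides(n_fish=20, p_private=0.55, seed=None):
        rng = random.Random(seed)
        quorum = max(2, n_fish // 4)   # assumed: threshold scales with group size
        committed = {"good": 0, "bad": 0}
        for _ in range(n_fish):
            if max(committed.values()) >= quorum:
                majority = max(committed, key=committed.get)
                minority = "bad" if majority == "good" else "good"
                choice = majority if rng.random() < 0.9 else minority
            else:
                # Weak private preference for the genuinely better option.
                choice = "good" if rng.random() < p_private else "bad"
            committed[choice] += 1
        return max(committed, key=committed.get)

    for n in (4, 8, 16, 32):
        correct = sum(school_decides(n_fish=n, seed=s) == "good" for s in range(2000))
        print(f"group of {n:>2}: correct in {correct / 20:.1f}% of runs")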

Some simulations of collective decision-making use the Condorcet method to model the way groups of animals come to consensus.
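
As a minimal illustration, a Condorcet winner among candidate directions can be computed directly from each animal's ranked preferences (the directions and votes here are hypothetical):

    # Condorcet aggregation sketch: the group adopts the option that beats every
    # other option in pairwise majority contests, if such an option exists.
    def condorcet_winner(rankings):
        candidates = rankings[0]
        for a in candidates:
            if all(sum(r.index(a) < r.index(b) for r in rankings) > len(rankings) / 2
                   for b in candidates if b != a):
                return a
        return None  # a Condorcet winner need not exist (Condorcet's paradox)

    votes = [["north", "east", "west"]] * 4 + [["east", "north", "west"]] * 3
    print(condorcet_winner(votes))  # -> "north"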

Timeline of the universe

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Timeline_of_the_universe

Diagram of Evol...