The 1000 Genomes Project (abbreviated as 1KGP), launched in January 2008, was an international research effort to establish by far the most detailed catalogue of human genetic variation. Scientists planned to sequence the genomes
of at least one thousand anonymous participants from a number of
different ethnic groups within the following three years, using newly developed technologies
which were faster and less expensive. In 2010, the project finished its
pilot phase, which was described in detail in a publication in the
journal Nature. In 2012, the sequencing of 1,092 genomes was announced in a Nature publication. In 2015, two papers in Nature reported the project's results, its completion, and opportunities for future research.
Many rare variations, restricted to closely related groups, were
identified, and eight structural-variation classes were analyzed.
The project unites multidisciplinary research teams from institutes around the world, including in China, Italy, Japan, Kenya, Nigeria, Peru, the United Kingdom, and the United States. Each will contribute to the enormous sequence dataset and to a refined human genome map, which will be freely accessible through public databases to the scientific community and the general public alike.
By providing an overview of all human genetic variation, the
consortium will generate a valuable tool for all fields of biological
science, especially in the disciplines of genetics, medicine, pharmacology, biochemistry, and bioinformatics.
Changes in the number and order of genes create genetic diversity within and between populations.
The random sampling of gametes during sexual reproduction leads to genetic drift
— a random fluctuation in the population frequency of a trait — in
subsequent generations and would result in the loss of all variation in
the absence of external influence. It is postulated that the rate of
genetic drift is inversely proportional to population size, and that it
may be accelerated in specific situations such as bottlenecks, where the population size is reduced for a certain period of time, and by the founder effect (individuals in a population tracing back to a small number of founding individuals).
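The inverse relationship between drift and population size can be illustrated with a minimal Wright–Fisher simulation; this is a sketch, with population sizes, starting frequency, and generation count chosen purely for illustration:

```python
import random

def wright_fisher(n_individuals, freq, generations, seed=0):
    """Simulate drift of one biallelic locus under the Wright-Fisher model.

    Each generation, 2N allele copies are drawn at random from the previous
    generation's allele frequency (the random sampling of gametes).
    Returns the trajectory of allele frequencies.
    """
    rng = random.Random(seed)
    n_copies = 2 * n_individuals          # diploid: 2N allele copies
    trajectory = [freq]
    for _ in range(generations):
        count = sum(rng.random() < freq for _ in range(n_copies))
        freq = count / n_copies
        trajectory.append(freq)
        if freq in (0.0, 1.0):            # allele lost or fixed: variation gone
            break
    return trajectory

# Smaller populations drift faster and tend to lose variation sooner.
for n in (20, 200, 2000):
    traj = wright_fisher(n_individuals=n, freq=0.5, generations=500)
    print(f"N={n}: final frequency {traj[-1]:.2f} after {len(traj) - 1} generations")
```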
Anzai et al. demonstrated that indels account for 90.4% of all observed variations in the sequence of the major histocompatibility locus (MHC) between humans and chimpanzees. After taking multiple indels into consideration, the high degree of genomic similarity between the two species (98.6% nucleotide sequence identity) drops to only 86.7%. For example, a large deletion of 95 kilobases (kb) between the loci of the human MICA and MICB genes results in a single hybrid chimpanzee MIC gene, linking this region to a species-specific handling of several retroviral infections and the resultant susceptibility to various autoimmune diseases. The authors conclude that instead of more subtle SNPs, indels were the driving mechanism in primate speciation.
Besides mutations and SNPs, structural variants such as copy-number variants (CNVs) contribute to the genetic diversity in human populations. Using microarrays,
almost 1,500 copy number variable regions, covering around 12% of the
genome and containing hundreds of genes, disease loci, functional
elements and segmental duplications, have been identified in the HapMap
sample collection. Although the specific function of CNVs remains
elusive, the fact that CNVs span more nucleotide content per genome than
SNPs emphasizes the importance of CNVs in genetic diversity and
evolution.
Investigating human genomic variations holds great potential for
identifying genes that might underlie differences in disease resistance
(e.g. MHC region) or drug metabolism.
Natural selection
Natural selection in the evolution of a trait can be divided into three classes. Directional or positive selection refers to a situation where a certain allele has a greater fitness than other alleles, consequently increasing its population frequency (e.g. antibiotic resistance of bacteria). In contrast, stabilizing or negative selection
(also known as purifying selection) lowers the frequency of, or even removes, alleles from a population due to disadvantages associated with them relative to other alleles. Finally, a number of forms of balancing selection exist; these increase genetic variation within a species through overdominance (heterozygous individuals are fitter than homozygous individuals, e.g. G6PD, a gene involved in both haemolytic anaemia and malaria resistance) or vary spatially within a species that inhabits different niches, thus favouring different alleles.
Some genomic differences may not affect fitness. Neutral variation,
previously thought to be “junk” DNA, is unaffected by natural selection
resulting in higher genetic variation at such sites when compared to
sites where variation does influence fitness.
It is not fully clear how natural selection has shaped population
differences; however, genetic candidate regions under selection have
been identified recently. Patterns of DNA polymorphisms
can be used to reliably detect signatures of selection and may help to
identify genes that might underlie variation in disease resistance or
drug metabolism. Barreiro et al. found evidence that negative selection has reduced population differentiation at the amino acid–altering
level (particularly in disease-related genes), whereas positive
selection has ensured regional adaptation of human populations by
increasing population differentiation in gene regions (mainly nonsynonymous and 5'-untranslated region variants).
It is thought that most complex and Mendelian diseases
(except diseases with late onset, assuming that older individuals no
longer contribute to the fitness of their offspring) will have an effect
on survival and/or reproduction, thus, genetic factors underlying those
diseases should be influenced by natural selection. However, diseases that have a late onset today could have been childhood diseases in the past, as genes delaying disease progression could have undergone
selection. Gaucher disease (mutations in the GBA gene), Crohn's disease (mutation of NOD2) and familial hypertrophic cardiomyopathy (mutations in MYH7, TNNT2, TPM1 and MYBPC3)
are all examples of negative selection. These disease mutations are
primarily recessive and segregate as expected at a low frequency,
supporting the hypothesized negative selection. There is evidence that
the genetic basis of type 1 diabetes may have undergone positive selection.
Few cases have been reported where disease-causing mutations appear at the high frequencies supported by balancing selection. The most prominent example is mutations at the G6PD locus: in the homozygous state they cause G6PD enzyme deficiency and, consequently, haemolytic anaemia, whereas in the heterozygous state they are partially protective against malaria.
Other possible explanations for segregation of disease alleles at
moderate or high frequencies include genetic drift, recent shifts towards positive selection due to environmental changes such as diet, and genetic hitch-hiking.
Genome-wide comparative analyses
of different human populations, as well as between species (e.g. human
versus chimpanzee) are helping us to understand the relationship between
diseases and selection, and provide evidence that mutations in constrained genes are disproportionately associated with heritable disease phenotypes.
Genes implicated in complex disorders tend to be under less negative
selection than Mendelian disease genes or non-disease genes.
Project description
Goals
There
are two kinds of genetic variants related to disease. The first are
rare genetic variants that have a severe effect predominantly on simple
traits (e.g. Cystic fibrosis, Huntington disease). The second, more common, genetic variants have a mild effect and are thought to be implicated in complex traits (e.g. Cognition, Diabetes, Heart Disease).
Between these two types of genetic variants lies a significant gap of
knowledge, which the 1000 Genomes Project is designed to address.
The primary goal of this project is to create a complete and detailed catalogue of human genetic variations, which in turn can be used for association studies
relating genetic variation to disease. By doing so the consortium aims
to discover >95% of the variants (e.g. SNPs, CNVs, indels) with minor allele frequencies as low as 1% across the genome and 0.1–0.5% in gene regions, as well as to estimate the population frequencies, haplotype backgrounds and linkage disequilibrium patterns of variant alleles.
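As a rough illustration of why sample size matters for the 1% frequency target: under simple independent sampling, the chance that an allele of frequency p appears at least once among 2n sampled chromosomes is 1 − (1 − p)^(2n). A minimal sketch (the sample sizes are illustrative, not the project's actual design calculation):

```python
def prob_variant_observed(maf, n_individuals):
    """Probability that an allele with frequency `maf` is seen at least once
    among 2n chromosomes, assuming independent sampling (a simplification)."""
    return 1 - (1 - maf) ** (2 * n_individuals)

for n in (100, 500, 1000, 2500):
    print(f"n={n} individuals: P(observe a 1% variant) = {prob_variant_observed(0.01, n):.3f}")
```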
Secondary goals will include the support of better SNP and probe selection for genotyping platforms in future studies and the improvement of the human reference sequence.
Furthermore, the completed database will be a useful tool for studying
regions under selection, variation in multiple populations and
understanding the underlying processes of mutation and recombination.
Outline
The human genome consists of approximately 3 billion DNA base pairs and is estimated to carry around 20,000 protein coding genes.
In designing the study the consortium needed to address several
critical issues regarding the project metrics such as technology
challenges, data quality standards and sequence coverage.
Over the course of the next three years, scientists at the Sanger Institute, BGI Shenzhen and the National Human Genome Research Institute’s
Large-Scale Sequencing Network are planning to sequence a minimum of
1,000 human genomes. Due to the large amount of sequence data that needs to be generated and analyzed, it is possible that other participants may
be recruited over time.
Almost 10 billion bases will be sequenced per day over the two-year production phase. This equates to more than two human genomes every 24 hours, a groundbreaking capacity. Challenging the leading experts of bioinformatics and statistical genetics, the sequence dataset will comprise 6 trillion DNA bases, 60-fold more sequence data than has been published in DNA databases over the past 25 years.
To determine the final design of the full project, three pilot
studies were designed and will be carried out within the first year of
the project. The first pilot will sequence 180 people from three major geographic groups
at low coverage (2x). For the second pilot study,
the genomes of two nuclear families (both parents and an adult child)
are going to be sequenced with deep coverage (20x per genome). The third
pilot study involves sequencing the coding regions (exons) of 1,000
genes in 1,000 people with deep coverage (20x).
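Coverage here means the average number of reads overlapping each base, so coverage ≈ (number of reads × read length) / genome length. A back-of-the-envelope sketch of what the 2x and 20x targets imply, using a rounded genome size and an assumed short-read length:

```python
GENOME_LENGTH = 3_000_000_000   # ~3 Gb haploid human genome (rounded)
READ_LENGTH = 100               # assumed short-read length in bases

def reads_required(target_coverage, genome_length=GENOME_LENGTH, read_length=READ_LENGTH):
    """Number of reads needed so that reads * read_length / genome_length = coverage."""
    return target_coverage * genome_length / read_length

for coverage in (2, 20):
    print(f"{coverage}x coverage: ~{reads_required(coverage):.2e} reads per genome")
```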
It has been estimated that the project would likely cost more
than $500 million if standard DNA sequencing technologies were used.
Therefore, several new technologies (e.g. Solexa, 454, SOLiD)
will be applied, lowering the expected costs to between $30 million and
$50 million. The major support will be provided by the Wellcome Trust Sanger Institute in Hinxton, England; the Beijing Genomics Institute, Shenzhen (BGI Shenzhen), China; and the NHGRI, part of the National Institutes of Health (NIH).
Locations of population samples of 1000 Genomes Project. Each circle represents the number of sequences in the final release.
Based on the overall goals for the project, the samples will be chosen to provide power in populations where association studies
for common diseases are being carried out. Furthermore, the samples do
not need to have medical or phenotype information since the proposed
catalogue will be a basic resource on human variation.
For the pilot studies human genome samples from the HapMap collection will be sequenced. It will be useful to focus on samples that have additional data available (such as ENCODE sequence, genome-wide genotypes, fosmid-end sequence, structural variation assays, and gene expression) to be able to compare the results with those from other projects.
Complying with extensive ethical procedures, the 1000 Genomes
Project will then use samples from volunteer donors. The following
populations will be included in the study: Yoruba in Ibadan (YRI), Nigeria; Japanese in Tokyo (JPT); Chinese in Beijing (CHB); Utah residents with ancestry from northern and western Europe (CEU); Luhya in Webuye, Kenya (LWK); Maasai in Kinyawa, Kenya (MKK); Toscani in Italy (TSI); Peruvians in Lima, Peru (PEL); Gujarati Indians in Houston (GIH); Chinese in metropolitan Denver (CHD); people of Mexican ancestry in Los Angeles (MXL); and people of African ancestry in the southwestern United States (ASW).
Community meeting
Data generated by the 1000 Genomes Project is widely used by the genetics community, making the first 1000 Genomes Project paper one of the most cited papers in biology.
To support this user community, the project held a community analysis
meeting in July 2012 that included talks highlighting key project
discoveries, their impact on population genetics and human disease
studies, and summaries of other large scale sequencing studies.
Project findings
Pilot phase
The pilot phase consisted of three projects:
low-coverage whole-genome sequencing of 179 individuals from 4 populations
high-coverage sequencing of 2 trios (mother-father-child)
exon-targeted sequencing of 697 individuals from 7 populations
It was found that on average, each person carries around 250-300
loss-of-function variants in annotated genes and 50-100 variants
previously implicated in inherited disorders. Based on the two trios, it
is estimated that the rate of de novo germline mutation is
approximately 10⁻⁸ per base per generation.
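At that rate, the expected number of new germline mutations per child follows from multiplying the per-base rate by the number of bases transmitted; a quick sketch, treating the ~6 Gb diploid genome size as a rounded assumption:

```python
mutation_rate = 1e-8                      # per base per generation (project estimate)
diploid_genome_bp = 2 * 3_000_000_000     # ~6 Gb transmitted per child (rounded)

expected_de_novo = mutation_rate * diploid_genome_bp
print(f"Expected de novo mutations per generation: ~{expected_de_novo:.0f}")  # roughly a few dozen
```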
Metagenomics
allows the study of microbial communities like those present in this
stream receiving acid drainage from surface coal mining.
Metagenomics is the study of genetic material recovered directly from environmental samples. The broad field may also be referred to as environmental genomics, ecogenomics or community genomics.
While traditional microbiology and microbial genome sequencing and genomics rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods.
Because of its ability to reveal the previously hidden diversity
of microscopic life, metagenomics offers a powerful lens for viewing the
microbial world that has the potential to revolutionize understanding
of the entire living world. As the price of DNA sequencing continues to fall, metagenomics now allows microbial ecology to be investigated at a much greater scale and detail than before. Recent studies use either "shotgun" or PCR directed sequencing to get largely unbiased samples of all genes from all the members of the sampled communities.
Etymology
The term "metagenomics" was first used by Jo Handelsman, Jon Clardy, Robert M. Goodman, Sean F. Brady, and others, and first appeared in publication in 1998.
The term metagenome referenced the idea that a collection of genes
sequenced from the environment could be analyzed in a way analogous to
the study of a single genome. In 2005, Kevin Chen and Lior Pachter (researchers at the University of California, Berkeley)
defined metagenomics as "the application of modern genomics technique
without the need for isolation and lab cultivation of individual
species".
History
Conventional sequencing begins with a culture of identical cells as a source of DNA.
However, early metagenomic studies revealed that there are probably
large groups of microorganisms in many environments that cannot be cultured and thus cannot be sequenced. These early studies focused on 16S ribosomal RNA sequences, which are relatively short, often conserved within a species, and generally different between species. Many 16S rRNA sequences have been found which do not belong to any known cultured species,
indicating that there are numerous non-isolated organisms. These
surveys of ribosomal RNA (rRNA) genes taken directly from the
environment revealed that cultivation based methods find less than 1% of the bacterial and archaeal species in a sample.
Much of the interest in metagenomics comes from these discoveries that
showed that the vast majority of microorganisms had previously gone
unnoticed.
Early molecular work in the field was conducted by Norman R. Pace and colleagues, who used PCR to explore the diversity of ribosomal RNA sequences.
The insights gained from these breakthrough studies led Pace to propose
the idea of cloning DNA directly from environmental samples as early as
1985. This led to the first report of isolating and cloning bulk DNA from an environmental sample, published by Pace and colleagues in 1991 while Pace was in the Department of Biology at Indiana University. Considerable efforts ensured that these were not PCR
false positives and supported the existence of a complex community of
unexplored species. Although this methodology was limited to exploring
highly conserved, non-protein coding genes,
it did support early microbial morphology-based observations that
diversity was far more complex than was known from culturing methods. Soon after that, in 1995, Healy reported the metagenomic isolation of functional genes from "zoolibraries" constructed from a complex culture of environmental organisms grown in the laboratory on dried grasses. After leaving the Pace laboratory, Edward DeLong
continued in the field and has published work that has largely laid the
groundwork for environmental phylogenies based on signature 16S
sequences, beginning with his group's construction of libraries from marine samples.
In 2002, Mya Breitbart, Forest Rohwer,
and colleagues used environmental shotgun sequencing (see below) to
show that 200 liters of seawater contains over 5000 different viruses. Subsequent studies showed that there are more than a thousand viral species in human stool and possibly a million different viruses per kilogram of marine sediment, including many bacteriophages. Essentially all of the viruses in these studies were new species. In 2004, Gene Tyson, Jill Banfield, and colleagues at the University of California, Berkeley and the Joint Genome Institute sequenced DNA extracted from an acid mine drainage system. This effort resulted in the complete, or nearly complete, genomes for a handful of bacteria and archaea that had previously resisted attempts to culture them.
Flow diagram of a typical metagenome project
Beginning in 2003, Craig Venter, leader of the privately funded counterpart of the Human Genome Project, led the Global Ocean Sampling Expedition (GOS), circumnavigating the globe and collecting metagenomic samples throughout the journey. All of these samples were sequenced using shotgun sequencing, in the hope that new genomes (and therefore new organisms) would be identified. The pilot project, conducted in the Sargasso Sea, found DNA from nearly 2,000 different species, including 148 types of bacteria never before seen. Venter has circumnavigated the globe and thoroughly explored the West Coast of the United States, and completed a two-year expedition to explore the Baltic, Mediterranean and Black
Seas. Analysis of the metagenomic data collected during this journey
revealed two groups of organisms, one composed of taxa adapted to
environmental conditions of 'feast or famine', and a second composed of
relatively fewer but more abundantly and widely distributed taxa
primarily composed of plankton.
Environmental Shotgun Sequencing (ESS): (A) sampling from habitat; (B) filtering particles, typically by size; (C) lysis and DNA extraction; (D) cloning and library construction; (E) sequencing the clones; (F) sequence assembly into contigs and scaffolds.
Shotgun metagenomics
Advances in bioinformatics,
refinements of DNA amplification, and the proliferation of
computational power have greatly aided the analysis of DNA sequences
recovered from environmental samples, allowing the adaptation of shotgun sequencing
to metagenomic samples (known also as whole metagenome shotgun or WMGS
sequencing). The approach, used to sequence many cultured microorganisms
and the human genome, randomly shears DNA, sequences many short sequences, and reconstructs them into a consensus sequence.
Shotgun sequencing reveals genes present in environmental samples.
Historically, clone libraries were used to facilitate this sequencing.
However, with advances in high throughput sequencing technologies, the
cloning step is no longer necessary and greater yields of sequencing
data can be obtained without this labour-intensive bottleneck step.
Shotgun metagenomics provides information both about which organisms are
present and what metabolic processes are possible in the community.
Because the collection of DNA from an environment is largely
uncontrolled, the most abundant organisms in an environmental sample are
most highly represented in the resulting sequence data. To achieve the
high coverage needed to fully resolve the genomes of under-represented
community members, large samples, often prohibitively so, are needed. On
the other hand, the random nature of shotgun sequencing ensures that
many of these organisms, which would otherwise go unnoticed using
traditional culturing techniques, will be represented by at least some
small sequence segments. An emerging approach combines shotgun sequencing and chromosome conformation capture (Hi-C), which measures the proximity of any two DNA sequences within the same cell, to guide microbial genome assembly.
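The abundance bias described above can be illustrated with a toy simulation in which reads are drawn from community members in proportion to their abundance; the organism names, abundances, genome size, and read counts below are invented for illustration:

```python
import random
from collections import Counter

# Hypothetical community: relative abundances (fractions of total DNA).
community = {"organism_A": 0.80, "organism_B": 0.15, "organism_C": 0.05}
genome_length = 2_000_000    # assume equal 2 Mb genomes for simplicity
read_length = 150
n_reads = 1_000_000

rng = random.Random(42)
organisms, weights = zip(*community.items())
reads_per_org = Counter(rng.choices(organisms, weights=weights, k=n_reads))

for org in organisms:
    coverage = reads_per_org[org] * read_length / genome_length
    print(f"{org}: {reads_per_org[org]} reads -> ~{coverage:.1f}x coverage")
# Rare members receive proportionally less coverage, so resolving their
# genomes requires sequencing far more total reads.
```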
High-throughput sequencing
The first metagenomic studies conducted using high-throughput sequencing used massively parallel 454 pyrosequencing. Three other technologies commonly applied to environmental sampling are the Ion Torrent Personal Genome Machine, the Illumina MiSeq or HiSeq and the Applied Biosystems SOLiD system. These techniques for sequencing DNA generate shorter fragments than Sanger sequencing;
Ion Torrent PGM System and 454 pyrosequencing typically produce ~400 bp reads, Illumina MiSeq produces 400–700 bp reads (depending on whether paired-end options are used), and SOLiD produces 25–75 bp reads.
Historically, these read lengths were significantly shorter than the typical Sanger sequencing read length of ~750 bp; however, Illumina technology is quickly approaching this benchmark. This limitation is compensated for by the much larger number of sequence reads. In 2009, pyrosequenced metagenomes generated 200–500 megabases, and Illumina platforms generated around 20–50 gigabases, but these outputs have increased by orders of magnitude in recent years.
An additional advantage to high throughput sequencing is that this
technique does not require cloning the DNA before sequencing, removing
one of the main biases and bottlenecks in environmental sampling.
Bioinformatics
The
data generated by metagenomics experiments are both enormous and
inherently noisy, containing fragmented data representing as many as
10,000 species. The sequencing of the cow rumen metagenome generated 279 gigabases, or 279 billion base pairs of nucleotide sequence data, while the human gut microbiome gene catalog identified 3.3 million genes assembled from 567.7 gigabases of sequence data.
Collecting, curating, and extracting useful biological information from
datasets of this size represent significant computational challenges
for researchers.
Sequence pre-filtering
The
first step of metagenomic data analysis requires the execution of
certain pre-filtering steps, including the removal of redundant,
low-quality sequences and sequences of probable eukaryotic origin (especially in metagenomes of human origin). The methods available for the removal of contaminating eukaryotic genomic DNA sequences include Eu-Detect and DeConseq.
Assembly
DNA sequence data from genomic and metagenomic projects are essentially the same, but genomic sequence data offers higher coverage while metagenomic data is usually highly non-redundant.
Furthermore, the increased use of second-generation sequencing
technologies with short read lengths means that much of future
metagenomic data will be error-prone. Taken in combination, these
factors make the assembly of metagenomic sequence reads into genomes
difficult and unreliable. Misassemblies are caused by the presence of repetitive DNA sequences, and assembly is made especially difficult by differences in the relative abundance of the species present in the sample. Misassemblies can also involve the combination of sequences from more than one species into chimeric contigs.
There are several assembly programs, most of which can use information from paired-end tags in order to improve the accuracy of assemblies. Some programs, such as Phrap or Celera Assembler, were designed to be used to assemble single genomes but nevertheless produce good results when assembling metagenomic data sets. Other programs, such as Velvet assembler, have been optimized for the shorter reads produced by second-generation sequencing through the use of de Bruijn graphs.
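A de Bruijn graph assembler of the kind Velvet uses breaks reads into overlapping k-mers and links each k-mer's (k−1)-base prefix to its suffix; contigs then correspond to unambiguous paths through the graph. A minimal sketch of the graph-construction step only (toy reads and a small k, not a working assembler):

```python
from collections import defaultdict

def build_debruijn(reads, k=4):
    """Build a simple de Bruijn graph: nodes are (k-1)-mers,
    and each k-mer in the reads adds an edge from its prefix to its suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Toy overlapping reads; real assemblers use k in roughly the 21-63 range.
reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]
for prefix, suffixes in build_debruijn(reads, k=4).items():
    print(prefix, "->", suffixes)
```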
The use of reference genomes allows researchers to improve the assembly
of the most abundant microbial species, but this approach is limited by
the small subset of microbial phyla for which sequenced genomes are
available.
After an assembly is created, an additional challenge is "metagenomic
deconvolution", or determining which sequences come from which species
in the sample.
Gene prediction
Metagenomic analysis pipelines use two approaches in the annotation of coding regions in the assembled contigs. The first approach is to identify genes based upon homology with genes that are already publicly available in sequence databases, usually by BLAST searches. This type of approach is implemented in the program MEGAN4. The second, ab initio,
uses intrinsic features of the sequence to predict coding regions based
upon gene training sets from related organisms. This is the approach
taken by programs such as GeneMark and GLIMMER. The main advantage of ab initio
prediction is that it enables the detection of coding regions that lack
homologs in the sequence databases; however, it is most accurate when
there are large regions of contiguous genomic DNA available for
comparison.
Gene annotations provide the "what", while measurements of species diversity provide the "who". In order to connect community composition and function in metagenomes, sequences must be binned. Binning is the process of associating a particular sequence with an organism. In similarity-based binning, methods such as BLAST
are used to rapidly search for phylogenetic markers or otherwise
similar sequences in existing public databases. This approach is
implemented in MEGAN. Another tool, PhymmBL, uses interpolated Markov models to assign reads. MetaPhlAn and AMPHORA are methods based on unique clade-specific markers for estimating organismal relative abundances with improved computational performance. Other tools, like mOTUs and MetaPhyler, use universal marker genes to profile prokaryotic species. With the mOTUs profiler it is possible to profile species without a reference genome, improving the estimation of microbial community diversity. Recent methods, such as SLIMM, use the read coverage landscape of individual reference genomes to minimize false-positive hits and obtain reliable relative abundances. In composition-based binning, methods use intrinsic features of the sequence, such as oligonucleotide frequencies or codon usage bias. Once sequences are binned, it is possible to carry out comparative analysis of diversity and richness.
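A minimal sketch of composition-based binning: compute a tetranucleotide frequency vector for each contig and group similar vectors. The sequences are invented, and a naive nearest-seed assignment stands in for the clustering step a real binner would use:

```python
from itertools import product
from collections import Counter

TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def tetra_freq(seq):
    """Normalized tetranucleotide frequency vector of a sequence."""
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = sum(counts[t] for t in TETRAMERS) or 1
    return [counts[t] / total for t in TETRAMERS]

def distance(a, b):
    """Euclidean distance between two frequency vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Invented contigs; in practice these would be assembled metagenomic contigs.
contigs = {"contig_1": "ATGCGC" * 50, "contig_2": "ATGCGT" * 50, "contig_3": "TTAATA" * 50}
profiles = {name: tetra_freq(seq) for name, seq in contigs.items()}

# Assign each contig to the "seed" contig with the most similar composition.
seeds = ["contig_1", "contig_3"]
for name, prof in profiles.items():
    best = min(seeds, key=lambda s: distance(prof, profiles[s]))
    print(f"{name} -> bin of {best}")
```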
Data integration
The massive amount of exponentially growing sequence data is a daunting challenge that is complicated by the complexity of the metadata
associated with metagenomic projects. Metadata includes detailed
information about the three-dimensional (including depth, or height)
geography and environmental features of the sample, physical data about
the sample site, and the methodology of the sampling. This information is necessary both to ensure replicability
and to enable downstream analysis. Because of its importance, metadata
and collaborative data review and curation require standardized data
formats located in specialized databases, such as the Genomes OnLine
Database (GOLD).
Several tools have been developed to integrate metadata and
sequence data, allowing downstream comparative analyses of different
datasets using a number of ecological indices. In 2007, Folker Meyer and
Robert Edwards and a team at Argonne National Laboratory and the University of Chicago released the Metagenomics Rapid Annotation using Subsystem Technology server (MG-RAST), a community resource for metagenome data set analysis. As of June 2012, over 14.8 terabases (14.8×10¹² bases) of DNA had been analyzed, with more than 10,000 public data sets freely available for comparison within MG-RAST. Over 8,000 users have now submitted a total of 50,000 metagenomes to MG-RAST. The Integrated Microbial Genomes/Metagenomes
(IMG/M) system also provides a collection of tools for functional
analysis of microbial communities based on their metagenome sequence,
based upon reference isolate genomes included from the Integrated Microbial Genomes (IMG) system and the Genomic Encyclopedia of Bacteria and Archaea (GEBA) project.
One of the first standalone tools for analysing high-throughput metagenome shotgun data was MEGAN (MEta Genome ANalyzer).
A first version of the program was used in 2005 to analyse the
metagenomic context of DNA sequences obtained from a mammoth bone.
Based on a BLAST comparison against a reference database, this tool
performs both taxonomic and functional binning, by placing the reads
onto the nodes of the NCBI taxonomy using a simple lowest common
ancestor (LCA) algorithm or onto the nodes of the SEED or KEGG classifications, respectively.
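The lowest-common-ancestor idea behind this kind of taxonomic binning can be sketched simply: a read whose best hits span several taxa is placed on the deepest node that is an ancestor of all of them. The taxonomy and hits below are invented toy data, not MEGAN's actual implementation:

```python
# Toy taxonomy: child -> parent (the root's parent is None).
PARENT = {
    "E. coli": "Escherichia", "Escherichia": "Enterobacteriaceae",
    "Salmonella enterica": "Salmonella", "Salmonella": "Enterobacteriaceae",
    "Enterobacteriaceae": "Bacteria", "Bacteria": None,
}

def lineage(taxon):
    """Path from a taxon up to the root."""
    path = []
    while taxon is not None:
        path.append(taxon)
        taxon = PARENT.get(taxon)
    return path

def lowest_common_ancestor(taxa):
    """Deepest node that appears in every taxon's lineage."""
    lineages = [lineage(t) for t in taxa]
    common = set(lineages[0]).intersection(*map(set, lineages[1:]))
    # Walking up from any lineage, the first common node is the deepest.
    return next(node for node in lineages[0] if node in common)

# A read hitting both E. coli and Salmonella is assigned to their shared family.
print(lowest_common_ancestor(["E. coli", "Salmonella enterica"]))  # Enterobacteriaceae
```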
With the advent of fast and inexpensive sequencing instruments,
the growth of databases of DNA sequences is now exponential (e.g., the NCBI GenBank database). Faster and more efficient tools are needed to keep pace with high-throughput sequencing, because BLAST-based approaches such as MG-RAST or MEGAN are slow to annotate large samples (e.g., several hours to process a small or medium-sized dataset).
Thus, ultra-fast classifiers have recently emerged, thanks to more
affordable, powerful servers. These tools can perform taxonomic annotation at extremely high speed; for example, CLARK can, according to its authors, accurately classify "32 million metagenomic short reads per minute". At such a speed, a very large dataset of a billion short reads can be processed in about 30 minutes.
With the increasing availability of samples containing ancient
DNA and due to the uncertainty associated with the nature of those
samples (ancient DNA damage), FALCON, a fast tool capable of producing conservative similarity estimates, has been made available. According to FALCON's authors, it can use relaxed thresholds and edit distances without affecting memory and speed performance.
Comparative metagenomics
Comparative
analyses between metagenomes can provide additional insight into the
function of complex microbial communities and their role in host health. Pairwise or multiple comparisons between metagenomes can be made at the level of sequence composition (comparing GC-content
or genome size), taxonomic diversity, or functional complement.
Comparisons of population structure and phylogenetic diversity can be
made on the basis of 16S and other phylogenetic marker genes, or—in the
case of low-diversity communities—by genome reconstruction from the
metagenomic dataset. Functional comparisons between metagenomes may be made by comparing sequences against reference databases such as COG or KEGG, and tabulating the abundance by category and evaluating any differences for statistical significance. This gene-centric approach emphasizes the functional complement of the community
as a whole rather than taxonomic groups, and shows that the functional
complements are analogous under similar environmental conditions.
Consequently, metadata on the environmental context of the metagenomic
sample is especially important in comparative analyses, as it provides
researchers with the ability to study the effect of habitat upon
community structure and function.
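As a sketch of the gene-centric comparison described above, functional category counts from two metagenomes can be tabulated and tested for differential abundance. Here a Fisher's exact test from SciPy is applied to invented counts for one hypothetical category; a real analysis would repeat this per category and correct for multiple testing:

```python
from scipy.stats import fisher_exact

# Invented counts: reads assigned to one functional category vs. all other reads,
# in metagenome A and metagenome B.
category_A, other_A = 320, 99_680
category_B, other_B = 150, 99_850

table = [[category_A, other_A], [category_B, other_B]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# A small p-value suggests the category's relative abundance differs
# between the two communities.
```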
Additionally, several studies have utilized oligonucleotide usage patterns to identify differences across diverse microbial communities. Examples of such methodologies include the dinucleotide relative abundance approach by Willner et al. and the HabiSign approach of Ghosh et al. The latter study also indicated that differences in tetranucleotide usage patterns can be used to identify genes (or metagenomic reads) originating from specific habitats. Additionally, some methods, such as TriageTools or Compareads, detect similar reads between two read sets. The similarity measure they apply to reads is based on the number of identical words of length k shared by pairs of reads.
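A minimal sketch of that word-based similarity measure: count the identical k-mers shared by a pair of reads. The reads and the choice of k are arbitrary toy values; tools such as Compareads apply this idea at the scale of whole read sets using specialized data structures:

```python
def kmers(seq, k):
    """All distinct k-length substrings of a read."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_kmers(read_a, read_b, k=8):
    """Number of distinct k-mers the two reads have in common."""
    return len(kmers(read_a, k) & kmers(read_b, k))

r1 = "ACGTACGTGGCCTTAGC"
r2 = "TTAGCACGTACGTGGCC"
# Reads sharing at least some threshold number of k-mers are called similar.
print(shared_kmers(r1, r2, k=8))
```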
A key goal in comparative metagenomics is to identify microbial
group(s) which are responsible for conferring specific characteristics
to a given environment. However, due to issues in the sequencing technologies, artifacts need to be accounted for, as in metagenomeSeq. Others have characterized inter-microbial interactions between the resident microbial groups. A GUI-based comparative metagenomic analysis application called Community-Analyzer has been developed by Kuntal et al.
which implements a correlation-based graph layout algorithm that not
only facilitates a quick visualization of the differences in the
analyzed microbial communities (in terms of their taxonomic
composition), but also provides insights into the inherent
inter-microbial interactions occurring therein. Notably, this layout
algorithm also enables grouping of the metagenomes based on the probable
inter-microbial interaction patterns rather than simply comparing
abundance values of various taxonomic groups. In addition, the tool
implements several interactive GUI-based functionalities that enable
users to perform standard comparative analyses across microbiomes.
Data analysis
Community metabolism
In many bacterial communities, natural or engineered (such as bioreactors), there is significant division of labor in metabolism (Syntrophy), during which the waste products of some organisms are metabolites for others. In one such system, the methanogenic bioreactor, functional stability requires the presence of several syntrophic species (Syntrophobacterales and Synergistia) working together in order to turn raw resources into fully metabolized waste (methane). Using comparative gene studies and expression experiments with microarrays or proteomics
researchers can piece together a metabolic network that goes beyond
species boundaries. Such studies require detailed knowledge about which
versions of which proteins are coded by which species and even by which
strains of which species. Therefore, community genomic information is
another fundamental tool (with metabolomics and proteomics) in the quest to determine how metabolites are transferred and transformed by a community.
Metagenomics allows researchers to access the functional and
metabolic diversity of microbial communities, but it cannot show which
of these processes are active. The extraction and analysis of metagenomic mRNA (the metatranscriptome) provides information on the regulation and expression profiles of complex communities. Because of the technical difficulties (the short half-life of mRNA, for example) in the collection of environmental RNA there have been relatively few in situ metatranscriptomic studies of microbial communities to date. While originally limited to microarray technology, metatranscriptomics studies have made use of transcriptomics technologies to measure whole-genome expression and quantification of a microbial community, first employed in analysis of ammonia oxidation in soils.
Viruses
Metagenomic sequencing is particularly useful in the study of viral
communities. As viruses lack a shared universal phylogenetic marker (such as 16S rRNA for bacteria and archaea, and 18S rRNA
for eukarya), the only way to access the genetic diversity of the viral
community from an environmental sample is through metagenomics. Viral
metagenomes (also called viromes) should thus provide more and more
information about viral diversity and evolution. For example, a metagenomic pipeline called Giant Virus Finder showed the first evidence of existence of giant viruses in a saline desert and in Antarctic dry valleys.
The soils in which plants grow are inhabited by microbial communities, with one gram of soil containing around 10⁹–10¹⁰ microbial cells, which comprise about one gigabase of sequence information.
The microbial communities which inhabit soils are some of the most
complex known to science, and remain poorly understood despite their
economic importance. Microbial consortia perform a wide variety of ecosystem services necessary for plant growth, including fixing atmospheric nitrogen, nutrient cycling, disease suppression, and sequestering iron and other metals.
Functional metagenomics strategies are being used to explore the
interactions between plants and microbes through cultivation-independent
study of these microbial communities.
By allowing insights into the role of previously uncultivated or rare
community members in nutrient cycling and the promotion of plant growth,
metagenomic approaches can contribute to improved disease detection in crops and livestock and the adaptation of enhanced farming practices which improve crop health by harnessing the relationship between microbes and plants.
The efficient industrial-scale deconstruction of biomass requires novel enzymes with higher productivity and lower cost. Metagenomic approaches to the analysis of complex microbial communities allow the targeted screening of enzymes with industrial applications in biofuel production, such as glycoside hydrolases.
Furthermore, knowledge of how these microbial communities function is
required to control them, and metagenomics is a key tool in their
understanding. Metagenomic approaches allow comparative analyses between
convergent microbial systems like biogas fermenters or insect herbivores such as the fungus gardens of leafcutter ants.
Biotechnology
Microbial communities produce a vast array of biologically active chemicals that are used in competition and communication.
Many of the drugs in use today were originally uncovered in microbes;
recent progress in mining the rich genetic resource of non-culturable
microbes has led to the discovery of new genes, enzymes, and natural
products. The application of metagenomics has allowed the development of commodity and fine chemicals, agrochemicals and pharmaceuticals where the benefit of enzyme-catalyzed chiral synthesis is increasingly recognized.
Two types of analysis are used in the bioprospecting
of metagenomic data: function-driven screening for an expressed trait,
and sequence-driven screening for DNA sequences of interest.
Function-driven analysis seeks to identify clones expressing a desired
trait or useful activity, followed by biochemical characterization and
sequence analysis. This approach is limited by availability of a
suitable screen and the requirement that the desired trait be expressed
in the host cell. Moreover, the low rate of discovery (less than one per
1,000 clones screened) and its labor-intensive nature further limit
this approach. In contrast, sequence-driven analysis uses conserved DNA sequences to design PCR primers to screen clones for the sequence of interest.
In comparison to cloning-based approaches, using a sequence-only
approach further reduces the amount of bench work required. The
application of massively parallel sequencing also greatly increases the
amount of sequence data generated, which require high-throughput
bioinformatic analysis pipelines.
The sequence-driven approach to screening is limited by the breadth and
accuracy of gene functions present in public sequence databases. In
practice, experiments make use of a combination of both functional and
sequence-based approaches based upon the function of interest, the
complexity of the sample to be screened, and other factors. An example of success using metagenomics as a biotechnology for drug discovery is illustrated with the malacidin antibiotics.
Ecology
Metagenomics can provide valuable insights into the functional ecology of environmental communities.
Metagenomic analysis of the bacterial consortia found in the
defecations of Australian sea lions suggests that nutrient-rich sea lion
faeces may be an important nutrient source for coastal ecosystems. This
is because the bacteria that are expelled simultaneously with the
defecations are adept at breaking down the nutrients in the faeces into a
bioavailable form that can be taken up into the food chain.
DNA sequencing can also be used more broadly to identify species present in a body of water, debris filtered from the air, or sample of dirt. This can establish the range of invasive species and endangered species, and track seasonal populations.
Environmental remediation
Metagenomics can improve strategies for monitoring the impact of pollutants on ecosystems
and for cleaning up contaminated environments. Increased understanding
of how microbial communities cope with pollutants improves assessments
of the potential of contaminated sites to recover from pollution and
increases the chances of bioaugmentation or biostimulation trials to succeed.
Gut microbe characterization
Microbial communities play a key role in preserving human health, but their composition and the mechanism by which they do so remain mysterious.
Metagenomic sequencing is being used to characterize the microbial
communities from 15–18 body sites from at least 250 individuals. This is
part of the Human Microbiome initiative with primary goals to determine if there is a core human microbiome,
to understand the changes in the human microbiome that can be
correlated with human health, and to develop new technological and bioinformatics tools to support these goals.
Another medical study, part of the MetaHit (Metagenomics of the Human Intestinal Tract) project, consisted of 124 individuals from Denmark and Spain, comprising healthy, overweight, and inflammatory bowel disease patients. The study attempted to categorize the depth and
phylogenetic diversity of gastrointestinal bacteria. Using Illumina GA
sequence data and SOAPdenovo, a de Bruijn graph-based tool specifically designed for assembling short reads, they were able to generate 6.58 million contigs greater than 500 bp, for a total contig length of 10.3 Gb and an N50 length of 2.2 kb.
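N50, mentioned above, is the contig length at which contigs of that length or longer contain half of the total assembled bases. A minimal sketch with invented contig lengths (not the MetaHit data):

```python
def n50(contig_lengths):
    """Return the N50: the length L such that contigs of length >= L
    together contain at least half of the total assembled bases."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

# Invented contig lengths in bp.
print(n50([5000, 4000, 3000, 2000, 1000, 500]))  # -> 4000
```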
The study demonstrated that two bacterial divisions,
Bacteroidetes and Firmicutes, constitute over 90% of the known
phylogenetic categories that dominate distal gut bacteria. Using the
relative gene frequencies found within the gut these researchers
identified 1,244 metagenomic clusters that are critically important for
the health of the intestinal tract. There are two types of functions in
these gene clusters: housekeeping functions and those specific to the intestine.
The housekeeping gene clusters are required in all bacteria and are
often major players in the main metabolic pathways including central
carbon metabolism and amino acid synthesis. The gut-specific functions
include adhesion to host proteins and the harvesting of sugars from
globoseries glycolipids. Patients with inflammatory bowel disease were shown to exhibit 25% fewer genes and lower bacterial diversity than unaffected individuals, indicating that changes in patients' gut microbiome diversity may be associated with this condition.
While these studies highlight some potentially valuable medical
applications, only 31–48.8% of the reads could be aligned to 194 public
human gut bacterial genomes and 7.6–21.2% to bacterial genomes available
in GenBank, which indicates that there is still far more research
necessary to capture novel bacterial genomes.
Infectious disease diagnosis
Differentiating
between infectious and non-infectious illness, and identifying the
underlying etiology of infection, can be quite challenging. For example,
more than half of cases of encephalitis
remain undiagnosed, despite extensive testing using state-of-the-art
clinical laboratory methods. Metagenomic sequencing shows promise as a
sensitive and rapid method to diagnose infection by comparing genetic
material found in a patient's sample to a database of thousands of
bacteria, viruses, and other pathogens.
A natural experiment
is an empirical study in which individuals (or clusters of individuals)
are exposed to the experimental and control conditions that are
determined by nature or by other factors outside the control of the
investigators. The process governing the exposures arguably resembles random assignment. Thus, natural experiments are observational studies and are not controlled in the traditional sense of a randomized experiment.
Natural experiments are most useful when there has been a clearly
defined exposure involving a well defined subpopulation (and the absence
of exposure in a similar subpopulation) such that changes in outcomes
may be plausibly attributed to the exposure.
In this sense, the difference between a natural experiment and a
non-experimental observational study is that the former includes a
comparison of conditions that pave the way for causal inference, but the latter does not.
Natural experiments are employed as study designs when controlled experimentation is extremely difficult to implement or unethical, such as in several research areas addressed by epidemiology
(like evaluating the health impact of varying degrees of exposure to
ionizing radiation in people living near Hiroshima at the time of the
atomic blast) and economics (like estimating the economic return on amount of schooling in US adults).
History
John Snow's map showing the clustering of cholera cases in Soho during the London epidemic of 1854
One of the best-known early natural experiments was the 1854 Broad Street cholera outbreak in London, England. On 31 August 1854, a major outbreak of cholera struck Soho. Over the next three days, 127 people near Broad Street died. By the end of the outbreak, 616 people had died. The physician John Snow identified the source of the outbreak as the nearest public water pump, using a map of deaths and illness that revealed a cluster of cases around the pump.
In this example, Snow discovered a strong association between the
use of the water from the pump, and deaths and illnesses due to
cholera. Snow found that the Southwark and Vauxhall Waterworks Company, which supplied water to districts with high attack rates, obtained the water from the Thames downstream from where raw sewage was discharged into the river. By contrast, districts that were supplied water by the Lambeth Waterworks Company,
which obtained water upstream from the points of sewage discharge, had
low attack rates. Given the near-haphazard patchwork development of the
water supply in mid-nineteenth century London, Snow viewed the
developments as "an experiment...on the grandest scale."
Of course, the exposure to the polluted water was not under the control
of any scientist. Therefore, this exposure has been recognized as being
a natural experiment.
Recent examples
Family size
An aim of a study by Angrist and Evans (1998)
was to estimate the effect of family size on the labor market outcomes
of the mother. For at least two reasons, the correlations between family
size and various outcomes (e.g., earnings) do not inform us about how
family size causally affects labor market outcomes. First, both labor
market outcomes and family size may be affected by unobserved "third"
variables (e.g., personal preferences). Second, labor market outcomes
themselves may affect family size (called "reverse causality"). For
example, a woman may defer having a child if she gets a raise at work.
The authors observed that two-child families with either two boys or two
girls are substantially more likely to have a third child than
two-child families with one boy and one girl. The sex of the first two
children, then, constitutes a kind of natural experiment: it is as if an
experimenter had randomly assigned some families to have two children
and others to have three. The authors were then able to credibly
estimate the causal effect of having a third child on labor market
outcomes. Angrist and Evans found that childbearing had a greater impact
on poor and less educated women than on highly educated women, although
the earnings impact of having a third child tended to disappear by that
child's 13th birthday. They also found that having a third child had
little impact on husbands' earnings.
Game shows
Within
economics, game shows are a frequently studied form of natural
experiment. While game shows might seem to be artificial contexts, they
can be considered natural experiments due to the fact that the context
arises without interference of the scientist. Game shows have been used
to study a wide range of different types of economic behavior, such as
decision making under risk and cooperative behavior.
Smoking ban
In Helena, Montana a smoking ban
was in effect in all public spaces, including bars and restaurants,
during the six-month period from June 2002 to December 2002. Helena is
geographically isolated and served by only one hospital. The
investigators observed that the rate of heart attacks
dropped by 40% while the smoking ban was in effect. Opponents of the
law prevailed in getting the enforcement of the law suspended after six
months, after which the rate of heart attacks went back up. This study was an example of a natural experiment, called a case-crossover experiment,
where the exposure is removed for a time and then returned. The study
also potentially suggests that the inability to control variables in
natural experiments can impede investigators from drawing firm
conclusions. Critics argued that the particularly large percentage
fluctuation in the rate of myocardial infarction was likely due to
chance, given the small population size.
Nuclear weapons testing
Nuclear weapons testing released large quantities of radioactive isotopes into the atmosphere, some of which could be incorporated into biological tissues. The release stopped after the Partial Nuclear Test Ban Treaty in 1963, which prohibited atmospheric nuclear tests. This resembled a large-scale pulse-chase experiment,
but could not have been performed as a regular experiment in humans due
to scientific ethics. Several types of observations were made possible
(in people born before 1963), such as determination of the rate of
replacement for cells in different human tissues.
Vietnam War draft
An
important question in economics research is what determines earnings.
Angrist (1990) evaluated the effects of military service on lifetime
earnings. Using statistical methods developed in econometrics, Angrist capitalized on the approximate random assignment of the Vietnam War draft lottery, and used it as an instrumental variable
associated with eligibility (or non-eligibility) for military service.
Because many factors might predict whether someone serves in the
military, the draft lottery frames a natural experiment whereby those
drafted into the military can be compared against those not drafted
because the two groups should not differ substantially prior to military
service. Angrist found that the earnings of veterans were, on average,
about 15 percent less than the earnings of non-veterans.
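The instrumental-variable logic can be sketched with the simple Wald estimator: the difference in mean earnings between draft-eligible and ineligible men, divided by the difference in their rates of military service. The following toy simulation uses entirely invented numbers (service rates, earnings, and a made-up "true" effect of −4000), not Angrist's data or estimates:

```python
import random

rng = random.Random(1)
n = 100_000

# Simulated toy data: draft eligibility (the instrument) is assigned at random,
# raises the probability of serving, and serving lowers earnings by an
# invented "true" effect of -4000.
rows = []
for _ in range(n):
    eligible = rng.random() < 0.5
    served = rng.random() < (0.35 if eligible else 0.15)   # invented service rates
    ability = rng.gauss(0, 5000)                           # unobserved confounder
    earnings = 30_000 + ability - (4_000 if served else 0)
    rows.append((eligible, served, earnings))

def mean(values):
    values = list(values)
    return sum(values) / len(values)

eligible_rows = [r for r in rows if r[0]]
ineligible_rows = [r for r in rows if not r[0]]

# Wald / instrumental-variable estimate:
# (difference in mean earnings) / (difference in service rates).
earnings_gap = mean(r[2] for r in eligible_rows) - mean(r[2] for r in ineligible_rows)
service_gap = mean(r[1] for r in eligible_rows) - mean(r[1] for r in ineligible_rows)
print(f"IV estimate of the effect of service on earnings: {earnings_gap / service_gap:.0f}")
# Should land near the invented -4000, despite the unobserved confounder.
```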
Industrial melanism
With the Industrial Revolution in the nineteenth century, many species of moth, including the well-studied peppered moth, responded to the atmospheric pollution of sulphur dioxide and soot around cities with industrial melanism,
a dramatic increase in the frequency of dark forms over the formerly
abundant pale, speckled forms. In the twentieth century, as regulation
improved and pollution fell, providing the conditions for a large-scale
natural experiment, the trend towards industrial melanism was reversed,
and melanic forms quickly became scarce. The effect led the evolutionary
biologists L. M. Cook and J. R. G. Turner to conclude that "natural selection is the only credible explanation for the overall decline".