
Sunday, March 8, 2020

Applied Biosystems

From Wikipedia, the free encyclopedia
 
Industry: Biotechnology
Founded: 1981
Headquarters: Waltham, Massachusetts, USA
Key people: Marc N. Casper (President & CEO)
Website: www.thermofisher.com

Applied Biosystems is one of the brands under the Life Technologies umbrella of Thermo Fisher Scientific. The brand is focused on integrated systems for genetic analysis, which include computerized machines and the consumables used within them (such as reagents).

In 2008, a merger between Applied Biosystems and Invitrogen was finalized, creating Life Technologies. The latter was acquired by Thermo Fisher Scientific in 2014. Prior to 2008, the Applied Biosystems brand was owned by various entities in a corporate group parented by PerkinElmer. The roots of Applied Biosystems trace back to GeneCo (Genetic Systems Company), a pioneer biotechnology company founded in 1981 in Foster City, California. Through the 1980s and early 1990s, Applied Biosystems, Inc. operated independently and manufactured biochemicals and automated genetic engineering and diagnostic research instruments, including the principal brand of DNA sequencing machine used by the Human Genome Project consortium centers. Applied Biosystems' close ties to the consortium project led to the idea for the founding of Celera Genomics in 1998 as one of several independent competitors to the consortium.

In 1993 Applied Biosystems, Inc. was delisted from the NASDAQ when it was acquired by the company then known as Perkin-Elmer. As the PE Applied Biosystems Division under that parent, it was consolidated with other acquisitions in 1998 to become the primary PE Biosystems Division. In 1999 its parent company reorganized and changed its name to PE Corporation, and the PE Biosystems Group again became publicly traded, as a tracking stock of its parent, along with its sister tracking stock, Celera Genomics. In 2000 the parent became Applera Corporation. The Applied Biosystems name also returned that year, when the tracking stock was renamed from PE Biosystems Group to Applera Corporation-Applied Biosystems Group, an S&P 500 company, which remained a publicly traded operating group within Applera Corp., alongside its sibling operating group, Applera Corporation-Celera Group. Applera derives its name from combining the names of its two component groups, Applied and Celera. In November 2008, a merger between Applied Biosystems and Invitrogen was finalized, "creating a global leader in biotechnology reagents and systems". The new company was called Life Technologies.

History

Company History
Year Company Name
1981 Genetic Systems Company (GeneCo)
1982 Applied Biosystems, Inc. (ABI)
1993 Applied Biosystems, Perkin-Elmer
1996 PE Applied Biosystems
1998 PE Biosystems
2000 Applied Biosystems Group, Applera Corp
2002 Applied Biosystems
2008 Life Technologies
2014 Thermo Fisher Scientific

In May 1981, the company was founded by two scientist-engineers from Hewlett-Packard, Sam Eletr and André Marion, based on technology developed by Leroy Hood and Marvin H. Caruthers.

In August 1982, Applied Biosystems released its first commercial instrument, the Model 470A Protein Sequencer. The machine enabled scientists to determine the order of amino acids within a purified protein, which in turn correlated with the protein's function. With 40 employees, the company reported first-time revenue of US$402,000.

In 1983, led by its president and chairman of the board, Sam Eletr, and its chief operating officer, André Marion, the company doubled its number of employees to 80, and its stock went public on the NASDAQ exchange under the symbol ABIO, with revenues of US$5.9 million. A new product was a fluorescent molecular tag for immunodiagnostic assays.

The company released its second commercial instrument, the Model 380A DNA Synthesizer, which made oligonucleotides, short DNA strands, for polymerase chain reaction (PCR), DNA sequencing, and gene identification. The two sequencer and synthesizer products allowed molecular biologists to clone genes by building oligonucleotides with the desired protein's DNA sequence.

Automated DNA sequencing development began at the California Institute of Technology using fluorescent dyes, with rights to the technology granted to Applied Biosystems. At Caltech, Dr. Leroy Hood and Dr. Lloyd Smith together pioneered those first DNA sequencing machines.

In 1984, Applied Biosystems sales revenue tripled to over US$18 million, with a second yearly profit, and with over 200 employees. Services included synthesizing custom DNA and protein fragments, and the sequencing of protein samples submitted by customers. The third major instrument made by Applied, the Model 430A Peptide Synthesizer, was introduced.

In 1985, Applied Biosystems sales revenue grew nearly 70% to over US$35 million, with a third yearly profit. Two new products included the Model 380B DNA Synthesizer and the 381A DNA Synthesizer. That year the company went international for the first time, when it established a wholly owned subsidiary in Great Britain to save shipping costs on chemical sales, which overall by then accounted for 17% of sales.

Also in 1985, Applied Biosystems acquired Brownlee Labs, a manufacturer of columns and pumps for high-performance liquid chromatography (HPLC) systems, after its founder, Robert Brownlee, was diagnosed with AIDS-related complex in 1984. Brownlee's technology brought the new on-line 120A PTH Amino Acid Analyzer.

However, Brownlee then started a new company, which Applied viewed as a competitor. In 1989 Applied and Brownlee settled a lawsuit over the conflict. As late as 1990, Brownlee publicly discussed his contributions during the rocky relationship with Applied, before he died early the next year.

In 1986, Andre Marion became President and Chief Executive Officer. Sales revenue increased by 45% to nearly US$52 million. The company introduced six new products, bringing its line to eleven automated instruments. The release of the Model 370A DNA Sequencing System, using fluorescent tags, revolutionized gene discovery. The Model 340A Nucleic Acid Extractor came into use in medical labs to isolate DNA from bacteria, blood, and tissue.

In 1987, Sam Eletr resigned for health reasons. Revenues increased by 63% to nearly US$85 million, with 788 employees and another six new instruments. Applied Biosystems acquired the Kratos Division of Spectros International PLC.

By 1988, the product line had increased to over 25 different automated instruments, over 400 liquid chromatography columns and components, and about 320 chemicals, biochemicals, and consumables. Sales revenue grew to over US$132 million, with almost 1000 employees in eight countries. In that year for the first time, genetic science reached the milestone of being able to identify individuals by their DNA.

In 1989, sales revenue reached nearly $160 million. Applied Biosystems maintained 15 offices in 9 different countries, and introduced four new products. The company developed enzyme-based reagent kits made by Promega Corporation, and, in the new field of bioinformatics, entered a licensing arrangement with TRW Inc. Also, joint marketing of instruments and reagents for DNA replication, the fastest-growing segment in biotechnology, began with Perkin-Elmer Corporation and Cetus Corporation.

In 1990, instrument sales underwent a cyclical slowdown as the economy entered the 1990–91 recession. For the first time, Applied's annual revenues did not grow, coming in at just under $159 million, with 1,334 employees. New company developments included new instrumentation for robotics and for detection of DNA fragments using the company's fluorescent labelling.

Also in 1990, the U.S. government approved financing to support the Human Genome Project. Dr. James D. Watson, who founded the consortium, forecast that the project could be completed in 15 years from its 1990 starting date, at a cost of US$3 billion. Over the next couple of years, Japan began a project to sequence the rice genome, and other laboratories initiated programs to sequence the mouse, fruit fly, and yeast genomes.

In 1991, Applied sales revenue grew slightly, to almost $164 million, with consumables and service contracts up by 24% to account for 47% of total sales, and DNA sequencer and DNA synthesis instruments having record sales. Forty-five new consumable products and six new instruments were introduced.

In 1992, sales revenue grew by more than 11% to over $182 million, with Europe representing 25% of revenue, and Asia and the Pacific Rim accounting for 26%. The company formed a new subsidiary, Lynx Therapeutics, Inc., to focus on antisense DNA research in the area of therapeutics for chronic myelogenous leukemia, melanoma, colorectal cancer, and AIDS.

Perkin-Elmer

In February 1993 Applied Biosystems was acquired by Perkin-Elmer, and became the Applied Biosystems Division, as part of the Life Sciences markets segment of that company. Andre Marion, who had been Applied Biosystems's Chairman, President and CEO, became a Senior Vice President of Perkin-Elmer, and President of the Applied Biosystems Division. That year the company was the world's leading manufacturer of instruments and reagents for polymerase chain reaction (PCR). It marketed PCR reagent kits in alliance with Hoffmann-La Roche Inc.

In 1994, Perkin-Elmer reported net revenues of over $1 billion, of which Life Sciences accounted for 42% of the business. The company had 5,954 employees. A brand-new, highly competitive genomics industry had formed for the development of new pharmaceuticals, based on the work of the Human Genome Project. Companies such as Sequana Therapeutics in San Diego, Human Genome Sciences in Maryland, Myriad Genetics in Utah, INCYTE Pharmaceuticals (later Incyte Genomics) in California, and Millennium Pharmaceuticals relied on the Applied Biosystems Division, which made thermal cyclers and automated sequencers for these new genomics companies.

In 1995, upon Andre Marion's retirement, Mike Hunkapiller became President of the PE Applied Biosystems Division, which that year sold its 30,000th thermal cycler. To meet Human Genome Project goals, Perkin-Elmer developed mapping kits with markers every 10 million bases along each chromosome. Also that year, DNA fingerprinting using PCR became accepted in court as reliable forensic evidence.

In 1996, Perkin-Elmer acquired Tropix, Inc., a chemiluminescence company, for its life sciences division.

PE Applied Biosystems

In September 1995, Tony L. White from Baxter International Inc. became President and Chief Executive Officer of Perkin-Elmer. In 1996 the company was reorganized into two separate operating divisions, Analytical Instruments and PE Applied Biosystems. The PE Applied Biosystems division accounted for half of Perkin-Elmer's total revenue, with net revenues up by 26%.

In 1997, revenues reached almost US$1.3 billion, of which PE Applied Biosystems was US$653 million. The company acquired GenScope, Inc., and Linkage Genetics, Inc. The Linkage Genetics unit was combined with Zoogen to form PE AgGen, focused on genetic analysis services for plant and animal breeding. The PE Applied Biosystems division partnered with Hyseq, Inc., for work on the new DNA chip technology, and also worked with Tecan U.S., Inc., on combinatorial chemistry automation systems, and also with Molecular Informatics, Inc. on genetic data management and analysis automated systems.

PE Biosystems

In 1998, PE Applied Biosystems became PE Biosystems, and the division's revenues reached US$921.8 million. In January 1998 Perkin-Elmer acquired PerSeptive Biosystems of Framingham, Massachusetts, a leader in the bio-instrumentation field, where it made biomolecule purification systems for protein analysis. Noubar B. Afeyan, Ph.D., had been the founder, Chairman, and CEO of PerSeptive, and with the Perkin-Elmer successor company he helped set up the later tracking stock for Celera.

In 1998, Perkin-Elmer formed the PE Biosystems division, by consolidating Applied Biosystems, PerSeptive Biosystems, Tropix and PE Informatics. Informatics was formed from the Perkin-Elmer combination of two other acquisitions, Molecular Informatics and Nelson Analytical Systems, with existing units of Perkin-Elmer.

While planning the next new generation of machines, PE Biosystems' president, Michael W. Hunkapiller, calculated that it would be possible for private industry to decode the human genome before the academic consortium could complete it, by using the resources of a single, industrial-scale center, even though it would require starting from scratch. It was a bold prediction, given that the consortium's target date, set by Dr. Watson back in 1990, was 2005, only seven years away, and that the consortium was already halfway to that completion date.

Also, it meant that Dr. Hunkapiller's idea would require competing against his own customers, to all of whom Applied Biosystems sold its sequencing machines and their chemical reagents. However, he calculated that it would also mean doubling the market for that equipment.

Hunkapiller brought in Dr. J. Craig Venter to direct the project. Tony White, president of the Perkin-Elmer Corporation, backed Hunkapiller on the venture. They organized the new company to accomplish the task. In May 1998, Celera Genomics was formed to rapidly accelerate the human DNA sequencing process. Dr. Venter boldly declared to the media that he would complete the genome decoding by 2001. That announcement prompted the academic consortium to accelerate its own deadline by a couple of years, to 2003.

Also in 1998, PE Biosystems partnered with Hitachi, Ltd. to develop electrophoresis-based genetic analysis systems, which resulted in their chief new genomics instrument, the ABI PRISM 3700 DNA Analyzer, which helped push the human genome sequencing project nearly five years ahead of schedule. The partnership sold hundreds of the 3700 analyzers to Celera, and also to others worldwide.

The new machine cost US$300,000 each, but was a major leap beyond its predecessor, the 377, and was fully automated, allowing genetic decoding to run around the clock with little supervision. According to Venter, the machine was so revolutionary that it could decode in a single day the same amount of genetic material that most DNA labs could produce in a year.

The public consortium also bought one of the PE Biosystems 3700 sequencers, and had plans to buy 200 more. The machine proved to be so fast that by late March 1999 the consortium announced that it had revised its timeline, and would release by the Spring of 2000 a "first draft sequence" for 80% of the human genome.

At year end 1998, the PE Biosystems Group's sales reached US$940 million.

PE Corporation

In 1999, to focus on the new genomics, Perkin-Elmer Corporation was renamed PE Corporation, and sold its old Analytical Instruments division to EG&G, Inc., which also acquired the Perkin-Elmer name. PE Biosystems remained with PE Corp., and became PE Biosystems Group, with 3,500 employees and net revenues of over $1.2 billion. New instruments were developed and sold for forensic human identification, protein identification and characterization, metabolite pathway identification, and lead compound identification from combinatorial libraries.

On April 27, 1999, the shareholders of Perkin-Elmer Corporation approved the reorganization of Perkin-Elmer into PE Corporation, a pure-play life science company. Each share of the Perkin-Elmer stock (PKN) was to be exchanged for one share and for one-half of a share, respectively, of the two new common share tracking stocks for the two component Life Sciences groups, PE Biosystems Group and Celera Genomics Group.

On April 28, 1999, the two replacement tracking stocks for the new PE Corporation were issued to shareholders. Dr. Michael W. Hunkapiller remained as a Senior Vice President of PE Corporation, and as president of PE Biosystems.

On May 6, 1999, the recapitalization of the company resulted in issuance of the two new classes of common stock, called PE Corporation-PE Biosystems Group Common Stock and PE Corporation-Celera Genomics Group Common Stock. On that date, trading began in both new stocks on the New York Stock Exchange, to great excitement.

On June 17, 1999 the Board of PE Corporation announced a two-for-one split of PE Biosystems Group Common Stock.

By June 2000, the genomics segment of the technology bubble was peaking. Celera Genomics (CRA) and PE Biosystems (PEB) were among five genetics pioneers leading at that time, along with Incyte Genomics, Human Genome Sciences, and Millennium Pharmaceuticals. All five of those stocks had by then traded above $100 per share, before ultimately crashing back down.

Applera

On November 30, 2000, PE Corporation changed its name to Applera, combining the two partial names Applied and Celera into one, with 5,000 employees. PE Biosystems Group was renamed once again to Applied Biosystems Group, and changed its ticker symbol from PEB to ABI. Its net revenues rose to almost US$1.4 billion. Celera that year made milestone headlines when it announced that it had completed the sequencing and first assembly of the two largest genomes in history, that of the fruit fly, and of the human.

In 2001, the Applied Biosystems division of Applera reached revenues of US$1.6 billion, and developed a new workstation instrument specifically for the new field of proteomics, which had become Celera's new core business focus, as it shifted away from gene discovery. The instrument analyzed 1,000 protein samples per hour.

On April 22, 2002, the Celera Genomics Group announced its decision to shift the role of marketing data from its genetic database over to its sister company, the Applied Biosystems Group. Celera would instead develop pharmaceutical drugs. Applied Biosystems was a better fit for the database, because Applied already had the huge sales force in place for the marketing of its instruments. Plans were to expand those sales and those of the database into an electronic commerce system.

In 2002, Applied Biosystems reached revenues of US$1.6 billion for the year, and took control from Celera of the support of the Celera Discovery System (CDS), a data tool to answer specific genomic and proteomic queries, involving the new genetic data field of tens of thousands of single-nucleotide polymorphisms (SNPs) within the human genome. The company developed another new tool for proteomics research, which for the first time combined triple quadrupole and ion trap technologies.

The database itself would remain with Celera, because of shareholder approval complications. Celera would retain responsibility for its maintenance and support to existing customers, and would receive royalties from Applied Biosystems.

In 2003, Catherine Burzik joined Applied's management, from Ortho-Clinical Diagnostics. Applied developed a new tool which measured antibody/antigen binding in real-time kinetic analysis of up to 400 binding interactions simultaneously.

In 2004, Mike Hunkapiller retired and Cathy Burzik replaced him as President of Applied Biosystems. Applera collaborated with General Electric, Abbott Laboratories, Seattle Genetics, and Merck in diagnostics development. Applied Biosystems also teamed with Northrop Grumman and Cepheid of Sunnyvale, California, to detect Bacillus anthracis during the anthrax contamination case of the U.S. Postal Service.

In 2005, the company released new tools for small molecule quantitation in pharmaceutical drug development. In Mexico, Applied Biosystems collaborated with the National Institute of Genomic Medicine of Mexico (Instituto Nacional de Medicina Genomica or INMEGEN), and established an Applied Biosystems Sequencing and Genotyping Unit at INMEGEN.

In 2006, Applied Biosystems acquired the Research Products Division of Ambion, a supplier of RNA-based reagents and products. That year, with the Influenza A Subtype H5N1 "avian flu" strain scare, the company launched a global initiative to identify and track such infectious diseases.

In 2006, Applied Biosystems also acquired Agencourt Personal Genomics, located in Beverly, MA, to commercialize Agencourt's SOLiD sequencing system.

In 2007, ABI SOLiD sequencing, a next-generation DNA sequencing platform, was announced. Mark Stevenson was appointed President and Chief Operating Officer of Applied Biosystems.

In November 2008, Applied Biosystems merged with Invitrogen, forming Life Technologies, which was acquired by Thermo Fisher Scientific in 2014.

DNA sequencer

From Wikipedia, the free encyclopedia

DNA sequencers
Manufacturers: Roche, Illumina, Life Technologies, Beckman Coulter, Pacific Biosciences

A DNA sequencer is a scientific instrument used to automate the DNA sequencing process. Given a sample of DNA, a DNA sequencer is used to determine the order of the four bases: G (guanine), C (cytosine), A (adenine) and T (thymine). This is then reported as a text string, called a read. Some DNA sequencers can be also considered optical instruments as they analyze light signals originating from fluorochromes attached to nucleotides.

The first automated DNA sequencer, invented by Lloyd M. Smith, was introduced by Applied Biosystems in 1987. It used the Sanger sequencing method, a technology which formed the basis of the “first generation” of DNA sequencers and enabled the completion of the human genome project in 2001. This first generation of DNA sequencers are essentially automated electrophoresis systems that detect the migration of labelled DNA fragments. Therefore, these sequencers can also be used in the genotyping of genetic markers where only the length of a DNA fragment(s) needs to be determined (e.g. microsatellites, AFLPs).

The Human Genome Project spurred the development of cheaper, higher-throughput and more accurate platforms, known as next-generation sequencers (NGS), to sequence the human genome. These include the 454, SOLiD and Illumina DNA sequencing platforms. Next-generation sequencing machines have increased the rate of DNA sequencing substantially compared with the previous Sanger methods. DNA samples can be prepared automatically in as little as 90 minutes, while a human genome can be sequenced at 15x coverage in a matter of days.

More recent, third-generation DNA sequencers such as SMRT and Oxford Nanopore measure the addition of nucleotides to a single DNA molecule in real time.

Because of limitations in DNA sequencer technology, these reads are short compared with the length of a genome, so the reads must be assembled into longer contigs. The data may also contain errors, caused by limitations in the DNA sequencing technique or by errors during PCR amplification. DNA sequencer manufacturers use a number of different methods to detect which DNA bases are present. The specific protocols applied in different sequencing platforms have an impact on the final data that is generated. Therefore, comparing data quality and cost across different technologies can be a daunting task. Each manufacturer provides its own way of reporting sequencing errors and quality scores. However, errors and scores between different platforms cannot always be compared directly. Since these systems rely on different DNA sequencing approaches, choosing the best DNA sequencer and method will typically depend on the experiment objectives and available budget.
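As an illustration of the scoring conventions involved, most platforms report per-base confidence on the Phred scale, where a quality score Q corresponds to an estimated error probability of 10^(-Q/10). The short sketch below converts a FASTQ-style quality string into error probabilities; it assumes the common Phred+33 ASCII encoding, and the read and quality string are hypothetical examples.

# Minimal sketch: converting FASTQ quality characters into Phred scores and
# per-base error probabilities. Assumes the common Phred+33 ASCII encoding.

def phred_scores(quality_string, offset=33):
    """Per-base Phred quality scores for a FASTQ quality string."""
    return [ord(ch) - offset for ch in quality_string]

def error_probabilities(scores):
    """Convert Phred scores Q into error probabilities p = 10 ** (-Q / 10)."""
    return [10 ** (-q / 10) for q in scores]

# Hypothetical read and quality string, for illustration only.
read = "GATTACA"
qual = "IIIFF:#"   # 'I' = Q40, 'F' = Q37, ':' = Q25, '#' = Q2
for base, p in zip(read, error_probabilities(phred_scores(qual))):
    print(base, round(p, 5))

Comparing platforms on this common probability scale is usually safer than comparing their raw scores directly.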

History

The first DNA sequencing methods were developed by Gilbert (1973) and Sanger (1975). Gilbert introduced a sequencing method based on chemical modification of DNA followed by cleavage at specific bases, whereas Sanger's technique is based on dideoxynucleotide chain termination. The Sanger method became popular due to its greater efficiency and lower radioactivity. The first automated DNA sequencer was the AB370A, introduced in 1986 by Applied Biosystems. The AB370A could sequence 96 samples simultaneously, process 500 kilobases per day, and reach read lengths of up to 600 bases. This was the beginning of the “first generation” of DNA sequencers, which implemented Sanger sequencing, fluorescent dideoxynucleotides, and polyacrylamide gels sandwiched between glass plates (slab gels). The next major advance was the release in 1995 of the AB310, which used a linear polymer in a capillary in place of the slab gel for DNA strand separation by electrophoresis. These techniques formed the basis for the completion of the human genome project in 2001. The human genome project spurred the development of cheaper, higher-throughput and more accurate platforms known as next-generation sequencers (NGS). In 2005, 454 Life Sciences released the 454 sequencer, followed by the Solexa Genome Analyzer and SOLiD (Supported Oligo Ligation Detection) by Agencourt in 2006. Applied Biosystems acquired Agencourt in 2006, and in 2007, Roche bought 454 Life Sciences, while Illumina purchased Solexa. Ion Torrent entered the market in 2010 and was acquired by Life Technologies (now Thermo Fisher Scientific). These are still the most common NGS systems due to their competitive cost, accuracy, and performance.

More recently, a third generation of DNA sequencers was introduced. The sequencing methods applied by these sequencers do not require DNA amplification (polymerase chain reaction – PCR), which speeds up the sample preparation before sequencing and reduces errors. In addition, sequencing data is collected from the reactions caused by the addition of nucleotides in the complementary strand in real time. Two companies introduced different approaches in their third-generation sequencers. Pacific Biosciences sequencers utilize a method called Single-molecule real-time (SMRT), where sequencing data is produced by light (captured by a camera) emitted when a nucleotide is added to the complementary strand by enzymes containing fluorescent dyes. Oxford Nanopore Technologies is another company developing third-generation sequencers using electronic systems based on nanopore sensing technologies.

Manufacturers of DNA sequencers

DNA sequencers have been developed, manufactured, and sold by the following companies, among others.

Roche

The 454 DNA sequencer was the first next-generation sequencer to become commercially successful. It was developed by 454 Life Sciences and purchased by Roche in 2007. 454 relies on detection of the pyrophosphate released by the DNA polymerase reaction when a nucleotide is added to the template strand.

Roche currently manufactures two systems based on their pyrosequencing technology: the GS FLX+ and the GS Junior System. The GS FLX+ System promises read lengths of approximately 1000 base pairs while the GS Junior System promises 400 base pair reads. A predecessor to GS FLX+, the 454 GS FLX Titanium system was released in 2008, achieving an output of 0.7G of data per run, with 99.9% accuracy after quality filter, and a read length of up to 700bp. In 2009, Roche launched the GS Junior, a bench top version of the 454 sequencer with read length up to 400bp, and simplified library preparation and data processing.

One of the advantages of 454 systems is their running speed; manpower can be reduced with automation of library preparation and semi-automation of emulsion PCR. A disadvantage of the 454 system is that it is prone to errors when estimating the number of bases in a long string of identical nucleotides. This is referred to as a homopolymer error and occurs when there are 6 or more identical bases in a row. Another disadvantage is that the price of reagents is relatively high compared with other next-generation sequencers.
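The homopolymer problem can be seen with a toy model (purely illustrative, not Roche's actual base-calling algorithm): pyrosequencing reports one light intensity per nucleotide flow that is roughly proportional to the length of the homopolymer run, so as runs get longer the same relative signal noise makes neighbouring run lengths harder to tell apart.

# Toy model only: why calling long homopolymers from a noisy, length-proportional
# signal is error-prone. Not the actual 454 base caller.
import random

def fraction_called_correctly(run_length, relative_noise=0.12, trials=10000):
    """Fraction of trials in which rounding the noisy intensity recovers the true run length."""
    correct = 0
    for _ in range(trials):
        signal = run_length * (1 + random.gauss(0, relative_noise))
        if round(signal) == run_length:
            correct += 1
    return correct / trials

for n in (1, 2, 4, 6, 8):
    print(f"homopolymer of {n}: {fraction_called_correctly(n):.0%} called correctly")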

In 2013 Roche announced that they would be shutting down development of 454 technology and phasing out 454 machines completely in 2016.

Roche produces a number of software tools which are optimised for the analysis of 454 sequencing data. GS Run Processor converts raw images generated by a sequencing run into intensity values. The process consists of two main steps: image processing and signal processing. The software also applies normalization, signal correction, base-calling and quality scores for individual reads. The software outputs data in Standard Flowgram Format (SFF) files to be used in data analysis applications (GS De Novo Assembler, GS Reference Mapper or GS Amplicon Variant Analyzer). GS De Novo Assembler is a tool for de novo assembly of whole genomes up to 3 Gb in size from shotgun reads alone or combined with paired-end data generated by 454 sequencers. It also supports de novo assembly of transcripts (including analysis) and isoform variant detection. GS Reference Mapper maps short reads to a reference genome, generating a consensus sequence. The software is able to generate output files for assessment, indicating insertions, deletions and SNPs, and can handle large and complex genomes of any size. Finally, the GS Amplicon Variant Analyzer aligns reads from amplicon samples against a reference, identifying variants (linked or not) and their frequencies. It can also be used to detect unknown and low-frequency variants. It includes graphical tools for analysis of alignments.

Illumina

Illumina Genome Analyzer II sequencing machine

Illumina produces a number of next-generation sequencing machines using technology acquired from Manteia Predictive Medicine and developed by Solexa. These include the HiSeq, Genome Analyzer IIx, MiSeq and the HiScanSQ, which can also process microarrays.

The technology behind these DNA sequencers was first released by Solexa in 2006 as the Genome Analyzer; Illumina purchased Solexa in 2007. The Genome Analyzer uses a sequencing-by-synthesis method. The first model produced 1G per run. During 2009, output was increased from 20G per run in August to 50G per run in December. In 2010 Illumina released the HiSeq 2000, with an output of 200 and later 600G per run, which would take 8 days. At its release the HiSeq 2000 provided one of the cheapest sequencing platforms, at $0.02 per million bases as estimated by the Beijing Genomics Institute.

In 2011 Illumina released a benchtop sequencer called the MiSeq. At its release the MiSeq could generate 1.5G per run with paired end 150bp reads. A sequencing run can be performed in 10 hours when using automated DNA sample preparation.

The Illumina HiSeq uses two software tools to calculate the number and position of DNA clusters to assess the sequencing quality: the HiSeq control system and the real-time analyzer. These methods help to assess if nearby clusters are interfering with each other.

Life Technologies

Life Technologies (now Thermo Fisher Scientific) produces DNA sequencers under the Applied Biosystems and Ion Torrent brands. Applied Biosystems makes the SOLiD next-generation sequencing platform, and Sanger-based DNA sequencers such as the 3500 Genetic Analyzer. Under the Ion Torrent brand, it produces four next-generation sequencers: the Ion PGM, Ion Proton, Ion S5 and Ion S5xl systems. The company was also reported to be developing a new capillary DNA sequencer, SeqStudio, for release in early 2018.

The SOLiD system was acquired by Applied Biosystems in 2006. SOLiD applies sequencing by ligation and dual-base encoding. The first SOLiD system was launched in 2007, generating read lengths of 35bp and 3G of data per run. After five upgrades, the 5500xl sequencing system was released in 2010, considerably increasing read length to 85bp, improving accuracy up to 99.99% and producing 30G per 7-day run.
The limited read length of the SOLiD has remained a significant shortcoming and has to some extent limited its use to experiments where read length is less vital, such as resequencing and transcriptome analysis, and more recently ChIP-Seq and methylation experiments. The DNA sample preparation time for SOLiD systems has become much quicker with the automation of sequencing library preparation, for example with the Tecan system.

The colour-space data produced by the SOLiD platform can be decoded into DNA bases for further analysis; however, software that considers the original colour-space information can give more accurate results. Life Technologies has released BioScope, a data analysis package for resequencing, ChIP-Seq and transcriptome analysis. It uses the MaxMapper algorithm to map the colour-space reads.
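Decoding colour space is mechanical once a known primer base is attached to the read: in the standard two-base encoding, each colour (0-3) corresponds to the XOR of the 2-bit codes of two adjacent bases, so the read is translated by walking from the known first base. The sketch below illustrates the idea and is not Life Technologies' software; the example read is hypothetical.

# Minimal sketch of SOLiD two-base (colour-space) decoding.
# With A=0, C=1, G=2, T=3, each colour is the XOR of the codes of two adjacent
# bases, so decoding requires a known leading (primer) base.
BITS = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def decode_colorspace(primer_base, colors):
    """Translate a colour-space read (a string of digits 0-3) into bases."""
    bases = []
    prev = BITS[primer_base]
    for c in colors:
        prev ^= int(c)          # next base = previous base XOR colour
        bases.append(BASES[prev])
    return "".join(bases)

# Hypothetical example: primer base 'T' followed by the colours '320010'.
print(decode_colorspace("T", "320010"))   # -> AGGGTT

Because each decoded base depends on the one before it, a single miscalled colour corrupts every base downstream, which is one reason software that works directly in colour space can give more accurate results.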

Beckman Coulter

Beckman Coulter (now a Danaher company) previously manufactured chain-termination and capillary electrophoresis-based DNA sequencers under the model name CEQ, including the CEQ 8000. The company now produces the GeXP Genetic Analysis System, which uses dye terminator sequencing. This method uses a thermocycler in much the same way as PCR to denature, anneal, and extend DNA fragments, amplifying the sequenced fragments.

Pacific Biosciences

Pacific Biosciences produces the PacBio RS and Sequel sequencing systems using a single-molecule real-time (SMRT) sequencing method. This system can produce read lengths of multiple thousands of base pairs. Higher raw read error rates are corrected using either circular consensus - where the same strand is read over and over again - or optimized assembly strategies. Scientists have reported 99.9999% accuracy with these strategies. The Sequel system was launched in 2015 with an increased capacity and a lower price.

Oxford Nanopore MinION sequencer (lower right) was used in the first-ever DNA sequencing in space in August 2016 by astronaut Kathleen Rubins.

Oxford Nanopore

Oxford Nanopore Technologies has begun shipping early versions of its MinION nanopore sequencer to selected labs. The device is four inches long and is powered from a USB port. MinION decodes DNA directly as the molecule is drawn through a nanopore suspended in a membrane at a rate of 450 bases/second. Changes in electric current indicate which base is present. The device is 60 to 85 percent accurate, compared with 99.9 percent in conventional machines, but even inaccurate results may prove useful because it produces long read lengths. GridION is a slightly larger sequencer that processes up to five MinION flow cells at once. PromethION is another (unreleased) product that will use as many as 100,000 pores in parallel, making it more suitable for high-volume sequencing.

1000 Genomes Project

From Wikipedia, the free encyclopedia
  
The 1000 Genomes Project (abbreviated as 1KGP), launched in January 2008, was an international research effort to establish by far the most detailed catalogue of human genetic variation. Scientists planned to sequence the genomes of at least one thousand anonymous participants from a number of different ethnic groups within the following three years, using newly developed technologies which were faster and less expensive. In 2010, the project finished its pilot phase, which was described in detail in a publication in the journal Nature. In 2012, the sequencing of 1092 genomes was announced in a Nature publication. In 2015, two papers in Nature reported results and the completion of the project and opportunities for future research. Many rare variations, restricted to closely related groups, were identified, and eight structural-variation classes were analyzed.

The project unites multidisciplinary research teams from institutes around the world, including China, Italy, Japan, Kenya, Nigeria, Peru, the United Kingdom, and the United States. Each will contribute to the enormous sequence dataset and to a refined human genome map, which will be freely accessible through public databases to the scientific community and the general public alike.

By providing an overview of all human genetic variation, the consortium will generate a valuable tool for all fields of biological science, especially in the disciplines of genetics, medicine, pharmacology, biochemistry, and bioinformatics.

Changes in the number and order of genes (A-D) create genetic diversity within and between populations.

Background

Since the completion of the Human Genome Project, advances in human population genetics and comparative genomics have made it possible to gain increasing insight into the nature of genetic diversity. However, we are just beginning to understand how processes like the random sampling of gametes, structural variations (insertions/deletions (indels), copy number variations (CNV), retroelements), single-nucleotide polymorphisms (SNPs), and natural selection have shaped the level and pattern of variation within and between species.

Human genetic variation

The random sampling of gametes during sexual reproduction leads to genetic drift — a random fluctuation in the population frequency of a trait — in subsequent generations and would result in the loss of all variation in the absence of external influence. It is postulated that the rate of genetic drift is inversely proportional to population size, and that it may be accelerated in specific situations such as bottlenecks, where the population size is reduced for a certain period of time, and by the founder effect (individuals in a population tracing back to a small number of founding individuals).
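The inverse relationship between drift and population size can be illustrated with a minimal Wright-Fisher-style simulation (a standard textbook model, sketched here with arbitrary example numbers): each generation the allele copies are resampled from the current frequency, and smaller populations fix or lose the allele sooner.

# Minimal Wright-Fisher-style sketch of genetic drift: smaller populations
# lose their variation (fix or lose the allele) sooner. Example numbers are arbitrary.
import random

def fraction_losing_variation(pop_size, start_freq=0.5, generations=200, replicates=100):
    """Fraction of replicate populations that fix or lose the allele within the time limit."""
    lost = 0
    for _ in range(replicates):
        copies = int(2 * pop_size * start_freq)       # allele copies among 2N chromosomes
        for _ in range(generations):
            p = copies / (2 * pop_size)
            copies = sum(random.random() < p for _ in range(2 * pop_size))
            if copies in (0, 2 * pop_size):           # allele lost or fixed
                lost += 1
                break
    return lost / replicates

for n in (10, 50, 250):
    print(f"N = {n:3d}: {fraction_losing_variation(n):.0%} of populations lose all variation within 200 generations")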

Anzai et al. demonstrated that indels account for 90.4% of all observed variations in the sequence of the major histocompatibility locus (MHC) between humans and chimpanzees. After taking multiple indels into consideration, the high degree of genomic similarity between the two species (98.6% nucleotide sequence identity) drops to only 86.7%. For example, a large deletion of 95 kilobases (kb) between the loci of the human MICA and MICB genes, results in a single hybrid chimpanzee MIC gene, linking this region to a species-specific handling of several retroviral infections and the resultant susceptibility to various autoimmune diseases. The authors conclude that instead of more subtle SNPs, indels were the driving mechanism in primate speciation.

Besides SNPs, structural variants such as copy-number variants (CNVs) contribute to the genetic diversity in human populations. Using microarrays, almost 1,500 copy-number-variable regions, covering around 12% of the genome and containing hundreds of genes, disease loci, functional elements and segmental duplications, have been identified in the HapMap sample collection. Although the specific function of CNVs remains elusive, the fact that CNVs span more nucleotide content per genome than SNPs emphasizes the importance of CNVs in genetic diversity and evolution.

Investigating human genomic variations holds great potential for identifying genes that might underlie differences in disease resistance (e.g. MHC region) or drug metabolism.

Natural selection

Natural selection in the evolution of a trait can be divided into three classes. Directional or positive selection refers to a situation where a certain allele has a greater fitness than other alleles, consequently increasing its population frequency (e.g. antibiotic resistance of bacteria). In contrast, stabilizing or negative selection (also known as purifying selection) lowers the frequency of, or even removes, alleles from a population due to disadvantages associated with them relative to other alleles. Finally, a number of forms of balancing selection exist; those increase genetic variation within a species by being overdominant (heterozygous individuals are fitter than homozygous individuals, e.g. G6PD, a gene that is involved in both haemolytic anaemia and malaria resistance) or can vary spatially within a species that inhabits different niches, thus favouring different alleles. Some genomic differences may not affect fitness. Neutral variation, previously thought to be “junk” DNA, is unaffected by natural selection, resulting in higher genetic variation at such sites when compared to sites where variation does influence fitness.

It is not fully clear how natural selection has shaped population differences; however, genetic candidate regions under selection have been identified recently. Patterns of DNA polymorphisms can be used to reliably detect signatures of selection and may help to identify genes that might underlie variation in disease resistance or drug metabolism. Barreiro et al. found evidence that negative selection has reduced population differentiation at the amino acid-altering level (particularly in disease-related genes), whereas positive selection has ensured regional adaptation of human populations by increasing population differentiation in gene regions (mainly nonsynonymous and 5'-untranslated region variants).

It is thought that most complex and Mendelian diseases (except diseases with late onset, assuming that older individuals no longer contribute to the fitness of their offspring) will have an effect on survival and/or reproduction; thus, genetic factors underlying those diseases should be influenced by natural selection. However, diseases that have a late onset today could have been childhood diseases in the past, as genes delaying disease progression could have undergone selection. Gaucher disease (mutations in the GBA gene), Crohn's disease (mutation of NOD2) and familial hypertrophic cardiomyopathy (mutations in MYH7, TNNT2, TPM1 and MYBPC3) are all examples of negative selection. These disease mutations are primarily recessive and segregate at the expected low frequencies, supporting the hypothesized negative selection. There is evidence that the genetic basis of Type 1 diabetes may have undergone positive selection. A few cases have been reported where disease-causing mutations appear at high frequencies maintained by balancing selection. The most prominent example is mutations at the G6PD locus, where homozygosity results in G6PD enzyme deficiency and consequently haemolytic anaemia, but the heterozygous state is partially protective against malaria. Other possible explanations for segregation of disease alleles at moderate or high frequencies include genetic drift and recent shifts towards positive selection due to environmental changes such as diet, or genetic hitch-hiking.

Genome-wide comparative analyses of different human populations, as well as between species (e.g. human versus chimpanzee), are helping us to understand the relationship between disease and selection, and provide evidence that mutations in constrained genes are disproportionately associated with heritable disease phenotypes. Genes implicated in complex disorders tend to be under less negative selection than Mendelian disease genes or non-disease genes.

Project description

Goals

There are two kinds of genetic variants related to disease. The first are rare genetic variants that have a severe effect predominantly on simple traits (e.g. cystic fibrosis, Huntington's disease). The second, more common, genetic variants have a mild effect and are thought to be implicated in complex traits (e.g. cognition, diabetes, heart disease). Between these two types of genetic variants lies a significant gap of knowledge, which the 1000 Genomes Project is designed to address.

The primary goal of this project is to create a complete and detailed catalogue of human genetic variations, which in turn can be used for association studies relating genetic variation to disease. By doing so the consortium aims to discover >95% of the variants (e.g. SNPs, CNVs, indels) with minor allele frequencies as low as 1% across the genome and 0.1-0.5% in gene regions, as well as to estimate the population frequencies, haplotype backgrounds and linkage disequilibrium patterns of variant alleles.
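A back-of-the-envelope calculation (illustrative only; the consortium's real design also accounts for sequencing depth and error rates) shows why on the order of a thousand individuals are needed to catch variants this rare: the chance that a variant at 1% frequency is present in at least a couple of the sampled chromosomes rises steeply with sample size.

# Illustrative sketch: probability that a variant with population frequency f
# is carried by at least `min_copies` of the 2n sampled chromosomes.
from math import comb

def prob_variant_sampled(f, n_individuals, min_copies=2):
    chroms = 2 * n_individuals
    p_fewer = sum(comb(chroms, k) * f**k * (1 - f) ** (chroms - k) for k in range(min_copies))
    return 1 - p_fewer

for n in (100, 500, 1000):
    print(f"{n:4d} individuals: P(1% variant present in >= 2 chromosomes) = {prob_variant_sampled(0.01, n):.3f}")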

Secondary goals will include the support of better SNP and probe selection for genotyping platforms in future studies and the improvement of the human reference sequence. Furthermore, the completed database will be a useful tool for studying regions under selection, variation in multiple populations and understanding the underlying processes of mutation and recombination.

Outline

The human genome consists of approximately 3 billion DNA base pairs and is estimated to carry around 20,000 protein coding genes. In designing the study the consortium needed to address several critical issues regarding the project metrics such as technology challenges, data quality standards and sequence coverage.

Over the course of the next three years, scientists at the Sanger Institute, BGI Shenzhen and the National Human Genome Research Institute’s Large-Scale Sequencing Network are planning to sequence a minimum of 1,000 human genomes. Due to the large amount of sequence data that need to be generated and analyzed it is possible that other participants may be recruited over time.

Almost 10 billion bases will be sequenced per day over the two-year production phase. This equates to more than two human genomes every 24 hours; a groundbreaking capacity. Challenging the leading experts in bioinformatics and statistical genetics, the sequence dataset will comprise 6 trillion DNA bases, 60 times more sequence data than has been published in DNA databases over the past 25 years.

To determine the final design of the full project, three pilot studies were designed and will be carried out within the first year of the project. The first pilot intends to genotype 180 people from 3 major geographic groups at low coverage (2x). For the second pilot study, the genomes of two nuclear families (both parents and an adult child) are going to be sequenced with deep coverage (20x per genome). The third pilot study involves sequencing the coding regions (exons) of 1,000 genes in 1,000 people with deep coverage (20x).
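The trade-off between the 2x and 20x designs can be seen with the classic Lander-Waterman idealization, in which reads land uniformly at random and the fraction of bases left uncovered at an average depth c is roughly e^(-c). The sketch below is that idealized model only; real coverage is less uniform.

# Idealized Lander-Waterman sketch: expected fraction of bases covered by at
# least one read at a given average depth, assuming uniformly random read placement.
from math import exp

def expected_covered_fraction(mean_depth):
    """P(a base is covered at least once) = 1 - e^(-c) under the idealized model."""
    return 1 - exp(-mean_depth)

for depth in (2, 4, 20):
    print(f"{depth:2d}x average depth -> ~{expected_covered_fraction(depth):.1%} of bases covered")

Low-coverage sequencing of many individuals therefore trades per-genome completeness for population-level information, while the deeply sequenced trios give nearly complete individual genomes.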

It has been estimated that the project would likely cost more than $500 million if standard DNA sequencing technologies were used. Therefore, several new technologies (e.g. Solexa, 454, SOLiD) will be applied, lowering the expected costs to between $30 million and $50 million. The major support will be provided by the Wellcome Trust Sanger Institute in Hinxton, England; the Beijing Genomics Institute, Shenzhen (BGI Shenzhen), China; and the NHGRI, part of the National Institutes of Health (NIH).

In keeping with the Fort Lauderdale principles, all genome sequence data (including variant calls) is freely available as the project progresses and can be downloaded via FTP from the 1000 Genomes Project web page.

Human genome samples

Locations of population samples of 1000 Genomes Project. Each circle represents the number of sequences in the final release.
 
Based on the overall goals for the project, the samples will be chosen to provide power in populations where association studies for common diseases are being carried out. Furthermore, the samples do not need to have medical or phenotype information since the proposed catalogue will be a basic resource on human variation.

For the pilot studies human genome samples from the HapMap collection will be sequenced. It will be useful to focus on samples that have additional data available (such as ENCODE sequence, genome-wide genotypes, fosmid-end sequence, structural variation assays, and gene expression) to be able to compare the results with those from other projects.

Complying with extensive ethical procedures, the 1000 Genomes Project will then use samples from volunteer donors. The following populations will be included in the study: Yoruba in Ibadan (YRI), Nigeria; Japanese in Tokyo (JPT); Chinese in Beijing (CHB); Utah residents with ancestry from northern and western Europe (CEU); Luhya in Webuye, Kenya (LWK); Maasai in Kinyawa, Kenya (MKK); Toscani in Italy (TSI); Peruvians in Lima, Peru (PEL); Gujarati Indians in Houston (GIH); Chinese in metropolitan Denver (CHD); people of Mexican ancestry in Los Angeles (MXL); and people of African ancestry in the southwestern United States (ASW).

Community meeting

Data generated by the 1000 Genomes Project is widely used by the genetics community, making the first 1000 Genomes Project one of the most cited papers in biology. To support this user community, the project held a community analysis meeting in July 2012 that included talks highlighting key project discoveries, their impact on population genetics and human disease studies, and summaries of other large scale sequencing studies.

Project findings

Pilot phase

The pilot phase consisted of three projects:
  • low-coverage whole-genome sequencing of 179 individuals from 4 populations
  • high-coverage sequencing of 2 trios (mother-father-child)
  • exon-targeted sequencing of 697 individuals from 7 populations
It was found that on average, each person carries around 250-300 loss-of-function variants in annotated genes and 50-100 variants previously implicated in inherited disorders. Based on the two trios, it is estimated that the rate of de novo germline mutation is approximately 10⁻⁸ per base per generation.
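Multiplying that rate by the size of the genome gives a rough sense of scale; the numbers below are an illustrative calculation, not a project result, using the commonly cited figure of about 3 billion bases per haploid genome.

# Rough illustration: expected de novo germline mutations per child, given a
# rate of ~1e-8 per base per generation and ~3e9 bases per haploid genome.
mutation_rate = 1e-8            # per base per generation (trio-based estimate above)
haploid_genome = 3e9            # bases, approximate
per_copy = mutation_rate * haploid_genome
print(f"~{per_copy:.0f} new mutations per transmitted genome copy")
print(f"~{2 * per_copy:.0f} new mutations per child across both copies")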

Metagenomics

From Wikipedia, the free encyclopedia
 
Metagenomics allows the study of microbial communities like those present in this stream receiving acid drainage from surface coal mining.

Metagenomics is the study of genetic material recovered directly from environmental samples. The broad field may also be referred to as environmental genomics, ecogenomics or community genomics.

While traditional microbiology and microbial genome sequencing and genomics rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods.

Because of its ability to reveal the previously hidden diversity of microscopic life, metagenomics offers a powerful lens for viewing the microbial world that has the potential to revolutionize understanding of the entire living world. As the price of DNA sequencing continues to fall, metagenomics now allows microbial ecology to be investigated at a much greater scale and detail than before. Recent studies use either "shotgun" or PCR directed sequencing to get largely unbiased samples of all genes from all the members of the sampled communities.

Etymology

The term "metagenomics" was first used by Jo Handelsman, Jon Clardy, Robert M. Goodman, Sean F. Brady, and others, and first appeared in publication in 1998. The term metagenome referenced the idea that a collection of genes sequenced from the environment could be analyzed in a way analogous to the study of a single genome. In 2005, Kevin Chen and Lior Pachter (researchers at the University of California, Berkeley) defined metagenomics as "the application of modern genomics techniques without the need for isolation and lab cultivation of individual species".

History

Conventional sequencing begins with a culture of identical cells as a source of DNA. However, early metagenomic studies revealed that there are probably large groups of microorganisms in many environments that cannot be cultured and thus cannot be sequenced. These early studies focused on 16S ribosomal RNA sequences which are relatively short, often conserved within a species, and generally different between species. Many 16S rRNA sequences have been found which do not belong to any known cultured species, indicating that there are numerous non-isolated organisms. These surveys of ribosomal RNA (rRNA) genes taken directly from the environment revealed that cultivation based methods find less than 1% of the bacterial and archaeal species in a sample. Much of the interest in metagenomics comes from these discoveries that showed that the vast majority of microorganisms had previously gone unnoticed.

Early molecular work in the field was conducted by Norman R. Pace and colleagues, who used PCR to explore the diversity of ribosomal RNA sequences. The insights gained from these breakthrough studies led Pace to propose the idea of cloning DNA directly from environmental samples as early as 1985. This led to the first report of isolating and cloning bulk DNA from an environmental sample, published by Pace and colleagues in 1991 while Pace was in the Department of Biology at Indiana University. Considerable efforts ensured that these were not PCR false positives and supported the existence of a complex community of unexplored species. Although this methodology was limited to exploring highly conserved, non-protein coding genes, it did support early microbial morphology-based observations that diversity was far more complex than was known by culturing methods. Soon after that, Healy reported the metagenomic isolation of functional genes from "zoolibraries" constructed from a complex culture of environmental organisms grown in the laboratory on dried grasses in 1995. After leaving the Pace laboratory, Edward DeLong continued in the field and has published work that has largely laid the groundwork for environmental phylogenies based on signature 16S sequences, beginning with his group's construction of libraries from marine samples.

In 2002, Mya Breitbart, Forest Rohwer, and colleagues used environmental shotgun sequencing (see below) to show that 200 liters of seawater contains over 5000 different viruses. Subsequent studies showed that there are more than a thousand viral species in human stool and possibly a million different viruses per kilogram of marine sediment, including many bacteriophages. Essentially all of the viruses in these studies were new species. In 2004, Gene Tyson, Jill Banfield, and colleagues at the University of California, Berkeley and the Joint Genome Institute sequenced DNA extracted from an acid mine drainage system. This effort resulted in the complete, or nearly complete, genomes for a handful of bacteria and archaea that had previously resisted attempts to culture them.

Flow diagram of a typical metagenome project
 
Beginning in 2003, Craig Venter, leader of the privately funded parallel to the Human Genome Project, led the Global Ocean Sampling Expedition (GOS), circumnavigating the globe and collecting metagenomic samples throughout the journey. All of these samples were sequenced using shotgun sequencing, in the hope that new genomes (and therefore new organisms) would be identified. The pilot project, conducted in the Sargasso Sea, found DNA from nearly 2000 different species, including 148 types of bacteria never before seen. Venter circumnavigated the globe and thoroughly explored the West Coast of the United States, and completed a two-year expedition to explore the Baltic, Mediterranean and Black Seas. Analysis of the metagenomic data collected during this journey revealed two groups of organisms, one composed of taxa adapted to environmental conditions of 'feast or famine', and a second composed of relatively fewer but more abundantly and widely distributed taxa primarily composed of plankton.

In 2005 Stephan C. Schuster at Penn State University and colleagues published the first sequences of an environmental sample generated with high-throughput sequencing, in this case massively parallel pyrosequencing developed by 454 Life Sciences. Another early paper in this area appeared in 2006 by Robert Edwards, Forest Rohwer, and colleagues at San Diego State University.

Sequencing

Recovery of DNA sequences longer than a few thousand base pairs from environmental samples was very difficult until recent advances in molecular biological techniques allowed the construction of libraries in bacterial artificial chromosomes (BACs), which provided better vectors for molecular cloning.

Environmental Shotgun Sequencing (ESS). (A) Sampling from habitat; (B) filtering particles, typically by size; (C) Lysis and DNA extraction; (D) cloning and library construction; (E) sequencing the clones; (F) sequence assembly into contigs and scaffolds.

Shotgun metagenomics

Advances in bioinformatics, refinements of DNA amplification, and the proliferation of computational power have greatly aided the analysis of DNA sequences recovered from environmental samples, allowing the adaptation of shotgun sequencing to metagenomic samples (known also as whole metagenome shotgun or WMGS sequencing). The approach, used to sequence many cultured microorganisms and the human genome, randomly shears DNA, sequences many short sequences, and reconstructs them into a consensus sequence. Shotgun sequencing reveals genes present in environmental samples. Historically, clone libraries were used to facilitate this sequencing. However, with advances in high throughput sequencing technologies, the cloning step is no longer necessary and greater yields of sequencing data can be obtained without this labour-intensive bottleneck step. Shotgun metagenomics provides information both about which organisms are present and what metabolic processes are possible in the community. Because the collection of DNA from an environment is largely uncontrolled, the most abundant organisms in an environmental sample are most highly represented in the resulting sequence data. To achieve the high coverage needed to fully resolve the genomes of under-represented community members, large samples, often prohibitively so, are needed. On the other hand, the random nature of shotgun sequencing ensures that many of these organisms, which would otherwise go unnoticed using traditional culturing techniques, will be represented by at least some small sequence segments. An emerging approach combines shotgun sequencing and chromosome conformation capture (Hi-C), which measures the proximity of any two DNA sequences within the same cell, to guide microbial genome assembly.
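The sampling problem for rare community members can be made concrete with the same uniform-coverage idealization used for single genomes: the expected depth for one organism is the total sequencing yield multiplied by that organism's share of the community DNA, divided by its genome size. The community fractions and genome sizes below are made-up example values for illustration.

# Illustrative sketch: expected per-organism depth in a shotgun metagenome,
# assuming reads are drawn in proportion to each organism's share of community DNA.
def expected_depth(total_bases, dna_fraction, genome_size):
    return total_bases * dna_fraction / genome_size

total_bases = 50e9                       # e.g. a 50-gigabase sequencing run
community = {                            # organism: (fraction of community DNA, genome size)
    "dominant bacterium": (0.40, 5e6),
    "common bacterium":   (0.05, 4e6),
    "rare archaeon":      (0.0001, 2e6),
}
for name, (frac, size) in community.items():
    print(f"{name:20s} ~{expected_depth(total_bases, frac, size):7.1f}x expected depth")

Even in a large run, the rarest members end up with only a few-fold coverage, which is why fully resolving their genomes can require prohibitively large samples.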

High-throughput sequencing

The first metagenomic studies conducted using high-throughput sequencing used massively parallel 454 pyrosequencing. Three other technologies commonly applied to environmental sampling are the Ion Torrent Personal Genome Machine, the Illumina MiSeq or HiSeq, and the Applied Biosystems SOLiD system. These techniques for sequencing DNA generate shorter fragments than Sanger sequencing; the Ion Torrent PGM System and 454 pyrosequencing typically produce ~400 bp reads, Illumina MiSeq produces 400–700 bp reads (depending on whether paired-end options are used), and SOLiD produces 25–75 bp reads. Historically, these read lengths were significantly shorter than the typical Sanger sequencing read length of ~750 bp; however, Illumina technology is quickly approaching this benchmark. This limitation is compensated for by the much larger number of sequence reads. In 2009, pyrosequenced metagenomes generated 200–500 megabases and Illumina platforms generated around 20–50 gigabases, but these outputs have increased by orders of magnitude in recent years. An additional advantage of high-throughput sequencing is that this technique does not require cloning the DNA before sequencing, removing one of the main biases and bottlenecks in environmental sampling.

Bioinformatics

The data generated by metagenomics experiments are both enormous and inherently noisy, containing fragmented data representing as many as 10,000 species. The sequencing of the cow rumen metagenome generated 279 gigabases, or 279 billion base pairs of nucleotide sequence data, while the human gut microbiome gene catalog identified 3.3 million genes assembled from 567.7 gigabases of sequence data. Collecting, curating, and extracting useful biological information from datasets of this size represent significant computational challenges for researchers.

Sequence pre-filtering

The first step of metagenomic data analysis requires the execution of certain pre-filtering steps, including the removal of redundant, low-quality sequences and sequences of probable eukaryotic origin (especially in metagenomes of human origin). The methods available for the removal of contaminating eukaryotic genomic DNA sequences include Eu-Detect and DeConseq.
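
As a simple illustration of what such pre-filtering involves (this is a minimal sketch, not Eu-Detect, DeConseq, or any specific published pipeline), the following Python fragment drops reads that are too short or whose mean Phred quality falls below a threshold, assuming standard FASTQ with an ASCII quality offset of 33.

    # Minimal FASTQ pre-filtering sketch: discard short and low-quality reads.

    def read_fastq(path):
        with open(path) as fh:
            while True:
                header = fh.readline().rstrip()
                if not header:
                    break
                seq = fh.readline().rstrip()
                fh.readline()                      # '+' separator line
                qual = fh.readline().rstrip()
                yield header, seq, qual

    def mean_quality(qual):
        # Phred scores encoded with an ASCII offset of 33 (assumption).
        return sum(ord(c) - 33 for c in qual) / len(qual)

    def prefilter(path, min_len=75, min_qual=20):
        for header, seq, qual in read_fastq(path):
            if len(seq) >= min_len and mean_quality(qual) >= min_qual:
                yield header, seq, qual

    # Example: kept = list(prefilter("sample.fastq"))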

Assembly

DNA sequence data from genomic and metagenomic projects are essentially the same, but genomic sequence data offers higher coverage while metagenomic data is usually highly non-redundant. Furthermore, the increased use of second-generation sequencing technologies with short read lengths means that much future metagenomic data will be error-prone. Taken in combination, these factors make the assembly of metagenomic sequence reads into genomes difficult and unreliable. Misassemblies are caused by the presence of repetitive DNA sequences, which make assembly especially difficult given the differences in the relative abundance of species present in the sample. Misassemblies can also involve the combination of sequences from more than one species into chimeric contigs.

There are several assembly programs, most of which can use information from paired-end tags in order to improve the accuracy of assemblies. Some programs, such as Phrap or Celera Assembler, were designed to assemble single genomes but nevertheless produce good results when assembling metagenomic data sets. Other programs, such as the Velvet assembler, have been optimized for the shorter reads produced by second-generation sequencing through the use of de Bruijn graphs. The use of reference genomes allows researchers to improve the assembly of the most abundant microbial species, but this approach is limited by the small subset of microbial phyla for which sequenced genomes are available. After an assembly is created, an additional challenge is "metagenomic deconvolution", or determining which sequences come from which species in the sample.
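
The de Bruijn graph idea can be illustrated in a few lines of Python. The toy sketch below (an illustration only, not Velvet's actual implementation) decomposes reads into k-mers, links each (k-1)-mer to its successors, and then reads a contig off the graph by following nodes that have exactly one outgoing edge; branches, which in real data arise from repeats and from mixing species at different abundances, stop the extension.

    # Toy de Bruijn graph assembly sketch.
    from collections import defaultdict

    def build_de_bruijn(reads, k=5):
        graph = defaultdict(set)        # (k-1)-mer -> set of successor (k-1)-mers
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])
        return graph

    def extend_contig(graph, start):
        """Follow the graph while each node has exactly one successor."""
        contig, node = start, start
        while len(graph.get(node, ())) == 1:
            nxt = next(iter(graph[node]))
            contig += nxt[-1]
            node = nxt
            if node == start:           # guard against cycles
                break
        return contig

    reads = ["ATGGCGTG", "GCGTGCAA", "TGCAATG"]   # toy overlapping reads
    g = build_de_bruijn(reads, k=5)
    print(extend_contig(g, "ATGG"))               # reconstructs ATGGCGTGCAATG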

Gene prediction

Metagenomic analysis pipelines use two approaches in the annotation of coding regions in the assembled contigs. The first approach is to identify genes based upon homology with genes that are already publicly available in sequence databases, usually by BLAST searches. This type of approach is implemented in the program MEGAN4. The second, ab initio approach, uses intrinsic features of the sequence to predict coding regions based upon gene training sets from related organisms. This is the approach taken by programs such as GeneMark and GLIMMER. The main advantage of ab initio prediction is that it enables the detection of coding regions that lack homologs in the sequence databases; however, it is most accurate when large regions of contiguous genomic DNA are available for comparison.
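
At its simplest, ab initio prediction starts from open reading frames. The sketch below (a bare-bones illustration, not GeneMark or GLIMMER, which add trained codon-usage and hidden Markov models) scans all six reading frames of a contig for ATG-to-stop open reading frames above a minimum length.

    # Minimal open-reading-frame finder as a stand-in for ab initio prediction.
    STOPS = {"TAA", "TAG", "TGA"}

    def revcomp(seq):
        return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def orfs_in_frame(seq, frame, min_len=300):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOPS and start is not None:
                if i + 3 - start >= min_len:
                    yield start, i + 3
                start = None

    def find_orfs(seq, min_len=300):
        for strand, s in (("+", seq), ("-", revcomp(seq))):
            for frame in range(3):
                for begin, end in orfs_in_frame(s, frame, min_len):
                    yield strand, begin, end

    # Example: list(find_orfs(contig_sequence, min_len=300))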

Species diversity

Figure: A 2016 representation of the tree of life.
Gene annotations provide the "what", while measurements of species diversity provide the "who". In order to connect community composition and function in metagenomes, sequences must be binned. Binning is the process of associating a particular sequence with an organism. In similarity-based binning, methods such as BLAST are used to rapidly search for phylogenetic markers or otherwise similar sequences in existing public databases. This approach is implemented in MEGAN. Another tool, PhymmBL, uses interpolated Markov models to assign reads. MetaPhlAn and AMPHORA are methods based on unique clade-specific markers for estimating organismal relative abundances with improved computational performance. Other tools, such as mOTUs and MetaPhyler, use universal marker genes to profile prokaryotic species. With the mOTUs profiler it is possible to profile species without a reference genome, improving the estimation of microbial community diversity. Recent methods, such as SLIMM, use the read coverage landscape of individual reference genomes to minimize false-positive hits and obtain reliable relative abundances. In composition-based binning, methods use intrinsic features of the sequence, such as oligonucleotide frequencies or codon usage bias. Once sequences are binned, it is possible to carry out comparative analysis of diversity and richness.
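
Composition-based binning can be sketched very simply. The fragment below (a minimal illustration with hypothetical contig and seed dictionaries, not any published binner) represents each contig by its tetranucleotide frequency vector and assigns it to the seed contig with the most similar composition; real tools layer clustering, coverage information, and supervised models on top of this idea.

    # Minimal composition-based binning sketch using tetranucleotide frequencies.
    from itertools import product
    from collections import Counter

    TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]

    def tetra_freqs(seq):
        counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
        total = sum(counts[t] for t in TETRAMERS) or 1
        return [counts[t] / total for t in TETRAMERS]

    def bin_by_nearest_seed(contigs, seeds):
        """Assign each contig to the seed contig with the closest composition."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        seed_vecs = {name: tetra_freqs(seq) for name, seq in seeds.items()}
        bins = {}
        for name, seq in contigs.items():
            vec = tetra_freqs(seq)
            bins[name] = min(seed_vecs, key=lambda s: dist(vec, seed_vecs[s]))
        return bins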

Data integration

The massive amount of exponentially growing sequence data is a daunting challenge that is complicated by the complexity of the metadata associated with metagenomic projects. Metadata includes detailed information about the three-dimensional (including depth, or height) geography and environmental features of the sample, physical data about the sample site, and the methodology of the sampling. This information is necessary both to ensure replicability and to enable downstream analysis. Because of its importance, metadata and collaborative data review and curation require standardized data formats located in specialized databases, such as the Genomes OnLine Database (GOLD).

Several tools have been developed to integrate metadata and sequence data, allowing downstream comparative analyses of different datasets using a number of ecological indices. In 2007, Folker Meyer and Robert Edwards and a team at Argonne National Laboratory and the University of Chicago released the Metagenomics Rapid Annotation using Subsystem Technology server (MG-RAST), a community resource for metagenome data set analysis. As of June 2012, over 14.8 terabases (14.8×10^12 bases) of DNA had been analyzed, with more than 10,000 public data sets freely available for comparison within MG-RAST. More than 8,000 users had submitted a total of 50,000 metagenomes to MG-RAST. The Integrated Microbial Genomes/Metagenomes (IMG/M) system also provides a collection of tools for functional analysis of microbial communities based on their metagenome sequence, based upon reference isolate genomes included from the Integrated Microbial Genomes (IMG) system and the Genomic Encyclopedia of Bacteria and Archaea (GEBA) project.

One of the first standalone tools for analysing high-throughput metagenome shotgun data was MEGAN (MEta Genome ANalyzer). A first version of the program was used in 2005 to analyse the metagenomic context of DNA sequences obtained from a mammoth bone. Based on a BLAST comparison against a reference database, this tool performs both taxonomic and functional binning, by placing the reads onto the nodes of the NCBI taxonomy using a simple lowest common ancestor (LCA) algorithm or onto the nodes of the SEED or KEGG classifications, respectively.

With the advent of fast and inexpensive sequencing instruments, the growth of databases of DNA sequences is now exponential (e.g., the NCBI GenBank database). Faster and more efficient tools are needed to keep pace with high-throughput sequencing, because BLAST-based approaches such as MG-RAST or MEGAN are slow to annotate large samples (e.g., several hours to process a small or medium-sized dataset). Thus, ultra-fast classifiers have recently emerged, thanks to more affordable powerful servers. These tools can perform taxonomic annotation at extremely high speed; for example, according to its authors, CLARK can accurately classify "32 million metagenomic short reads per minute". At such a speed, a very large dataset of a billion short reads can be processed in about 30 minutes.
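
Much of this speed comes from replacing alignment with exact k-mer lookups. The toy sketch below shows the general flavour of k-mer-based classification (it is an illustration only, and not CLARK's actual algorithm or data structures): each read is assigned to the reference whose k-mer index it shares the most k-mers with.

    # Toy k-mer-based read classifier (illustrative only).

    def kmers(seq, k=21):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def build_index(references, k=21):
        """references: dict mapping taxon name -> genome sequence (hypothetical)."""
        return {taxon: kmers(seq, k) for taxon, seq in references.items()}

    def classify(read, index, k=21):
        read_kmers = kmers(read, k)
        best, hits = "unclassified", 0
        for taxon, ref_kmers in index.items():
            shared = len(read_kmers & ref_kmers)
            if shared > hits:
                best, hits = taxon, shared
        return best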

With the increasing availability of samples containing ancient DNA, and given the uncertainty associated with the nature of those samples (ancient DNA damage), FALCON, a fast tool capable of producing conservative similarity estimates, has been made available. According to FALCON's authors, it can use relaxed thresholds and edit distances without affecting memory and speed performance.

Comparative metagenomics

Comparative analyses between metagenomes can provide additional insight into the function of complex microbial communities and their role in host health. Pairwise or multiple comparisons between metagenomes can be made at the level of sequence composition (comparing GC-content or genome size), taxonomic diversity, or functional complement. Comparisons of population structure and phylogenetic diversity can be made on the basis of 16S and other phylogenetic marker genes, or—in the case of low-diversity communities—by genome reconstruction from the metagenomic dataset. Functional comparisons between metagenomes may be made by comparing sequences against reference databases such as COG or KEGG, tabulating the abundance by category, and evaluating any differences for statistical significance. This gene-centric approach emphasizes the functional complement of the community as a whole rather than taxonomic groups, and shows that the functional complements are analogous under similar environmental conditions. Consequently, metadata on the environmental context of the metagenomic sample is especially important in comparative analyses, as it provides researchers with the ability to study the effect of habitat upon community structure and function.
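
To make the gene-centric comparison concrete, the sketch below (with hypothetical counts, using SciPy's fisher_exact for the test) tabulates reads assigned to a single functional category versus all others in two metagenomes and asks whether the category's proportion differs significantly between them; in practice this is repeated across all COG or KEGG categories with multiple-testing correction.

    # Minimal sketch of a per-category functional comparison between two metagenomes.
    from scipy.stats import fisher_exact

    sample_a = {"carbohydrate_metabolism": 1200, "other": 48800}   # hypothetical
    sample_b = {"carbohydrate_metabolism": 2100, "other": 47900}   # hypothetical

    table = [[sample_a["carbohydrate_metabolism"], sample_a["other"]],
             [sample_b["carbohydrate_metabolism"], sample_b["other"]]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.3g}")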

Several studies have also utilized oligonucleotide usage patterns to identify the differences across diverse microbial communities. Examples of such methodologies include the dinucleotide relative abundance approach by Willner et al. and the HabiSign approach of Ghosh et al. The latter study also indicated that differences in tetranucleotide usage patterns can be used to identify genes (or metagenomic reads) originating from specific habitats. Additionally, some methods, such as TriageTools or Compareads, detect similar reads between two read sets. The similarity measure they apply to reads is based on the number of identical words of length k shared by pairs of reads.
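
A minimal sketch of this shared-word idea follows (an illustration of the principle only, not the published Compareads implementation, which relies on probabilistic data structures for scale): a read from set A is counted as similar if at least t of its k-mers also occur somewhere in read set B.

    # Minimal shared-k-mer read comparison between two read sets.

    def kmer_set(reads, k=16):
        return {read[i:i + k] for read in reads for i in range(len(read) - k + 1)}

    def similar_reads(reads_a, reads_b, k=16, t=2):
        b_kmers = kmer_set(reads_b, k)
        similar = []
        for read in reads_a:
            shared = sum(1 for i in range(len(read) - k + 1)
                         if read[i:i + k] in b_kmers)
            if shared >= t:
                similar.append(read)
        return similar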

A key goal in comparative metagenomics is to identify the microbial group(s) responsible for conferring specific characteristics to a given environment. However, due to issues inherent in the sequencing technologies, artifacts need to be accounted for, as in metagenomeSeq. Others have characterized inter-microbial interactions between the resident microbial groups. A GUI-based comparative metagenomic analysis application called Community-Analyzer has been developed by Kuntal et al., which implements a correlation-based graph layout algorithm that not only facilitates a quick visualization of the differences in the analyzed microbial communities (in terms of their taxonomic composition), but also provides insights into the inherent inter-microbial interactions occurring therein. Notably, this layout algorithm also enables grouping of the metagenomes based on probable inter-microbial interaction patterns rather than simply comparing abundance values of various taxonomic groups. In addition, the tool implements several interactive GUI-based functionalities that enable users to perform standard comparative analyses across microbiomes.

Data analysis

Community metabolism

In many bacterial communities, natural or engineered (such as bioreactors), there is a significant division of labor in metabolism (syntrophy), during which the waste products of some organisms are metabolites for others. In one such system, the methanogenic bioreactor, functional stability requires the presence of several syntrophic species (Syntrophobacterales and Synergistia) working together in order to turn raw resources into fully metabolized waste (methane). Using comparative gene studies and expression experiments with microarrays or proteomics, researchers can piece together a metabolic network that goes beyond species boundaries. Such studies require detailed knowledge about which versions of which proteins are coded by which species and even by which strains of which species. Therefore, community genomic information is another fundamental tool (with metabolomics and proteomics) in the quest to determine how metabolites are transferred and transformed by a community.

Metatranscriptomics

Metagenomics allows researchers to access the functional and metabolic diversity of microbial communities, but it cannot show which of these processes are active. The extraction and analysis of metagenomic mRNA (the metatranscriptome) provides information on the regulation and expression profiles of complex communities. Because of the technical difficulties (the short half-life of mRNA, for example) in the collection of environmental RNA, there have been relatively few in situ metatranscriptomic studies of microbial communities to date. While originally limited to microarray technology, metatranscriptomics studies have made use of transcriptomics technologies to measure whole-genome expression and quantification of a microbial community, first employed in analysis of ammonia oxidation in soils.

Viruses

Metagenomic sequencing is particularly useful in the study of viral communities. Because viruses lack a shared universal phylogenetic marker (such as 16S rRNA for bacteria and archaea, and 18S rRNA for eukaryotes), the only way to access the genetic diversity of the viral community from an environmental sample is through metagenomics. Viral metagenomes (also called viromes) should thus provide more and more information about viral diversity and evolution. For example, a metagenomic pipeline called Giant Virus Finder showed the first evidence of the existence of giant viruses in a saline desert and in Antarctic dry valleys.

Applications

Metagenomics has the potential to advance knowledge in a wide variety of fields. It can also be applied to solve practical challenges in medicine, engineering, agriculture, sustainability and ecology.

Agriculture

The soils in which plants grow are inhabited by microbial communities, with one gram of soil containing around 10^9–10^10 microbial cells, which comprise about one gigabase of sequence information. The microbial communities that inhabit soils are some of the most complex known to science, and they remain poorly understood despite their economic importance. Microbial consortia perform a wide variety of ecosystem services necessary for plant growth, including fixing atmospheric nitrogen, nutrient cycling, disease suppression, and the sequestration of iron and other metals. Functional metagenomics strategies are being used to explore the interactions between plants and microbes through cultivation-independent study of these microbial communities. By allowing insights into the role of previously uncultivated or rare community members in nutrient cycling and the promotion of plant growth, metagenomic approaches can contribute to improved disease detection in crops and livestock and the adoption of enhanced farming practices which improve crop health by harnessing the relationship between microbes and plants.

Biofuel

Bioreactors allow the observation of microbial communities as they convert biomass into cellulosic ethanol.

Biofuels are fuels derived from biomass conversion, as in the conversion of cellulose contained in corn stalks, switchgrass, and other biomass into cellulosic ethanol. This process depends upon microbial consortia (associations) that transform the cellulose into sugars, followed by the fermentation of the sugars into ethanol. Microbes also produce a variety of sources of bioenergy, including methane and hydrogen.

The efficient industrial-scale deconstruction of biomass requires novel enzymes with higher productivity and lower cost. Metagenomic approaches to the analysis of complex microbial communities allow the targeted screening of enzymes with industrial applications in biofuel production, such as glycoside hydrolases. Furthermore, knowledge of how these microbial communities function is required to control them, and metagenomics is a key tool in their understanding. Metagenomic approaches allow comparative analyses between convergent microbial systems like biogas fermenters or insect herbivores such as the fungus garden of the leafcutter ants.

Biotechnology

Microbial communities produce a vast array of biologically active chemicals that are used in competition and communication. Many of the drugs in use today were originally uncovered in microbes; recent progress in mining the rich genetic resource of non-culturable microbes has led to the discovery of new genes, enzymes, and natural products. The application of metagenomics has allowed the development of commodity and fine chemicals, agrochemicals and pharmaceuticals where the benefit of enzyme-catalyzed chiral synthesis is increasingly recognized.

Two types of analysis are used in the bioprospecting of metagenomic data: function-driven screening for an expressed trait, and sequence-driven screening for DNA sequences of interest. Function-driven analysis seeks to identify clones expressing a desired trait or useful activity, followed by biochemical characterization and sequence analysis. This approach is limited by the availability of a suitable screen and the requirement that the desired trait be expressed in the host cell. Moreover, the low rate of discovery (less than one per 1,000 clones screened) and its labor-intensive nature further limit this approach. In contrast, sequence-driven analysis uses conserved DNA sequences to design PCR primers to screen clones for the sequence of interest. In comparison to cloning-based approaches, using a sequence-only approach further reduces the amount of bench work required. The application of massively parallel sequencing also greatly increases the amount of sequence data generated, which requires high-throughput bioinformatic analysis pipelines. The sequence-driven approach to screening is limited by the breadth and accuracy of gene functions present in public sequence databases. In practice, experiments make use of a combination of both functional and sequence-based approaches based upon the function of interest, the complexity of the sample to be screened, and other factors. An example of success using metagenomics as a biotechnology for drug discovery is illustrated by the malacidin antibiotics.
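
A sequence-driven screen can be prototyped in a few lines. The sketch below (with a hypothetical degenerate primer and contig set, not any specific published screen) expands IUPAC ambiguity codes in a conserved primer sequence into a regular expression and reports where it matches in metagenomic contigs.

    # Minimal sequence-driven screening sketch: scan contigs with a degenerate primer.
    import re

    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
             "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
             "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

    def primer_to_regex(primer):
        return re.compile("".join(IUPAC[base] for base in primer))

    def screen_contigs(contigs, primer):
        """contigs: dict mapping contig name -> sequence (hypothetical)."""
        pattern = primer_to_regex(primer)
        return {name: [m.start() for m in pattern.finditer(seq)]
                for name, seq in contigs.items()
                if pattern.search(seq)}

    # Example with a hypothetical degenerate primer:
    # hits = screen_contigs(contigs, "GGHGGYATGAA")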

Ecology

Metagenomics can provide valuable insights into the functional ecology of environmental communities. Metagenomic analysis of the bacterial consortia found in the defecations of Australian sea lions suggests that nutrient-rich sea lion faeces may be an important nutrient source for coastal ecosystems. This is because the bacteria that are expelled simultaneously with the defecations are adept at breaking down the nutrients in the faeces into a bioavailable form that can be taken up into the food chain.

DNA sequencing can also be used more broadly to identify species present in a body of water, debris filtered from the air, or sample of dirt. This can establish the range of invasive species and endangered species, and track seasonal populations.

Environmental remediation

Metagenomics can improve strategies for monitoring the impact of pollutants on ecosystems and for cleaning up contaminated environments. Increased understanding of how microbial communities cope with pollutants improves assessments of the potential of contaminated sites to recover from pollution and increases the chances that bioaugmentation or biostimulation trials will succeed.

Gut microbe characterization

Microbial communities play a key role in preserving human health, but their composition and the mechanisms by which they do so remain mysterious. Metagenomic sequencing is being used to characterize the microbial communities from 15–18 body sites in at least 250 individuals. This is part of the Human Microbiome initiative, whose primary goals are to determine whether there is a core human microbiome, to understand the changes in the human microbiome that can be correlated with human health, and to develop new technological and bioinformatics tools to support these goals.

Another medical study, part of the MetaHit (Metagenomics of the Human Intestinal Tract) project, examined 124 individuals from Denmark and Spain, comprising healthy, overweight, and inflammatory bowel disease patients. The study attempted to categorize the depth and phylogenetic diversity of gastrointestinal bacteria. Using Illumina GA sequence data and SOAPdenovo, a de Bruijn graph-based tool specifically designed for assembling short reads, the researchers were able to generate 6.58 million contigs greater than 500 bp, for a total contig length of 10.3 Gb and an N50 length of 2.2 kb.
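
For readers unfamiliar with the N50 statistic quoted above, it is the length of the shortest contig such that contigs of that length or longer account for at least half of the total assembled bases. A minimal sketch (with hypothetical contig lengths) is shown below.

    # Minimal N50 computation from a list of assembled contig lengths.

    def n50(contig_lengths):
        total = sum(contig_lengths)
        running = 0
        for length in sorted(contig_lengths, reverse=True):
            running += length
            if running >= total / 2:
                return length

    print(n50([5000, 3000, 2200, 1000, 800, 500]))   # hypothetical lengths -> 3000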

The study demonstrated that two bacterial divisions, Bacteroidetes and Firmicutes, constitute over 90% of the known phylogenetic categories that dominate distal gut bacteria. Using the relative gene frequencies found within the gut, these researchers identified 1,244 metagenomic clusters that are critically important for the health of the intestinal tract. There are two types of functions in these clusters: housekeeping functions and those specific to the intestine. The housekeeping gene clusters are required in all bacteria and are often major players in the main metabolic pathways, including central carbon metabolism and amino acid synthesis. The gut-specific functions include adhesion to host proteins and the harvesting of sugars from globoseries glycolipids. Patients with inflammatory bowel disease were shown to exhibit 25% fewer genes and lower bacterial diversity than individuals not suffering from the disease, indicating that changes in patients' gut biome diversity may be associated with this condition.

While these studies highlight some potentially valuable medical applications, only 31–48.8% of the reads could be aligned to 194 public human gut bacterial genomes and 7.6–21.2% to bacterial genomes available in GenBank, which indicates that far more research is still necessary to capture novel bacterial genomes.

Infectious disease diagnosis

Differentiating between infectious and non-infectious illness, and identifying the underlying etiology of infection, can be quite challenging. For example, more than half of cases of encephalitis remain undiagnosed, despite extensive testing using state-of-the-art clinical laboratory methods. Metagenomic sequencing shows promise as a sensitive and rapid method to diagnose infection by comparing genetic material found in a patient's sample to a database of thousands of bacteria, viruses, and other pathogens.
