
Monday, March 21, 2022

Metabarcoding

From Wikipedia, the free encyclopedia
 
Differences in the standard methods for DNA barcoding and metabarcoding. While DNA barcoding focuses on a specific species, metabarcoding examines whole communities.

Metabarcoding is the barcoding of DNA/RNA (or eDNA/eRNA) in a manner that allows for the simultaneous identification of many taxa within the same sample. The main difference between barcoding and metabarcoding is that metabarcoding does not focus on one specific organism, but instead aims to determine species composition within a sample.

A barcode consists of a short, variable gene region that is useful for taxonomic assignment, flanked by highly conserved gene regions that can be used for primer design. This idea of general barcoding originated in 2003 from researchers at the University of Guelph.

The metabarcoding procedure, like general barcoding, proceeds in order through stages of DNA extraction, PCR amplification, sequencing and data analysis. Different genes are used depending on whether the aim is to barcode a single species or to metabarcode several species. In the latter case, a more universal gene is used. Metabarcoding does not use single-species DNA/RNA as a starting point, but DNA/RNA from several different organisms derived from one environmental or bulk sample.

Environmental DNA

Environmental DNA or eDNA describes the genetic material present in environmental samples such as sediment, water, and air, including whole cells, extracellular DNA and potentially whole organisms. eDNA can be captured from environmental samples and preserved, extracted, amplified, sequenced, and categorized based on its sequence. From this information, detection and classification of species is possible. eDNA may come from skin, mucus, saliva, sperm, secretions, eggs, feces, urine, blood, roots, leaves, fruit, pollen, and rotting bodies of larger organisms, while microorganisms may be obtained in their entirety. eDNA production is dependent on biomass, age and feeding activity of the organism as well as physiology, life history, and space use.

By 2019 methods in eDNA research had been expanded to be able to assess whole communities from a single sample. This process involves metabarcoding, which can be precisely defined as the use of general or universal polymerase chain reaction (PCR) primers on mixed DNA samples from any origin followed by high-throughput next-generation sequencing (NGS) to determine the species composition of the sample. This method has been common in microbiology for years, but, as of 2020, it is only just finding its footing in the assessment of macroorganisms. Ecosystem-wide applications of eDNA metabarcoding have the potential to not only describe communities and biodiversity, but also to detect interactions and functional ecology over large spatial scales, though it may be limited by false readings due to contamination or other errors. Altogether, eDNA metabarcoding increases speed, accuracy, and identification over traditional barcoding and decreases cost, but needs to be standardized and unified, integrating taxonomy and molecular methods for full ecological study.

Applications of environmental DNA metabarcoding in aquatic and terrestrial ecosystems 
 
Global ecosystem and biodiversity monitoring with environmental DNA metabarcoding

eDNA metabarcoding has applications to diversity monitoring across all habitats and taxonomic groups, ancient ecosystem reconstruction, plant-pollinator interactions, diet analysis, invasive species detection, pollution responses, and air quality monitoring. eDNA metabarcoding is a unique method still in development and will likely remain in flux for some time as technology advances and procedures become standardized. However, as metabarcoding is optimized and its use becomes more widespread, it is likely to become an essential tool for ecological monitoring and global conservation study.

Community DNA

Since the inception of high-throughput sequencing (HTS), the use of metabarcoding as a biodiversity detection tool has drawn immense interest. However, there has been little clarity regarding which source material is used to conduct metabarcoding analyses (e.g., environmental DNA versus community DNA). Without clarity between these two source materials, differences in sampling and in laboratory procedures can affect the bioinformatics pipelines used for data processing and complicate the interpretation of spatial and temporal biodiversity patterns. It is therefore important to clearly differentiate among the prevailing source materials and their effects on downstream analysis and interpretation, contrasting environmental DNA metabarcoding of animals and plants with community DNA metabarcoding.

With community DNA metabarcoding of animals and plants, the targeted groups are most often collected in bulk (e.g., soil, malaise trap or net), and individuals are removed from other sample debris and pooled together prior to bulk DNA extraction. In contrast, macro‐organism eDNA is isolated directly from an environmental material (e.g., soil or water) without prior segregation of individual organisms or plant material from the sample and implicitly assumes that the whole organism is not present in the sample. Of course, community DNA samples may contain DNA from parts of tissues, cells and organelles of other organisms (e.g., gut contents, cutaneous intracellular or extracellular DNA). Likewise, macro‐organism eDNA samples may inadvertently capture whole microscopic nontarget organisms (e.g., protists, bacteria). Thus, the distinction can at least partly break down in practice.

Another important distinction between community DNA and macro-organism eDNA is that sequences generated from community DNA metabarcoding can be taxonomically verified when the specimens are not destroyed in the extraction process: sequences can then be generated from voucher specimens using Sanger sequencing. As samples for eDNA metabarcoding lack whole organisms, no such in situ comparisons can be made. Taxonomic affinities can therefore only be established by comparing obtained sequences (or molecular operational taxonomic units, MOTUs, generated bioinformatically) with taxonomically annotated sequences, such as those in NCBI's GenBank nucleotide database or BOLD, or with self-generated reference databases built from Sanger-sequenced DNA. (A MOTU is a group identified by a clustering algorithm using a predefined percentage of sequence similarity, for example 97%.) The resulting list of taxa can then be at least partially corroborated by comparison with conventional physical, acoustic or visual survey methods conducted at the same time, or with historical survey records for the location.

The difference in source material between community DNA and eDNA therefore has distinct ramifications for interpreting the scale of inference in time and space for the biodiversity detected. From community DNA, it is clear that the individual species were found in that time and place. For eDNA, however, the organism that produced the DNA may be upstream of the sampled location, the DNA may have been transported in the faeces of a more mobile predatory species (e.g., birds depositing fish eDNA), or the organism may have been previously present but no longer active in the community, with detection coming from DNA shed years to decades earlier. The scale of inference in both space and time must therefore be considered carefully when inferring the presence of a species in the community based on eDNA.

Metabarcoding stages

Six steps in DNA barcoding and metabarcoding 

There are six stages or steps in DNA barcoding and metabarcoding. The DNA barcoding of animals (and specifically of bats) is used as an example in the diagram at the right and in the discussion immediately below.

First, suitable DNA barcoding regions are chosen to answer a specific research question. The most commonly used DNA barcode region for animals is a segment about 600 base pairs long of the mitochondrial gene cytochrome oxidase I (CO1). This locus provides large sequence variation between species yet a relatively small amount of variation within species. Other commonly used barcode regions for species identification of animals are ribosomal DNA (rDNA) regions such as 16S, 18S and 12S and mitochondrial regions such as cytochrome B. These markers have advantages and disadvantages and are used for different purposes. Longer barcode regions (at least 600 base pairs long) are often needed for accurate species delimitation, especially to differentiate close relatives. Identifying the organism that produced remains such as faeces, hairs and saliva can serve as a proxy measure to verify the absence or presence of a species in an ecosystem. The DNA in these remains is usually of low quality and quantity, so shorter barcodes of around 100 base pairs are used in these cases. Similarly, DNA in dung is often degraded, so short barcodes are needed to identify the prey consumed.
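To illustrate this contrast between within-species and between-species variation, here is a minimal Python sketch on invented data (the species names and the short toy sequences are made up for the example); it computes uncorrected pairwise p-distances and averages them within and between species. In real studies this comparison is made over full-length barcodes and many specimens per species.

```python
from itertools import combinations

# Toy, made-up aligned barcode fragments, keyed by (species, specimen)
sequences = {
    ("Myotis_daubentonii", "specimen1"):        "ATGCTTAGCCTA",
    ("Myotis_daubentonii", "specimen2"):        "ATGCTTAGCTTA",
    ("Pipistrellus_pipistrellus", "specimen1"): "ATACTAAGCGTA",
    ("Pipistrellus_pipistrellus", "specimen2"): "ATACTAAGCGTT",
}

def p_distance(a, b):
    """Uncorrected p-distance: fraction of differing sites in an alignment."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

within, between = [], []
for ((sp1, _), seq1), ((sp2, _), seq2) in combinations(sequences.items(), 2):
    (within if sp1 == sp2 else between).append(p_distance(seq1, seq2))

print("mean within-species distance :", sum(within) / len(within))
print("mean between-species distance:", sum(between) / len(between))
```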

Second, a reference database needs to be built of all DNA barcodes likely to occur in a study. Ideally, these barcodes should be generated from vouchered specimens deposited in a publicly accessible place, such as a natural history museum or another research institute. Building such reference databases is currently being done all over the world. Partner organizations collaborate in international projects such as the International Barcode of Life Project (iBOL) and the Consortium for the Barcode of Life (CBOL), which aim to construct a DNA barcode reference that will be the foundation for DNA-based identification of the world's biota. Well-known barcode repositories are NCBI GenBank and the Barcode of Life Data System (BOLD).
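As a hedged illustration of how a local reference file might be seeded from a public repository, the sketch below uses Biopython's Entrez utilities to download a few CO1 records from NCBI GenBank; the query string, the number of records and the output file name are arbitrary example choices, and NCBI asks for a contact e-mail address.

```python
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

# Search GenBank for a few bat CO1 barcode records (query is illustrative)
handle = Entrez.esearch(db="nucleotide",
                        term="Myotis[Organism] AND COI[Gene] AND 500:700[SLEN]",
                        retmax=5)
ids = Entrez.read(handle)["IdList"]
handle.close()

# Fetch the records and write them to a local FASTA reference file
handle = Entrez.efetch(db="nucleotide", id=ids, rettype="fasta", retmode="text")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()

SeqIO.write(records, "local_co1_reference.fasta", "fasta")
print(f"Saved {len(records)} reference barcodes")
```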

Third, the cells containing the DNA of interest must be broken open to expose their DNA. In this step, DNA is extracted and purified from the substrate under investigation, and several procedures are available for this. Specific techniques must be chosen to isolate DNA from substrates with partly degraded DNA, for example fossil samples, and from samples containing inhibitors, such as blood, faeces and soil. Extractions in which DNA yield or quality is expected to be low should be carried out in an ancient DNA facility, following established protocols to avoid contamination with modern DNA. Experiments should always be performed in duplicate and with positive controls included.

Fourth, amplicons have to be generated from the extracted DNA, either from a single specimen or from complex mixtures, using primers based on the DNA barcode regions selected in the first step. In the case of metabarcoding, labelled nucleotide tags (molecular identifiers, or MID labels) need to be added to keep track of each sample's origin. These labels are needed later in the analyses to trace reads in a bulk data set back to their origin.
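The following is a minimal, hypothetical Python sketch of what those labels make possible downstream: reads from a pooled sequencing run are assigned back to their sample of origin by matching the first bases against a table of MID tags (tag sequences, tag length and sample names are invented for the example).

```python
# Hypothetical MID tags (sample labels) added to each read during library prep
mid_tags = {
    "ACGTAC": "sample_pond_A",
    "TGCATG": "sample_pond_B",
}

reads = [
    "ACGTACGGTTCAATCATAAAGATATTGG",   # begins with the pond A tag
    "TGCATGGGTTCAATCATAAAGATATTGG",   # begins with the pond B tag
    "CCCCCCGGTTCAATCATAAAGATATTGG",   # unknown tag -> set aside
]

demultiplexed = {sample: [] for sample in mid_tags.values()}
unassigned = []

for read in reads:
    tag, insert = read[:6], read[6:]          # tag length fixed at 6 nt here
    sample = mid_tags.get(tag)
    if sample is None:
        unassigned.append(read)
    else:
        demultiplexed[sample].append(insert)  # store the read with tag removed

for sample, inserts in demultiplexed.items():
    print(sample, len(inserts), "reads")
print("unassigned:", len(unassigned))
```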

History of sequencing technology 

Fifth, the appropriate technique should be chosen for DNA sequencing. The classic Sanger chain-termination method relies on the selective incorporation of chain-terminating inhibitors of DNA polymerase during DNA replication. The resulting fragments, each ending in one of the four bases, are separated by size using electrophoresis, and the terminal bases are identified by laser detection. The Sanger method can produce only a single read at a time and is therefore suitable for generating DNA barcodes from substrates that contain only a single species. Emerging technologies such as nanopore sequencing have helped reduce the cost of DNA sequencing from about USD 30,000 per megabase in 2002 to about USD 0.60 in 2016. Modern next-generation sequencing (NGS) technologies can handle thousands to millions of reads in parallel and are therefore suitable for mass identification of a mix of different species present in a substrate, summarized as metabarcoding.

Finally, bioinformatic analyses are carried out to match the DNA barcodes obtained with Barcode Index Numbers (BINs) in reference libraries. Each BIN, or BIN cluster, can be identified to species level when it shows high (>97%) concordance with DNA barcodes linked to a species in a reference library; when taxonomic identification to species level is still lacking, it can be assigned to an operational taxonomic unit (OTU), which refers to a group at a higher taxonomic rank such as genus or family (see binning (metagenomics)). The results of the bioinformatics pipeline must be pruned, for example by filtering out unreliable singletons, superfluous duplicates, low-quality reads and/or chimeric reads. This is generally done by carrying out serial BLAST searches in combination with automatic filtering and trimming scripts. Standardized thresholds are needed to discriminate between different species and between correct and incorrect identifications.
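As a toy illustration of the 97% thresholding idea only, the sketch below greedily clusters short reads into OTUs at 97% identity; real pipelines use dedicated tools (BLAST, VSEARCH or similar) on dereplicated, quality-filtered data rather than this naive comparison.

```python
import random
random.seed(1)

def identity(a, b):
    """Fraction of identical positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_cluster(reads, threshold=0.97):
    """Greedy centroid clustering: a read joins the first OTU whose centroid
    it matches at or above the threshold, otherwise it founds a new OTU."""
    centroids, otus = [], []
    for read in reads:
        for i, centroid in enumerate(centroids):
            if identity(read, centroid) >= threshold:
                otus[i].append(read)
                break
        else:
            centroids.append(read)
            otus.append([read])
    return otus

def mutate(seq, n_subs):
    """Toy data generator: copy seq with n_subs random substitutions."""
    seq = list(seq)
    for pos in random.sample(range(len(seq)), n_subs):
        seq[pos] = random.choice([b for b in "ACGT" if b != seq[pos]])
    return "".join(seq)

template = "ACGT" * 25                               # 100-nt toy read
reads = [template,
         mutate(template, 2), mutate(template, 2),   # ~98% identity -> same OTU
         mutate(template, 10)]                       # ~90% identity -> new OTU

otus = greedy_cluster(reads)
print(len(otus), "OTUs with sizes", [len(o) for o in otus])
```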

Metabarcoding workflow

Despite the obvious power of the approach, eDNA metabarcoding is affected by precision and accuracy challenges distributed throughout the workflow in the field, in the laboratory and at the keyboard. As set out in the diagram at the right, following the initial study design (hypothesis/question, targeted taxonomic group etc) the current eDNA workflow consists of three components: field, laboratory and bioinformatics. The field component consists of sample collection (e.g., water, sediment, air) that is preserved or frozen prior to DNA extraction. The laboratory component has four basic steps: (i) DNA is concentrated (if not performed in the field) and purified, (ii) PCR is used to amplify a target gene or region, (iii) unique nucleotide sequences called “indexes” (also referred to as “barcodes”) are incorporated using PCR or are ligated (bound) onto different PCR products, creating a “library” whereby multiple samples can be pooled together, and (iv) pooled libraries are then sequenced on a high‐throughput machine. The final step after laboratory processing of samples is to computationally process the output files from the sequencer using a robust bioinformatics pipeline.
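One of the first keyboard-side steps in such a pipeline is discarding poor-quality reads. Below is a small, generic Python sketch of one common criterion, expected-error filtering computed from Phred quality strings; the quality strings and the error cutoff are example values, not prescriptions from any particular pipeline.

```python
def expected_errors(quality_string, phred_offset=33):
    """Sum of per-base error probabilities implied by a Phred quality string."""
    return sum(10 ** (-(ord(ch) - phred_offset) / 10) for ch in quality_string)

def passes_filter(quality_string, max_expected_errors=1.0):
    """Keep a read only if its total expected number of errors is small."""
    return expected_errors(quality_string) <= max_expected_errors

# Toy example: a high-quality and a low-quality read
good = "IIIIIIIIIIIIIIIIIIII"   # Phred ~40 at every base
bad  = "####################"  # Phred ~2 at every base

print(expected_errors(good))    # ~0.002 expected errors -> kept
print(expected_errors(bad))     # ~12.6 expected errors  -> discarded
print(passes_filter(good), passes_filter(bad))
```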

Questions for consideration in the design and implementation phases of an environmental DNA metabarcoding study
Decisions involved in a molecular ecology workflow
Samples can be collected from a variety of different environments using appropriate collection techniques. DNA is then prepared and used to answer a variety of ecological questions: metabarcoding is used to answer questions about "who" is present, while the function of communities or individuals can be established using metagenomics, single-cell genomics or metatranscriptomics approaches.

Method and visualisation

Visualization and diversity metrics from environmental sequencing data
a) Alpha diversity displayed as taxonomy bar charts, showing relative abundance of taxa across samples using the Phinch data visualization framework (Bik & Pitch Interactive 2014).
b) Beta diversity patterns illustrated via Principal Coordinate Analyses carried out in QIIME, where each dot represents a sample and colors distinguish different classes of sample. The closer two sample points are in 3D space, the more similar their community assemblages are.
c) GraPhlAn phylogenetic visualization of environmental data, with circular heatmaps and abundance bars used to convey quantitative taxon traits.
d) Edge PCA, a tree‐based diversity metric that identifies specific lineages (green/orange branches) that contribute most to community changes observed in samples distributed across different PCA axes.
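As a rough sketch of how a beta-diversity ordination like the one in panel (b) can be computed, the code below derives Bray-Curtis dissimilarities between toy OTU count profiles with SciPy and performs a classical principal coordinate analysis by eigendecomposition with NumPy; the OTU table is invented, and real analyses would use a dedicated package.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy OTU table: rows are samples, columns are OTU read counts (invented)
otu_table = np.array([
    [120,  30,   0,  5],
    [100,  40,   2,  8],
    [  3,   5, 200, 90],
])

# Bray-Curtis dissimilarity matrix between samples
d = squareform(pdist(otu_table, metric="braycurtis"))

# Classical PCoA: double-centre the squared distances, then eigendecompose
n = d.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (d ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Sample coordinates on the first two principal coordinate axes
coords = eigvecs[:, :2] * np.sqrt(np.maximum(eigvals[:2], 0))
print(coords)
```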

The method requires each collected DNA sequence to be archived with its corresponding "type specimen" (one for each taxon), in addition to the usual collection data. These types are stored in specific institutions (museums, molecular laboratories, universities, zoological gardens, botanical gardens, herbaria, etc.), one for each country; in some cases, the same institution is assigned to hold the types of more than one country, where a nation does not have the technology or financial resources to do so.

In this way, the creation of type specimens of genetic codes represents a methodology parallel to that carried out by traditional taxonomy.

In a first stage, the region of DNA to be used for the barcode was defined. It had to be short and yield a high percentage of unique sequences. For animals, algae and fungi, a portion of the mitochondrial gene coding for subunit 1 of the cytochrome oxidase enzyme (CO1), a region of around 648 base pairs, has provided high percentages (95%) of unique sequences.

In the case of plants, the use of CO1 has not been effective because they have low levels of variability in that region, and because of the difficulties produced by the frequent effects of polyploidy, introgression, and hybridization; the chloroplast genome therefore seems more suitable.

Applications

Pollinator networks

Comparison of pollination networks built from metabarcoding and from visit surveys:
(a,b) plant-pollinator groups
(c,d) plant-pollinator species
(e,f) individual pollinators and plant species (Empis leptempis pandellei)

Apis: Apis mellifera; Bomb.: Bombus sp.; W.bee: wild bees; O.Hym.: other Hymenoptera; O.Dipt.: other Diptera; Emp.: Empididae; Syrph.: Syrphidae; Col.: Coleoptera; Lep.: Lepidoptera; Musc.: Muscidae.
Line thickness indicates the proportion of interactions.

The diagram on the right shows a comparison of pollination networks based on DNA metabarcoding with more traditional networks based on direct observations of insect visits to plants. By detecting numerous additional hidden interactions, metabarcoding data largely alters the properties of the pollination networks compared to visit surveys. Molecular data shows that pollinators are much more generalist than expected from visit surveys. However, pollinator species were composed of relatively specialized individuals and formed functional groups highly specialized upon floral morphs.

As a consequence of ongoing global changes, a dramatic and parallel worldwide decline in pollinators and animal-pollinated plant species has been observed. Understanding the responses of pollination networks to these declines is urgently required to diagnose the risks ecosystems may incur, as well as to design and evaluate the effectiveness of conservation actions. Early studies on animal pollination dealt with simplified systems, i.e. specific pairwise interactions, or involved small subsets of plant-animal communities. However, the impacts of disturbances occur through highly complex interaction networks, and these complex systems are now a major research focus. Assessing the true networks (determined by ecological processes) from field surveys that are subject to sampling effects still presents challenges.

Recent research studies have clearly benefited from network concepts and tools to study the interaction patterns in large species assemblages. They showed that plant-pollinator networks are highly structured, deviating significantly from random associations. Commonly, networks have (1) a low connectance (the realized fraction of all potential links in the community), suggesting a low degree of generalization; (2) a high nestedness, whereby the more-specialist species interact only with proper subsets of the species that the more-generalist species interact with; (3) a cumulative distribution of connectivity (number of links per species, s) that follows a power law or truncated power law function, characterized by a few supergeneralists with more links than expected by chance and many specialists; and (4) a modular organization. A module is a group of plant and pollinator species that exhibits high levels of within-module connectivity and that is poorly connected to species of other groups.
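To make two of these properties concrete, here is a short NumPy sketch on an invented binary plant-pollinator matrix: connectance is the realized fraction of all possible links, and the connectivity s of each species is simply its number of links.

```python
import numpy as np

# Toy binary plant (rows) x pollinator (columns) interaction matrix (invented)
web = np.array([
    [1, 1, 1, 1, 0],   # a generalist plant
    [1, 0, 0, 0, 0],   # a specialist plant
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
])

links = web.sum()
connectance = links / web.size           # realized fraction of possible links
plant_degrees = web.sum(axis=1)          # connectivity s for each plant
pollinator_degrees = web.sum(axis=0)     # connectivity s for each pollinator

print("connectance:", round(float(connectance), 2))
print("plant degrees:", plant_degrees)
print("pollinator degrees:", pollinator_degrees)
```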

The low level of connectivity and the high proportion of specialists in pollination networks contrast with the view that generalization, rather than specialization, is the norm in networks. Indeed, most plant species are visited by a diverse array of pollinators which exploit floral resources from a wide range of plant species. A main cause evoked to explain this apparent contradiction is the incomplete sampling of interactions. Indeed, most network properties are highly sensitive to sampling intensity and network size. Network studies are basically phytocentric, i.e. based on observations of pollinator visits to flowers. This plant-centered approach nevertheless suffers from inherent limitations which may hamper the comprehension of mechanisms contributing to community assembly and biodiversity patterns. First, direct observations of pollinator visits to certain taxa such as orchids are often scarce, and rare interactions are very difficult to detect in the field in general. Pollinator and plant communities are usually composed of a few abundant species and many rare species that are poorly recorded in visit surveys. These rare species appear as specialists, whereas in fact they could be typical generalists. Because of the positive relationship between interaction frequency (f) and connectivity (s), undersampled interactions may lead to overestimating the degree of specialization in networks. Second, network analyses have mostly operated at the species level. Networks have very rarely been upscaled to functional groups or downscaled to individual-based networks, and most such studies have focused on one or two species only. The behavior of either individuals or colonies is commonly ignored, although it may influence the structure of species networks. Species counted as generalists in species networks could therefore conceal cryptic specialized individuals or colonies. Third, flower visitors are by no means always effective pollinators, as they may deposit no conspecific pollen and/or a lot of heterospecific pollen. Animal-centered approaches based on the investigation of pollen loads on visitors and plant stigmas may be more efficient at revealing plant-pollinator interactions.

Disentangling food webs

Arthropod and vertebrate predators in a millet field

(A) Trophic network of arthropod and vertebrate predators; arrows represent biomass flow between predators and prey.
(B) Intraguild interactions among arthropod predators, parasitoids of arthropods, and insectivorous vertebrates.

Metabarcoding offers new opportunities for deciphering trophic linkages between predators and their prey within food webs. Compared to traditional, time-consuming methods, such as microscopic or serological analyses, DNA metabarcoding allows the identification of prey species without prior knowledge of the predator's prey range. In addition, metabarcoding can be used to characterize a large number of species in a single PCR reaction and to analyze several hundred samples simultaneously. Such an approach is increasingly used to explore the functional diversity and structure of food webs in agroecosystems. Like other molecular-based approaches, metabarcoding only gives qualitative results on the presence or absence of prey species in gut or fecal samples. However, this knowledge of the identity of prey consumed by predators of the same species in a given environment provides a "pragmatic and useful surrogate" for truly quantitative information.

In food web ecology, "who eats whom" is a fundamental issue for gaining a better understanding of the complex trophic interactions existing between pests and their natural enemies within a given ecosystem. The dietary analysis of arthropod and vertebrate predators allows the identification of key predators involved in the natural control of arthropod pests and gives insights into the breadth of their diet (generalist vs. specialist) and intraguild predation.

The diagram on the right summarises results from a 2020 study which used metabarcoding to untangle the functional diversity and structure of the food web associated with a couple of millet fields in Senegal. After assigning the identified OTUs to species, 27 arthropod prey taxa were identified from nine arthropod predators. The mean number of prey taxa detected per sample was highest in carabid beetles, ants and spiders, and lowest in the remaining predators, including anthocorid bugs, pentatomid bugs, and earwigs. Across predatory arthropods, a high diversity of arthropod prey was observed in spiders, carabid beetles, ants, and anthocorid bugs. In contrast, the diversity of prey species identified in earwigs and pentatomid bugs was relatively low. Lepidoptera, Hemiptera, Diptera and Coleoptera were the most common insect prey taxa detected from predatory arthropods.

Conserving functional biodiversity and related ecosystem services, especially by controlling pests using their natural enemies, offers new avenues to tackle challenges for the sustainable intensification of food production systems. Predation of crop pests by generalist predators, including arthropods and vertebrates, is a major component of natural pest control. A particularly important trait of most generalist predators is that they can colonize crops early in the season by first feeding on alternative prey. However, the breadth of the "generalist" diet entails some drawbacks for pest control, such as intra-guild predation. A fine-tuned diagnosis of diet breadth in generalist predators, including predation of non-pest prey, is thus needed to better disentangle food webs (e.g., exploitation competition and apparent competition) and ultimately to identify key drivers of natural pest control in agroecosystems. However, the importance of generalist predators in the food web is generally difficult to assess, due to the ephemeral nature of individual predator-prey interactions. The only conclusive evidence of predation comes from direct observation of prey consumption, identification of prey residues within predators' guts, and analyses of regurgitates or feces.

Marine biosecurity

Metabarcoding eDNA and eRNA in marine biosecurity
Global biodiversity of operational taxonomic units (OTUs) for DNA-only, shared eDNA/eRNA, and RNA-only datasets. Charts show the relative abundance of sequences at the highest assigned taxonomic levels.

Tunicate colony of Didemnum vexillum. Species like these survive passage through unfiltered pumping systems.

The spread of non-indigenous species (NIS) represents significant and increasing risks to ecosystems. In marine systems, NIS that survive the transport and adapt to new locations can have significant adverse effects on local biodiversity, including the displacement of native species, and shifts in biological communities and associated food webs. Once NIS are established, they are extremely difficult and costly to eradicate, and further regional spread may occur through natural dispersal or via anthropogenic transport pathways. While vessel hull fouling and ships’ ballast waters are well known as important anthropogenic pathways for the international spread of NIS, comparatively little is known about the potential of regionally transiting vessels to contribute to the secondary spread of marine pests through bilge water translocation.

Recent studies have revealed that the water and associated debris entrained in bilge spaces of small vessels (<20 m) can act as a vector for the spread of NIS at regional scales. Bilge water is defined as any water that is retained on a vessel (other than ballast), and that is not deliberately pumped on board. It can accumulate on or below the vessel’s deck (e.g., under floor panels) through a variety of mechanisms, including wave actions, leaks, via the propeller stern glands, and through the loading of items such as diving, fishing, aquaculture or scientific equipment. Bilge water, therefore, may contain seawater as well as living organisms at various life stages, cell debris and contaminants (e.g., oil, dirt, detergent, etc.), all of which are usually discharged using automatic bilge pumps or are self-drained using duckbill valves. Bilge water pumped from small vessels (manually or automatically) is not usually treated prior to discharge to sea, contrasting with larger vessels that are required to separate oil and water using filtration systems, centrifugation, or carbon absorption. If propagules are viable through this process, the discharge of bilge water may result in the spread of NIS.

In 2017, Fletcher et al. used a combination of laboratory and field experiments to investigate the diversity, abundance, and survival of biological material contained in bilge water samples taken from small coastal vessels. Their laboratory experiment showed that ascidian colonies or fragments, and bryozoan larvae, can survive passage through an unfiltered pumping system largely unharmed. They also conducted the first morpho-molecular assessment (using eDNA metabarcoding) of the biosecurity risk posed by bilge water discharges from 30 small vessels (sailboats and motorboats) of various origins and sailing times. Using eDNA metabarcoding they characterised approximately three times more taxa than via traditional microscopic methods, including the detection of five species recognised as non-indigenous in the study region.

To assist in understanding the risks associated with different NIS introduction vectors, traditional microscope biodiversity assessments are increasingly being complemented by eDNA metabarcoding. This allows a wide range of diverse taxonomic assemblages, across many life stages, to be identified. It can also enable the detection of NIS that may have been overlooked using traditional methods. Despite the great potential of eDNA metabarcoding tools for broad-scale taxonomic screening, a key challenge for eDNA in the context of environmental monitoring of marine pests, and particularly when monitoring enclosed environments such as some bilge spaces or ballast tanks, is differentiating dead and viable organisms. Extracellular DNA can persist in dark/cold environments for extended periods of time (months to years), so many of the organisms detected using eDNA metabarcoding may not have been viable in the location of sample collection for days or weeks. In contrast, ribonucleic acid (RNA) deteriorates rapidly after cell death, likely providing a more accurate representation of viable communities. Recent metabarcoding studies have explored the use of co-extracted eDNA and eRNA molecules for monitoring benthic sediment samples around marine fish farms and oil drilling sites, and have collectively found slightly stronger correlations between biological and physico-chemical variables along impact gradients when using eRNA. From a marine biosecurity perspective, the detection of living NIS may represent a more serious and immediate threat than the detection of NIS based purely on a DNA signal. Environmental RNA may therefore offer a useful method for identifying living organisms in samples.

Miscellaneous

The construction of the genetic barcode library initially focused on fish and birds, which were followed by butterflies and other invertebrates. In the case of birds, the DNA sample is usually obtained from the chest.

Researchers have already developed specific catalogs for large animal groups, such as bees, birds, mammals and fish. Another use is to analyze the complete zoocenosis of a given geographic area, such as the "Polar Life Bar Code" project, which aims to collect the genetic traits of all organisms that live in the polar regions at both poles of the Earth. A related effort is the barcoding of all the ichthyofauna of a hydrographic basin, for example the project begun on the Rio São Francisco in northeastern Brazil.

The potential uses of barcodes are very wide: the discovery of numerous cryptic species (which has already yielded numerous positive results), the identification of species at any stage of their life, and the secure identification of protected species that are being illegally trafficked, among others.

Potentials and shortcomings

A region of the gene for the cytochrome c oxidase enzyme is used to distinguish species in the Barcode of Life Data Systems database.

Potentials

DNA barcoding has been proposed as a way to distinguish species suitable even for non-specialists to use.

Shortcomings

In general, the shortcomings of DNA barcoding are also valid for metabarcoding. One particular drawback for metabarcoding studies is that there is no consensus yet regarding the optimal experimental design and bioinformatics criteria to be applied in eDNA metabarcoding. However, there are ongoing joint efforts, such as the EU COST network DNAqua-Net, to move forward by exchanging experience and knowledge in order to establish best-practice standards for biomonitoring.

The so-called barcode is a region of mitochondrial DNA within the gene for cytochrome c oxidase. A database, Barcode of Life Data Systems (BOLD), contains DNA barcode sequences from over 190,000 species. However, scientists such as Rob DeSalle have expressed concern that classical taxonomy and DNA barcoding, which they consider a misnomer, need to be reconciled, as they delimit species differently. Genetic introgression mediated by endosymbionts and other vectors can further make barcodes ineffective in the identification of species.

Status of barcode species

In microbiology, genes can move freely even between distantly related bacteria, possibly extending to the whole bacterial domain. As a rule of thumb, microbiologists have assumed that kinds of Bacteria or Archaea with 16S ribosomal RNA gene sequences more similar than 97% to each other need to be checked by DNA-DNA hybridisation to decide if they belong to the same species or not. This concept was narrowed in 2006 to a similarity of 98.7%.
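As a toy illustration of how such a similarity threshold is applied, the snippet below computes percent identity between two short aligned 16S fragments (the sequences are illustrative only) and compares it with the 98.7% cut-off above which further testing would be recommended.

```python
def percent_identity(a, b):
    """Identity between two equal-length aligned sequences, ignoring gap sites."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100 * sum(x == y for x, y in pairs) / len(pairs)

THRESHOLD = 98.7  # % similarity above which further testing is recommended

# Invented aligned 16S fragments for the example
seq_a = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCCTAATACATGCAAGTCGAA"
seq_b = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCCTAATACATGCAAGTCGAG"

pid = percent_identity(seq_a, seq_b)
print(f"{pid:.1f}% identical ->",
      "same-species candidate, check further" if pid >= THRESHOLD
      else "likely different species")
```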

DNA-DNA hybridisation is outdated, and results have sometimes led to misleading conclusions about species, as with the pomarine and great skua. Modern approaches compare sequence similarity using computational methods.

Sunday, March 20, 2022

Digitization

From Wikipedia, the free encyclopedia

Digitization is the process of converting information into a digital (i.e. computer-readable) format. The result is the representation of an object, image, sound, document or signal (usually an analog signal) obtained by generating a series of numbers that describe a discrete set of points or samples. The result is called a digital representation or, more specifically, a digital image for the object and digital form for the signal. In modern practice, the digitized data is in the form of binary numbers, which facilitates processing by digital computers and other operations, but digitizing simply means the conversion of analog source material into a numerical format; the decimal or any other number system could be used instead.

Digitization is of crucial importance to data processing, storage and transmission, because it "allows information of all kinds in all formats to be carried with the same efficiency and also intermingled". Though analog data is typically more stable, digital data has the potential to be more easily shared and accessed and, in theory, can be propagated indefinitely without generation loss, provided it is migrated to new, stable formats as needed. This potential has led to institutional digitization projects designed to improve access and to the rapid growth of the digital preservation field.

Digitization and digital preservation are sometimes mistaken for the same thing; they are different, but digitization is often a vital first step in digital preservation. Libraries, archives, museums and other memory institutions digitize items to preserve fragile materials and create more access points for patrons. Doing this creates challenges for information professionals, and solutions can be as varied as the institutions that implement them. Some analog materials, such as audio and video tapes, are nearing the end of their life cycle, and it is important to digitize them before equipment obsolescence and media deterioration make the data irretrievable.

There are challenges and implications surrounding digitization including time, cost, cultural history concerns and creating an equitable platform for historically marginalized voices. Many digitizing institutions develop their own solutions to these challenges.

Mass digitization projects have had mixed results over the years, but some institutions have had success even if not in the traditional Google Books model.

Technological changes can happen often and quickly, so digitization standards are difficult to keep updated. Professionals in the field can attend conferences and join organizations and working groups to keep their knowledge current and add to the conversation.

Process

The term digitization is often used when diverse forms of information, such as an object, text, sound, image or voice, are converted into a single binary code. The core of the process is the matching of the capturing device and the playback device so that the rendered result represents the original source with as much fidelity as possible; the advantage of digitization is the speed and accuracy with which this form of information can be transmitted, with no degradation compared with analog information.

Digital information is expressed as one of two digits, either 0 or 1, known as bits (a contraction of binary digits); groups of bits, conventionally eight, are called bytes.

Analog signals are continuously variable, both in the number of possible values of the signal at a given time, as well as in the number of points in the signal in a given period of time. However, digital signals are discrete in both of those respects – generally a finite sequence of integers – therefore a digitization can, in practical terms, only ever be an approximation of the signal it represents.

Digitization occurs in two parts:

Discretization
The analog signal is read at regular time intervals (the sampling frequency), and the value of the signal is captured at each point. Each such reading is called a sample and may be considered to have infinite precision at this stage;
Quantization
Each sample is rounded to the nearest value in a fixed set of permitted numbers (such as integers).

In general, these can occur at the same time, though they are conceptually distinct.

A series of digital integers can be transformed into an analog output that approximates the original analog signal. Such a transformation is called a digital-to-analog (DA) conversion. The sampling rate and the number of bits used to represent the integers together determine how closely the digitization approximates the original analog signal.
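A minimal NumPy sketch of these steps; the signal, sample rate and bit depth are arbitrary example values chosen only to show how the two parameters bound the reconstruction error.

```python
import numpy as np

sample_rate = 1000          # samples per second (example value)
bit_depth = 8               # bits per sample -> 256 quantization levels
duration = 0.01             # seconds of signal

# Discretization: read the analog signal (here a 50 Hz sine) at regular intervals
t = np.arange(0, duration, 1 / sample_rate)
analog = np.sin(2 * np.pi * 50 * t)

# Quantization: round each sample to one of 2**bit_depth integer levels
levels = 2 ** bit_depth
digital = np.round((analog + 1) / 2 * (levels - 1)).astype(int)

# "DA conversion": map the integers back to an approximate analog signal
reconstructed = digital / (levels - 1) * 2 - 1

print("max quantization error:", np.max(np.abs(analog - reconstructed)))
```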

Examples

Digitization of the first issue of the Estonian popular science magazine Horisont, published in January 1967.

The term is used to describe, for example, the scanning of analog sources (such as printed photos or taped videos) into computers for editing, 3D scanning that creates 3D modeling of an object's surface, and audio (where sampling rate is often measured in kilohertz) and texture map transformations. In this last case, as in normal photos, the sampling rate refers to the resolution of the image, often measured in pixels per inch.

Digitizing is the primary way of storing images in a form suitable for transmission and computer processing, whether scanned from two-dimensional analog originals or captured using an image sensor-equipped device such as a digital camera, tomographical instrument such as a CAT scanner, or acquiring precise dimensions from a real-world object, such as a car, using a 3D scanning device.

Digitizing is central to making digital representations of geographical features, using raster or vector images, in a geographic information system, i.e., the creation of electronic maps, either from various geographical and satellite imaging (raster) or by digitizing traditional paper maps or graphs (vector).

"Digitization" is also used to describe the process of populating databases with files or data. While this usage is technically inaccurate, it originates with the previously proper use of the term to describe that part of the process involving digitization of analog sources, such as printed pictures and brochures, before uploading to target databases.

Digitizing may also be used in the field of apparel, where an image may be recreated with the help of embroidery digitizing software tools and saved as embroidery machine code. This machine code is fed into an embroidery machine and applied to the fabric. The most widely supported format is the DST file. Apparel companies also digitize clothing patterns.

History

  • 1957 Russell Kirsch used a rotating drum scanner and photomultiplier connected to the Standards Eastern Automatic Computer (SEAC) to create the first digital image (176x176 pixels) from a photograph of his infant son. The image was stored in SEAC memory via a staticizer and viewed on a cathode ray oscilloscope.
  • 1971 Invention of the charge-coupled device (CCD), which made conversion from analog data to a digital format easier.
  • 1986 Work started on the JPEG format.
  • 1990s Libraries began scanning collections to provide access via the world wide web.

Analog signals to digital

Analog signals are continuous electrical signals; digital signals are non-continuous. Analog signals can be converted to digital signals by using an analog-to-digital converter.

The process of converting analog to digital consists of two parts: sampling and quantizing. Sampling measures the wave's amplitude at regular intervals along the time axis and assigns each measurement a numerical value, while quantizing rounds measurements that fall between the available discrete values up or down.

Nearly all recorded music has been digitized, and about 12 percent of the 500,000+ movies listed on the Internet Movie Database are digitized and were released on DVD.

Digitization of home movies, slides, and photographs is a popular method of preserving and sharing personal multimedia. Slides and photographs may be scanned quickly using an image scanner, but analog video requires a video tape player to be connected to a computer while the item plays in real time. Slides can be digitized more quickly with a slide scanner such as the Nikon Coolscan 5000ED.

Another example of digitization is the VisualAudio process developed by the Swiss Fonoteca Nazionale in Lugano: by scanning a high-resolution photograph of a record, they are able to extract and reconstruct the sound from the processed image.

Digitization of analog tapes before they degrade, or after damage has already occurred, can rescue the only copies of local and traditional cultural music for future generations to study and enjoy.

Analog texts to digital

Image of a rare book in a book scanner where it will be digitized.
Book scanner in the digitization lab at the University of Liège, Belgium.

Academic and public libraries, foundations, and private companies like Google are scanning older print books and applying optical character recognition (OCR) technologies so they can be keyword searched, but as of 2006, only about 1 in 20 texts had been digitized. Librarians and archivists are working to increase this statistic and in 2019 began digitizing 480,000 books published between 1923 and 1964 that had entered the public domain.
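As a minimal sketch of the OCR step itself, assuming the Tesseract engine together with the pytesseract and Pillow packages is installed (the file name is a placeholder):

```python
from PIL import Image
import pytesseract

# Placeholder path to a scanned page image
scan = Image.open("scanned_page_042.tif")

# Run optical character recognition on the scan
text = pytesseract.image_to_string(scan)

# The extracted text can now be indexed for keyword searching
print(text[:200])
```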

Unpublished manuscripts and other rare papers and documents housed in special collections are being digitized by libraries and archives, but backlogs often slow this process and keep materials with enduring historical and research value hidden from most users (see digital libraries). Digitization has not completely replaced other archival imaging options, such as microfilming which is still used by institutions such as the National Archives and Records Administration (NARA) to provide preservation and access to these resources.

While digital versions of analog texts can potentially be accessed from anywhere in the world, they are not as stable as most print materials or manuscripts and are unlikely to be accessible decades from now without further preservation efforts, while many books, manuscripts and scrolls have already survived for centuries. However, for some materials that have been damaged by water, insects, or catastrophes, digitization might be the only option for continued use.

Library preservation

In the context of libraries, archives, and museums, digitization is a means of creating digital surrogates of analog materials such as books, newspapers, microfilm and videotapes. It offers a variety of benefits, including increasing access, especially for patrons at a distance; contributing to collection development through collaborative initiatives; enhancing the potential for research and education; and supporting preservation activities. Digitization can provide a means of preserving the content of the materials by creating an accessible facsimile of the object in order to put less strain on already fragile originals. For sounds, digitization of legacy analog recordings is essential insurance against technological obsolescence. A fundamental aspect of planning digitization projects is to ensure that the digital files themselves are preserved and remain accessible; the term "digital preservation," in its most basic sense, refers to an array of activities undertaken to maintain access to digital materials over time.

The prevalent Brittle Books issue facing libraries across the world is being addressed with a digital solution for long-term book preservation. Since the mid-1800s, books have been printed on wood-pulp paper, which turns acidic as it decays. Deterioration may advance to the point where a book is completely unusable. In theory, if these widely circulated titles are not treated with de-acidification processes, the material on those acidic pages will be lost. As digital technology evolves, it is increasingly preferred as a method of preserving these materials, mainly because it can provide easier access points and significantly reduce the need for physical storage space.

Cambridge University Library is working on the Cambridge Digital Library, which will initially contain digitised versions of many of its most important works relating to science and religion. These include examples such as Isaac Newton's personally annotated first edition of his Philosophiæ Naturalis Principia Mathematica as well as college notebooks and other papers, and some Islamic manuscripts such as a Quran from Tipu Sahib's library.

Google, Inc. has taken steps towards attempting to digitize every title with "Google Book Search". While some academic libraries have partnered with the service, issues of copyright law violations threaten to derail the project. However, it does provide, at the very least, an online consortium for libraries to exchange information and for researchers to search for titles as well as review the materials.

Digitization versus digital preservation

Digitizing something is not the same as digitally preserving it. To digitize something is to create a digital surrogate (copy or format) of an existing analog item (book, photograph, or record); this is often described as converting it from analog to digital, although both copies remain. An example would be scanning a photograph, keeping the original in a photo album, and saving a digital copy to a computer. Digitization is essentially the first step in digital preservation, which is to maintain the digital copy over a long period of time and make sure it remains authentic and accessible.

Digitization is done once with the technology currently available, while digital preservation is more complicated because technology changes so quickly that a once popular storage format may become obsolete before it breaks. An example is the 5 1/4" floppy disk: computers are no longer made with drives for it, and obtaining the hardware to convert a file stored on a 5 1/4" floppy disc can be expensive. To combat this risk, equipment must be upgraded as newer technology becomes affordable (about every 2 to 5 years), but before older technology becomes unobtainable (about 5 to 10 years).

Digital preservation can also apply to born-digital material, such as a Microsoft Word document or a social media post. In contrast, digitization applies exclusively to analog materials. Born-digital materials present a unique challenge to digital preservation, not only due to technological obsolescence but also because of the inherently unstable nature of digital storage and maintenance. Most websites last between 2.5 and 5 years, depending on the purpose for which they were designed.

The Library of Congress provides numerous resources and tips for individuals looking to practice digitization and digital preservation for their personal collections.

Digital reformatting

Digital reformatting is the process of converting analog materials into a digital format as a surrogate of the original. The digital surrogates perform a preservation function by reducing or eliminating the use of the original. Digital reformatting is guided by established best practices to ensure that materials are being converted at the highest quality.

Digital reformatting at the Library of Congress

The Library of Congress has been actively reformatting materials for its American Memory project and developed best standards and practices pertaining to book handling during the digitization process, scanning resolutions, and preferred file formats. Some of these standards are:

  • The use of ISO 16067-1 and ISO 16067-2 standards for resolution requirements.
  • Recommended 400 ppi resolution for OCR'ed printed text.
  • The use of 24-bit color when color is an important attribute of a document.
  • The use of the scanning device's maximum resolution for digitally reproducing photographs
  • TIFF as the standard file format.
  • Attachment of descriptive, structural, and technical metadata to all digitized documents.

A list of archival standards for digital preservation can be found on the ARL website.
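As an illustration only, here is a small Pillow sketch that checks a digitized master image against specifications like those above; the file name and the idea of scripting the check are the example's own, not a Library of Congress procedure.

```python
from PIL import Image

REQUIRED_FORMAT = "TIFF"
MIN_PPI = 400                     # e.g., the recommendation for OCR'ed text
COLOR_MODE = "RGB"                # 24-bit colour

img = Image.open("master_scan_0001.tif")   # placeholder file name
dpi = img.info.get("dpi", (0, 0))          # resolution recorded in the file

problems = []
if img.format != REQUIRED_FORMAT:
    problems.append(f"format is {img.format}, expected {REQUIRED_FORMAT}")
if min(dpi) < MIN_PPI:
    problems.append(f"resolution is {dpi} ppi, expected at least {MIN_PPI}")
if img.mode != COLOR_MODE:
    problems.append(f"mode is {img.mode}, expected {COLOR_MODE} (24-bit colour)")

print("OK" if not problems else "; ".join(problems))
```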

The Library of Congress has constituted a Preservation Digital Reformatting Program. The three main components of the program include:

  • Selection Criteria for digital reformatting
  • Digital reformatting principles and specifications
  • Life cycle management of LC digital data

Audio digitization and reformatting

Audio media offers a rich source of historic ethnographic information, with the earliest forms of recorded sound dating back to 1890. According to the International Association of Sound and Audiovisual Archives (IASA), these sources of audio data, as well as the aging technologies used to play them back, are in imminent danger of permanent loss due to degradation and obsolescence. These primary sources are called “carriers” and exist in a variety of formats, including wax cylinders, magnetic tape, and flat discs of grooved media, among others. Some formats are susceptible to more severe, or quicker, degradation than others. For instance, lacquer discs suffer from delamination. Analog tape may deteriorate due to sticky shed syndrome.

1/4" analog tape being played back on a Studer A810 tape machine for digitization at Smithsonian Folkways Recordings.

Archival workflow and file standardization have been developed to minimize loss of information from the original carrier to the resulting digital file as digitization is underway. For most at-risk formats (magnetic tape, grooved cylinders, etc.), a similar workflow can be observed. Examination of the source carrier will help determine what, if any, steps need to be taken to repair material prior to transfer. A similar inspection must be undertaken for the playback machines. If satisfactory conditions are met for both carrier and playback machine, the transfer can take place, moderated by an analog-to-digital converter. The digital signal is then represented visually for the transfer engineer by a digital audio workstation, like Audacity, WaveLab, or Pro Tools. Reference access copies can be made at smaller sample rates. For archival purposes, it is standard to transfer at a sample rate of 96 kHz and a bit depth of 24 bits per channel.
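As a back-of-the-envelope illustration of what that archival standard implies for storage, here is a small calculation; the stereo channel count and the one-hour duration are assumptions made for the example.

```python
sample_rate = 96_000        # samples per second per channel (archival standard)
bit_depth = 24              # bits per sample
channels = 2                # stereo assumed for the example
duration_s = 60 * 60        # one hour of tape, assumed for the example

bytes_total = sample_rate * (bit_depth // 8) * channels * duration_s
print(f"{bytes_total / 1e9:.2f} GB per hour of uncompressed audio")
# 96,000 samples * 3 bytes * 2 channels = 576,000 bytes/s -> about 2.07 GB/hour
```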

Challenges

Many libraries, archives, museums, and other memory institutions struggle with catching up and staying current regarding digitization, and with the expectation that everything should already be online. The time spent planning, doing the work, and processing the digital files, along with the expense and fragility of some materials, are some of the most common challenges.

Time spent

Digitization is a time-consuming process, even more so when the condition or format of the analog resources requires special handling. Deciding what part of a collection to digitize can sometimes take longer than digitizing it in its entirety. Each digitization project is unique and workflows for one will be different from every other project that goes through the process, so time must be spent thoroughly studying and planning each one to create the best plan for the materials and the intended audience.

Expense

The costs of equipment, staff time, metadata creation, and digital storage media make large-scale digitization of collections expensive for all types of cultural institutions.

Ideally, all institutions want their digital copies to have the best image quality so a high-quality copy can be maintained over time. However, smaller institutions may not be able to afford such equipment or staffing, which limits how much material can be digitized, so archivists and librarians must know what their patrons need and prioritize digitization of those items. Often the cost of the time and expertise involved in describing materials and adding metadata exceeds that of the digitization process itself.

Fragility of materials

Some materials, such as brittle books, are so fragile that undergoing the process of digitization could damage them irreparably. Despite the potential for damage, one reason for digitizing fragile materials is that they are so heavily used that creating a digital surrogate will help preserve the original copy long past its expected lifetime and increase access to the item.

Copyright

Copyright is not only a problem faced by projects like Google Books, but also by institutions that may need to contact private citizens or institutions mentioned in archival documents for permission to scan the items for digital collections. It can be time-consuming to make sure all potential copyright holders have given permission, and if copyright cannot be determined or cleared, it may be necessary to restrict even digital materials to in-library use.

Solutions

Institutions can make digitization more cost-effective by planning before a project begins, including outlining what they hope to accomplish and the minimum amount of equipment, time, and effort that can meet those goals. If a budget needs more money to cover the cost of equipment or staff, an institution might investigate if grants are available.

Collaboration

Collaborations between institutions have the potential to save money on equipment, staff, and training as individual members share their equipment, manpower, and skills rather than pay outside organizations to provide these services. Collaborations with donors can build long-term support of current and future digitization projects.

Outsourcing

Outsourcing can be an option if an institution does not want to invest in equipment but since most vendors require an inventory and basic metadata for materials, this is not an option for institutions hoping to digitize without processing.

Non-traditional staffing

Many institutions have the option of using volunteers, student employees, or temporary employees on projects. While this saves on staffing costs, it can add costs elsewhere such as on training or having to re-scan items due to poor quality.

MPLP

One way to save time and resources is by using the More Product, Less Process (MPLP) method to digitize materials while they are being processed. Since GLAM (Galleries, Libraries, Archives, and Museums) institutions are already committed to preserving analog materials from special collections, digital access copies do not need to be high-resolution preservation copies, just good enough to provide access to rare materials. Sometimes institutions can get by with 300 dpi JPGs rather than a 600 dpi TIFF for images, and a 300 dpi grayscale scan of a document rather than a color one at 600 dpi.
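A minimal Pillow sketch of deriving such an access copy from a preservation master; the file names, the halving of the linear resolution and the JPEG quality setting are example choices, not a prescribed workflow.

```python
from PIL import Image

# Open the high-resolution preservation master (placeholder file name)
master = Image.open("master_600dpi.tif")

# Derive a smaller greyscale access copy at roughly half the linear resolution
access = master.convert("L").resize(
    (master.width // 2, master.height // 2), Image.LANCZOS)

# Save as a 300 dpi JPEG access copy
access.save("access_300dpi.jpg", "JPEG", dpi=(300, 300), quality=85)
```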

Digitizing marginalized voices

Digitization can be used to highlight voices of historically marginalized peoples and add them to the greater body of knowledge. Many projects, some community archives created by members of those groups, are doing this in a way that supports the people, values their input and collaboration, and gives them a sense of ownership of the collection. Examples of projects are Gi-gikinomaage-min and the South Asian American Digital Archive (SAADA).

Gi-gikinomaage-min

Gi-gikinomaage-min is Anishinaabemowin for "We are all teachers," and the project's main purpose is "to document the history of Native Americans in Grand Rapids, Michigan." It combines new audio and video oral histories with digitized flyers, posters, and newsletters from Grand Valley State University's analog collections. Although not all of the material was newly digitized, item-level metadata was added to enhance context. From the start, collaboration between several university departments and the Native American population was deemed important, and it remained strong throughout the project.

SAADA

The South Asian American Digital Archive (SAADA) has no physical building; it is entirely digital, and everything is handled by volunteers. The archive was started by Michelle Caswell and Samip Mallick and collects a broad variety of materials "created by or about people residing in the United States who trace their heritage to Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan, Sri Lanka, and the many South Asian diaspora communities across the globe" (Caswell, 2015, 2). The collection of digitized items includes materials held privately, by governments, and by universities.

Black Campus Movement Collection (BCM)

Kent State University began its BCM collection when it acquired the papers of African American alumnus Lafayette Tolliver, which included about 1,000 photographs chronicling the black student experience at Kent State from 1968 to 1971. The collection continues to add materials ranging from the 1960s up to the current student body, and several oral histories have been added since it debuted. When digitizing the items, it was necessary to work with alumni to create descriptions for the images. This collaboration led to changes in the local controlled vocabularies the libraries use to create metadata for the images.

Mass digitization

The expectation that everything should be online has led to mass digitization practices, but it remains an ongoing process with obstacles that have prompted alternatives. As new technology makes automated scanning safer for materials and decreases the need for cropping and de-skewing, mass digitization should be able to increase.

Obstacles

Digitization can be a physically slow process involving selection and preparation of collections, which can take years if materials need to be compared for completeness or are vulnerable to damage. The price of specialized equipment, storage costs, website maintenance, quality control, and retrieval-system limitations all add to the problems of working on a large scale.

Successes

Digitization on demand

Scanning materials as users ask for them provides copies for others to use and cuts down on repeated copying of popular items. If one part of a folder, document, or book is requested, scanning the entire object can save time in the future, since the material is already available if someone else needs it. Digitizing on demand can also increase volume, because time otherwise spent on selection and preparation is spent on scanning instead.

Google Books

From the start, Google has concentrated on text rather than images or special collections. Although criticized in the past for poor image quality, selection practices, and a lack of long-term preservation plans, its focus on quantity over quality has enabled Google to digitize more books than other digitizers.

Standards

Digitization is not a static field and standards change with new technology, so it is up to digitization managers to stay current with new developments. Although each digitization project is different, common standards in formats, metadata, quality, naming, and file storage should be used to give the best chance of interoperability and patron access. As digitization is often the first step in digital preservation, questions about how to handle digital files should be addressed in institutional standards.

Resources to create local standards are available from the Society of American Archivists, the Smithsonian, and the Northeast Document Conservation Center.
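As a minimal sketch of what a local standard might specify, the example below pairs a hypothetical file-naming pattern with a Dublin Core-style descriptive record; the field names, collection code, and naming convention are illustrative assumptions, not a prescribed standard.

# Hypothetical example of a local digitization standard expressed in Python.
record = {
    "identifier": "coll042_0001",              # collection code + item sequence
    "title": "Student protest flyer",
    "date": "1970",
    "format": "image/tiff",                    # preservation master format
    "rights": "Copyright undetermined; in-library use only",
}

# A derived access copy shares the same base name so master and access files stay linked.
master_file = record["identifier"] + "_master.tif"   # e.g. 600 dpi color TIFF
access_file = record["identifier"] + "_access.jpg"   # e.g. 300 dpi JPEG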

Implications

Cultural Heritage Concerns

Digitization of community archives by indigenous and other marginalized people has led traditional memory institutions to reassess how they digitize and handle objects in their collections that may have ties to these groups. The topics they are rethinking are varied and include how items are chosen for digitization projects, what metadata to use to convey proper context and make items retrievable by the groups they represent, and whether an item should be accessible to the world or only to those whom the groups originally intended to have access, such as elders. Many institutions navigate these concerns by collaborating with the communities they seek to represent through their digitized collections.

Lean philosophy

The broad use of the internet and the increasing popularity of lean philosophy have also expanded the use and meaning of "digitizing" to describe improvements in the efficiency of organizational processes. Lean philosophy refers to the approach that considers any use of time and resources that does not lead directly to creating a product as waste, and therefore a target for elimination. This will often involve some kind of lean process to simplify process activities, with the aim of implementing new "lean and mean" processes by digitizing data and activities. Digitization can help eliminate wasted time by introducing wider access to data or by implementing enterprise resource planning systems.

Fiction

Works of science fiction often use the term digitize for the act of transforming people into digital signals and sending them into digital technology. When that happens, the people disappear from the real world and appear in a virtual world (as featured in the cult film Tron, the animated series Code: Lyoko, and the late-1980s live-action series Captain Power and the Soldiers of the Future). In the video game Beyond Good & Evil, the protagonist's holographic friend digitizes the player's inventory items. One Super Friends cartoon episode showed Wonder Woman and Jayna freeing the world's men (including the male superheroes), who had been digitized onto computer tape by the villainess Medula.

Endogenous retrovirus

From Wikipedia, the free encyclopedia

Dendrogram of various classes of endogenous retroviruses

Endogenous retroviruses (ERVs) are endogenous viral elements in the genome that closely resemble and can be derived from retroviruses. They are abundant in the genomes of jawed vertebrates, and they comprise up to 5–8% of the human genome (lower estimates of ~1%).

ERVs are vertically inherited proviral sequences and a subclass of a type of gene called a transposon, which can normally be packaged and moved within the genome to serve a vital role in gene expression and regulation. ERVs, however, lack most transposon functions, are typically not infectious, and are often defective genomic remnants of the retroviral replication cycle. They are distinguished as germline proviral retroelements due to their integration and reverse transcription into the nuclear genome of the host cell.

Researchers have suggested that retroviruses evolved from a type of transposon called a retrotransposon, a Class I element; these genes can mutate and, instead of moving to another location in the genome, become exogenous or pathogenic. This means that not all ERVs may have originated as an insertion by a retrovirus, but that some may have been the source for the genetic information in the retroviruses they resemble. When integration of viral DNA occurs in the germ line, it can give rise to an ERV, which can later become fixed in the gene pool of the host population.

Formation

The replication cycle of a retrovirus entails the insertion ("integration") of a DNA copy of the viral genome into the nuclear genome of the host cell. Most retroviruses infect somatic cells, but occasional infection of germline cells (cells that produce eggs and sperm) can also occur. Rarely, retroviral integration may occur in a germline cell that goes on to develop into a viable organism. This organism will carry the inserted retroviral genome as an integral part of its own genome—an "endogenous" retrovirus (ERV) that may be inherited by its offspring as a novel allele. Many ERVs have persisted in the genome of their hosts for millions of years. However, most of these have acquired inactivating mutations during host DNA replication and are no longer capable of producing the virus. ERVs can also be partially excised from the genome by a process known as recombinational deletion, in which recombination between the identical sequences that flank newly integrated retroviruses results in deletion of the internal, protein-coding regions of the viral genome.

The general retrovirus genome consists of three genes vital for the invasion, replication, escape, and spreading of the viral genome: gag (encoding structural proteins for the viral core), pol (encoding reverse transcriptase, integrase, and protease), and env (encoding coat proteins for the virus's exterior). These viral proteins are encoded as polyproteins. To carry out its life cycle, the retrovirus relies heavily on the host cell's machinery. Protease cleaves the peptide bonds of the viral polyproteins, making the separate proteins functional. Reverse transcriptase synthesizes viral DNA from the viral RNA in the host cell's cytoplasm before it enters the nucleus. Integrase guides the integration of viral DNA into the host genome.

Over time, the genomes of ERVs not only acquire point mutations but also shuffle and recombine with other ERVs. ERVs with a decayed env sequence become more likely to propagate.

Role in genomic evolution

Diagram displaying the integration of viral DNA into a host genome

Endogenous retroviruses can play an active role in shaping genomes. Most studies in this area have focused on the genomes of humans and higher primates, but other vertebrates, such as mice and sheep, have also been studied in depth. The long terminal repeat (LTR) sequences that flank ERV genomes frequently act as alternate promoters and enhancers, often contributing to the transcriptome by producing tissue-specific variants. In addition, the retroviral proteins themselves have been co-opted to serve novel host functions, particularly in reproduction and development. Recombination between homologous retroviral sequences has also contributed to gene shuffling and the generation of genetic variation. Furthermore, in the instance of potentially antagonistic effects of retroviral sequences, repressor genes have co-evolved to combat them.

About 90% of endogenous retroviruses are solo LTRs, lacking all open reading frames (ORFs). Solo LTRs and LTRs associated with complete retroviral sequences have been shown to act as transcriptional elements on host genes. Their range of action is mainly by insertion into the 5' UTRs of protein coding genes; however, they have been known to act upon genes up to 70–100 kb away. The majority of these elements are inserted in the sense direction to their corresponding genes, but there has been evidence of LTRs acting in the antisense direction and as a bidirectional promoter for neighboring genes. In a few cases, the LTR functions as the major promoter for the gene.

For example, in humans AMY1C has a complete ERV sequence in its promoter region; the associated LTR confers salivary specific expression of the digestive enzyme amylase. Also, the primary promoter for bile acid-CoA:amino acid N-acyltransferase (BAAT), which codes for an enzyme that is integral in bile metabolism, is of LTR origin.

The insertion of a solo ERV-9 LTR may have produced a functional open reading frame, causing the rebirth of the human immunity related GTPase gene (IRGM). ERV insertions have also been shown to generate alternative splice sites either by direct integration into the gene, as with the human leptin hormone receptor, or driven by the expression of an upstream LTR, as with the phospholipase A-2 like protein.

Most of the time, however, the LTR functions as one of many alternate promoters, often conferring tissue-specific expression related to reproduction and development. In fact, 64% of known LTR-promoted transcription variants are expressed in reproductive tissues. For example, the gene CYP19 codes for aromatase P450, an important enzyme for estrogen synthesis, that is normally expressed in the brain and reproductive organs of most mammals. However, in primates, an LTR-promoted transcriptional variant confers expression to the placenta and is responsible for controlling estrogen levels during pregnancy. Furthermore, the neuronal apoptosis inhibitory protein (NAIP), normally widespread, has an LTR of the HERV-P family acting as a promoter that confers expression to the testis and prostate. Other proteins, such as nitric oxide synthase 3 (NOS3), interleukin-2 receptor B (IL2RB), and another mediator of estrogen synthesis, HSD17B1, are also alternatively regulated by LTRs that confer placental expression, but their specific functions are not yet known. The high degree of reproductive expression is thought to be an after effect of the method by which they were endogenized; however, this also may be due to a lack of DNA methylation in germ-line tissues.

The best-characterized instance of placental protein expression comes not from an alternatively promoted host gene but from a complete co-option of a retroviral protein. Retroviral fusogenic env proteins, which play a role in the entry of the virion into the host cell, have had an important impact on the development of the mammalian placenta. In mammals, intact env proteins called syncytins are responsible for the formation and function of syncytiotrophoblasts. These multinucleated cells are mainly responsible for maintaining nutrient exchange and separating the fetus from the mother's immune system. It has been suggested that the selection and fixation of these proteins for this function have played a critical role in the evolution of viviparity.

In addition, the insertion of ERVs and their respective LTRs has the potential to induce chromosomal rearrangement due to recombination between viral sequences at inter-chromosomal loci. These rearrangements have been shown to induce gene duplications and deletions that largely contribute to genome plasticity and dramatically change the dynamics of gene function. Furthermore, retroelements in general are largely prevalent in rapidly evolving, mammal-specific gene families whose function is largely related to the response to stress and external stimuli. In particular, both human class I and class II MHC genes have a high density of HERV elements compared to other multi-locus gene families. It has been shown that HERVs have contributed to the formation of extensively duplicated duplicon blocks that make up the HLA class I family of genes. More specifically, HERVs primarily occupy regions within and between the break points between these blocks, suggesting that considerable duplication and deletion events, typically associated with unequal crossover, facilitated their formation. These blocks, inherited as immunohaplotypes, act as a protective polymorphism against a wide range of antigens that may have imbued humans with an advantage over other primates.

The observation that placentas are evolutionarily very distinct organs between different species has been suggested to result from the co-option of ERV enhancers. Regulatory mutations, rather than mutations in genes that encode hormones and growth factors, support the known evolution of placental morphology, especially since the majority of hormone and growth factor genes are expressed in response to pregnancy, not during placental development. Researchers studied the regulatory landscape of placental development in the rat and mouse, two closely related species, by mapping all regulatory elements of rat trophoblast stem cells (TSCs) and comparing them to their orthologs in mouse TSCs. TSCs were observed because they reflect the initial cells that develop in the fetal placenta. Despite their tangible similarities, enhancer and repressed regions were mostly species-specific, whereas most promoter sequences were conserved between mouse and rat. The researchers proposed that ERVs influenced species-specific placental evolution through mediation of placental growth, immunosuppression, and cell fusion.

Another example of ERVs exploiting cellular mechanisms involves p53, a tumor suppressor gene (TSG). DNA damage and cellular stress induce the p53 pathway, which results in cell apoptosis. Chromatin immunoprecipitation with sequencing showed that thirty percent of all p53-binding sites were located within copies of a few primate-specific ERV families. A study suggested that this benefits retroviruses because the p53 mechanism provides rapid induction of transcription, which leads to the exit of viral RNA from the host cell.

Finally, the insertion of ERVs or ERV elements into genic regions of host DNA, or the overexpression of their transcriptional variants, has a much higher potential to produce deleterious effects than positive ones. Their appearance in the genome has created a host-parasite co-evolutionary dynamic that drove the duplication and expansion of repressor genes. The most clear-cut example of this involves the rapid duplication and proliferation of tandem zinc-finger genes in mammalian genomes. Zinc-finger genes, particularly those that include a KRAB domain, exist in high copy number in vertebrate genomes, and their range of functions is largely limited to transcriptional roles. It has been shown in mammals, however, that the diversification of these genes was due to multiple duplication and fixation events in response to new retroviral sequences or their endogenous copies, in order to repress their transcription.

Role in disease

The majority of ERVs that occur in vertebrate genomes are ancient, inactivated by mutation, and have reached genetic fixation in their host species. For these reasons, they are extremely unlikely to have negative effects on their hosts except under unusual circumstances. Nevertheless, it is clear from studies in birds and non-human mammal species, including mice, cats, and koalas, that younger (i.e., more recently integrated) ERVs can be associated with disease. The number of active ERVs in the genomes of mammals is negatively related to their body size, suggesting a contribution to Peto's paradox through cancer pathogenesis. This has led researchers to propose a role for ERVs in several forms of human cancer and autoimmune disease, although conclusive evidence is lacking.

Neurological disorders

In humans, ERVs have been proposed to be involved in multiple sclerosis (MS). A specific association between MS and the ERVWE1, or "syncytin", gene, which is derived from an ERV insertion, has been reported, along with the presence of an "MS-associated retrovirus" (MSRV), in patients with the disease. Human ERVs (HERVs) have also been implicated in ALS and addiction.

In 2004 it was reported that antibodies to HERVs were found at greater frequency in the sera of people with schizophrenia. Additionally, the cerebrospinal fluid of people with recent-onset schizophrenia contained levels of a retroviral marker, reverse transcriptase, four times higher than in control subjects. Researchers continue to look at a possible link between HERVs and schizophrenia, with the additional possibility of a triggering infection inducing schizophrenia.

Immunity

ERVs have been found to be associated with disease not only through disease-causing relations, but also through immunity. The frequency of ERVs in long terminal repeats (LTRs) likely correlates with viral adaptations that take advantage of immune signaling pathways promoting viral transcription and replication. A study done in 2016 investigated the benefit of ancient viral DNA integrated into a host through gene regulation networks induced by interferons, a branch of innate immunity. These cytokines are among the first to respond to viral infection and are also important in immunosurveillance for malignant cells. ERVs are predicted to act as cis-regulatory elements, but many of the adaptive consequences of this for particular physiological functions are still unknown. There are data supporting a general role of ERVs in the regulation of the human interferon response, specifically to interferon-gamma (IFNG). For example, interferon-stimulated genes were found to be greatly enriched with ERVs bound by signal transducer and activator of transcription 1 (STAT1) and/or interferon regulatory factor 1 (IRF1) in CD14+ macrophages.
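Enrichment findings like the one above are typically evaluated with a simple over-representation test. The sketch below uses a hypergeometric test with invented counts purely for illustration; the numbers are not taken from the cited study.

from scipy.stats import hypergeom

# All counts below are invented for the example.
total_genes = 20000          # genes considered genome-wide
isg_genes = 400              # interferon-stimulated genes (ISGs) among them
erv_associated_genes = 900   # genes with a STAT1/IRF1-bound ERV nearby
overlap = 60                 # ISGs that also have a bound ERV nearby

# Probability of seeing at least `overlap` ISGs among the ERV-associated genes by chance
p_value = hypergeom.sf(overlap - 1, total_genes, isg_genes, erv_associated_genes)
fold_enrichment = (overlap / erv_associated_genes) / (isg_genes / total_genes)
print(p_value, fold_enrichment)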

HERVs also play various roles in shaping the human innate immune response, with some sequences activating the system and others suppressing it. They may also protect against exogenous retroviral infections: the virus-like transcripts can activate pattern recognition receptors, and the proteins can interfere with active retroviruses. A Gag protein from HERV-K(HML2) has been shown to mix with HIV Gag, impairing HIV capsid formation as a result.

Gene regulation

Another proposed idea is that ERVs from the same family played a role in recruiting multiple genes into the same regulatory network. It was found that MER41 elements provided additional, redundant regulatory enhancement to genes located near STAT1 binding sites.

Role in medicine

Porcine endogenous retrovirus

For humans, porcine endogenous retroviruses (PERVs) pose a concern when porcine tissues and organs are used in xenotransplantation, the transplanting of living cells, tissues, and organs from an organism of one species to an organism of a different species. Although pigs are generally the most suitable donors to treat human organ diseases for practical, financial, safety, and ethical reasons, PERVs previously could not be removed from pigs, owing to their ability to integrate into the host genome and be passed to offspring, until 2017, when one lab, using CRISPR-Cas9, removed all 62 copies of PERV from the pig genome. The consequences of cross-species transmission remain unexplored and have dangerous potential.

Researchers have indicated that infection of human tissues by PERVs is very possible, especially in immunosuppressed individuals. An immunosuppressed condition could permit more rapid and tenacious replication of viral DNA, which would later have less difficulty adapting to human-to-human transmission. Although known infectious pathogens present in the donor organ or tissue can be eliminated by breeding pathogen-free herds, unknown retroviruses can still be present in the donor. These retroviruses are often latent and asymptomatic in the donor but can become active in the recipient. Some examples of endogenous viruses that can infect and multiply in human cells come from baboons (BaEV), cats (RD114), and mice.

There are three different classes of PERVs: PERV-A, PERV-B, and PERV-C. PERV-A and PERV-B are polytropic and can infect human cells in vitro, while PERV-C is ecotropic and does not replicate in human cells. The major differences between the classes lie in the receptor-binding domain of the env protein and in the long terminal repeats (LTRs) that influence the replication of each class. PERV-A and PERV-B display LTRs that have repeats in the U3 region, while repeatless LTRs have also been observed in PERV-A and PERV-C. Researchers found that PERVs in culture actively adapted the repeat structure of their LTRs to achieve the best replication performance in the host cell. At the end of their study, the researchers concluded that the repeatless PERV LTR evolved from the repeat-harboring LTR, likely through insertional mutation, as supported by data on the LTR and env/Env. It is thought that the generation of repeatless LTRs could reflect an adaptation process of the virus, changing from an exogenous to an endogenous lifestyle.

A clinical trial performed in 1999 sampled 160 patients who had been treated with various living pig tissues and observed no evidence of persistent PERV infection in the 97% of patients for whom sufficient DNA was available for PCR amplification of PERV sequences. The study noted, however, that retrospective studies are limited in their ability to determine the true incidence of infection or associated clinical symptoms, and suggested closely monitored prospective trials, which would provide a more complete and detailed evaluation of possible cross-species PERV transmission.

Human endogenous retroviruses

Human endogenous retroviruses (HERV) comprise a significant part of the human genome, with approximately 98,000 ERV elements and fragments making up 5–8%. According to a study published in 2005, no HERVs capable of replication had been identified; all appeared to be defective, containing major deletions or nonsense mutations (not true for HERV-K). This is because most HERVs are merely traces of original viruses, having first integrated millions of years ago. An analysis of HERV integrations is ongoing as part of the 100,000 Genomes Project.

Human endogenous retroviruses were discovered by accident in a couple of different experiments. Human genomic libraries were screened under low-stringency conditions using probes from animal retroviruses, allowing the isolation and characterization of multiple, though defective, proviruses representing various families. Another experiment depended on oligonucleotides with homology to viral primer-binding sites.

HERVs are classified based on their homologies to animal retroviruses. Families belonging to Class I are similar in sequence to mammalian Gammaretroviruses (type C) and Epsilonretroviruses (type E). Families belonging to Class II show homology to mammalian Betaretroviruses (type B) and Deltaretroviruses (type D). Families belonging to Class III are similar to foamy viruses. For all classes, if homologies appear well conserved in the gag, pol, and env genes, they are grouped into a superfamily. Class I contains more known families than the other classes. The families themselves are named in a less uniform manner, with a mixture of naming based on an exogenous retrovirus, the priming tRNA (HERV-W, HERV-K), a neighboring gene (HERV-ADP), a clone number (HERV-S71), or an amino acid motif (HERV-FRD). A proposed nomenclature aims to clean up the sometimes paraphyletic standards.

There are two proposals for how HERVs became fixed in the human genome. The first assumes that, sometime during human evolution, exogenous progenitors of HERVs inserted themselves into germline cells and then replicated along with the host's genes, using and exploiting the host's cellular machinery. Because of their distinct genomic structure, HERVs were subjected to many rounds of amplification and transposition, which led to a widespread distribution of retroviral DNA. The second hypothesis posits the continuous evolution of retroelements from simpler structured ancestors.

Nevertheless, one family of viruses has been active since the divergence of humans and chimpanzees. This family, termed HERV-K (HML2), makes up less than 1% of HERV elements but is one of the most studied. There are indications it has even been active in the past few hundred thousand years, e.g., some human individuals carry more copies of HML2 than others. Traditionally, age estimates of HERVs are performed by comparing the 5' and 3' LTR of a HERV; however, this method is only relevant for full-length HERVs. A recent method, called cross-sectional dating, uses variations within a single LTR to estimate the ages of HERV insertions. This method is more precise in estimating HERV ages and can be used for any HERV insertions. Cross-sectional dating has been used to suggest that two members of HERV-K (HML2), HERV-K106 and HERV-K116, were active in the last 800,000 years and that HERV-K106 may have infected modern humans 150,000 years ago. However, the absence of known infectious members of the HERV-K (HML2) family, and the lack of elements with a full coding potential within the published human genome sequence, suggests to some that the family is less likely to be active at present. In 2006 and 2007, researchers working independently in France and the US recreated functional versions of HERV-K (HML2).
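The traditional 5' to 3' LTR comparison mentioned above reduces to a simple calculation: the two LTRs are identical at the moment of integration, so their observed divergence accumulates along two lineages at the host's neutral substitution rate. The sketch below illustrates that arithmetic with an assumed rate; it is not the cross-sectional dating method itself, which instead models variation within a single LTR across copies.

def ltr_age_years(ltr_divergence, subs_per_site_per_year=2.5e-9):
    """Approximate insertion age from 5'-3' LTR divergence.

    Both LTRs are identical when the provirus integrates, so a divergence d
    implies an age of roughly t = d / (2 * rate). The default rate is an
    assumed round figure for illustration only.
    """
    return ltr_divergence / (2 * subs_per_site_per_year)

# Example: a provirus whose LTRs differ at 0.5% of sites
print(ltr_age_years(0.005))  # about 1,000,000 years under the assumed rate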

MER41.AIM2 is an HERV element that regulates the transcription of AIM2 (Absent in Melanoma 2), which encodes a sensor of foreign cytosolic DNA. The element provides a binding site that is necessary for the transcription of AIM2. Researchers showed this by deleting MER41.AIM2 in HeLa cells using CRISPR/Cas9, which led to undetectable levels of AIM2 transcript in the modified cells; control cells, which still contained the MER41.AIM2 element, showed normal amounts of AIM2 transcript. In terms of immunity, the researchers concluded that MER41.AIM2 is necessary for an inflammatory response to infection.

Immunological studies have shown some evidence for T cell immune responses against HERVs in HIV-infected individuals. The hypothesis that HIV induces HERV expression in HIV-infected cells led to the proposal that a vaccine targeting HERV antigens could specifically eliminate HIV-infected cells. The potential advantage of this novel approach is that, by using HERV antigens as surrogate markers of HIV-infected cells, it could circumvent the difficulty inherent in directly targeting notoriously diverse and fast-mutating HIV antigens.

There are a few classes of human endogenous retroviruses that still have intact open reading frames. For example, the expression of HERV-K, a biologically active family of HERVs, produces proteins found in the placenta. Furthermore, the expression of the envelope genes of HERV-W (ERVW-1) and HERV-FRD (ERVFRD-1) produces syncytins, which are important for the generation of the syncytiotrophoblast cell layer during placentogenesis by inducing cell-cell fusion. The HUGO Gene Nomenclature Committee (HGNC) approves gene symbols for transcribed human ERVs.

Techniques for characterizing ERVs

Whole genome sequencing

Example: PERV-A-BM, a porcine ERV (PERV) isolate from a Chinese-born minipig, was sequenced completely, along with isolates from different breeds and cell lines, in order to understand its genetic variation and evolution. The observed number of nucleotide substitutions among the different genome sequences helped researchers estimate the age at which PERV-A-BM integrated into its host genome, which was found to be evolutionarily earlier than the isolates from European-born pigs.

Chromatin immunoprecipitation with sequencing (ChIP-seq)

This technique is used to find histone marks indicative of promoters and enhancers, which are binding sites for DNA-binding proteins, as well as marks of repressed regions, such as histone trimethylation. DNA methylation has been shown to be vital for maintaining the silencing of ERVs in mouse somatic cells, while histone marks are vital for the same purpose in embryonic stem cells (ESCs) and early embryogenesis.
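In practice, relating ChIP-seq signal to ERVs comes down to intersecting peak coordinates with ERV annotations. The following minimal sketch checks interval overlaps on invented coordinates; real analyses would normally use dedicated genome-arithmetic tools such as bedtools.

# Which ChIP-seq peaks (for example, histone-mark peaks) overlap annotated ERV copies?
# All coordinates and names below are invented for illustration.
peaks = [("chr1", 10500, 11200), ("chr1", 54000, 54800)]
ervs = [("chr1", 11000, 16000, "ERV_copy_A"), ("chr1", 40000, 45000, "ERV_copy_B")]

def overlaps(a_start, a_end, b_start, b_end):
    """True if the two half-open intervals share at least one base."""
    return a_start < b_end and b_start < a_end

for chrom, p_start, p_end in peaks:
    for e_chrom, e_start, e_end, name in ervs:
        if chrom == e_chrom and overlaps(p_start, p_end, e_start, e_end):
            print(f"peak {chrom}:{p_start}-{p_end} overlaps {name}")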

Applications

Constructing phylogenies

Because most HERVs have no function, are selectively neutral, and are very abundant in primate genomes, they easily serve as phylogenetic markers for linkage analysis. They can be exploited by comparing integration-site polymorphisms or the evolving proviral nucleotide sequences of orthologs. To estimate when integration occurred, researchers have used distances from phylogenetic trees to find the rate of molecular evolution at each particular locus. It is also useful that ERVs are abundant in the genomes of many species (e.g., plants, insects, mollusks, fish, rodents, domestic pets, and livestock), so the approach can be used to answer a variety of phylogenetic questions.
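As a concrete illustration of the integration-site approach, shared insertions can be scored as a presence/absence matrix across taxa, since an ERV found at the same genomic locus in two species almost always reflects a single insertion in their common ancestor. The sketch below counts shared insertions between hypothetical species; the loci and calls are invented, and a real study would feed such a matrix into phylogenetic software.

# Presence/absence of ERV insertions at orthologous loci (invented example data).
presence = {
    "human":      [1, 1, 1, 0],
    "chimpanzee": [1, 1, 0, 0],
    "macaque":    [1, 0, 0, 1],
}

def shared_insertions(a, b):
    """Count loci where both species carry the insertion."""
    return sum(1 for x, y in zip(presence[a], presence[b]) if x and y)

# Taxa sharing more insertions are inferred to share a more recent common ancestor.
print(shared_insertions("human", "chimpanzee"))  # 2
print(shared_insertions("human", "macaque"))     # 1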

Designating the age of provirus and the time points of species separation events

This is accomplished by comparing different HERVs from different evolutionary periods. For example, such a study was done for hominoids ranging from humans to apes and monkeys. This is difficult to do with PERVs because of the large diversity present.

Further research

Epigenetic variability

Researchers could analyze individual epigenomes and transcriptomes to study the reactivation of dormant transposable elements through epigenetic release, their potential associations with human disease, and the specifics of gene regulatory networks.

Immunological problems of xenotransplantation

Little is known about an effective way of overcoming hyperacute rejection (HAR), which follows the activation of complement initiated by xenoreactive antibodies recognizing galactosyl-alpha1-3galactosyl (alpha-Gal) antigens on the donor epithelium.

Risk factors of HERVs in gene therapy

Because retroviruses are able to recombine with each other and with other endogenous DNA sequences, it would be beneficial for gene therapy to explore the potential risks HERVs can cause, if any. Also, this ability of HERVs to recombine can be manipulated for site-directed integration by including HERV sequences in retroviral vectors.

HERV gene expression

Researchers believe that RNA and proteins encoded by HERV genes should continue to be explored for putative functions in cell physiology and in pathological conditions, in order to define more deeply the biological significance of the proteins synthesized.
