
Tuesday, March 10, 2015

Protein engineering


From Wikipedia, the free encyclopedia

Protein engineering is the process of developing useful or valuable proteins. It is a young discipline, with much research still devoted to understanding protein folding and protein recognition as a basis for protein design principles.

There are two general strategies for protein engineering: 'rational' protein design and directed evolution. These techniques are not mutually exclusive; researchers will often apply both. In the future, more detailed knowledge of protein structure and function, as well as advances in high-throughput technology, may greatly expand the capabilities of protein engineering. Eventually, even unnatural amino acids may be incorporated, using methods that allow novel amino acids to be included in the genetic code.

Approaches

Rational design

In rational protein design, the scientist uses detailed knowledge of the structure and function of the protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis techniques are well-developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable, and, even when it is available, it can be extremely difficult to predict the effects of various mutations.
Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones.
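As a purely illustrative sketch of that loop (Python; the energy function below is a placeholder, not a real force field), a simple design algorithm proposes point changes to a sequence and keeps those that lower the computed energy for the target structure:

    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def energy(sequence, target_structure):
        # Placeholder: a real design method would score packing, electrostatics,
        # solvation, etc. of the sequence threaded onto the target structure.
        # Here we derive a repeatable pseudo-score so the loop can run.
        return (hash((sequence, target_structure)) % 10_000) / 100.0

    def design_sequence(target_structure, length=100, steps=5000):
        sequence = "".join(random.choice(AMINO_ACIDS) for _ in range(length))
        best = energy(sequence, target_structure)
        for _ in range(steps):
            pos = random.randrange(length)
            candidate = sequence[:pos] + random.choice(AMINO_ACIDS) + sequence[pos + 1:]
            score = energy(candidate, target_structure)
            if score < best:              # keep lower-energy sequences
                sequence, best = candidate, score
        return sequence, best

    print(design_sequence("hypothetical_target_backbone")[1])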

Directed evolution

In directed evolution, random mutagenesis is applied to a protein, and a selection regime is used to pick out variants that have the desired qualities. Further rounds of mutation and selection are then applied. This method mimics natural evolution and, in general, produces superior results to rational design. An additional technique known as DNA shuffling mixes and matches pieces of successful variants to produce better results; this process mimics the recombination that occurs naturally during sexual reproduction. The advantage of directed evolution is that it requires no prior structural knowledge of a protein, nor any ability to predict what effect a given mutation will have. Indeed, the results of directed evolution experiments are often surprising, in that desired changes are frequently caused by mutations that were not expected to have that effect. The drawback is that directed evolution requires high-throughput screening, which is not feasible for all proteins: large amounts of recombinant DNA must be mutated and the products screened for the desired qualities. The sheer number of variants often requires expensive robotic equipment to automate the process, and not all desired activities can be easily screened for.
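The cycle lends itself to a toy simulation (Python; the 'fitness' here is just similarity to an arbitrary target sequence, standing in for a real selection or screen):

    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    TARGET = "MKVLAAGILS"          # arbitrary stand-in for the "desired qualities"

    def fitness(variant):
        # Proxy assay: count positions matching the target sequence.
        return sum(a == b for a, b in zip(variant, TARGET))

    def mutate(parent, rate=0.1):
        # Random point mutagenesis (crude stand-in for error-prone PCR).
        return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa
                       for aa in parent)

    parent = "".join(random.choice(AMINO_ACIDS) for _ in TARGET)
    for generation in range(20):
        library = [mutate(parent) for _ in range(500)]      # mutagenesis
        parent = max(library, key=fitness)                  # screening/selection
        print(generation, parent, fitness(parent))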

Examples of engineered proteins

Using computational methods, a protein with a novel fold, known as Top7, has been designed,[1] as well as sensors for unnatural molecules.[2] The engineering of fusion proteins has yielded rilonacept, a pharmaceutical that has secured FDA approval for the treatment of cryopyrin-associated periodic syndromes.

Another computational method, IPRO, successfully engineered the switching of cofactor specificity of Candida boidinii xylose reductase.[3] Iterative Protein Redesign and Optimization (IPRO) redesigns proteins to increase or confer specificity toward native or novel substrates and cofactors. This is done by repeatedly and randomly perturbing the structure of the protein around specified design positions, identifying the lowest-energy combination of rotamers, and determining whether the new design has a lower binding energy than previous ones.[4]
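Schematically, and not as the actual IPRO implementation (the perturbation, rotamer-optimisation and energy routines below are empty placeholders), the iteration described above can be written as:

    import random

    def perturb_backbone(structure, design_positions):
        # Placeholder: a real implementation would locally perturb the backbone
        # coordinates around the design positions.
        return structure

    def best_rotamer_assignment(structure, design_positions):
        # Placeholder: a real implementation solves an optimisation problem
        # for the lowest-energy combination of rotamers.
        return {pos: "ARG" for pos in design_positions}

    def binding_energy(structure, rotamers):
        # Placeholder energy model.
        return random.uniform(-50.0, 0.0)

    def ipro_like_redesign(structure, design_positions, iterations=100):
        best_design, best_e = None, float("inf")
        for _ in range(iterations):
            perturbed = perturb_backbone(structure, design_positions)
            rotamers = best_rotamer_assignment(perturbed, design_positions)
            e = binding_energy(perturbed, rotamers)
            if e < best_e:                      # keep designs that bind better
                best_design, best_e = rotamers, e
        return best_design, best_e

    print(ipro_like_redesign("hypothetical_structure", design_positions=[12, 45, 80]))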

Computation-aided design has also been used to engineer complex properties of a highly ordered nano-protein assembly.[5] The model protein in this study is a protein cage, E. coli bacterioferritin (EcBfr), which naturally shows structural instability and incomplete self-assembly behavior by populating two oligomerization states. Through computational analysis and comparison to its homologs, it was found that this protein has a smaller-than-average dimeric interface on its two-fold symmetry axis, due mainly to an interfacial water pocket centered around two water-bridged asparagine residues. To investigate the possibility of engineering EcBfr for modified structural stability, a semi-empirical computational method was used to virtually explore the energy differences of the 480 possible mutants at the dimeric interface relative to wild-type EcBfr. This computational study also converged on the water-bridged asparagines. Replacing these two asparagines with hydrophobic amino acids results in proteins that fold into alpha-helical monomers and assemble into cages, as evidenced by circular dichroism and transmission electron microscopy. Both thermal and chemical denaturation confirm that all redesigned proteins, in agreement with the calculations, possess increased stability. One of the three mutations shifts the population in favor of the higher-order oligomerization state in solution, as shown by both size-exclusion chromatography and native gel electrophoresis.[5]

Enzyme engineering

Enzyme engineering is the modification of an enzyme's structure (and thus its function) or of the catalytic activity of isolated enzymes to produce new metabolites, to allow new (catalyzed) pathways for reactions to occur,[6] or to convert certain compounds into others (biotransformation). The resulting products are useful as chemicals, pharmaceuticals, fuels, food, or agricultural additives.

An enzyme reactor[7] consists of a vessel containing a reaction medium that is used to perform a desired conversion by enzymatic means. The enzymes used in this process are free in solution.

World's first thorium reactor ready to be built for cheaper, safer nuclear energy


Unlike current nuclear power stations, which use uranium, the thorium plant won't use a material that can be weaponised. It would also pose far less danger from a meltdown. Thorium is also more abundant than uranium, so it will be cheaper and easier to supply.

The safer material means it can be supplied at a lower cost with far fewer security needs; security measures are actually the most expensive part of building current nuclear power stations. Thorium reactors, on the other hand, don't require special containment buildings and can even be set up in ordinary structures.

The proposed thorium reactor is designed to run by itself without any need for intervention; it would only need to be checked by a person once every four months.

The plan is to build a 300 MW reactor by 2016, which should have an operating life of 100 years. India's Thorium Energy Program, which is behind the system, aims to expand from the prototype so that 30 per cent of India's energy comes from thorium reactors by 2050.

Since thorium reactors are far safer than current nuclear reactors, there has been talk of miniaturising them so that a $1,000 unit could power a ten-house street for a lifetime. While that sounds exciting, the reality is still a long way off.

Directed evolution


From Wikipedia, the free encyclopedia


An example of directed evolution, with comparison to natural evolution. The inner cycle indicates the three stages of the directed evolution cycle, with the natural process being mimicked in brackets. The outer circle demonstrates the steps of a typical experiment. The red symbols indicate functional variants; the pale symbols indicate variants with reduced function.

Directed evolution (DE) is a method used in protein engineering that mimics the process of natural selection to evolve proteins or nucleic acids toward a user-defined goal.[1] It consists of subjecting a gene to iterative rounds of mutagenesis (creating a library of variants), selection (expressing the variants and isolating members with the desired function), and amplification (generating a template for the next round). It can be performed in vivo (in living cells) or in vitro (free in solution or in microdroplets). Directed evolution is used both for protein engineering, as an alternative to rationally designing modified proteins, and for studies of fundamental evolutionary principles in a controlled laboratory environment.

Principles


Directed evolution is analogous to climbing a hill on a 'fitness landscape' where elevation represents the desired property. Each round of selection samples mutants on all sides of the starting template (1) and selects the mutant with the highest elevation, thereby climbing the hill. This is repeated until a local summit is reached (2).

Directed evolution is a mimic of the natural evolution cycle in a laboratory setting. Evolution requires three things: variation between replicators, fitness differences caused by that variation upon which selection acts, and heritability of the variation. In DE, a single gene is evolved by iterative rounds of mutagenesis, selection or screening, and amplification.[2] Rounds of these steps are typically repeated, using the best variant from one round as the template for the next to achieve stepwise improvements.

The likelihood of success in a directed evolution experiment is directly related to the total library size, as evaluating more mutants increases the chances of finding one with the desired properties.[3]
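For intuition, a back-of-the-envelope model (assuming, purely for illustration, that each library member independently has a small probability p of carrying an improvement) gives the chance of at least one hit as 1 - (1 - p)^N:

    # Chance of at least one improved variant in a library of N members,
    # if each member is independently improved with probability p.
    p = 1e-4
    for N in (1_000, 10_000, 100_000, 1_000_000):
        print(f"N = {N:>9,}: P(at least one hit) = {1 - (1 - p) ** N:.4f}")
    # Roughly 0.10, 0.63, 1.00, 1.00 for this choice of p.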

Generating variation


Starting gene (left) and library of variants (right). Point mutations change single nucleotides. Insertions and deletions add or remove sections of DNA. Shuffling recombines segments of two (or more) similar genes.

The first step in performing a cycle of directed evolution is the generation of a library of variant genes. The sequence space of random sequences is vast (roughly 10^130 possible sequences for a 100 amino acid protein) and extremely sparsely populated by functional proteins. Neither experimental[4] nor natural[5] evolution can ever come close to sampling so many sequences. Instead, natural evolution samples variant sequences close to functional protein sequences, and this is imitated in DE by mutagenising an already functional gene.
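A two-line check of those numbers, and of why mutating an existing gene is so much more tractable than sampling sequence space at random:

    print(f"{20 ** 100:.2e}")   # full sequence space of a 100-residue protein, ~1.3e130
    print(100 * 19)             # single point mutants of one such sequence: 1900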

The starting gene can be mutagenised by random point mutations (by chemical mutagens or error prone PCR)[6][7] and insertions and deletions (by transposons).[8] Gene recombination can be mimicked by DNA shuffling[9][10] of several sequences (usually of more than 70% homology) to jump into regions of sequence space between the shuffled parent genes. Finally, specific regions of a gene can be systematically randomised[11] for a more focused approach based on structure and function knowledge. Depending on the method, the library generated will vary in the proportion of functional variants it contains. Even if an organism is used to express the gene of interest, by mutagenising only that gene, the rest of the organism’s genome remains the same and can be ignored for the evolution experiment (to the extent of providing a constant genetic environment).
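A minimal sketch (Python; the mutation rates and parent sequences are invented for illustration) of two of the diversification methods mentioned, random point mutagenesis and shuffling-style recombination of two homologous parents:

    import random

    BASES = "ACGT"

    def error_prone_copy(gene, rate=0.01):
        # Random point mutations, mimicking error-prone PCR.
        return "".join(random.choice(BASES) if random.random() < rate else b
                       for b in gene)

    def shuffle(parent_a, parent_b, crossovers=3):
        # Crude DNA-shuffling stand-in: swap between two homologous parents
        # at randomly chosen crossover points.
        points = sorted(random.sample(range(1, len(parent_a)), crossovers))
        child, source, last = [], [parent_a, parent_b], 0
        for i, point in enumerate(points + [len(parent_a)]):
            child.append(source[i % 2][last:point])
            last = point
        return "".join(child)

    parent = "".join(random.choice(BASES) for _ in range(300))
    homolog = error_prone_copy(parent, rate=0.05)       # a "related" gene
    library = [error_prone_copy(parent) for _ in range(100)]
    library += [shuffle(parent, homolog) for _ in range(100)]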

Detecting fitness differences

The majority of mutations are deleterious and so libraries of mutants tend to mostly have variants with reduced activity.[12] Therefore, a high-throughput assay is vital for measuring activity to find the rare variants with beneficial mutations that improve the desired properties. Two main categories of method exist for isolating functional variants. Selection systems directly couple protein function to survival of the gene, whereas screening systems individually assay each variant and allow a quantitative threshold to be set for sorting a variant or population of variants of a desired activity.
Both selection and screening can be performed in living cells (in vivo evolution) or performed directly on the protein or RNA without any cells (in vitro evolution).[13][14]

During in vivo evolution, each cell (usually a bacterium or yeast) is transformed with a plasmid containing a different member of the variant library. In this way, only the gene of interest differs between the cells, with all other genes kept the same. The cells express the protein either in their cytoplasm or on their surface, where its function can be tested. This format has the advantage of selecting for properties in a cellular environment, which is useful when the evolved protein or RNA is to be used in living organisms. When performed without cells, DE involves using in vitro transcription and translation to produce proteins or RNA free in solution or compartmentalised in artificial microdroplets. This method has the benefit of being more versatile in the selection conditions (e.g. temperature, solvent) and can express proteins that would be toxic to cells. Furthermore, in vitro evolution experiments can generate far larger libraries (up to 10^15 variants) because the library DNA need not be inserted into cells (often a limiting step).

Selection

Selection for binding activity is conceptually simple: the target molecule is immobilised on a solid support, a library of variant proteins is flowed over it, poor binders are washed away, and the remaining bound variants are recovered to isolate their genes.[15] Binding of an enzyme to an immobilised covalent inhibitor has also been used as an attempt to isolate active catalysts; this approach, however, only selects for a single catalytic turnover and is not a good model of substrate binding or true substrate reactivity. If an enzyme activity can be made necessary for cell survival, either by synthesising a vital metabolite or destroying a toxin, then cell survival is a function of enzyme activity.[16][17] Such systems are generally limited in throughput only by the transformation efficiency of cells. They are also less expensive and labour-intensive than screening; however, they are typically difficult to engineer, prone to artefacts, and give no information on the range of activities present in the library.

Screening

An alternative to selection is a screening system. Each variant gene is individually expressed and assayed to quantitatively measure its activity (most often via a chromogenic or fluorogenic product). The variants are then ranked, and the experimenter decides which variants to use as templates for the next round of DE. Even the highest-throughput assays usually have lower coverage than selection methods, but they give the advantage of producing detailed information on each of the screened variants. This disaggregated data can also be used to characterise the distribution of activities in libraries, which is not possible in simple selection systems. Screening systems therefore have advantages when it comes to experimentally characterising adaptive evolution and fitness landscapes.
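In code terms, screening amounts to assaying every variant, keeping the full activity distribution, and choosing the top performers as templates for the next round (a minimal sketch; `assay` stands in for whatever chromogenic or fluorogenic measurement is available):

    def screen(library, assay, keep=10):
        # Assay each variant individually and record its measured activity.
        results = sorted(((assay(variant), variant) for variant in library), reverse=True)
        activities = [activity for activity, _ in results]    # full distribution retained
        templates = [variant for _, variant in results[:keep]]
        return templates, activities

    # Example with a dummy assay (sequence length as "activity"):
    templates, activities = screen(["AAA", "AAAA", "AA"], assay=len, keep=2)
    print(templates)   # ['AAAA', 'AAA']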

Ensuring heredity


An expressed protein can either be covalently linked to its gene (as in mRNA display, left) or compartmentalised with it (in cells or artificial compartments, right). Either way ensures that the gene can be isolated based on the activity of the encoded protein.

When functional proteins have been isolated, it is necessary that their genes are isolated too; therefore a genotype-phenotype link is required.[18] This can be covalent, as in mRNA display, where the mRNA gene is linked to the protein at the end of translation by puromycin.[19] Alternatively, the protein and its gene can be co-localised by compartmentalisation in living cells[20] or emulsion droplets.[21] The isolated gene sequences are then amplified by PCR or by transformed host bacteria. Either the single best sequence or a pool of sequences can be used as the template for the next round of mutagenesis. The repeated cycles of diversification, selection and amplification generate protein variants adapted to the applied selection pressures.

Comparison to rational protein design

Advantages of directed evolution

Rational design of a protein relies on an in-depth knowledge of the protein structure, as well as its catalytic mechanism.[22][23] Specific changes are then made by site-directed mutagenesis in an attempt to change the function of the protein. A drawback of this is that even when the structure and mechanism of action of the protein are well known, the change due to mutation is still difficult to predict. Therefore, an advantage of DE is that there is no need to understand the mechanism of the desired activity or how mutations would affect it.[24]

Limitations of directed evolution

A restriction of directed evolution is that a high-throughput assay is required in order to measure the effects of a large number of different random mutations. This can require extensive research and development before it can be used for directed evolution. Additionally, such assays are often highly specific to monitoring a particular activity and so are not transferable to new DE experiments.[25]
Additionally, selecting for improvement in the assayed function simply generates improvements in the assayed function. To understand how these improvements are achieved, the properties of the evolving enzyme have to be measured. Improvement of the assayed activity can be due to improvements in enzyme catalytic activity or enzyme concentration. There is also no guarantee that improvement on one substrate will improve activity on another. This is particularly important when the desired activity cannot be directly screened or selected for and so a ‘proxy’ substrate is used. DE can lead to evolutionary specialisation to the proxy without improving the desired activity. Consequently, choosing appropriate screening or selection conditions is vital for successful DE.

Combinatorial approaches

Combined, 'semi-rational' approaches are being investigated to address the limitations of both rational design and directed evolution.[26][27] Beneficial mutations are rare, so large numbers of random mutants have to be screened to find improved variants. 'Focussed libraries' concentrate on randomising regions thought to be richer in beneficial mutations for the mutagenesis step of DE. A focussed library contains fewer variants than a traditional random mutagenesis library and so does not require such high-throughput screening.

Creating a focussed library requires some knowledge of which residues in the structure to mutate. For example, knowledge of the active site of an enzyme may allow just the residues known to interact with the substrate to be randomised.[28][29] Alternatively, knowledge of which protein regions are variable in nature can guide mutagenesis in just those regions.[30][31]
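A sketch of constructing such a focussed library in silico (Python; the parent sequence and the chosen positions are invented purely for illustration), site-saturating only the selected residues:

    import itertools, random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def focused_library(parent, positions, max_size=10_000):
        # Site-saturation at the chosen positions only; the rest of the
        # sequence is left untouched.
        library = []
        for combo in itertools.product(AMINO_ACIDS, repeat=len(positions)):
            variant = list(parent)
            for pos, aa in zip(positions, combo):
                variant[pos] = aa
            library.append("".join(variant))
            if len(library) >= max_size:
                break
        return library

    parent = "".join(random.choice(AMINO_ACIDS) for _ in range(150))
    lib = focused_library(parent, positions=[45, 71, 102])   # 20^3 = 8000 variants
    print(len(lib))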

Uses

Directed evolution is frequently used for protein engineering as an alternative to rational design,[32] but can also be used to investigate fundamental questions of enzyme evolution.[33]

Protein engineering

As a protein engineering tool, DE has been most successful in three areas:
  1. Improving protein stability for biotechnological use at high temperatures or in harsh solvents.[34][35]
  2. Improving binding affinity of therapeutic antibodies (Affinity maturation)[36] and the activity of de novo designed enzymes.[37]
  3. Altering substrate specificity of existing enzymes,[38][39][40][41] (often for use in industry).[42]

Evolution studies

The study of natural evolution is traditionally based on extant organisms and their genes. However, research is fundamentally limited by the lack of fossils (and particularly the lack of ancient DNA sequences)[43][44] and incomplete knowledge of ancient environmental conditions. Directed evolution investigates evolution in a controlled system of genes for individual enzymes,[45][46][47] ribozymes[48] and replicators[49][50] (similar to experimental evolution of eukaryotes,[51][52] prokaryotes[53] and viruses[54]).

DE allows control of selection pressure, mutation rate and environment (both the abiotic environment, such as temperature, and the biotic environment, such as other genes in the organism). Additionally, there is a complete record of all evolutionary intermediate genes. This allows for detailed measurements of evolutionary processes, for example epistasis, evolvability, adaptive constraint,[55] fitness landscapes,[56] and neutral networks.[57]

Xenobiology



From Wikipedia, the free encyclopedia

Xenobiology (XB) is a subfield of synthetic biology, the study of synthesizing and manipulating biological devices and systems. The name derives from the Greek xenos, meaning "stranger, guest"; XB thus describes a form of biology that is not (yet) familiar to science and is not found in nature. In practice it describes novel biological systems and biochemistries that differ from the canonical DNA-RNA-20 amino acid system (see the classical central dogma of molecular biology). For example, instead of DNA or RNA, XB explores nucleic acid analogues, termed xeno nucleic acids (XNA), as information carriers.[1] It also focuses on an expanded genetic code[2] and the incorporation of non-proteinogenic amino acids into proteins.[3]

Difference between xeno-, exo-, and astro-biology

Astro means star and exo means outside. Both exo- and astrobiology deal with the search for naturally evolved life in the Universe, mostly on other planets in Goldilocks zones. Whereas astrobiologists are concerned with the detection and analysis of (hypothetically) existing life elsewhere in the Universe, xenobiology attempts to design forms of life with a different biochemistry or different genetic code on planet Earth.[4]

Aims of xenobiology

  • Xenobiology has the potential to reveal fundamental knowledge about biology and the origin of life. In order to better understand the origin of life, it is necessary to know why life evolved seemingly via an early RNA world to the DNA-RNA-protein system and its nearly universal genetic code.[5] Was it an evolutionary "accident", or were there constraints that ruled out other types of chemistries? By testing alternative biochemical "primordial soups", researchers expect to better understand the principles that gave rise to life as we know it.
  • Xenobiology is an approach to developing industrial production systems with novel capabilities by means of enhanced biopolymer engineering and pathogen resistance. The genetic code encodes, in all organisms, 20 canonical amino acids that are used for protein biosynthesis. In rare cases, special amino acids such as selenocysteine, pyrrolysine or formylmethionine can be incorporated by the translational apparatus into the proteins of some organisms.[6] By using additional amino acids from among the over 700 known to biochemistry, the capabilities of proteins may be altered to give rise to more efficient catalytic or material functions. The EC-funded project METACODE, for example, aims to incorporate metathesis (a useful catalytic function so far not known in living organisms) into bacterial cells. Another reason why XB could improve production processes lies in the possibility of reducing the risk of virus or bacteriophage contamination in cultivations, since XB cells would no longer provide suitable host cells, rendering them more resistant (an approach called semantic containment).
  • Xenobiology offers the option to design a "genetic firewall", a novel biocontainment system, which may help to strengthen and diversify current bio-containment approaches. One concern with traditional genetic engineering and biotechnology is horizontal gene transfer to the environment and possible risks to human health. One major idea in XB is to design alternative genetic codes and biochemistries so that horizontal gene transfer is no longer possible. Additionally, an alternative biochemistry also allows for new synthetic auxotrophies. The idea is to create an orthogonal biological system that would be incompatible with natural genetic systems.[7]

Scientific approach

In xenobiology, the aim is to design and construct biological systems that differ from their natural counterparts on one or more fundamental levels. Ideally, these new-to-nature organisms would be different in every possible biochemical aspect and would exhibit a very different genetic code. The long-term goal is to construct a cell that would store its genetic information not in DNA but in an alternative informational polymer consisting of xeno nucleic acids (XNA) and different base pairs, and that would use non-canonical amino acids and an altered genetic code. So far, cells have been constructed that incorporate only one or two of these features.

Xeno nucleic acids (XNA)

Originally this research on alternative forms of DNA was driven by the question of how life evolved on Earth and why RNA and DNA were selected by (chemical) evolution over other possible nucleic acid structures.[8] Systematic experimental studies aiming at the diversification of the chemical structure of nucleic acids have resulted in completely novel informational biopolymers. So far, a number of XNAs with new chemical backbones or leaving groups of the DNA have been synthesized,[9][10][11][12] e.g. hexose nucleic acid (HNA), threose nucleic acid (TNA),[13] glycol nucleic acid (GNA) and cyclohexenyl nucleic acid (CeNA).[14] The incorporation of XNA into a plasmid, involving three HNA codons, was accomplished as early as 2003.[15] This XNA was used in vivo (in E. coli) as a template for DNA synthesis. This study, using a binary (G/T) genetic cassette and two non-DNA bases (Hx/U), was later extended to CeNA, while GNA seems at this point to be too alien for the natural biological system to use as a template for DNA synthesis.[16] Extended bases using a natural DNA backbone could, likewise, be transliterated into natural DNA, although to a more limited extent.[17]

Expanding the genetic alphabet

While XNAs have modified backbones, other experiments target the replacement or enlargement of the genetic alphabet of DNA with unnatural base pairs. For example, DNA has been designed that has, instead of the four standard bases A, T, G, and C, six bases: A, T, G, C, and the two new ones P and Z (where Z stands for 6-amino-5-nitro-3-(1'-β-D-2'-deoxyribofuranosyl)-2(1H)-pyridone, and P stands for 2-amino-8-(1'-β-D-2'-deoxyribofuranosyl)imidazo[1,2-a]-1,3,5-triazin-4(8H)-one).[18][19][20] In a systematic study, Leconte et al. tested the viability of 60 candidate bases (yielding potentially 3600 base pairs) for possible incorporation into DNA.[21]
In 2002, Hirao et al. developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in vitro in transcription and translation, toward a genetic code for protein synthesis containing a non-standard amino acid.[22] In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription,[23] and afterward Ds and 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) were discovered as a high-fidelity pair in PCR amplification.[24][25] In 2013, they applied the Ds-Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins.[26]
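A quick combinatorial check of what alphabet expansion buys in coding capacity (a simple calculation, not taken from the cited studies):

    print(4 ** 3)   # 64 triplet codons with the standard bases A, T, G, C
    print(6 ** 3)   # 216 triplet codons if two extra bases (e.g. P and Z) pair reliably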

In May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA, alongside the four naturally occurring nucleotides, and by including individual artificial nucleotides in the culture media, were able to passage the bacteria 24 times; they did not create mRNA or proteins able to use the artificial nucleotides.[27][28][29]

Novel polymerases

Neither XNAs nor the unnatural bases are recognized by natural polymerases. One of the major challenges is to find or create novel types of polymerases that will be able to replicate these new-to-nature constructs. In one case, a modified variant of HIV reverse transcriptase was found to be able to PCR-amplify an oligonucleotide containing a third type of base pair.[30][31] Pinheiro et al. (2012) demonstrated that polymerase evolution and design successfully led to the storage and recovery of genetic information (of less than 100 bp in length) from six alternative genetic polymers based on simple nucleic acid architectures not found in nature, i.e. xeno nucleic acids.[32]

Genetic code engineering

One of the goals of xenobiology is to rewrite the genetic code. The most promising approach to changing the code is the reassignment of seldom-used or even unused codons.[33] In an ideal scenario, the genetic code is expanded by one codon, which is liberated from its old function and fully reassigned to a non-canonical amino acid (ncAA) ("code expansion"). As these methods are laborious to implement, some short cuts can be applied ("code engineering"), for example in bacteria that are auxotrophic for specific amino acids and at some point in the experiment are fed isostructural analogues instead of the canonical amino acids for which they are auxotrophic. In that situation, the canonical amino acid residues in native proteins are substituted with the ncAAs. Even the insertion of multiple different ncAAs into the same protein is possible.[34] Finally, the repertoire of 20 canonical amino acids can not only be expanded, but also reduced to 19.[35] By reassigning transfer RNA (tRNA)/aminoacyl-tRNA synthetase pairs, the codon specificity can be changed. Cells endowed with such aminoacyl-tRNA synthetases are thus able to read mRNA sequences that make no sense to the existing gene expression machinery.[36] Altering codon-tRNA synthetase pairs may lead to the in vivo incorporation of non-canonical amino acids into proteins.[37][38] In the past, reassigning codons was mainly done on a limited scale. In 2013, however, Farren Isaacs and George Church at Harvard University reported the replacement of all 321 TAG stop codons present in the genome of E. coli with synonymous TAA codons, thereby demonstrating that massive substitutions can be combined into higher-order strains without lethal effects.[39] Following the success of this genome-wide codon replacement, the authors went on to achieve the reprogramming of 13 codons throughout the genome, directly affecting 42 essential genes.[40]
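At the sequence level, the TAG-to-TAA replacement is a simple codon-wise substitution, as in the toy function below (this is only the string manipulation; the actual genome-wide recoding of E. coli required extensive strain engineering):

    def recode_stop_codons(cds, old="TAG", new="TAA"):
        # Walk the coding sequence codon by codon and swap the chosen stop codon,
        # freeing it for reassignment to a non-canonical amino acid.
        codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
        return "".join(new if codon == old else codon for codon in codons)

    print(recode_stop_codons("ATGGCTGAATAG"))   # -> ATGGCTGAATAA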
An even more radical change to the genetic code is the change from a triplet codon to quadruplet and even pentaplet codons, pioneered by Sisido in cell-free systems[41] and by Schultz in bacteria.[42] Finally, non-natural base pairs can be used to introduce novel amino acids into proteins.[43]

Directed evolution

The goal of substituting DNA with XNA may also be reached by another route, namely by engineering the environment instead of the genetic modules. This approach has been successfully demonstrated by Marlière and Mutzel with the production of an E. coli strain whose DNA is composed of the standard A, C and G nucleotides but has the synthetic thymine analogue 5-chlorouracil instead of thymine (T) at the corresponding positions of the sequence. These cells are thus dependent on externally supplied 5-chlorouracil for growth, but otherwise they look and behave like normal E. coli. These cells, however, are not yet fully auxotrophic for the xeno-base, since they still grow on thymine when it is supplied to the medium.[44]

Biosafety

Xenobiological systems are designed to be orthogonal to natural biological systems. A (still hypothetical) organism that uses XNA,[45] different base pairs and polymerases, and an altered genetic code will hardly be able to interact with natural forms of life at the genetic level. Thus, such xenobiological organisms represent a genetic enclave that cannot exchange information with natural cells.[46] Altering the genetic machinery of the cell leads to semantic containment. In analogy to information processing in IT, this safety concept is termed a "genetic firewall".[4][47] The concept of the genetic firewall seems to overcome a number of limitations of previous safety systems.[48][49] A first piece of experimental evidence for the theoretical concept of the genetic firewall was obtained in 2013 with the construction of a genomically recoded organism (GRO). In this GRO, all known UAG stop codons in E. coli were replaced by UAA codons, which allowed for the deletion of release factor 1 and the reassignment of the UAG translation function. The GRO exhibited increased resistance to T7 bacteriophage, showing that alternative genetic codes do reduce genetic compatibility.[50] This GRO, however, is still very similar to its natural "parent" and cannot be regarded as a genetic firewall. The possibility of reassigning the function of a large number of triplets opens the prospect of strains that combine XNA, novel base pairs, new genetic codes, etc., and that cannot exchange any information with the natural biological world. Regardless of the changes leading to a semantic containment mechanism in new organisms, any novel biochemical system still has to undergo toxicological screening. XNA, novel proteins, etc. might represent novel toxins or have an allergenic potential that needs to be assessed.[51][52]

Governance and Regulatory issues

Xenobiology might challenge the regulatory framework, as current laws and directives deal with genetically modified organisms and do not directly mention chemically or genomically modified organisms. Taking into account that real xenobiological organisms are not expected in the next few years, policy makers do have some time to prepare for this upcoming governance challenge. Since 2012, policy advisers in the US,[53] four national biosafety boards in Europe,[54] and the European Molecular Biology Organisation[55] have picked up the topic as a developing governance issue.

Astrochemistry


From Wikipedia, the free encyclopedia

Astrochemistry is the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution, and fate of molecular gas clouds are of special interest, because it is from these clouds that solar systems form.

Spectroscopy

One particularly important experimental tool in astrochemistry is spectroscopy, the use of telescopes to measure the absorption and emission of light from molecules and atoms in various environments. By comparing astronomical observations with laboratory measurements, astrochemists can infer the elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. This is possible because ions, atoms, and molecules have characteristic spectra: that is, the absorption and emission of certain wavelengths (colors) of light, often not visible to the human eye. However, these measurements have limitations, with various types of radiation (radio, infrared, visible, ultraviolet etc.) able to detect only certain types of species, depending on the chemical properties of the molecules. Interstellar formaldehyde was the first organic molecule detected in the interstellar medium.
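In practice this comparison amounts to matching observed line frequencies against laboratory catalogues (databases such as CDMS or Splatalogue); a minimal sketch with a hand-made toy catalogue and approximate rest frequencies, ignoring Doppler shifts and line shapes:

    # Toy line identification: observed frequencies (GHz) vs. a tiny catalogue.
    catalogue = {
        "CO J=1-0": 115.271,
        "HCN J=1-0": 88.632,
        "H2CO 2(1,2)-1(1,1)": 140.840,
    }
    observed = [88.631, 115.272, 230.538]
    tolerance = 0.005  # GHz

    for freq in observed:
        matches = [name for name, rest in catalogue.items()
                   if abs(freq - rest) < tolerance]
        print(freq, matches if matches else "unidentified")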

Perhaps the most powerful technique for detection of individual chemical species is radio astronomy, which has resulted in the detection of over a hundred interstellar species, including radicals and ions, and organic (i.e. carbon-based) compounds, such as alcohols, acids, aldehydes, and ketones. One of the most abundant interstellar molecules, and among the easiest to detect with radio waves (due to its strong electric dipole moment), is CO (carbon monoxide). In fact, CO is such a common interstellar molecule that it is used to map out molecular regions.[1] The radio observation of perhaps greatest human interest is the claim of interstellar glycine,[2] the simplest amino acid, but with considerable accompanying controversy.[3] One of the reasons why this detection was controversial is that although radio (and some other methods like rotational spectroscopy) are good for the identification of simple species with large dipole moments, they are less sensitive to more complex molecules, even something relatively small like amino acids.

Moreover, such methods are completely blind to molecules that have no dipole. For example, by far the most common molecule in the universe is H2 (hydrogen gas), but it does not have a dipole moment, so it is invisible to radio telescopes. Moreover, such methods cannot detect species that are not in the gas-phase. Since dense molecular clouds are very cold (10-50 K = -263 to -223 C = -440 to -370 F), most molecules in them (other than hydrogen) are frozen, i.e. solid. Instead, hydrogen and these other molecules are detected using other wavelengths of light. Hydrogen is easily detected in the ultraviolet (UV) and visible ranges from its absorption and emission of light (the hydrogen line).
Moreover, most organic compounds absorb and emit light in the infrared (IR) so, for example, the detection of methane in the atmosphere of Mars[4] was achieved using an IR ground-based telescope, NASA's 3-meter Infrared Telescope Facility atop Mauna Kea, Hawaii. NASA also has an airborne IR telescope called SOFIA and an IR space telescope called Spitzer. Somewhat related to the recent detection of methane in the atmosphere of Mars, scientists reported, in June 2012, that measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars.[5][6] According to the scientists, "...low H2/CH4 ratios (less than approximately 40) indicate that life is likely present and active."[5] Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres.[7][8]

Infrared astronomy has also revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polyaromatic hydrocarbons, often abbreviated PAHs or PACs. These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium and isotopes of carbon, nitrogen, and oxygen that are very rare on earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying, carbon-rich red giant stars).

Infrared astronomy has also been used to assess the composition of solid materials in the interstellar medium, including silicates, kerogen-like carbon-rich solids, and ices. This is because unlike visible light, which is scattered or absorbed by solid particles, the IR radiation can pass through the microscopic interstellar particles, but in the process there are absorptions at certain wavelengths that are characteristic of the composition of the grains.[9] As above with radio astronomy, there are certain limitations, e.g. N2 is difficult to detect by either IR or radio astronomy.

Such IR observations have determined that in dense clouds (where there are enough particles to attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-temperature chemistry to occur. Since hydrogen is by far the most abundant molecule in the universe, the initial chemistry of these ices is determined by the chemistry of the hydrogen. If the hydrogen is atomic, then the H atoms react with available O, C and N atoms, producing "reduced" species like H2O, CH4, and NH3. However, if the hydrogen is molecular and thus not reactive, this permits the heavier atoms to react or remain bonded together, producing CO, CO2, CN, etc. These mixed-molecular ices are exposed to ultraviolet radiation and cosmic rays, which results in complex radiation-driven chemistry.[9] Lab experiments on the photochemistry of simple interstellar ices have produced amino acids.[10] The similarity between interstellar and cometary ices (as well as comparisons of gas-phase compounds) has been invoked as an indicator of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission, but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula.

Research

Research is progressing on the way in which interstellar and circumstellar molecules form and interact, and this research could have a profound impact on our understanding of the suite of molecules that were present in the molecular cloud when our solar system formed, which contributed to the rich carbon chemistry of comets and asteroids and hence the meteorites and interstellar dust particles which fall to the Earth by the ton every day.

The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on Earth can be highly abundant in space, for example the H3+ ion. Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, the consequences for stellar evolution, as well as stellar 'generations'. Indeed, the nuclear reactions in stars produce every naturally occurring chemical element. As the stellar 'generations' advance, the mass of the newly formed elements increases. A first-generation star uses elemental hydrogen (H) as a fuel source and produces helium (He). Hydrogen is the most abundant element, and it is the basic building block for all other elements as its nucleus has only one proton. Gravitational pull toward the center of a star creates massive amounts of heat and pressure, which cause nuclear fusion. Through this process of merging nuclear mass, heavier elements are formed. Carbon, oxygen and silicon are examples of elements that form in stellar fusion. After many stellar generations, very heavy elements are formed (e.g. iron and lead).

In October 2011, scientists reported that cosmic dust contains complex organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars.[11][12][13]

On August 29, 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth.[14][15] Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.[16]

In September, 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics - "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively".[17][18] Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."[17][18]

In February 2014, NASA announced the creation of an improved spectral database [19] for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.[20]

On August 11, 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).[21][22]

To study the resources of chemical elements and molecules in the universe, Professor M. Yu. Dolomatov has developed a mathematical model of the distribution of molecular composition in the interstellar medium based on thermodynamic potentials, using methods from probability theory, mathematical and physical statistics, and equilibrium thermodynamics.[23][24][25] Based on this model, the resources of life-related molecules, amino acids and nitrogenous bases in the interstellar medium have been estimated, and the possibility of the formation of oil hydrocarbon molecules has been shown. These calculations support Sokolov's and Hoyle's hypotheses about the possibility of oil hydrocarbon formation in space, and the results are consistent with data from astrophysical observations and space research.
