Thursday, March 11, 2021

Biotechnology

From Wikipedia, the free encyclopedia

Insulin crystals

Biotechnology is a broad area of biology, involving the use of living systems and organisms to develop or make products. Depending on the tools and applications, it often overlaps with related scientific fields. In the late 20th and early 21st centuries, biotechnology has expanded to include new and diverse sciences, such as genomics, recombinant gene techniques, applied immunology, and development of pharmaceutical therapies and diagnostic tests. The term biotechnology was first used by Karl Ereky in 1919, meaning the production of products from raw materials with the aid of living organisms.

Definition

The broad concept of biotechnology encompasses a wide range of procedures for modifying living organisms according to human purposes, going back to the domestication of animals, the cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learn about the science of life and to improve the value of materials and organisms such as pharmaceuticals, crops, and livestock. Per the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services. Biotechnology draws on the basic biological sciences (e.g. molecular biology, biochemistry, cell biology, embryology, genetics, microbiology) and conversely provides methods to support and perform basic research in biology.

Biotechnology can also be described as laboratory research and development, aided by bioinformatics, into the exploration, extraction, exploitation, and production of products from living organisms and other sources of biomass by means of biochemical engineering. High value-added products can be planned (reproduced by biosynthesis, for example), forecasted, formulated, developed, manufactured, and marketed with two broader aims: sustaining operations (recovering the large initial investment in R & D) and securing durable patent rights (exclusive rights to sales, granted only after national and international approval based on animal and human trials, especially in the pharmaceutical branch of biotechnology, to guard against undetected side effects or safety concerns). In short, the utilization of biological processes, organisms, or systems to produce products that are anticipated to improve human lives is termed biotechnology.

By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals. Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering.

History

Brewing was an early application of biotechnology.

Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise.

Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best suited crops, having the highest yields, to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants — one of the first forms of biotechnology.

These processes also underpinned the early fermentation of beer. Brewing was introduced in early Mesopotamia, Egypt, China, and India, and still uses the same basic biological methods. In brewing, malted grains (containing enzymes) convert starch from grains into sugar, and specific yeasts are then added to produce beer; in this process, carbohydrates in the grains break down into alcohols such as ethanol. Later, other cultures developed lactic acid fermentation, which produced other preserved foods, such as soy sauce. Fermentation was also used in this time period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is arguably the first use of biotechnology to convert one food source into another.
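
The chemistry of this conversion can be sketched with a back-of-envelope mass balance. The stoichiometry (one glucose to two ethanol and two carbon dioxide) is standard; the yield function below is an illustrative calculation, not a brewing model:

```python
# Back-of-envelope mass balance for ethanol fermentation:
#   C6H12O6 -> 2 C2H5OH + 2 CO2
GLUCOSE_MOLAR_MASS = 180.16   # g/mol
ETHANOL_MOLAR_MASS = 46.07    # g/mol

def theoretical_ethanol_yield(glucose_grams):
    """Maximum ethanol (g) obtainable from a given mass of glucose,
    assuming complete conversion (real fermentations yield less)."""
    moles_glucose = glucose_grams / GLUCOSE_MOLAR_MASS
    return 2 * moles_glucose * ETHANOL_MOLAR_MASS

# 100 g of glucose yields at most ~51 g of ethanol.
print(round(theoretical_ethanol_yield(100), 1))  # 51.1
```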

Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations of how breeding could change species over time. These accounts contributed to Darwin's theory of natural selection.

For thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.

In the early twentieth century, scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process: fermenting corn starch with Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.

Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the antibacterial effects of the mold Penicillium. His work led Howard Florey, Ernst Boris Chain, and Norman Heatley to purify the antibiotic compound formed by the mold, known today as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans.

The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled in Diamond v. Chakrabarty that a genetically modified microorganism could be patented. Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the genus Pseudomonas) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium.)

The MOSFET (metal-oxide-semiconductor field-effect transistor) was invented by Mohamed M. Atalla and Dawon Kahng in 1959. Leland C. Clark and Champ Lyons invented the first biosensor in 1962. Biosensor MOSFETs were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters. The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. It is a special type of MOSFET, where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.

By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.

A factor influencing the biotechnology sector's success is improved intellectual property rights legislation—and enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products to cope with an ageing, and ailing, U.S. population.

Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing genetically modified seeds that resist pests and drought. By increasing farm productivity, biotechnology boosts biofuel production.

Examples

A rose plant that began as cells grown in a tissue culture

Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of microorganisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.

A series of derived terms have been coined to identify several branches of biotechnology, for example:

  • Bioinformatics (also called "gold biotechnology") is an interdisciplinary field that addresses biological problems using computational techniques, and makes the rapid organization as well as analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale." Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector.
  • Blue biotechnology is based on the exploitation of sea resources to create products and industrial applications. This branch of biotechnology is used mainly by the refining and combustion industries, principally in the production of bio-oils from photosynthetic micro-algae.
  • Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby removing the need for external application of pesticides, as with Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. Green biotechnology is commonly considered the next phase of the green revolution: a platform for eradicating world hunger through technologies that produce plants that are more fertile and more resistant to biotic and abiotic stress, and that ensure the application of environmentally friendly fertilizers and the use of biopesticides. Its main focus is the development of agriculture. Some uses of green biotechnology also involve microorganisms to clean and reduce waste.
  • Red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and in health preservation. This branch involves the production of vaccines and antibiotics, regenerative therapies, the creation of artificial organs, and new diagnostics of diseases, as well as the development of hormones, stem cells, antibodies, siRNA, and diagnostic tests.
  • White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than the traditional processes used to produce industrial goods.
  • "Yellow biotechnology" refers to the use of biotechnology in food production, for example in making wine, cheese, and beer by fermentation. It has also been used to refer to biotechnology applied to insects. This includes biotechnology-based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches.
  • Gray biotechnology is dedicated to environmental applications, focused on the maintenance of biodiversity and the removal of pollutants.
  • Brown biotechnology is related to the management of arid lands and deserts. One application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources.
  • Violet biotechnology is related to the legal, ethical, and philosophical issues surrounding biotechnology.
  • Dark biotechnology is the color associated with bioterrorism, biological weapons, and biowarfare, which use microorganisms and toxins to cause disease and death in humans, livestock, and crops.
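
As a concrete illustration of the bioinformatics branch listed above, here is a minimal sketch of a basic sequence statistic, GC content, which is routinely computed in genome analysis. The function and the example sequence are illustrative only:

```python
# Minimal bioinformatics sketch: GC content, the fraction of G and C
# bases in a DNA sequence, a basic statistic in genome analysis.
def gc_content(seq):
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGCGCGTTA"))  # 0.5
```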

Medicine

In medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discoveries and production, pharmacogenomics, and genetic testing (or genetic screening).

DNA microarray chip – some can do as many as a million blood tests at once

Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs. Researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity. The purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects. Such approaches promise the advent of "personalized medicine", in which drugs and drug combinations are optimized for each individual's unique genetic makeup.
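
The core idea, correlating genotype with drug response, can be sketched with made-up data. The SNP genotypes and response scores below are entirely hypothetical; real pharmacogenomics studies use genome-wide data and rigorous statistics:

```python
# Illustrative (made-up) pharmacogenomics records: drug response scores
# grouped by genotype at a single hypothetical SNP.
patients = [
    ("AA", 0.82), ("AA", 0.78), ("AG", 0.55),
    ("AG", 0.60), ("GG", 0.30), ("GG", 0.35),
]

def mean_response_by_genotype(records):
    """Average drug response for each genotype seen in the records."""
    groups = {}
    for genotype, response in records:
        groups.setdefault(genotype, []).append(response)
    return {g: sum(v) / len(v) for g, v in groups.items()}

# In this toy data, AA carriers respond better than GG carriers.
print(mean_response_by_genotype(patients))
```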

Computer-generated image of insulin hexamers highlighting the threefold symmetry, the zinc ions holding it together, and the histidine residues involved in zinc binding

Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology: biopharmaceuticals. Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreases of abattoir animals (cattle or pigs). The genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology; as our scientific knowledge of normal and disease biology has increased, so has our ability to develop new medicines to treat previously untreatable diseases.
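
The principle behind recombinant insulin production, that a DNA coding sequence inserted into a bacterium specifies a protein, can be sketched as follows. Only a tiny subset of the standard codon table is included, and the toy reading frame merely mirrors the first residues of the insulin B chain (Phe-Val-Asn-Gln-His):

```python
# Sketch of how a DNA coding sequence specifies a protein, the principle
# behind producing human insulin in E. coli. Only a small subset of the
# standard codon table is included, for illustration.
CODON_TABLE = {
    "ATG": "M", "TTT": "F", "GTT": "V", "AAC": "N",
    "CAA": "Q", "CAC": "H", "TAA": "*",  # '*' = stop codon
}

def translate(dna):
    """Translate a DNA coding sequence codon by codon until a stop."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

# Toy open reading frame echoing the start of the insulin B chain.
print(translate("ATGTTTGTTAACCAACACTAA"))  # MFVNQH
```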

Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins. Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use. Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling.

Agriculture

Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases, the main aim is to introduce a new trait that does not occur naturally in the species. Biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. Furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. For example, FARM Illinois (Food and Agriculture RoadMap for Illinois) is an initiative to develop and coordinate farmers, industry, research institutions, government, and nonprofits in Illinois in pursuit of food and agriculture innovation. In addition, the Illinois Biotechnology Industry Organization (iBIO) is a life sciences industry association with more than 500 life sciences companies, universities, academic institutions, service providers and others as members. The association describes its members as "dedicated to making Illinois and the surrounding Midwest one of the world’s top life sciences centers."

Examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments (e.g. resistance to a herbicide), reduction of spoilage, or improving the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation.

Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops increased by a factor of 94, from 17,000 square kilometers (4,200,000 acres) to 1,600,000 km2 (395 million acres). 10% of the world's crop lands were planted with GM crops in 2010. As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160 million hectares) in 29 countries, including the US, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, the Philippines, Myanmar, Burkina Faso, Mexico and Spain.
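
The growth factor quoted above can be sanity-checked directly from the area figures:

```python
# Quick consistency check of the GM-crop growth figures quoted above.
area_1996_km2 = 17_000
area_2011_km2 = 1_600_000

growth_factor = area_2011_km2 / area_1996_km2
print(round(growth_factor))  # 94
```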

Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as a far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening tomato. To date, most genetic modification of foods has primarily focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cottonseed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been experimentally developed; in November 2013 none were available on the market, but in 2015 the FDA approved the first GM salmon for commercial production and consumption.

There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.

GM crops also provide a number of ecological benefits, if not used in excess. However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law.

Industrial

Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. It includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. In recent decades, significant progress has been made in creating genetically modified organisms (GMOs) that enhance the diversity of applications and the economic viability of industrial biotechnology. By using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical-based economy.

Environmental

The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g. using bioremediation to clean up an oil spill or hazardous chemical leak) and the adverse effects stemming from biotechnological enterprises (e.g. the flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively. Cleaning up environmental wastes is an example of an application of environmental biotechnology, whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology.

Regulation

The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMOs), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for cultivation of GM crops differ.

Learning

In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (National Institutes of Health) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support is provided for two or three years during the course of their Ph.D. thesis work. Nineteen institutions offer NIGMS supported BTPs. Biotechnology training is also offered at the undergraduate level and in community colleges.

Computational biology

From Wikipedia, the free encyclopedia

Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modelling and computational simulation techniques to the study of biological, ecological, behavioural, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science, and evolution.

Computational biology is different from biological computing, which is a subfield of computer engineering using bioengineering and biology to build computers.

Introduction

Computational biology, which includes many aspects of bioinformatics, is the science of using biological data to develop algorithms or models in order to understand biological systems and relationships. Until recently, biologists did not have access to very large amounts of data. This data has now become commonplace, particularly in molecular biology and genomics. Researchers were able to develop analytical methods for interpreting biological information, but were unable to share them quickly among colleagues.

Bioinformatics began to develop in the early 1970s. It was considered the science of analyzing informatics processes of various biological systems. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data to develop other fields pushed biological researchers to revisit the idea of using computers to evaluate and compare large data sets. By 1982, information was being shared among researchers through the use of punch cards. The amount of data being shared began to grow exponentially by the end of the 1980s. This required the development of new computational methods in order to quickly analyze and interpret relevant information.

Since the late 1990s, computational biology has become an important part of developing emerging technologies for the field of biology. The terms computational biology and evolutionary computation have a similar name, but are not to be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data. It instead creates algorithms based on the ideas of evolution across species. Sometimes referred to as genetic algorithms, the research of this field can be applied to computational biology. While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it.
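
The flavor of evolutionary computation mentioned above can be conveyed with a minimal genetic algorithm on the classic OneMax problem (evolve bit strings toward all 1s). Real applications use richer encodings and fitness functions; this is only a sketch of the selection-and-mutation loop:

```python
import random

# Minimal genetic algorithm on OneMax: fitness is the number of 1-bits,
# selection keeps the fitter half, and children are mutated copies.
random.seed(0)

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ (random.random() < rate) for b in bits]

def evolve(pop_size=30, length=20, generations=60):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]           # selection
        children = [mutate(random.choice(parents))     # variation
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # close to 20, the optimum
```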

Computational biology has been used to help sequence the human genome, create accurate models of the human brain, and assist in modeling biological systems.

Subfields

Computational anatomy

Computational anatomy is a discipline focusing on the study of anatomical shape and form at the visible or gross anatomical scale of morphology. It involves the development and application of computational, mathematical and data-analytical methods for modeling and simulation of biological structures. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging (MRI), computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morphome scale in 3D.

The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations. The diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow from one anatomical configuration into another. It relates to shape statistics and morphometrics, with the distinction that diffeomorphisms are used to map coordinate systems, whose study is known as diffeomorphometry.

Computational biomodeling

Computational biomodeling is a field concerned with building computer models of biological systems. It aims to develop and use visual simulations in order to assess the complexity of biological systems, accomplished through the use of specialized algorithms and visualization software. These models allow prediction of how systems will react under different environments, which is useful for determining whether a system is robust. Robust biological systems "maintain their state and functions against external and internal perturbations", which is essential for survival. Computational biomodeling generates a large archive of such data, allowing for analysis by multiple users. While current techniques focus on small biological systems, researchers are working on approaches that will allow larger networks to be analyzed and modeled. A majority of researchers believe that this will be essential in developing modern medical approaches to creating new drugs and gene therapy. A useful modelling approach is to use Petri nets via tools such as esyN.
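
The Petri net approach can be sketched in a few lines. This is a generic token-firing simulation, not the esyN API; the toy model is a single enzymatic pathway, E + S -> ES -> E + P, run until no transition can fire:

```python
# Minimal Petri net sketch for biomodeling: places hold token counts,
# and a transition fires when all of its input places have enough tokens.
places = {"E": 1, "S": 3, "ES": 0, "P": 0}
transitions = {
    "bind":    ({"E": 1, "S": 1}, {"ES": 1}),          # E + S -> ES
    "convert": ({"ES": 1},        {"E": 1, "P": 1}),   # ES -> E + P
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(places[p] >= n for p, n in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        places[p] -= n
    for p, n in outputs.items():
        places[p] += n

# Fire transitions until none is enabled: all substrate becomes product.
while any(enabled(t) for t in transitions):
    fire(next(t for t in transitions if enabled(t)))

print(places)  # {'E': 1, 'S': 0, 'ES': 0, 'P': 3}
```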

Computational genomics

A partially sequenced genome.

Computational genomics is a field within genomics which studies the genomes of cells and organisms. It is sometimes referred to as Computational and Statistical Genetics and encompasses much of bioinformatics. The Human Genome Project is one example of computational genomics; the project set out to sequence the entire human genome into a set of data. Once fully realized, such data could allow doctors to analyze the genome of an individual patient, opening the possibility of personalized medicine: prescribing treatments based on an individual's pre-existing genetic patterns. The project has inspired many similar programs, and researchers are looking to sequence the genomes of animals, plants, bacteria, and all other types of life.

One of the main ways that genomes are compared is by sequence homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a common ancestor. Research suggests that between 80 and 90% of genes in newly sequenced prokaryotic genomes can be identified this way.
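
A minimal sketch of the idea behind homology-based gene identification is to score how similar two sequences are; the percent-identity function below assumes the sequences are already aligned, and the example sequences are illustrative:

```python
# Simple percent-identity comparison between two aligned sequences,
# the basic idea behind identifying genes by sequence homology.
def percent_identity(a, b):
    """Share of positions with the same base; the two sequences must be
    pre-aligned and of equal length."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

print(round(percent_identity("ATGGCTA", "ATGACTA"), 1))  # 85.7
```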

This field is still in development. A largely unexplored area within computational genomics is the analysis of intergenic regions, which studies show make up roughly 97% of the human genome. Researchers in the field are working to understand the functions of these non-coding regions through the development of computational and statistical methods and via large consortium projects such as ENCODE (the Encyclopedia of DNA Elements) and the Roadmap Epigenomics Project.

Computational neuroscience

Computational neuroscience is the study of brain function in terms of the information processing properties of the structures that make up the nervous system. It is a subset of the field of neuroscience, and looks to analyze brain data to create practical applications. It looks to model the brain in order to examine specific aspects of the neurological system. Various types of models of the brain include:

  • Realistic Brain Models: These models look to represent every aspect of the brain, including as much detail at the cellular level as possible. Realistic models provide the most information about the brain, but also have the largest margin for error. More variables in a brain model create the possibility for more error to occur. These models do not account for parts of the cellular structure that scientists do not know about. Realistic brain models are the most computationally heavy and the most expensive to implement.
  • Simplifying Brain Models: These models look to limit the scope of a model in order to assess a specific physical property of the neurological system. This allows for the intensive computational problems to be solved, and reduces the amount of potential error from a realistic brain model.

It is the work of computational neuroscientists to improve the algorithms and data structures currently used to increase the speed of such calculations.
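A simplifying model in miniature is the leaky integrate-and-fire neuron: a single differential equation that ignores cellular detail entirely and keeps only the membrane voltage. The parameter values below are illustrative, not fitted to real data.

```python
# A "simplifying" brain model in miniature: a leaky integrate-and-fire neuron.
# Parameter values are illustrative, not fitted to any real neuron.

def simulate_lif(current, steps, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Euler-integrate dV/dt = (v_rest - V + current) / tau;
    count a spike and reset whenever V crosses threshold."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        v += dt * (v_rest - v + current) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif(current=20.0, steps=1000))  # constant drive -> regular spiking
print(simulate_lif(current=0.0, steps=1000))   # no drive -> no spikes
```

Trading cellular realism for a one-variable model is exactly the simplification/realism trade-off described above: the model is cheap to integrate and has few parameters that can be wrong, at the cost of ignoring everything below the level of the membrane voltage.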

Computational pharmacology

Computational pharmacology (from a computational biology perspective) is "the study of the effects of genomic data to find links between specific genotypes and diseases and then screening drug data". The pharmaceutical industry requires a shift in methods for analyzing drug data. Pharmacologists once relied on Microsoft Excel to compare chemical and genomic data related to the effectiveness of drugs, but the industry has reached what is referred to as the Excel barricade, arising from the limited number of cells accessible in a spreadsheet. This development led to the need for computational pharmacology: scientists and researchers develop computational methods to analyze these massive data sets, allowing efficient comparison between the notable data points and the development of more accurate drugs.
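Moving past the spreadsheet limit mostly means streaming: aggregating records one at a time instead of holding the whole table in memory or in a sheet. The sketch below uses synthetic data and hypothetical field names; an Excel worksheet caps out at 1,048,576 rows, while a one-pass aggregation has no such limit.

```python
# Sketch of analysis past spreadsheet limits: stream millions of
# (drug, response) records and aggregate without loading them all at once.
# The data here are synthetic and the field names hypothetical.
from collections import defaultdict

def mean_response_by_drug(records):
    """One pass over an iterable of (drug, response) pairs."""
    total = defaultdict(float)
    count = defaultdict(int)
    for drug, response in records:
        total[drug] += response
        count[drug] += 1
    return {d: total[d] / count[d] for d in total}

# Synthetic stream larger than Excel's ~1,048,576-row sheet limit.
stream = ((f"drug_{i % 3}", float(i % 7)) for i in range(2_000_000))
print(mean_response_by_drug(stream))
```

Because the generator is consumed lazily, memory use stays constant no matter how many records flow through, which is the property spreadsheets lack.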

Analysts project that as major medications lose patent protection, computational biology will be necessary to develop the drugs that replace them on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take post-doctoral positions, a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs.

Computational evolutionary biology

Computational biology has assisted the field of evolutionary biology in many capacities.

Cancer computational biology

Cancer computational biology is a field that aims to determine the future mutations in cancer through an algorithmic approach to analyzing data. Research in this field has led to the use of high-throughput measurement. High throughput measurement allows for the gathering of millions of data points using robotics and other sensing devices. This data is collected from DNA, RNA, and other biological structures. Areas of focus include determining the characteristics of tumors, analyzing molecules that are deterministic in causing cancer, and understanding how the human genome relates to the causation of tumors and cancer.

Computational neuropsychiatry

Computational neuropsychiatry is an emerging field that uses mathematical and computer-assisted modeling of the brain mechanisms involved in mental disorders. Several initiatives have demonstrated that computational modeling is an important contribution to understanding the neuronal circuits that could generate mental functions and dysfunctions.

Software and tools

Computational biologists use a wide range of software, from command-line programs to graphical and web-based applications.

Open source software

Open source software provides a platform to develop computational biological methods. Specifically, open source means that every person and/or entity can access and benefit from software developed in research. PLOS cites four main reasons for the use of open source software:

  • Reproducibility: This allows for researchers to use the exact methods used to calculate the relations between biological data.
  • Faster Development: developers and researchers do not have to reinvent existing code for minor tasks. Instead they can use pre-existing programs to save time on the development and implementation of larger projects.
  • Increased quality: Having input from multiple researchers studying the same topic provides a layer of assurance against errors in the code.
  • Long-term availability: Open source programs are not tied to any businesses or patents. This allows for them to be posted to multiple web pages and ensure that they are available in the future.

Conferences

There are several large conferences that are concerned with computational biology. Some notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB) and Research in Computational Molecular Biology (RECOMB).

Journals

There are numerous journals dedicated to computational biology. Some notable examples include the Journal of Computational Biology and PLOS Computational Biology. PLOS Computational Biology is a peer-reviewed, open access journal that publishes many notable research projects in the field. It provides reviews of software, tutorials for open source software, and information on upcoming computational biology conferences. Its articles may be openly used provided the author is cited.

Related fields

Computational biology, bioinformatics and mathematical biology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science. The NIH describes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data.

Specifically, the NIH defines

Computational biology: The development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems.

Bioinformatics: Research, development, or application of computational tools and approaches for expanding the use of biological, medical, behavioral or health data, including those to acquire, store, organize, archive, analyze, or visualize such data.

While each field is distinct, there may be significant overlap at their interface.

 

Superorganism

From Wikipedia, the free encyclopedia

A mound built by cathedral termites
 
A coral colony

A superorganism or supraorganism is a group of synergetically interacting organisms of the same species. A community of synergetically interacting organisms of different species is called a holobiont.

Concept

The term superorganism is used most often to describe a social unit of eusocial animals, where division of labour is highly specialised and where individuals are not able to survive by themselves for extended periods. Ants are the best-known example of such a superorganism. A superorganism can be defined as "a collection of agents which can act in concert to produce phenomena governed by the collective", phenomena being any activity "the hive wants" such as ants collecting food and avoiding predators, or bees choosing a new nest site. Superorganisms tend to exhibit homeostasis, power law scaling, persistent disequilibrium and emergent behaviours.

The term was coined in 1789 by James Hutton, the "father of geology", to refer to Earth in the context of geophysiology. The Gaia hypothesis of James Lovelock, and Lynn Margulis as well as the work of Hutton, Vladimir Vernadsky and Guy Murchie, have suggested that the biosphere itself can be considered a superorganism, although this has been disputed. This view relates to systems theory and the dynamics of a complex system.

The concept of a superorganism raises the question of what is to be considered an individual. Toby Tyrrell's critique of the Gaia hypothesis argues that Earth's climate system does not resemble an animal's physiological system. Planetary biospheres are not tightly regulated in the same way that animal bodies are: "planets, unlike animals, are not products of evolution. Therefore we are entitled to be highly skeptical (or even outright dismissive) about whether to expect something akin to a 'superorganism'". He concludes that "the superorganism analogy is unwarranted".

Some scientists have suggested that individual human beings can be thought of as "superorganisms", as a typical human digestive system contains 10¹³ to 10¹⁴ microorganisms whose collective genome, the microbiome studied by the Human Microbiome Project, contains at least 100 times as many genes as the human genome itself. Salvucci wrote that the superorganism is another level of integration observed in nature, alongside the genomic, organismal and ecological levels. The genomic structure of organisms reveals the fundamental role of integration and gene shuffling along evolution.

In social theory

The nineteenth century thinker Herbert Spencer coined the term super-organic to focus on social organization (the first chapter of his Principles of Sociology is entitled "Super-organic Evolution"), though this was apparently a distinction between the organic and the social, not an identity: Spencer explored the holistic nature of society as a social organism while distinguishing the ways in which society did not behave like an organism. For Spencer, the super-organic was an emergent property of interacting organisms, that is, human beings. And, as has been argued by D. C. Phillips, there is a "difference between emergence and reductionism".

The economist Carl Menger expanded upon the evolutionary nature of much social growth, but without ever abandoning methodological individualism. Many social institutions arose, Menger argued, not as "the result of socially teleological causes, but the unintended result of innumerable efforts of economic subjects pursuing 'individual' interests".

Spencer and Menger both argued that because it is individuals who choose and act, any social whole should be considered less than an organism, though Menger insisted on this point more strongly. Spencer used the organistic idea to engage in extended analysis of social structure, conceding that it was primarily an analogy. So, for Spencer, the idea of the super-organic best designated a distinct level of social reality above that of biology and psychology, and not a one-to-one identity with an organism. Nevertheless, Spencer maintained that "every organism of appreciable size is a society", which has suggested to some that the issue may be terminological.

The term superorganic was adopted by the anthropologist Alfred L. Kroeber in 1917. Social aspects of the superorganism concept are analysed by Alan Marshall in his 2002 book "The Unity of Nature". Finally, recent work in social psychology has offered the superorganism metaphor as a unifying framework to understand diverse aspects of human sociality, such as religion, conformity, and social identity processes.

In cybernetics

Superorganisms are important in cybernetics, particularly biocybernetics. They are capable of so-called "distributed intelligence", a system composed of individual agents that have limited intelligence and information. These agents are able to pool resources so that they can complete goals beyond the reach of the individuals on their own. The existence of such behavior in organisms has many implications for military and management applications, and is being actively researched.

Superorganisms are also considered dependent upon cybernetic governance and processes. This is based on the idea that a biological system – in order to be effective – needs a sub-system of cybernetic communications and control. This is demonstrated in the way a mole rat colony uses functional synergy and cybernetic processes together.

Joel de Rosnay also introduced the concept of the "cybionte" to describe a cybernetic superorganism. This notion associates the superorganism with chaos theory, multimedia technology, and other new developments.

 

Human Microbiome Project

From Wikipedia, the free encyclopedia
 
Human Microbiome Project (HMP)
Owner: US National Institutes of Health
Established: 2007
Disestablished: 2016
Website: hmpdacc.org

The Human Microbiome Project (HMP) was a United States National Institutes of Health (NIH) research initiative to improve understanding of the microbial flora involved in human health and disease. Launched in 2007, the first phase (HMP1) focused on identifying and characterizing human microbial flora. The second phase, known as the Integrative Human Microbiome Project (iHMP) launched in 2014 with the aim of generating resources to characterize the microbiome and elucidating the roles of microbes in health and disease states. The program received $170 million in funding by the NIH Common Fund from 2007 to 2016.

Important components of the HMP were culture-independent methods of microbial community characterization, such as metagenomics (which provides a broad genetic perspective on a single microbial community), as well as extensive whole genome sequencing (which provides a "deep" genetic perspective on certain aspects of a given microbial community, i.e. of individual bacterial species). The latter served as reference genomic sequences — 3000 such sequences of individual bacterial isolates are currently planned — for comparison purposes during subsequent metagenomic analysis. The project also financed deep sequencing of bacterial 16S rRNA sequences amplified by polymerase chain reaction from human subjects.

Introduction

Depiction of prevalences of various classes of bacteria at selected sites on human skin

Prior to the HMP launch, it was often reported in popular media and scientific literature that there are about 10 times as many microbial cells and 100 times as many microbial genes in the human body as there are human cells; this figure was based on estimates that the human microbiome includes around 100 trillion bacterial cells and that an adult human typically has around 10 trillion human cells. In 2014 the American Academy of Microbiology published a FAQ that emphasized that the number of microbial cells and the number of human cells are both estimates, and noted that recent research had arrived at a new estimate of the number of human cells at around 37 trillion, meaning that the ratio of microbial to human cells is probably about 3:1. In 2016 another group published a new estimate of the ratio as being roughly 1:1 (1.3:1, with "an uncertainty of 25% and a variation of 53% over the population of standard 70 kg males").
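The revision of the ratio follows directly from the cell-count estimates quoted above, as a quick calculation shows (all figures are the rounded estimates from the text, not precise measurements):

```python
# Reproducing the ratio arithmetic from the estimates quoted above.
bacterial_cells = 100e12   # older estimate: ~100 trillion microbial cells
human_cells_old = 10e12    # older estimate: ~10 trillion human cells
human_cells_new = 37e12    # revised estimate: ~37 trillion human cells

print(f"old ratio     ~ {bacterial_cells / human_cells_old:.0f}:1")   # 10:1
print(f"revised ratio ~ {bacterial_cells / human_cells_new:.1f}:1")   # ~2.7:1, i.e. about 3:1
```

The later 1.3:1 figure came from revising the bacterial count downward as well, not only the human count.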

Despite the staggering number of microbes in and on the human body, little was known about their roles in human health and disease. Many of the organisms that make up the microbiome have not been successfully cultured, identified, or otherwise characterized. Organisms thought to be found in the human microbiome, however, may generally be categorized as bacteria, members of domain Archaea, yeasts, and single-celled eukaryotes as well as various helminth parasites and viruses, the latter including viruses that infect the cellular microbiome organisms (e.g., bacteriophages). The HMP set out to discover and characterize the human microbiome, emphasizing oral, skin, vaginal, gastrointestinal, and respiratory sites.

The HMP will address some of the most inspiring, vexing and fundamental scientific questions today. Importantly, it also has the potential to break down the artificial barriers between medical microbiology and environmental microbiology. It is hoped that the HMP will not only identify new ways to determine health and predisposition to diseases but also define the parameters needed to design, implement and monitor strategies for intentionally manipulating the human microbiota, to optimize its performance in the context of an individual's physiology.

The HMP has been described as "a logical conceptual and experimental extension of the Human Genome Project." In 2007 the HMP was listed on the NIH Roadmap for Medical Research as one of the New Pathways to Discovery. Organized characterization of the human microbiome is also being done internationally under the auspices of the International Human Microbiome Consortium. The Canadian Institutes of Health Research, through the CIHR Institute of Infection and Immunity, is leading the Canadian Microbiome Initiative to develop a coordinated and focused research effort to analyze and characterize the microbes that colonize the human body and their potential alteration during chronic disease states.

Contributing Institutions

The HMP involved participation from many research institutions, including Stanford University, the Broad Institute, Virginia Commonwealth University, Washington University, Northeastern University, MIT, the Baylor College of Medicine, and many others. Contributions included data evaluation, construction of reference sequence data sets, ethical and legal studies, technology development, and more.

Phase One (2007-2014)

The HMP1 included research efforts from many institutions. The HMP1 set the following goals:

  • Develop a reference set of microbial genome sequences and to perform preliminary characterization of the human microbiome
  • Explore the relationship between disease and changes in the human microbiome
  • Develop new technologies and tools for computational analysis
  • Establish a resource repository
  • Study the ethical, legal, and social implications of human microbiome research

Phase Two (2014-2016)

In 2014, the NIH launched the second phase of the project, known as the Integrative Human Microbiome Project (iHMP). The goal of the iHMP was to produce resources to create a complete characterization of the human microbiome, with a focus on understanding the presence of microbiota in health and disease states. The project mission, as stated by the NIH, was as follows:

The iHMP will create integrated longitudinal datasets of biological properties from both the microbiome and host from three different cohort studies of microbiome-associated conditions using multiple "omics" technologies.

The project encompassed three sub-projects carried out at multiple institutions. Study methods included 16S rRNA gene profiling, whole metagenome shotgun sequencing, whole genome sequencing, metatranscriptomics, metabolomics/lipidomics, and immunoproteomics. The key findings of the iHMP were published in 2019.

Pregnancy & Preterm Birth

The Vaginal Microbiome Consortium team at Virginia Commonwealth University led research on the Pregnancy & Preterm Birth project with a goal of understanding how the microbiome changes during the gestational period and influences the neonatal microbiome. The project was also concerned with the role of the microbiome in the occurrence of preterm births, which, according to the CDC, account for nearly 10% of all births and constitute the second leading cause of neonatal death. The project received $7.44 million in NIH funding.

Onset of Inflammatory Bowel Disease (IBD)

The Inflammatory Bowel Disease Multi'omics Data (IBDMDB) team was a multi-institution group of researchers focused on understanding how the gut microbiome changes longitudinally in adults and children suffering from IBD. IBD is an inflammatory autoimmune disorder that manifests as either Crohn's disease or ulcerative colitis and affects about one million Americans. Research participants included cohorts from Massachusetts General Hospital, Emory University Hospital/Cincinnati Children's Hospital, and Cedars-Sinai Medical Center.

Onset of Type 2 Diabetes (T2D)

Researchers from Stanford University and the Jackson Laboratory of Genomic Medicine worked together to perform a longitudinal analysis on the biological processes that occur in the microbiome of patients at risk for Type 2 Diabetes. T2D affects nearly 20 million Americans, with at least 79 million more considered pre-diabetic, and is partially characterized by marked shifts in the microbiome compared to healthy individuals. The project aimed to identify molecules and signaling pathways that play a role in the etiology of the disease.

Achievements

The impact to date of the HMP may be partially assessed by examination of research sponsored by the HMP. Over 650 peer-reviewed publications were listed on the HMP website from June 2009 to the end of 2017, and had been cited over 70,000 times. At this point the website was archived and is no longer updated, although datasets do continue to be available.

Major categories of work funded by HMP included:

  • Development of new database systems allowing efficient organization, storage, access, search and annotation of massive amounts of data. These include IMG, the Integrated Microbial Genomes database and comparative analysis system; IMG/M, a related system that integrates metagenome data sets with isolate microbial genomes from the IMG system; CharProtDB, a database of experimentally characterized protein annotations; and the Genomes OnLine Database (GOLD), for monitoring the status of genomic and metagenomic projects worldwide and their associated metadata.
  • Development of tools for comparative analysis that facilitate the recognition of common patterns, major themes and trends in complex data sets. These include RAPSearch2, a fast and memory-efficient protein similarity search tool for next-generation sequencing data; Boulder ALignment Editor (ALE), a web-based RNA alignment tool; WebMGA, a customizable web server for fast metagenomic sequence analysis; and DNACLUST, a tool for accurate and efficient clustering of phylogenetic marker genes.
  • Development of new methods and systems for assembly of massive sequence data sets. No single assembly algorithm addresses all the known problems of assembling short-length sequences, so next-generation assembly programs such as AMOS are modular, offering a wide range of tools for assembly. Novel algorithms have been developed for improving the quality and utility of draft genome sequences.
  • Assembly of a catalog of sequenced reference genomes of pure bacterial strains from multiple body sites, against which metagenomic results can be compared. The original goal of 600 genomes has been far surpassed; the current goal is for 3000 genomes to be in this reference catalog, sequenced to at least a high-quality draft stage. As of March 2012, 742 genomes have been cataloged.
  • Establishment of the Data Analysis and Coordination Center (DACC), which serves as the central repository for all HMP data.
  • Various studies exploring legal and ethical issues associated with whole genome sequencing research.

Developments funded by HMP included:

  • New predictive methods for identifying active transcription factor binding sites.
  • Identification, on the basis of bioinformatic evidence, of a widely distributed, ribosomally produced electron carrier precursor.
  • Time-lapse "moving pictures" of the human microbiome.
  • Identification of unique adaptations adopted by segmented filamentous bacteria (SFB) in their role as gut commensals. SFB are medically important because they stimulate T helper 17 cells, thought to play a key role in autoimmune disease.
  • Identification of factors distinguishing the microbiota of healthy and diseased gut.
  • Identification of a hitherto unrecognized dominant role of Verrucomicrobia in soil bacterial communities.
  • Identification of factors determining the virulence potential of Gardnerella vaginalis strains in vaginosis.
  • Identification of a link between oral microbiota and atherosclerosis.
  • Demonstration that pathogenic species of Neisseria involved in meningitis, sepsis, and sexually transmitted disease exchange virulence factors with commensal species.

Milestones

Reference database established

On 13 June 2012, a major milestone of the HMP was announced by the NIH director Francis Collins. The announcement was accompanied by a series of coordinated articles published the same day in Nature and several other journals, including the Public Library of Science (PLoS). By mapping the normal microbial make-up of healthy humans using genome sequencing techniques, the researchers of the HMP created a reference database and established the boundaries of normal microbial variation in humans.

From 242 healthy U.S. volunteers, more than 5,000 samples were collected from 15 (men) to 18 (women) body sites, such as the mouth, nose, skin, lower intestine (stool) and vagina. All the DNA, human and microbial, was analyzed with DNA sequencing machines. The microbial genome data were extracted by identifying the bacteria-specific ribosomal RNA, 16S rRNA. The researchers calculated that more than 10,000 microbial species occupy the human ecosystem, and they have identified 81–99% of the genera. In addition to establishing the human microbiome reference database, the HMP project also discovered several "surprises", which include:

  • Microbes contribute more genes responsible for human survival than humans' own genes. It is estimated that bacterial protein-coding genes are 360 times more abundant than human genes.
  • Microbial metabolic activities; for example, digestion of fats; are not always provided by the same bacterial species. The presence of the activities seems to matter more.
  • Components of the human microbiome change over time, affected by a patient's disease state and medication. However, the microbiome eventually returns to a state of equilibrium, even though the composition of bacterial types has changed.
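Species counts from 16S rRNA surveys like the one above are typically summarized with diversity measures. A common one is the Shannon index, sketched below on made-up read counts (the samples and numbers are purely illustrative):

```python
# Toy sketch of a diversity summary of the kind computed from 16S rRNA
# surveys: Shannon index H = -sum(p_i * ln p_i) over taxon abundances.
# The read counts below are made up for illustration.
import math

def shannon_index(counts):
    """Shannon diversity of a list of per-taxon read counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

gut_sample  = [500, 300, 150, 40, 10]   # hypothetical reads per genus
skin_sample = [900, 50, 30, 15, 5]      # more dominated by one genus

print(f"gut  H = {shannon_index(gut_sample):.2f}")
print(f"skin H = {shannon_index(skin_sample):.2f}")  # more uneven -> lower H
```

A community dominated by a single taxon scores lower than an even one, which is why shifts toward lower diversity (as reported for the vaginal microbiome in pregnancy) show up clearly in this index.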

Clinical application

Among the first clinical applications utilizing the HMP data, as reported in several PLoS papers, researchers found a shift to lower species diversity in the vaginal microbiome of pregnant women in preparation for birth, and a high viral DNA load in the nasal microbiome of children with unexplained fevers. Other studies using HMP data and techniques include the role of the microbiome in various diseases of the digestive tract, skin and reproductive organs, and in childhood disorders.

Pharmaceutical application

Pharmaceutical microbiologists have considered the implications of the HMP data in relation to the presence / absence of 'objectionable' microorganisms in non-sterile pharmaceutical products and in relation to the monitoring of microorganisms within the controlled environments in which products are manufactured. The latter also has implications for media selection and disinfectant efficacy studies.

Butane

From Wikipedia, the free encyclopedia ...