
Wednesday, June 20, 2018

Mathematical and theoretical biology

From Wikipedia, the free encyclopedia
Mathematical and theoretical biology is a branch of biology which employs theoretical analysis, mathematical models and abstractions of living organisms to investigate the principles that govern the structure, development and behavior of such systems, as opposed to experimental biology, which deals with conducting experiments to test and validate scientific theories.[1] The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side.[2] Theoretical biology focuses more on the development of theoretical principles for biology while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.[3][4]

Mathematical biology aims at the mathematical representation and modeling of biological processes, using techniques and tools of applied mathematics. It has both theoretical and practical applications in biological, biomedical and biotechnology research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. This requires precise mathematical models.

Mathematical biology employs many components of mathematics,[5] and has contributed to the development of new techniques.

History

Early history

Mathematics has been applied to biology since the 19th century.

Fritz Müller described the evolutionary benefits of what is now called Müllerian mimicry in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be (unless one counts Thomas Malthus's earlier discussion of the effects of population growth, which influenced Charles Darwin: Malthus argued that population growth would be "geometric" while resources, the environment's carrying capacity, could only grow arithmetically).[6]

One founding text is considered to be On Growth and Form (1917) by D'Arcy Thompson,[7] and other early pioneers include Ronald Fisher, Hans Leo Przibram, Nicolas Rashevsky and Vito Volterra.[8]

Recent growth

Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include:
  • The rapid growth of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools
  • Recent development of mathematical tools such as chaos theory to help understand complex, non-linear mechanisms in biology
  • An increase in computing power, which facilitates calculations and simulations not previously possible
  • An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research

Areas of research

Several areas of specialized research in mathematical and theoretical biology,[9][10][11][12][13] as well as external links to related projects at various universities, are concisely presented in the following subsections, with validating references drawn from the several thousand published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear mechanisms, as it is increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models.

Evolutionary biology

Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology.

Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. Most population geneticists consider the appearance of new alleles by mutation, the appearance of new genotypes by recombination, and changes in the frequencies of existing alleles and genotypes at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics, one that led to the extensive development of coalescent theory, is phylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics.[14] Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic.

Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far back as 1798, when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example; a numerical sketch of both models follows below. Population dynamics overlaps with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions.
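
As a concrete illustration, both models mentioned above can be simulated in a few lines of Python. The sketch below uses SciPy; the parameter values are illustrative assumptions rather than values from any particular study.

    import numpy as np
    from scipy.integrate import odeint

    def malthus(n, t, r=0.5):
        # Malthusian growth model: dN/dt = r*N ("geometric" growth)
        return r * n

    def lotka_volterra(state, t, a=1.0, b=0.1, d=0.075, g=1.5):
        # Lotka-Volterra predator-prey equations with illustrative constants
        prey, predators = state
        return [a * prey - b * prey * predators,
                d * prey * predators - g * predators]

    t = np.linspace(0, 30, 600)
    unchecked_growth = odeint(malthus, 10.0, t)      # exponential increase
    cycles = odeint(lotka_volterra, [10.0, 5.0], t)  # oscillating populations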

In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics.

Computer models and automata theory

A monograph on this topic summarizes an extensive amount of published research in this area up to 1986,[15][16][17] including subsections in the following areas: computer modeling in biology and medicine, arterial system models, neuron models, biochemical and oscillation networks, quantum automata, quantum computers in molecular biology and genetics,[18] cancer modelling,[19] neural nets, genetic networks, abstract categories in relational biology,[20] metabolic-replication systems, category theory[21] applications in biology and medicine,[22] automata theory, cellular automata,[23] tessellation models[24][25] and complete self-reproduction, chaotic systems in organisms, relational biology and organismic theories.[26][27]

Modeling cell and molecular biology

This area has received a boost due to the growing importance of molecular biology.[12]
  • Mechanics of biological tissues[28]
  • Theoretical enzymology and enzyme kinetics
  • Cancer modelling and simulation[29][30]
  • Modelling the movement of interacting cell populations[31]
  • Mathematical modelling of scar tissue formation[32]
  • Mathematical modelling of intracellular dynamics[33][34]
  • Mathematical modelling of the cell cycle[35]

Modelling physiological systems

Molecular set theory

Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine.[38] In a more general sense, MST is the theory of molecular categories, defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and to the mathematical formulation of clinical biochemistry problems, in particular of the pathological biochemical changes of interest to physiology, clinical biochemistry and medicine.[38][39]
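
As a toy illustration of the set-theoretical flavour of MST (this rendering is a hypothetical simplification for this article, not Bartholomay's own formalism), a chemical transformation can be coded as a mapping from one molecular set to another:

    from collections import Counter

    # A "molecular set" represented as a multiset of molecule names
    substrate = Counter({"glucose": 1, "ATP": 1})

    def phosphorylation(molecules):
        # Set-theoretical mapping: glucose + ATP -> glucose-6-phosphate + ADP
        required = Counter({"glucose": 1, "ATP": 1})
        if all(molecules[m] >= n for m, n in required.items()):
            result = molecules - required
            result.update({"glucose-6-phosphate": 1, "ADP": 1})
            return result
        return molecules  # the mapping is not applicable to this set

    print(phosphorylation(substrate))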

Mathematical methods

A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or at equilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur.

Simulation of mathematical biology

Recent significant gains in computer performance have accelerated the simulation of models based on various formulas. Websites such as BioMath Modeler can run simulations and display charts interactively in the browser.

Mathematical biophysics

The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments.

The following is a list of mathematical descriptions and their assumptions.

Deterministic processes (dynamical systems)

A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space.

Stochastic processes (random dynamical systems)

A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution.
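
The contrast between the two descriptions can be made concrete with a minimal birth-death process, sketched below using the Gillespie stochastic simulation algorithm (the rates are illustrative assumptions). Every run produces a different trajectory, whereas the corresponding deterministic equation dN/dt = (b - d)N always produces the same one.

    import random

    def gillespie_birth_death(n0=50, b=1.0, d=1.1, t_end=10.0):
        # Stochastic birth-death process: each event changes N by +1 or -1
        t, n = 0.0, n0
        trajectory = [(t, n)]
        while t < t_end and n > 0:
            birth_rate, death_rate = b * n, d * n
            total = birth_rate + death_rate
            t += random.expovariate(total)  # random waiting time to next event
            n += 1 if random.random() < birth_rate / total else -1
            trajectory.append((t, n))
        return trajectory

    run1 = gillespie_birth_death()  # a different random path on every call
    run2 = gillespie_birth_death()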

Spatial modelling

One classic work in this area is Alan Turing's paper on morphogenesis entitled The Chemical Basis of Morphogenesis, published in 1952 in the Philosophical Transactions of the Royal Society.
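
Turing showed that two interacting chemicals with sufficiently different diffusion rates can spontaneously form stable spatial patterns. The sketch below integrates a Gierer-Meinhardt-type activator-inhibitor system in one dimension; this is a later model in the same spirit rather than Turing's original equations, and the parameters are illustrative.

    import numpy as np

    n, dx, dt = 200, 1.0, 0.01
    Du, Dv = 0.5, 10.0                    # the inhibitor diffuses much faster
    u = 1.0 + 0.01 * np.random.rand(n)    # activator, perturbed steady state
    v = 1.0 + 0.01 * np.random.rand(n)    # inhibitor

    def laplacian(a):
        # Second spatial difference with periodic boundaries
        return (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx**2

    for _ in range(20000):
        u, v = (u + dt * (Du * laplacian(u) + u * u / v - u),
                v + dt * (Dv * laplacian(v) + u * u - v))
    # u now holds a spatially periodic (Turing) pattern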

Organizational biology

Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea.

For example, abstract relational biology (ARB)[45] is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems, introduced by Robert Rosen in 1957-1958 as abstract, relational models of cellular and organismal organization.[46]

Other approaches include the notion of autopoiesis developed by Maturana and Varela, Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.[47]

Algebraic biology

Algebraic biology (also known as symbolic systems biology) applies the algebraic methods of symbolic computation to the study of biological problems, especially in genomics, proteomics, analysis of molecular structures and study of genes.[26][48][49]

Computational neuroscience

Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.[50][51]

Model example: the cell cycle

The eukaryotic cell cycle is very complex and is one of the most studied topics, since its misregulation leads to cancers. It is a good example of a mathematical model, as it deals with relatively simple calculus but gives valid results. Two research groups[52][53] have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006).
By means of a system of ordinary differential equations these models show the change in time (as a dynamical system) of the protein concentrations inside a single typical cell; this type of model is called a deterministic process (whereas a model describing a statistical distribution of protein concentrations in a population of cells is called a stochastic process).

To obtain these equations, an iterative series of steps must be done. First, the several models and observations are combined to form a consensus diagram, and the appropriate kinetic laws are chosen to write the differential equations, such as rate kinetics for stoichiometric reactions, Michaelis-Menten kinetics for enzyme-substrate reactions and Goldbeter–Koshland kinetics for ultrasensitive transcription factors. Afterwards, the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size.
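
For reference, the first two kinetic laws mentioned above take the following standard forms (the Goldbeter-Koshland function is a more involved closed-form expression for the steady state of a phosphorylation-dephosphorylation cycle and is omitted here):

    % Mass-action rate kinetics for a stoichiometric reaction A + B -> C
    \frac{d[C]}{dt} = k\,[A]\,[B]

    % Michaelis-Menten kinetics for an enzyme-substrate reaction
    v = \frac{V_{\max}\,[S]}{K_M + [S]}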

To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a starting vector (list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments.
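
A minimal sketch of that stepping procedure is given below, using forward Euler, the simplest such scheme (published cell-cycle models typically rely on stiff ODE solvers instead, and the example equation and rate constants here are illustrative):

    def simulate(f, y0, dt=0.001, steps=10000):
        # Advance dy/dt = f(y) from the starting vector y0 in small increments
        y = list(y0)
        history = [tuple(y)]
        for _ in range(steps):
            derivatives = f(y)
            y = [yi + dt * di for yi, di in zip(y, derivatives)]
            history.append(tuple(y))
        return history

    # One protein with constant synthesis and first-order degradation:
    # dP/dt = k1 - k2*P
    trajectory = simulate(lambda y: [0.5 - 0.1 * y[0]], [0.0])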

[Figure: cell cycle bifurcation diagram]

In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as a vector field, where each vector describes the change (in the concentrations of two or more proteins), determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: a stable point, called a sink, that attracts in all directions (forcing the concentrations to be at a certain value); an unstable point, either a source or a saddle point, which repels (forcing the concentrations to change away from a certain value); and a limit cycle, a closed trajectory towards which several trajectories spiral (making the concentrations oscillate).

A better representation, which handles the large number of variables and parameters, is a bifurcation diagram using bifurcation theory. The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point, and once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations. The cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently. Once the phase has changed at a bifurcation event (a cell cycle checkpoint), the system cannot go back to the previous levels, since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making the checkpoint irreversible. In particular, the S and M checkpoints are regulated by means of special bifurcations: a Hopf bifurcation and an infinite period bifurcation.[citation needed]

Societies and institutes

Tuesday, June 19, 2018

Neuroinformatics

From Wikipedia, the free encyclopedia
Neuroinformatics is a research field concerned with the organization of neuroscience data by the application of computational models and analytical tools. These areas of research are important for the integration and analysis of increasingly large-volume, high-dimensional, and fine-grain experimental data. Neuroinformaticians provide computational tools and mathematical models, and create interoperable databases for clinicians and research scientists. Neuroscience is a heterogeneous field, consisting of many different sub-disciplines (e.g., cognitive psychology, behavioral neuroscience, and behavioral genetics). In order for our understanding of the brain to continue to deepen, it is necessary that these sub-disciplines are able to share data and findings in a meaningful way; neuroinformaticians facilitate this.[1]

Neuroinformatics stands at the intersection of neuroscience and information science. Other fields, like genomics, have demonstrated the effectiveness of freely distributed databases and the application of theoretical and computational models for solving complex problems. In neuroinformatics, such facilities allow researchers to more easily confirm their working theories quantitatively by computational modeling. Additionally, neuroinformatics fosters collaborative research, which supports the field's interest in studying the multi-level complexity of the brain.

There are three main directions where neuroinformatics has to be applied:[2]
  1. the development of tools and databases for management and sharing of neuroscience data at all levels of analysis,
  2. the development of tools for analyzing and modeling neuroscience data,
  3. the development of computational models of the nervous system and neural processes.
In the past decade, as vast amounts of diverse data about the brain were gathered by many research groups, the problem arose of how to integrate the data from thousands of publications in order to enable efficient tools for further research. Biological and neuroscience data are highly interconnected and complex, and their integration by itself represents a great challenge for scientists.

Combining informatics research and brain research provides benefits for both fields of science. On one hand, informatics facilitates brain data processing and data handling, by providing new electronic and software technologies for arranging databases, modeling and communication in brain research. On the other hand, discoveries in the field of neuroscience will in turn drive the development of new methods in information technology (IT).

History

Starting in 1989, the United States National Institute of Mental Health (NIMH), the National Institute on Drug Abuse (NIDA) and the National Science Foundation (NSF) provided the National Academy of Sciences Institute of Medicine with funds to undertake a careful analysis and study of the need to create databases, share neuroscientific data and examine how the field of information technology could create the tools needed for the increasing volume and modalities of neuroscientific data.[citation needed] The positive recommendations were reported in 1991.[3] This positive report enabled NIMH, now directed by Alan Leshner, to create the "Human Brain Project" (HBP), with the first grants awarded in 1993. The HBP was led by Stephen Koslow along with cooperative efforts of other NIH Institutes, the NSF, the National Aeronautics and Space Administration and the Department of Energy. The HBP and its grant-funding initiative in this area slightly preceded the explosive expansion of the World Wide Web. From 1993 through 2004 this program grew to over 100 million dollars in funded grants.

Next, Koslow pursued the globalization of the HBP and neuroinformatics through the European Union and the Organisation for Economic Co-operation and Development (OECD), Paris, France. Two particular opportunities occurred in 1996.
  • The first was the existence of the US/European Commission Biotechnology Task Force, co-chaired by Mary Clutter from NSF. Within the mandate of this committee, of which Koslow was a member, the United States/European Commission Committee on Neuroinformatics was established, co-chaired by Koslow from the United States. This committee resulted in the European Commission initiating support for neuroinformatics in Framework 5, and it has continued to support activities in neuroinformatics research and training.
  • A second opportunity for globalization of neuroinformatics occurred when the participating governments of the Mega Science Forum (MSF) of the OECD were asked if they had any new scientific initiatives to bring forward for scientific cooperation around the globe. The White House Office of Science and Technology Policy requested that agencies in the federal government meet at NIH to decide if cooperation were needed that would be of global benefit. The NIH held a series of meetings in which proposals from different agencies were discussed. The proposal recommendation from the U.S. for the MSF was a combination of the NSF and NIH proposals. Jim Edwards of NSF supported databases and data-sharing in the area of biodiversity; Koslow proposed the HBP as a model for sharing neuroscientific data, with the new moniker of neuroinformatics.
The two related initiatives were combined to form the United States proposal on "Biological Informatics". This initiative was supported by the White House Office of Science and Technology Policy and presented at the OECD MSF by Edwards and Koslow. An MSF committee was established on Biological Informatics with two subcommittees: 1. Biodiversity (Chair, James Edwards, NSF), and 2. Neuroinformatics (Chair, Stephen Koslow, NIH). At the end of two years the Neuroinformatics subcommittee of the Biological Working Group issued a report supporting a global neuroinformatics effort. Koslow then worked with the NIH and the White House Office of Science and Technology Policy to establish a new neuroinformatics working group to develop specific recommendations to support the more general recommendations of the first report. The Global Science Forum (GSF; renamed from MSF) of the OECD supported this recommendation.

The International Neuroinformatics Coordinating Facility

This committee presented three recommendations to the member governments of the GSF. These recommendations were:
  1. National neuroinformatics programs should be continued or initiated in each country; each country should have a national node both to provide research resources nationally and to serve as the contact for national and international coordination.
  2. An International Neuroinformatics Coordinating Facility (INCF) should be established. The INCF will coordinate the implementation of a global neuroinformatics network through integration of national neuroinformatics nodes.
  3. A new international funding scheme should be established. This scheme should eliminate national and disciplinary barriers and provide a most efficient approach to global collaborative research and data sharing. In this new scheme, each country will be expected to fund the participating researchers from their country.
The GSF neuroinformatics committee then developed a business plan for the operation, support and establishment of the INCF, which was supported and approved by the GSF Science Ministers at their 2004 meeting. In 2006 the INCF was created and its central office established and set into operation at the Karolinska Institute, Stockholm, Sweden, under the leadership of Sten Grillner. Seventeen countries (Australia, Canada, China, the Czech Republic, Denmark, Finland, France, Germany, India, Italy, Japan, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom and the United States) and the EU Commission established the legal basis for the INCF and Programme in International Neuroinformatics (PIN). To date, eighteen countries (Australia, Belgium, Czech Republic, Finland, France, Germany, India, Italy, Japan, Malaysia, Netherlands, Norway, Poland, Republic of Korea, Sweden, Switzerland, the United Kingdom and the United States) are members of the INCF. Membership is pending for several other countries.

The goal of the INCF is to coordinate and promote international activities in neuroinformatics. The INCF contributes to the development and maintenance of database and computational infrastructure and support mechanisms for neuroscience applications. The system is expected to provide access to all freely accessible human brain data and resources to the international research community. The more general task of INCF is to provide conditions for developing convenient and flexible applications for neuroscience laboratories in order to improve our knowledge about the human brain and its disorders.

Society for Neuroscience Brain Information Group

On the foundation of all of these activities, Huda Akil, the 2003 President of the Society for Neuroscience (SfN), established the Brain Information Group (BIG) to evaluate the importance of neuroinformatics to neuroscience and specifically to the SfN. Following the report from BIG, SfN also established a neuroinformatics committee.

In 2004, SfN announced the Neuroscience Database Gateway (NDG) as a universal resource for neuroscientists, through which almost any neuroscience database or tool may be reached. The NDG was established with funding from NIDA, NINDS and NIMH. The Neuroscience Database Gateway has transitioned to a new, enhanced platform, the Neuroscience Information Framework.[4] Funded by the NIH Neuroscience Blueprint, the NIF is a dynamic portal providing access to neuroscience-relevant resources (data, tools, materials) from a single search interface. The NIF builds upon the foundation of the NDG, but provides a unique set of tools tailored especially for neuroscientists: a more expansive catalog, the ability to search multiple databases directly from the NIF home page, a custom web index of neuroscience resources, and a neuroscience-focused literature search function.

Collaboration with other disciplines

Neuroinformatics is formed at the intersections of the following fields:
Biology is concerned with molecular data (from genes to cell-specific expression); medicine and anatomy with the structure of synapses and systems-level anatomy; engineering with electrophysiology (from single channels to scalp-surface EEG) and brain imaging; computer science with databases and software tools; mathematical sciences with models; chemistry with neurotransmitters; and so on. Neuroscience uses all these experimental and theoretical studies to learn about the brain through its various levels. Medical and biological specialists help to identify the unique cell types, and their elements and anatomical connections. Functions of complex organic molecules and structures, including a myriad of biochemical, molecular, and genetic mechanisms which regulate and control brain function, are determined by specialists in chemistry and cell biology. Brain imaging determines structural and functional information during mental and behavioral activity. Specialists in biophysics and physiology study physical processes within neural cells and neuronal networks. The data from these fields of research are analyzed and arranged in databases and neural models in order to integrate various elements into a sophisticated system; this is the point where neuroinformatics meets other disciplines.

Neuroscience provides the types of data and information on which neuroinformatics operates; neuroinformatics uses databases, the Internet, and visualization for the storage and analysis of such neuroscience data.

Research programs and groups

Australia

Neuroimaging & Neuroinformatics, Howard Florey Institute, University of Melbourne
Institute scientists utilize brain imaging techniques, such as magnetic resonance imaging, to reveal the organization of brain networks involved in human thought. Led by Gary Egan.

Canada

McGill Centre for Integrative Neuroscience (MCIN), Montreal Neurological Institute, McGill University
Led by Alan Evans, MCIN conducts computationally-intensive brain research using innovative mathematical and statistical approaches to integrate clinical, psychological and brain imaging data with genetics. MCIN researchers and staff also develop infrastructure and software tools in the areas of image processing, databasing, and high performance computing. The MCIN community, together with the Ludmer Centre for Neuroinformatics and Mental Health, collaborates with a broad range of researchers and increasingly focuses on open data sharing and open science, including for the Montreal Neurological Institute.

Denmark

The THOR Center for Neuroinformatics
Established in April 1998 at the Department of Mathematical Modelling, Technical University of Denmark. Besides pursuing independent research goals, the THOR Center hosts a number of related projects concerning neural networks, functional neuroimaging, multimedia signal processing, and biomedical signal processing.

Germany

The Neuroinformatics Portal Pilot
The project is part of a larger effort to enhance the exchange of neuroscience data, data-analysis tools, and modeling software. The portal is supported by many members of the OECD Working Group on Neuroinformatics. The Portal Pilot is promoted by the German Ministry for Science and Education.
Computational Neuroscience, ITB, Humboldt-University Berlin
This group focuses on computational neurobiology, in particular on the dynamics and signal processing capabilities of systems with spiking neurons. Led by Andreas VM Herz.
The Neuroinformatics Group in Bielefeld
Active in the field of artificial neural networks since 1989. Current research programmes within the group are focused on the improvement of man-machine interfaces, robot force control, eye-tracking experiments, machine vision, virtual reality and distributed systems.

Italy

Laboratory of Computational Embodied Neuroscience (LOCEN)[5]
This group, part of the Institute of Cognitive Sciences and Technologies, Italian National Research Council (ISTC-CNR) in Rome and founded in 2006, is currently led by Gianluca Baldassarre. It has two objectives: (a) understanding the brain mechanisms underlying learning and the expression of sensorimotor behaviour, and the related motivations and higher-level cognition grounded on it, on the basis of embodied computational models; (b) transferring the acquired knowledge to building innovative controllers for autonomous humanoid robots capable of learning in an open-ended fashion on the basis of intrinsic and extrinsic motivations.

Japan

Japan national neuroinformatics resource
The Visiome Platform is the Neuroinformatics Search Service that provides access to mathematical models, experimental data, analysis libraries and related resources. An online portal for neurophysiological data sharing is also available at BrainLiner.jp as part of the MEXT Strategic Research Program for Brain Sciences (SRPBS).
Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute (Wako, Saitama)
The target of the Laboratory for Mathematical Neuroscience is to establish mathematical foundations of brain-style computations toward the construction of a new type of information science. Led by Shun-ichi Amari.

The Netherlands

Netherlands state program in neuroinformatics
Started in light of the international OECD Global Science Forum, whose aim is to create a worldwide program in neuroinformatics.

Pakistan

NUST-SEECS Neuroinformatics Research Lab[6]
Establishment of the Neuro-Informatics Lab at SEECS-NUST has enabled Pakistani researchers and members of the faculty to actively participate in such efforts, thereby becoming an active part of the above-mentioned experimentation, simulation, and visualization processes. The lab collaborates with leading international institutions to develop highly skilled human resources in the related field. It facilitates neuroscientists and computer scientists in Pakistan in conducting their experiments and analyses on collected data using state-of-the-art research methodologies, without investing in establishing experimental neuroscience facilities. The key goal of this lab is to provide state-of-the-art experimental and simulation facilities to all beneficiaries, including higher education institutes, medical researchers/practitioners, and the technology industry.

Switzerland

The Blue Brain Project
The Blue Brain Project was founded in May 2005, and uses an 8000 processor Blue Gene/L supercomputer developed by IBM. At the time, this was one of the fastest supercomputers in the world.
The project involves:
  • Databases: 3D reconstructed model neurons, synapses, synaptic pathways, microcircuit statistics, computer model neurons, virtual neurons.
  • Visualization: a microcircuit builder and a visualizer for simulation results; 2D, 3D and immersive visualization systems are being developed.
  • Simulation: a simulation environment for large-scale simulations of morphologically complex neurons on 8000 processors of IBM's Blue Gene supercomputer.
  • Simulations and experiments: iterations between large-scale simulations of neocortical microcircuits and experiments in order to verify the computational model and explore predictions.
The mission of the Blue Brain Project is to understand mammalian brain function and dysfunction through detailed simulations. The Blue Brain Project will invite researchers to build their own models of different brain regions in different species and at different levels of detail using Blue Brain Software for simulation on Blue Gene. These models will be deposited in an internet database from which Blue Brain software can extract and connect models together to build brain regions and begin the first whole brain simulations.
The Institute of Neuroinformatics (INI)
Established at the University of Zurich at the end of 1995, the mission of the Institute is to discover the key principles by which brains work and to implement these in artificial systems that interact intelligently with the real world.

United Kingdom

Genes to Cognition Project
A neuroscience research programme that studies genes, the brain and behaviour in an integrated manner. It is engaged in a large-scale investigation of the function of molecules found at the synapse. This is mainly focused on proteins that interact with the NMDA receptor, a receptor for the neurotransmitter glutamate, which is required for processes of synaptic plasticity such as long-term potentiation (LTP). Many of the techniques used are high-throughput in nature, and integrating the various data sources, along with guiding the experiments, has raised numerous informatics questions. The program is primarily run by Professor Seth Grant at the Wellcome Trust Sanger Institute, but there are many other teams of collaborators across the world.
The CARMEN project[7]
The CARMEN project is a multi-site (11 universities in the United Kingdom) research project aimed at using GRID computing to enable experimental neuroscientists to archive their datasets in a structured database, making them widely accessible for further research, and for modellers and algorithm developers to exploit.
EBI Computational Neurobiology, EMBL-EBI (Hinxton)
The main goal of the group is to build realistic models of neuronal function at various levels, from the synapse to the micro-circuit, based on the precise knowledge of molecule functions and interactions (Systems Biology). Led by Nicolas Le Novère.

United States

Neuroscience Information Framework
The Neuroscience Information Framework (NIF) is an initiative of the NIH Blueprint for Neuroscience Research, which was established in 2004 by the National Institutes of Health. Unlike general search engines, NIF provides deeper access to a more focused set of resources that are relevant to neuroscience, search strategies tailored to neuroscience, and access to content that is traditionally "hidden" from web search engines. The NIF is a dynamic inventory of neuroscience databases, annotated and integrated with a unified system of biomedical terminology (i.e. NeuroLex). NIF supports concept-based queries across multiple scales of biological structure and multiple levels of biological function, making it easier to search for and understand the results. NIF also provides a registry through which resource providers can disclose the availability of resources relevant to neuroscience research. NIF is not intended to be a warehouse or repository itself, but a means for disclosing and locating resources available elsewhere via the web.
Neurogenetics GeneNetwork
GeneNetwork started as a component of the NIH Human Brain Project in 1999, with a focus on the genetic analysis of brain structure and function. This international program consists of tightly integrated genome and phenome data sets for human, mouse, and rat that are designed specifically for large-scale systems and network studies relating gene variants to differences in mRNA and protein expression and to differences in CNS structure and behavior. The great majority of data are open access. GeneNetwork has a companion neuroimaging web site, the Mouse Brain Library, that contains high-resolution images for thousands of genetically defined strains of mice.
The Neuronal Time Series Analysis (NTSA)[8]
NTSA Workbench is a set of tools, techniques and standards designed to meet the needs of neuroscientists who work with neuronal time series data. The goal of this project is to develop an information system that will make the storage, organization, retrieval, analysis and sharing of experimental and simulated neuronal data easier.
The Cognitive Atlas[9]
The Cognitive Atlas is a project developing a shared knowledge base in cognitive science and neuroscience. This comprises two basic kinds of knowledge: tasks and concepts, providing definitions and properties thereof, as well as relationships between them. An important feature of the site is the ability to cite literature for assertions (e.g. "The Stroop task measures executive control") and to discuss their validity. It contributes to NeuroLex and the Neuroscience Information Framework, allows programmatic access to the database, and is built around semantic web technologies.
Brain Big Data research group at the Allen Institute for Brain Science (Seattle, WA)
Led by Hanchuan Peng,[10] this group has focused on using large-scale imaging computing and data analysis techniques to reconstruct single neuron models and mapping them in brains of different animals.

Technologies and developments

The main technological tendencies in neuroinformatics are:
  1. Application of computer science for building databases, tools, and networks in neuroscience;
  2. Analysis and modeling of neuronal systems.
In order to organize and operate on neural data, scientists need to use standard terminology and atlases that precisely describe brain structures and their relationships.
  • Neuron Tracing and Reconstruction is an essential technique to establish digital models of the morphology of neurons. Such morphology is useful for neuron classification and simulation.
  • BrainML[11] is a system that provides a standard XML metaformat for exchanging neuroscience data.
  • The Biomedical Informatics Research Network (BIRN)[12] is an example of a grid system for neuroscience. BIRN is a geographically distributed virtual community of shared resources offering a vast scope of services to advance the diagnosis and treatment of disease. BIRN allows databases, interfaces and tools to be combined into a single environment.
  • Budapest Reference Connectome is a web-based 3D visualization tool to browse connections in the human brain. Nodes and connections are calculated from the MRI datasets of the Human Connectome Project.
  • GeneWays[13] is concerned with cellular morphology and circuits. GeneWays is a system for automatically extracting, analyzing, visualizing and integrating molecular pathway data from the research literature. The system focuses on interactions between molecular substances and actions, providing a graphical view of the collected information and allowing researchers to review and correct the integrated information.
  • Neocortical Microcircuit Database (NMDB).[14] A database of versatile brain data, from cells to complex structures. Researchers are able not only to add data to the database but also to acquire and edit it.
  • SenseLab.[15] SenseLab is a long-term effort to build integrated, multidisciplinary models of neurons and neural systems. It was founded in 1993 as part of the original Human Brain Project and is a collection of multilevel neuronal databases and tools. SenseLab contains six related databases that support experimental and theoretical research on the membrane properties that mediate information processing in nerve cells, using the olfactory pathway as a model system.
  • BrainMaps.org[16] is an interactive high-resolution digital brain atlas using a high-speed database and virtual microscope that is based on over 12 million megapixels of scanned images of several species, including human.
Another approach in the area of brain mapping is probabilistic atlases, obtained from real data from different groups of people, grouped by specific factors such as age, gender and disease. Such atlases provide more flexible tools for brain research and allow more reliable and precise results to be obtained, which cannot be achieved with the help of traditional brain atlases.

Reverse Engineering the Brain


Original link:  http://www.engineeringchallenges.org/challenges/9109.aspx

Summary

The intersection of engineering and neuroscience promises great advances in health care, manufacturing, and communication.

For decades, some of engineering’s best minds have focused their thinking skills on how to create thinking machines — computers capable of emulating human intelligence.

Why should you reverse-engineer the brain?

While some thinking machines have mastered specific narrow skills — playing chess, for instance — general-purpose artificial intelligence (AI) has remained elusive.

Part of the problem, some experts now believe, is that artificial brains have been designed without much attention to real ones. Pioneers of artificial intelligence approached thinking the way that aeronautical engineers approached flying without much learning from birds. It has turned out, though, that the secrets about how living brains work may offer the best guide to engineering the artificial variety. Discovering those secrets by reverse-engineering the brain promises enormous opportunities for reproducing intelligence the way assembly lines spit out cars or computers.

Figuring out how the brain works will offer rewards beyond building smarter computers. Advances gained from studying the brain may in return pay dividends for the brain itself. Understanding its methods will enable engineers to simulate its activities, leading to deeper insights about how and why the brain works and fails. Such simulations will offer more precise methods for testing potential biotechnology solutions to brain disorders, such as drugs or neural implants. Neurological disorders may someday be circumvented by technological innovations that allow wiring of new materials into our bodies to do the jobs of lost or damaged nerve cells. Implanted electronic devices could help victims of dementia to remember, blind people to see, and crippled people to walk.

Sophisticated computer simulations could also be used in many other applications. Simulating the interactions of proteins in cells would be a novel way of designing and testing drugs, for instance. And simulation capacity will be helpful beyond biology, perhaps in forecasting the impact of earthquakes in ways that would help guide evacuation and recovery plans.

Much of this power to simulate reality effectively will come from increased computing capability rooted in the reverse-engineering of the brain. Learning from how the brain itself learns, researchers will likely improve knowledge of how to design computing devices that process multiple streams of information in parallel, rather than the one-step-at-a-time approach of the basic PC. Another feature of real brains is the vast connectivity of nerve cells, the biological equivalent of computer signaling switches. While nerve cells typically form tens of thousands of connections with their neighbors, traditional computer switches possess only two or three. AI systems attempting to replicate human abilities, such as vision, are now being developed with more, and more complex, connections.

What are the applications for this information?

Already, some applications using artificial intelligence have benefited from simulations based on brain reverse-engineering. Examples include AI algorithms used in speech recognition and in machine vision systems in automated factories. More advanced AI software should in the future be able to guide devices that can enter the body to perform medical diagnoses and treatments.

Of potentially even greater impact on human health and well-being is the use of new AI insights for repairing broken brains.  Damage from injury or disease to the hippocampus, a brain structure important for learning and memory, can disrupt the proper electrical signaling between nerve cells that is needed for forming and recalling memories. With knowledge of the proper signaling patterns in healthy brains, engineers have begun to design computer chips that mimic the brain’s own communication skills. Such chips could be useful in cases where healthy brain tissue is starved for information because of the barrier imposed by damaged tissue. In principle, signals from the healthy tissue could be recorded by an implantable chip, which would then generate new signals to bypass the damage. Such an electronic alternate signaling route could help restore normal memory skills to an impaired brain that otherwise could not form them.

“Neural prostheses” have already been put to use in the form of cochlear implants to treat hearing loss and stimulating electrodes to treat Parkinson’s disease. Progress has also been made in developing “artificial retinas,” light-sensitive chips that could help restore vision.

Even more ambitious programs are underway for systems to control artificial limbs. Engineers envision computerized implants capable of receiving the signals from thousands of the brain’s nerve cells and then wirelessly transmitting that information to an interface device that would decode the brain’s intentions. The interface could then send signals to an artificial limb, or even directly to nerves and muscles, giving directions for implementing the desired movements.

Other research has explored, with some success, implants that could literally read the thoughts of immobilized patients and signal an external computer, giving people unable to speak or even move a way to communicate with the outside world.

What is needed to reverse-engineer the brain?

The progress so far is impressive. But to fully realize the brain’s potential to teach us how to make machines learn and think, further advances are needed in the technology for understanding the brain in the first place. Modern noninvasive methods for simultaneously measuring the activity of many brain cells have provided a major boost in that direction, but details of the brain’s secret communication code remain to be deciphered. Nerve cells communicate by firing electrical pulses that release small molecules called neurotransmitters, chemical messengers that hop from one nerve cell to a neighbor, inducing the neighbor to fire a signal of its own (or, in some cases, inhibiting the neighbor from sending signals). Because each nerve cell receives messages from tens of thousands of others, and circuits of nerve cells link up in complex networks, it is extremely difficult to completely trace the signaling pathways.

Furthermore, the code itself is complex — nerve cells fire at different rates, depending on the sum of incoming messages. Sometimes the signaling is generated in rapid-fire bursts; sometimes it is more leisurely. And much of mental function seems based on the firing of multiple nerve cells around the brain in synchrony. Teasing out and analyzing all the complexities of nerve cell signals, their dynamics, pathways, and feedback loops, presents a major challenge.
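
A leaky integrate-and-fire model, one of the simplest abstractions used in computational neuroscience, captures the idea of a cell summing its incoming messages and firing once a threshold is crossed. The sketch below uses illustrative rather than physiological parameter values:

    def lif_neuron(inputs, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
        # Leaky integrate-and-fire: membrane potential leaks toward rest
        # while integrating input; a spike fires when it crosses threshold.
        v, spike_times = v_rest, []
        for step, drive in enumerate(inputs):
            v += dt * (-(v - v_rest) / tau + drive)
            if v >= v_thresh:
                spike_times.append(step * dt)
                v = v_rest  # reset after firing
        return spike_times

    # Stronger input current produces a higher firing rate
    low_rate = lif_neuron([0.06] * 200)
    high_rate = lif_neuron([0.12] * 200)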

Today’s computers have electronic logic gates that are either on or off, but if engineers could replicate neurons’ ability to assume various levels of excitation, they could create much more powerful computing machines. Success toward fully understanding brain activity will, in any case, open new avenues for deeper understanding of the basis for intelligence and even consciousness, no doubt providing engineers with insight into even grander accomplishments for enhancing the joy of living.


Microbiomes of the built environment

From Wikipedia, the free encyclopedia

Microbiomes of the built environment[1][2] is a field of inquiry focusing on the study of the communities of microorganisms found in human-constructed environments (i.e., the built environment). It is also sometimes referred to as "microbiology of the built environment".
This field encompasses studies of any kind of microorganism (e.g. bacteria, archaea, viruses, various microbial eukaryotes including yeasts, and others sometimes generally referred to as protists) and studies of any kind of built environment such as buildings, vehicles, and water systems.

A 2016 paper by Brent Stephens[6] highlights some of the key findings of studies of "microbiomes of the indoor environment", underscoring the growing importance of the field. These key findings include those listed below:
  • "Culture-independent methods reveal vastly greater microbial diversity compared to culture-based methods"
  • "Indoor spaces often harbor unique microbial communities"
  • "Indoor bacterial communities often originate from indoor sources."
  • "Humans are also major sources of bacteria to indoor air"
  • "Building design and operation can influence indoor microbial communities."
The microbiomes of the built environment are being studied for multiple reasons, including how they may impact the health of humans and other organisms occupying the built environment, but also for non-health reasons such as diagnosis of building properties, forensic applications, impact on food production, impact on built-environment function, and more.

Types of Built Environments For Which Microbiomes Have Been Studied

Extensive research has been conducted on individual microbes found in the built environment. More recently there has been a significant expansion in the number of studies that are examining the communities of microbes (i.e., microbiomes) found in the built environment. Such studies of microbial communities in the built environment have covered a wide range of types of built environments including those listed below.

Buildings. Examples include homes,[7][8][9] dormitories,[10] offices,[11][12] hospitals,[13][14][15] operating rooms,[16][17][18] NICUs,[19] classrooms,[20][21] transportation facilities such as train and subway stations,[22][23] food production facilities[24] (e.g. dairies, wineries,[25] cheesemaking facilities,[26][27] sake breweries[28] and beer breweries[29]), aquaria,[30] libraries,[31] cleanrooms,[32][33] zoos, animal shelters, farms, and chicken coops and housing.[34]

Vehicles. Examples include airplanes,[35] ships, trains,[23] automobiles[36] and space vehicles, including the International Space Station,[37] MIR,[38] the Mars Odyssey[39] and the Herschel Spacecraft.[40]

Water Systems. Examples include shower heads,[41] children's paddling pools,[42] municipal water systems,[43] drinking water and premise plumbing systems [44][45][46][47] and saunas.[48]

Other. Examples include art and cultural heritage items,[49] clothing,[50] and household appliances such as dishwashers [51] and washing machines.[52]

Results from Studies of the Microbiomes of the Built Environment

General Biogeography

Overall, the many studies that have been conducted on the microbiomes of the built environment have started to identify some general patterns regarding which microbes are found in various places. For example, Adams et al., in a comparative analysis of ribosomal RNA based studies in the built environment, found that geography and building type had strong associations with the types of microbes seen in the built environment.[53] Pakpour et al. in 2016 reviewed the patterns relating to the presence of archaea in indoor environments (based on analysis of rRNA gene sequence data).[54]

Human Health and Microbiomes of the Built Environment

Many studies have documented possible human health implications of the microbiomes of the built environment (e.g.[55]). Examples include those below.

Newborn colonization. The microbes that colonize newborns come in part from the built environment (e.g., hospital rooms). This appears to be especially true for babies born by C-section (see for example Shin et al. 2016 [56]) and also babies that spend time in a NICU.[19]

Risk of allergy and asthma. The risk of allergy and asthma is correlated with differences in the built environment microbiome. Some experimental tests (e.g., in mice) have suggested that these correlations may actually be causal (i.e., the differences in the microbiomes may actually lead to differences in the risk of allergy or asthma). Review papers on this topic include Casas et al. 2016[57] and Fujimura and Lynch 2015.[58] Studies of dust in various homes have shown that the microbiome found in the dust is correlated with the risk of children in those homes developing allergy, asthma, or phenotypes connected to these ailments.[59][60][61] The impact of the microbiome of the built environment on the risk of allergy, asthma and other inflammatory or immune conditions is a possible mechanism underlying what is known as the hygiene hypothesis.

Mental health. In a 2015 review Hoisington et al. discuss possible connections between the microbiology of the built environment and human health.[62] The concept presented in this paper is that more and more evidence is accumulating that the human microbiome has some impact on the brain and thus if the built environment either directly or indirectly impacts the human microbiome, this in turn could have impacts on human mental health.

Pathogen transmission. Many pathogens are transmitted in the built environment and may also reside in the built environment for some period of time.[63] Good examples include influenza, Norovirus, Legionella, and MRSA. The study of the transmission and survival of these pathogens is a component of studies of microbiomes of the built environment.

Indoor air quality. The study of indoor air quality and the health impact of such air quality is linked at least in part to microbes in the built environment, since they can directly or indirectly impact indoor air quality.

Components of the Built Environment that Likely Impact Microbiomes

A major component of studies of microbiomes of the built environment involves determining how components of the built environment impact the microbes and microbial communities found there. Factors that are thought to be important include humidity, pH, chemical exposures, temperature, filtration, surface materials, and air flow.[64] There has been an effort to develop standards for what built environment "metadata" to collect in association with studies of the microbial communities in the built environment.[65] A 2014 paper reviews the tools that are available to improve the built environment data collected in such studies.[66] Examples of the types of built environment data covered in this review include building characteristics and environmental conditions, HVAC system characteristics and ventilation rates, human occupancy and activity measurements, surface characterizations, and air sampling and aerosol dynamics.

Impact of Microbiomes on the Built Environment

Just as the built environment has an impact on the microbiomes found therein, the microbial communities of the built environment can impact the built environment itself. Examples include degradation of building materials, altering fluid and airflow, generating volatiles, and more.

Possible Uses in Forensics

The microbiome of the built environment has some potential for being used as a feature for forensic studies. Most of these applications are still in the early research phase. For example, it has been shown that people leave behind a somewhat diagnostic microbial signature when they type on computer keyboards,[67] use phones [68] or occupy a room.[10]

Odor and Microbes in the Built Environment

There has been a significant amount of research on the role that microbes play in various odors in the built environment. For example, Diekmann et al. examined the connection between microbes and volatile organic emissions in automobile air conditioning units.[69] They reported that the types of microbes found were correlated with the bad odors found. Park and Kim examined which microbes found in an automobile air conditioner could produce bad-smelling volatile compounds and identified candidate taxa producing some such compounds.[70]

Methods Used

Many methods are used to study microbes in the built environment. A review of such methods and some of the challenges in using them was published by NIST. Hoisington et al. in 2015 reviewed methods that could be used by building professionals to study the microbiology of the built environment.[71] Methods used in the study of microbes in the built environment include culturing (with subsequent studies of the cultured microbes), microscopy, air, water and surface sampling, chemical analyses, and culture-independent DNA studies such as ribosomal RNA gene PCR and metagenomics.
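
As a small illustration of a common analysis step downstream of such culture-independent surveys, community profiles are often summarized with diversity indices. The following hypothetical example (the taxon names and counts are made up) computes the Shannon diversity of one sample from a count table:

    import math

    # Made-up taxon counts for a single built-environment sample
    sample_counts = {"Staphylococcus": 120, "Propionibacterium": 80,
                     "Corynebacterium": 40, "Sphingomonas": 10}

    def shannon_diversity(counts):
        # H = -sum(p_i * ln(p_i)) over taxa with nonzero counts
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total)
                    for c in counts.values() if c > 0)

    print(shannon_diversity(sample_counts))  # higher value = more diverse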

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...