
Friday, October 5, 2018

Computer simulation

From Wikipedia, the free encyclopedia
 
Process of building a computer model, and the interplay between experiment, simulation, and theory.

Computer simulation is the reproduction of the behavior of a system using a computer to simulate the outcomes of a mathematical model associated with that system. Because they make it possible to check the reliability of the chosen mathematical models, computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry and biology, and of human systems in economics, psychology, social science, and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.

Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. Over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program. Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005; a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.

Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.

Simulation versus model

A computer model is the algorithms and equations used to capture the behavior of the system being modeled. By contrast, computer simulation is the actual running of the program that contains these equations or algorithms. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model", and then either "run the model" or equivalently "run a simulation".
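
As a minimal illustration of this distinction, the following Python sketch separates a "model" (a rate equation and its parameter) from a "simulation" (repeatedly evaluating the model to advance the state). The decay equation, step size, and parameter values are arbitrary choices for the example, not anything prescribed here.

    # A minimal sketch: the "model" is the equation and its parameters;
    # the "simulation" is the act of running it forward in time.

    def decay_model(n, k):
        """Model: the rate equation dN/dt = -k * N."""
        return -k * n

    def run_simulation(n0, k, dt, steps):
        """Simulation: repeatedly evaluate the model to advance the state."""
        n = n0
        history = [n]
        for _ in range(steps):
            n += dt * decay_model(n, k)   # explicit Euler update
            history.append(n)
        return history

    if __name__ == "__main__":
        trajectory = run_simulation(n0=100.0, k=0.5, dt=0.1, steps=50)
        print(f"final value after 50 steps: {trajectory[-1]:.3f}")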

History

Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.

Data preparation

The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).
Input sources also vary widely:
  • Sensors and other physical devices connected to the model;
  • Control surfaces used to direct the progress of the simulation in some way;
  • Current or historical data entered by hand;
  • Values extracted as a by-product from other processes;
  • Values output for the purpose by other simulations, models, or processes.
Lastly, the time at which data is available varies:
  • "invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
  • data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
  • data can be provided during the simulation run, for example by a sensor network.
Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula (sometimes called Simula-67, after the year 1967 when it was proposed). There are now many others.

Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, what is much harder is knowing what the accuracy of the values is (compared to measurement resolution and precision). Such values are often expressed as "error bars", the minimum and maximum deviation from the value within which the true value is expected to lie. Because digital computer arithmetic is not exact, rounding and truncation errors compound this error, so it is useful to perform an "error analysis" to confirm that values output by the simulation will still be usefully accurate.

Even small errors in the original data can accumulate into substantial error later in the simulation. While all computer analysis is subject to the "GIGO" (garbage in, garbage out) restriction, this is especially true of digital simulation. Indeed, observation of this inherent, cumulative error in digital systems was the main catalyst for the development of chaos theory.
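
A short illustration of this cumulative error: the logistic map in its chaotic regime amplifies a perturbation of one part in ten billion in the initial data until the two runs bear no resemblance to each other. The map and the parameter value are standard textbook choices, used here only to make the point.

    # Two runs of the logistic map (r = 4.0, chaotic regime) that differ only
    # in the tenth decimal place of the initial condition.

    def logistic_trajectory(x0, r=4.0, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.3)
    b = logistic_trajectory(0.3 + 1e-10)   # same input, perturbed by 1e-10

    for step in (0, 10, 25, 50):
        print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.3e}")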

Types

Computer models can be classified according to several independent pairs of attributes, including:
  • Stochastic or deterministic (and, as a special case of deterministic, chaotic)
  • Steady-state or dynamic
  • Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
  • Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAEs), or dynamics simulation of field problems, e.g. CFD or FEM simulations (described by PDEs).
  • Local or distributed.
Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes:
  • Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category.
  • If the underlying graph is not a regular grid, the model may belong to the meshfree method class.
Equations define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.
  • Dynamic simulations model changes in a system in response to (usually changing) input signals.
  • Stochastic models use random number generators to model chance or random events;
  • A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time at which they should occur. The simulator reads the queue and triggers new events as each event is processed (a minimal sketch follows this list). It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events.
  • A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.
  • A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next.
  • Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
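
As a concrete illustration of the event-queue mechanism described for discrete event simulation above, here is a minimal Python sketch. The simulator kernel, the "arrival" events and the timing values are all invented for the example; real DES packages provide the same pattern with far more machinery.

    import heapq
    import itertools

    class Simulator:
        """Minimal event-queue kernel: pop the earliest event, advance the clock, run it."""
        def __init__(self):
            self.now = 0.0
            self._queue = []                 # heap of (time, sequence, callback)
            self._seq = itertools.count()    # tie-breaker so callbacks are never compared

        def schedule(self, delay, callback):
            heapq.heappush(self._queue, (self.now + delay, next(self._seq), callback))

        def run(self, until):
            while self._queue and self._queue[0][0] <= until:
                self.now, _, callback = heapq.heappop(self._queue)
                callback(self)               # processing an event may schedule new ones

    def arrival(sim):
        print(f"t={sim.now:5.1f}  a job arrives")
        sim.schedule(5.0, lambda s: print(f"t={s.now:5.1f}  a job finishes service"))
        sim.schedule(7.0, arrival)           # schedule the next arrival

    sim = Simulator()
    sim.schedule(0.0, arrival)               # seed the queue with the first event
    sim.run(until=30.0)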

Visualization

Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.

Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.

Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.

Computer simulation in science

Computer simulation of the process of osmosis

Specific examples of computer simulations in science, derived from an underlying mathematical description, include:
  • statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
  • agent based simulation has been used effectively in ecology, where it is often called "individual based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
  • time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
  • computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
  • computer simulation using molecular modeling for drug discovery.
  • computer simulation to model viral infection in mammalian cells.
  • computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.
  • Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe (a minimal one-dimensional grid sketch follows this list). A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
  • An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.
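
The one-dimensional case mentioned above lends itself to a short illustration. The sketch below is not a water-hammer model; it is a generic regular-grid stencil computation (explicit finite differences for a one-dimensional diffusion equation), of the kind described under "stencil codes" earlier, with all parameters invented for the example.

    import numpy as np

    # Grid points, spacing, time step and diffusivity are arbitrary example
    # values chosen to satisfy the usual stability condition D*dt/dx**2 <= 0.5.
    nx, dx, dt, D = 50, 1.0, 0.4, 1.0
    u = np.zeros(nx)
    u[nx // 2] = 100.0                      # an initial "hot spot" in the middle

    for _ in range(200):
        # next-neighbor stencil: u_i += D*dt/dx^2 * (u_{i-1} - 2*u_i + u_{i+1})
        u[1:-1] += D * dt / dx**2 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
        # boundary values u[0] and u[-1] are held at zero

    print(f"peak value after 200 steps: {u.max():.2f}")
    print(f"total heat (approximately conserved away from boundaries): {u.sum():.2f}")
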
Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.

In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative and quantitative methods, reviews of the literature (including scholarly), and interviews with experts, and which forms an extension of data triangulation.

Simulation environments for physics and engineering

Graphical environments to design simulations have been developed. Special care was taken to handle events (situations in which the simulation equations are not valid and have to be changed). The open project Open Source Physics was started to develop reusable libraries for simulations in Java, together with Easy Java Simulations, a complete graphical environment that generates code based on these libraries.

Simulation environments for linguistics

The Taiwanese Tone Group Parser is a simulator of Taiwanese tone sandhi acquisition. In practice, using linguistic theory to implement the Taiwanese tone group parser is a way of applying knowledge engineering techniques to build an experimental environment for the computer simulation of language acquisition. A work-in-progress version of the artificial tone group parser, which includes a knowledge base and an executable program file for Microsoft Windows (XP/Win7), can be downloaded for evaluation.

Computer simulation in practical contexts

Computer simulations are used in a wide variety of practical contexts.
The reliability of, and the trust people place in, computer simulations depends on the validity of the simulation model; verification and validation are therefore of crucial importance in the development of computer simulations. Another important aspect of computer simulations is the reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, it is a special point of attention in stochastic simulations, where the random numbers should actually be pseudo-random (reproducibly seeded) numbers. An exception to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
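
As a small illustration of the reproducibility point, the sketch below seeds a pseudo-random number generator so that a toy stochastic simulation returns the same answer on every run. The random-walk "simulation" is invented for the example.

    import random

    def run_stochastic_simulation(seed, steps=1000):
        rng = random.Random(seed)          # dedicated, explicitly seeded generator
        position = 0
        for _ in range(steps):
            position += rng.choice((-1, 1))
        return position

    print(run_stochastic_simulation(seed=42))   # same seed ...
    print(run_stochastic_simulation(seed=42))   # ... same answer, every run
    print(run_stochastic_simulation(seed=7))    # different seed, different (but repeatable) run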

Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.

Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.

In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.

Pitfalls

Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
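
A hedged sketch of such a Monte Carlo combination of distributions is shown below; the distributions, parameter names and the toy "recoverable volume" formula are invented for illustration. The point is only that the spread of the output limits how many significant figures of the result are meaningful.

    import random
    import statistics

    random.seed(1)

    def one_trial():
        area      = random.lognormvariate(2.0, 0.3)   # e.g. reservoir area
        thickness = random.lognormvariate(1.0, 0.4)   # e.g. pay thickness
        net_ratio = random.uniform(0.1, 0.3)          # known only roughly (one significant figure)
        return area * thickness * net_ratio           # a toy "recoverable volume"

    samples = [one_trial() for _ in range(100_000)]
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    print(f"mean = {mean:.4f}  (printed to four figures)")
    print(f"standard deviation = {sd:.4f}")
    print(f"relative spread = {sd / mean:.1%} -> only one or two significant figures are meaningful")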

Model calibration techniques

The following three steps should be used to produce accurate simulation models: calibration, verification, and validation. Computer simulations are good at portraying and comparing theoretical scenarios, but in order to accurately model actual case studies they have to match what is actually happening today. A base model should be created and calibrated so that it matches the area being studied. The calibrated model should then be verified to ensure that the model is operating as expected based on the inputs. Once the model has been verified, the final step is to validate the model by comparing the outputs to historical data from the study area. This can be done by using statistical techniques and ensuring an adequate R-squared value. Unless these techniques are employed, the simulation model created will produce inaccurate results and not be a useful prediction tool.
Model calibration is achieved by adjusting any available parameters in order to adjust how the model operates and simulates the process. For example, in traffic simulation, typical parameters include look-ahead distance, car-following sensitivity, discharge headway, and start-up lost time. These parameters influence driver behavior such as when and how long it takes a driver to change lanes, how much distance a driver leaves between his car and the car in front of it, and how quickly a driver starts to accelerate through an intersection. Adjusting these parameters has a direct effect on the amount of traffic volume that can traverse through the modeled roadway network by making the drivers more or less aggressive. These are examples of calibration parameters that can be fine-tuned to match characteristics observed in the field at the study location. Most traffic models have typical default values but they may need to be adjusted to better match the driver behavior at the specific location being studied.

Model verification is achieved by obtaining output data from the model and comparing them to what is expected from the input data. For example, in traffic simulation, traffic volume can be verified to ensure that actual volume throughput in the model is reasonably close to traffic volumes input into the model. Ten percent is a typical threshold used in traffic simulation to determine if output volumes are reasonably close to input volumes. Simulation models handle model inputs in different ways so traffic that enters the network, for example, may or may not reach its desired destination. Additionally, traffic that wants to enter the network may not be able to, if congestion exists. This is why model verification is a very important part of the modeling process.
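
A minimal sketch of this kind of verification check, assuming invented link names and traffic counts, might compare modeled and input volumes against the ten-percent threshold mentioned above.

    def verify_volumes(input_volume, output_volume, tolerance=0.10):
        """Return (pass/fail, relative difference) for one location."""
        relative_difference = abs(output_volume - input_volume) / input_volume
        return relative_difference <= tolerance, relative_difference

    # (input volume, model output volume) in vehicles per hour -- example values
    links = {"Main St": (1200, 1150), "Oak Ave": (800, 690)}
    for name, (vin, vout) in links.items():
        ok, diff = verify_volumes(vin, vout)
        print(f"{name}: {diff:.1%} difference -> {'acceptable' if ok else 'investigate'}")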

The final step is to validate the model by comparing the results with what is expected based on historical data from the study area. Ideally, the model should produce similar results to what has happened historically. This is typically verified by nothing more than quoting the R-squared statistic from the fit. This statistic measures the fraction of variability that is accounted for by the model. A high R-squared value does not necessarily mean the model fits the data well. Another tool used to validate models is graphical residual analysis. If model output values drastically differ from historical values, it probably means there is an error in the model. Before using the model as a base to produce additional models, it is important to verify it for different scenarios to ensure that each one is accurate. If the outputs do not reasonably match historic values during the validation process, the model should be reviewed and updated to produce results more in line with expectations. It is an iterative process that helps to produce more realistic models.
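
The following short sketch, using invented observed and modeled counts, computes the R-squared statistic and the residuals discussed above; as the text notes, the residuals deserve as much attention as the headline R-squared value.

    def r_squared(observed, modelled):
        """Fraction of variability in the observations accounted for by the model."""
        mean_obs = sum(observed) / len(observed)
        ss_res = sum((o - m) ** 2 for o, m in zip(observed, modelled))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - ss_res / ss_tot

    observed = [950, 1210, 1480, 1720, 1890]    # historical counts (illustrative)
    modelled = [900, 1250, 1430, 1800, 1850]    # model outputs for the same locations

    print(f"R-squared = {r_squared(observed, modelled):.3f}")
    print("residuals:", [m - o for o, m in zip(observed, modelled)])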

Validating traffic simulation models requires comparing traffic estimated by the model to observed traffic on the roadway and transit systems. Initial comparisons are for trip interchanges between quadrants, sectors, or other large areas of interest. The next step is to compare traffic estimated by the models to traffic counts, including transit ridership, crossing contrived barriers in the study area. These are typically called screenlines, cutlines, and cordon lines and may be imaginary or actual physical barriers. Cordon lines surround particular areas such as a city's central business district or other major activity centers. Transit ridership estimates are commonly validated by comparing them to actual patronage crossing cordon lines around the central business district.

Three sources of error can cause weak correlation during calibration: input error, model error, and parameter error. In general, input error and parameter error can be adjusted easily by the user. Model error, however, is caused by the methodology used in the model and may not be as easy to fix. Simulation models are typically built using several different modeling theories that can produce conflicting results. Some models are more generalized while others are more detailed. If model error occurs as a result, it may be necessary to adjust the model methodology to make the results more consistent.

These steps must be taken to ensure that simulation models are functioning properly and can produce realistic results. Simulation models can be used as a tool to verify engineering theories, but they are only valid if calibrated properly. Once satisfactory estimates of the parameters for all models have been obtained, the models must be checked to ensure that they adequately perform the intended functions. The validation process establishes the credibility of the model by demonstrating its ability to replicate actual traffic patterns. The importance of model validation underscores the need for careful planning, thoroughness, and accuracy in the input data collection program undertaken for this purpose. Efforts should be made to ensure that collected data are consistent with expected values. For example, in traffic analysis it is typical for a traffic engineer to perform a site visit to verify traffic counts and become familiar with traffic patterns in the area. The resulting models and forecasts will be no better than the data used for model estimation and validation.

Metabolic network modelling

From Wikipedia, the free encyclopedia
Metabolic network showing interactions between enzymes and metabolites in the Arabidopsis thaliana citric acid cycle. Enzymes and metabolites are the red dots and interactions between them are the lines.
Metabolic network model for Escherichia coli.

Metabolic network reconstruction and simulation allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology. A reconstruction breaks down metabolic pathways (such as glycolysis and the citric acid cycle) into their respective reactions and enzymes, and analyzes them within the perspective of the entire network. In simplified terms, a reconstruction collects all of the relevant metabolic information of an organism and compiles it in a mathematical model. Validation and analysis of reconstructions can allow identification of key features of metabolism such as growth yield, resource distribution, network robustness, and gene essentiality. This knowledge can then be applied to create novel biotechnology.

In general, the process to build a reconstruction is as follows:
  1. Draft a reconstruction
  2. Refine the model
  3. Convert model into a mathematical/computational representation
  4. Evaluate and debug model through experimentation

Genome-scale metabolic reconstruction

A metabolic reconstruction provides a highly mathematical, structured platform on which to understand the systems biology of metabolic pathways within an organism. The integration of biochemical metabolic pathways with rapidly available, annotated genome sequences has produced what are called genome-scale metabolic models. Simply put, these models relate metabolic genes to metabolic pathways. In general, the more information about physiology, biochemistry and genetics is available for the target organism, the better the predictive capacity of the reconstructed models. Mechanically speaking, the process of reconstructing prokaryotic and eukaryotic metabolic networks is essentially the same. Having said this, eukaryote reconstructions are typically more challenging because of the size of genomes, coverage of knowledge, and the multitude of cellular compartments. The first genome-scale metabolic model was generated in 1995 for Haemophilus influenzae. The first multicellular organism, C. elegans, was reconstructed in 1998. Since then, many reconstructions have been formed.

Organism Genes in Genome Genes in Model Reactions Metabolites Date of reconstruction
Haemophilus influenzae 1,775 296 488 343 June 1999
Escherichia coli 4,405 660 627 438 May 2000
Saccharomyces cerevisiae 6,183 708 1,175 584 February 2003
Mus musculus 28,287 473 1220 872 January 2005
Homo sapiens 21,090 3,623 3,673 -- January 2007
Mycobacterium tuberculosis 4,402 661 939 828 June 2007
Bacillus subtilis 4,114 844 1,020 988 September 2007
Synechocystis sp. PCC6803 3,221 633 831 704 October 2008
Salmonella typhimurium 4,489 1,083 1,087 774 April 2009
Arabidopsis thaliana 27,379 1,419 1,567 1,748 February 2010

Drafting a reconstruction

Resources

Because the field of metabolic reconstruction is so young, most reconstructions have been built manually. However, there are now quite a few resources that allow for the semi-automatic assembly of reconstructions, and they are widely used because of the time and effort a reconstruction requires. An initial fast reconstruction can be developed automatically using resources like PathoLogic or ERGO in combination with encyclopedias like MetaCyc, and then manually updated by using resources like Pathway Tools. These semi-automatic methods allow a fast draft to be created while still permitting the fine-tuning required as new experimental data are found. It is only in this manner that the field of metabolic reconstruction will keep up with the ever-increasing number of annotated genomes.

Databases

  • Kyoto Encyclopedia of Genes and Genomes (KEGG): a bioinformatics database containing information on genes, proteins, reactions, and pathways. The ‘KEGG Organisms’ section, which is divided into eukaryotes and prokaryotes, encompasses many organisms for which gene and DNA information can be searched by typing in the enzyme of choice.
  • BioCyc, EcoCyc, and MetaCyc: BioCyc is a collection of 3,000 pathway/genome databases (as of Oct 2013), with each database dedicated to one organism. For example, EcoCyc is a highly detailed bioinformatics database on the genome and metabolic reconstruction of Escherichia coli, including thorough descriptions of E. coli signaling pathways and regulatory network. The EcoCyc database can serve as a paradigm and model for any reconstruction. Additionally, MetaCyc, an encyclopedia of experimentally defined metabolic pathways and enzymes, contains 2,100 metabolic pathways and 11,400 metabolic reactions (Oct 2013).
  • ENZYME: An enzyme nomenclature database (part of the ExPASy proteomics server of the Swiss Institute of Bioinformatics). After a search for a particular enzyme, this resource gives the reaction that is catalyzed. ENZYME has direct links to other gene/enzyme/literature databases such as KEGG, BRENDA, and PUBMED.
  • BRENDA: A comprehensive enzyme database that allows for an enzyme to be searched by name, EC number, or organism.
  • BiGG: A knowledge base of biochemically, genetically, and genomically structured genome-scale metabolic network reconstructions.
  • metaTIGER: A collection of metabolic profiles and phylogenomic information on a taxonomically diverse range of eukaryotes, providing novel facilities for viewing and comparing the metabolic profiles between organisms.
This table quickly compares the scope of each database.
Database  Enzymes  Genes  Reactions  Pathways  Metabolites
KEGG      X        X      X          X         X
BioCyc    X        X      X          X         X
MetaCyc   X        -      X          X         X
ENZYME    X        -      X          -         X
BRENDA    X        -      X          -         X
BiGG      -        X      -          X         X

Tools for metabolic modeling

  • Pathway Tools: A bioinformatics software package that assists in the construction of pathway/genome databases such as EcoCyc. Developed by Peter Karp and associates at the SRI International Bioinformatics Research Group, Pathway Tools has several components. Its PathoLogic module takes an annotated genome for an organism and infers probable metabolic reactions and pathways to produce a new pathway/genome database. Its MetaFlux component can generate a quantitative metabolic model from that pathway/genome database using flux-balance analysis. Its Navigator component provides extensive query and visualization tools, such as visualization of metabolites, pathways, and the complete metabolic network.
  • ERGO: A subscription-based service developed by Integrated Genomics. It integrates data from every level, including genomic and biochemical data, literature, and high-throughput analysis, into a comprehensive, user-friendly network of metabolic and nonmetabolic pathways.
  • KEGGtranslator: an easy-to-use stand-alone application that can visualize and convert KEGG files (KGML formatted XML-files) into multiple output formats. Unlike other translators, KEGGtranslator supports a plethora of output formats, is able to augment the information in translated documents (e.g., MIRIAM annotations) beyond the scope of the KGML document, and amends missing components to fragmentary reactions within the pathway to allow simulations on those. KEGGtranslator converts these files to SBML, BioPAX, SIF, SBGN, SBML with qualitative modeling extension, GML, GraphML, JPG, GIF, LaTeX, etc.
  • ModelSEED: An online resource for the analysis, comparison, reconstruction, and curation of genome-scale metabolic models. Users can submit genome sequences to the RAST annotation system, and the resulting annotation can be automatically piped into the ModelSEED to produce a draft metabolic model. The ModelSEED automatically constructs a network of metabolic reactions, gene-protein-reaction associations for each reaction, and a biomass composition reaction for each genome to produce a model of microbial metabolism that can be simulated using Flux Balance Analysis.
  • MetaMerge: An algorithm for semi-automatically reconciling a pair of existing metabolic network reconstructions into a single metabolic network model.
  • CoReCo: An algorithm for automatic reconstruction of metabolic models of related species. The first version of the software used KEGG as the reaction database to link with the EC-number predictions from CoReCo. Its automatic gap filling, using atom maps of all the reactions, produces functional models ready for simulation.

Tools for literature

  • PUBMED: This is an online library developed by the National Center for Biotechnology Information, which contains a massive collection of medical journals. Using the link provided by ENZYME, the search can be directed towards the organism of interest, thus recovering literature on the enzyme and its use inside of the organism.

Methodology to draft a reconstruction

This is a visual representation of the metabolic network reconstruction process.

A reconstruction is built by compiling data from the resources above. Database tools such as KEGG and BioCyc can be used in conjunction with each other to find all the metabolic genes in the organism of interest. These genes will be compared to closely related organisms that have already developed reconstructions to find homologous genes and reactions. These homologous genes and reactions are carried over from the known reconstructions to form the draft reconstruction of the organism of interest. Tools such as ERGO, Pathway Tools and Model SEED can compile data into pathways to form a network of metabolic and non-metabolic pathways. These networks are then verified and refined before being made into a mathematical simulation.

The predictive aspect of a metabolic reconstruction hinges on the ability to predict the biochemical reaction catalyzed by a protein using that protein's amino acid sequence as an input, and to infer the structure of a metabolic network based on the predicted set of reactions. A network of enzymes and metabolites is drafted to relate sequences and function. When an uncharacterized protein is found in the genome, its amino acid sequence is first compared to those of previously characterized proteins to search for homology. When a homologous protein is found, the proteins are considered to have a common ancestor and their functions are inferred as being similar. However, the quality of a reconstruction model is dependent on its ability to accurately infer phenotype directly from sequence, so this rough estimation of protein function will not be sufficient. A number of algorithms and bioinformatics resources have been developed for refinement of sequence homology-based assignments of protein functions:
  • InParanoid: Identifies eukaryotic orthologs by looking only at in-paralogs.
  • CDD: Resource for the annotation of functional units in proteins. Its collection of domain models utilizes 3D structure to provide insights into sequence/structure/function relationships.
  • InterPro: Provides functional analysis of proteins by classifying them into families and predicting domains and important sites.
  • STRING: Database of known and predicted protein interactions.
Once proteins have been established, more information about the enzyme structure, reactions catalyzed, substrates and products, mechanisms, and more can be acquired from databases such as KEGG, MetaCyc and NC-IUBMB. Accurate metabolic reconstructions require additional information about the reversibility and preferred physiological direction of an enzyme-catalyzed reaction which can come from databases such as BRENDA or MetaCyc database.

Model refinement

An initial metabolic reconstruction of a genome is typically far from perfect due to the high variability and diversity of microorganisms. Often, metabolic pathway databases such as KEGG and MetaCyc will have "holes", meaning that there is a conversion from a substrate to a product (i.e., an enzymatic activity) for which there is no known protein in the genome that encodes the enzyme that facilitates the catalysis. What can also happen in semi-automatically drafted reconstructions is that some pathways are falsely predicted and don't actually occur in the predicted manner. Because of this, a systematic verification is made in order to make sure no inconsistencies are present and that all the entries listed are correct and accurate. Furthermore, previous literature can be researched in order to support any information obtained from one of the many metabolic reaction and genome databases. This provides an added level of assurance for the reconstruction that the enzyme and the reaction it catalyzes do actually occur in the organism.

Enzyme promiscuity and spontaneous chemical reactions can damage metabolites. This metabolite damage, and its repair or pre-emption, create energy costs that need to be incorporated into models. It is likely that many genes of unknown function encode proteins that repair or pre-empt metabolite damage, but most genome-scale metabolic reconstructions only include a fraction of all genes.

Any new reaction not present in the databases needs to be added to the reconstruction. This is an iterative process that cycles between the experimental phase and the coding phase. As new information is found about the target organism, the model will be adjusted to predict the metabolic and phenotypical output of the cell. The presence or absence of certain reactions of the metabolism will affect the amount of reactants/products that are present for other reactions within the particular pathway. This is because products in one reaction go on to become the reactants for another reaction, i.e. products of one reaction can combine with other proteins or compounds to form new proteins/compounds in the presence of different enzymes or catalysts.

Francke et al. provide an excellent example as to why the verification step of the project needs to be performed in significant detail. During a metabolic network reconstruction of Lactobacillus plantarum, the model showed that succinyl-CoA was one of the reactants for a reaction that was a part of the biosynthesis of methionine. However, an understanding of the physiology of the organism would have revealed that due to an incomplete tricarboxylic acid pathway, Lactobacillus plantarum does not actually produce succinyl-CoA, and the correct reactant for that part of the reaction was acetyl-CoA.

Therefore, systematic verification of the initial reconstruction will bring to light several inconsistencies that could adversely affect the final interpretation of the reconstruction, which aims to accurately comprehend the molecular mechanisms of the organism. Furthermore, the simulation step ensures that all the reactions present in the reconstruction are properly balanced. To sum up, a fully accurate reconstruction can lead to greater insight into the functioning of the organism of interest.

Metabolic network simulation

A metabolic network can be broken down into a stoichiometric matrix where the rows represent the compounds of the reactions, while the columns of the matrix correspond to the reactions themselves. Stoichiometry is a quantitative relationship between substrates of a chemical reaction. In order to deduce what the metabolic network suggests, recent research has centered on a few approaches, such as extreme pathways, elementary mode analysis, flux balance analysis, and a number of other constraint-based modeling methods.
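
A toy example of such a stoichiometric matrix, for an invented two-reaction chain A → B → C, might look as follows: rows are compounds, columns are reactions, and multiplying the matrix by a flux vector shows which metabolites are balanced at steady state.

    import numpy as np

    metabolites = ["A", "B", "C"]
    reactions   = ["R1: A -> B", "R2: B -> C"]

    # Entries are stoichiometric coefficients: negative for consumption,
    # positive for production.
    S = np.array([
        [-1,  0],   # A: consumed by R1
        [ 1, -1],   # B: produced by R1, consumed by R2
        [ 0,  1],   # C: produced by R2
    ])

    v = np.array([2.0, 2.0])   # a candidate flux vector (rates of R1 and R2)
    print("S v =", S @ v)      # middle entry (B) is 0: the internal metabolite is balanced;
                               # A and C would be handled by exchange fluxes in a full model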

Extreme pathways

Price, Reed, and Papin, from the Palsson lab, used singular value decomposition (SVD) of extreme pathways to understand the regulation of human red blood cell metabolism. Extreme pathways are convex basis vectors that consist of steady-state functions of a metabolic network. For any particular metabolic network, there is always a unique set of extreme pathways. Furthermore, Price, Reed, and Papin define a constraint-based approach in which, with the help of constraints such as mass balance and maximum reaction rates, it is possible to develop a "solution space" within which all feasible options fall. Then, using a kinetic model approach, a single solution that falls within the extreme pathway solution space can be determined. Thus, in their study, Price, Reed, and Papin use both constraint-based and kinetic approaches to understand human red blood cell metabolism. In conclusion, using extreme pathways, the regulatory mechanisms of a metabolic network can be studied in further detail.

Elementary mode analysis

Elementary mode analysis closely matches the approach used by extreme pathways. Similar to extreme pathways, there is always a unique set of elementary modes available for a particular metabolic network. These are the smallest sub-networks that allow a metabolic reconstruction network to function in steady state. According to Stelling (2002), elementary modes can be used to understand cellular objectives for the overall metabolic network. Furthermore, elementary mode analysis takes into account stoichiometrics and thermodynamics when evaluating whether a particular metabolic route or network is feasible and likely for a set of proteins/enzymes.

Minimal metabolic behaviors (MMBs)

In 2009, Larhlimi and Bockmayr presented a new approach called "minimal metabolic behaviors" for the analysis of metabolic networks. Like elementary modes or extreme pathways, these are uniquely determined by the network, and yield a complete description of the flux cone. However, the new description is much more compact. In contrast with elementary modes and extreme pathways, which use an inner description based on generating vectors of the flux cone, MMBs use an outer description of the flux cone. This approach is based on sets of non-negativity constraints. These can be identified with irreversible reactions, and thus have a direct biochemical interpretation. One can characterize a metabolic network by MMBs and the reversible metabolic space.

Flux balance analysis

A different technique for simulating the metabolic network is to perform flux balance analysis. This method uses linear programming, but in contrast to elementary mode analysis and extreme pathways, only a single solution results in the end. Linear programming is used to maximize the objective function under consideration, so flux balance analysis yields a single solution to the optimization problem. In a flux balance analysis approach, exchange fluxes are assigned only to those metabolites that enter or leave the network; metabolites that are consumed entirely within the network are not assigned an exchange flux value. The exchange fluxes, along with the enzymatic fluxes, can have constraints ranging from a negative to a positive value (e.g., −10 to 10).

Furthermore, this particular approach can determine whether the reaction stoichiometry is in line with predictions by providing fluxes for the balanced reactions. Flux balance analysis can also highlight the most effective and efficient pathway through the network for achieving a particular objective function. In addition, gene knockout studies can be performed using flux balance analysis: the enzyme that corresponds to the gene to be removed is given a constraint value of 0, so that the reaction it catalyzes is completely removed from the analysis.
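
A hedged, toy-sized sketch of flux balance analysis along these lines is shown below, using scipy's linear-programming routine on an invented three-reaction network; the bounds, the "biomass" objective and the knockout are all illustrative assumptions, not a real model.

    import numpy as np
    from scipy.optimize import linprog

    # Columns: v1 (uptake -> A), v2 (A -> B), v3 (B -> biomass, the objective).
    S = np.array([
        [1, -1,  0],   # metabolite A
        [0,  1, -1],   # metabolite B
    ])
    c = [0, 0, -1]                        # linprog minimizes, so maximize v3 via -v3
    bounds = [(0, 10), (0, 10), (0, 10)]  # flux constraints on each reaction

    wild_type = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("wild-type optimal biomass flux:", -wild_type.fun)

    # Knockout of the gene for v2: constrain that flux to exactly zero.
    ko_bounds = [(0, 10), (0, 0), (0, 10)]
    knockout = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=ko_bounds, method="highs")
    print("knockout optimal biomass flux:", -knockout.fun)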

Dynamic simulation and parameter estimation

In order to perform a dynamic simulation with such a network, it is necessary to construct an ordinary differential equation system that describes the rate of change in each metabolite's concentration or amount. To this end, a rate law, i.e., a kinetic equation that determines the rate of reaction based on the concentrations of all reactants, is required for each reaction. Software packages that include numerical integrators, such as COPASI or SBMLsimulator, are then able to simulate the system dynamics given an initial condition. Often these rate laws contain kinetic parameters with uncertain values, and in many cases it is desirable to estimate these parameter values with respect to given time-series data of metabolite concentrations, so that the simulated system reproduces the given data. For this purpose, the distance between the given data set and the result of the simulation, i.e., the numerically (or, in a few cases, analytically) obtained solution of the differential equation system, is computed, and the parameter values are estimated so as to minimize this distance. One step further, it may be desirable to estimate the mathematical structure of the differential equation system itself, because the real rate laws are not known for the reactions within the system under study. To this end, the program SBMLsqueezer allows automatic creation of appropriate rate laws for all reactions within the network.
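
A compact sketch of this workflow, under the assumption of an invented two-metabolite system with a single mass-action rate law and synthetic "measurements", might look as follows; packages such as COPASI or SBMLsimulator carry out the same steps for real SBML models.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize_scalar

    def rhs(t, y, k):
        a, b = y
        rate = k * a                      # rate law: v = k * [A]
        return [-rate, rate]

    def simulate(k, t_eval):
        sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [1.0, 0.0], args=(k,), t_eval=t_eval)
        return sol.y[1]                   # concentration of B over time

    # Synthetic "measurements" generated with k = 0.8 plus a little noise.
    t_data = np.linspace(0, 5, 11)
    b_data = simulate(0.8, t_data) + np.random.default_rng(0).normal(0, 0.01, t_data.size)

    # Parameter estimation: choose k so the simulation reproduces the data.
    result = minimize_scalar(lambda k: np.sum((simulate(k, t_data) - b_data) ** 2),
                             bounds=(0.01, 5.0), method="bounded")
    print(f"estimated k = {result.x:.3f} (true value used to generate the data: 0.8)")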

Synthetic accessibility

Synthetic accessibility is a simple approach to network simulation whose goal is to predict which metabolic gene knockouts are lethal. The synthetic accessibility approach uses the topology of the metabolic network to calculate the sum of the minimum number of steps needed to traverse the metabolic network graph from the inputs, those metabolites available to the organism from the environment, to the outputs, metabolites needed by the organism to survive. To simulate a gene knockout, the reactions enabled by the gene are removed from the network and the synthetic accessibility metric is recalculated. An increase in the total number of steps is predicted to cause lethality. Wunderlich and Mirny showed this simple, parameter-free approach predicted knockout lethality in E. coli and S. cerevisiae as well as elementary mode analysis and flux balance analysis in a variety of media.
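
A small sketch of the synthetic accessibility calculation, on an invented toy network, is given below; the gene names, metabolites and scoring rule are illustrative assumptions only.

    import networkx as nx

    # Invented metabolic graph: edges are reactions, tagged with the enabling gene.
    G = nx.DiGraph()
    G.add_edge("glucose", "G6P", gene="geneA")
    G.add_edge("G6P", "pyruvate", gene="geneB")
    G.add_edge("glucose", "pyruvate", gene="geneC")   # bypass route
    G.add_edge("pyruvate", "ATP", gene="geneD")

    inputs, outputs = ["glucose"], ["ATP"]

    def synthetic_accessibility(graph):
        """Sum of minimum path lengths from environmental inputs to required outputs."""
        total = 0
        for out in outputs:
            lengths = [nx.shortest_path_length(graph, src, out)
                       for src in inputs if nx.has_path(graph, src, out)]
            if not lengths:
                return float("inf")                   # an essential output is unreachable
            total += min(lengths)
        return total

    wild_type_score = synthetic_accessibility(G)
    print("wild type:", wild_type_score)

    for gene in ["geneB", "geneD"]:
        H = G.copy()
        H.remove_edges_from([(u, v) for u, v, d in G.edges(data=True) if d["gene"] == gene])
        score = synthetic_accessibility(H)
        verdict = "predicted lethal" if score > wild_type_score else "tolerated"
        print(f"knockout of {gene}: score {score} ({verdict})")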

Applications of a reconstruction

  • Several inconsistencies exist between gene, enzyme, reaction databases, and published literature sources regarding the metabolic information of an organism. A reconstruction is a systematic verification and compilation of data from various sources that takes into account all of the discrepancies.
  • The combination of relevant metabolic and genomic information of an organism.
  • Metabolic comparisons can be performed between various organisms of the same species as well as between different organisms.
  • Analysis of synthetic lethality
  • Predict adaptive evolution outcomes
  • Use in metabolic engineering for high value outputs.
Reconstructions and their corresponding models allow the formulation of hypotheses about the presence of certain enzymatic activities and the production of metabolites that can be experimentally tested, complementing the primarily discovery-based approach of traditional microbial biochemistry with hypothesis-driven research. The results of these experiments can uncover novel pathways and metabolic activities and resolve discrepancies in previous experimental data. Information about the chemical reactions of metabolism and the genetic background of various metabolic properties (sequence to structure to function) can be utilized by genetic engineers to modify organisms to produce high-value outputs, whether those products are medically relevant, like pharmaceuticals; high-value chemical intermediates, such as terpenoids and isoprenoids; or biotechnological outputs, like biofuels.

Metabolic network reconstructions and models are used to understand how an organism or parasite functions inside of the host cell. For example, if the parasite serves to compromise the immune system by lysing macrophages, then the goal of metabolic reconstruction/simulation would be to determine the metabolites that are essential to the organism's proliferation inside of macrophages. If the proliferation cycle is inhibited, then the parasite would not continue to evade the host's immune system. A reconstruction model serves as a first step to deciphering the complicated mechanisms surrounding disease. These models can also look at the minimal genes necessary for a cell to maintain virulence. The next step would be to use the predictions and postulates generated from a reconstruction model and apply it to discover novel biological functions such as drug-engineering and drug delivery techniques.

Thursday, October 4, 2018

Cell theory

From Wikipedia, the free encyclopedia
 
Human cancer cells with nuclei (specifically the DNA) stained blue. The central and rightmost cell are in interphase, so the entire nuclei are labeled. The cell on the left is going through mitosis and its DNA has condensed.

In biology, cell theory is the historic scientific theory, now universally accepted, that living organisms are made up of cells, that they are the basic structural/organizational unit of all organisms, and that all cells come from pre-existing cells. Cells are the basic unit of structure in all organisms and also the basic unit of reproduction. With continual improvements made to microscopes over time, magnification technology advanced enough to discover cells in the 17th century. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, also known as cell biology. Over a century later, many debates about cells began amongst scientists. Most of these debates involved the nature of cellular regeneration, and the idea of cells as a fundamental unit of life. Cell theory was eventually formulated in 1839. This is usually credited to Matthias Schleiden and Theodor Schwann. However, many other scientists like Rudolf Virchow contributed to the theory. It was an important step in the movement away from spontaneous generation.

The three tenets to the cell theory are as described below:
  1. All living organisms are composed of one or more cells.
  2. The cell is the basic unit of structure and organization in organisms.
  3. Cells arise from pre-existing cells.
The first of these tenets is disputed, as non-cellular entities such as viruses are sometimes considered life-forms.

Microscopes

Anton van Leeuwenhoek's microscope from the 17th century with a magnification of 300x. 
 
Robert Hooke's microscope

The discovery of the cell was made possible through the invention of the microscope. In the first century BC, Romans were able to make glass, discovering that objects appeared larger under it. In Italy during the 12th century, Salvino D'Armate made a piece of glass fit over one eye, allowing for a magnification effect in that eye. The expanded use of lenses in eyeglasses in the 13th century probably led to the wider use of simple microscopes (magnifying glasses) with limited magnification. Compound microscopes, which combine an objective lens with an eyepiece to view a real image at much higher magnification, first appeared in Europe around 1620. In 1665, Robert Hooke used a microscope about six inches long with two convex lenses inside and examined specimens under reflected light for the observations in his book Micrographia. Hooke also used a simpler microscope with a single lens for examining specimens with directly transmitted light, because this allowed for a clearer image.

Extensive microscopic study was done by Anton van Leeuwenhoek, a draper who took an interest in microscopes after seeing one while on an apprenticeship in Amsterdam in 1648. At some point before 1668, he learned how to grind lenses, which eventually led him to make his own unique microscopes. These were simple, single-lens microscopes rather than compound microscopes: the single lens was a small glass sphere, but it allowed a magnification of 270x. This was a large advance, since the maximum magnification before then was only about 50x. After Leeuwenhoek, there was not much progress in microscopes until the 1850s, two hundred years later. Carl Zeiss, a German engineer who manufactured microscopes, began to make changes to the lenses used, but the optical quality did not improve until the 1880s, when he hired Otto Schott and eventually Ernst Abbe.

Optical microscopes can focus only on objects the size of a wavelength of visible light or larger, which still restricted discoveries involving smaller objects. The development of the electron microscope in the 1920s made it possible to view objects smaller than optical wavelengths, once again changing the possibilities of science.

Discovery of cells

Drawing of the structure of cork by Robert Hooke that appeared in Micrographia.

The cell was first discovered by Robert Hooke in 1665 and is described in his book Micrographia. In this book, he gave 60 detailed "observations" of various objects under a coarse compound microscope. One observation was of very thin slices of bottle cork. Hooke discovered a multitude of tiny pores that he named "cells". The name came from the Latin cella, meaning "a small room", such as monks lived in, and cellulae, the six-sided cells of a honeycomb. However, Hooke did not know their real structure or function. What Hooke thought were cells were actually the empty cell walls of plant tissues. Because microscopes at the time had low magnification, Hooke was unable to see that there were other internal components to the cells he was observing, and he therefore did not think the "cellulae" were alive. His cell observations gave no indication of the nucleus and other organelles found in most living cells. In Micrographia, Hooke also observed a bluish mould found on leather. After studying it under his microscope, he was unable to observe the "seeds" that would have indicated how the mould was multiplying, which led him to suggest that spontaneous generation, from either natural or artificial heat, was the cause. Since this was an old Aristotelian theory still accepted at the time, others did not reject it, and it was not disproved until Leeuwenhoek later showed that generation is achieved otherwise.

Anton van Leeuwenhoek is another scientist who saw these cells soon after Hooke did. He made use of a microscope containing improved lenses that could magnify objects almost 300-fold (270x). Under these microscopes, Leeuwenhoek found motile objects. In a letter to the Royal Society on October 9, 1676, he stated that motility is a quality of life and that these were therefore living organisms. Over time, he wrote many more papers in which he described many specific forms of microorganisms. Leeuwenhoek named these "animalcules", which included protozoa and other unicellular organisms, like bacteria. Though he did not have much formal education, he was able to give the first accurate description of red blood cells, and he discovered bacteria after gaining an interest in the sense of taste, which led him to observe the tongue of an ox and then to study "pepper water" in 1676. He also found, for the first time, the sperm cells of animals and humans. Once he had discovered these types of cells, Leeuwenhoek saw that the fertilization process requires the sperm cell to enter the egg cell. This put an end to the previous theory of spontaneous generation. After reading letters by Leeuwenhoek, Hooke was the first to confirm his observations, which other contemporaries had thought unlikely.

The cells in animal tissues were observed after those in plants because animal tissues were so fragile and susceptible to tearing that it was difficult to prepare thin slices for study. Biologists believed that there was a fundamental unit to life, but were unsure what it was. It would not be until over a hundred years later that this fundamental unit was connected to cellular structure and the existence of cells in animals or plants. This conclusion was not made until the work of Henri Dutrochet. Besides stating that "the cell is the fundamental element of organization", Dutrochet also claimed that cells were not just a structural unit, but also a physiological unit.

In 1804, Karl Rudolphi and J.H.F. Link were awarded the prize of the Königliche Societät der Wissenschaft (Royal Society of Science), Göttingen, for "solving the problem of the nature of cells", meaning they were the first to prove that cells had independent cell walls. Previously, it had been thought that cells shared walls and that fluid passed between them in this way.

Cell theory

 
Theodor Schwann (1810–1882)

Credit for developing cell theory is usually given to two scientists: Theodor Schwann and Matthias Jakob Schleiden. Although Rudolf Virchow also contributed to the theory, he is not as often credited for his contributions. In 1838, Schleiden suggested that every structural part of a plant was made up of cells or the result of cells. He also suggested that cells were made by a crystallization process, either within other cells or from the outside. However, this was not an original idea of Schleiden's: he claimed the theory as his own even though Barthélemy Dumortier had stated it years before him. This crystallization process is no longer accepted in modern cell theory. In 1839, Theodor Schwann stated that, along with plants, animals are composed of cells or the products of cells. This was a major advancement in the field of biology, since little was known about animal structure up to this point compared with plants. From these conclusions about plants and animals, two of the three tenets of cell theory were postulated.

1. All living organisms are composed of one or more cells
2. The cell is the most basic unit of life

Schleiden's theory of free cell formation through crystallization was refuted in the 1850s by Robert Remak, Rudolf Virchow, and Albert Kölliker. In 1855, Rudolf Virchow added the third tenet to cell theory. In Latin, this tenet states Omnis cellula e cellula, which translates to:

3. All cells arise only from pre-existing cells

However, the idea that all cells come from pre-existing cells had in fact already been proposed by Robert Remak; it has been suggested that Virchow plagiarized Remak and did not give him credit. In 1852, Remak published observations on cell division, claiming that Schleiden and Schwann were incorrect about generation schemes. He instead argued that binary fission, which was first introduced by Dumortier, was how new animal cells were produced. Once this tenet was added, the classical cell theory was complete.

Modern interpretation

The generally accepted parts of modern cell theory include:
  1. All known living things are made up of one or more cells.
  2. All living cells arise from pre-existing cells by division.
  3. The cell is the fundamental unit of structure and function in all living organisms.
  4. The activity of an organism depends on the total activity of independent cells.
  5. Energy flow (metabolism and biochemistry) occurs within cells.
  6. Cells contain DNA, which is found specifically in the chromosomes, and RNA, which is found in the cell nucleus and cytoplasm.
  7. All cells are basically the same in chemical composition in organisms of similar species.

Modern version

The modern version of the cell theory includes the ideas that:
  • Energy flow occurs within cells.
  • Heredity information (DNA) is passed on from cell to cell.
  • All cells have the same basic chemical composition.

Opposing concepts in cell theory: history and background

The cell was first discovered by Robert Hooke in 1665 using a microscope. The first cell theory is credited to the work of Theodor Schwann and Matthias Jakob Schleiden in the 1830s. In this theory the internal contents of cells were called protoplasm and described as a jelly-like substance, sometimes called living jelly. At about the same time, colloidal chemistry began its development, and the concept of bound water emerged. A colloid is something between a solution and a suspension, in which Brownian motion is sufficient to prevent sedimentation. The idea of a semipermeable membrane, a barrier that is permeable to solvent but impermeable to solute molecules, was developed at about the same time. The term osmosis originated in 1827, and its importance to physiological phenomena was soon realized, but it was not until 1877 that the botanist Pfeffer proposed the membrane theory of cell physiology. In this view, the cell was seen as enclosed by a thin surface, the plasma membrane, and cell water and solutes such as potassium ions existed in a physical state like that of a dilute solution. In 1889 Hamburger used hemolysis of erythrocytes to determine the permeability of the membrane to various solutes. By measuring the time required for the cells to swell past their elastic limit, the rate at which solutes entered the cells could be estimated from the accompanying change in cell volume. He also found that there was an apparent nonsolvent volume of about 50% in red blood cells, and later showed that this includes water of hydration in addition to the protein and other nonsolvent components of the cells.
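
As a rough illustration of the osmometric reasoning behind such volume measurements (the notation below is introduced purely for illustration and is not taken from Hamburger's papers), the Boyle–van 't Hoff relation treats the cell as an ideal osmometer with an osmotically inactive fraction:

$$V \;=\; V_b + (V_0 - V_b)\,\frac{\pi_0}{\pi}$$

Here $V$ is the cell volume at external osmotic pressure $\pi$, $V_0$ the volume at the isotonic pressure $\pi_0$, and $V_b$ the osmotically inactive ("nonsolvent") volume; Hamburger's figure corresponds to $V_b$ being roughly half of $V_0$ for red blood cells.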

Evolution of the membrane and bulk phase theories

Two opposing concepts developed within the context of studies on osmosis, permeability, and electrical properties of cells. The first held that these properties all belonged to the plasma membrane, whereas the other predominant view was that the protoplasm was responsible for them. The membrane theory developed as a succession of ad-hoc additions and changes intended to overcome experimental hurdles. Overton (a distant cousin of Charles Darwin) first proposed the concept of a lipid (oil) plasma membrane in 1899. The major weakness of the lipid membrane was the lack of an explanation for its high permeability to water, so Nathansohn (1904) proposed the mosaic theory. In this view, the membrane is not a pure lipid layer but a mosaic of areas with lipid and areas with semipermeable gel. Ruhland refined the mosaic theory to include pores that allow the additional passage of small molecules. Since membranes are generally less permeable to anions, Leonor Michaelis concluded that ions are adsorbed to the walls of the pores, changing the permeability of the pores to ions by electrostatic repulsion. Michaelis demonstrated the membrane potential (1926) and proposed that it was related to the distribution of ions across the membrane. Harvey and Danielli (1939) proposed a lipid bilayer membrane covered on each side with a layer of protein to account for measurements of surface tension. In 1941 Boyle & Conway showed that the membrane of frog muscle was permeable to both K+ and Cl−, but apparently not to Na+, so the idea of electrical charges in the pores was unnecessary, since a single critical pore size would explain the permeability to K+, H+, and Cl− as well as the impermeability to Na+, Ca2+, and Mg2+.

Over the same period, it was shown (Procter & Wilson, 1916) that gels, which do not have a semipermeable membrane, swell in dilute solutions. Loeb (1920) also studied gelatin extensively, with and without a membrane, showing that many of the properties attributed to the plasma membrane could be duplicated in gels without a membrane. In particular, he found that an electrical potential difference between the gelatin and the outside medium could develop, depending on the H+ concentration. Criticisms of the membrane theory developed in the 1930s, based on observations such as the ability of some cells to swell and increase their surface area by a factor of 1000; a lipid layer cannot stretch to that extent without becoming a patchwork (thereby losing its barrier properties). Such criticisms stimulated continued studies on protoplasm as the principal agent determining cell permeability properties. In 1938, Fischer and Suer proposed that water in the protoplasm is not free but chemically combined (the protoplasm representing a combination of protein, salt, and water) and demonstrated the basic similarity between swelling in living tissues and the swelling of gelatin and fibrin gels. Dimitri Nasonov (1944) viewed proteins as the central components responsible for many properties of the cell, including its electrical properties. By the 1940s, the bulk phase theories were not as well developed as the membrane theories. In 1941, Brooks & Brooks published a monograph, "The Permeability of Living Cells", which rejected the bulk phase theories.
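
For context, the link Michaelis proposed between the membrane potential and the distribution of ions is captured, for a single permeant ion, by the standard Nernst equation (quoted here as a textbook relation rather than from the historical papers above):

$$E \;=\; \frac{RT}{zF}\,\ln\frac{[X]_{\text{out}}}{[X]_{\text{in}}}$$

where $R$ is the gas constant, $T$ the absolute temperature, $z$ the valence of the ion, $F$ the Faraday constant, and $[X]_{\text{out}}$, $[X]_{\text{in}}$ the ion's concentrations outside and inside the cell. For a monovalent ion at physiological temperature this amounts to roughly 61 mV per tenfold concentration ratio.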

Emergence of the steady-state membrane pump concept

With the development of radioactive tracers, it was shown that cells are not impermeable to Na+. This was difficult to explain with the membrane barrier theory, so the sodium pump was proposed to continually remove Na+ as it permeates cells. This drove the concept that cells are in a state of dynamic equilibrium, constantly using energy to maintain ion gradients. In 1935, Karl Lohmann discovered ATP and its role as a source of energy for cells, so the concept of a metabolically-driven sodium pump was proposed. The tremendous success of Hodgkin, Huxley, and Katz in the development of the membrane theory of cellular membrane potentials, with differential equations that modeled the phenomena correctly, provided even more support for the membrane pump hypothesis.
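
As one example of the kind of quantitative description this line of work produced (shown for orientation only; the full Hodgkin–Huxley model goes further, using time- and voltage-dependent conductances), the Goldman–Hodgkin–Katz voltage equation expresses the resting membrane potential in terms of ion permeabilities and concentrations:

$$V_m \;=\; \frac{RT}{F}\,\ln\frac{P_{\mathrm{K}}[\mathrm{K}^+]_{\text{out}} + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\text{out}} + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\text{in}}}{P_{\mathrm{K}}[\mathrm{K}^+]_{\text{in}} + P_{\mathrm{Na}}[\mathrm{Na}^+]_{\text{in}} + P_{\mathrm{Cl}}[\mathrm{Cl}^-]_{\text{out}}}$$

where the $P$ terms are the membrane permeabilities of the respective ions (the chloride concentrations are inverted because of its negative charge). When only one ion is permeant, the expression reduces to the Nernst equation given earlier.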

The modern view of the plasma membrane is of a fluid lipid bilayer that has protein components embedded within it. The structure of the membrane is now known in great detail, including 3D models of many of the hundreds of different proteins that are bound to the membrane. These major developments in cell physiology placed the membrane theory in a position of dominance and stimulated the imagination of most physiologists, who now apparently accept the theory as fact—there are, however, a few dissenters.

The reemergence of the bulk phase theories

In 1956, Afanasy S. Troshin published a book, The Problems of Cell Permeability, in Russian (1958 in German, 1961 in Chinese, 1966 in English) in which he found that permeability was of secondary importance in determining the patterns of equilibrium between the cell and its environment. Troshin showed that cell water decreased in solutions of galactose or urea, although these compounds did slowly permeate cells. Since the membrane theory requires an impermeant solute to sustain cell shrinkage, these experiments cast doubt on the theory. Others questioned whether the cell has enough energy to sustain the sodium/potassium pump. Such questions became even more urgent as dozens of new metabolic pumps were proposed to account for newly discovered chemical gradients.

In 1962, Gilbert Ling became the champion of the bulk phase theories and proposed his association-induction hypothesis of living cells.

Types of cells

Prokaryote cell.
 
Eukaryote cell.

Cells can be subdivided into the following subcategories:
  1. Prokaryotes: Prokaryotes are relatively small cells surrounded by the plasma membrane, with a characteristic cell wall that may differ in composition depending on the particular organism. Prokaryotes lack a nucleus (although they do have circular or linear DNA) and other membrane-bound organelles (though they do contain ribosomes). The protoplasm of a prokaryote contains the chromosomal region that appears as fibrous deposits under the microscope, and the cytoplasm. Bacteria and Archaea are the two domains of prokaryotes.
  2. Eukaryotes: Eukaryotic cells are also surrounded by the plasma membrane, but, unlike prokaryotes, they have a distinct nucleus bound by a nuclear membrane or envelope. Eukaryotic cells also contain membrane-bound organelles, such as mitochondria, chloroplasts, lysosomes, rough and smooth endoplasmic reticulum, and vacuoles. In addition, they possess organized chromosomes that store genetic material.
Animals have evolved a greater diversity of cell types in a multicellular body (100–150 different cell types), compared with 10–20 in plants, fungi, and protoctista.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...