Sunday, June 18, 2023

Mathematical and theoretical biology

Yellow chamomile head showing the Fibonacci numbers in spirals consisting of 21 (blue) and 13 (aqua). Such arrangements have been noticed since the Middle Ages and can be used to make mathematical models of a wide variety of plants.

Mathematical and theoretical biology, or biomathematics, is a branch of biology that employs theoretical analysis, mathematical models, and abstractions of living organisms to investigate the principles governing the structure, development and behavior of biological systems, as opposed to experimental biology, which conducts experiments to test and validate scientific theories. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side. Theoretical biology focuses more on the development of theoretical principles for biology, while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.

Mathematical biology aims at the mathematical representation and modeling of biological processes, using techniques and tools of applied mathematics. It can be useful in both theoretical and practical research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. This requires precise mathematical models.

Because of the complexity of living systems, theoretical biology employs several fields of mathematics and has contributed to the development of new techniques.

History

Early history

Mathematics has been used in biology since as early as the 13th century, when Fibonacci used the famous Fibonacci sequence to describe a growing population of rabbits. In the 18th century, Daniel Bernoulli applied mathematics to describe the effect of smallpox on the human population. Thomas Malthus' 1798 essay on the growth of the human population was based on the concept of exponential growth. Pierre François Verhulst formulated the logistic growth model in 1836.
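Both growth laws mentioned here have closed-form solutions that can be sketched in a few lines; the populations and rates below are illustrative values, not historical data.

```python
import math

def malthus(p0, r, t):
    """Malthusian (exponential) growth: dP/dt = r*P gives P(t) = P0 * exp(r*t)."""
    return p0 * math.exp(r * t)

def verhulst(p0, r, k, t):
    """Verhulst (logistic) growth: dP/dt = r*P*(1 - P/K) has this closed form."""
    return k / (1 + (k / p0 - 1) * math.exp(-r * t))

# With a carrying capacity K, logistic growth saturates while exponential growth does not.
for year in (0, 10, 50):
    print(year, round(malthus(10, 0.1, year), 1), round(verhulst(10, 0.1, 1000, year), 1))
```

The two curves agree while the population is far below the carrying capacity and diverge as it approaches K, which is exactly Malthus's point about geometric growth outrunning arithmetic resources.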

Fritz Müller described the evolutionary benefits of what is now called Müllerian mimicry in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be (unless one counts Malthus's earlier discussion of the effects of population growth that influenced Charles Darwin: Malthus argued that growth would be exponential, "geometric" in his word, while resources, the environment's carrying capacity, could only grow arithmetically).

The term "theoretical biology" was first used as a monograph title by Johannes Reinke in 1901, and soon after by Jakob von Uexküll in 1920. One founding text is considered to be On Growth and Form (1917) by D'Arcy Thompson, and other early pioneers include Ronald Fisher, Hans Leo Przibram, Vito Volterra, Nicolas Rashevsky and Conrad Hal Waddington.

Recent growth

Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include:

  • The rapid growth of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools
  • Recent development of mathematical tools such as chaos theory to help understand complex, non-linear mechanisms in biology
  • An increase in computing power, which facilitates calculations and simulations not previously possible
  • An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research

Areas of research

Several areas of specialized research in mathematical and theoretical biology, along with links to related projects at various universities, are presented concisely in the following subsections, supported by a large body of validating references from the many published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear mechanisms, as it is increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models.

Abstract relational biology

Abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems, introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization.

Other approaches include the notion of autopoiesis developed by Maturana and Varela, Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.

Algebraic biology

Algebraic biology (also known as symbolic systems biology) applies the algebraic methods of symbolic computation to the study of biological problems, especially in genomics, proteomics, analysis of molecular structures and study of genes.

Complex systems biology

An elaboration of systems biology aiming to understand the more complex life processes has been developed since 1970 in connection with molecular set theory, relational biology and algebraic biology.

Computer models and automata theory

A monograph on this topic summarizes an extensive amount of published research in this area up to 1986, including subsections in the following areas: computer modeling in biology and medicine, arterial system models, neuron models, biochemical and oscillation networks, quantum automata, quantum computers in molecular biology and genetics, cancer modelling, neural nets, genetic networks, abstract categories in relational biology, metabolic-replication systems, category theory applications in biology and medicine, automata theory, cellular automata, tessellation models and complete self-reproduction, chaotic systems in organisms, relational biology and organismic theories.

Modeling cell and molecular biology

This area has received a boost due to the growing importance of molecular biology.

  • Mechanics of biological tissues
  • Theoretical enzymology and enzyme kinetics
  • Cancer modelling and simulation
  • Modelling the movement of interacting cell populations
  • Mathematical modelling of scar tissue formation
  • Mathematical modelling of intracellular dynamics
  • Mathematical modelling of the cell cycle
  • Mathematical modelling of apoptosis

Modelling physiological systems

Computational neuroscience

Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.

Evolutionary biology

Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology.

Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. Most population geneticists consider the appearance of new alleles by mutation, the appearance of new genotypes by recombination, and changes in the frequencies of existing alleles and genotypes at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development of coalescent theory is phylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics. Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic.
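The single-locus bookkeeping at the heart of these models can be sketched directly: the standard selection recurrence updates an allele frequency from genotype fitnesses. The fitness values below are illustrative, not from any particular study.

```python
def next_allele_freq(p, w_aa, w_ab, w_bb):
    """One generation of viability selection at a single diallelic locus
    under random mating; p is the frequency of allele A, and w_* are
    the fitnesses of genotypes AA, Aa and aa."""
    q = 1.0 - p
    mean_w = p * p * w_aa + 2 * p * q * w_ab + q * q * w_bb
    return (p * p * w_aa + p * q * w_ab) / mean_w

# A weakly beneficial allele rises toward fixation under directional selection.
p = 0.01
for _ in range(500):
    p = next_allele_freq(p, 1.10, 1.05, 1.00)
```

With neutral fitnesses the frequency is unchanged (Hardy–Weinberg), while any directional fitness ordering drives the allele deterministically toward fixation; stochastic versions of the same recurrence underlie coalescent-based models.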

Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far as 1798 when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlap with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions.
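The Lotka–Volterra equations mentioned above can be integrated numerically in a few lines. This forward-Euler sketch uses illustrative parameter values; a production integrator would use an adaptive ODE solver.

```python
def lotka_volterra(prey, pred, alpha, beta, delta, gamma, dt, steps):
    """Forward-Euler integration of the predator-prey equations:
    dx/dt = alpha*x - beta*x*y,  dy/dt = delta*x*y - gamma*y
    (x = prey, y = predators)."""
    for _ in range(steps):
        dx = (alpha * prey - beta * prey * pred) * dt
        dy = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + dx, pred + dy
    return prey, pred

# Integrate for 5 time units; the populations oscillate rather than settling.
x, y = lotka_volterra(10.0, 5.0, 1.1, 0.4, 0.1, 0.4, 0.001, 5000)
```

The same structure (a small system of coupled nonlinear ODEs stepped through time) reappears in the epidemiological compartment models mentioned above, with infection and recovery terms in place of predation.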

In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics.
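A minimal sketch of evolutionary game dynamics is the replicator equation, in which a strategy's frequency grows when its payoff exceeds the population average. The Hawk–Dove payoffs below (resource value V = 2, fight cost C = 4) are a textbook illustration, not taken from Maynard Smith and Price's paper.

```python
def replicator_step(p, payoff, dt=0.01):
    """One Euler step of the replicator equation for a 2-strategy game;
    p is the frequency of strategy 0 and payoff is the 2x2 payoff matrix."""
    f0 = payoff[0][0] * p + payoff[0][1] * (1 - p)   # payoff to strategy 0
    f1 = payoff[1][0] * p + payoff[1][1] * (1 - p)   # payoff to strategy 1
    fbar = p * f0 + (1 - p) * f1                      # population mean payoff
    return p + dt * p * (f0 - fbar)

# Hawk-Dove with V=2, C=4: payoffs (V-C)/2 = -1, V = 2, 0, V/2 = 1.
hawk_dove = [[-1.0, 2.0], [0.0, 1.0]]
p = 0.1
for _ in range(10000):
    p = replicator_step(p, hawk_dove)
# The dynamics converge to the mixed evolutionarily stable strategy at V/C = 0.5.
```

Adaptive dynamics refines this picture by letting the strategy set itself evolve through small mutational steps rather than fixing it in advance.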

Mathematical biophysics

The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments.

The following is a list of mathematical descriptions and their assumptions.

Deterministic processes (dynamical systems)

A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space.

Stochastic processes (random dynamical systems)

A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution.
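One standard way to realize such a random mapping is Gillespie's stochastic simulation algorithm, shown here for a simple birth-death process; the rates and initial population are illustrative.

```python
import random

def gillespie_birth_death(n0, birth, death, t_max, rng):
    """Gillespie's exact stochastic simulation of a birth-death process:
    each individual gives birth at rate `birth` and dies at rate `death`."""
    t, n = 0.0, n0
    while t < t_max and n > 0:
        total = (birth + death) * n          # total event rate
        t += rng.expovariate(total)          # exponentially distributed waiting time
        if rng.random() < birth / (birth + death):
            n += 1                           # birth event
        else:
            n -= 1                           # death event
    return n

# Repeated runs give a distribution of outcomes, not a single trajectory.
rng = random.Random(42)
samples = [gillespie_birth_death(50, 1.0, 1.0, 1.0, rng) for _ in range(200)]
```

Unlike the deterministic case, two runs from the same initial condition generally end in different states; the ensemble of runs approximates the probability distribution of the state.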

Spatial modelling

One classic work in this area is Alan Turing's paper on morphogenesis entitled The Chemical Basis of Morphogenesis, published in 1952 in the Philosophical Transactions of the Royal Society.
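As a hedged sketch of the reaction-diffusion idea, the update below couples a Gierer–Meinhardt-style activator-inhibitor reaction (not Turing's original equations) to discrete diffusion; all parameter values are illustrative.

```python
def laplacian(v):
    """Discrete 1D Laplacian with periodic boundary conditions."""
    n = len(v)
    return [v[(i - 1) % n] - 2 * v[i] + v[(i + 1) % n] for i in range(n)]

def step(a, b, da, db, dt):
    """One Euler step: the activator a is autocatalytic and decays; the
    inhibitor b is produced by a and decays; b diffuses faster than a."""
    la, lb = laplacian(a), laplacian(b)
    new_a = [a[i] + dt * (da * la[i] + a[i] * a[i] / b[i] - a[i]) for i in range(len(a))]
    new_b = [b[i] + dt * (db * lb[i] + a[i] * a[i] - b[i]) for i in range(len(b))]
    return new_a, new_b

# At the homogeneous steady state (a = b = 1) the update leaves the fields
# unchanged; patterns can emerge only when small perturbations grow, which
# requires the inhibitor to diffuse much faster than the activator.
a, b = [1.0] * 32, [1.0] * 32
a, b = step(a, b, 0.01, 0.5, 0.1)
```

Turing's insight was precisely this instability: diffusion, normally a smoothing process, can destabilize a uniform state when two species spread at sufficiently different rates.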

Mathematical methods

A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or at equilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur.

Molecular set theory

Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine. In a more general sense, MST is the theory of molecular categories defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and the formulation of clinical biochemistry problems in mathematical formulations of pathological, biochemical changes of interest to Physiology, Clinical Biochemistry and Medicine.

Organizational biology

Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea.

For example, abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems, introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization.

Model example: the cell cycle

The eukaryotic cell cycle is very complex and is one of the most studied topics, since its misregulation leads to cancers. It is arguably a good example of a mathematical model: it deals with relatively simple calculus yet gives valid results. Two research groups have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006).

By means of a system of ordinary differential equations these models show the change in time (dynamical system) of the protein inside a single typical cell; this type of model is called a deterministic process (whereas a model describing a statistical distribution of protein concentrations in a population of cells is called a stochastic process).

To obtain these equations, an iterative series of steps must be carried out. First, the several models and observations are combined to form a consensus diagram, and appropriate kinetic laws are chosen to write the differential equations, such as rate kinetics for stoichiometric reactions, Michaelis–Menten kinetics for enzyme-substrate reactions and Goldbeter–Koshland kinetics for ultrasensitive transcription factors. Next, the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size.

To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a starting vector (list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments.
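The time-stepping just described can be sketched as forward-Euler integration of a toy two-variable system. This is not any published cell-cycle model: the Michaelis–Menten degradation term, the linear activation term, and every parameter value are illustrative assumptions.

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law for an enzyme-catalysed reaction."""
    return vmax * s / (km + s)

def simulate(cyclin0, kinase0, dt, steps):
    """Euler-integrate a toy cyclin/kinase pair: constant cyclin synthesis,
    kinase-driven Michaelis-Menten degradation, linear kinase turnover."""
    c, k = cyclin0, kinase0
    history = []
    for _ in range(steps):
        dc = 0.05 - michaelis_menten(c, vmax=0.1, km=0.5) * k   # synthesis - degradation
        dk = 0.02 * c - 0.03 * k                                 # activation - inactivation
        c, k = c + dc * dt, k + dk * dt
        history.append((c, k))
    return history

# Starting vector (0.1, 0.1), step size dt = 0.01, integrated for 500 time units.
traj = simulate(0.1, 0.1, 0.01, 50000)
```

Real cell-cycle models follow the same pattern with ten or more coupled variables, and stiff solvers replace the fixed-step Euler update used here for clarity.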

[Figure: cell cycle bifurcation diagram]

In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as a vector field, where each vector describes the change (in the concentrations of two or more proteins), determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: a stable point, called a sink, that attracts in all directions (forcing the concentrations to settle at a certain value); an unstable point, either a source or a saddle point, which repels (forcing the concentrations to change away from a certain value); and a limit cycle, a closed trajectory toward which several trajectories spiral (making the concentrations oscillate).

A better representation, which handles the large number of variables and parameters, is a bifurcation diagram, using bifurcation theory. The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point; once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations. The cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently. Once the phase has changed at a bifurcation event (a cell cycle checkpoint), the system cannot go back to the previous levels: at the current mass the vector field is profoundly different, and the mass cannot be reversed back through the bifurcation event, making the checkpoint irreversible. In particular, the S and M checkpoints are regulated by means of special bifurcations called a Hopf bifurcation and an infinite period bifurcation.

Effects of ionizing radiation in spaceflight

The Phantom Torso, as seen here in the Destiny laboratory on the International Space Station (ISS), is designed to measure the effects of radiation on organs inside the body by using a torso that is similar to those used to train radiologists on Earth. The torso is equivalent in height and weight to an average adult male. It contains radiation detectors that will measure, in real-time, how much radiation the brain, thyroid, stomach, colon, and heart and lung area receive on a daily basis. The data will be used to determine how the body reacts to and shields its internal organs from radiation, which will be important for longer duration space flights.

Astronauts are exposed to approximately 50–2,000 millisieverts (mSv) on six-month missions to the International Space Station (ISS), the Moon and beyond. The risk of cancer caused by ionizing radiation is well documented at radiation doses beginning at 100 mSv and above.

Related radiological effect studies have shown that survivors of the atomic bomb explosions in Hiroshima and Nagasaki, nuclear reactor workers and patients who have undergone therapeutic radiation treatments have received low-linear energy transfer (LET) radiation (x-rays and gamma rays) doses in the same 50-2,000 mSv range.

Composition of space radiation

While in space, astronauts are exposed to radiation which is mostly composed of high-energy protons, helium nuclei (alpha particles), and high-atomic-number ions (HZE ions), as well as secondary radiation from nuclear reactions from spacecraft parts or tissue.

The ionization patterns in molecules, cells and tissues, and the resulting biological effects, are distinct from those of typical terrestrial radiation (x-rays and gamma rays, which are low-LET radiation). Galactic cosmic rays (GCRs) from outside the Solar System consist mostly of highly energetic protons with a small component of HZE ions.

Prominent HZE ions contribute significantly to the dose equivalent: GCR energy spectra peak at median energies up to 1,000 MeV/amu, with nuclei reaching energies up to 10,000 MeV/amu.

Uncertainties in cancer projections

One of the main roadblocks to interplanetary travel is the risk of cancer caused by radiation exposure. The largest contributors to this roadblock are: (1) The large uncertainties associated with cancer risk estimates, (2) The unavailability of simple and effective countermeasures and (3) The inability to determine the effectiveness of countermeasures. Operational parameters that need to be optimized to help mitigate these risks include:

  • length of space missions
  • crew age
  • crew sex
  • shielding
  • biological countermeasures

Major uncertainties

  • effects on biological damage related to differences between space radiation and x-rays
  • dependence of risk on dose-rates in space related to the biology of DNA repair, cell regulation and tissue responses
  • predicting solar particle events (SPEs)
  • extrapolation from experimental data to humans and between human populations
  • individual radiation sensitivity factors (genetic, epigenetic, dietary or "healthy worker" effects)

Minor uncertainties

  • data on galactic cosmic ray environments
  • physics of shielding assessments related to transmission properties of radiation through materials and tissue
  • microgravity effects on biological responses to radiation
  • errors in human data (statistical, dosimetry or recording inaccuracies)

Quantitative methods have been developed to propagate uncertainties that contribute to cancer risk estimates. The contribution of microgravity effects on space radiation has not yet been estimated, but it is expected to be small. The effects of changes in oxygen levels or in immune dysfunction on cancer risks are largely unknown and are of great concern during space flight.

Types of cancer caused by radiation exposure

Studies are being conducted on populations accidentally exposed to radiation (such as Chernobyl, production sites, and Hiroshima and Nagasaki). These studies show strong evidence for cancer morbidity as well as mortality risks at more than 12 tissue sites. The largest risks for the adults who have been studied include several types of leukemia, including myeloid leukemia and acute lymphatic lymphoma, as well as tumors of the lung, breast, stomach, colon, bladder and liver. Inter-sex variations are very likely due to differences in the natural incidence of cancer in males and females. Another variable is the additional risk for cancer of the breast, ovaries and lungs in females. There is also evidence of a declining risk of cancer caused by radiation with increasing age, but the magnitude of this reduction above the age of 30 is uncertain.

It is unknown whether high-LET radiation could cause the same types of tumors as low-LET radiation, but differences should be expected.

The ratio of a dose of high-LET radiation to a dose of x-rays or gamma rays that produces the same biological effect is called the relative biological effectiveness (RBE) factor. The types of tumors in humans exposed to space radiation may differ from those in humans exposed to low-LET radiation; this is suggested by studies in which mice irradiated with neutrons show RBEs that vary with tissue type and strain.

Measured rate of cancer among astronauts

The measured rate of cancer among astronauts is constrained by limited statistics. A study published in Scientific Reports examined 301 U.S. astronauts and 117 Soviet and Russian cosmonauts, and found no measurable increase in cancer mortality compared to the general population, as reported by LiveScience.

An earlier 1998 study came to similar conclusions, with no statistically significant increase in cancer among astronauts compared to the reference group.

Approaches for setting acceptable risk levels

The various approaches to setting acceptable levels of radiation risk are summarized below:

[Figure: Comparison of radiation doses, including the amount detected on the trip from Earth to Mars by the RAD on the MSL (2011–2013).]
  • Unlimited Radiation Risk - NASA management, astronauts' families, and taxpayers would find this approach unacceptable.
  • Comparison to Occupational Fatalities in Less-safe Industries - The life-loss from attributable radiation cancer death is less than that from most other occupational deaths. At this time, this comparison would also be very restrictive on ISS operations because of continued improvements in ground-based occupational safety over the last 20 years.
  • Comparison to Cancer Rates in General Population - The number of years of life lost from radiation-induced cancer deaths can be significantly larger than from cancer deaths in the general population, which often occur late in life (beyond age 70) and with significantly fewer years of life lost.
  • Doubling Dose for 20 Years Following Exposure - Provides a roughly equivalent comparison based on life-loss from other occupational risks or background cancer fatalities during a worker's career; however, this approach negates the role of mortality effects later in life.
  • Use of Ground-based Worker Limits - Provides a reference point equivalent to the standard that is set on Earth, and recognizes that astronauts face other risks. However, ground workers remain well below dose limits, and are largely exposed to low-LET radiation where the uncertainties of biological effects are much smaller than for space radiation.

NCRP Report No. 153 provides a more recent review of cancer and other radiation risks. This report also identifies and describes the information needed to make radiation protection recommendations beyond LEO, contains a comprehensive summary of the current body of evidence for radiation-induced health risks and also makes recommendations on areas requiring future experimentation.

Current permissible exposure limits

Career cancer risk limits

Astronauts' radiation exposure limit is not to exceed 3% of the risk of exposure-induced death (REID) from fatal cancer over their career. It is NASA's policy to ensure a 95% confidence level (CL) that this limit is not exceeded. These limits are applicable to all missions in low Earth orbit (LEO) as well as lunar missions that are less than 180 days in duration. In the United States, the legal occupational exposure limit for adult workers is an effective dose of 50 mSv annually.

Cancer risk to dose relationship

The relationship between radiation exposure and risk is both age- and sex-specific due to latency effects and differences in tissue types, sensitivities, and life spans between sexes. These relationships are estimated using methods recommended by the NCRP and more recent radiation epidemiology information.

The principle of As Low As Reasonably Achievable

The as low as reasonably achievable (ALARA) principle is a legal requirement intended to ensure astronaut safety. An important function of ALARA is to ensure that astronauts do not approach radiation limits and that such limits are not considered as "tolerance values." ALARA is especially important for space missions in view of the large uncertainties in cancer and other risk projection models. Mission programs and terrestrial occupational procedures resulting in radiation exposures to astronauts are required to find cost-effective approaches to implement ALARA.

Evaluating career limits

Organ (T) Tissue weighting factor (wT)
Gonads 0.20
Bone Marrow (red) 0.12
Colon 0.12
Lung 0.12
Stomach 0.12
Bladder 0.05
Breast 0.05
Liver 0.05
Esophagus 0.05
Thyroid 0.05
Skin 0.01
Bone Surface 0.01
Remainder* 0.05
*Adrenals, brain, upper intestine, small intestine,
kidney, muscle, pancreas, spleen, thymus and uterus.

The risk of cancer is calculated by using radiation dosimetry and physics methods.

For the purpose of determining radiation exposure limits at NASA, the probability of fatal cancer is calculated as shown below:

  1. The body is divided into a set of sensitive tissues, and each tissue, T, is assigned a weight, wT, according to its estimated contribution to cancer risk.[19]
  2. The absorbed dose, D_T, delivered to each tissue is determined from measured dosimetry. For the purpose of estimating radiation risk to an organ, the quantity characterizing the ionization density is the LET (keV/μm).
  3. For a given interval of LET, between L and L + ΔL, the dose-equivalent risk (in units of sievert) to a tissue, T, is calculated as

     H_T(L) = Q(L) D_T(L),

     where the quality factor, Q(L), is obtained according to the International Commission on Radiological Protection (ICRP).
  4. The average risk to a tissue, T, due to all types of radiation contributing to the dose is given by

     H_T = Σ_L Q(L) D_T(L),

     or, since D_T(L) = F_T(L) · L, where F_T(L) is the fluence of particles with LET = L traversing the organ,

     H_T = Σ_L Q(L) F_T(L) L.
  5. The effective dose is a summation over radiation type and tissue using the tissue weighting factors, w_T:

     E = Σ_T w_T H_T.
  6. For a mission of duration t, the effective dose is a function of time, E(t), and the effective dose for mission i is

     E_i = ∫ (dE/dt') dt', integrated over the mission duration t.
  7. The effective dose is used to scale the mortality rate for radiation-induced death from the Japanese survivor data, applying the average of the multiplicative and additive transfer models for solid cancers and the additive transfer model for leukemia by applying life-table methodologies that are based on U.S. population data for background cancer and all causes of death mortality rates. A dose-dose rate effectiveness factor (DDREF) of 2 is assumed.
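Steps 3-5 above can be sketched for the simplified case of a single radiation quality per tissue, using the tissue weighting factors from the table; the uniform dose and the quality factor in the example are made-up inputs, not mission data.

```python
# ICRP-style tissue weighting factors, as listed in the table above; they sum to 1.
TISSUE_WEIGHTS = {
    "gonads": 0.20, "bone_marrow": 0.12, "colon": 0.12, "lung": 0.12,
    "stomach": 0.12, "bladder": 0.05, "breast": 0.05, "liver": 0.05,
    "esophagus": 0.05, "thyroid": 0.05, "skin": 0.01, "bone_surface": 0.01,
    "remainder": 0.05,
}

def dose_equivalent(absorbed_dose_gy, quality_factor):
    """Step 3 for one LET component: H_T = Q(L) * D_T, in sievert."""
    return absorbed_dose_gy * quality_factor

def effective_dose(tissue_doses_gy, quality_factor):
    """Step 5 for a single radiation type: E = sum over tissues of w_T * H_T."""
    return sum(TISSUE_WEIGHTS[t] * dose_equivalent(d, quality_factor)
               for t, d in tissue_doses_gy.items())

# A uniform whole-body absorbed dose of 0.1 Gy with Q = 2 gives E = 0.2 Sv,
# because the tissue weights sum to one.
uniform = {t: 0.1 for t in TISSUE_WEIGHTS}
```

The full calculation replaces the single quality factor with a sum over LET intervals per tissue, as in step 4.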

Evaluating cumulative radiation risks

The cumulative cancer fatality risk (%REID) to an astronaut for occupational radiation exposures, N, is found by applying life-table methodologies; at small values of %REID it can be approximated by summing over the tissue-weighted effective doses, E_i:

%REID ≈ Σ_{i=1..N} E_i R_0(a_i),

where R_0 are the age- and sex-specific radiation mortality rates per unit dose.

For organ dose calculations, NASA uses the model of Billings et al. to represent the self-shielding of the human body in a water-equivalent mass approximation. Consideration of the orientation of the human body relative to vehicle shielding should be made if it is known, especially for SPEs.

Confidence levels for career cancer risks are evaluated using methods specified by the NCRP in Report No. 126. These levels were modified to account for the uncertainty in quality factors and space dosimetry.

The uncertainties that were considered in evaluating the 95% confidence levels are the uncertainties in:

  • Human epidemiology data, including uncertainties in
    • statistics limitations of epidemiology data
    • dosimetry of exposed cohorts
    • bias, including misclassification of cancer deaths, and
    • the transfer of risk across populations.
  • The DDREF factor that is used to scale acute radiation exposure data to low-dose and dose-rate radiation exposures.
  • The radiation quality factor (Q) as a function of LET.
  • Space dosimetry

The so-called "unknown uncertainties" from the NCRP report No. 126 are ignored by NASA.

Models of cancer risks and uncertainties

Life-table methodology

The double-detriment life-table approach is recommended by the NCRP for measuring radiation cancer mortality risks. The age-specific mortality of a population is followed over its entire life span, with competing risks from radiation and all other causes of death described.

For a homogeneous population receiving an effective dose E at age a_E, the probability of dying in the age interval from a to a+1 is described by the background mortality rate for all causes of death, M(a), and the radiation cancer mortality rate, m(E, a_E, a), as:

q(a) = M(a) + m(E, a_E, a).

The survival probability to age a, following an exposure E at age a_E, is:

S(a) = Π_{u=a_E..a-1} [1 − M(u) − m(E, a_E, u)].

The excess lifetime risk (ELR - the increased probability that an exposed individual will die from cancer) is defined by the difference in the conditional survival probabilities for the exposed and the unexposed groups.

A minimum latency time of 10 years is often used for low-LET radiation. Alternative assumptions should be considered for high-LET radiation. The REID (the lifetime risk that an individual in the population will die from cancer caused by the radiation exposure) is defined by:

REID = Σ_{a=a_E} m(E, a_E, a) S(a).

Generally, the value of the REID exceeds the value of the ELR by 10-20%.
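The life-table bookkeeping can be sketched numerically: survival to each age is the running product of one minus the competing mortality rates, and the REID is the summed product of the radiation mortality rate and the survival probability. The constant rates below are made-up inputs; real calculations use age-varying U.S. population rates.

```python
def survival(m_background, m_radiation):
    """S(a): probability of surviving from the exposure age to each later age,
    given per-year background and radiation mortality rates."""
    s, out = 1.0, []
    for mb, mr in zip(m_background, m_radiation):
        out.append(s)
        s *= max(0.0, 1.0 - mb - mr)
    return out

def reid(m_background, m_radiation):
    """REID: sum over ages of the radiation mortality rate times survival."""
    s = survival(m_background, m_radiation)
    return sum(mr * si for mr, si in zip(m_radiation, s))

# Toy rates: constant 2% per year background mortality, 0.1% per year radiation
# cancer mortality, followed for 60 years after exposure.
mb = [0.02] * 60
mr = [0.001] * 60
r = reid(mb, mr)
```

The competing-risk structure is visible in the code: a high background mortality shrinks the survival factors and therefore the radiation-attributable risk, which is why the REID depends on age at exposure.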

The average loss of life expectancy, LLE, in the population is defined as the difference between the life expectancy of the unexposed population and that of the exposed population.

The loss of life expectancy among exposure-induced deaths (LLE-REID) is defined as the LLE divided by the REID.

Uncertainties in low-LET epidemiology data

The low-LET mortality rate per sievert, m_i, is written

m_i = m_0 Π_α x_α,

where m_0 is the baseline mortality rate per sievert and the x_α are quantiles (random variables) whose values are sampled from associated probability distribution functions (PDFs), P(X_α).

NCRP, in Report No. 126, defines the following subjective PDFs, P(X_α), for each factor that contributes to the acute low-LET risk projection:

  1. Pdosimetry is the random and systematic errors in the estimation of the doses received by atomic-bomb blast survivors.
  2. Pstatistical is the distribution in uncertainty in the point estimate of the risk coefficient, r0.
  3. Pbias is any bias resulting for over- or under-reporting cancer deaths.
  4. Ptransfer is the uncertainty in the transfer of cancer risk following radiation exposure from the Japanese population to the U.S. population.
  5. PDr is the uncertainty in the knowledge of the extrapolation of risks to low dose and dose-rates, which are embodied in the DDREF.
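Combining these factors, the uncertain rate is conventionally written as a product of the sampled quantiles, with the dose- and dose-rate factor entering as a divisor (a hedged sketch of the NCRP Report No. 126 form):

```latex
m_i = m_0 \, \frac{x_{\text{dosimetry}} \; x_{\text{statistical}} \; x_{\text{bias}} \; x_{\text{transfer}}}{x_{Dr}}
```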

Risk in context of exploration mission operational scenarios

The accuracy of galactic cosmic ray environmental models, transport codes, and nuclear interaction cross sections allows NASA to predict the space environments and organ exposures that may be encountered on long-duration space missions. The lack of knowledge of the biological effects of radiation exposure raises major questions about risk prediction.

The cancer risk projection for space missions is found by folding predictions of the tissue-weighted LET spectra behind spacecraft shielding with the radiation mortality rate to form a rate for each trial J.

Alternatively, particle-specific energy spectra, Fj(E), for each ion, j, can be used in the folding.
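The folding of an LET spectrum with a mortality-rate function amounts to a weighted integral, which can be sketched as a numerical quadrature. All inputs below are illustrative placeholders, not NASA's actual spectra or rate coefficients:

```python
def fold_spectrum(let_grid, fluence_spectrum, mortality_rate_per_let):
    """Trapezoidal fold of a tissue-weighted LET spectrum F(L) with a
    radiation mortality rate m(L): rate = integral of F(L) * m(L) dL.

    let_grid, fluence_spectrum and mortality_rate_per_let are parallel
    lists sampling L, F(L) and m(L) on the same grid."""
    total = 0.0
    for i in range(len(let_grid) - 1):
        dl = let_grid[i + 1] - let_grid[i]
        f_lo = fluence_spectrum[i] * mortality_rate_per_let[i]
        f_hi = fluence_spectrum[i + 1] * mortality_rate_per_let[i + 1]
        total += 0.5 * (f_lo + f_hi) * dl  # trapezoid rule on each interval
    return total
```

The same quadrature applies to the particle-specific energy spectra Fj(E), with energy replacing LET as the integration variable.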

The resulting rate is then inserted into the expression for the REID.

Related probability distribution functions (PDFs) are grouped together into a combined probability distribution function, Pcmb(x); the PDFs of the normal form related to the risk coefficient (the dosimetry, bias, and statistical uncertainties) are combined in this way. After a sufficient number of trials has been completed (approximately 10^5), the REID estimates are binned, and the median values and confidence intervals are found.
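The trial loop can be sketched as a Monte Carlo over multiplicative uncertainty factors. Every distribution shape and parameter below is an illustrative placeholder, not one of the actual subjective PDFs of NCRP Report No. 126:

```python
import random
import statistics

def sample_quantiles(rng):
    """Sample illustrative uncertainty factors (hypothetical PDFs)."""
    x_dosimetry = rng.lognormvariate(0.0, 0.1)    # dose-estimation errors
    x_statistical = rng.gauss(1.0, 0.15)          # risk-coefficient uncertainty
    x_bias = rng.uniform(0.9, 1.1)                # reporting bias
    x_transfer = rng.uniform(0.7, 1.3)            # population transfer
    x_ddref = rng.uniform(1.0, 3.0)               # DDREF, entering as a divisor
    return x_dosimetry * x_statistical * x_bias * x_transfer / x_ddref

def reid_distribution(baseline_reid, n_trials=100_000, seed=1):
    """Bin the trial REID values and report median and 95% interval."""
    rng = random.Random(seed)
    trials = sorted(baseline_reid * sample_quantiles(rng)
                    for _ in range(n_trials))
    median = statistics.median(trials)
    lo = trials[int(0.025 * n_trials)]
    hi = trials[int(0.975 * n_trials)]
    return median, (lo, hi)
```

With a fixed seed the sampling is reproducible, which helps when comparing risk projections across model variants.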

The chi-squared (χ2) test is used to determine whether two separate PDFs, denoted p1(Ri) and p2(Ri), are significantly different. Each p(Ri) follows a Poisson distribution whose variance equals its mean.

The χ2 test for n degrees of freedom characterizing the dispersion between the two distributions is

χ2 = Σi [p1(Ri) − p2(Ri)]^2 / (σ1i^2 + σ2i^2),

where the sum runs over the n bins and σ1i^2 and σ2i^2 are the variances of p1(Ri) and p2(Ri).

The probability, P(n, χ2), that the two distributions are the same is calculated once χ2 is determined.
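The dispersion statistic can be sketched numerically. With Poisson-distributed bin contents, each bin's variance is approximately its mean, so the per-bin denominator becomes p1(Ri) + p2(Ri). This is an illustrative sketch of the standard two-histogram chi-squared, not NASA's exact implementation:

```python
def chi_squared(p1, p2):
    """Chi-squared dispersion between two binned distributions whose bin
    contents are Poisson-distributed (variance approximately equal to the
    mean, so sigma1^2 + sigma2^2 ~ p1_i + p2_i)."""
    chi2 = 0.0
    for a, b in zip(p1, p2):
        var = a + b                       # combined Poisson variance
        if var > 0:                       # skip empty bin pairs
            chi2 += (a - b) ** 2 / var
    return chi2
```

A large χ2 relative to the number of bins indicates that the two distributions differ significantly.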

Radiation carcinogenesis mortality rates

Age- and sex-dependent mortality rates per unit dose, multiplied by the radiation quality factor and reduced by the DDREF, are used for projecting lifetime cancer fatality risks. The rates are estimated for acute gamma-ray exposures, and the additivity of the effects of each component in a radiation field is also assumed.

Rates are approximated using data gathered from Japanese atomic bomb survivors. There are two different models that are considered when transferring risk from Japanese to U.S. populations.

  • Multiplicative transfer model - assumes that radiation risks are proportional to spontaneous or background cancer risks.
  • Additive transfer model - assumes that radiation risk acts independently of other cancer risks.

The NCRP recommends using a mixture model that contains fractional contributions from both methods.

The radiation mortality rate is defined as:

Where:

  • ERR = excess relative risk per sievert
  • EAR = excess additive risk per sievert
  • Mc(a) = the sex- and age-specific cancer mortality rate in the U.S. population
  • F = the tissue-weighted fluence
  • L = the LET
  • v = the fractional division between the assumption of the multiplicative and additive risk transfer models. For solid cancer, it is assumed that v=1/2 and for leukemia, it is assumed that v=0.
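The mixture of the two transfer models can be sketched as follows, using the symbols listed above. This is a hedged simplification: the actual rate model also carries age, latency, and radiation-quality dependence not shown here:

```python
def mixture_mortality_rate(err, ear, background_rate, v, ddref=1.0):
    """Mixture of the multiplicative (ERR times background) and additive
    (EAR) risk-transfer models, weighted by v and reduced by the DDREF.
    v = 1/2 is assumed for solid cancer and v = 0 for leukemia."""
    return (v * err * background_rate + (1.0 - v) * ear) / ddref
```

Setting v = 0 recovers the purely additive transfer, and v = 1 the purely multiplicative one.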

Biological and physical countermeasures

Identifying effective countermeasures that reduce the risk of biological damage is still a long-term goal for space researchers. These countermeasures are probably not needed for extended duration lunar missions, but will be needed for other long-duration missions to Mars and beyond. On 31 May 2013, NASA scientists reported that a possible human mission to Mars may involve a great radiation risk based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011-2012.

There are three fundamental ways to reduce exposure to ionizing radiation:

  • increasing the distance from the radiation source
  • reducing the exposure time
  • shielding (i.e.: a physical barrier)

Shielding is a plausible option, but because of current launch mass restrictions it is prohibitively costly. In addition, the current uncertainties in risk projection prevent the actual benefit of shielding from being determined. Strategies such as drugs and dietary supplements to reduce the effects of radiation, as well as the selection of crew members, are being evaluated as viable options for reducing exposure to radiation and the effects of irradiation. Shielding is an effective protective measure for solar particle events. As for shielding from GCR, high-energy radiation is very penetrating, and the effectiveness of radiation shielding depends on the atomic make-up of the material used.

Antioxidants are used to prevent the damage caused by radiation injury and oxygen poisoning (the formation of reactive oxygen species), but because antioxidants work by rescuing cells from a particular form of cell death (apoptosis), they may not protect against damaged cells that can initiate tumor growth.

Evidence sub-pages

The evidence and updates to projection models for cancer risk from low-LET radiation are reviewed periodically by several advisory bodies.

These committees release new reports about every 10 years on cancer risks that are applicable to low-LET radiation exposures. Overall, the estimates of cancer risks among the different reports of these panels agree to within a factor of two or less. There is continued controversy for doses below 5 mSv, however, and for low dose-rate radiation, because of debate over the linear no-threshold hypothesis that is often used in statistical analysis of these data. The BEIR VII report, which is the most recent of the major reports, is used in the following sub-pages. Evidence for low-LET cancer effects must be augmented by information on protons, neutrons, and HZE nuclei that is only available in experimental models. Such data have been reviewed by NASA several times in the past and by the NCRP.

Sleep in space

From Wikipedia, the free encyclopedia

An astronaut asleep in the microgravity of Earth orbit (continual free fall around the Earth), inside the pressurized Harmony module of the International Space Station in 2007

Sleeping in space is an important part of space medicine and mission planning, with impacts on the health, capabilities and morale of astronauts.

Human spaceflight often requires astronaut crews to endure long periods without rest. Studies have shown that lack of sleep can cause fatigue that leads to errors while performing critical tasks. Also, individuals who are fatigued often cannot determine the degree of their impairment. Astronauts and ground crews frequently suffer from the effects of sleep deprivation and circadian rhythm disruption. Fatigue due to sleep loss, sleep shifting and work overload could cause performance errors that put space flight participants at risk of compromising mission objectives as well as the health and safety of those on board.

Mission Specialist Margaret Rhea Seddon, wearing a blindfold, sleeps in SLS-1 module (STS-40)

Overview

Sleeping in space requires that astronauts sleep in a crew cabin, a small room about the size of a shower stall. They lie in a sleeping bag which is strapped to the wall. Astronauts have reported having nightmares and dreams, and snoring while sleeping in space.

Sleeping and crew accommodations need to be well ventilated; otherwise, astronauts can wake up oxygen-deprived and gasping for air, because a bubble of their own exhaled carbon dioxide has formed around their heads. Brain cells are extremely sensitive to a lack of oxygen and can start dying less than five minutes after their oxygen supply disappears; as a result, brain hypoxia can rapidly cause severe brain damage or even death. A decrease of oxygen to the brain can also cause dementia and a host of other symptoms.

In the early 21st century, crew on the ISS were said to average about six hours of sleep per day.

On the ground

Chronic sleep loss can impair performance similarly to total sleep loss, and recent studies have shown that cognitive impairment after 17 hours of wakefulness is similar to the impairment produced by an elevated blood alcohol level.

It has been suggested that work overload and circadian desynchronization may cause performance impairment. Those who perform shift work suffer from increased fatigue because the timing of their sleep/wake schedule is out of sync with natural daylight (see Shift work syndrome). They are more prone to auto and industrial accidents as well as a decreased quality of work and productivity on the job.

Ground crews at NASA are also affected by slam shifting (sleep shifting) while supporting critical International Space Station operations during overnight shifts.

In space

Flight engineer Nikolai Budarin uses a computer in a sleep station in the Zvezda Service Module on the International Space Station (ISS).
 
Cosmonaut Yury Usachov in his sleeping compartment on Mir, called a Kayutka

During the Apollo program, it was discovered that adequate sleep in the small volumes available in the command module and Lunar Module was most easily achieved if (1) there was minimum disruption to the pre-flight circadian rhythm of the crew members; (2) all crew members in the spacecraft slept at the same time; (3) crew members were able to doff their suits before sleeping; (4) work schedules were organized – and revised as needed – to provide an undisturbed (radio quiet) 6-8 hour rest period during each 24-hour period; (5) in zero-gravity, loose restraints were provided to keep the crewmen from drifting; (6) on the lunar surface, a hammock or other form of bed was provided; (7) there was an adequate combination of cabin temperature and sleepwear for comfort; (8) the crew could dim instrument lights and either cover their eyes or exclude sunlight from the cabin; and (9) equipment such as pumps were adequately muffled.

NASA management currently has limits in place to restrict the number of hours in which astronauts are to complete tasks and events. This is known as the "Fitness for Duty Standards". Space crews' current nominal number of work hours is 6.5 hours per day, and weekly work time should not exceed 48 hours. NASA defines critical workload overload for a space flight crew as 10-hour work days for 3 days per work week, or more than 60 hours per week (NASA STD-3001, Vol. 1). Astronauts have reported that periods of high-intensity workload can result in mental and physical fatigue. Studies from the medical and aviation industries have shown that increased and intense workloads combined with disturbed sleep and fatigue can lead to significant health issues and performance errors.

Research suggests that the quality and quantity of astronauts' sleep while in space are markedly reduced compared with sleep on Earth. The use of sleep-inducing medication could be indicative of poor sleep due to disturbances. A study in 1997 showed that sleep structure, as well as the restorative component of sleep, may be disrupted while in space. These disturbances could increase the occurrence of performance errors.

Current space flight data shows that accuracy, response time and recall tasks are all affected by sleep loss, work overload, fatigue and circadian desynchronization.

Factors that contribute to sleep loss and fatigue

The most common factors that can affect the length and quality of sleep while in space include:

  • noise
  • physical discomfort
  • voids
  • disturbances caused by other crew members
  • temperature

An evidence-gathering effort is currently underway to evaluate the impact of these individual, physiological, and environmental factors on sleep and fatigue. The effects of work-rest schedules, environmental conditions, and flight rules and requirements on sleep, fatigue, and performance are also being evaluated.

Factors that contribute to circadian desynchronization

Exposure to light is the largest contributor to circadian desynchronization on board the ISS. Since the ISS orbits the Earth every 1.5 hours, the flight crew experiences 16 sunrises and sunsets per day. Slam shifting (sleep shifting) is also a considerable external factor that causes circadian desynchronization in the current space flight environment.

Other factors that may cause circadian desynchronization in space:

  • shift work
  • extended work hours
  • timeline changes
  • slam shifting (sleep shifting)
  • prolonged light of lunar day
  • Mars sol on Earth
  • Mars sol on Mars
  • abnormal environmental cues (i.e.: unnatural light exposure)

Sleep loss, genetics, and space

Both acute and chronic partial sleep loss occur frequently in space flight due to operational demands and for physiological reasons not yet entirely understood. Some astronauts are affected more than others. Earth-based research has demonstrated that sleep loss poses risks to astronaut performance, and that there are large, highly reliable individual differences in the magnitude of cognitive performance, fatigue and sleepiness, and sleep homeostatic vulnerability to acute total sleep deprivation and to chronic sleep restriction in healthy adults. The stable, trait-like (phenotypic) inter-individual differences observed in response to sleep loss point to an underlying genetic component. Indeed, data suggest that common genetic variations (polymorphisms) involved in sleep-wake, circadian, and cognitive regulation may serve as markers for prediction of inter-individual differences in sleep homeostatic and neurobehavioral vulnerability to sleep restriction in healthy adults. Identification of genetic predictors of differential vulnerability to sleep restriction will help identify astronauts most in need of fatigue countermeasures in space flight and inform medical standards for obtaining adequate sleep in space.

Computer-based simulation information

Biomathematical models are being developed to instantiate the biological dynamics of sleep need and circadian timing. These models could predict astronaut performance relative to fatigue and circadian desynchronization.
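One widely used class of such biomathematical models is the two-process model of sleep regulation, which combines a homeostatic sleep-pressure process (rising during wakefulness, decaying during sleep) with a circadian process. A minimal sketch follows; the parameter values and the sinusoidal circadian form are illustrative assumptions, not taken from any NASA model:

```python
import math

def two_process_model(schedule, dt=0.1,
                      tau_rise=18.2, tau_decay=4.2, period=24.0):
    """Simulate homeostatic sleep pressure S (rises toward 1 during wake,
    decays toward 0 during sleep) alongside a sinusoidal circadian drive C.

    `schedule` is a list of (duration_hours, awake: bool) segments.
    Returns a list of (time_h, S, C) samples."""
    s, t = 0.3, 0.0
    trace = []
    for duration, awake in schedule:
        for _ in range(round(duration / dt)):
            if awake:
                s += (1.0 - s) * dt / tau_rise    # saturating rise of pressure
            else:
                s -= s * dt / tau_decay           # exponential recovery in sleep
            c = 0.5 + 0.5 * math.cos(2 * math.pi * t / period)  # circadian drive
            trace.append((t, s, c))
            t += dt
    return trace
```

Predicted alertness is often modeled as a function of the gap between the two processes, which is where such models connect to fatigue and performance forecasting for crews.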
