
Wednesday, October 3, 2018

Central nervous system effects from radiation exposure during spaceflight

From Wikipedia, the free encyclopedia
Acute and late radiation damage to the central nervous system (CNS) may lead to changes in motor function and behavior or neurological disorders. Radiation and synergistic effects of radiation with other space flight factors may affect neural tissues, which in turn may lead to changes in function or behavior. Data specific to the spaceflight environment must be compiled to quantify the magnitude of this risk. If this is identified as a risk of high enough magnitude then appropriate protection strategies should be employed.
— Human Research Program Requirements Document, HRP-47052, Rev. C, dated Jan 2009.
A vigorous ground-based cellular and animal model research program will help quantify the risk to the CNS from space radiation exposure on future long-distance space missions and promote the development of optimized countermeasures.

Possible acute and late risks to the CNS from galactic cosmic rays (GCRs) and solar proton events (SPEs) are a documented concern for human exploration of our solar system. In the past, the risks to the CNS of adults who were exposed to low to moderate doses of ionizing radiation (0 to 2 Gy (gray); 1 Gy = 100 rad) have not been a major consideration. However, the heavy ion component of space radiation presents distinct biophysical challenges to cells and tissues as compared to the physical challenges that are presented by terrestrial forms of radiation. Soon after the discovery of cosmic rays, the concern for CNS risks originated with the prediction of the light flash phenomenon from single HZE nuclei traversals of the retina; this phenomenon was confirmed by the Apollo astronauts in 1970 and 1973. HZE nuclei are capable of producing a column of heavily damaged cells, or a microlesion, along their path through tissues, thereby raising concern over serious impacts on the CNS. In recent years, other concerns have arisen with the discovery of neurogenesis and its impairment by HZE nuclei, which has been observed in experimental models of the CNS.

Human epidemiology is used as a basis for risk estimation for cancer, acute radiation risks, and cataracts. This approach is not viable for estimating CNS risks from space radiation, however. At doses above a few Gy, detrimental CNS changes occur in humans who are treated with radiation (e.g., gamma rays and protons) for cancer. Treatment doses of 50 Gy are typical, which is well above the exposures in space even if a large SPE were to occur. Thus, of the four categories of space radiation risks (cancer, CNS, degenerative, and acute radiation syndromes), the CNS risk relies most extensively on experimental data with animals for its evidence base. Understanding and mitigating CNS risks requires a vigorous research program that will draw on the basic understanding that is gained from cellular and animal models, and on the development of approaches to extrapolate risks and the potential benefits of countermeasures for astronauts.

Several experimental studies, which use heavy ion beams simulating space radiation, provide constructive evidence of the CNS risks from space radiation. First, exposure to HZE nuclei at low doses (less than 50 cGy) significantly induces neurocognitive deficits, such as learning and behavioral changes as well as changes in operant responding, in the mouse and rat. Exposures to equal or higher doses of low-LET radiation (e.g., gamma or X rays) do not show similar effects. The threshold of performance deficit following exposure to HZE nuclei depends on both the physical characteristics of the particles, such as linear energy transfer (LET), and the animal age at exposure. A performance deficit has been shown to occur at doses that are similar to the ones that will occur on a Mars mission (less than 0.5 Gy). The neurocognitive deficits associated with the dopaminergic nervous system are similar to aging and appear to be unique to space radiation. Second, exposure to HZE nuclei disrupts neurogenesis in mice at low doses (less than 1 Gy), showing a significant dose-related reduction of new neurons and oligodendrocytes in the subgranular zone (SGZ) of the hippocampal dentate gyrus. Third, reactive oxygen species (ROS) arise in neuronal precursor cells following exposure to HZE nuclei and protons at low doses, and can persist for several months. Antioxidants and anti-inflammatory agents can possibly reduce these changes. Fourth, neuroinflammation arises in the CNS following exposure to HZE nuclei and protons. In addition, age-related genetic changes increase the sensitivity of the CNS to radiation.

Research with animal models that are irradiated with HZE nuclei has shown that important changes to the CNS occur at the dose levels that are of concern to NASA. However, the significance of these results for astronaut morbidity has not been elucidated. One model of late tissue effects suggests that significant effects will occur at lower doses, but with increased latency. Notably, the studies that have been conducted to date have used relatively small numbers of animals (less than 10 per dose group); therefore, testing of dose threshold effects at lower doses (less than 0.5 Gy) has not yet been carried out sufficiently. As the problem of extrapolating space radiation effects in animals to humans will be a challenge for space radiation research, such research could become limited by the population size that is used in animal studies. Furthermore, the role of dose protraction has not been studied to date. An approach to extrapolate existing observations to possible cognitive changes, performance degradation, or late CNS effects in astronauts has yet to be developed. New approaches in systems biology offer an exciting tool to tackle this challenge. Recently, eight gaps were identified for projecting CNS risks. Research on new approaches to risk assessment may be needed to provide the necessary data and knowledge to develop risk projection models of the CNS from space radiation.

Introduction

Both GCRs and SPEs are of concern for CNS risks. The major GCRs are composed of protons, α-particles, and HZE nuclei with a broad energy spectrum ranging from a few tens to above 10,000 MeV/u. In interplanetary space, GCR organ doses and dose equivalents of more than 0.2 Gy and 0.6 Sv per year, respectively, are expected. The high energies of GCRs allow them to penetrate hundreds of centimeters of any material, thus precluding radiation shielding as a plausible mitigation measure against GCR risks to the CNS. For SPEs, the possibility exists for an absorbed dose of over 1 Gy if crew members are in a thinly shielded spacecraft or performing a spacewalk. The energies of SPEs, although substantial (tens to hundreds of MeV), do not preclude radiation shielding as a potential countermeasure. However, the costs of shielding may be high to protect against the largest events.
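The relationship between the absorbed dose and dose-equivalent figures above can be checked with a line of arithmetic. A minimal sketch follows; the annual values are the ones quoted in the text, while the derived mean quality factor is an inference from them, not a number stated in this document:

```python
# Annual GCR figures quoted in the text for interplanetary space.
annual_dose_gy = 0.2      # absorbed organ dose, Gy per year
annual_dose_eq_sv = 0.6   # organ dose equivalent, Sv per year

# Dose equivalent = absorbed dose x mean quality factor, so the implied
# fluence-averaged quality factor of the GCR mixture is about 3,
# reflecting the high-LET component of the spectrum.
mean_quality_factor = annual_dose_eq_sv / annual_dose_gy
print(round(mean_quality_factor, 1))
```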

The fluence of charged particles hitting the brain of an astronaut has been estimated several times in the past. One estimate is that during a 3-year mission to Mars at solar minimum (assuming the 1972 spectrum of GCR), 20 million out of 43 million hippocampus cells and 230 thousand out of 1.3 million thalamus cell nuclei will be directly hit by one or more particles with charge Z > 15. These numbers do not include the additional cell hits by energetic electrons (delta rays) that are produced along the track of HZE nuclei or correlated cellular damage. The contributions of delta rays from GCR and correlated cellular damage increase the number of damaged cells two- to three-fold from estimates of the primary track alone and present the possibility of heterogeneously damaged regions, respectively. The importance of such additional damage is poorly understood.
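The bookkeeping behind these estimates is simple enough to sketch. This illustrative fragment uses only the cell counts and the two- to three-fold delta-ray multiplier quoted above; the variable names and the capping at 100% are this sketch's own:

```python
# Cell counts quoted above for a 3-year Mars mission at solar minimum.
hippocampus_hit, hippocampus_total = 20e6, 43e6   # direct hits, charge Z > 15
thalamus_hit, thalamus_total = 230e3, 1.3e6

frac_hippocampus = hippocampus_hit / hippocampus_total  # ~47% hit directly
frac_thalamus = thalamus_hit / thalamus_total           # ~18% hit directly

# Delta rays and correlated damage raise the damaged-cell estimate
# two- to three-fold; the fraction is capped at 1.0 (every cell hit).
for multiplier in (2, 3):
    print(min(1.0, frac_hippocampus * multiplier))
```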

At this time, the possible detrimental effects to an astronaut’s CNS from the HZE component of GCR have yet to be identified. This is largely due to the lack of a human epidemiological basis with which to estimate risks and the relatively small number of published experimental studies with animals. Relative biological effectiveness (RBE) factors are combined with human data to estimate cancer risks for low-LET radiation exposure. Since this approach is not possible for CNS risks, new approaches to risk estimation will be needed. Thus, biological research is required to establish risk levels and risk projection models and, if the risk levels are found to be significant, to design countermeasures.

Description of central nervous system risks of concern to NASA

Acute and late CNS risks from space radiation are of concern for Exploration missions to the moon or Mars. Acute CNS risks include altered cognitive function, reduced motor function, and behavioral changes, all of which may affect performance and human health. Late CNS risks are possible neurological disorders such as Alzheimer’s disease, dementia, or premature aging. The effect of the protracted exposure of the CNS to low dose-rates (< 50 mGy h–1) of protons, HZE particles, and neutrons of the relevant energies, for doses up to 2 Gy, is of concern.

Current NASA permissible exposure limits

PELs for short-term and career astronaut exposure to space radiation have been approved by the NASA Chief Health and Medical Officer. The PELs set requirements and standards for mission design and crew selection as recommended in NASA-STD-3001, Volume 1. NASA has used dose limits for cancer risks and the non-cancer risks to the blood-forming organs (BFOs), skin, and lens since 1970. For Exploration mission planning, preliminary dose limits for the CNS risks are based largely on experimental results with animal models. Further research is needed to validate and quantify these risks, however, and to refine the values for dose limits. The CNS PELs, which correspond to the doses at the region of the brain called the hippocampus, are set for time periods of 30 days or 1 year, or for a career, with values of 500, 1,000, and 1,500 mGy-Eq, respectively. Although the unit mGy-Eq is used, the RBE for CNS effects is largely unknown; therefore, the use of the quality factor function for cancer risk estimates is advocated. For particles with charge Z > 10, an additional PEL requirement limits the physical dose (mGy) for 1 year and for a career to 100 and 250 mGy, respectively. NASA uses computerized anatomical geometry models to estimate the body self-shielding at the hippocampus.
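As a concrete illustration, the PEL values just listed can be encoded in a small lookup. This is a hypothetical sketch: the dictionary keys and the function name are invented here, and only the numeric limits come from the text above:

```python
# CNS PELs at the hippocampus, in mGy-Eq (values quoted in the text).
CNS_PEL_MGY_EQ = {"30_day": 500, "1_year": 1000, "career": 1500}
# Additional physical-dose limits for particles with charge Z > 10, in mGy.
Z_GT_10_PEL_MGY = {"1_year": 100, "career": 250}

def within_cns_pel(dose_mgy_eq, period="30_day"):
    """Return True if a hippocampus dose (mGy-Eq) is within the CNS PEL."""
    return dose_mgy_eq <= CNS_PEL_MGY_EQ[period]

print(within_cns_pel(450))             # True: under the 30-day limit
print(within_cns_pel(1200, "1_year"))  # False: over the 1-year limit
```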

Evidence

Review of human data

Evidence of the effects of terrestrial forms of ionizing radiation on the CNS has been documented from radiotherapy patients, although the dose is higher for these patients than would be experienced by astronauts in the space environment. CNS behavioral changes such as chronic fatigue and depression occur in patients who are undergoing irradiation for cancer therapy. Neurocognitive effects, especially in children, are observed at lower radiation doses. A recent review on intelligence and the academic achievement of children after treatment for brain tumors indicates that radiation exposure is related to a decline in intelligence and academic achievement, including low intelligence quotient (IQ) scores, verbal abilities, and performance IQ; academic achievement in reading, spelling, and mathematics; and attention functioning. Mental retardation was observed in the children of the atomic-bomb survivors in Japan who were exposed to radiation prenatally at moderate doses (<2 Gy) at 8 to 15 weeks post-conception, but not at earlier or later prenatal times.
Radiotherapy for the treatment of several tumors with protons and other charged-particle beams provides ancillary data for considering radiation effects on the CNS. NCRP Report No. 153 notes charged-particle usage “for treatment of pituitary tumors, hormone-responsive metastatic mammary carcinoma, brain tumors, and intracranial arteriovenous malformations and other cerebrovascular diseases.” These studies report associations with neurological complications such as impairments in cognitive functioning, language acquisition, visual spatial ability, and memory and executive functioning, as well as changes in social behaviors. Similar effects did not appear in patients who were treated with chemotherapy. In all of these examples, the patients were treated with very high doses that were nonetheless below the threshold for necrosis. Since cognitive functioning and memory are closely associated with the cerebral white matter volume of the prefrontal/frontal lobe and cingulate gyrus, defects in neurogenesis may play a critical role in the neurocognitive problems of irradiated patients.

Review of space flight issues

The first proposal concerning the effect of space radiation on the CNS was made by Cornelius Tobias in his 1952 description of the light flash phenomenon caused by single HZE nuclei traversals of the retina. Light flashes, such as those described by Tobias, were observed by the astronauts during the early Apollo missions as well as in dedicated experiments that were subsequently performed on Apollo and Skylab missions. More recently, studies of light flashes were made on the Russian Mir space station and the ISS. A 1973 report by the NAS considered these effects in detail. This phenomenon, which is known as a phosphene, is the visual perception of flickering light. It is considered a subjective sensation of light since it can be caused by simply applying pressure on the eyeball. The traversal of a single, highly charged particle through the occipital cortex or the retina was estimated to be able to cause a light flash. Possible mechanisms for HZE-induced light flashes include direct ionization and Cerenkov radiation within the retina.

The observation of light flashes by the astronauts brought attention to the possible effects of HZE nuclei on brain function. The microlesion concept, which considered the effects of the column of damaged cells surrounding the path of an HZE nucleus traversing critical regions of the brain, originated at this time. An important task that still remains is to determine whether and to what extent such particle traversals contribute to functional degradation within the CNS.

The possible observation of CNS effects in astronauts who participated in past NASA missions is highly unlikely for several reasons. First, the lengths of past missions were relatively short and the population sizes of astronauts were small. Second, when astronauts travel in LEO, they are partially protected by the magnetic field and the solid body of the Earth, which together reduce the GCR dose-rate by about two-thirds from its free-space values. Furthermore, the GCR in LEO has lower-LET components compared to the GCR that will be encountered in transit to Mars or on the lunar surface, because the magnetic field of the Earth repels nuclei with energies below about 1,000 MeV/u, which are of higher LET. For these reasons, the CNS risks are a greater concern for long-duration lunar missions or for a Mars mission than for missions on the ISS.

Radiobiology studies of central nervous system risks for protons, neutrons, and high-Z high-energy nuclei

Both GCR and SPE could possibly contribute to acute and late CNS risks to astronaut health and performance. This section presents a description of the studies that have been performed on the effects of space radiation in cell, tissue, and animal models.

Effects in neuronal cells and the central nervous system

Neurogenesis
The CNS consists of neurons, astrocytes, and oligodendrocytes that are generated from multipotent stem cells. NCRP Report No. 153 provides the following excellent and short introduction to the composition and cell types of interest for radiation studies of the CNS: “The CNS consists of neurons differing markedly in size and number per unit area. There are several nuclei or centers that consist of closely packed neuron cell bodies (e.g., the respiratory and cardiac centers in the floor of the fourth ventricle). In the cerebral cortex the large neuron cell bodies, such as Betz cells, are separated by a considerable distance. Of additional importance are the neuroglia which are the supporting cells and consist of astrocytes, oligodendroglia, and microglia. These cells permeate and support the nervous tissue of the CNS, binding it together like a scaffold that also supports the vasculature. The most numerous of the neuroglia are Type I astrocytes, which make up about half the brain, greatly outnumbering the neurons. Neuroglia retain the capability of cell division in contrast to neurons and, therefore, the responses to radiation differ between the cell types. A third type of tissue in the brain is the vasculature which exhibits a comparable vulnerability for radiation damage to that found elsewhere in the body. Radiation-induced damage to oligodendrocytes and endothelial cells of the vasculature accounts for major aspects of the pathogenesis of brain damage that can occur after high doses of low-LET radiation.” Based on studies with low-LET radiation, the CNS is considered a radioresistant tissue. For example, in radiotherapy, early brain complications in adults usually do not develop if daily fractions of 2 Gy or less are administered with a total dose of up to 50 Gy. The tolerance dose in the CNS, as with other tissues, depends on the volume and the specific anatomical location in the human brain that is irradiated.

In recent years, studies with stem cells uncovered that neurogenesis still occurs in the adult hippocampus, where cognitive actions such as memory and learning are determined. This discovery provides an approach to understand mechanistically the CNS risk of space radiation. Accumulating data indicate that radiation not only affects differentiated neural cells, but also the proliferation and differentiation of neuronal precursor cells and even adult stem cells. Recent evidence points out that neuronal progenitor cells are sensitive to radiation. Studies on low-LET radiation show that radiation stops not only the generation of neuronal progenitor cells, but also their differentiation into neurons and other neural cells. NCRP Report No. 153 notes that cells in the SGZ of the dentate gyrus undergo dose-dependent apoptosis above 2 Gy of X-ray irradiation, and the production of new neurons in young adult male mice is significantly reduced by relatively low (>2 Gy) doses of X rays. NCRP Report No. 153 also notes that: “These changes are observed to be dose dependent. In contrast there were no apparent effects on the production of new astrocytes or oligodendrocytes. Measurements of activated microglia indicated that changes in neurogenesis were associated with a significant dose-dependent inflammatory response even 2 months after irradiation. This suggests that the pathogenesis of long-recognized radiation-induced cognitive injury may involve loss of neural precursor cells from the SGZ of the hippocampal dentate gyrus and alterations in neurogenesis.”

Recent studies provide evidence of the pathogenesis of HZE nuclei in the CNS. The authors of one of these studies were the first to suggest neurodegeneration with HZE nuclei, as shown in figure 6-1(a). These studies demonstrate that HZE radiation led to the progressive loss of neuronal progenitor cells in the SGZ at doses of 1 to 3 Gy in a dose-dependent manner. NCRP Report No. 153 notes that “Mice were irradiated with 1 to 3 Gy of 12C or 56Fe-ions and 9 months later proliferating cells and immature neurons in the dentate SGZ were quantified. The results showed that reductions in these cells were dependent on the dose and LET. Loss of precursor cells was also associated with altered neurogenesis and a robust inflammatory response, as shown in figures 6-1(a) and 6-1(b). These results indicate that high-LET radiation has a significant and long-lasting effect on the neurogenic population in the hippocampus that involves cell loss and changes in the microenvironment. The work has been confirmed by other studies. These investigators noted that these changes are consistent with those found in aged subjects, indicating that heavy-particle irradiation is a possible model for the study of aging.”

Figure 6-1(a). (Panel A) Expression of the polysialic acid form of neural cell adhesion molecule (PSA-NCAM) in the hippocampus of rats that were irradiated (IR) with 2.5 Gy of 56Fe high-energy radiation and of control subjects, measured as percent density per field area. (Panel B) PSA-NCAM staining in the dentate gyrus of representative irradiated (IR) and control (C) subjects at 5x magnification.
Figure 6-1(b). Numbers of proliferating cells (left panel) and immature neurons (right panel) in the dentate SGZ are significantly decreased 48 hours after irradiation. Antibodies against Ki-67 and doublecortin (Dcx) were used to detect proliferating cells and immature neurons, respectively. Doses from 2 to 10 Gy significantly (p < 0.05) reduced the numbers of proliferating cells. Immature neurons were also reduced in a dose-dependent fashion (p < 0.001). Each bar represents an average of four animals; error bars represent the standard error.
Oxidative damage
Recent studies indicate that adult rat neural precursor cells from the hippocampus show an acute, dose-dependent apoptotic response that is accompanied by an increase in ROS. Low-LET protons are also used in clinical proton beam radiation therapy, with an RBE of 1.1 relative to megavoltage X rays at high dose. NCRP Report No. 153 notes that: “Relative ROS levels were increased at nearly all doses (1 to 10 Gy) of Bragg-peak 250 MeV protons at post-irradiation times (6 to 24 hours) compared to unirradiated controls. The increase in ROS after proton irradiation was more rapid than that observed with X rays and showed a well-defined dose response at 6 and 24 hours, increasing about 10-fold over controls at a rate of 3% per Gy. However, by 48 hours post-irradiation, ROS levels fell below controls and coincided with minor reductions in mitochondrial content. Use of the antioxidant alpha-lipoic acid (before or after irradiation) was shown to eliminate the radiation-induced rise in ROS levels. These results corroborate the earlier studies using X rays and provide further evidence that elevated ROS are integral to the radioresponse of neural precursor cells.” Furthermore, high-LET radiation led to significantly higher levels of oxidative stress in hippocampal precursor cells as compared to lower-LET radiations (X rays, protons) at lower doses (≤1 Gy) (figure 6-2). The use of the antioxidant lipoic acid was able to reduce ROS levels below background levels when added before or after 56Fe-ion irradiation. These results conclusively show that low doses of 56Fe-ions can elicit significant levels of oxidative stress in neural precursor cells.

Figure 6-2. Dose response for oxidative stress after 56Fe-ion irradiation. Hippocampal precursors that are subjected to 56Fe-ion irradiation were analyzed for oxidative stress 6 hours after exposure. At doses ≤1 Gy a linear dose response for the induction of oxidative stress was observed. At higher 56Fe doses, oxidative stress fell to values that were found using lower-LET irradiations (X rays, protons). Experiments, which represent a minimum of three independent measurements (±SE), were normalized against unirradiated controls set to unity. ROS levels induced after 56Fe irradiation were significantly (P < 0.05) higher than controls.
Neuroinflammation
Neuroinflammation, which is a fundamental reaction to brain injury, is characterized by the activation of resident microglia and astrocytes and local expression of a wide range of inflammatory mediators. Acute and chronic neuroinflammation has been studied in the mouse brain following exposure to HZE. The acute effect of HZE is detectable at 6 and 9 Gy; no studies are available at lower doses. Myeloid cell recruitment appears by 6 months following exposure. The estimated RBE value of HZE irradiation for induction of an acute neuroinflammatory response is three compared to that of gamma irradiation. COX-2 pathways are implicated in neuroinflammatory processes that are caused by low-LET radiation. COX-2 up-regulation in irradiated microglia cells leads to prostaglandin E2 production, which appears to be responsible for radiation-induced gliosis (overproliferation of astrocytes in damaged areas of the CNS).

Behavioral effects

As behavioral effects are difficult to quantitate, they consequently are one of the most uncertain of the space radiation risks. NCRP Report No. 153 notes that: “The behavioral neurosciences literature is replete with examples of major differences in behavioral outcome depending on the animal species, strain, or measurement method used. For example, compared to unirradiated controls, X-irradiated mice show hippocampal-dependent spatial learning and memory impairments in the Barnes maze, but not in the Morris water maze which, however, can be used to demonstrate deficits in rats. Particle radiation studies of behavior have been accomplished with rats and mice, but with some differences in the outcome depending on the endpoint measured.”

The following studies provide evidence that space radiation affects the CNS behavior of animals in a somewhat dose- and LET-dependent manner.
Sensorimotor effects
Sensorimotor deficits and neurochemical changes were observed in rats that were exposed to low doses of 56Fe-ions. Doses below 1 Gy reduce performance, as tested by the wire suspension test. Behavioral changes were observed as early as 3 days after radiation exposure and lasted up to 8 months. Biochemical studies showed that the K+-evoked release of dopamine was significantly reduced in the irradiated group, together with an alteration of the nerve signaling pathways. A negative result was reported by Pecaut et al., in which no behavioral effects were seen in female C57/BL6 mice in a 2- to 8-week period following their exposure to 0, 0.1, 0.5, or 2 Gy of accelerated 56Fe-ions (1 GeV/u) as measured by open-field, rotarod, or acoustic startle habituation tests.
Radiation-induced changes in conditioned taste aversion
There is evidence that deficits in conditioned taste aversion (CTA) are induced by low doses of heavy ions. The CTA test is a classical conditioning paradigm that assesses the avoidance behavior that occurs when the ingestion of a normally acceptable food item is associated with illness. This is considered a standard behavioral test of drug toxicity. NCRP Report No. 153 notes that: “The role of the dopaminergic system in radiation-induced changes in CTA is suggested by the fact that amphetamine-induced CTA, which depends on the dopaminergic system, is affected by radiation, whereas lithium chloride-induced CTA, which does not involve the dopaminergic system, is not affected by radiation. It was established that the degree of CTA due to radiation is LET-dependent ([figure 6-3]) and that 56Fe-ions are the most effective of the various low and high LET radiation types that have been tested. Doses as low as ~0.2 Gy of 56Fe-ions appear to have an effect on CTA.”

The RBE of different types of heavy particles for CNS function and cognitive/behavioral performance was studied in Sprague-Dawley rats. The thresholds for the HZE particle-induced disruption of amphetamine-induced CTA learning are shown in figure 6-4, and those for the disruption of operant responding in figure 6-5. These figures show a similar pattern of responsiveness to the disruptive effects of exposure to either 56Fe or 28Si particles on both CTA learning and operant responding. These results suggest that the RBE of different particles for neurobehavioral dysfunction cannot be predicted solely on the basis of the LET of the specific particle.

Figure 6-3. ED50 for CTA as a function of LET for the following radiation sources: 40Ar = argon ions, 60Co = Cobalt-60 gamma rays, e = electrons, 56Fe = iron ions, 4He = helium ions, n0 = neutrons, 20Ne = neon ions.
Figure 6-4. Radiation-induced disruption in CTA. This figure shows the relationship between exposure to different energies of 56Fe and 28Si particles and the threshold dose for the disruption of amphetamine-induced CTA learning. Only a single energy of 48Ti particles was tested. The threshold dose (cGy) for the disruption of the response is plotted against particle LET (keV/μm).
Figure 6-5. High-LET radiation effects on operant response. This figure shows the relationship between the exposure to different energies of 56Fe and 28Si particles and the threshold dose for the disruption of performance on a food-reinforced operant response. Only a single energy of 48Ti particles was tested. The threshold dose (cGy) for the disruption of the response is plotted against particle LET (keV/μm).
Radiation effects on operant conditioning
Operant conditioning uses several consequences to modify a voluntary behavior. Recent studies by Rabin et al. have examined the ability of rats to perform an operant task to obtain food reinforcement using an ascending fixed-ratio (FR) schedule. They found that 56Fe-ion doses above 2 Gy affect the appropriate responses of rats to increasing work requirements. NCRP Report No. 153 notes that "The disruption of operant response in rats was tested 5 and 8 months after exposure; maintaining the rats on a diet containing strawberry, but not blueberry, extract was shown to prevent the disruption. When tested 13 and 18 months after irradiation, there were no differences in performance between the irradiated rats maintained on control, strawberry, or blueberry diets. These observations suggest that the beneficial effects of antioxidant diets may be age dependent."
Spatial learning and memory
The effects of exposure to HZE nuclei on spatial learning, memory behavior, and neuronal signaling have been tested, and threshold doses have also been considered for such effects. It will be important to understand the mechanisms that are involved in these deficits to extrapolate the results to other dose regimes, particle types, and, eventually, astronauts. Studies on rats were performed using the Morris water maze test 1 month after whole-body irradiation with 1.5 Gy of 1 GeV/u 56Fe-ions. Irradiated rats demonstrated cognitive impairment that was similar to that seen in aged rats. This leads to the possibility that an increase in the amount of ROS may be responsible for the induction of both radiation- and age-related cognitive deficits.

NCRP Report No. 153 notes that: “Denisova et al. exposed rats to 1.5 Gy of 1 GeV/u 56Fe-ions and tested their spatial memory in an eight-arm radial maze. Radiation exposure impaired the rats’ cognitive behavior, since they committed more errors than control rats in the radial maze and were unable to adopt a spatial strategy to solve the maze. To determine whether these findings related to brain-region specific alterations in sensitivity to oxidative stress, inflammation or neuronal plasticity, three regions of the brain, the striatum, hippocampus and frontal cortex that are linked to behavior, were isolated and compared to controls. Those that were irradiated were adversely affected as reflected through the levels of dichlorofluorescein, heat shock, and synaptic proteins (for example, synaptobrevin and synaptophysin). Changes in these factors consequently altered cellular signaling (for example, calcium-dependent protein kinase C and protein kinase A). These changes in brain responses significantly correlated with working memory errors in the radial maze. The results show differential brain-region-specific sensitivity induced by 56Fe irradiation ([figure 6-6]). These findings are similar to those seen in aged rats, suggesting that increased oxidative stress and inflammation may be responsible for the induction of both radiation and age-related cognitive deficits.”

Figure 6-6. Brain-region-specific calcium-dependent protein kinase C expression was assessed in control and irradiated rats using standard Western blotting procedures. Values are means ± SEM (standard error of mean).

Acute central nervous system risks

In addition to the possible in-flight performance and motor skill changes described above, the immediate CNS effects (i.e., within 24 hours following exposure to low-LET radiation) are anorexia and nausea. These prodromal risks are dose-dependent and, as such, can provide an indicator of the exposure dose. Estimates are ED50 = 1.08 Gy for anorexia, ED50 = 1.58 Gy for nausea, and ED50 = 2.40 Gy for emesis. The relative effectiveness of different radiation types in producing emesis was studied in ferrets and is illustrated in figure 6-7. High-LET radiation at doses below 0.5 Gy shows greater relative biological effectiveness than low-LET radiation. The acute effects on the CNS, which are associated with increases in cytokines and chemokines, may lead to disruption in the proliferation of stem cells or memory loss that may contribute to other degenerative diseases.
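As an illustration of how such prodromal dose-response estimates can be used, the sketch below evaluates a logistic dose-response curve at the quoted ED50 values. The logistic form and the slope parameter are hypothetical modeling choices for illustration, not values from the source.

```python
import math

def prodromal_probability(dose_gy, ed50_gy, slope=5.0):
    """Illustrative logistic dose-response curve: probability that a
    prodromal symptom occurs at a given dose. The ED50 values come from
    the text; the logistic shape and slope are hypothetical choices."""
    return 1.0 / (1.0 + math.exp(-slope * (dose_gy - ed50_gy)))

# ED50 estimates quoted in the text (low-LET radiation)
ED50 = {"anorexia": 1.08, "nausea": 1.58, "emesis": 2.40}

for symptom, ed50 in ED50.items():
    print(f"{symptom}: P(symptom | 1.5 Gy) = {prodromal_probability(1.5, ed50):.2f}")
```

By construction the curve passes through 0.5 at the ED50, so at any fixed dose the three symptoms are ordered by their thresholds: anorexia is the most likely, emesis the least.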

Figure 6-7. LET dependence of RBE of radiation in producing emesis or retching in a ferret. B = bremsstrahlung; e = electrons; P = protons; 60Co = cobalt gamma rays; n0 = neutrons; and 56Fe = iron.

Computer models and systems biology analysis of central nervous system risks

Since human epidemiology and experimental data for CNS risks from space radiation are limited, mammalian models are essential tools for understanding the uncertainties of human risks. Cellular, tissue, and genetic animal models have been used in biological studies on the CNS using simulated space radiation. New technologies, such as three-dimensional cell cultures, microarrays, proteomics, and brain imaging, are used in systematic studies on CNS risks from different radiation types. Based on the biological data, mathematical models can then be used to estimate the risks from space radiation.

Systems biology approaches to Alzheimer’s disease that consider the biochemical pathways that are important in CNS disease evolution have been developed by research that was funded outside NASA. Figure 6-8 shows a schematic of the biochemical pathways that are important in the development of Alzheimer’s disease. The description of the interaction of space radiation within these pathways would be one approach to developing predictive models of space radiation risks. For example, if the pathways that were studied in animal models could be correlated with studies in humans who are suffering from Alzheimer’s disease, an approach to describe risk that uses biochemical degrees-of-freedom could be pursued. Edelstein-Keshet and Spiros have developed an in silico model of senile plaques that are related to Alzheimer’s disease. In this model, the biochemical interactions among TNF, IL-1B, and IL-6 are described within several important cell populations, including astrocytes, microglia, and neurons. Further, in this model soluble amyloid causes microglial chemotaxis and activates IL-1B secretion. Figure 6-9 shows the results of the Edelstein-Keshet and Spiros model simulating plaque formation and neuronal death. Establishing links between space radiation-induced changes and the changes described in this approach can be pursued to develop an in silico model of Alzheimer’s disease that results from space radiation.
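The qualitative behavior of the Edelstein-Keshet and Spiros simulation, amyloid damaging nearby neurons while microglia reshape the plaque by clearing amyloid, can be caricatured in a few lines. The 1-D grid, all rates, and the clearance rule below are hypothetical simplifications chosen only for illustration; the published model is a far richer 2-D simulation with explicit cytokines (TNF, IL-1B, IL-6) and astrocytes.

```python
def simulate_plaque(steps=100, size=21, with_microglia=True):
    """Minimal 1-D caricature of the Edelstein-Keshet and Spiros model:
    fibrous amyloid spreads from a central source and damages nearby
    neurons; optional microglia clear amyloid near the plaque center.
    All rates and the 1-D geometry are hypothetical simplifications."""
    amyloid = [0.0] * size
    health = [1.0] * size          # 1.0 = healthy neuron, 0.0 = dead
    center = size // 2
    for _ in range(steps):
        amyloid[center] += 1.0     # central amyloid source
        # simple diffusion of fibrous deposits along the grid
        amyloid = [
            a + 0.1 * ((amyloid[i - 1] if i > 0 else 0.0)
                       + (amyloid[i + 1] if i < size - 1 else 0.0)
                       - 2 * a)
            for i, a in enumerate(amyloid)
        ]
        if with_microglia:
            # microglia congregate at the plaque center and clear amyloid
            for i in range(center - 2, center + 3):
                amyloid[i] *= 0.5
        # neuronal health declines with the local amyloid burden
        health = [max(0.0, h - 0.001 * a) for h, a in zip(health, amyloid)]
    return health

without_glia = simulate_plaque(with_microglia=False)
with_glia = simulate_plaque(with_microglia=True)
print(f"total neuronal health without microglia: {sum(without_glia):.1f} / 21")
print(f"total neuronal health with microglia:    {sum(with_glia):.1f} / 21")
```

Even this toy version reproduces the paper's qualitative point: removing amyloid changes both the plaque profile and the extent of neuronal death around it.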

Figure 6-8. Molecular pathways important in Alzheimer’s disease. From Kyoto Encyclopedia of Genes and Genomes. Copyrighted image located at http://www.genome.jp/kegg/pathway/hsa/hsa05010.html
 
Figure 6-9. Model of plaque formation and neuronal death in Alzheimer’s disease. From Edelstein-Keshet and Spiros, 2002: Top row: Formation of a plaque and death of neurons in the absence of glial cells, when fibrous amyloid is the only injurious influence. The simulation was run with no astrocytes or microglia, and the health of neurons was determined solely by the local fibrous amyloid. Shown above is a time sequence (left to right) of three stages in plaque development, at early, intermediate, and advanced stages. Density of fibrous deposit is represented by small dots and neuronal health by shading from white (healthy) to black (dead). Note radial symmetry due to simple diffusion. Bottom row: Effect of microglial removal of amyloid on plaque morphology. Note that microglia (small star-like shapes) are seen approaching the plaque (via chemotaxis to soluble amyloid, not shown). At a later stage, they have congregated at the plaque center, where they adhere to fibers. As a result of the removal of soluble and fibrous amyloid, the microglia lead to irregular plaque morphology. Size scale: In this figure, the distance between the small single dots (representing low-fiber deposits) is 10 mm. Similar results were obtained for a 10-fold scaling in the time scale of neuronal health dynamics.
Other interesting candidate pathways that may be important in the regulation of radiation-induced degenerative CNS changes are signal transduction pathways that are regulated by Cdk5. Cdk5 is a kinase that plays a key role in neural development; its aberrant expression and activation are associated with neurodegenerative processes, including Alzheimer’s disease. This kinase is up-regulated in neural cells following ionizing radiation exposure.

Risks in context of exploration mission operational scenarios

Projections for space missions

Reliable projections of CNS risks for space missions cannot be made from the available data. Animal behavior studies indicate that HZE radiation has a high RBE, but the data are not consistent. Other uncertainties include age at exposure, radiation quality, and dose-rate effects, as well as issues regarding genetic susceptibility to CNS risk from space radiation exposure. More research is required before CNS risk can be estimated.

Potential for biological countermeasures

The goal of space radiation research is to estimate and reduce uncertainties in risk projection models and, if necessary, develop countermeasures and technologies to monitor and treat adverse outcomes to human health and performance that are relevant to space radiation for short-term and career exposures, including acute or late CNS effects from radiation exposure. The need for countermeasures to CNS risks depends on further understanding of those risks, especially issues related to a possible dose threshold and, if one exists, which NASA missions would likely exceed threshold doses. On the basis of animal experiments, antioxidants and anti-inflammatory agents are expected to be effective countermeasures against CNS risks from space radiation. Diets of blueberries and strawberries were shown to reduce CNS risks after heavy-ion exposure. Estimating the effects of diet and nutritional supplementation will be a primary goal of CNS countermeasure research.

A diet that is rich in fruit and vegetables significantly reduces the risk of several diseases. Retinoids and vitamins A, C, and E are probably the most well-known and studied natural radioprotectors, but hormones (e.g., melatonin), glutathione, superoxide dismutase, and phytochemicals from plant extracts (including green tea and cruciferous vegetables), as well as metals (especially selenium, zinc, and copper salts) are also under study as dietary supplements for individuals, including astronauts, who have been overexposed to radiation. Antioxidants should provide reduced or no protection against the initial damage from densely ionizing radiation such as HZE nuclei, because the direct effect is more important than the free-radical-mediated indirect radiation damage at high LET. However, there is an expectation that some benefits should occur for persistent oxidative damage that is related to inflammation and immune responses. Some recent experiments suggest that, at least for acute high-dose irradiation, an efficient radioprotection by dietary supplements can be achieved, even in the case of exposure to high-LET radiation. Although there is evidence that dietary antioxidants (especially strawberries) can protect the CNS from the deleterious effects of high doses of HZE particles, because the mechanisms of biological effects are different at low dose-rates compared to those of acute irradiation, new studies for protracted exposures will be needed to understand the potential benefits of biological countermeasures.

Concern about the potential detrimental effects of antioxidants was raised by a recent meta-study of the effects of antioxidant supplements in the diet of normal subjects. The authors of this study did not find statistically significant evidence that antioxidant supplements have beneficial effects on mortality. On the contrary, they concluded that β-carotene, vitamin A, and vitamin E seem to increase the risk of death. Concerns are that the antioxidants may allow rescue of cells that still sustain DNA mutations or altered genomic methylation patterns following radiation damage to DNA, which can result in genomic instability. An approach to target damaged cells for apoptosis may be advantageous for chronic exposures to GCR.

Individual risk factors

Individual factors of potential importance are genetic factors, prior radiation exposure, and previous head injury, such as concussion. Apolipoprotein E (ApoE) has been shown to be an important and common factor in CNS responses. ApoE controls the redistribution of lipids among cells and is expressed at high levels in the brain. New studies are considering the effects of space radiation for the major isoforms of ApoE, which are encoded by distinct alleles (ε2, ε3, and ε4). The isoform ApoE ε4 has been shown to increase the risk of cognitive impairments and to lower the age of onset of Alzheimer’s disease. It is not known whether the interaction of radiation sensitivity with other individual risk factors is the same for high- and low-LET radiation. Other isoforms of ApoE confer a higher risk for other diseases. People who carry at least one copy of the ApoE ε4 allele are at increased risk for atherosclerosis, which is also suspected to be a risk increased by radiation. People who carry two copies of the ApoE ε2 allele are at risk for a condition known as hyperlipoproteinemia type III. It will therefore be extremely challenging to consider genetic factors in a multiple-radiation-risk paradigm.

Conclusion

Reliable projections for CNS risks from space radiation exposure cannot be made at this time due to a paucity of data on the subject. Existing animal and cellular data do suggest that space radiation can produce neurological and behavioral effects; therefore, it is possible that mission operations will be impacted. The significance of these results for astronaut morbidity has not been elucidated, however. It should be noted that studies to date have been carried out with relatively small numbers of animals.

Selfish brain theory

From Wikipedia, the free encyclopedia
 
The “Selfish Brain” theory describes the tendency of the human brain to cover its own, comparatively high energy requirements with the utmost priority when regulating energy fluxes in the organism. The brain behaves selfishly in this respect. The theory provides, among other things, a possible explanation for the origin of obesity, the severe, pathological form of overweight. The Luebeck obesity and diabetes specialist Achim Peters developed the fundamentals of this theory between 1998 and 2004. The interdisciplinary “Selfish Brain: brain glucose and metabolic syndrome” research group headed by Peters and supported by the German Research Foundation (DFG) at the University of Luebeck has since been able to reinforce the basics of the theory through experimental research.

The explanatory power of the Selfish Brain theory

Investigative approach of the Selfish Brain theory

The brain performs many functions for the human organism. Most are of a cognitive nature or concern the regulation of the motor system. A previously less-investigated aspect of brain activity was the regulation of energy metabolism; the "Selfish Brain" theory shed new light on this function. It states that the brain behaves selfishly by controlling energy fluxes in such a way that it allocates energy to itself before the needs of the other organs are satisfied. The internal energy consumption of the brain is very high. Although its mass constitutes only 2% of the entire body weight, it consumes 20% of the carbohydrates ingested over a 24-hour period. This corresponds to 100 g of glucose per day, or half the daily requirement for a human being. A 30-year-old office worker with a body weight of 75 kg and a height of 1.85 m consumes approx. 200 g of glucose per day.
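The figures quoted above can be checked with a line of arithmetic; the sketch below simply restates the numbers from the text (2% of body mass, 200 g of glucose per day, half of it consumed by the brain):

```python
# All values are taken from the text; this is only a back-of-envelope check.
body_mass_kg = 75.0
brain_mass_kg = 0.02 * body_mass_kg       # brain is about 2% of body mass
daily_glucose_g = 200.0                   # office worker's daily glucose turnover
brain_glucose_g = 0.5 * daily_glucose_g   # the brain takes roughly half: 100 g/day

print(f"brain mass: {brain_mass_kg:.1f} kg")
print(f"brain glucose use: {brain_glucose_g:.0f} g/day out of {daily_glucose_g:.0f} g")
```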

Previously, the scientific community assumed that the energy needs of the brain, the muscles, and the organs were all met in parallel. The hypothalamus, an area of the diencephalon, was thought to play a central role in regulating two feedback loops within narrow limits.
  • The "lipostatic theory" established by Gordon C. Kennedy in 1953 describes the fat-deposition feedback system. The hypothalamus receives signals from circulating metabolic products or hormones about how much adipose tissue there is in the body and about its prevailing metabolic status. Using these signals, the hypothalamus can adjust the absorption of nutrients so that the body’s fat depots remain constant, i.e. a "lipostasis" is achieved.
  • The "glucostatic theory" developed in the same year by Jean Mayer describes the blood glucose feedback system. According to this theory, the hypothalamus controls the absorption of nutrients via receptors that measure the glucose level in the blood. In this way a certain glucose concentration is set by adjusting the intake of nutrients. Mayer also included the brain in his calculations: although he considered that food intake served to safeguard the energy homeostasis of the central nervous system, he implied that the energy flux from the body to the brain was a passive process.
On the basis of these theories, a number of international research groups still locate the origin of obesity in a disorder of one of the two feedback systems described above. However, there are scenarios in weight regulation that cannot be explained in this way. For example, upon inanition of the body (e.g. during fasting) almost all the organs, such as the heart, liver, spleen, and kidneys, dramatically lose weight (approx. 40%) and the blood glucose concentration falls. During this time, however, the brain mass hardly changes (less than 2% on average). A further example illustrates the inherent conflict between these two explanatory approaches: although large amounts of the appetite-suppressing hormone leptin are released in obese individuals, they are still afflicted with ravenous hunger once their blood glucose falls.

The "Selfish Brain" theory links in seamlessly with the traditions of the lipo- and glucostatic theories. What is new is that the “Selfish Brain” theory assumes there is another feedback control system that is supraordinate to the blood glucose and fat feedback control systems.

What is meant here is a feedback system in which the cerebral hemispheres, the integrating organ of the entire central nervous system, control the ATP concentration (adenosine triphosphate, a form of energy currency for the organism) of the neurons (see 3). In this way the cerebral hemispheres ensure the primacy of the brain’s energy supply and are therefore considered in the "Selfish Brain" theory as part of a central authority that governs energy metabolism. Whenever required, the cerebral hemispheres direct an energy flux from the body to the brain to maintain its energy status. In contrast to the ideas of Jean Mayer, the "Selfish Brain" theory assumes an active "energy on demand" process. It is controlled by cerebral ATP sensors that react sensitively to changes in neuronal ATP over the entire brain.

The "Selfish Brain" theory combines the theories of Kennedy and Mayer, considering the blood glucose and fat feedback control systems as a single complex. This complex regulates the energy flux from the environment to the body, i.e. the intake of nutrients, and is regulated by a hypothalamic nucleus. Here as well there are sensors that record changes in both blood glucose and fat depots, and which activate biochemical processes that maintain a certain body weight.

To achieve their goal of maintaining energy homeostasis in the brain, the cerebral hemispheres depend on the subordinate feedback loops, since these loops send signals for energy procurement to their control organ. If these signals are not processed correctly, e.g. due to impairments in the amygdala or hippocampus, the energy supply to the brain will not be endangered, but anomalies such as obesity can still result. The origin of such anomalies is then to be found not in the blood glucose or fat feedback control systems, but rather in the regulating instances within the cerebral hemispheres.

Energy procurement by the brain

The brain can cover its energy needs (particularly those of the cerebral hemispheres) either by allocation or by nutrient intake. The corresponding signal to the subordinate regulatory system originates in the cerebral hemispheres. This phylogenetically youngest part of the brain is characterized by high plasticity and a high capacity to learn. It is always able to adapt its regulatory processes by processing responses from the periphery, memorizing the results of individual feedback loops and behaviors, and anticipating possible build-ups.

Energy procurement by the brain is complicated by three factors. First, the brain requests energy only when it is needed, since it can store energy only in a very restricted form; Peters therefore refers to this as an "energy on demand" system. Second, the brain is almost exclusively dependent on glucose as an ATP substrate. Lactate and beta-hydroxybutyric acid can also serve as substrates, but only under certain conditions, e.g. considerable stress or malnutrition. Third, the brain is separated from the rest of the body’s circulation by the blood-brain barrier, and blood glucose has to be carried across it by a special, insulin-independent transporter.

The healthy and the diseased brain: energy supply through allocation or food intake

Allocation represents the way a healthy brain secures its energy supply when it is acutely needed: it diverts blood glucose from the periphery and leads it across the blood-brain barrier. An important role here is played by the stress system, whose neural pathways lead directly to the organs (heart, muscle, adipose tissue, liver, pancreas, etc.) and which also acts indirectly on these organs via the bloodstream through the stress hormones adrenaline and cortisol. This system ensures that glucose is transported to the brain and that uptake by the musculature and the adipose tissue is reduced. To achieve this, the release of insulin, and thus its effect on these organs, is suppressed.

The acute supply of energy to the brain from the intake of nutrients presents problems for the organism. In an emergency, food intake is only activated if allocation is insufficient, and this must be taken as a sign of disease: the required energy cannot be requested from the body and can only be taken directly from the environment. This pathology is due to defects within the control centers of the brain such as the hippocampus, amygdala, and hypothalamus. These may be mechanical (tumors, injuries) or genetic (lacking brain-derived neurotrophic factor (BDNF) receptors or leptin receptors), or may stem from faulty programming (post-traumatic stress disorder, conditioning of eating behavior, advertising for sweets); false signals may also arise from the influence of antidepressants, drugs, alcohol, pesticides, saccharin, or viruses.

Such disorders can have a negative impact on a number of behavioral types:
  • Eating behavior (eating, drinking)
  • Social behavior (e.g. dealing with conflicts, sexuality)
  • Behavior during food procurement (movement, orientation)
Diseases can then result. The "Selfish Brain” research group has concentrated above all on obesity as a pathology.

The following applies irrespective of the nature of energy provision: the brain never gives up being selfish. Peters therefore differentiates the healthy from the diseased brain by its ability to compete for its energy requirements even under adverse conditions, when there are excessive demands from the body. He contrasts the "selfish brain with high fitness," which can tap the body's energy reserves even in times of short food supply at the expense of body mass, with the "selfish brain with low fitness," which is unable to do this and instead takes in additional food, bearing the risk of developing obesity.

Obesity - a build-up in the supply chain

The "Selfish Brain" theory can be considered a new way to understand obesity. Disorders in the control centers of the brain, such as the hippocampus, amygdala, and hypothalamus, are thought to underlie it, as outlined above. Whatever the type of disruption, it entails that energy procurement for the brain is accomplished less by allocation and more by the intake of nutrients, even though the muscles have no additional energy requirement. If one imagines the energy supply of the human organism as a supply chain that passes from the outside world, with its numerous options for nutrient intake, via the body to the brain as the end user and control organ, then obesity can be considered as being caused by a build-up in this supply chain. This build-up is characterized by an excessive accumulation of energy in the adipose tissue or blood. An allocation failure is expressed as a weakening of the sympathetic nervous system (SNS). The result is that energy intended for the brain mainly enters buffer storage areas, i.e. the adipose tissue and the musculature, and only a small proportion reaches the brain. In order to cover its huge energy needs, the brain commands the individual to consume more food. The accumulation process escalates, and the buffer storage areas are continuously filled up, leading to the development of obesity. In many cases, at a time dependent on the affected individual's personal disposition, obesity can also be compounded by diabetes mellitus. In such a situation the adipose tissue and musculature can no longer accept any energy, and the energy then accumulates in the blood, resulting in hyperglycemia.
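The build-up described above can be caricatured as a small flux model: energy flows from the environment into a body buffer (adipose tissue and muscle) and from there to the brain. In the sketch below, a weakened allocation mechanism forces extra intake to compensate, so the buffer accumulates. All parameters and the compensation rule are hypothetical, chosen only to illustrate the qualitative argument.

```python
def simulate_supply_chain(steps=50, allocation=1.0):
    """Toy discrete-time model of the energy supply chain described above:
    environment -> body buffer (fat/muscle) -> brain. `allocation` scales
    how effectively the brain pulls energy out of the buffer; when it
    weakens, intake is driven up to cover the deficit and the buffer
    accumulates. All parameters are hypothetical."""
    brain_need = 1.0          # energy units the brain must receive per step
    buffer_store = 10.0       # energy held in adipose tissue and muscle
    for _ in range(steps):
        delivered = min(allocation * brain_need, buffer_store)  # allocation
        deficit = brain_need - delivered
        # the brain commands extra intake to cover any allocation deficit
        intake = brain_need + 2.0 * deficit
        # intake enters the buffer; only `delivered` leaves it toward the brain
        buffer_store += intake - delivered
    return buffer_store

print(f"buffer, healthy allocation:  {simulate_supply_chain(allocation=1.0):.1f}")
print(f"buffer, impaired allocation: {simulate_supply_chain(allocation=0.6):.1f}")
```

With full allocation the buffer stays constant; with impaired allocation every step adds more energy to the buffer than leaves it, which is the "build-up in the supply chain" of the text.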

Work on the "Selfish Brain" theory

The basics of the theory

In 1998 Achim Peters drafted the basic version of the “Selfish Brain” theory and formulated its axioms. In his explanation of the theory he referred to approx. 5,000 published citations from classical endocrinology and diabetology and the modern neurosciences, and argued both mathematically (using differential equations) and system-theoretically, a novel methodological approach for diabetology. The regulation of the brain's adenosine triphosphate (ATP) content, a form of energy currency for the organism, plays a central role.

Peters assumes a double feedback structure, where the ATP content in the neurons of the brain is stabilized by measurements from two sensors of differing sensitivity that produce the raw energy request signals. The more sensitive sensor records ATP deficits and induces an allocation signal for glucose that is compensated for by requests from the body. The other less sensitive sensor is only activated with glucose excesses and conveys a signal to halt the brain glucose allocation. The optimal ATP quantity is determined by the balance between these receptor signals.
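A toy simulation of this double feedback structure is sketched below: a sensitive sensor raises the allocation request on ATP deficits, while a less sensitive sensor halts allocation only on excess, so the ATP level settles between the two set-points. The set-points, gain, and time step are hypothetical illustrative values, not parameters from Peters' differential-equation model.

```python
def regulate_atp(atp, steps=200, low_set=0.9, high_set=1.1, gain=0.3):
    """Toy version of the double feedback structure: a sensitive sensor
    fires on ATP deficits and raises the allocation request; a less
    sensitive sensor fires only on excess and halts allocation.
    Set-points and gain are hypothetical illustrative values."""
    for _ in range(steps):
        if atp < low_set:
            # sensitive sensor: deficit -> request glucose allocation
            allocation = gain * (low_set - atp)
        elif atp > high_set:
            # less sensitive sensor: excess -> halt/reverse allocation
            allocation = -gain * (atp - high_set)
        else:
            allocation = 0.0          # within the dead band: no correction
        consumption = 0.05            # constant neuronal ATP use per step
        resupply = 0.05               # baseline glucose delivery
        atp += allocation + resupply - consumption
    return atp

print(f"starting low (0.5):  settles at {regulate_atp(0.5):.3f}")
print(f"starting high (1.5): settles at {regulate_atp(1.5):.3f}")
```

Whatever the starting level, the two opposing receptor signals drive the ATP content back into the band between the set-points, which is the balance described in the text.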

Peters considers that the stress system also operates according to this double feedback structure, which is also closely related to the supply of glucose to the brain. If an individual is confronted with a stress-inducing stimulus, it responds with an increased central-nervous information processing and along with that an increased glucose requirement in the brain. The hormone cortisol, important for regulating stress reactions, and the hormone adrenaline, important for glucose procurement, are released from the adrenal glands. The amount of cortisol that is released is also determined by a balance between a sensitive and a less sensitive sensor, just as is the case with the control of ATP content. This process is terminated if the stress system returns to a resting state.

This model underlies the axioms for the “Selfish Brain" theory as developed by Peters:
  1. The ATP content in the brain is held constant within tight limits, irrespective of the state of the body.
  2. The stress system strives to return to a resting state.

Integrative power of the “Selfish Brain" theory

The "Selfish Brain" theory is an integrative concept: from a methodological standpoint it can be seen as a union of two separate research directions. On the one hand it integrates peripheral metabolism research, which investigates how energy metabolism functions through the intake of nutrients into the organs of the body. On the other it incorporates the results of the brain metabolism expert Luc Pellerin from the University of Lausanne, who found that the neurons in the brain are supplied with energy via their neighboring astrocytes whenever required. This requirement-oriented principle for the nerve cells is termed "energy on demand".

With this approach the "Selfish Brain" theory unites descriptions of the two ends of a supply chain. The brain does not just control the supply chain; it is also its end consumer, rather than the body through which the supply chain passes. The priority of the brain implies that the regulation of energy supply in the human organism is accomplished by the demand rather than the supply principle: energy is ordered when it is needed.
Fig. 1: Energy supply chain of the "Selfish Brain".
If the ATP concentration drops in the nerve cells of the brain, a cerebral mechanism (pull 1) is set in motion which increases the energy flux directed from the body to the brain according to the "energy on demand" principle (solid arrows show stimulation, interrupted arrows inhibition; yellow means "belongs to the controlling brain parts"). If the energy content in the body falls (blood, adipose tissue), the falling glucose and the falling adipose-tissue hormone leptin induce another cerebral mechanism (pull 2). This entails that more energy is absorbed from the immediate environment into the body (ingestion behavior). When the available supplies in the immediate vicinity disappear, a further cerebral mechanism (pull 3) initiates movement and exploration, i.e. foraging for food. The glucostatic and lipostatic theories describe the second step in this supply chain (area with dark grey background). The "Selfish Brain" theory links to the two traditional theories and expands them by considering the brain as an end consumer in a continuous supply chain (light gray).

The founding of the "Selfish Brain" research group

After the axioms were formulated in 1998, Achim Peters sought experts in other specialties to develop his "Selfish Brain" theory further. Already at an early stage he had compared his ideas with the views of other leading international scientists, among them the Swiss brain metabolism specialist Luc Pellerin, the renowned obesity expert Denis G. Baskin, the internationally famous stress researcher Mary Dallman, and the renowned neurobiologist Larry W. Swanson. At the University of Luebeck, Achim Peters compared his findings with those of the well-known neuroendocrinologist Prof. Dr. Horst Lorenz Fehm. A year later, in 1999, an intensive collaboration was started with the psychiatrist and psychotherapist Prof. Dr. Ulrich Schweiger, who also worked at the University of Luebeck.

In 2004 the interdisciplinary research group "Selfish Brain: brain glucose and metabolic syndrome," supported by the German Research Foundation (DFG), was officially founded. Achim Peters was appointed to a professorship created especially for the group. He also succeeded in winning over additional reputable scientists for the project, including Prof. Dr. Rolf Hilgenfeld, an eminent SARS expert and the developer of one of the first inhibitors of the virus. The research group currently consists of 18 scientific subproject investigators from a number of specialties, including internal medicine, psychiatry, neurobiology, molecular medicine, and mathematics. The advisory committee counts Professors Luc Pellerin, Denis Baskin, and Mary Dallman among its members.

"Train the brain": a therapy of obesity based on the "Selfish Brain" theory

According to the “Selfish Brain” theory, obesity can also be attributed to psychological causes, one of which is poor coping strategies in stress situations. An association was found between the tendency to evade conflict and the habit of reducing psychological stress by immediately consuming sweets. The direct supply of glucose circumvents the glucose procurement from the body that would otherwise occur through the normal allocation process following the release of the stress hormone adrenaline. An existing allocation problem in obesity can be made even worse by such behavior, and the stress system can be weakened further because it may forget how to react autonomously.

These relationships have led to the development of an innovative multidisciplinary psychiatric and internal-medical program for obesity therapy at the University of Luebeck. Prof. Dr. Ulrich Schweiger of the Clinic for Psychiatry and Psychotherapy, led by Prof. Dr. F. Hohagen, has been a key player in this development. In close cooperation with Schweiger, the internist Achim Peters derived a therapeutic concept from the “Selfish Brain” theory that addresses both the feelings and the coordinated behaviors emanating from the brain. The aim of this therapy is to modify the habitual settings and behaviors coded in the emotional memory centers of the brain. "Train the Brain" is the catchphrase describing these therapeutic measures, which are enabled by the unusual plasticity and learning capacity of the brain. The therapy might simply involve practicing eating behaviors that are tolerable from a health perspective and combining this with a reduction in detrimental habits; it could also involve modifying behaviors associated with the handling of conflicts and other stress situations. In the view of the “Selfish Brain” research group, if defective allocation is chronically compensated for by immediately consuming foodstuffs, a risk arises that eating will become the only reaction to situations that require considerably more complex social behavior. The therapy of obesity therefore has both a physiological and a psychological component: it is not just the ability to allocate that must be restored, but also actions and behaviors in everyday life.

Experimental evidence: the theory’s scope of validity

In the first DFG funding period from 2004 to 2007 researchers from the Clinical Research Group “Selfish Brain: brain glucose and metabolic syndrome" expanded the scope of validity of the “Selfish Brain" theory in central aspects by carrying out experiments on healthy and diseased test subjects. The researchers in Luebeck found the following key results regarding the axioms of the theory:
  • The brain maintains its own glucose content "selfishly"
  • The brain is always supplied with a greater energy share than the body in extreme stress situations
  • In overweight individuals the brain’s energy distribution mechanism is disrupted
  • With chronic stress loads the energy flux between the brain and the body is diverted, a phenomenon that leads to the development of overweight
  • Nerve cells record their ATP content using two sensors of differing sensitivity
  • The resting state of the stress system is fine-tuned with the help of two cortisol receptors of differing sensitivity
The special position of the brain during inanition (due to fasting or tumor disease) was confirmed experimentally over 80 years ago: body mass falls, but the mass of the brain hardly changes, if at all (see 3). Recently this axiom of the Selfish Brain theory was supported by work at the University of Luebeck using state-of-the-art magnetic resonance procedures, e.g. during metabolic stress. The ATP content in the brain and musculature of test subjects was measured by a magnetic resonance technique while either an energy deficit or surplus was induced in the blood by insulin or glucose injection. In both situations a sufficiently high ATP concentration was maintained in the brain: the measured levels of energy-rich compounds consistently shifted in favor of the brain and at the expense of the body's cells. The brain's glucose supply retained priority despite the metabolic stress being endured.

Some of the results were presented at the international congress organized by the "Selfish Brain” research group on 23 and 24 February 2006 in Luebeck, as well as at a press conference aimed at both specialists and the wider public.

In the second funding period that has been running since the end of 2007, the clarification of the following questions has now become the focus of interest:
  • How does the reward system of the "Selfish Brain" function, and how does it lead, in obese individuals, to faulty programming of energy management?
  • How can the redirection of metabolic fluxes be learned and trained?
  • How does "comfort feeding" affect stress reactions?
  • How is the glucose requirement of the brain increased in stress situations?
  • What does the molecular supply chain with which brain cells request glucose when needed look like?
  • Can viruses block this supply chain for the brain cells?

Gut–brain axis

From Wikipedia, the free encyclopedia
 
The gut-brain axis is the relationship between the GI tract and brain function and development

The gut–brain axis is the biochemical signaling that takes place between the gastrointestinal tract (GI tract) and the central nervous system (CNS). The term "gut–brain axis" is occasionally used to refer to the role of the gut flora in the interplay as well, whereas the term "microbiome–gut–brain axis" explicitly includes the role of gut flora in the biochemical signaling events that take place between the GI tract and CNS.

Broadly defined, the gut-brain axis includes the central nervous system; the neuroendocrine and neuroimmune systems, including the hypothalamic–pituitary–adrenal axis (HPA axis); the sympathetic and parasympathetic arms of the autonomic nervous system, including the enteric nervous system and the vagus nerve; and the gut microbiota. The first of the brain–gut interactions to be shown was the cephalic phase of digestion: the release of gastric and pancreatic secretions in response to sensory signals, such as the smell and sight of food. This was first demonstrated by Pavlov.

Interest in the field was sparked by a 2004 study showing that germ-free mice showed an exaggerated HPA axis response to stress compared to non-GF laboratory mice.

As of October 2016, most of the work that had been done on the role of gut flora in the gut-brain axis had been conducted in animals, or on characterizing the various neuroactive compounds that gut flora can produce. Studies with humans – measuring variations in gut flora between people with various psychiatric and neurological conditions or when stressed, or measuring effects of various probiotics (dubbed "psychobiotics" in this context) – had generally been small and could not be generalized. Whether changes to gut flora are a result of disease, a cause of disease, or both in any number of possible feedback loops in the gut-brain axis, remained unclear.

Gut flora

The gut flora is the complex community of microorganisms that live in the digestive tracts of humans and other animals. The gut metagenome is the aggregate of all the genomes of gut microbiota. The gut is one niche that human microbiota inhabit.

In humans, the gut microbiota has the largest numbers of bacteria and the greatest number of species compared to other areas of the body. In humans the gut flora is established at one to two years after birth, and by that time the intestinal epithelium and the intestinal mucosal barrier that it secretes have co-developed in a way that is tolerant to, and even supportive of, the gut flora and that also provides a barrier to pathogenic organisms.

The relationship between gut flora and humans is not merely commensal (a non-harmful coexistence) but mutualistic. Human gut microorganisms benefit the host by collecting energy from the fermentation of undigested carbohydrates and the subsequent absorption of short-chain fatty acids (SCFAs): acetate, butyrate, and propionate. Intestinal bacteria also play a role in synthesizing vitamin B and vitamin K and in metabolizing bile acids, sterols, and xenobiotics. The SCFAs and other compounds that gut bacteria produce act on the body much like hormones, and the gut flora itself appears to function like an endocrine organ; dysregulation of the gut flora has been correlated with a host of inflammatory and autoimmune conditions.

The composition of human gut flora changes over time, when the diet changes, and as overall health changes.

Enteric nervous system

The enteric nervous system is one of the main divisions of the nervous system and consists of a mesh-like system of neurons that governs the function of the gastrointestinal system; it has been described as a "second brain" for several reasons. The enteric nervous system can operate autonomously. It normally communicates with the central nervous system (CNS) through the parasympathetic (e.g., via the vagus nerve) and sympathetic (e.g., via the prevertebral ganglia) nervous systems. However, vertebrate studies show that when the vagus nerve is severed, the enteric nervous system continues to function.

In vertebrates, the enteric nervous system includes efferent neurons, afferent neurons, and interneurons, all of which make it capable of carrying out reflexes in the absence of CNS input. The sensory neurons report on mechanical and chemical conditions. Through intestinal muscles, the motor neurons control peristalsis and the churning of intestinal contents. Other neurons control the secretion of enzymes. The enteric nervous system also makes use of more than 30 neurotransmitters, most of which are identical to those found in the CNS, such as acetylcholine, dopamine, and serotonin. More than 90% of the body's serotonin lies in the gut, as does about 50% of the body's dopamine; the dual function of these neurotransmitters is an active part of gut-brain research.

Gut-brain integration

The gut–brain axis, a bidirectional neurohumoral communication system, is important for maintaining homeostasis and is regulated through the central and enteric nervous systems and the neural, endocrine, immune, and metabolic pathways, especially the hypothalamic–pituitary–adrenal axis (HPA axis). The term has been expanded to the "microbiome-gut-brain axis" to include the role of the gut flora in this linkage of functions.

Interest in the field was sparked by a 2004 study (Nobuyuki Sudo and Yoichi Chida) showing that germ-free mice (genetically homogeneous laboratory mice, birthed and raised in an antiseptic environment) showed an exaggerated HPA axis response to stress compared to non-GF laboratory mice.

The gut flora can produce a range of neuroactive molecules, such as acetylcholine, catecholamines, γ-aminobutyric acid, histamine, melatonin, and serotonin, which is essential for regulating peristalsis and sensation in the gut. Changes in the composition of the gut flora due to diet, drugs, or disease correlate with changes in levels of circulating cytokines, some of which can affect brain function. The gut flora also release molecules that can directly activate the vagus nerve which transmits information about the state of the intestines to the brain.

Likewise, chronic or acutely stressful situations activate the hypothalamic–pituitary–adrenal axis, causing changes in the gut flora and intestinal epithelium, and possibly having systemic effects. Additionally, the cholinergic anti-inflammatory pathway, signaling through the vagus nerve, affects the gut epithelium and flora. Hunger and satiety are integrated in the brain, and the presence or absence of food in the gut and types of food present, also affect the composition and activity of gut flora.

That said, most of the work that has been done on the role of gut flora in the gut-brain axis has been conducted in animals, including the highly artificial germ-free mice. As of 2016 studies with humans measuring changes to gut flora in response to stress, or measuring effects of various probiotics, have generally been small and cannot be generalized; whether changes to gut flora are a result of disease, a cause of disease, or both in any number of possible feedback loops in the gut-brain axis, remains unclear.

Research

Probiotics

A 2016 systematic review of laboratory animal studies and preliminary human clinical trials using commercially available strains of probiotic bacteria found that certain species of the Bifidobacterium and Lactobacillus genera (i.e., B. longum, B. breve, B. infantis, L. helveticus, L. rhamnosus, L. plantarum, and L. casei) had the most potential to be useful for certain central nervous system disorders.

Anxiety and mood disorders

As of 2018 work on the relationship between gut flora and anxiety disorders and mood disorders, as well as trying to influence that relationship using probiotics or prebiotics (called "psychobiotics"), was at an early stage, with insufficient evidence to draw conclusions about a causal role for gut flora changes in these conditions, or about the efficacy of any probiotic or prebiotic treatment.

People with anxiety and mood disorders tend to have gastrointestinal problems; small studies have been conducted to compare the gut flora of people with major depressive disorder and healthy people, but those studies have had contradictory results.

Much interest was generated in the potential role of gut flora in anxiety disorders, and more generally in the role of gut flora in the gut-brain axis, by studies published in 2004 showing that germ-free mice have an exaggerated HPA axis response to stress caused by being restrained, which was reversed by colonizing their gut with a Bifidobacterium species. Studies of maternal separation in rats show that neonatal stress leads to long-term changes in the gut microbiota, such as altered diversity and composition, which in turn lead to stress and anxiety-like behavior. Additionally, while much work had been done as of 2016 to characterize the various neurotransmitters known to be involved in anxiety and mood disorders that gut flora can produce (for example, Escherichia, Bacillus, and Saccharomyces species can produce noradrenaline; Candida, Streptococcus, and Escherichia species can produce serotonin, etc.), the inter-relationships and pathways by which the gut flora might affect anxiety in humans remained unclear.

Autism

Around 70% of people with autism also have gastrointestinal problems, and autism is often diagnosed at the time that the gut flora becomes established, indicating that there may be a connection between autism and gut flora. Some studies have found differences in the gut flora of children with autism compared with children without autism – most notably elevations in the amount of Clostridium in the stools of children with autism compared with the stools of the children without – but these results have not been consistently replicated. Many of the environmental factors thought to be relevant to the development of autism would also affect the gut flora, leaving open the question whether specific developments in the gut flora drive the development of autism or whether those developments happen concurrently. As of 2016, studies with probiotics had only been conducted with animals; studies of other dietary changes to treat autism have been inconclusive.

Parkinson's disease

As of 2015 one study had been conducted comparing the gut flora of people with Parkinson's disease to healthy controls. In that study, people with Parkinson's had lower levels of Prevotellaceae, and people with Parkinson's who had higher levels of Enterobacteriaceae had more clinically severe symptoms; the authors of the study drew no conclusions about whether gut flora changes were driving the disease or vice versa.

Models of neural computation

From Wikipedia, the free encyclopedia
 
Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. This article aims to provide an overview of the most definitive models of neuro-biological computation as well as the tools commonly used to construct and analyze them.

Introduction

Due to the complexity of nervous system behavior, the associated experimental error bounds are ill-defined, but the relative merit of the different models of a particular subsystem can be compared according to how closely they reproduce real-world behaviors or respond to specific input signals. In the closely related field of computational neuroethology, the practice is to include the environment in the model in such a way that the loop is closed. In the cases where competing models are unavailable, or where only gross responses have been measured or quantified, a clearly formulated model can guide the scientist in designing experiments to probe biochemical mechanisms or network connectivity.

In all but the simplest cases, the mathematical equations that form the basis of a model cannot be solved exactly. Nevertheless, computer technology, sometimes in the form of specialized software or hardware architectures, allows scientists to perform iterative calculations and search for plausible solutions. A computer chip or a robot that can interact with the natural environment in ways akin to the original organism is one embodiment of a useful model. The ultimate measure of success is, however, the ability to make testable predictions.

General criteria for evaluating models

Speed of information processing

The rate of information processing in biological neural systems is constrained by the speed at which an action potential can propagate down a nerve fibre. This conduction velocity ranges from 1 m/s to over 100 m/s, and generally increases with the diameter of the neuronal process. Because this is slow on the timescales of biologically relevant events, which are dictated by the speed of sound or the force of gravity, the nervous system overwhelmingly prefers parallel computations over serial ones in time-critical applications.

Robustness

A model is robust if it continues to produce the same computational results under variations in inputs or operating parameters introduced by noise. For example, the direction of motion as computed by a robust motion detector would not change under small changes of luminance, contrast or velocity jitter.

Gain control

This refers to the principle that the response of a nervous system should stay within certain bounds even as the inputs from the environment change drastically. For example, when adjusting between a sunny day and a moonless night, the retina changes the relationship between light level and neuronal output by a factor of more than 10^6 so that the signals sent to later stages of the visual system always remain within a much narrower range of amplitudes.
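As an illustration, a divisive-normalization rule of the Naka–Rushton form (an illustrative choice, not a mechanism described in this article) shows how re-centring the operating range keeps responses bounded across enormous input ranges:

```python
def normalized_response(intensity, adaptation):
    # Divisive normalization (Naka-Rushton form, an illustrative choice):
    # the output saturates at 1.0, and the half-maximum point tracks the
    # current adaptation state, which is assumed to settle near the
    # ambient light level.
    return intensity / (intensity + adaptation)

# A stimulus twice the ambient level evokes the same response whether the
# scene is dim (1e-3) or bright (1e3): the operating range is re-centred,
# so six orders of magnitude of input map into the same output range.
dim    = normalized_response(2e-3, adaptation=1e-3)
bright = normalized_response(2e3,  adaptation=1e3)
print(dim, bright)  # both ~0.667
```

The same relative contrast therefore produces the same response at any absolute light level, which is the essence of gain control.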

Linearity versus nonlinearity

A linear system is one whose response in a specified unit of measure, to a set of inputs considered at once, is the sum of its responses due to the inputs considered individually.

Linear systems are easier to analyze mathematically and are a pervasive assumption in many models, including the McCulloch–Pitts neuron, population coding models, and the simple neurons often used in artificial neural networks. Linearity may occur in the basic elements of a neural circuit, such as the response of a postsynaptic neuron, or as an emergent property of a combination of nonlinear subcircuits. Though linearity is often seen as incorrect, recent work suggests it may, in fact, be biophysically plausible in some cases.
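The superposition property defined above can be checked directly. The sketch below (illustrative signals and filter kernel) verifies that a convolution-style filter responds to a sum of inputs with the sum of its individual responses:

```python
def linear_filter(signal, kernel):
    # Finite impulse response: each output sample is a weighted sum of
    # current and past input samples, which is a linear operation.
    return [sum(kernel[k] * signal[i - k]
                for k in range(len(kernel)) if i - k >= 0)
            for i in range(len(signal))]

a = [1.0, 2.0, 0.0, -1.0, 3.0]
b = [0.5, -1.0, 3.0, 2.0, -2.0]
kernel = [0.25, 0.5, 0.25]

response_to_sum = linear_filter([x + y for x, y in zip(a, b)], kernel)
sum_of_responses = [x + y for x, y in
                    zip(linear_filter(a, kernel), linear_filter(b, kernel))]
# For a linear system the two are identical (superposition).
max_diff = max(abs(u - v) for u, v in zip(response_to_sum, sum_of_responses))
```

A nonlinear element, e.g. squaring each sample, would fail this test, which is exactly what distinguishes the two classes of model.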

Examples

A computational neural model may be constrained to the level of biochemical signalling in individual neurons or it may describe an entire organism in its environment. The examples here are grouped according to their scope.

Models of information transfer in neurons

The most widely used models of information transfer in biological neurons are based on analogies with electrical circuits. The equations to be solved are time-dependent differential equations with electro-dynamical variables such as current, conductance or resistance, capacitance and voltage.

Hodgkin–Huxley model and its derivatives

The Hodgkin–Huxley model, widely regarded as one of the great achievements of 20th-century biophysics, describes how action potentials in neurons are initiated and propagated in axons via voltage-gated ion channels. It is a set of nonlinear ordinary differential equations that were introduced by Alan Lloyd Hodgkin and Andrew Huxley in 1952 to explain the results of voltage clamp experiments on the squid giant axon. Analytic solutions do not exist, but the Levenberg–Marquardt algorithm, a modified Gauss–Newton algorithm, is often used to fit these equations to voltage-clamp data.
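Since no analytic solutions exist, the equations are integrated numerically in practice. A minimal forward-Euler sketch using the standard squid-axon parameters (the step size and stimulus current are illustrative choices) shows the model firing in response to a constant injected current:

```python
from math import exp

# Standard Hodgkin-Huxley parameters: potential in mV, time in ms,
# conductances in mS/cm^2, currents in uA/cm^2.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening (a) and closing (b) rates of the gates.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * exp(-(V + 65.0) / 80.0)

# Start at rest, with gating variables at their steady states for -65 mV.
V, m, h, n = -65.0, 0.0529, 0.5961, 0.3177
dt, I_ext = 0.01, 10.0          # ms; constant stimulus in uA/cm^2
peak = V
for _ in range(int(50.0 / dt)):  # 50 ms of simulated time
    I_ion = (g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    peak = max(peak, V)
# With a 10 uA/cm^2 stimulus the model spikes: V overshoots 0 mV.
```

A production simulation would use an adaptive or implicit integrator, but the structure of the four coupled equations is already visible here.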

The FitzHugh–Nagumo model is a simplification of the Hodgkin–Huxley model. The Hindmarsh–Rose model is an extension that describes neuronal spike bursts. The Morris–Lecar model is a modification that does not generate spikes, but describes slow-wave propagation, which is implicated in the inhibitory synaptic mechanisms of central pattern generators.
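The FitzHugh–Nagumo simplification reduces the system to two variables, which makes its oscillatory behavior easy to reproduce. A short sketch using the conventional textbook parameter values (the stimulus value is an illustrative choice):

```python
# FitzHugh-Nagumo: v is the fast, voltage-like variable, w the slow
# recovery variable; a, b, tau are the conventional textbook values.
def fhn_step(v, w, I, dt=0.01, a=0.7, b=0.8, tau=12.5):
    dv = v - v**3 / 3.0 - w + I
    dw = (v + a - b * w) / tau
    return v + dt * dv, w + dt * dw

v, w = -1.0, 1.0
trace = []
for _ in range(100000):          # 1000 time units with dt = 0.01
    v, w = fhn_step(v, w, I=0.5)
    trace.append(v)
# For I = 0.5 the fixed point is unstable and the system settles onto a
# limit cycle: v keeps oscillating instead of converging.
spread = max(trace[50000:]) - min(trace[50000:])
```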

Transfer functions and linear filters

This approach, influenced by control theory and signal processing, treats neurons and synapses as time-invariant entities whose outputs are linear combinations of input signals, often depicted as sine waves with well-defined temporal or spatial frequencies.

The entire behavior of a neuron or synapse is encoded in a transfer function, notwithstanding a lack of knowledge concerning the exact underlying mechanism. This brings a highly developed body of mathematics to bear on the problem of information transfer.

The accompanying taxonomy of linear filters turns out to be useful in characterizing neural circuitry. Both low- and high-pass filters are postulated to exist in some form in sensory systems, as they act to prevent information loss in high and low contrast environments, respectively.

Indeed, measurements of the transfer functions of neurons in the horseshoe crab retina according to linear systems analysis show that they remove short-term fluctuations in input signals leaving only the long-term trends, in the manner of low-pass filters. These animals are unable to see low-contrast objects without the help of optical distortions caused by underwater currents.
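The low-pass behavior described above can be imitated with a first-order recursive filter. In this sketch (the signal frequencies and smoothing constant are illustrative), an exponential moving average removes fast fluctuations while passing the slow trend:

```python
from math import sin, pi

def low_pass(signal, alpha=0.05):
    # First-order IIR low-pass (exponential moving average): the output
    # relaxes toward the input with time constant ~1/alpha samples.
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

t = [i / 100.0 for i in range(2000)]                # 100 samples/s, 20 s
slow  = [sin(2 * pi * 0.1 * u) for u in t]          # 0.1 Hz trend
noise = [0.5 * sin(2 * pi * 10.0 * u) for u in t]   # 10 Hz fluctuation
filtered = low_pass([s + n for s, n in zip(slow, noise)])

# After the initial transient, the filter output tracks the slow trend
# far more closely than the raw fluctuation amplitude would allow.
half = len(t) // 2
err = (sum((f - s) ** 2
           for f, s in zip(filtered[half:], slow[half:])) / half) ** 0.5
noise_rms = (sum(x * x for x in noise) / len(noise)) ** 0.5
```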

Models of computations in sensory systems

Lateral inhibition in the retina: Hartline–Ratliff equations

In the retina, an excited neural receptor can suppress the activity of surrounding neurons within an area called the inhibitory field. This effect, known as lateral inhibition, increases the contrast and sharpness in visual response, but leads to the epiphenomenon of Mach bands. This is often illustrated by the optical illusion of light or dark stripes next to a sharp boundary between two regions in an image of different luminance.

The Hartline-Ratliff model describes interactions within a group of n photoreceptor cells. Assuming these interactions to be linear, Hartline and Ratliff proposed the following relationship for the steady-state response rate r_p of the p-th photoreceptor in terms of the steady-state response rates r_j of the surrounding receptors:

r_p = \left|\left[ e_p - \sum_{j=1,\, j \neq p}^{n} k_{pj} \left| r_j - r_{pj}^{o} \right| \right]\right|.

Here,
e_p is the excitation of the target p-th receptor from sensory transduction,
r_{pj}^{o} is the associated threshold of the firing cell, and
k_{pj} is the coefficient of inhibitory interaction between the p-th and the j-th receptor. The inhibitory interaction decreases with distance from the target p-th receptor.
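Because each rate appears on both sides of the relationship, the steady-state rates can be found by fixed-point iteration. The sketch below uses an illustrative one-dimensional row of receptors, distance-decaying coefficients k, and a uniform threshold; none of these values come from the article, and inhibition is clipped to zero below threshold (an assumption about the intended nonlinearity):

```python
# Fixed-point iteration for steady-state Hartline-Ratliff rates.
n = 9
e  = [5.0] * 3 + [20.0] * 3 + [5.0] * 3     # a bright central patch
r0 = [[1.0] * n for _ in range(n)]          # uniform thresholds (assumed)
k  = [[0.1 / (1 + abs(p - j)) if p != j else 0.0 for j in range(n)]
      for p in range(n)]                    # inhibition decays with distance

r = e[:]                                    # initial guess: no inhibition
for _ in range(200):
    r = [max(0.0, e[p] - sum(k[p][j] * max(0.0, r[j] - r0[p][j])
                             for j in range(n) if j != p))
         for p in range(n)]
# Lateral inhibition enhances contrast at the boundary: edge units of the
# bright patch fire harder than its centre, and dark units next to the
# patch are suppressed below the far dark units (Mach bands).
```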

Cross-correlation in sound localization: Jeffress model

According to Jeffress, in order to compute the location of a sound source in space from interaural time differences, an auditory system relies on delay lines: the induced signal from an ipsilateral auditory receptor to a particular neuron is delayed for the same time as it takes for the original sound to go in space from that ear to the other. Each postsynaptic cell is differently delayed and thus specific for a particular inter-aural time difference. This theory is equivalent to the mathematical procedure of cross-correlation.

Following Fischer and Anderson, the response of the postsynaptic neuron to the signals from the left and right ears is given by

y_R(t) - y_L(t)

where

y_L(t) = \int_0^{\tau} u_L(\sigma)\, w(t - \sigma)\, d\sigma
y_R(t) = \int_0^{\tau} u_R(\sigma)\, w(t - \sigma)\, d\sigma

and

w(t - \sigma) represents the delay function. (This is not entirely correct; a clear eye is needed to put the symbols in order.)

Structures have been located in the barn owl which are consistent with Jeffress-type mechanisms.
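The equivalence with cross-correlation can be made concrete: each coincidence detector corresponds to one candidate internal delay, and the detector whose delay matches the interaural time difference responds most strongly. A sketch with an illustrative sinusoidal stimulus (all values are assumptions for the demonstration):

```python
from math import sin, pi

fs = 40000                       # Hz, sample rate (illustrative)
delay_samples = 12               # true interaural delay: 12 samples = 0.3 ms
t = [i / fs for i in range(2000)]
left  = [sin(2 * pi * 500 * u) for u in t]                 # 500 Hz tone
right = [0.0] * delay_samples + left[:-delay_samples]      # arrives later

def correlation_at(lag):
    # Coincidence count for the "cell" tuned to this internal delay:
    # delay the left-ear signal by `lag` and correlate with the right.
    return sum(left[i - lag] * right[i] for i in range(lag, len(t)))

# The detector with maximal coincidence reports the interaural delay.
best_lag = max(range(0, 40), key=correlation_at)
```

Picking the lag that maximizes `correlation_at` is exactly the cross-correlation procedure the Jeffress model is said to implement.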

Cross-correlation for motion detection: Hassenstein–Reichardt model

A motion detector needs to satisfy three general requirements: pair-inputs, asymmetry and nonlinearity. The cross-correlation operation implemented asymmetrically on the responses from a pair of photoreceptors satisfies these minimal criteria, and furthermore, predicts features which have been observed in the response of neurons of the lobula plate in bi-wing insects.

The master equation for response is

R = A_1(t - \tau)\, B_2(t) - A_2(t - \tau)\, B_1(t)

The HR model predicts a peaking of the response at a particular input temporal frequency. The conceptually similar Barlow–Levick model is deficient in the sense that a stimulus presented to only one receptor of the pair is sufficient to generate a response. This is unlike the HR model, which requires two correlated signals delivered in a time-ordered fashion. However, the HR model does not show a saturation of response at high contrasts, which is observed in experiment. Extensions of the Barlow–Levick model can account for this discrepancy.
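A minimal opponent correlator satisfying the three requirements (paired inputs, asymmetry via a delay, nonlinearity via multiplication) can be sketched as follows; the stimulus and delay values are illustrative:

```python
from math import sin, pi

def hr_response(stim_a, stim_b, delay):
    # Opponent correlator: each half multiplies the delayed signal of one
    # input with the undelayed signal of the other; subtracting the two
    # half-detectors makes the output direction-selective.
    return sum(stim_a[i - delay] * stim_b[i] - stim_b[i - delay] * stim_a[i]
               for i in range(delay, len(stim_a)))

# A grating moving from receptor A toward receptor B: B sees the same
# waveform as A, but shifted later in time.
n, shift = 1000, 5
base = [sin(2 * pi * i / 50.0) for i in range(n + shift)]
A = base[shift:]     # leading receptor
B = base[:-shift]    # trailing receptor: B[i] == A[i - shift]

rightward = hr_response(A, B, delay=5)
leftward  = hr_response(B, A, delay=5)
# Reversing the motion direction flips the sign of the response.
```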

Watson–Ahumada model for motion estimation in humans

This uses a cross-correlation in both the spatial and temporal directions, and is related to the concept of optical flow.

Neurophysiological metronomes: neural circuits for pattern generation

Mutually inhibitory processes are a unifying motif of all central pattern generators. This has been demonstrated in the stomatogastric (STG) nervous system of crayfish and lobsters. Two- and three-cell oscillating networks based on the STG have been constructed that are amenable to mathematical analysis and that depend in a simple way on synaptic strengths and overall activity, presumably the principal parameters under experimental control. The mathematics involved is the theory of dynamical systems.

Feedback and control: models of flight control in the fly

Flight control in the fly is believed to be mediated by inputs from the visual system and also the halteres, a pair of knob-like organs which measure angular velocity. Integrated computer models of Drosophila, short on neuronal circuitry but based on the general guidelines given by control theory and data from the tethered flights of flies, have been constructed to investigate the details of flight control.

Software modelling approaches and tools

Neural networks

In this approach the strength and type, excitatory or inhibitory, of synaptic connections are represented by the magnitude and sign of weights, that is, numerical coefficients w' in front of the inputs x to a particular neuron. The response of the j-th neuron is given by a sum of nonlinear, usually "sigmoidal" functions g of the inputs as:

f_j = \sum_i g\left( w_{ji}' x_i + b_j \right).

This response is then fed as input into other neurons, and so on. The goal is to optimize the weights of the neurons so that the output layer produces a desired response for a given set of inputs at the input layer. This optimization of the neuron weights is often performed using the backpropagation algorithm together with an optimization method such as gradient descent or Newton's method. Backpropagation compares the output of the network with the expected output from the training data, then updates the weights of each neuron to minimize that neuron's contribution to the total error of the network.
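For a single sigmoidal unit, the whole loop (weighted sum, nonlinearity, error gradient, weight update) fits in a few lines. This sketch uses a squared-error loss with an illustrative learning rate, input, and target:

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# One sigmoidal unit: response g(w . x + b).
w, b = [0.5, -0.3], 0.1
x, target = [1.0, 2.0], 0.9
lr = 0.5                          # learning rate (illustrative)

for _ in range(500):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    y = sigmoid(z)
    # Squared-error loss E = (y - target)^2 / 2; backpropagating through
    # the sigmoid gives dE/dz = (y - target) * y * (1 - y).
    grad_z = (y - target) * y * (1.0 - y)
    w = [wi - lr * grad_z * xi for wi, xi in zip(w, x)]  # gradient descent
    b -= lr * grad_z

final = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
# Training drives the unit's output toward the target value.
```

In a multi-layer network the same chain-rule step is repeated layer by layer, which is all that "backpropagation" adds to this picture.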

Genetic algorithms

Genetic algorithms are used to evolve neural (and sometimes body) properties in a model brain-body-environment system so as to exhibit some desired behavioral performance. The evolved agents can then be subjected to a detailed analysis to uncover their principles of operation. Evolutionary approaches are particularly useful for exploring spaces of possible solutions to a given behavioral task because these approaches minimize a priori assumptions about how a given behavior ought to be instantiated. They can also be useful for exploring different ways to complete a computational neuroethology model when only partial neural circuitry is available for a biological system of interest.
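A toy genetic algorithm illustrates the ingredients (selection, crossover, mutation, elitism). Here the "behavioral task" is collapsed to matching a hypothetical target weight vector, purely for illustration; real applications would evaluate fitness by simulating the brain-body-environment system:

```python
import random

random.seed(1)

# Toy genome: five weights; fitness is higher the closer the genome lies
# to a hypothetical target (a stand-in for behavioral performance).
TARGET = [0.2, -0.5, 0.9, 0.0, 0.4]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(40)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # truncation selection
    offspring = []
    while len(offspring) < 30:
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))   # one-point crossover
        child = p1[:cut] + p2[cut:]
        i = random.randrange(len(child))         # point mutation
        child[i] += random.gauss(0.0, 0.1)
        offspring.append(child)
    population = parents + offspring             # elitism: keep parents

best = max(population, key=fitness)
# After 100 generations the best genome sits close to the target.
```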

NEURON

The NEURON software, developed at Duke University, is a simulation environment for modeling individual neurons and networks of neurons. The NEURON environment is a self-contained environment allowing interface through its GUI or via scripting with hoc or python. The NEURON simulation engine is based on a Hodgkin–Huxley type model using a Borg–Graham formulation. Several examples of models written in NEURON are available from the online database ModelDB.

Embodiment in electronic hardware

Conductance-based silicon neurons

Nervous systems differ from the majority of silicon-based computing devices in that they resemble analog computers (not digital data processors) and massively parallel processors, not sequential processors. To model nervous systems accurately, in real-time, alternative hardware is required.
The most realistic circuits to date make use of analog properties of existing digital electronics (operated under non-standard conditions) to realize Hodgkin–Huxley-type models in silico.

Operator (computer programming)

From Wikipedia, the free encyclopedia