
Saturday, April 6, 2024

Geomagnetic storm

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Geomagnetic_storm
Artist's depiction of solar wind particles interacting with Earth's magnetosphere. Sizes are not to scale.

A geomagnetic storm, also known as a magnetic storm, is a temporary disturbance of the Earth's magnetosphere caused by a solar wind shock wave.

The disturbance that drives the magnetic storm may be a solar coronal mass ejection (CME) or (much less severely) a co-rotating interaction region (CIR), a high-speed stream of solar wind originating from a coronal hole. The frequency of geomagnetic storms increases and decreases with the sunspot cycle. During solar maximum, geomagnetic storms occur more often, with the majority driven by CMEs.

The increase in the solar wind pressure initially compresses the magnetosphere. The solar wind's magnetic field interacts with the Earth's magnetic field and transfers an increased energy into the magnetosphere. Both interactions cause an increase in plasma movement through the magnetosphere (driven by increased electric fields inside the magnetosphere) and an increase in electric current in the magnetosphere and ionosphere. During the main phase of a geomagnetic storm, electric current in the magnetosphere creates a magnetic force that pushes out the boundary between the magnetosphere and the solar wind.

Several space weather phenomena tend to be associated with or are caused by a geomagnetic storm. These include solar energetic particle (SEP) events, geomagnetically induced currents (GIC), ionospheric storms and disturbances that cause radio and radar scintillation, disruption of navigation by magnetic compass, and auroral displays at much lower latitudes than normal.

The largest recorded geomagnetic storm, the Carrington Event in September 1859, took down parts of the recently created US telegraph network, starting fires and electrically shocking telegraph operators. In 1989, a geomagnetic storm energized ground induced currents that disrupted electric power distribution throughout most of Quebec and caused aurorae as far south as Texas. The Carrington Event was mild compared with very rare extreme geomagnetic storms called Miyake events, which cause spikes in radioactive carbon-14 in tree rings.

Definition

A geomagnetic storm is defined by changes in the Dst (disturbance – storm time) index. The Dst index estimates the globally averaged change of the horizontal component of the Earth's magnetic field at the magnetic equator based on measurements from a few magnetometer stations. Dst is computed once per hour and reported in near-real-time. During quiet times, Dst is between +20 and −20 nanoteslas (nT).

A geomagnetic storm has three phases: initial, main and recovery. The initial phase is characterized by Dst (or its one-minute component SYM-H) increasing by 20 to 50 nT in tens of minutes. The initial phase is also referred to as a storm sudden commencement (SSC). However, not all geomagnetic storms have an initial phase, and not all sudden increases in Dst or SYM-H are followed by a geomagnetic storm. The main phase of a geomagnetic storm is defined by Dst decreasing to less than −50 nT. The selection of −50 nT to define a storm is somewhat arbitrary. The minimum value during a storm will be between −50 and approximately −600 nT. The duration of the main phase is typically 2–8 hours. The recovery phase is when Dst returns from its minimum value to its quiet-time value. The recovery phase may last as little as 8 hours or as long as 7 days.

Aurora borealis

The size of a geomagnetic storm is classified as moderate (−50 nT > minimum of Dst > −100 nT), intense (−100 nT > minimum Dst > −250 nT) or super-storm (minimum of Dst < −250 nT).
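This classification maps directly to code. A minimal sketch; note that the thresholds above leave the exact boundary values (−100 nT, −250 nT) unspecified, so assigning them to the stronger class here is an assumption:

```python
def classify_storm(min_dst_nt: float) -> str:
    """Classify a geomagnetic storm by its minimum Dst (in nT), using the
    moderate / intense / super-storm thresholds described above.
    Boundary values are assigned to the stronger class (an assumption)."""
    if min_dst_nt <= -250:
        return "super-storm"
    if min_dst_nt <= -100:
        return "intense"
    if min_dst_nt <= -50:
        return "moderate"
    return "quiet or weak disturbance"

print(classify_storm(-589))  # the March 1989 storm: super-storm
print(classify_storm(-151))  # first Halloween 2003 storm: intense
```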

Measuring intensity

Geomagnetic storm intensity is reported in several different ways, including the Dst index described above and the planetary K-index (Kp).

History of the theory

In 1931, Sydney Chapman and Vincenzo C. A. Ferraro wrote an article, A New Theory of Magnetic Storms, that sought to explain the phenomenon. They argued that whenever the Sun emits a solar flare it also emits a plasma cloud, now known as a coronal mass ejection. They postulated that this plasma travels at a velocity such that it reaches Earth within 1–3 days, though we now know this journey takes 1 to 5 days. They wrote that the cloud then compresses the Earth's magnetic field and thus increases this field at the Earth's surface. Chapman and Ferraro's work drew on that of, among others, Kristian Birkeland, who had used recently-discovered cathode ray tubes to show that the rays were deflected towards the poles of a magnetic sphere. He theorised that a similar phenomenon was responsible for auroras, explaining why they are more frequent in polar regions.

Occurrences

The first scientific observation of the effects of a geomagnetic storm occurred early in the 19th century: from May 1806 until June 1807, Alexander von Humboldt recorded the bearing of a magnetic compass in Berlin. On 21 December 1806, he noticed that his compass had become erratic during a bright auroral event.

On September 1–2, 1859, the largest recorded geomagnetic storm occurred. From August 28 until September 2, 1859, numerous sunspots and solar flares were observed on the Sun, with the largest flare on September 1. This is referred to as the Solar storm of 1859 or the Carrington Event. It can be assumed that a massive coronal mass ejection (CME) was launched from the Sun and reached the Earth within eighteen hours—a trip that normally takes three to four days. The horizontal field was reduced by 1600 nT as recorded by the Colaba Observatory. It is estimated that Dst would have been approximately −1760 nT. Telegraph wires in both the United States and Europe experienced induced voltage increases (emf), in some cases even delivering shocks to telegraph operators and igniting fires. Aurorae were seen as far south as Hawaii, Mexico, Cuba and Italy—phenomena that are usually only visible in polar regions. Ice cores show evidence that events of similar intensity recur at an average rate of approximately once per 500 years.

Since 1859, less severe storms have occurred, notably the aurora of November 17, 1882, and the May 1921 geomagnetic storm, both of which disrupted telegraph service and started fires, and a storm in 1960, when widespread radio disruption was reported.

GOES-7 monitors space weather conditions during the great geomagnetic storm of March 1989; the Moscow neutron monitor recorded the passage of a CME as a drop in cosmic-ray levels known as a Forbush decrease.

In early August 1972, a series of flares and solar storms peaked with a flare estimated at around X20, producing the fastest CME transit ever recorded and a severe geomagnetic and proton storm that disrupted terrestrial electrical and communications networks, as well as satellites (at least one of which was made permanently inoperative), and spontaneously detonated numerous U.S. Navy magnetic-influence sea mines in North Vietnam.

The March 1989 geomagnetic storm caused the collapse of the Hydro-Québec power grid in seconds as equipment protection relays tripped in a cascading sequence. Six million people were left without power for nine hours. The storm caused auroras as far south as Texas and Florida. It was the result of a coronal mass ejection launched from the Sun on March 9, 1989.[18] The minimum Dst was −589 nT.

On July 14, 2000, an X5-class flare erupted (known as the Bastille Day event) and a coronal mass ejection was launched directly at the Earth. A geomagnetic super-storm occurred on July 15–17; the minimum of the Dst index was −301 nT. Despite the storm's strength, no power distribution failures were reported. The Bastille Day event was observed by Voyager 1 and Voyager 2, making it the most distant solar storm yet observed in the Solar System.

Seventeen major flares erupted on the Sun between 19 October and 5 November 2003, including perhaps the most intense flare ever measured on the GOES XRS sensor—a huge X28 flare, resulting in an extreme radio blackout on 4 November. These flares were associated with CME events that caused three geomagnetic storms between 29 October and 2 November, during which the second and third storms began before the previous storm period had fully recovered. The minimum Dst values were −151, −353 and −383 nT. Another storm in this sequence occurred on 4–5 November with a minimum Dst of −69 nT. This last geomagnetic storm was weaker than the preceding ones because the active region on the Sun had rotated beyond the central meridian, so the core of the CME created during the flare event passed to the side of the Earth. The whole sequence became known as the Halloween Solar Storm. The Wide Area Augmentation System (WAAS) operated by the Federal Aviation Administration (FAA) was offline for approximately 30 hours due to the storm. The Japanese ADEOS-2 satellite was severely damaged, and the operation of many other satellites was interrupted by the storm.

Interactions with planetary processes

Magnetosphere in the near-Earth space environment.

The solar wind also carries with it the Sun's magnetic field. This field will have either a North or South orientation. If the solar wind has energetic bursts, contracting and expanding the magnetosphere, or if the solar wind takes a southward polarization, geomagnetic storms can be expected. The southward field causes magnetic reconnection of the dayside magnetopause, rapidly injecting magnetic and particle energy into the Earth's magnetosphere.

During a geomagnetic storm, the ionosphere's F2 layer becomes unstable, fragments, and may even disappear. In the northern and southern pole regions of the Earth, auroras are observable.

Instruments

Magnetometers monitor the auroral zone as well as the equatorial region. Two types of radar, coherent scatter and incoherent scatter, are used to probe the auroral ionosphere. By bouncing signals off ionospheric irregularities, which move with the field lines, one can trace their motion and infer magnetospheric convection.

Spacecraft instruments include:

  • Magnetometers, usually of the flux gate type. Usually these are at the end of booms, to keep them away from magnetic interference by the spacecraft and its electric circuits.
  • Electric sensors at the ends of opposing booms are used to measure potential differences between separated points, to derive electric fields associated with convection. The method works best at high plasma densities in low Earth orbit; far from Earth long booms are needed, to avoid shielding-out of electric forces.
  • Radio sounders from the ground can bounce radio waves of varying frequency off the ionosphere, and by timing their return determine the electron density profile—up to its peak, past which radio waves no longer return. Radio sounders in low Earth orbit aboard the Canadian Alouette 1 (1962) and Alouette 2 (1965), beamed radio waves earthward and observed the electron density profile of the "topside ionosphere". Other radio sounding methods were also tried in the ionosphere (e.g. on IMAGE).
  • Particle detectors include a Geiger counter, as was used for the original observations of the Van Allen radiation belt. Scintillator detectors came later, and still later "channeltron" electron multipliers found particularly wide use. To derive charge and mass composition, as well as energies, a variety of mass spectrograph designs were used. For energies up to about 50 keV (which constitute most of the magnetospheric plasma) time-of-flight spectrometers (e.g. "top-hat" design) are widely used.
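The radio-sounder technique in the list above rests on the plasma-frequency relation: waves at the critical frequency are reflected where the local plasma frequency matches them. A sketch using the common approximation f_p ≈ 8.98·√N_e Hz (N_e in electrons per cubic meter); the 10 MHz example value is illustrative, not from the text:

```python
def peak_electron_density(critical_freq_hz: float) -> float:
    """Peak electron density (m^-3) implied by an ionosonde's critical
    frequency, via the plasma-frequency relation f_p ~ 8.98 * sqrt(N_e)."""
    return (critical_freq_hz / 8.98) ** 2

# A typical daytime foF2 of 10 MHz implies N_e ~ 1.2e12 electrons/m^3
print(f"{peak_electron_density(10e6):.3e}")
```

Above the frequency corresponding to the layer's peak density, waves pass through rather than return, which is why the sounder can only profile up to the peak.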

Computers have made it possible to bring together decades of isolated magnetic observations and extract average patterns of electrical currents and average responses to interplanetary variations. They also run simulations of the global magnetosphere and its responses, by solving the equations of magnetohydrodynamics (MHD) on a numerical grid. Appropriate extensions must be added to cover the inner magnetosphere, where magnetic drifts and ionospheric conduction need to be taken into account. At polar regions, directly linked to the solar wind, large-scale ionospheric anomalies can be successfully modeled, even during geomagnetic super-storms. At smaller scales (comparable to a degree of latitude/longitude) the results are difficult to interpret, and certain assumptions about the high-latitude forcing are needed.

Geomagnetic storm effects

Disruption of electrical systems

It has been suggested that a geomagnetic storm on the scale of the solar storm of 1859 today would cause billions or even trillions of dollars of damage to satellites, power grids and radio communications, and could cause electrical blackouts on a massive scale that might not be repaired for weeks, months, or even years. Such sudden electrical blackouts may threaten food production.

Main electrical grid

When magnetic fields move in the vicinity of a conductor such as a wire, a geomagnetically induced current is produced in the conductor. This happens on a grand scale during geomagnetic storms (the same mechanism also influenced telephone and telegraph lines before fiber optics, see below) on all long transmission lines. Long transmission lines (many kilometers in length) are thus subject to damage by this effect. Notably, this chiefly affects operators in China, North America, and Australia, especially on modern high-voltage, low-resistance lines. The European grid consists mainly of shorter transmission circuits, which are less vulnerable to damage.

The (nearly direct) currents induced in these lines from geomagnetic storms are harmful to electrical transmission equipment, especially transformers—inducing core saturation, constraining their performance (as well as tripping various safety devices), and causing coils and cores to heat up. In extreme cases, this heat can disable or destroy them, even inducing a chain reaction that can overload transformers. Most generators are connected to the grid via transformers, isolating them from the induced currents on the grid, making them much less susceptible to damage due to geomagnetically induced current. However, a transformer that is subjected to this will act as an unbalanced load to the generator, causing negative sequence current in the stator and consequently rotor heating.
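The scale of these quasi-DC currents can be sketched with a back-of-envelope calculation: a uniform storm-time geoelectric field along the line induces a voltage proportional to line length, driving a current limited by the line's series resistance. All numbers below are hypothetical placeholders, not data from the text:

```python
def gic_estimate(e_field_v_per_km: float, line_km: float, resistance_ohm: float):
    """Rough geomagnetically induced current for a long transmission line,
    assuming a uniform geoelectric field aligned with the line."""
    v = e_field_v_per_km * line_km  # induced quasi-DC voltage (V)
    i = v / resistance_ohm          # quasi-DC current through the line (A)
    return v, i

# Hypothetical storm field of 2 V/km over a 200 km line with 5 ohm resistance
v, i = gic_estimate(2.0, 200.0, 5.0)
print(v, i)  # 400.0 V, 80.0 A
```

Tens of amperes of quasi-DC through a transformer neutral is enough to drive core saturation, which is why long, low-resistance lines are the vulnerable ones.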

According to a study by Metatech corporation, a storm with a strength comparable to that of 1921 would destroy more than 300 transformers and leave over 130 million people without power in the United States, costing several trillion dollars. The extent of the disruption is debated, with some congressional testimony indicating a potentially indefinite outage until transformers can be replaced or repaired. These predictions are contradicted by a North American Electric Reliability Corporation report that concludes that a geomagnetic storm would cause temporary grid instability but no widespread destruction of high-voltage transformers. The report points out that the widely quoted Quebec grid collapse was not caused by overheating transformers but by the near-simultaneous tripping of seven relays.

Besides transformers being vulnerable to the effects of a geomagnetic storm, electricity companies can also be affected indirectly. For instance, internet service providers may go down during geomagnetic storms (or remain non-operational long after). Electricity companies may have equipment that requires a working internet connection to function, so while the internet service provider is down, electricity distribution may be interrupted as well.

By receiving geomagnetic storm alerts and warnings (e.g. from the Space Weather Prediction Center, via space weather satellites such as SOHO or ACE), power companies can minimize damage to power transmission equipment by momentarily disconnecting transformers or by inducing temporary blackouts. Preventive measures also exist, including blocking the inflow of GICs into the grid through the neutral-to-ground connection.

Communications

High frequency (3–30 MHz) communication systems use the ionosphere to reflect radio signals over long distances. Ionospheric storms can affect radio communication at all latitudes. Some frequencies are absorbed and others are reflected, leading to rapidly fluctuating signals and unexpected propagation paths. TV and commercial radio stations are little affected by solar activity, but ground-to-air, ship-to-shore, shortwave broadcast and amateur radio (mostly the bands below 30 MHz) are frequently disrupted. Radio operators using HF bands rely upon solar and geomagnetic alerts to keep their communication circuits up and running.
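The dependence of HF propagation on the ionosphere follows the secant law: the vertical critical frequency foF2 supports proportionally higher frequencies at oblique incidence, so a storm-depressed foF2 lowers the maximum usable frequency for every circuit. A minimal sketch with illustrative numbers:

```python
import math

def max_usable_freq(fof2_mhz: float, incidence_deg: float) -> float:
    """Secant law: maximum usable frequency for oblique ionospheric
    reflection, given the vertical critical frequency foF2."""
    return fof2_mhz / math.cos(math.radians(incidence_deg))

# With foF2 = 8 MHz and a 70-degree angle of incidence, MUF is about 23.4 MHz
print(f"{max_usable_freq(8.0, 70.0):.1f}")
```

If an ionospheric storm halves foF2, the MUF halves with it, which is why operators consult geomagnetic alerts before choosing a band.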

Military detection or early warning systems operating in the high frequency range are also affected by solar activity. The over-the-horizon radar bounces signals off the ionosphere to monitor the launch of aircraft and missiles from long distances. During geomagnetic storms, this system can be severely hampered by radio clutter. Also some submarine detection systems use the magnetic signatures of submarines as one input to their locating schemes. Geomagnetic storms can mask and distort these signals.

The Federal Aviation Administration routinely receives alerts of solar radio bursts so that it can recognize communication problems and avoid unnecessary maintenance. When an aircraft and a ground station are aligned with the Sun, high levels of noise can occur on air-control radio frequencies. This can also happen on UHF and SHF satellite communications when an Earth station, a satellite and the Sun are in alignment. To prevent unnecessary maintenance on satellite communications systems aboard aircraft, AirSatOne provides a live feed of geophysical events from NOAA's Space Weather Prediction Center, which allows users to view observed and predicted space storms. Geophysical alerts help flight crews and maintenance personnel determine whether recent or upcoming activity will affect satellite communications, GPS navigation and HF communications.

Telegraph lines in the past were affected by geomagnetic storms. Telegraphs used a single long wire for the data line, stretching for many miles, using the ground as the return wire and fed with DC power from a battery; this made them (together with the power lines mentioned above) susceptible to the fluctuations caused by the ring current. The voltage induced by a geomagnetic storm could diminish the signal when it opposed the battery's polarity, or produce overly strong and spurious signals when it added to it; some operators learned to disconnect the battery and rely on the induced current as their power source. In extreme cases the induced current was so high that coils at the receiving side burst into flames, or operators received electric shocks. Geomagnetic storms also affect long-haul telephone lines, including undersea cables, unless they are fiber optic.

Damage to communications satellites can disrupt non-terrestrial telephone, television, radio and Internet links. The National Academy of Sciences reported in 2008 on possible scenarios of widespread disruption in the 2012–2013 solar peak. A solar superstorm could cause large-scale global months-long Internet outages. A study describes potential mitigation measures and exceptions – such as user-powered mesh networks, related peer-to-peer applications and new protocols – and analyzes the robustness of the current Internet infrastructure.

Navigation systems

Global navigation satellite systems (GNSS), and other navigation systems such as LORAN and the now-defunct OMEGA are adversely affected when solar activity disrupts their signal propagation. The OMEGA system consisted of eight transmitters located throughout the world. Airplanes and ships used the very low frequency signals from these transmitters to determine their positions. During solar events and geomagnetic storms, the system gave navigators information that was inaccurate by as much as several miles. If navigators had been alerted that a proton event or geomagnetic storm was in progress, they could have switched to a backup system.

GNSS signals are affected when solar activity causes sudden variations in the density of the ionosphere, causing the satellite signals to scintillate (like a twinkling star). The scintillation of satellite signals during ionospheric disturbances is studied at HAARP during ionospheric modification experiments. It has also been studied at the Jicamarca Radio Observatory.

One technology used to allow GNSS receivers to continue to operate in the presence of some confusing signals is Receiver Autonomous Integrity Monitoring (RAIM), used by GPS. However, RAIM is predicated on the assumption that a majority of the GPS constellation is operating properly, and so it is much less useful when the entire constellation is perturbed by global influences such as geomagnetic storms. Even if RAIM detects a loss of integrity in these cases, it may not be able to provide a useful, reliable signal.

Satellite hardware damage

Geomagnetic storms and increased solar ultraviolet emission heat Earth's upper atmosphere, causing it to expand. The heated air rises, and the density at the orbit of satellites up to about 1,000 km (600 mi) increases significantly. This results in increased drag, causing satellites to slow and change orbit slightly. Low Earth orbit satellites that are not repeatedly boosted to higher orbits slowly fall and eventually burn up. Skylab's 1979 destruction is an example of a spacecraft reentering Earth's atmosphere prematurely as a result of higher-than-expected solar activity. During the great geomagnetic storm of March 1989, four of the U.S. Navy's navigational satellites had to be taken out of service for up to a week, the U.S. Space Command had to post new orbital elements for over 1000 objects affected, and the Solar Maximum Mission satellite fell out of orbit in December the same year.
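The drag effect described above can be sketched with the standard drag equation, a = ½·ρ·v²·C_d·A/m. The density, drag coefficient, area and mass below are illustrative placeholders for a small LEO satellite, not values from the text:

```python
def drag_acceleration(rho, v=7670.0, cd=2.2, area=1.0, mass=100.0):
    """Aerodynamic deceleration a = 0.5 * rho * v^2 * Cd * A / m for an
    illustrative small satellite in low Earth orbit (SI units)."""
    return 0.5 * rho * v**2 * cd * area / mass

quiet = drag_acceleration(1e-12)  # rough quiet-time density near 400 km (kg/m^3)
storm = drag_acceleration(5e-12)  # storm-heated atmosphere, ~5x denser
print(storm / quiet)              # drag scales linearly with density: 5.0
```

Because drag is linear in density, a storm that raises upper-atmosphere density several-fold raises the orbital decay rate by the same factor, which is how events like March 1989 shifted the orbits of over a thousand tracked objects.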

The vulnerability of the satellites depends on their position as well. The South Atlantic Anomaly is a perilous place for a satellite to pass through, due to the unusually weak geomagnetic field at low Earth orbit.

Pipelines

Rapidly fluctuating geomagnetic fields can produce geomagnetically induced currents in pipelines. This can cause multiple problems for pipeline engineers. Pipeline flow meters can transmit erroneous flow information and the corrosion rate of the pipeline can be dramatically increased.

Radiation hazards to humans

Earth's atmosphere and magnetosphere allow adequate protection at ground level, but astronauts are subject to potentially lethal radiation poisoning. The penetration of high-energy particles into living cells can cause chromosome damage, cancer and other health problems. Large doses can be immediately fatal. Solar protons with energies greater than 30 MeV are particularly hazardous.

Solar proton events can also produce elevated radiation aboard aircraft flying at high altitudes. Although these risks are small, flight crews may be exposed repeatedly, and monitoring of solar proton events by satellite instrumentation allows exposure to be monitored and evaluated, and eventually flight paths and altitudes to be adjusted to lower the absorbed dose.

Ground level enhancements, also known as ground level events or GLEs, occur when a solar particle event contains particles with sufficient energy to have effects at ground level, mainly detected as an increase in the number of neutrons measured at ground level. These events have been shown to have an impact on radiation dosage, but they do not significantly increase the risk of cancer.

Effect on animals

There is a large but controversial body of scientific literature on connections between geomagnetic storms and human health. This began with Russian papers, and the subject was subsequently studied by Western scientists. Theories for the cause include the involvement of cryptochrome, melatonin, the pineal gland, and the circadian rhythm.

Some scientists suggest that solar storms induce whales to beach themselves. Some have speculated that migrating animals which use magnetoreception to navigate, such as birds and honey bees, might also be affected.

Habitability of yellow dwarf systems

Artistic interpretation of Kepler-452b, a potentially habitable exoplanet belonging to a yellow dwarf.

Habitability of yellow dwarf systems defines the suitability for life of exoplanets belonging to yellow dwarf stars. These systems are the object of study among the scientific community because they are considered the most suitable for harboring living organisms, together with those belonging to K-type stars.

Yellow dwarfs comprise the G-type stars of the main sequence, with masses between 0.9 and 1.1 M☉ and surface temperatures between 5000 and 6000 K, like the Sun. They are the third most common in the Milky Way Galaxy and the only ones in which the habitable zone coincides completely with the ultraviolet habitable zone.

Since the habitable zone is farther away in more massive and luminous stars, the separation between the main star and the inner edge of this region is greater in yellow dwarfs than in red and orange dwarfs. Therefore, planets located in this zone of G-type stars are safe from the intense stellar emissions that occur after their formation and are not as affected by the gravitational influence of their star as those belonging to smaller stellar bodies. Thus, all planets in the habitable zone of such stars exceed the tidal locking limit and their rotation is therefore not synchronized with their orbit.

The Earth, orbiting a yellow dwarf, represents the only known example of planetary habitability. For this reason, a main goal of exoplanetology is to find an Earth analog planet that matches Earth's main characteristics, such as size, average temperature and location around a star similar to the Sun. However, technological limitations make these objects difficult to find because their transits are infrequent, a consequence of the distance (semi-major axis) separating them from their stars.

Characteristics

Yellow dwarf stars correspond to the G-class stars of the main sequence, with a mass between 0.9 and 1.1 M☉ and surface temperatures between 5000 and 6000 K. Since the Sun itself is a yellow dwarf, of type G2V, these stars are also known as solar analogs. They rank third among the most common main sequence stars, after red and orange dwarfs, making up about 10% of the stars in the Milky Way. They remain on the main sequence for approximately 10 billion years. After the Sun, the closest G-type star to the Earth is Alpha Centauri A, 4.4 light-years away and belonging to a multiple star system.
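The ~10-billion-year figure can be generalized across the 0.9–1.1 M☉ range with a common approximate scaling, t ≈ 10 Gyr · (M/M☉)^(−2.5). The exponent is a textbook mass–luminosity approximation, an assumption of this sketch rather than a figure from the text:

```python
def ms_lifetime_gyr(mass_solar: float) -> float:
    """Rough main-sequence lifetime, t ~ 10 Gyr * (M/M_sun)^-2.5,
    a common approximation (not a value from the text)."""
    return 10.0 * mass_solar ** -2.5

# The 0.9-1.1 M_sun yellow-dwarf range spans roughly 13 down to 8 Gyr
print(f"{ms_lifetime_gyr(0.9):.1f}", f"{ms_lifetime_gyr(1.1):.1f}")
```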

All stars go through a phase of intense activity after their formation due to their rotation, which is much faster at the beginning of their lives. The duration of this period varies according to the mass of the object: the least massive stars can remain in this state for up to 3 billion years, compared to 500 million for G-type stars. Studies by the team of Edward Guinan, an astrophysicist at Villanova University, reveal that the Sun rotated ten times faster in its early days. Since the rotation speed of a star affects its magnetic field, the Sun's X-ray and UV emissions were hundreds of times more intense than they are today.

The extension of this phase in red dwarfs, as well as the probable tidal locking of their potentially habitable planets with respect to them, could wipe out the magnetic field of these planets, resulting in the loss of almost all their atmosphere and water to space by interaction with the stellar wind. In contrast, the semi-major axis of planetary objects belonging to the habitable zone of G-type stars is wide enough to allow planetary rotation. In addition, the duration of the period of intense stellar activity is too short to eliminate a significant part of the atmosphere on planets with masses similar to or greater than that of the Earth, which have a gravity and magnetosphere capable of counteracting the effects of stellar winds.

Habitable area

Habitable zone of the stars Kepler-186 (red dwarf), Kepler-452 and the Sun (both yellow dwarfs)

The habitable zone around yellow dwarfs varies according to their size and luminosity, although the inner boundary is usually at 0.84 AU and the outer one at 1.67 AU in a G2V-class dwarf like the Sun. In a G5V-class dwarf (smaller), of 0.95 R☉, the habitable zone would correspond to the region between 0.8 and 1.58 AU from the star, while in a G0V type (larger) it would lie between 1 and 2 AU from the stellar body. In orbits smaller than the inner boundary of the habitable zone, a process of water evaporation, hydrogen separation by photolysis and loss of hydrogen to space by hydrodynamic escape would be triggered. Beyond the outer limit of the habitable zone, temperatures would be low enough to allow CO2 condensation, which would increase the albedo and, by feedback, reduce the greenhouse effect until a permanent global glaciation occurred.
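The dependence of these boundaries on luminosity follows from the inverse-square law of received flux: scaling the solar boundaries by √(L/L☉) reproduces the general trend, though published habitable-zone models apply further spectral corrections. A sketch using the 0.84–1.67 AU solar values from the text:

```python
import math

def habitable_zone(luminosity_solar: float):
    """Scale the solar habitable-zone boundaries (0.84-1.67 AU) by the
    square root of stellar luminosity, since flux falls off as 1/d^2."""
    scale = math.sqrt(luminosity_solar)
    return 0.84 * scale, 1.67 * scale

inner, outer = habitable_zone(1.0)  # a Sun-like G2V star
print(f"{inner:.2f}-{outer:.2f} AU")
```

For a dimmer G8V star with, say, 0.7 L☉, the same scaling pulls the zone inward to roughly 0.70–1.40 AU, still well outside the tidal-locking limit.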

The size of the habitable zone is directly proportional to the mass and luminosity of its star, so the larger the star, the larger the habitable zone and the farther from its surface. Red dwarfs, the smallest of the main sequence, have a very small habitable zone close to them, which subjects any potentially habitable planets in the system to the effects of their star, including probable tidal locking. Even in a small yellow dwarf like Tau Ceti, of type G8.5V, the locking limit is at 0.4237 AU versus the 0.522 AU that marks the inner boundary of the habitable zone, so any planetary object orbiting a G-class star in this region will far exceed the locking limit, and will have day-night cycles like Earth.

In yellow dwarfs, this region coincides entirely with the ultraviolet habitable zone. This area is bounded by an inner limit beyond which exposure to ultraviolet radiation would be too high for DNA and by an outer limit that provides the minimum levels for living things to carry out their biogenic processes. In the solar system, this region is located between 0.71 and 1.9 AU from the Sun, compared to the 0.84–1.67 AU that mark the extremes of the habitable zone.

Life potential

Given the length of the main sequence in G-type stars, the levels of ultraviolet radiation in their habitable zone, the semi-major axis of the inner boundary of this region and the distance to their tidal locking limit, among other factors, yellow dwarfs are considered to be the most hospitable to life next to K-type stars.

One goal in exoplanetary research is to find an object that has the main characteristics of our planet, such as radius, mass, temperature, atmospheric composition and belonging to a star similar to the Sun. In theory, these Earth analogs should have comparable habitability conditions that would allow the proliferation of extraterrestrial life.

Based on the serious problems for planetary habitability presented by red dwarf systems and stellar bodies of type F or higher, the only stars that might offer a bearable scenario for life would be those of type K and G. Solar analogs used to be considered as the most likely candidates to host a solar-like planetary system, and as the best positioned to support carbon-based life forms and liquid water oceans. Subsequent studies, such as "Superhabitable Worlds" by René Heller and John Armstrong, establish that orange dwarfs may be more suitable for life than G-type dwarfs, and host hypothetical superhabitable planets.

However, yellow dwarfs still represent the only stellar type for which there is evidence of their suitability for life. Moreover, while in other types of stars the habitable zone does not coincide entirely with the ultraviolet habitable zone, in G-class stars the habitable zone lies entirely within the limits of the latter. Finally, yellow dwarfs have a much shorter initial phase of intense stellar activity than K-type stars, which allows planets belonging to solar analogs to preserve their primordial atmospheres more easily and to maintain them for much of the main sequence.

Discoveries

Most known exoplanets have been detected by the Kepler space telescope, which uses the transit method to find planets in other systems. This procedure monitors the brightness of stars for dips that indicate the passage of a planetary object in front of them from the observatory's perspective. It has been the most successful method in exoplanetary research, together with the radial velocity method, which analyzes the wobbles induced in stars by the gravitational pull of the planets orbiting them. With the limitations of current telescopes, these procedures struggle to find objects with orbits as large as Earth's or larger, which biases detections toward planets with short semi-major axes. As a consequence, most detected exoplanets are either excessively hot or belong to low-mass stars, whose habitable zone lies close in, so that any object orbiting in this region has a year significantly shorter than the Earth's.

Planetary bodies belonging to the habitable zone of yellow dwarfs, such as Kepler-22b, Kepler-452b or Earth, take hundreds of days to complete an orbit around their star. The higher luminosity of these stars, the scarcity of transits and the semi-major axis of their planets located in the habitable zone reduce the probabilities of detecting this class of objects and considerably increase the number of false positives, as in the cases of KOI-5123.01 and KOI-5927.01. The ground-based and orbital observatories projected for the next ten years may increase the discoveries of Earth analogs in yellow dwarf systems.

Kepler-452b

Kepler-452b lies 1,400 light-years from Earth, in the constellation Cygnus. Its radius of about 1.6 R⊕ places it right on the boundary separating telluric planets from mini-Neptunes established by the team of Courtney Dressing, a researcher at the Harvard-Smithsonian Center for Astrophysics (CfA). If the planet's density is similar to Earth's, its mass would be about 5 M⊕ and its surface gravity twice as great. Its host star, Kepler-452, is a G2V-type yellow dwarf like the Sun, with an estimated age of 6 billion years (6 Ga) versus the solar system's 4.5 Ga.

The mass of its star is slightly greater than the Sun's, at 1.04 M☉, so although the planet completes an orbit every 385 days versus 365 terrestrial days, it is warmer than the Earth. If it has a similar albedo and atmospheric composition, its average surface temperature would be around 29 °C.
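The "warmer despite a longer year" point can be sketched with Kepler's third law. The text supplies the stellar mass and orbital period; the luminosity used below (1.2 L☉) is an illustrative assumption, not a figure from the text.

```python
# Rough sketch of why Kepler-452b is warmer than Earth despite its longer
# year. Stellar mass and period are from the text; the luminosity is an
# assumed illustrative value.
M_star = 1.04          # stellar mass, solar masses (from the text)
P = 385 / 365.25       # orbital period, years (from the text)

# Kepler's third law in solar units: a^3 = M * P^2  ->  a in AU
a = (M_star * P**2) ** (1 / 3)

L_star = 1.2           # ASSUMED luminosity, solar units (not from the text)
S = L_star / a**2      # stellar flux relative to Earth's insolation

print(round(a, 2))     # ~1.05 AU: slightly wider orbit than Earth's
print(round(S, 2))     # >1: the planet still receives more flux than Earth
```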

According to Jon Jenkins of NASA's Ames Research Center, it is not known whether Kepler-452b is a terrestrial planet, an ocean world or a mini-Neptune. If it is an Earth-like telluric object, it is likely to have a higher concentration of clouds and intense volcanic activity, and to be on the verge of a runaway greenhouse effect similar to that of Venus due to the steady increase in its star's luminosity, after having remained within the habitable zone throughout the main sequence. Doug Caldwell, a SETI Institute scientist and member of the Kepler mission, estimates that Kepler-452b may be undergoing the same process that the Earth will undergo in a billion years.

Tau Ceti e

Tau Ceti e orbits a G8.5V-type star in the constellation Cetus, 12 light-years from Earth. It has a radius of 1.59 R⊕ and a mass of 4.29 M⊕, so like Kepler-452b it lies at the boundary between terrestrial and gaseous planets. With an orbital period of only 168 days, its temperature, assuming an Earth-like atmospheric composition and albedo, would be about 50 °C.

The planet is located just at the inner edge of the habitable zone and receives about 60% more light than Earth. Its size may also imply a higher concentration of gases in its atmosphere, making it a super-Venus type object. Otherwise, it could be the first thermoplanet discovered.

Kepler-22b

Kepler-22b is 600 light-years away, in the constellation Cygnus. It completes one orbit around its G5V-type star every 290 days. Its radius is 2.35 R⊕ and its estimated mass, for an Earth-like density, would be 20.36 M⊕. If the planet's atmosphere and albedo were similar to Earth's, its surface temperature would be around 22 °C.

It was the first exoplanet found by the Kepler telescope in the habitable zone of its star. Because of its size, considering the limit established by Courtney Dressing's team, it is very likely a mini-Neptune.

Effects of ionizing radiation in spaceflight

From Wikipedia, the free encyclopedia
The Phantom Torso, as seen here in the Destiny laboratory on the International Space Station (ISS), is designed to measure the effects of radiation on organs inside the body by using a torso that is similar to those used to train radiologists on Earth. The torso is equivalent in height and weight to an average adult male. It contains radiation detectors that will measure, in real-time, how much radiation the brain, thyroid, stomach, colon, and heart and lung area receive on a daily basis. The data will be used to determine how the body reacts to and shields its internal organs from radiation, which will be important for longer duration space flights.

Astronauts are exposed to approximately 72 millisieverts (mSv) while on six-month-duration missions to the International Space Station (ISS). Longer 3-year missions to Mars, however, have the potential to expose astronauts to radiation in excess of 1,000 mSv. Without the protection provided by Earth's magnetic field, the rate of exposure is dramatically increased. The risk of cancer caused by ionizing radiation is well documented at radiation doses beginning at 100 mSv and above.
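A back-of-the-envelope comparison of these dose figures, using only the numbers quoted above:

```python
# Dose arithmetic from the figures in the text: ~72 mSv per 6-month ISS
# mission vs >1,000 mSv for a 3-year Mars mission, against the ~100 mSv
# level above which cancer risk is well documented.
iss_dose_6mo = 72            # mSv per 6-month ISS mission
mars_dose = 1000             # mSv, lower bound for a 3-year Mars mission
risk_threshold = 100         # mSv, documented cancer-risk threshold

iss_rate = iss_dose_6mo / 6  # mSv per month in LEO
mars_rate = mars_dose / 36   # mSv per month, lower bound, outside the
                             # protection of Earth's magnetic field

print(iss_rate)                    # 12.0 mSv/month
print(round(mars_rate, 1))         # ~27.8 mSv/month, over twice the LEO rate
print(mars_dose / risk_threshold)  # 10.0: tenfold the documented threshold
```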

Related radiological effect studies have shown that survivors of the atomic bomb explosions in Hiroshima and Nagasaki, nuclear reactor workers and patients who have undergone therapeutic radiation treatments have received low-linear energy transfer (LET) radiation (x-rays and gamma rays) doses in the same 50-2,000 mSv range.

Composition of space radiation

While in space, astronauts are exposed to radiation which is mostly composed of high-energy protons, helium nuclei (alpha particles), and high-atomic-number ions (HZE ions), as well as secondary radiation from nuclear reactions from spacecraft parts or tissue.

The ionization patterns these particles produce in molecules, cells and tissues, and the resulting biological effects, are distinct from those of typical terrestrial radiation (x-rays and gamma rays, which are low-LET radiation). Galactic cosmic rays (GCRs), which originate outside the solar system, consist mostly of highly energetic protons with a small component of HZE ions.

Prominent HZE ions include the nuclei of elements such as carbon, oxygen, silicon and iron. GCR energy spectra peak at median energies up to 1,000 MeV/amu, and HZE nuclei reach energies up to 10,000 MeV/amu; both are important contributors to the dose equivalent.

Uncertainties in cancer projections

One of the main roadblocks to interplanetary travel is the risk of cancer caused by radiation exposure. The largest contributors to this roadblock are: (1) The large uncertainties associated with cancer risk estimates, (2) The unavailability of simple and effective countermeasures and (3) The inability to determine the effectiveness of countermeasures. Operational parameters that need to be optimized to help mitigate these risks include:

  • length of space missions
  • crew age
  • crew sex
  • shielding
  • biological countermeasures

Major uncertainties

  • effects on biological damage related to differences between space radiation and x-rays
  • dependence of risk on dose-rates in space related to the biology of DNA repair, cell regulation and tissue responses
  • predicting solar particle events (SPEs)
  • extrapolation from experimental data to humans and between human populations
  • individual radiation sensitivity factors (genetic, epigenetic, dietary or "healthy worker" effects)

Minor uncertainties

  • data on galactic cosmic ray environments
  • physics of shielding assessments related to transmission properties of radiation through materials and tissue
  • microgravity effects on biological responses to radiation
  • errors in human data (statistical, dosimetry or recording inaccuracies)

Quantitative methods have been developed to propagate the uncertainties that contribute to cancer risk estimates. The contribution of microgravity effects to space radiation risk has not yet been estimated, but it is expected to be small. However, as microgravity has been shown to modulate cancer progression, more research is needed into the combined effects of microgravity and radiation on carcinogenesis. The effects of changes in oxygen levels or of immune dysfunction on cancer risks are largely unknown and are of great concern during space flight.

Types of cancer caused by radiation exposure

Studies are being conducted on populations accidentally exposed to radiation (such as Chernobyl, production sites, and Hiroshima and Nagasaki). These studies show strong evidence for cancer morbidity as well as mortality risks at more than 12 tissue sites. The largest risks for adults who have been studied include several types of leukemia, including myeloid leukemia and acute lymphatic leukemia, as well as tumors of the lung, breast, stomach, colon, bladder and liver. Inter-sex variations are very likely due to the differences in the natural incidence of cancer in males and females. Another variable is the additional risk for cancer of the breast, ovaries and lungs in females. There is also evidence of a declining risk of cancer caused by radiation with increasing age, but the magnitude of this reduction above the age of 30 is uncertain.

It is unknown whether high-LET radiation could cause the same types of tumors as low-LET radiation, but differences should be expected.

The ratio of a dose of high-LET radiation to a dose of x-rays or gamma rays that produces the same biological effect is called the relative biological effectiveness (RBE) factor. The types of tumors in humans exposed to space radiation are expected to differ from those in people exposed to low-LET radiation, as suggested by studies of mice irradiated with neutrons, in which RBEs vary with tissue type and strain.

Measured rate of cancer among astronauts

Measurement of cancer rates among astronauts is restricted by limited statistics. A study published in Scientific Reports examined 301 U.S. astronauts and 117 Soviet and Russian cosmonauts and, as reported by LiveScience, found no measurable increase in cancer mortality compared with the general population.

An earlier 1998 study came to similar conclusions, with no statistically significant increase in cancer among astronauts compared to the reference group.

Approaches for setting acceptable risk levels

The various approaches to setting acceptable levels of radiation risk are summarized below:

Comparison of radiation doses - includes the amount detected on the trip from Earth to Mars by the RAD on the MSL (2011 - 2013).
  • Unlimited Radiation Risk - NASA management, the families of loved ones of astronauts, and taxpayers would find this approach unacceptable.
  • Comparison to Occupational Fatalities in Less-safe Industries - The life-loss from attributable radiation cancer death is less than that from most other occupational deaths. At this time, this comparison would also be very restrictive on ISS operations because of continued improvements in ground-based occupational safety over the last 20 years.
  • Comparison to Cancer Rates in General Population - The number of years of life-loss from radiation-induced cancer deaths can be significantly larger than from cancer deaths in the general population, which often occur late in life (> age 70 years) and with significantly less numbers of years of life-loss.
  • Doubling Dose for 20 Years Following Exposure - Provides a roughly equivalent comparison based on life-loss from other occupational risks or background cancer fatalities during a worker's career, however, this approach negates the role of mortality effects later in life.
  • Use of Ground-based Worker Limits - Provides a reference point equivalent to the standard that is set on Earth, and recognizes that astronauts face other risks. However, ground workers remain well below dose limits, and are largely exposed to low-LET radiation where the uncertainties of biological effects are much smaller than for space radiation.

NCRP Report No. 153 provides a more recent review of cancer and other radiation risks. This report also identifies and describes the information needed to make radiation protection recommendations beyond LEO, contains a comprehensive summary of the current body of evidence for radiation-induced health risks and also makes recommendations on areas requiring future experimentation.

Current permissible exposure limits

Career cancer risk limits

Astronauts' radiation exposure limit is not to exceed 3% of the risk of exposure-induced death (REID) from fatal cancer over their career. It is NASA's policy to ensure a 95% confidence level (CL) that this limit is not exceeded. These limits are applicable to all missions in low Earth orbit (LEO) as well as lunar missions that are less than 180 days in duration. In the United States, the legal occupational exposure limit for adult workers is set at an effective dose of 50 mSv annually.

Cancer risk to dose relationship

The relationship between radiation exposure and risk is both age- and sex-specific due to latency effects and differences in tissue types, sensitivities and life spans between sexes. These relationships are estimated using the methods recommended by the NCRP and more recent radiation epidemiology information.

The principle of As Low As Reasonably Achievable

The as low as reasonably achievable (ALARA) principle is a legal requirement intended to ensure astronaut safety. An important function of ALARA is to ensure that astronauts do not approach radiation limits and that such limits are not considered as "tolerance values." ALARA is especially important for space missions in view of the large uncertainties in cancer and other risk projection models. Mission programs and terrestrial occupational procedures resulting in radiation exposures to astronauts are required to find cost-effective approaches to implement ALARA.

Evaluating career limits

Organ (T) Tissue weighting factor (wT)
Gonads 0.20
Bone Marrow (red) 0.12
Colon 0.12
Lung 0.12
Stomach 0.12
Bladder 0.05
Breast 0.05
Liver 0.05
Esophagus 0.05
Thyroid 0.05
Skin 0.01
Bone Surface 0.01
Remainder* 0.05
*Adrenals, brain, upper intestine, small intestine,
kidney, muscle, pancreas, spleen, thymus and uterus.
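The weighting factors in this table sum to 1, which makes the effective dose a weighted average of organ dose equivalents. A short sanity check, with made-up organ doses standing in for measured dosimetry:

```python
# The tissue weighting factors w_T from the table above. Organ doses below
# are invented illustrative numbers, not measured values.
w_T = {
    "gonads": 0.20, "bone_marrow": 0.12, "colon": 0.12, "lung": 0.12,
    "stomach": 0.12, "bladder": 0.05, "breast": 0.05, "liver": 0.05,
    "esophagus": 0.05, "thyroid": 0.05, "skin": 0.01, "bone_surface": 0.01,
    "remainder": 0.05,
}
assert abs(sum(w_T.values()) - 1.0) < 1e-9  # the factors sum to 1

H = {organ: 10.0 for organ in w_T}   # hypothetical uniform 10 mSv organ doses
E = sum(w_T[t] * H[t] for t in w_T)  # effective dose, mSv

print(round(E, 2))  # 10.0 for a uniform field, since the weights sum to 1
```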

The risk of cancer is calculated by using radiation dosimetry and physics methods.

For the purpose of determining radiation exposure limits at NASA, the probability of fatal cancer is calculated as shown below:

  1. The body is divided into a set of sensitive tissues, and each tissue, T, is assigned a weight, wT, according to its estimated contribution to cancer risk.
  2. The absorbed dose, DT, that is delivered to each tissue is determined from measured dosimetry. For the purpose of estimating radiation risk to an organ, the quantity characterizing the ionization density is the LET (keV/μm).
  3. For a given interval of LET, between L and L + ΔL, the dose-equivalent risk (in units of sievert) to a tissue, T, HT(L), is calculated as

    where the quality factor, Q(L), is obtained according to the International Commission on Radiological Protection (ICRP).
  4. The average risk to a tissue, T, due to all types of radiation contributing to the dose is given by

    or, since the absorbed dose can be expressed in terms of the particle fluence, where FT(L) is the fluence of particles with LET = L traversing the organ,
  5. The effective dose is computed as a summation over radiation types and tissues using the tissue weighting factors, wT
  6. For a mission of duration t, the effective dose will be a function of time, E(t), and the effective dose for mission i will be
  7. The effective dose is used to scale the mortality rate for radiation-induced death from the Japanese survivor data, applying the average of the multiplicative and additive transfer models for solid cancers and the additive transfer model for leukemia by applying life-table methodologies that are based on U.S. population data for background cancer and all causes of death mortality rates. A dose-dose rate effectiveness factor (DDREF) of 2 is assumed.
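Steps 3-5 above can be sketched numerically. The piecewise quality factor below follows the ICRP 60 definition of Q(L); the LET spectrum is invented for illustration and is not flight dosimetry data.

```python
import math

# Sketch of steps 3-5: the ICRP 60 quality factor Q(L), and a tissue dose
# equivalent H_T built by summing Q(L) * D_T(L) over LET bins. The LET
# spectrum below is invented for illustration.
def Q(L):
    """ICRP 60 quality factor as a function of LET (keV/um)."""
    if L < 10:
        return 1.0
    if L <= 100:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

# hypothetical absorbed-dose contributions D_T(L) per LET bin, in mGy
spectrum = {1.0: 20.0, 50.0: 1.0, 200.0: 0.2}  # {LET: absorbed dose}

H_T = sum(Q(L) * D for L, D in spectrum.items())  # dose equivalent, mSv
print(round(H_T, 1))  # 38.0: high-LET bins contribute far beyond their dose
```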

Evaluating cumulative radiation risks

The cumulative cancer fatality risk (%REID) to an astronaut for occupational radiation exposures, N, is found by applying life-table methodologies that can be approximated at small values of %REID by summing over the tissue-weighted effective dose, Ei, as

where R0 are the age- and sex- specific radiation mortality rates per unit dose.
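A minimal sketch of this small-risk approximation, with an invented constant standing in for NASA's tabulated age- and sex-specific rates:

```python
# Approximating the cumulative %REID at small risks as the sum over
# exposures of R_0 * E_i, per the expression above. R_0 here is an invented
# illustrative constant, not NASA's tabulated values.
R0 = 0.05                  # assumed fatal-cancer risk per sievert
E = [0.072, 0.072, 0.180]  # effective doses for three missions, Sv

reid_percent = 100 * sum(R0 * Ei for Ei in E)
print(round(reid_percent, 2))  # 1.62: cumulative career %REID
```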

For organ dose calculations, NASA uses the model of Billings et al. to represent the self-shielding of the human body in a water-equivalent mass approximation. Consideration of the orientation of the human body relative to vehicle shielding should be made if it is known, especially for SPEs.

Confidence levels for career cancer risks are evaluated using methods specified by the NCRP in Report No. 126. These levels were modified to account for the uncertainty in quality factors and space dosimetry.

The uncertainties that were considered in evaluating the 95% confidence levels are the uncertainties in:

  • Human epidemiology data, including uncertainties in
    • statistics limitations of epidemiology data
    • dosimetry of exposed cohorts
    • bias, including misclassification of cancer deaths, and
    • the transfer of risk across populations.
  • The DDREF factor that is used to scale acute radiation exposure data to low-dose and dose-rate radiation exposures.
  • The radiation quality factor (Q) as a function of LET.
  • Space dosimetry

The so-called "unknown uncertainties" from the NCRP report No. 126 are ignored by NASA.

Models of cancer risks and uncertainties

Life-table methodology

The double-detriment life-table approach recommended by the NCRP is used to measure radiation cancer mortality risks. The age-specific mortality of a population is followed over its entire life span, with competing risks from radiation and from all other causes of death described.

For a homogeneous population receiving an effective dose E at age aE, the probability of dying in the age interval from a to a+1 is described by the background mortality rate for all causes of death, M(a), and the radiation cancer mortality rate, m(E,aE,a), as:

The survival probability to age, a, following an exposure, E at age aE, is:

The excess lifetime risk (ELR), the increased probability that an exposed individual will die from cancer, is defined by the difference in the conditional survival probabilities for the exposed and unexposed groups as:

A minimum latency-time of 10 years is often used for low-LET radiation. Alternative assumptions should be considered for high-LET radiation. The REID (the lifetime risk that an individual in the population will die from cancer caused by radiation exposure) is defined by:

Generally, the value of the REID exceeds the value of the ELR by 10-20%.

The average loss of life-expectancy, LLE, in the population is defined by:

The loss of life-expectancy among exposure-induced-deaths (LLE-REID) is defined by:
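The life-table bookkeeping above can be sketched as a toy calculation. All rates below are invented, and the latency period is applied crudely as a hard age cutoff rather than a modeled delay.

```python
# Toy double-detriment life table following the definitions above: survival
# competes the background mortality rate M(a) against a radiation cancer
# mortality rate m(a); REID is the probability of dying from
# radiation-induced cancer. All rates are invented for illustration.
ages = range(40, 100)
M = {a: 0.001 * 1.09 ** (a - 40) for a in ages}    # background mortality
m = {a: 0.0002 if a >= 50 else 0.0 for a in ages}  # radiation term, crude
                                                   # 10-year latency cutoff

def reid(M, m):
    surv, total = 1.0, 0.0
    for a in ages:
        q = min(M[a] + m[a], 1.0)  # probability of dying in [a, a+1)
        total += surv * m[a]       # deaths attributable to radiation
        surv *= 1.0 - q
    return total

risk = reid(M, m)
print(round(100 * risk, 3))  # %REID for this toy population
```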

Uncertainties in low-LET epidemiology data

The low-LET mortality rate per sievert, mi, is written

where m0 is the baseline mortality rate per sievert and the xα are quantiles (random variables) whose values are sampled from associated probability distribution functions (PDFs), P(Xα).

NCRP, in Report No. 126, defines the following subjective PDFs, P(Xα), for each factor that contributes to the acute low-LET risk projection:

  1. Pdosimetry represents the random and systematic errors in the estimation of the doses received by atomic-bomb blast survivors.
  2. Pstatistical is the distribution of uncertainty in the point estimate of the risk coefficient, r0.
  3. Pbias is any bias resulting from over- or under-reporting of cancer deaths.
  4. Ptransfer is the uncertainty in the transfer of cancer risk following radiation exposure from the Japanese population to the U.S. population.
  5. PDr is the uncertainty in the extrapolation of risks to low doses and dose-rates, embodied in the DDREF.
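A Monte Carlo sketch of this uncertainty propagation, assuming (purely for illustration) that each of the five factors is lognormal with median 1; the report's actual subjective PDFs differ.

```python
import random

# Monte Carlo sketch of the NCRP No. 126 approach above: the low-LET
# mortality rate is m = m0 times one sampled quantile per uncertainty
# source. The lognormal parameters are invented, not the report's PDFs.
random.seed(0)
m0 = 0.05  # baseline mortality rate per sievert (illustrative)

factors = ["dosimetry", "statistical", "bias", "transfer", "DDREF"]

def sample_rate():
    m = m0
    for _ in factors:  # one quantile x_a per uncertainty source
        m *= random.lognormvariate(0.0, 0.2)
    return m

trials = sorted(sample_rate() for _ in range(100_000))
median = trials[len(trials) // 2]
ci95 = (trials[2500], trials[97500])  # subjective 95% confidence interval

print(round(median, 4))  # near m0, since each factor has median 1
print(ci95)
```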

Risk in context of exploration mission operational scenarios

The accuracy of galactic cosmic ray environmental models, transport codes and nuclear interaction cross sections allows NASA to predict the space environments and organ exposures that may be encountered on long-duration space missions. The lack of knowledge of the biological effects of radiation exposure, however, raises major questions about risk prediction.

The cancer risk projection for space missions is found by

where represents the folding of predictions of tissue-weighted LET spectra behind spacecraft shielding with the radiation mortality rate to form a rate for trial J.

Alternatively, particle-specific energy spectra, Fj(E), for each ion, j, can be used

.

The result of either of these equations is inserted into the expression for the REID.

Related probability distribution functions (PDFs) are grouped into a combined probability distribution function, Pcmb(x). These PDFs are related to the risk coefficient of the normal form (dosimetry, bias and statistical uncertainties). After a sufficient number of trials have been completed (approximately 10^5), the REID estimates are binned and the median values and confidence intervals are found.

The chi-squared (χ2) test is used to determine whether two separate PDFs, denoted p1(Ri) and p2(Ri), are significantly different. Each p(Ri) follows a Poisson distribution with variance equal to its mean.

The χ2 test for n-degrees of freedom characterizing the dispersion between the two distributions is

.

The probability, P(n, χ2), that the two distributions are the same is calculated once χ2 is determined.
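A minimal sketch of this χ2 comparison between two binned REID distributions, using the Poisson variance of each bin; the histogram counts are invented.

```python
# Chi-squared comparison of two binned REID distributions p1 and p2, per
# the procedure above. Each bin count is treated as Poisson, so the
# variance of the difference is the sum of the counts. Counts are invented.
p1 = [50, 120, 300, 200, 80]  # histogram of REID trials, distribution 1
p2 = [55, 110, 310, 190, 85]  # histogram of REID trials, distribution 2

chi2 = sum((a - b) ** 2 / (a + b) for a, b in zip(p1, p2) if a + b > 0)
dof = len(p1)  # degrees of freedom, one per bin

print(round(chi2, 3))  # ~1.245, well below dof: distributions compatible
```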

Radiation carcinogenesis mortality rates

Age- and sex-dependent mortality rates per unit dose, multiplied by the radiation quality factor and reduced by the DDREF, are used for projecting lifetime cancer fatality risks. Acute gamma-ray exposures are estimated. The effects of each component of the radiation field are also assumed to be additive.

Rates are approximated using data gathered from Japanese atomic bomb survivors. There are two different models that are considered when transferring risk from Japanese to U.S. populations.

  • Multiplicative transfer model - assumes that radiation risks are proportional to spontaneous or background cancer risks.
  • Additive transfer model - assumes that radiation risk acts independently of other cancer risks.

The NCRP recommends using a mixture model that contains fractional contributions from both methods.

The radiation mortality rate is defined as:

Where:

  • ERR = excess relative risk per sievert
  • EAR = excess additive risk per sievert
  • Mc(a) = the sex- and age-specific cancer mortality rate in the U.S. population
  • F = the tissue-weighted fluence
  • L = the LET
  • v = the fractional division between the assumption of the multiplicative and additive risk transfer models. For solid cancer, it is assumed that v=1/2 and for leukemia, it is assumed that v=0.
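The blend of the two transfer models can be sketched directly from these definitions; all numeric inputs below are illustrative placeholders, not epidemiological values.

```python
# Mixture of the multiplicative and additive transfer models described
# above, with fraction v on the multiplicative term. All inputs are
# invented placeholders.
def mortality_rate(ERR, EAR, Mc, v):
    """Excess mortality rate per sievert at a given attained age.

    ERR: excess relative risk per Sv; EAR: excess additive risk per Sv;
    Mc: background cancer mortality rate; v: multiplicative fraction.
    """
    return v * ERR * Mc + (1.0 - v) * EAR

solid = mortality_rate(ERR=0.5, EAR=0.002, Mc=0.01, v=0.5)      # v = 1/2
leukemia = mortality_rate(ERR=1.0, EAR=0.001, Mc=0.001, v=0.0)  # v = 0

print(solid)     # 0.5*0.5*0.01 + 0.5*0.002 = 0.0035
print(leukemia)  # purely additive: 0.001
```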

Biological and physical countermeasures

Identifying effective countermeasures that reduce the risk of biological damage is still a long-term goal for space researchers. These countermeasures are probably not needed for extended duration lunar missions, but will be needed for other long-duration missions to Mars and beyond. On 31 May 2013, NASA scientists reported that a possible human mission to Mars may involve a great radiation risk based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011-2012.

There are three fundamental ways to reduce exposure to ionizing radiation:

  • increasing the distance from the radiation source
  • reducing the exposure time
  • shielding (i.e.: a physical barrier)

Shielding is a plausible option, but due to current launch-mass restrictions it is prohibitively costly. Also, the current uncertainties in risk projection prevent the actual benefit of shielding from being determined. Strategies such as drugs and dietary supplements to reduce the effects of radiation, as well as the selection of crew members, are being evaluated as viable options for reducing exposure to radiation and the effects of irradiation. Shielding is an effective protective measure for solar particle events. As for shielding from GCRs, high-energy radiation is very penetrating, and the effectiveness of radiation shielding depends on the atomic make-up of the material used.
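The three levers listed above (distance, time, shielding) can be combined in a toy dose model. The attenuation coefficient below is invented, and, as noted above, real GCR shielding behaves far less simply than a single exponential.

```python
import math

# Toy model of the three mitigation levers: dose falls with the inverse
# square of distance from a point source, linearly with exposure time, and
# exponentially with shield thickness. mu is an invented coefficient.
def dose(rate_at_1m, distance_m, hours, mu_per_cm, shield_cm):
    geometric = rate_at_1m / distance_m**2            # inverse-square law
    shielded = geometric * math.exp(-mu_per_cm * shield_cm)
    return shielded * hours                           # accumulate over time

d0 = dose(1.0, 1.0, 10.0, 0.5, 0.0)  # unshielded, close, long baseline
d1 = dose(1.0, 2.0, 5.0, 0.5, 2.0)   # farther away, shorter, shielded

print(round(d1 / d0, 3))  # ~0.046: combined reduction from all three levers
```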

Antioxidants are effectively used to prevent the damage caused by radiation injury and oxygen poisoning (the formation of reactive oxygen species), but since antioxidants work by rescuing cells from a particular form of cell death (apoptosis), they may not protect against damaged cells that can initiate tumor growth.

Evidence sub-pages

The evidence and updates to projection models for cancer risk from low-LET radiation are reviewed periodically by several advisory bodies.

These committees release new reports about every 10 years on cancer risks applicable to low-LET radiation exposures. Overall, the cancer risk estimates in the different reports of these panels agree within a factor of two or less. There is continued controversy, however, over doses below 5 mSv and over low dose-rate radiation, because of debate over the linear no-threshold hypothesis often used in statistical analysis of these data. The BEIR VII report, the most recent of the major reports, is used in the following sub-pages. Evidence for low-LET cancer effects must be augmented by information on protons, neutrons and HZE nuclei that is only available in experimental models. Such data have been reviewed several times in the past by NASA and by the NCRP.

Mathematical universe hypothesis

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Mathematical_universe_hypothesis   ...