
Monday, November 24, 2014

Molten salt reactor

From Wikipedia, the free encyclopedia
Molten salt reactor scheme.

A molten salt reactor (MSR) is a class of nuclear fission reactors in which the primary coolant, or even the fuel itself, is a molten salt mixture. MSRs run at higher temperatures than water-cooled reactors for higher thermodynamic efficiency, while staying at low vapor pressure.

The nuclear fuel may be solid or dissolved in the coolant itself. In many designs the nuclear fuel is dissolved in the molten fluoride salt coolant as uranium tetrafluoride (UF4). The fluid becomes critical in a graphite core which serves as the moderator. Solid fuel designs rely on ceramic fuel dispersed in a graphite matrix, with the molten salt providing low pressure, high temperature cooling. The salts are much more efficient than compressed helium at removing heat from the core, reducing the need for pumping and piping and reducing the size of the core.
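To make the heat-removal comparison concrete, here is a rough back-of-the-envelope sketch comparing the volumetric heat capacity of molten FLiBe with that of pressurized helium; the property values and operating conditions are approximate assumptions for illustration only.

```python
# Rough comparison of volumetric heat capacity: molten FLiBe salt vs. helium
# at gas-reactor-like conditions.  All property values are approximate
# assumptions for illustration, not design data.

# FLiBe near ~700 C (approximate literature-style values)
rho_flibe = 1940.0        # kg/m^3
cp_flibe = 2386.0         # J/(kg K)

# Helium at ~7 MPa and ~900 K, treated as an ideal gas
P, T, M, R = 7.0e6, 900.0, 4.003e-3, 8.314
rho_he = P * M / (R * T)  # ~3.7 kg/m^3
cp_he = 5193.0            # J/(kg K)

print(f"FLiBe:  ~{rho_flibe * cp_flibe / 1e6:.1f} MJ/(m^3 K)")
print(f"Helium: ~{rho_he * cp_he / 1e3:.1f} kJ/(m^3 K)")
print(f"Ratio:  roughly {rho_flibe * cp_flibe / (rho_he * cp_he):.0f}x per unit volume")
```

Under these assumed conditions the salt carries roughly two orders of magnitude more heat per unit volume per degree, which is the sense in which smaller pumps, pipes and cores become possible.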

The early Aircraft Reactor Experiment (1954) was primarily motivated by the small size that the design could provide, while the Molten-Salt Reactor Experiment (1965–1969) was a prototype for a thorium fuel cycle breeder reactor nuclear power plant. One of the Generation IV reactor designs is a molten-salt-cooled, molten-salt-fuelled reactor (illustrated on the right); the initial reference design is 1000 MWe.[1]

History

Aircraft reactor experiment

Aircraft Reactor Experiment building at ORNL; it was later retrofitted for the MSRE.

Extensive research into molten salt reactors started with the U.S. Aircraft Reactor Experiment (ARE) in support of the U.S. Aircraft Nuclear Propulsion program. The ARE was a 2.5 MWth nuclear reactor experiment designed to attain a high power density for use as an engine in a nuclear-powered bomber. The project included several reactor experiments, including the high-temperature reactor and engine tests collectively called the Heat Transfer Reactor Experiments (HTRE-1, HTRE-2 and HTRE-3) at the National Reactor Testing Station (now Idaho National Laboratory), as well as an experimental high-temperature molten salt reactor at Oak Ridge National Laboratory: the ARE. The ARE used molten fluoride salt NaF-ZrF4-UF4 (53-41-6 mol%) as fuel, was moderated by beryllium oxide (BeO), used liquid sodium as a secondary coolant and had a peak temperature of 860 °C. It operated for 100 MW-hours over nine days in 1954, using Inconel 600 alloy for the metal structure and piping.[2] After the ARE, another reactor was made critical at the Critical Experiments Facility of Oak Ridge National Laboratory in 1957 as part of the circulating-fuel reactor program of the Pratt and Whitney Aircraft Company (PWAC). This reactor, the Pratt and Whitney Aircraft Reactor-1 (PWAR-1), was run for only a few weeks and at essentially zero nuclear power, but it was a critical reactor. The operating temperature was held constant at approximately 1250 °F (677 °C). Like the 2.5 MWth ARE, the PWAR-1 used NaF-ZrF4-UF4 as the primary fuel and coolant, making it one of the three critical molten salt reactors ever built.[3]

Molten-salt reactor experiment

MSRE plant diagram

Oak Ridge National Laboratory (ORNL) took the lead in researching the MSR through the 1960s, and much of its work culminated with the Molten-Salt Reactor Experiment (MSRE). The MSRE was a 7.4 MWth test reactor simulating the neutronic "kernel" of a type of epithermal thorium molten salt breeder reactor called the liquid fluoride thorium reactor (LFTR). The large, expensive breeding blanket of thorium salt was omitted in favor of neutron measurements.

The MSRE was located at ORNL. Its piping, core vat and structural components were made from Hastelloy-N and its moderator was pyrolytic graphite. It went critical in 1965 and ran for four years. The fuel for the MSRE was LiF-BeF2-ZrF4-UF4 (65-29-5-1 mol%), moderated by the graphite core, and its secondary coolant was FLiBe (2LiF-BeF2). It reached temperatures as high as 650 °C and operated for the equivalent of about 1.5 years of full power operation.

Oak Ridge National Laboratory molten salt breeder reactor

The culmination of the Oak Ridge National Laboratory research during the 1970–1976 timeframe was a proposed molten salt breeder reactor (MSBR) design which would use LiF-BeF2-ThF4-UF4 (72-16-12-0.4) as fuel, be moderated by graphite on a 4-year replacement schedule, use NaF-NaBF4 as the secondary coolant, and have a peak operating temperature of 705 °C.[4] Despite this success, the MSR program was closed down in the early 1970s in favor of the liquid metal fast-breeder reactor (LMFBR),[5] after which research stagnated in the United States.[6][7] As of 2011, the ARE and the MSRE remained the only molten-salt reactors ever operated.
The MSBR project received funding until 1976. Inflation-adjusted to 1991 dollars, the project received $38.9 million from 1968 to 1976.[8]

The following reasons were cited as responsible for the program cancellation:
  • The political and technical support for the program in the United States was too thin geographically. Within the United States, only in Oak Ridge, Tennessee, was the technology well understood.[5]
  • The MSR program was in competition with the fast breeder program at the time, which got an early start and had copious government development funds being spent in many parts of the United States. When the MSR development program had progressed far enough to justify a greatly expanded program leading to commercial development, the AEC could not justify the diversion of substantial funds from the LMFBR to a competing program.[5]

Oak Ridge National Laboratory denatured molten salt reactor (DMSR)

In 1980, the engineering technology division at Oak Ridge National Laboratory published a paper entitled “Conceptual Design Characteristics of a Denatured Molten-Salt Reactor with Once-Through Fueling.” In it, the authors “examine the conceptual feasibility of a molten-salt power reactor fueled with denatured uranium-235 (i.e. with low-enriched uranium) and operated with a minimum of chemical processing.” The main priority behind the design characteristics is proliferation resistance.[9] Lessons learned from past projects and research at ORNL were taken into strong consideration. Although the DMSR can theoretically be fueled partially by thorium or plutonium, fueling solely on low enriched uranium (LEU) helps maximize proliferation resistance.

Another important goal of the DMSR is to minimize the R&D required and to maximize feasibility. The Generation IV International Forum (GIF) includes "salt processing" as a technology gap for molten salt reactors.[10] The DMSR requires minimal chemical processing because it is a burner design as opposed to a breeder. Both experimental reactors built at ORNL were burner designs. In addition, the choices to use graphite for neutron moderation and enhanced Hastelloy-N for piping simplify the design and reduce the R&D needed.

Russian MSR research program

In Russia, a molten-salt reactor research program was started in the second half of the 1970s at the Kurchatov Institute. It covered a wide range of theoretical and experimental studies, particularly the investigation of mechanical, corrosion and radiation properties of the molten salt container materials.
The main findings of the completed program supported the conclusion that there are no physical or technological obstacles to the practical implementation of MSRs.[11] A reduction in activity occurred after 1986 due to the Chernobyl disaster, along with a general stagnation of nuclear power and the nuclear industry.[12](p381)

Recent developments

Denatured molten salt reactor

Terrestrial Energy Inc. (TEI), a Canadian company, is developing a DMSR design called the Integral Molten Salt Reactor (IMSR). The IMSR is designed to be deployable as a small modular reactor (SMR) and will be constructed in three power formulations ranging from 80 to 600 MWth. With high operating temperatures, the IMSR has application in industrial heat markets as well as traditional power markets. The main design features include neutron moderation from graphite (thermal spectrum), fueling with low-enriched uranium, and a compact and replaceable Core-unit. The latter feature permits the operational simplicity necessary for industrial deployment.[13]

Liquid-salt very-high-temperature reactor

As of September 2010, research was continuing for reactors that utilize molten salts as coolant. Both the traditional molten-salt reactor and the very-high-temperature reactor (VHTR) were selected as potential designs for study under the Generation IV Initiative (GEN-IV). A version of the VHTR being studied was the liquid-salt very-high-temperature reactor (LS-VHTR), also commonly called the advanced high-temperature reactor (AHTR).[citation needed] It is essentially a standard VHTR design that uses liquid salt, rather than helium, as the coolant in the primary loop. It relies on "TRISO" fuel dispersed in graphite. Early AHTR research focused on graphite in the form of graphite rods inserted into hexagonal moderating graphite blocks, but current studies focus primarily on pebble-type fuel.[citation needed] The LS-VHTR has many attractive features, including: the ability to work at very high temperatures (the boiling points of most molten salts being considered are >1400 °C); low-pressure cooling that can more easily match hydrogen production facility conditions (most thermochemical cycles require temperatures in excess of 750 °C); better electric conversion efficiency than a helium-cooled VHTR operating at similar conditions; passive safety systems; and better retention of fission products in the event of an accident.[citation needed] This concept is now referred to as the "fluoride salt-cooled high-temperature reactor" (FHR).[14]

Liquid fluoride thorium reactor

Reactors containing molten thorium salt, called liquid fluoride thorium reactors (LFTR), would tap the abundant energy source of the thorium fuel cycle. Private companies from Japan, Russia, Australia and the United States, and the Chinese government, have expressed interest in developing this technology.[15][16][17]

Advocates estimate that five hundred metric tons of thorium could supply all U.S. energy needs for one year.[18] The U.S. Geological Survey estimates that the largest known U.S. thorium deposit, the Lemhi Pass district on the Montana-Idaho border, contains reserves of 64,000 metric tons of thorium.[19]

Fuji reactor

The FUJI MSR is a 100 to 200 MWe LFTR, using technology similar to the Oak Ridge National Laboratory Reactor. It is being developed by a consortium including members from Japan, the U.S. and Russia. It would likely take 20 years to develop a full size reactor[20] but the project seems to lack funding.[15]

Chinese project

Under the direction of Jiang Mianheng, the People's Republic of China has initiated a research project in thorium molten-salt reactor technology. It was formally announced at the Chinese Academy of Sciences (CAS) annual conference in January 2011. The plan was "to build a tiny 2 MW plant using liquid fluoride fuel by the end of the decade, before scaling up to commercially viable size over the 2020s. It is also working on a pebble-bed reactor."[17][21] The proposed completion date for a 2 MW test reactor using pebble-bed solid thorium fuel with molten salt cooling has been delayed from 2015 to 2017. The proposed "test thorium molten-salt reactor" has also been delayed.[22]

Indian research

Ratan Kumar Sinha, Chairman of Atomic Energy Commission of India, stated in 2013: "India is also investigating Molten Salt Reactor (MSR) technology. We have molten salt loops operational at BARC."[23]

U.S. companies

Kirk Sorensen, former NASA scientist and chief nuclear technologist at Teledyne Brown Engineering, has been a long-time promoter of the thorium fuel cycle, coining the term liquid fluoride thorium reactor. In 2011, Sorensen founded Flibe Energy, a company aimed at developing 20-50 MW LFTR designs to power military bases. (It is easier to approve novel military designs than civilian power station designs in today's US nuclear regulatory environment).[16][24][25][26]

Another startup company, Transatomic Power, was founded by Ph.D. students from MIT, including Dr. Leslie Dewan (CEO), and Russ Wilcox of E Ink.[27] They are pursuing what they term a Waste-Annihilating Molten Salt Reactor (WAMSR), focused on the potential to consume existing nuclear waste more thoroughly.[28][29]

Weinberg Foundation

The Weinberg Foundation is a British non-profit organization founded in 2011, dedicated to acting as a communications, debate and lobbying hub to raise awareness about the potential of thorium energy and the LFTR. It was formally launched at the House of Lords on 8 September 2011.[30][31][32] It is named after the American nuclear physicist Alvin M. Weinberg, who pioneered thorium molten salt reactor research.

Molten-salt fueling options

Molten-salt-cooled reactors

Molten-salt-fueled reactors are quite different from molten-salt-cooled solid-fuel reactors, called simply the "molten salt reactor system" in the Generation IV proposal and also abbreviated MSCR (an acronym they share with the Molten Salt Converter Reactor design). These reactors were additionally referred to as "advanced high-temperature reactors" (AHTRs), but since about 2010 the preferred DOE designation has been "fluoride high-temperature reactors" (FHRs).[34]

The FHR concept cannot reprocess fuel easily and has fuel rods that need to be fabricated and validated, delaying deployment by up to twenty years[citation needed] from project inception. However, since it uses fabricated fuel, reactor manufacturers can still profit by selling fuel assemblies.

The FHR retains the safety and cost advantages of a low-pressure, high-temperature coolant, also shared by liquid metal cooled reactors. Notably, there is no steam in the core to cause an explosion, and no large, expensive steel pressure vessel. Since it can operate at high temperatures, the conversion of the heat to electricity can also use an efficient, lightweight Brayton cycle gas turbine.
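As a rough illustration of the temperature argument, the sketch below compares ideal (Carnot-limit) conversion efficiencies at two assumed coolant outlet temperatures, one typical of a water-cooled reactor and one of a molten-salt design; the temperatures and the 35 °C heat-sink value are illustrative assumptions, and real Rankine or Brayton cycles fall well below these limits.

```python
# Carnot-limit comparison of heat-to-electricity conversion at two assumed
# coolant outlet temperatures (illustrative numbers only).

def carnot_efficiency(t_hot_c, t_cold_c=35.0):
    """Ideal efficiency between a hot source and a cold sink (temperatures in Celsius)."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

# Assumed outlet temperatures: ~300 C for a light-water reactor,
# ~700 C for a molten-salt-cooled design.
for label, t_out in [("LWR (~300 C)", 300.0), ("FHR/MSR (~700 C)", 700.0)]:
    print(f"{label}: Carnot limit ~ {carnot_efficiency(t_out):.0%}")
```

The gap between the two limits (roughly 46% versus 68% with these assumptions) is what the Brayton-cycle argument above is exploiting.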

Much of the current research on FHRs is focused on small compact heat exchangers. By using smaller heat exchangers, less molten salt needs to be used and therefore significant cost savings could be achieved.[35]

Molten salts can be highly corrosive, more so as temperatures rise. For the primary cooling loop of the MSR, a material is needed that can withstand corrosion at high temperatures and intense radiation. Experiments show that Hastelloy-N and similar alloys are well suited to these tasks at operating temperatures up to about 700 °C. However, long-term experience with a production-scale reactor has yet to be gained. In spite of serious engineering difficulties, higher operating temperatures may be desirable: at 850 °C, thermochemical production of hydrogen becomes possible. Materials for this temperature range have not been validated, though carbon composites, molybdenum alloys (e.g. TZM), carbides, and refractory-metal-based or oxide-dispersion-strengthened (ODS) alloys might be feasible.

Fused salt selection

Molten FLiBe

The salt mixtures are chosen to make the reactor safer and more practical. Fluoride salts are favored because fluorine has only one stable isotope (F-19) and does not easily become radioactive under neutron bombardment. Both of these properties make fluorine better than chlorine, which has two stable isotopes (Cl-35 and Cl-37) as well as a slowly decaying isotope between them, produced when Cl-35 absorbs a neutron. Compared to chlorine and other halides, fluorine also absorbs fewer neutrons and slows ("moderates") neutrons better. Low-valence fluorides boil at high temperatures, though many pentafluorides and hexafluorides boil at low temperatures. They must also be very hot before they break down into their simpler components; such molten salts are "chemically stable" when maintained well below their boiling points.

On the other hand, some salts are useful enough that isotope separation of the halide is worthwhile. Chlorides permit fast breeder reactors to be constructed using molten salts, although much less work has been done on reactor designs using chloride salts. Chlorine, unlike fluorine, must be purified to isolate the heavier stable isotope, chlorine-37, thus reducing the production of sulfur tetrafluoride that occurs when chlorine-35 absorbs a neutron to become chlorine-36, which then degrades by beta decay to sulfur-36. Similarly, any lithium present in a salt mixture must be in the form of purified lithium-7 to reduce tritium production from lithium-6 (the tritium then forms corrosive hydrogen fluoride).

Reactor salts are usually close to eutectic mixtures to reduce their melting point. A low melting point simplifies melting the salt at startup and reduces the risk of the salt freezing as it's cooled in the heat exchanger.

Due to the high "redox window" of fused fluoride salts, the chemical potential of the fused salt system can be changed. Fluorine-Lithium-Beryllium ("FLiBe") can be used with beryllium additions to lower the electrochemical potential and almost eliminate corrosion. However, since beryllium is extremely toxic, special precautions must be engineered into the design to prevent its release into the environment. Many other salts can cause plumbing corrosion, especially if the reactor is hot enough to make highly reactive hydrogen.

To date, most research has focused on FLiBe, because lithium and beryllium are reasonably effective moderators and form a eutectic salt mixture with a lower melting point than either of the constituent salts. Beryllium also performs neutron doubling, improving the neutron economy; this occurs when a beryllium nucleus re-emits two neutrons after absorbing a single neutron. For the fuel-carrying salts, generally 1% or 2% (by mole) of UF4 is added. Thorium and plutonium fluorides have also been used.
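As a worked example of what "1% or 2% (by mole) of UF4" means in mass terms, the sketch below estimates the uranium mass fraction of an assumed LiF-BeF2-UF4 mixture; the exact mole fractions are illustrative, not a specific reactor's fuel specification.

```python
# Back-of-the-envelope uranium mass fraction for a fuel-carrying fluoride salt.
# The composition below is an assumed illustration, loosely patterned on the
# LiF-BeF2-UF4 mixtures discussed above.

MOLAR_MASS = {"LiF": 25.94, "BeF2": 47.01, "UF4": 314.02, "U": 238.03}  # g/mol

composition = {"LiF": 0.65, "BeF2": 0.33, "UF4": 0.02}  # mole fractions (assumed)

mix_molar_mass = sum(x * MOLAR_MASS[s] for s, x in composition.items())
uranium_mass = composition["UF4"] * MOLAR_MASS["U"]

print(f"Average molar mass of the salt: {mix_molar_mass:.1f} g/mol")
print(f"Uranium mass fraction:          {uranium_mass / mix_molar_mass:.1%}")
```

With these assumed fractions roughly an eighth of the salt's mass is uranium, even though UF4 makes up only 2% of the molecules.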

Comparison of the neutron capture and moderating efficiency of several materials. Red are Be-bearing, blue are ZrF4-bearing and green are LiF-bearing salts.[36]

Material | Total neutron capture relative to graphite (per unit volume) | Moderating ratio (avg. 0.1 to 10 eV)
Heavy water | 0.2 | 11449
Light water | 75 | 246
Graphite | 1 | 863
Sodium | 47 | 2
UCO | 285 | 2
UO2 | 3583 | 0.1
2LiF–BeF2 | 8 | 60
LiF–BeF2–ZrF4 (64.5–30.5–5) | 8 | 54
NaF–BeF2 (57–43) | 28 | 15
LiF–NaF–BeF2 (31–31–38) | 20 | 22
LiF–ZrF4 (51–49) | 9 | 29
NaF–ZrF4 (59.5–40.5) | 24 | 10
LiF–NaF–ZrF4 (26–37–37) | 20 | 13
KF–ZrF4 (58–42) | 67 | 3
RbF–ZrF4 (58–42) | 14 | 13
LiF–KF (50–50) | 97 | 2
LiF–RbF (44–56) | 19 | 9
LiF–NaF–KF (46.5–11.5–42) | 90 | 2
LiF–NaF–RbF (42–6–52) | 20 | 8
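The table can be read as a trade-off between parasitic neutron capture and moderating ability. The short sketch below loads a few of the rows and ranks them by a crude figure of merit (moderating ratio divided by relative capture), purely to illustrate how candidate salts might be compared numerically; the figure of merit itself is an arbitrary choice, not a standard metric.

```python
# Illustrative comparison of a few candidate salts from the table above.
# Lower neutron capture and a higher moderating ratio are both desirable.

salts = {
    # name: (capture relative to graphite per unit volume, moderating ratio)
    "2LiF-BeF2 (FLiBe)":           (8, 60),
    "LiF-BeF2-ZrF4 (64.5-30.5-5)": (8, 54),
    "NaF-BeF2 (57-43)":            (28, 15),
    "LiF-ZrF4 (51-49)":            (9, 29),
    "NaF-ZrF4 (59.5-40.5)":        (24, 10),
    "LiF-NaF-KF (46.5-11.5-42)":   (90, 2),
}

# Rank by moderating ratio per unit of relative capture (a crude figure of merit).
for name, (capture, moderating) in sorted(
        salts.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True):
    print(f"{name:30s} capture={capture:3d}  moderating={moderating:3d}  "
          f"merit={moderating / capture:5.2f}")
```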

Fused salt purification

Techniques for preparing and handling molten salt were first developed at Oak Ridge National Laboratory.[37] The purpose of salt purification was to eliminate oxides, sulfur, and metal impurities. Oxides could result in the deposition of solid particles during reactor operation. Sulfur had to be removed because of its corrosive attack on nickel-based alloys at operational temperatures. Structural metals such as chromium, nickel, and iron had to be removed for corrosion control.

A water content reduction purification stage using HF and a helium sweep gas was specified to run at 400 °C. Oxide and sulfur contamination in the salt mixtures was removed by gas sparging with an HF–H2 mixture, with the salt heated to 600 °C.[37](p8) Structural metal contamination in the salt mixtures was removed by hydrogen gas sparging at 700 °C.[37](p26) Solid ammonium hydrofluoride was proposed as a safer alternative for oxide removal.[38]

Fused salt processing

The possibility of online processing can be an advantage of the MSR design. Continuous processing would reduce the inventory of fission products, control corrosion and improve neutron economy by removing fission products with a high neutron absorption cross-section, especially xenon. This makes the MSR particularly suited to the neutron-poor thorium fuel cycle. Online fuel processing can introduce risks of fuel processing accidents,[39](p15) which can trigger the release of radioisotopes.

In some thorium breeding scenarios, the intermediate product protactinium-233 would be removed from the reactor and allowed to decay into highly pure uranium-233, an attractive bomb-making material. More modern designs propose to use a lower specific power or a separate large thorium breeding blanket. This dilutes the protactinium to such an extent that few protactinium atoms absorb a second neutron or, via a (n, 2n) reaction (in which an incident neutron is not absorbed but instead knocks a neutron out of the nucleus), generate uranium-232. Because U-232 has a short half-life and its decay chain contains hard gamma emitters, it makes the isotopic mix of uranium less attractive for bomb-making. This benefit would come with the added expense of a larger fissile inventory or a 2-fluid design with a large quantity of blanket salt.
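The dilution argument can be made quantitative with a simple rate competition: a Pa-233 nucleus either decays (with a roughly 27-day half-life) or captures a neutron at a rate proportional to the local flux. The cross-section and flux values in the sketch below are assumptions chosen only to show how lower specific power reduces parasitic capture.

```python
import math

# Competition between Pa-233 decay (to U-233) and neutron capture, as a
# function of the assumed in-core neutron flux.  The capture cross-section
# and flux values are illustrative assumptions.

HALF_LIFE_S = 26.97 * 86400                 # Pa-233 half-life (~27 days), in seconds
DECAY_CONST = math.log(2) / HALF_LIFE_S     # decay probability per second

SIGMA_CAPTURE = 40e-24                      # cm^2 (~40 barns, assumed effective value)

for flux in (1e13, 1e14, 5e14):             # neutrons / (cm^2 s), assumed
    capture_rate = SIGMA_CAPTURE * flux
    fraction_captured = capture_rate / (capture_rate + DECAY_CONST)
    print(f"flux {flux:.0e}: ~{fraction_captured:.1%} of Pa-233 captures a neutron "
          f"before decaying to U-233")
```

With these assumptions the parasitic fraction drops from a few percent at high flux to well under one percent at low flux, which is the motivation for lower specific power or a separate breeding blanket.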

The necessary fuel salt reprocessing technology has been demonstrated, but only at laboratory scale. A prerequisite to full-scale commercial reactor design is the R&D to engineer an economically competitive fuel salt cleaning system.

Fissile fuel reprocessing issues

Reprocessing refers to the chemical separation of fissionable uranium and plutonium from spent nuclear fuel.[40] The recovery of uranium or plutonium could be subject to the risk of nuclear proliferation. In the United States the regulatory regime has varied dramatically in different administrations.[40]

In the original 1971 Molten Salt Breeder Reactor proposal, uranium reprocessing was scheduled every ten days as part of reactor operation.[41](p181) Subsequently, a once-through fueling design was proposed that limited uranium reprocessing to every thirty years, at the end of useful salt life.[42](p98) An admixture of uranium-238 was called for to make sure recovered uranium would not be weapons grade. This design is referred to as the denatured molten salt reactor.[43] If reprocessing were prohibited, the uranium would be disposed of with other fission products.

Comparison to ordinary light water reactors

MSRs, especially those with the fuel dissolved in the salt, differ considerably from conventional reactors. The pressure can be low and the temperature is much higher. In this respect an MSR is more similar to a liquid metal cooled reactor than to a conventional light water cooled reactor. As an additional difference, MSRs are often planned as breeder reactors with a closed fuel cycle, as opposed to the once-through fuel cycle currently used in US nuclear reactors.

The typical safety concepts rely on a negative temperature coefficient of reactivity and a large possible temperature rise to limit reactivity excursions. As an additional method for shutdown, a separate, passively cooled container below the reactor is planned. In case of problems and for regular maintenance, the fuel is drained from the reactor. This stops the nuclear reaction and provides a second means of cooling. Neutron-producing accelerators have even been proposed for some super-safe subcritical experimental designs.[44]
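A minimal sketch of how a negative temperature coefficient bounds an excursion: if reactivity falls linearly with temperature, an inserted reactivity is cancelled once the fuel heats up by that reactivity divided by the coefficient's magnitude. Both numbers below are assumptions for illustration, not values for any specific design.

```python
# How a strongly negative temperature coefficient limits a reactivity excursion:
# the temperature rise that cancels an inserted reactivity is
#   delta_T = inserted_reactivity / |alpha|.
# Both values below are illustrative assumptions.

ALPHA_PCM_PER_K = -7.0     # assumed overall temperature coefficient, pcm/K
INSERTED_PCM = 500.0       # assumed accidental reactivity insertion, pcm

delta_t = INSERTED_PCM / abs(ALPHA_PCM_PER_K)
print(f"Temperature rise that cancels {INSERTED_PCM:.0f} pcm: ~{delta_t:.0f} K")
```

With these assumed numbers the excursion is self-limited by a temperature rise of roughly 70 K, the kind of margin a low-pressure, high-boiling-point salt is intended to absorb.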

Cost estimates from the 1970s were slightly lower than for conventional light-water reactors.[45]
The temperatures of some proposed designs are high enough to produce process heat for hydrogen production or other chemical reactions. Because of this, they have been included in the GEN-IV roadmap for further study.[46]

Advantages

The molten salt reactor offers many potential advantages compared to current light water reactors:[4]
  • Inherently safe design (safety by passive components and the strong negative temperature coefficient of reactivity of some designs).
  • Operating at a low pressure improves safety and simplifies the design
  • In theory a full recycle system can be much cleaner: the discharge wastes after chemical separation are predominately fission products, most of which have relatively short half lives compared to longer-lived actinide wastes. This can result in a significant reduction in the containment period in a geologic repository (300 years vs. tens of thousands of years).
  • The fuel's liquid phase is adequate for pyroprocessing for separation of fission products. This may have advantages over conventional reprocessing, though much development is still needed.
  • There is no need for fuel rod manufacturing
  • Some designs can "burn" problematic transuranic elements from traditional solid-fuel nuclear reactors.
  • An MSR can react to load changes in less than 60 seconds (unlike "traditional" solid-fuel nuclear power plants that suffer from Xenon poisoning).
  • Molten salt reactors can run at high temperatures, yielding high efficiencies to produce electricity.
  • Some MSRs can offer a high "specific power", that is high power at a low mass. This was demonstrated by the ARE, the aircraft reactor experiment.[2]
  • A possibly good neutron economy makes the MSR attractive for the neutron poor thorium fuel cycle.
  • Lithium-containing salts will cause significant tritium production (comparable with heavy water reactors), even if pure 7Li is used. Tritium itself is valuable, but it also decays (half-life 12.32 years) to helium-3, another valuable product.
  • LWRs (and most other solid-fuel reactors) have no clean "off switch", but once the initial criticality is overcome an MSR is comparatively easy and fast to turn on and off. For example, it is said that the researchers would "turn off the Molten-Salt Reactor Experiment for the weekend". At a minimum, the reactor needs enough energy to re-melt the salt and run the pumps.

Disadvantages

  • Little development compared to most Gen IV designs - much is unknown.
  • Need to operate an on-site chemical plant to manage core mixture and remove fission products.
  • Likely need for regulatory changes to deal with radically different design features.
  • Corrosion may occur over many decades of reactor operation and could be problematic.[47]
  • Nickel and iron based alloys are prone to embrittlement under high neutron flux.[42](p83)
  • Being a breeder reactor, it may be possible to modify an MSR to produce weapons grade nuclear material.[48]

Thursday, October 23, 2014

General Circulation Model

From Wikipedia, the free encyclopedia

Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. To “run” a model, scientists divide the planet into a 3-dimensional grid, apply the basic equations, and evaluate the results. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid and evaluate interactions with neighboring points.[1]
This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5).

A general circulation model (GCM), a type of climate model, is a mathematical model of the general circulation of a planetary atmosphere or ocean. It is based on the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for complex computer programs commonly used for simulating the atmosphere or ocean of the Earth.
Atmospheric and oceanic GCMs (AGCM and OGCM) are key components of global climate models along with sea ice and land-surface components. GCMs and global climate models are widely applied for weather forecasting, understanding the climate, and projecting climate change. Versions designed for decade to century time scale climate applications were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey.[2] These computationally intensive numerical models are based on the integration of a variety of fluid dynamical, chemical, and sometimes biological equations.

Note on nomenclature

The initialism GCM stands originally for general circulation model. Recently, a second meaning has come into use, namely global climate model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modelling climate, and hence the two terms are sometimes used as if they were interchangeable. However, the term "global climate model" is ambiguous, and may refer to an integrated framework incorporating multiple components which may include a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically with differing levels of detail.

History: general circulation models

In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere, which became the first successful climate model.[3][4] Following Phillips's work, several groups began working to create general circulation models.[5] The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.[2] By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined into the 2000s.[6] In 1996, efforts began to initialize and model soil and vegetation types, which led to more realistic forecasts.[7] Coupled ocean-atmosphere climate models such as the Hadley Centre for Climate Prediction and Research's HadCM3 model are currently being used as inputs for climate change studies.[5] The role of gravity waves was neglected within these models until the mid-1980s. Now, gravity waves are required within global climate models to simulate regional and global scale circulations accurately, though their broad spectrum makes their incorporation complicated.[8]

Atmospheric vs oceanic models

There are both atmospheric GCMs (AGCMs) and oceanic GCMs (OGCMs). An AGCM and an OGCM can be coupled together to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of other components (such as a sea ice model or a model for evapotranspiration over land), the AOGCM becomes the basis for a full climate model. Within this structure, different variations can exist, and their varying response to climate change may be studied (e.g., Sun and Hansen, 2003).

Modeling trends

A recent trend in GCMs is to apply them as components of Earth system models, e.g. by coupling to ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon CTM may allow a GCM to better predict changes in carbon dioxide concentrations resulting from changes in anthropogenic emissions. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the possible effects of climate change on the recovery of the ozone hole to be studied.[9]

Climate prediction uncertainties depend on uncertainties in chemical, physical, and social models (see IPCC scenarios below).[10] Progress has been made in incorporating more realistic chemistry and physics in the models, but significant uncertainties and unknowns remain, especially regarding the future course of human population, industry, and technology.

Note that many simpler levels of climate model exist; some are of only heuristic interest, while others continue to be scientifically relevant.

Model structure

Three-dimensional (more properly four-dimensional) GCMs discretise the equations for fluid motion and integrate these forward in time. They also contain parameterisations for processes – such as convection – that occur on scales too small to be resolved directly. More sophisticated models may include representations of the carbon and other cycles.

A simple general circulation model (SGCM), a minimal GCM, consists of a dynamical core that relates material properties such as temperature to dynamical properties such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input into the model, and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are the ones most strongly attenuated. Such models may be used to study atmospheric processes within a simplified framework but are not suitable for future climate projections.
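The "scale-dependent friction" mentioned above is often implemented as hyperdiffusion, which damps the smallest resolved scales fastest. Below is a minimal sketch of the per-step damping factor exp(-ν k⁴ Δt) for a few wavenumbers; the coefficient and time step are arbitrary nondimensional values, not settings from any particular model.

```python
import math

# Scale-selective damping of the kind used in simple dynamical cores:
# a del^4 hyperdiffusion multiplies each spectral amplitude by
# exp(-nu * k**4 * dt), so the highest wavenumbers are attenuated most.
# nu and dt are arbitrary illustrative values.

NU = 1e-5   # hyperdiffusion coefficient (nondimensional, assumed)
DT = 1.0    # one nondimensional time step

for k in (1, 5, 10, 20, 40):   # nondimensional wavenumbers
    retained = math.exp(-NU * k**4 * DT)
    print(f"wavenumber {k:2d}: amplitude retained per step = {retained:.4f}")
```

The k⁴ dependence means the largest scales are essentially untouched while the shortest resolved waves are strongly attenuated each step.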

Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) and impose sea surface temperatures (SSTs). A large amount of information including model documentation is available from AMIP.[11] They may include atmospheric chemistry.
  • AGCMs consist of a dynamical core which integrates the equations of fluid motion, typically for:
    • surface pressure
    • horizontal components of velocity in layers
    • temperature and water vapor in layers
  • There is generally a radiation code, split into solar/short wave and terrestrial/infra-red/long wave
  • Parametrizations are used to include the effects of processes that cannot be resolved explicitly; all modern AGCMs include parameterizations for processes such as convection, cloud cover, and land-surface exchange.
A GCM contains a number of prognostic equations that are stepped forward in time (typically winds, temperature, moisture, and surface pressure) together with a number of diagnostic equations that are evaluated from the simultaneous values of the variables. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. The pressure diagnosed in this way then is used to compute the pressure gradient force in the time-dependent equation for the winds.
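A minimal sketch of the diagnostic step just described: pressure aloft is obtained by integrating the hydrostatic equation dp/dz = -ρg upward from the predicted surface pressure, with density taken from the ideal-gas law. The surface pressure and the temperature profile are made-up values for illustration.

```python
import math

# Diagnosing pressure at height from a predicted surface pressure and a
# temperature profile, layer by layer, using the hydrostatic relation
#   dp/dz = -rho * g,  rho = p / (R_d * T).
# The surface pressure and temperatures below are illustrative.

G = 9.81      # gravitational acceleration, m/s^2
R_D = 287.0   # gas constant for dry air, J/(kg K)

surface_pressure = 101325.0                                  # Pa (assumed)
levels = [(0.0, 288.0), (1000.0, 281.5), (2000.0, 275.0), (5000.0, 255.5)]  # (z in m, T in K)

pressures = [surface_pressure]
for (z0, t0), (z1, t1) in zip(levels, levels[1:]):
    t_mean = 0.5 * (t0 + t1)                                 # layer-mean temperature
    pressures.append(pressures[-1] * math.exp(-G * (z1 - z0) / (R_D * t_mean)))

for (z, _), p in zip(levels, pressures):
    print(f"z = {z:6.0f} m   p = {p / 100:7.1f} hPa")
```

The pressures diagnosed this way would then feed the pressure-gradient term in the prognostic wind equations.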

Oceanic GCMs (OGCMs) model the ocean (with fluxes from the atmosphere imposed) and may or may not contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables.

Coupled atmosphere–ocean GCMs (AOGCMs) (e.g. HadCM3, GFDL CM2.X) combine the two models. They thus have the advantage of removing the need to specify fluxes across the interface of the ocean surface. These models are the basis for sophisticated model predictions of future climate, such as are discussed by the IPCC.

AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. They are the only tools that could provide detailed regional predictions of future climate change. However, they are still under development. The simpler models are generally susceptible to simple analysis and their results are generally easy to understand. AOGCMs, by contrast, are often nearly as hard to analyse as the real climate system.

Model grids

The fluid equations for AGCMs are discretised using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude/longitude grid); however, more sophisticated non-rectangular grids (e.g., icosahedral) and grids of variable resolution[12] are more often used.[13] The "LMDz" model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for ENSO. Spectral models generally use a gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: the Hadley Centre model HadCM3, for example, uses 3.75 degrees in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 x 72 for some variables), and has 19 levels in the vertical. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u, v, T, Q), though a full count would give more (clouds; soil levels). HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 x 0.83 degrees respectively.[14] These resolutions are lower than is typically used for weather forecasting.[15] Ocean resolutions tend to be higher; for example, HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal.
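The "approximately 500,000 basic variables" figure can be reproduced directly from the quoted grid, assuming four prognostic variables per grid point:

```python
# Reproducing the rough count of "basic" variables quoted for HadCM3's atmosphere:
# 96 longitudes x 73 latitudes x 19 levels x 4 prognostic variables (u, v, T, Q).

n_lon, n_lat, n_levels = 96, 73, 19
n_prognostic = 4   # u, v, T, Q (clouds, soil levels, etc. would add more)

print(f"Approximate 'basic' variable count: {n_lon * n_lat * n_levels * n_prognostic:,}")
# -> 532,608, i.e. roughly 500,000
```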

For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. There are experiments using geodesic grids[16] and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere.[17]
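The pole problem can be quantified: on a regular latitude-longitude grid the east-west spacing shrinks with the cosine of latitude, and an explicit scheme's stable time step shrinks with it (a one-dimensional CFL estimate, Δt ≤ Δx/c). The grid spacing and signal speed below are assumptions for illustration.

```python
import math

# East-west grid spacing, and the corresponding CFL-limited time step, on a
# regular latitude-longitude grid.  Grid spacing and signal speed are assumed.

EARTH_RADIUS = 6.371e6   # m
DLON_DEG = 2.5           # assumed longitudinal grid spacing, degrees
SIGNAL_SPEED = 300.0     # m/s, roughly a fast gravity-wave speed (assumed)

for lat in (0.0, 60.0, 85.0, 89.0):
    dx = EARTH_RADIUS * math.cos(math.radians(lat)) * math.radians(DLON_DEG)
    dt_max = dx / SIGNAL_SPEED            # simple 1-D CFL estimate
    print(f"lat {lat:4.1f} deg: dx ~ {dx / 1000:6.1f} km, max stable dt ~ {dt_max:5.0f} s")
```

Near 89° the allowed time step drops to tens of seconds, compared with roughly fifteen minutes at the equator, which is why polar filtering, rotated grids, or quasi-uniform grids are used.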

Flux correction

Some early incarnations of AOGCMs required a somewhat ad hoc process of "flux correction" to achieve a stable climate (not all model groups used this technique). This arose because the separately prepared ocean and atmosphere models each assumed an implicit flux from the other component that differed from what that component could actually provide. If uncorrected, this could lead to a dramatic drift away from observations in the coupled model. However, if the fluxes were 'corrected', the problems in the model that led to these unrealistic fluxes might go unrecognised, and that might affect the model's sensitivity. As a result, there has always been a strong disincentive to use flux corrections, and the vast majority of models used in the current round of the Intergovernmental Panel on Climate Change do not use them. The model improvements that now make flux corrections unnecessary are various, but include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between atmosphere and ocean models. Confidence in model projections is increased by the improved performance of several models that do not use flux adjustment. These models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate change projections.[18]

Convection

Moist convection causes the release of latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be parameterized. This has been done since the earliest days of climate modelling, in the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used,[19] although there are a variety of different schemes now in use.[20][21][22] Clouds are typically parametrized, not because their physical processes are poorly understood, but because they occur on a scale smaller than the resolved scale of most GCMs. The causes and effects of their small scale actions on the large scale are represented by large scale parameters, hence "parameterization". The fact that cloud processes are not perfectly parameterized is due in part to a lack of understanding of clouds, but not due to some inherent shortcoming of the method.[23]

Output variables

Most models include software to diagnose a wide range of variables for comparison with observations or study of processes within the atmosphere. An example is the 1.5-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from the surface and lowest-model-layer temperatures. Other software is used for creating plots and animations.
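As a deliberately naive illustration of "diagnosed rather than predicted", the sketch below interpolates a 1.5 m value between an assumed skin temperature and the lowest model level. Real models use surface-layer similarity theory with stability corrections; this is only meant to show that the screen temperature is derived from, not carried by, the model state.

```python
# Naive diagnosis of a 1.5 m "screen" temperature from the surface (skin)
# temperature and the lowest model level.  All numbers are made up, and the
# linear interpolation stands in for the similarity-theory schemes real
# models use.

t_skin = 290.0     # K, surface skin temperature (assumed)
t_level1 = 287.5   # K, temperature of the lowest model level (assumed)
z_level1 = 30.0    # m, height of the lowest model level (assumed)
z_screen = 1.5     # m, standard screen height

t_screen = t_skin + (t_level1 - t_skin) * (z_screen / z_level1)
print(f"Diagnosed 1.5 m temperature: {t_screen:.2f} K")
```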

Projections of future climate change

Projected annual mean surface air temperature from 1970-2100, based on SRES emissions scenario A1B, using the NOAA GFDL CM2.1 climate model (credit: NOAA Geophysical Fluid Dynamics Laboratory).[24]

Coupled ocean–atmosphere GCMs use transient climate simulations to project/predict future temperature changes under various scenarios. These can be idealised scenarios (most commonly, CO2 increasing at 1%/yr) or more realistic (usually the "IS92a" or more recently the SRES scenarios). Which scenarios should be considered most realistic is currently uncertain, as the projections of future CO2 (and sulphate) emission are themselves uncertain.
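The idealised 1%/yr case has a convenient arithmetic property: compounding at 1% per year doubles the CO2 concentration in about 70 years, as the short calculation below shows.

```python
import math

# Time for CO2 to double when increased by 1% per year (compound growth).
growth_rate = 0.01
print(f"Doubling time at 1%/yr: ~{math.log(2) / math.log(1 + growth_rate):.0f} years")  # ~70
```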

The 2001 IPCC Third Assessment Report figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which CO2 is increased at 1% per year.[25] Figure 9.5 shows the response of a smaller number of models to more realistic forcing. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C.

Future scenarios do not include unknowable events – for example, volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to GHG forcing in the long term, but large volcanic eruptions, for example, are known to exert a temporary cooling effect.

Human emissions of GHGs are an external input to the models, although it would be possible to couple in an economic model to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model including land vegetation and oceanic processes to calculate GHG levels.

Emissions scenarios

In the 21st century, changes in global mean temperature are projected to vary across the world
Projected change in annual mean surface air temperature from the late 20th century to the middle 21st century, based on SRES emissions scenario A1B (credit: NOAA Geophysical Fluid Dynamics Laboratory).[24]

For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–99) that ranged from 1.8 °C to 4.0 °C.[26] Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of between 1.1 and 6.4 °C.[26]

Pope (2008) described a study where climate change projections were made using several different emission scenarios.[27] In a scenario where global emissions start to decrease by 2010 and then decline at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible but less likely.

Sokolov et al. (2009) examined a scenario designed to simulate a future where there is no policy to reduce emissions. In their integrated model, this scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with different modeling of the future climate, the predicted median warming was 4.1 °C.[28]

Accuracy of models that predict global warming

SST errors in HadCM3
North American precipitation from various models.
Temperature predictions from some climate models assuming the SRES A2 emissions scenario.

AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when forced by observed changes in greenhouse gases and aerosols, and better agreement is achieved when both natural and man-made forcings are included.[29][30]

No model – whether a wind-tunnel model for designing aircraft, or a climate model for projecting global warming – perfectly reproduces the system being modeled. Such inherently imperfect models may nevertheless produce useful results. In this context, GCMs are capable of reproducing the general features of the observed global temperature over the past century.[29]

A debate over how to reconcile climate model predictions that upper-air (tropospheric) warming should be greater than surface warming with observations, some of which appeared to show otherwise,[31] now appears to have been resolved in favour of the models, following revisions to the data; see the satellite temperature record.

The effects of clouds are a significant area of uncertainty in climate models. Clouds have competing effects on the climate. One of the roles that clouds play in climate is in cooling the surface by reflecting sunlight back into space; another is warming by increasing the amount of infrared radiation emitted from the atmosphere to the surface.[32] In the 2001 IPCC report on climate change, the possible changes in cloud cover were highlighted as one of the dominant uncertainties in predicting future climate change.[33][34]

Thousands of climate researchers around the world use climate models to understand the climate system. There are thousands of papers published about model-based studies in peer-reviewed journals – and a part of this research is work improving the models. Improvement has been difficult but steady (most obviously, state of the art AOGCMs no longer require flux correction), and progress has sometimes led to discovering new uncertainties.

In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulations of most factors. However, the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either possibility indicates that progress is required in order to monitor and predict such changes.[35]

A more complete discussion of climate models is provided in the IPCC's Third Assessment Report.[36]
  • The model mean exhibits good agreement with observations.
  • The individual models often exhibit worse agreement with observations.
  • Many of the non-flux adjusted models suffered from unrealistic climate drift up to about 1 °C/century in global mean surface temperature.
  • The errors in model-mean surface air temperature rarely exceed 1 °C over the oceans and 5 °C over the continents; precipitation and sea level pressure errors are relatively greater but the magnitudes and patterns of these quantities are recognisably similar to observations.
  • Surface air temperature is particularly well simulated, with nearly all models closely matching the observed magnitude of variance and exhibiting a correlation > 0.95 with the observations.
  • Simulated variance of sea level pressure and precipitation is within ±25% of observed.
  • All models have shortcomings in their simulations of the present day climate of the stratosphere, which might limit the accuracy of predictions of future climate change.
    • There is a tendency for the models to show a global mean cold bias at all levels.
    • There is a large scatter in the tropical temperatures.
    • The polar night jets in most models are inclined poleward with height, in noticeable contrast to an equatorward inclination of the observed jet.
    • There is a differing degree of separation in the models between the winter sub-tropical jet and the polar night jet.
  • For nearly all models the r.m.s. error in zonal- and annual-mean surface air temperature is small compared with its natural variability.
    • There are problems in simulating natural seasonal variability.[citation needed]
      • In flux-adjusted models, seasonal variations are simulated to within 2 K of observed values over the oceans. The corresponding average over non-flux-adjusted models shows errors up to about 6 K in extensive ocean areas.
      • Near-surface land temperature errors are substantial in the average over flux-adjusted models, which systematically underestimates (by about 5 K) temperature in areas of elevated terrain. The corresponding average over non-flux-adjusted models forms a similar error pattern (with somewhat increased amplitude) over land.
      • In Southern Ocean mid-latitudes, the non-flux-adjusted models overestimate the magnitude of January-minus-July temperature differences by ~5 K due to an overestimate of summer (January) near-surface temperature. This error is common to five of the eight non-flux-adjusted models.
      • Over Northern Hemisphere mid-latitude land areas, zonal mean differences between July and January temperatures simulated by the non-flux-adjusted models show a greater spread (positive and negative) about observed values than results from the flux-adjusted models.
      • The ability of coupled GCMs to simulate a reasonable seasonal cycle is a necessary condition for confidence in their prediction of long-term climatic changes (such as global warming), but it is not a sufficient condition unless the seasonal cycle and long-term changes involve similar climatic processes.
  • Coupled climate models do not simulate clouds and some related hydrological processes (in particular those involving upper tropospheric humidity) with reasonable accuracy. Problems in the simulation of clouds and upper tropospheric humidity remain worrisome because the associated processes account for most of the uncertainty in climate model simulations of anthropogenic change.
The precise magnitude of future changes in climate is still uncertain;[37] for the end of the 21st century (2071 to 2100), under SRES scenario A2, the change in global average surface air temperature from AOGCMs relative to 1961 to 1990 is +3.0 °C (5.4 °F), with a range of +1.3 to +4.5 °C (+2.3 to +8.1 °F).

In the IPCC's Fifth Assessment Report, it was stated that there was "...very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period." However, the report also observed that the rate of warming over the period 1998-2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models.[38]

Relation to weather forecasting

The global climate models used for climate projections are very similar in structure to (and often share computer code with) numerical models for weather prediction but are nonetheless logically distinct.

Most weather forecasting is done on the basis of interpreting the output of numerical model results. Since forecasts are short (typically a few days or a week), such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast; typically these are taken from the output of a previous forecast, with observations blended in. Because the results are needed quickly, the predictions must be run in a few hours; but because they only need to cover a week of real time, these predictions can be run at higher resolution than in climate mode. Currently the ECMWF runs at 40 km (25 mi) resolution[39] as opposed to the 100-to-200 km (62-to-124 mi) scale used by typical climate models. Often nested models are run forced by the global models for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a mesoscale model with an 11 km (6.8 mi) resolution[40] covering the UK, and various agencies in the U.S. also run nested models such as the NGM and NAM models. Like most global numerical weather prediction models such as the GFS, global climate models are often spectral models[41] instead of grid models. Spectral models are often used for global models because some computations can be performed faster, reducing the time needed to run the simulation.

Computations involved

Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. They are used for a variety of purposes from study of the dynamics of the climate system to projections of future climate.

All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature.
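That balance can be illustrated with the simplest possible climate model, a zero-dimensional energy budget in which absorbed shortwave radiation equals emitted longwave radiation σT⁴ scaled by an effective emissivity. The albedo and emissivity below are textbook-style assumptions, and the model is far simpler than any GCM.

```python
# Zero-dimensional energy-balance illustration of the statement above:
# at equilibrium, absorbed shortwave = emitted longwave,
#   (S0 / 4) * (1 - albedo) = epsilon * sigma * T**4.
# Albedo and effective emissivity are illustrative assumptions.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo (assumed)
EPSILON = 0.612    # effective emissivity standing in for the greenhouse effect (assumed)

absorbed = (S0 / 4.0) * (1.0 - ALBEDO)
t_equilibrium = (absorbed / (EPSILON * SIGMA)) ** 0.25

print(f"Absorbed shortwave:              {absorbed:.1f} W/m^2")
print(f"Equilibrium surface temperature: {t_equilibrium:.1f} K")   # roughly 288 K
```

Any persistent imbalance between the two sides shows up as a temperature drift, which is the same bookkeeping a full GCM performs grid cell by grid cell.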

The most talked-about models of recent years have been those relating temperature to emissions of carbon dioxide (see greenhouse gas). These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes.[42]

Three-dimensional (or, more properly, four-dimensional, since time is also considered) GCMs discretise the equations for fluid motion and energy transfer and integrate these over time. They also contain parametrisations for processes, such as convection, that occur on scales too small to be resolved directly.

Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat[43]) combine the two models.

Models can range from relatively simple to quite complex:
  • A simple radiant heat transfer model that treats the earth as a single point and averages outgoing energy
  • This can be expanded vertically (radiative-convective models) or horizontally
  • finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
This is not a full list; for example "box models" can be written to treat flows across and within ocean basins. Furthermore, other types of modelling can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems.

Other climate models

Earth-system models of intermediate complexity (EMICs)

Depending on the nature of questions asked and the pertinent time scales, there are, on the one extreme, conceptual, more inductive models, and, on the other extreme, general circulation models operating at the highest spatial and temporal resolution currently feasible. Models of intermediate complexity bridge the gap.
One example is the Climber-3 model. Its atmosphere is a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of 1/2 a day; the ocean is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.

Radiative-convective models (RCM)

One-dimensional, radiative-convective models were used to verify basic climate assumptions in the '80s and '90s.[44]

Climate modelers

A climate modeler is a person who designs, develops, implements, tests, maintains or exploits climate models. There are three major types of institutions where a climate modeller may be found.
