Friday, December 10, 2021

Numerical weather prediction

From Wikipedia, the free encyclopedia
Weather models use systems of differential equations based on the laws of physics, specifically fluid motion, thermodynamics, radiative transfer, and chemistry, and use a coordinate system which divides the planet into a 3D grid. Winds, heat transfer, solar radiation, relative humidity, phase changes of water and surface hydrology are calculated within each grid cell, and the interactions with neighboring cells are used to calculate atmospheric properties in the future.

Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.

Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed for significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.

Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.

A more fundamental problem lies in the chaotic nature of the partial differential equations that govern the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models.

History

The ENIAC main control panel at the Moore School of Electrical Engineering operated by Betty Jennings and Frances Bilas.

The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations. In 1954, Carl-Gustaf Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use). Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips' work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.

As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts.

The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible.

Initialization

A WP-3D Orion weather reconnaissance aircraft in flight.
Weather reconnaissance aircraft, such as this WP-3D Orion, provide data that is then used in numerical weather forecasts.

The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to 1 kilometer (0.6 mi) globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast.
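As an illustration of the objective-analysis step, the sketch below implements a hypothetical single-pass Cressman-style successive-correction scheme in Python: each point of a first-guess (background) field on a regular grid is nudged toward nearby observations with weights that decay with distance. Operational data assimilation is far more sophisticated; the function, grid, and numbers here are invented for illustration.

    import numpy as np

    def cressman_analysis(grid_x, grid_y, background, obs_x, obs_y, obs_val, radius):
        """Blend irregularly spaced observations into a first-guess field
        on a regular grid (single-pass Cressman successive correction)."""
        analysis = background.copy()
        for j in range(grid_y.size):
            for i in range(grid_x.size):
                d2 = (obs_x - grid_x[i]) ** 2 + (obs_y - grid_y[j]) ** 2
                # weights fall from 1 at the grid point to 0 at the influence radius
                w = np.where(d2 < radius**2, (radius**2 - d2) / (radius**2 + d2), 0.0)
                if w.sum() > 0.0:
                    analysis[j, i] += np.sum(w * (obs_val - background[j, i])) / w.sum()
        return analysis

    # toy case: one 281 K observation over a uniform 280 K first guess
    gx = gy = np.linspace(0.0, 100.0, 11)            # grid coordinates (km)
    bg = np.full((11, 11), 280.0)                    # background temperature (K)
    ana = cressman_analysis(gx, gy, bg, np.array([50.0]),
                            np.array([50.0]), np.array([281.0]), radius=30.0)
    print(ana[5, 5])   # the grid point nearest the observation moves toward 281 K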

A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commercial aircraft provide pilot reports along their routes, and ships provide reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.

Computation

A prognostic chart of the North American continent provides geopotential heights, temperatures, and wind velocities at regular intervals. The values are taken at the altitude corresponding to the 850-millibar pressure surface.
A prognostic chart of the 96-hour forecast of 850 mbar geopotential height and temperature from the Global Forecast System

An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical.

These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step. This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future, while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future, and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. The visual output produced by a model solution is known as a prognostic chart, or prog.
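The loop structure described above is easy to sketch. The toy Python example below advects a disturbance with a 1-D upwind finite difference, a hypothetical stand-in for a full model: rates of change are computed from the current state, the state is stepped forward, and the time step is tied to the grid spacing through the CFL stability condition (information must not cross more than one grid cell per step).

    import numpy as np

    # 1-D advection of a temperature-like field at constant wind speed,
    # using an upwind finite difference in space and forward Euler in time.
    nx, dx = 200, 5000.0            # 200 grid points, 5 km spacing
    u = 20.0                        # wind speed (m/s)
    dt = 0.8 * dx / u               # time step from the CFL condition (Courant number 0.8)

    x = np.arange(nx) * dx
    field = np.exp(-((x - 2.0e5) / 3.0e4) ** 2)   # initial Gaussian disturbance

    t, t_end = 0.0, 6 * 3600.0      # integrate six hours into the "future"
    while t < t_end:
        # rate of change computed from the current state (upwind difference, periodic domain)
        dfdx = (field - np.roll(field, 1)) / dx
        field = field - u * dt * dfdx      # one time step of the prediction
        t += dt

    print(f"time step = {dt:.0f} s; disturbance moved {u * t_end / 1000:.0f} km")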

Parameterization

Field of cumulus clouds, which are parameterized since they are too small to be explicitly included within numerical weather prediction

Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides that are between 5 kilometers (3 mi) and 300 kilometers (200 mi) in length. A typical cumulus cloud has a scale of less than 1 kilometer (0.6 mi), and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized by schemes of varying sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated, then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sizes between 5 and 25 kilometers (3 and 16 mi) can explicitly represent convective clouds, although they need to parameterize cloud microphysics, which occurs at smaller scales. The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. Sub-grid-scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical value of relative humidity less than 100%, reflecting the sub-grid-scale variation that occurs in the real world.
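To make the last point concrete, the sketch below diagnoses a sub-grid cloud fraction from gridbox-mean relative humidity using a Sundqvist-type formula, one common choice of scheme; the critical value of 0.8 is an illustrative assumption, not a number from this article.

    def cloud_fraction(rel_humidity, rh_crit=0.8):
        """Diagnose sub-grid cloud fraction from gridbox-mean relative humidity.

        Below rh_crit no cloud forms; above it, cloud fraction increases
        smoothly and reaches 1 at saturation (RH = 1). The square root gives
        partial cloud cover well before the whole box saturates, mimicking
        sub-grid humidity variations.
        """
        if rel_humidity <= rh_crit:
            return 0.0
        if rel_humidity >= 1.0:
            return 1.0
        return 1.0 - ((1.0 - rel_humidity) / (1.0 - rh_crit)) ** 0.5

    for rh in (0.7, 0.85, 0.95, 1.0):
        print(rh, round(cloud_fraction(rh), 2))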

The amount of solar radiation reaching the ground, as well as the formation of cloud droplets, occurs on the molecular scale, and so these processes must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. Parameterization is likewise applied to the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and the type of sea ice found near the ocean's surface. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.

Domains

A sigma coordinate system is shown. The lines of equal sigma values follow the terrain at the bottom, and gradually smooth out towards the top of the atmosphere.
A cross-section of the atmosphere over terrain with a sigma-coordinate representation shown. Mesoscale models divide the atmosphere vertically using representations similar to the one shown here.

The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to resolve explicitly smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain (boundary conditions) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as errors attributable to the regional model itself.

Coordinate systems

Horizontal coordinates

Horizontal position may be expressed directly in geographic coordinates (latitude and longitude) for global models, or in map-projection planar coordinates for regional models. For its global ICON model (icosahedral non-hydrostatic global circulation model), the German weather service uses a grid based on a regular icosahedron, whose basic cells are triangles rather than the four-cornered cells of a traditional latitude-longitude grid. The advantage is that, unlike latitude-longitude cells, the cells are everywhere on the globe roughly the same size; the disadvantage is that the equations on this non-rectangular grid are more complicated.

Vertical coordinates

The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height (z) as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (about 5,500 m (18,000 ft)) level, and thus was essentially two-dimensional. High-resolution models—also called mesoscale models—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates. This coordinate system receives its name from the independent variable σ used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain.
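A minimal sketch of the coordinate itself, under the common definition σ = (p − p_top) / (p_surface − p_top); the exact scaling varies from model to model:

    def sigma(p, p_surface, p_top=0.0):
        """Normalized pressure (sigma) coordinate: 1 at the surface, 0 at the
        model top, terrain-following because p_surface varies with topography."""
        return (p - p_top) / (p_surface - p_top)

    # the 500 hPa level sits at different sigma values over a plain and a plateau
    print(sigma(500.0, p_surface=1000.0))  # 0.5 over low terrain
    print(sigma(500.0, p_surface=700.0))   # ~0.71 over high terrain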

Model output statistics

Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s.

Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds.
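In its simplest form, MOS is a regression of observed weather against model output. The sketch below fits a hypothetical linear correction for station temperature; the training numbers are invented, and operational MOS uses many predictors and long training records.

    import numpy as np

    # training data: raw model 2 m temperature forecasts and the temperatures
    # actually observed at one station (invented numbers for illustration)
    model_t = np.array([271.0, 275.0, 280.0, 284.0, 290.0, 295.0])
    obs_t   = np.array([272.5, 276.0, 280.5, 284.0, 288.5, 292.0])

    # least-squares fit obs ≈ a * model + b corrects the model's local bias
    a, b = np.polyfit(model_t, obs_t, deg=1)

    new_forecast = 288.0
    print(f"raw model: {new_forecast} K, MOS-corrected: {a * new_forecast + b:.1f} K")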

Ensembles

Two images are shown. The top image provides three potential tracks that could have been taken by Hurricane Rita. Contours over the coast of Texas correspond to the sea-level air pressure predicted as the storm passed. The bottom image shows an ensemble of track forecasts produced by different weather models for the same hurricane.
Top: Weather Research and Forecasting model (WRF) simulation of Hurricane Rita (2005) tracks. Bottom: the spread of an NHC multi-model ensemble forecast.

In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. These uncertainties limit forecast model accuracy to about five or six days into the future.
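Lorenz's result is easy to reproduce. The sketch below integrates his classic 1963 three-variable system twice, from initial states differing by one part in a million, and prints the growing separation; the forward-Euler integrator and step size are illustrative choices.

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        """One forward-Euler step of the Lorenz (1963) equations."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-6, 0.0, 0.0])   # tiny "observation error" in one variable

    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 1000 == 0:
            # separation grows by orders of magnitude until it saturates
            print(step, np.linalg.norm(a - b))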

Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.

Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding. The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are produced using a Kalman filter. There are 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS).

In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; this problem becomes particularly severe for forecasts of the weather about ten days in advance. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general. Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6–0.7. The relationship between ensemble spread and forecast skill varies substantially depending on such factors as the forecast model and the region for which the forecast is made.
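The basic diagnostics are simple to compute. Given member forecasts of one variable at one location (invented numbers below, with spread made to grow with lead time), the ensemble mean and standard deviation are what a meteogram displays:

    import numpy as np

    # hypothetical 24-member ensemble forecast of 2 m temperature (K) at one
    # station; the spread is made to grow with forecast lead time for illustration
    members = np.stack([285.0 + np.random.default_rng(day).normal(0.0, 0.5 * day, 24)
                        for day in range(1, 8)])

    ens_mean = members.mean(axis=1)   # the "best guess" shown on a meteogram
    spread = members.std(axis=1)      # dispersion of the members around it

    for day, (m, s) in enumerate(zip(ens_mean, spread), start=1):
        print(f"day {day}: ensemble mean {m:.1f} K, spread {s:.2f} K")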

In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single model-based approach. Models within a multi-model ensemble can be adjusted for their various biases, which is a process known as superensemble forecasting. This type of forecast significantly reduces errors in model output.

Applications

Air quality modeling

Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport, or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition. In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts.
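A classic minimal illustration of the transport-and-diffusion calculation is the steady-state Gaussian plume for a single elevated source, sketched below. Real air quality models solve full three-dimensional transport and chemistry; the dispersion parameters here are rough illustrative values for one downwind distance.

    import numpy as np

    def gaussian_plume(q, u, y, z, stack_h, sigma_y, sigma_z):
        """Steady-state concentration downwind of a point source.

        q: emission rate (g/s), u: mean wind speed (m/s), stack_h: release
        height (m). sigma_y/sigma_z are the horizontal/vertical dispersion
        widths (m), which in practice grow with downwind distance and
        atmospheric stability. The (z + stack_h) image term reflects the
        plume off the ground.
        """
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - stack_h)**2 / (2 * sigma_z**2)) +
                    np.exp(-(z + stack_h)**2 / (2 * sigma_z**2)))
        return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # ground-level concentration roughly 1 km directly downwind of a 50 m stack
    print(gaussian_plume(q=100.0, u=5.0, y=0.0, z=0.0,
                         stack_h=50.0, sigma_y=70.0, sigma_z=35.0))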

Climate modeling

A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved.

Ocean surface modeling

A wind and wave forecast for the North Atlantic Ocean. Two areas of high waves are identified: One west of the southern tip of Greenland, and the other in the North Sea. Calm seas are forecast for the Gulf of Mexico. Wind barbs show the expected wind strengths and directions at regularly spaced intervals over the North Atlantic.
NOAA Wavewatch III 120-hour wind and wave forecast for the North Atlantic

The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface.
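For a single spectral component in one dimension, the transport equation reduces to advection of wave energy at the group velocity with wind input and dissipation as source terms. The sketch below is that hypothetical reduced problem, not an operational wave model such as Wavewatch III; all constants are illustrative.

    import numpy as np

    # 1-D transport of wave energy E for one spectral component:
    #   dE/dt + cg * dE/dx = wind input - dissipation
    nx, dx = 100, 10_000.0           # 100 cells, 10 km apart
    cg, dt = 8.0, 600.0              # group velocity (m/s), time step (s)

    energy = np.zeros(nx)
    wind_input = np.zeros(nx)
    wind_input[20:40] = 1e-4         # a storm region feeding energy into the waves

    for _ in range(100):             # ~17 hours of simulated time
        dEdx = (energy - np.roll(energy, 1)) / dx    # upwind difference (periodic)
        dissipation = 1e-5 * energy                  # crude whitecapping loss
        energy += dt * (-cg * dEdx + wind_input - dissipation)

    print(f"peak energy {energy.max():.2f} near cell {energy.argmax()}, "
          "at the downwind edge of the storm region")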

Tropical cyclone forecasting

Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models.

In 1978, the first hurricane-tracking model based on atmospheric dynamics—the movable fine-mesh (MFM) model—began operating. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance made possible by increased computational power, it was not until the 1980s that numerical weather prediction showed skill, and not until the 1990s that it consistently outperformed statistical or simple dynamical models. Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill than dynamical guidance.

Wildfire modeling

A simple wildfire propagation model

On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion. When moisture is present, or when enough heat is being carried away from the fiber, charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough—and/or heating rates high enough—for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere.

A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport, led to reaction–diffusion systems of partial differential equations. More complex models join numerical weather models or computational fluid dynamics models with a wildfire component which allow the feedback effects between the fire and the atmosphere to be estimated. The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions under 1 kilometer (0.6 mi), forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally. Although models such as Los Alamos' FIRETEC solve for the concentrations of fuel and oxygen, the computational grid cannot be fine enough to resolve the combustion reaction, so approximations must be made for the temperature distribution within each grid cell, as well as for the combustion reaction rates themselves.
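The reaction-diffusion formulation can be sketched in one dimension: a temperature-like variable diffuses, fuel burns where the temperature exceeds an ignition threshold, and burning releases heat, so a front propagates. All constants below are illustrative, and the model is a toy, not FIRETEC or any operational system.

    import numpy as np

    # 1-D reaction-diffusion toy: heat diffuses, fuel burns where it is hot,
    # and burning releases more heat, so an ignition front propagates.
    nx, dx, dt = 200, 1.0, 0.01
    kappa, burn_rate, heat_yield, ignition = 1.0, 5.0, 10.0, 0.5

    temp = np.zeros(nx)
    temp[:5] = 2.0                    # ignite the left edge
    fuel = np.ones(nx)

    for _ in range(500):
        # diffusion of heat (periodic boundaries via np.roll)
        lap = (np.roll(temp, 1) - 2 * temp + np.roll(temp, -1)) / dx**2
        # reaction: fuel burns only above the ignition temperature
        burning = burn_rate * fuel * np.where(temp > ignition, temp - ignition, 0.0)
        temp += dt * (kappa * lap + heat_yield * burning - 0.2 * temp)  # last term: cooling
        fuel -= dt * burning
        np.clip(fuel, 0.0, 1.0, out=fuel)

    print(f"cells burnt so far: {np.sum(fuel < 0.5)}")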

 

Climate model

From Wikipedia, the free encyclopedia
Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry. To “run” a model, scientists divide the planet into a 3-dimensional grid, apply the basic equations, and evaluate the results. Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid cell and evaluate interactions with neighboring points.

Numerical climate models use quantitative methods to simulate the interactions of the important drivers of climate, including atmosphere, oceans, land surface and ice. They are used for a variety of purposes, from study of the dynamics of the climate system to projections of future climate. Climate models may also be qualitative (i.e. not numerical) models, or largely descriptive narratives of possible futures.

Quantitative climate models take account of incoming energy from the sun as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing long wave (far) infrared electromagnetic radiation. Any imbalance results in a change in temperature.

Quantitative models vary in complexity:

  • A simple radiant heat transfer model treats the earth as a single point and averages outgoing energy. This can be expanded vertically (radiative-convective models) and/or horizontally
  • Finally, (coupled) atmosphere–ocean–sea ice global climate models solve the full equations for mass and energy transfer and radiant exchange.
  • Other types of modelling can be interlinked, such as land use, in Earth System Models, allowing researchers to predict the interaction between climate and ecosystems.

Box models

Schematic of a simple box model used to illustrate fluxes in geochemical cycles, showing a source (Q), sink (S) and reservoir (M)

Box models are simplified versions of complex systems, reducing them to boxes (or reservoirs) linked by fluxes. The boxes are assumed to be mixed homogeneously. Within a given box, the concentration of any chemical species is therefore uniform. However, the abundance of a species within a given box may vary as a function of time due to the input to (or loss from) the box or due to the production, consumption or decay of this species within the box.

Simple box models, i.e. box models with a small number of boxes whose properties (e.g. their volume) do not change with time, are often useful to derive analytical formulas describing the dynamics and steady-state abundance of a species. More complex box models are usually solved using numerical techniques.
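As a minimal example, a single well-mixed box with a constant source Q and a first-order sink S = kM has the analytic steady state M = Q/k; the sketch below integrates the same box numerically, which is the approach that generalizes to many coupled boxes. The numbers are illustrative.

    # One-box model: dM/dt = Q - k * M, with source Q and first-order sink.
    Q = 2.0        # input flux (mass per year)
    k = 0.1        # sink rate constant (per year), so residence time = 1/k = 10 yr
    M = 0.0        # reservoir starts empty
    dt = 0.1       # time step (years)

    for step in range(int(200 / dt)):   # integrate 200 years, well past equilibration
        M += dt * (Q - k * M)

    print(f"numerical M = {M:.2f}, analytic steady state Q/k = {Q / k:.2f}")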

Box models are used extensively to model environmental systems or ecosystems and in studies of ocean circulation and the carbon cycle. They are instances of a multi-compartment model.

Zero-dimensional models

A very simple model of the radiative equilibrium of the Earth is

$(1 - a)\, S\, \pi r^2 = 4 \pi r^2\, \epsilon \sigma T^4$

where

  • the left hand side represents the incoming energy from the Sun
  • the right hand side represents the outgoing energy from the Earth, calculated from the Stefan–Boltzmann law assuming a model-fictive temperature, T, sometimes called the 'equilibrium temperature of the Earth', that is to be found,

and

  • S is the solar constant, the incoming solar radiation per unit area: about 1367 W·m⁻²
  • a is the Earth's average albedo, measured to be 0.3
  • r is Earth's radius: approximately 6.371×10⁶ m
  • π is the mathematical constant (3.141...)
  • σ is the Stefan–Boltzmann constant: approximately 5.67×10⁻⁸ W·m⁻²·K⁻⁴ (i.e. J·K⁻⁴·m⁻²·s⁻¹)
  • ε is the effective emissivity of earth, about 0.612

The constant πr² can be factored out, giving

$(1 - a)\, S = 4 \epsilon \sigma T^4$

Solving for the temperature,

$T = \sqrt[4]{\dfrac{(1 - a)\, S}{4 \epsilon \sigma}}$

This yields an apparent effective average earth temperature of 288 K (15 °C; 59 °F). This is because the above equation represents the effective radiative temperature of the Earth (including the clouds and atmosphere).

This very simple model is quite instructive. For example, it easily determines the effect on average earth temperature of changes in solar constant or change of albedo or effective earth emissivity.
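A short numerical version of the model, using the constants listed above, makes such experiments easy to run (a sketch, not a validated climate tool):

    # Zero-dimensional energy balance: (1 - a) * S = 4 * epsilon * sigma * T^4
    SIGMA = 5.67e-8      # Stefan-Boltzmann constant (W m^-2 K^-4)

    def equilibrium_temperature(S=1367.0, albedo=0.3, emissivity=0.612):
        return ((1.0 - albedo) * S / (4.0 * emissivity * SIGMA)) ** 0.25

    print(equilibrium_temperature())                 # ~288 K, as in the text
    print(equilibrium_temperature(albedo=0.32))      # effect of a brighter Earth
    print(equilibrium_temperature(S=1367.0 * 0.99))  # effect of a 1% dimmer Sun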

The average emissivity of the earth is readily estimated from available data. The emissivities of terrestrial surfaces are all in the range of 0.96 to 0.99 (except for some small desert areas which may be as low as 0.7). Clouds, however, which cover about half of the earth's surface, have an average emissivity of about 0.5 (which must be reduced by the fourth power of the ratio of cloud absolute temperature to average earth absolute temperature) and an average cloud temperature of about 258 K (−15 °C; 5 °F). Taking all this properly into account results in an effective earth emissivity of about 0.64 (earth average temperature 285 K (12 °C; 53 °F)).
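The arithmetic behind that estimate can be made explicit, as a sketch following the weighting just described (surface and cloud values as quoted above):

    # Effective emissivity as an area-weighted mean of surface and cloud
    # contributions, with the cloud term scaled by (T_cloud / T_earth)^4
    # as described in the paragraph above.
    eps_surface, eps_cloud = 0.97, 0.5     # typical surface value; cloud value from text
    t_cloud, t_earth = 258.0, 285.0        # temperatures (K) from text
    cloud_cover = 0.5                      # clouds cover about half the planet

    eps_eff = ((1.0 - cloud_cover) * eps_surface
               + cloud_cover * eps_cloud * (t_cloud / t_earth) ** 4)
    print(round(eps_eff, 2))   # ~0.65, close to the 0.64 quoted above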

This simple model readily determines the effect of changes in solar output or change of earth albedo or effective earth emissivity on average earth temperature. It says nothing, however, about what might cause these things to change. Zero-dimensional models do not address the temperature distribution on the earth or the factors that move energy about the earth.

Radiative-convective models

The zero-dimensional model above, using the solar constant and given average earth temperature, determines the effective earth emissivity of long wave radiation emitted to space. This can be refined in the vertical to a one-dimensional radiative-convective model, which considers two processes of energy transport:

  • upwelling and downwelling radiative transfer through atmospheric layers that both absorb and emit infrared radiation
  • upward transport of heat by convection (especially important in the lower troposphere).

The radiative-convective models have advantages over the simple model: they can determine the effects of varying greenhouse gas concentrations on effective emissivity and therefore the surface temperature. But added parameters are needed to determine local emissivity and albedo and address the factors that move energy about the earth.

Effect of ice-albedo feedback on global sensitivity in a one-dimensional radiative-convective climate model.

Higher-dimension models

The zero-dimensional model may be expanded to consider the energy transported horizontally in the atmosphere. This kind of model may well be zonally averaged. This model has the advantage of allowing a rational dependence of local albedo and emissivity on temperature – the poles can be allowed to be icy and the equator warm – but the lack of true dynamics means that horizontal transports have to be specified.

EMICs (Earth-system models of intermediate complexity)

Depending on the nature of questions asked and the pertinent time scales, there are, on the one extreme, conceptual, more inductive models, and, on the other extreme, general circulation models operating at the highest spatial and temporal resolution currently feasible. Models of intermediate complexity bridge the gap. One example is the Climber-3 model. Its atmosphere is a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of half a day; the ocean is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.

GCMs (global climate models or general circulation models)

General Circulation Models (GCMs) discretise the equations for fluid motion and energy transfer and integrate these over time. Unlike simpler models, which make mixing assumptions, GCMs divide the atmosphere and/or oceans into grids of discrete "cells", which represent computational units. Processes internal to a cell, such as convection, that occur on scales too small to be resolved directly are parameterised at the cell level, while other functions govern the interface between cells.

Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models."

Research and development

There are three major types of institution where climate models are developed, implemented and used: national meteorological services, universities, and national and international research laboratories.

The World Climate Research Programme (WCRP), hosted by the World Meteorological Organization (WMO), coordinates research activities on climate modelling worldwide.

A 2012 U.S. National Research Council report discussed how the large and diverse U.S. climate modeling enterprise could evolve to become more unified. Efficiencies could be gained by developing a common software infrastructure shared by all U.S. climate researchers, and holding an annual climate modeling forum, the report found.

 

Climate system

From Wikipedia, the free encyclopedia
The five components of the climate system all interact.

Earth's climate arises from the interaction of five major climate system components: the atmosphere (air), the hydrosphere (water), the cryosphere (ice and permafrost), the lithosphere (earth's upper rocky layer) and the biosphere (living things). Climate is the average weather, typically over a period of 30 years, and is determined by a combination of processes in the climate system, such as ocean currents and wind patterns. Circulation in the atmosphere and oceans is primarily driven by solar radiation and transports heat from the tropical regions to regions that receive less energy from the Sun. The water cycle also moves energy throughout the climate system. In addition, different chemical elements, necessary for life, are constantly recycled between the different components.

The climate system can change due to internal variability and external forcings. These external forcings can be natural, such as variations in solar intensity and volcanic eruptions, or caused by humans. Accumulation of heat-trapping greenhouse gases, mainly being emitted by people burning fossil fuels, is causing global warming. Human activity also releases cooling aerosols, but their net effect is far less than that of greenhouse gases. Changes can be amplified by feedback processes in the different climate system components.

Components of the climate system

The atmosphere envelops the earth and extends hundreds of kilometres from the surface. It consists mostly of inert nitrogen (78%), oxygen (21%) and argon (0.9%). Some trace gases in the atmosphere, such as water vapour and carbon dioxide, are the gases most important for the workings of the climate system, as they are greenhouse gases which allow visible light from the Sun to penetrate to the surface, but block some of the infra-red radiation the Earth's surface emits to balance the Sun's radiation. This causes surface temperatures to rise. The hydrological cycle is the movement of water through the atmosphere. Not only does the hydrological cycle determine patterns of precipitation, it also has an influence on the movement of energy throughout the climate system.

The hydrosphere proper contains all the liquid water on Earth, with most of it contained in the world's oceans. The ocean covers 71% of Earth's surface to an average depth of nearly 4 kilometres (2.5 miles), and can hold substantially more heat than the atmosphere. It contains seawater with a salt content of about 3.5% on average, but this varies spatially. Brackish water is found in estuaries and some lakes, and most freshwater, 2.5% of all water, is held in ice and snow.

The cryosphere contains all parts of the climate system where water is solid. This includes sea ice, ice sheets, permafrost and snow cover. Because there is more land in the Northern Hemisphere compared to the Southern Hemisphere, a larger part of that hemisphere is covered in snow. Both hemispheres have about the same amount of sea ice. Most frozen water is contained in the ice sheets on Greenland and Antarctica, which average about 2 kilometres (1.2 miles) in height. These ice sheets slowly flow towards their margins.

The Earth's crust, specifically mountains and valleys, shapes global wind patterns: vast mountain ranges form a barrier to winds and impact where and how much it rains. Land closer to open ocean has a more moderate climate than land farther from the ocean. For the purpose of modelling the climate, the land is often considered static as it changes very slowly compared to the other elements that make up the climate system. The position of the continents determines the geometry of the oceans and therefore influences patterns of ocean circulation. The locations of the seas are important in controlling the transfer of heat and moisture across the globe, and therefore, in determining global climate.

Lastly, the biosphere also interacts with the rest of the climate system. Vegetation is often darker or lighter than the soil beneath, so that more or less of the Sun's heat gets trapped in areas with vegetation.[18] Vegetation is good at trapping water, which is then taken up by its roots. Without vegetation, this water would run off to the closest rivers or other water bodies. Water taken up by plants instead evaporates, contributing to the hydrological cycle. Precipitation and temperature influence the distribution of different vegetation zones. Small phytoplankton in seawater assimilate almost as much carbon through their growth as land plants do from the atmosphere. While humans are technically part of the biosphere, they are often treated as a separate component of Earth's climate system, the anthroposphere, because of humans' large impact on the planet.

Flows of energy, water and elements

Earth's atmospheric circulation is driven by the energy imbalance between the equator and the poles. It is further influenced by the rotation of Earth around its own axis.

Energy and general circulation

The climate system receives energy from the Sun, and to a far lesser extent from the Earth's core, as well as tidal energy from the Moon. The Earth gives off energy to outer space in two forms: it directly reflects a part of the radiation of the Sun and it emits infra-red radiation as black-body radiation. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determines Earth's energy budget. When the total of incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling.

More energy reaches the tropics than the polar regions and the subsequent temperature difference drives the global circulation of the atmosphere and oceans. Air rises when it warms, flows polewards and sinks again when it cools, returning to the equator. Due to the conservation of angular momentum, the Earth's rotation diverts the air to the right in the Northern Hemisphere and to the left in the Southern Hemisphere, thus forming distinct atmospheric cells. Monsoons, seasonal changes in wind and precipitation that occur mostly in the tropics, form because land masses heat up more easily than the ocean. The temperature difference induces a pressure difference between land and ocean, driving a steady wind.

Ocean water that has more salt has a higher density and differences in density play an important role in ocean circulation. The thermohaline circulation transports heat from the tropics to the polar regions. Ocean circulation is further driven by the interaction with wind. The salt component also influences the freezing point temperature. Vertical movements can bring up colder water to the surface in a process called upwelling, which cools down the air above.

Hydrological cycle

The hydrological cycle or water cycle describes how water is constantly moved between the surface of the Earth and the atmosphere. Plants evapotranspire, and sunlight evaporates water from oceans and other water bodies, leaving behind salt and other minerals. The evaporated freshwater later rains back onto the surface. Precipitation and evaporation are not evenly distributed across the globe, with some regions such as the tropics having more rainfall than evaporation, and others having more evaporation than rainfall. The evaporation of water requires substantial quantities of energy, whereas a lot of heat is released during condensation. This latent heat is the primary source of energy in the atmosphere.

Biochemical cycles

Carbon is constantly transported between the different elements of the climate system: fixed by living creatures and transported through the ocean and atmosphere.

Chemical elements, vital for life, are constantly cycled through the different components of the climate system. The carbon cycle is directly important for climate as it determines the concentrations of two important greenhouse gases in the atmosphere: CO2 and methane. In the fast part of the carbon cycle, plants take up carbon dioxide from the atmosphere using photosynthesis; this is later re-emitted by the breathing of living creatures. As part of the slow carbon cycle, volcanoes release CO2 by degassing, releasing carbon dioxide from the Earth's crust and mantle. As CO2 in the atmosphere makes rain a bit acidic, this rain can slowly dissolve some rocks, a process known as weathering. The minerals that are released in this way, transported to the sea, are used by living creatures whose remains can form sedimentary rocks, bringing the carbon back to the lithosphere.

The nitrogen cycle describes the flow of active nitrogen. As atmospheric nitrogen is inert, micro-organisms first have to convert this to an active nitrogen compound in a process called fixing nitrogen, before it can be used as a building block in the biosphere. Human activities play an important role in both carbon and nitrogen cycles: the burning of fossil fuels has displaced carbon from the lithosphere to the atmosphere, and the use of fertilizers has vastly increased the amount of available fixed nitrogen.

Changes within the climate system

Climate is constantly varying, on timescales that range from seasons to the lifetime of the Earth. Changes caused by the system's own components and dynamics are called internal climate variability. The system can also experience external forcing from phenomena outside of the system (e.g. a change in Earth's orbit). Longer changes, usually defined as changes that persist for at least 30 years, are referred to as climate changes, although this phrase usually refers to the current global climate change. When the climate changes, the effects may build on each other, cascading through the other parts of the system in a series of climate feedbacks (e.g. albedo changes), producing many different effects (e.g. sea level rise).

Internal variability

Difference between normal December sea surface temperature [°C] and temperatures during the strong El Niño of 1997. El Niño typically brings wetter weather to Mexico and the United States.

Components of the climate system vary continuously, even without external pushes (external forcing). One example in the atmosphere is the North Atlantic Oscillation (NAO), which operates as an atmospheric pressure see-saw. The Portuguese Azores typically have high pressure, whereas there is often lower pressure over Iceland. The difference in pressure oscillates and this affects weather patterns across the North Atlantic region up to central Eurasia. For instance, the weather in Greenland and Canada is cold and dry during a positive NAO. Different phases of the North Atlantic oscillation can be sustained for multiple decades.

The ocean and atmosphere can also work together to spontaneously generate internal climate variability that can persist for years to decades at a time. Examples of this type of variability include the El Niño–Southern Oscillation, the Pacific decadal oscillation, and the Atlantic Multidecadal Oscillation. These variations can affect global average surface temperature by redistributing heat between the deep ocean and the atmosphere; but also by altering the cloud, water vapour or sea ice distribution, which can affect the total energy budget of the earth.

The oceanic aspects of these oscillations can generate variability on centennial timescales due to the ocean having hundreds of times more mass than the atmosphere, and therefore very high thermal inertia. For example, alterations to ocean processes such as thermohaline circulation play a key role in redistributing heat in the world's oceans. Understanding internal variability helped scientists to attribute recent climate change to greenhouse gases.

External climate forcing

On long timescales, the climate is determined mostly by how much energy is in the system and where it goes. When the Earth's energy budget changes, the climate follows. A change in the energy budget is called a forcing, and when the change is caused by something outside of the five components of the climate system, it is called an external forcing. Volcanoes, for example, result from deep processes within the earth that are not considered part of the climate system. Off-planet changes, such as solar variation and incoming asteroids, are also "external" to the climate system's five components, as are human actions.

The main quantity used to quantify and compare climate forcings is radiative forcing.

Incoming sunlight

The Sun is the predominant source of energy input to the Earth and drives atmospheric circulation. The amount of energy coming from the Sun varies on time scales ranging from the 11-year solar cycle to longer-term variations. While the solar cycle is too small to directly warm and cool Earth's surface, it does directly influence a higher layer of the atmosphere, the stratosphere, which may have an effect on the atmosphere near the surface.

Slight variations in the Earth's motion can cause large changes in the seasonal distribution of sunlight reaching the Earth's surface and how it is distributed across the globe, although the global, yearly averaged amount of sunlight changes very little. The three types of kinematic change are variations in Earth's eccentricity, changes in the tilt angle of Earth's axis of rotation, and precession of Earth's axis. Together these produce Milankovitch cycles, which affect climate and are notable for their correlation to glacial and interglacial periods.

Greenhouse gases

Greenhouse gases trap heat in the lower part of the atmosphere by absorbing longwave radiation. In the Earth's past, many processes contributed to variations in greenhouse gas concentrations. Currently, emissions by humans are the cause of increasing concentrations of some greenhouse gases, such as CO2, methane and N2O. The dominant contributor to the greenhouse effect is water vapour (~50%), with clouds (~25%) and CO2 (~20%) also playing an important role. When concentrations of long-lived greenhouse gases such as CO2 are increased and temperature rises, the amount of water vapour increases as well, so that water vapour and clouds are not seen as external forcings, but instead as feedbacks. Rock weathering is a very slow process that removes carbon from the atmosphere.

Aerosols

Liquid and solid particles in the atmosphere, collectively named aerosols, have diverse effects on the climate. Some primarily scatter sunlight and thereby cool the planet, while others absorb sunlight and warm the atmosphere. Indirect effects include the fact that aerosols can act as cloud condensation nuclei, stimulating cloud formation. Natural sources of aerosols include sea spray, mineral dust, meteorites and volcanoes, but human activities, such as setting wildfires and burning fossil fuels, also release aerosols into the atmosphere. Aerosols counteract a part of the warming effects of emitted greenhouse gases, but only until they fall back to the surface in a few years or less.

Atmospheric temperature from 1979 to 2010, determined by MSU NASA satellites: the effects of aerosols released by major volcanic eruptions (El Chichón and Pinatubo) are visible. El Niño is a separate event, caused by ocean variability.

Although volcanoes are technically part of the lithosphere, which itself is part of the climate system, volcanism is defined as an external forcing agent. On average, there are only several volcanic eruptions per century that influence Earth's climate for longer than a year by ejecting tons of SO2 into the stratosphere. The sulfur dioxide is chemically converted into aerosols that cause cooling by blocking a fraction of the sunlight reaching the Earth's surface. Small eruptions affect the atmosphere only subtly.

Land use and cover change

Changes in land cover, such as changes of water cover (e.g. rising sea level, drying up of lakes and outburst floods) or deforestation, particularly through human use of the land, can affect the climate. The reflectivity of the area can change, causing the region to capture more or less sunlight. In addition, vegetation interacts with the hydrological cycle, so that precipitation is also affected. Landscape fires release greenhouse gases into the atmosphere and release black carbon, which darkens snow, making it melt more easily.

Responses and feedbacks

The different elements of the climate system respond to external forcing in different ways. One important difference between the components is the speed at which they react to a forcing. The atmosphere typically responds within a couple of hours to weeks, while the deep ocean and ice sheets take centuries to millennia to reach a new equilibrium.

The initial response of a component to an external forcing can be damped by negative feedbacks and enhanced by positive feedbacks. For example, a significant decrease of solar intensity would quickly lead to a temperature decrease on Earth, which would then allow ice and snow cover to expand. The extra snow and ice have a higher albedo, or reflectivity, and therefore reflect more of the Sun's radiation back into space before it can be absorbed by the climate system as a whole; this in turn causes the Earth to cool down further.
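A toy zero-dimensional energy-balance model makes this amplification concrete. Everything below is an illustrative assumption (the effective emissivity standing in for the greenhouse effect and the ice-albedo ramp are made up for the sketch); it shows the mechanism, not a real climate model:

SIGMA = 5.670e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0         # solar constant (W m^-2)
EMISSIVITY = 0.61   # crude effective emissivity standing in for the greenhouse effect

def albedo(temp_k):
    """Assumed ice-albedo ramp: a colder planet has more ice and reflects more."""
    if temp_k <= 230.0:
        return 0.60
    if temp_k >= 290.0:
        return 0.30
    return 0.60 - 0.005 * (temp_k - 230.0)

def equilibrium(solar, albedo_fn, temp_k=288.0, steps=200):
    """Fixed-point iteration: absorbed sunlight balances emitted longwave radiation."""
    for _ in range(steps):
        absorbed = solar * (1.0 - albedo_fn(temp_k)) / 4.0
        temp_k = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
    return temp_k

base = equilibrium(S0, albedo)
dimmed = equilibrium(0.95 * S0, albedo)                        # feedback active
dimmed_fixed = equilibrium(0.95 * S0, lambda t: albedo(base))  # albedo frozen
print(base, dimmed, dimmed_fixed)

With these numbers, a 5% dimmer sun cools the planet by roughly 7.5 K when the ice-albedo feedback is active, but only by about 3.5 K when the albedo is held fixed: the feedback roughly doubles the response to the forcing.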

Geochemistry of carbon

From Wikipedia, the free encyclopedia

The geochemistry of carbon is the study of the transformations involving the element carbon within the systems of the Earth. To a large extent this study is organic geochemistry, but it also includes the very important carbon dioxide. Carbon is transformed by life, and moves between the major phases of the Earth, including the water bodies, atmosphere, and the rocky parts. Carbon is important in the formation of organic mineral deposits such as coal, petroleum and natural gas. Most carbon is cycled through the atmosphere into living organisms and then respired back into the atmosphere. However, an important part of the carbon cycle involves the trapping of living matter in sediments. The carbon then becomes part of a sedimentary rock when lithification happens. Human activity, or natural processes such as weathering, underground life, or groundwater, can return the carbon from sedimentary rocks to the atmosphere. From that point it can be transformed in the rock cycle into metamorphic rocks, or melted into igneous rocks. Carbon can return to the surface of the Earth by volcanoes or via uplift in tectonic processes, and is returned to the atmosphere via volcanic gases. Carbon undergoes transformation in the mantle under pressure to diamond and other minerals, and also exists in the Earth's outer core in solution with iron, and may also be present in the inner core.

Carbon can form a huge variety of stable compounds and is an essential component of living matter. Living organisms can survive only within a limited range of conditions on the Earth, bounded by temperature and the existence of liquid water. The potential habitability of other planets or moons can likewise be assessed by the existence of liquid water.

Carbon makes up only 0.08% of the combined lithosphere, hydrosphere, and atmosphere, yet it is the twelfth most common element there. In the rock of the lithosphere, carbon commonly occurs as carbonate minerals containing calcium or magnesium. It is also found as the fossil fuels: coal, petroleum, and natural gas. Native forms of carbon are much rarer, requiring pressure to form; pure carbon exists as graphite or diamond.

The deeper parts of the Earth, such as the mantle, are very hard to investigate. Few samples are available, in the form of uplifted rocks or xenoliths. Even fewer remain in the state they were in at depth, where the pressure and temperature are much higher. Some diamonds retain inclusions held at the pressures at which they formed, though the temperature at the surface is much lower. Iron meteorites may represent samples of the core of an asteroid, but such a core would have formed under different conditions from the Earth's core. Therefore, experimental studies are conducted in which minerals or substances are compressed and heated to determine what happens under conditions similar to the planetary interior.

The two common isotopes of carbon are stable. On Earth, carbon-12 (12C) is by far the most common, at 98.894%. Carbon-13 (13C) is much rarer, averaging 1.106%. This percentage can vary slightly, and its value is important in isotope geochemistry, where it suggests the origin of the carbon.

Origins

Formation

Carbon can be produced in stars at least as massive as the Sun by the fusion of three helium-4 nuclei (4He + 4He + 4He → 12C), the triple-alpha process. In stars as massive as the Sun, carbon-12 is also converted, by fusion with protons, to carbon-13 and then on to nitrogen-14, as written out below. In more massive stars, two carbon nuclei can fuse to magnesium, or a carbon and an oxygen nucleus to silicon.
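Written out step by step in standard notation (the short-lived nitrogen-13 intermediate is supplied here from the standard CNO-cycle chain, and is not spelled out in the article itself):

\[
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} + {}^{4}\mathrm{He} \longrightarrow {}^{12}\mathrm{C} + \gamma
\]
\[
{}^{12}\mathrm{C} + {}^{1}\mathrm{H} \longrightarrow {}^{13}\mathrm{N} + \gamma, \qquad
{}^{13}\mathrm{N} \longrightarrow {}^{13}\mathrm{C} + e^{+} + \nu_{e}, \qquad
{}^{13}\mathrm{C} + {}^{1}\mathrm{H} \longrightarrow {}^{14}\mathrm{N} + \gamma
\]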

Astrochemistry

In molecular clouds, simple carbon molecules are formed, including carbon monoxide and dicarbon. Reactions of these simple carbon molecules with the trihydrogen cation yield carbon-containing ions that readily react to form larger organic molecules. Carbon compounds that exist as ions, or as isolated gas molecules in the interstellar medium, can condense onto dust grains. Carbonaceous dust grains consist mostly of carbon, and can stick together to form larger aggregates.

Earth formation

Meteorites and interplanetary dust show the composition of solid material at the start of the Solar System, as they have not been modified since its formation. Carbonaceous chondrites are meteorites with around 5% carbon compounds. Their composition resembles the Sun's, minus the very volatile elements such as hydrogen and the noble gases. The Earth is believed to have formed by the gravitational collapse of material like meteorites.

Important effects on the Earth in the early Hadean Eon include strong solar winds during the T-Tauri stage of the Sun. The Moon-forming impact caused major changes to the surface. Juvenile volatiles outgassed from the early molten surface of the Earth; these included carbon dioxide and carbon monoxide. The emissions probably did not include methane, but the Earth was probably free of molecular oxygen. The Late Heavy Bombardment was between 4.0 and 3.8 billion years ago (Ga). To start with, the Earth did not have a crust as it does today. Plate tectonics in its present form commenced about 2.5 Ga.

Early sedimentary rocks formed under water date to 3.8 Ga. Pillow lavas dating from 3.5 Ga prove the existence of oceans. Evidence of early life is given by fossils of stromatolites, and later by chemical tracers.

Organic matter continues to be added to the Earth from space via interplanetary dust, which also includes some interstellar particles. The amounts added to the Earth were around 60,000 tonnes per year about 4 Ga.

Isotope

Biological sequestration of carbon causes enrichment in carbon-12, so that substances originating from living organisms have a higher carbon-12 content. Due to the kinetic isotope effect, chemical reactions can happen faster with lighter isotopes, so photosynthesis fixes the lighter carbon-12 faster than carbon-13; lighter isotopes also diffuse across a biological membrane faster. Enrichment in carbon-13 is measured as δ13C (‰) = [(13C/12C)sample / (13C/12C)standard − 1] × 1000. The common standard for carbon is a belemnite from the Cretaceous Pee Dee Formation (PDB).
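A small worked example of this definition (the conventional PDB ratio of 0.0112372 and the sample ratio below are supplied here for illustration, not taken from the article):

R_PDB = 0.0112372  # conventional 13C/12C ratio of the PDB standard (assumed)

def delta13c_permil(r_sample, r_standard=R_PDB):
    """delta-13C in per mil, following the definition in the text above."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Biologically fixed carbon is depleted in 13C, so its delta-13C is negative:
print(delta13c_permil(0.0109))  # ~ -30 per mil, typical of plant-derived matter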

Stereoisomers

Complex molecules, in particular those containing carbon, can occur as stereoisomers. Under abiotic processes the different forms would be expected to be equally likely, but in carbonaceous chondrites this is not the case. The reasons for this are unknown.

Crust

The outer layer of the Earth, the crust, together with its outer coverings, contains about 10²⁰ kg of carbon. This is enough for each square metre of the surface to bear about 200 tonnes of carbon.
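A quick back-of-the-envelope check of that figure (the Earth's surface area is supplied here; the other numbers are as quoted above):

crust_carbon_kg = 1e20      # carbon in the crust and its outer layers, as quoted
earth_surface_m2 = 5.1e14   # Earth's total surface area, ~5.1 x 10^14 m^2
tonnes_per_m2 = crust_carbon_kg / earth_surface_m2 / 1000.0
print(tonnes_per_m2)        # ~196, i.e. roughly 200 tonnes of carbon per square metre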

Sedimentation

Carbon added to sedimentary rocks can take the form of carbonates or of organic carbon compounds. In order of source quantity, the organic carbon comes from phytoplankton, plants, bacteria and zooplankton, though terrestrial sediments may be mostly from higher plants, and some oxygen-deficient aquatic sediments mostly from bacteria. Fungi and animals make insignificant contributions. In the oceans, the main contributor of organic matter to sediments is plankton, either as dead fragments or as faecal pellets termed marine snow. Bacteria degrade this matter in the water column, and the amount surviving to the ocean floor is inversely proportional to the depth. It is accompanied by biominerals consisting of silicates and carbonates. The particulate organic matter in sediments consists of roughly 20% identifiable molecules and 80% material that cannot be analysed. Detritivores consume some of the fallen organic material. Aerobic bacteria and fungi also consume organic matter in the oxic surface parts of the sediment. Coarse-grained sediments are oxygenated to a depth of about half a metre, but fine-grained clays may have only a couple of millimetres exposed to oxygen. The organic matter in the oxygenated zone will become completely mineralized if it stays there long enough.

Deeper in sediments, where oxygen is exhausted, anaerobic biological processes continue at a slower rate. These include anaerobic mineralization making ammonium, phosphate and sulfide ions; fermentation making short-chain alcohols, acids or methylamines; acetogenesis making acetic acid; methanogenesis making methane; and sulfate, nitrite and nitrate reduction. Carbon dioxide and hydrogen are also outputs. Under fresh water, sulfate is usually very low, so methanogenesis is more important. Still other bacteria can convert methane back into living matter by oxidising it with other substrates. Bacteria can reside at great depths in sediments; over time, however, sedimentary organic matter becomes enriched in the indigestible components.

Deep bacteria may be lithotrophs, using hydrogen as an energy source and carbon dioxide as a carbon source.

In the oceans and other waters there is a great deal of dissolved organic material. It is several thousand years old on average, and is called gelbstoff ("yellow substance"), particularly in fresh waters. Much of this is tannins. The nitrogen-containing materials here appear to be amides, perhaps from the peptidoglycans of bacteria. Microorganisms have trouble consuming the high-molecular-weight dissolved substances, but quickly consume small molecules.

From terrestrial sources, black carbon produced by charring is an important component. Fungi are important decomposers in soil.

Macromolecules

Proteins are normally hydrolysed slowly even without enzymes or bacteria, with a half-life of 460 years, but can be preserved if they are desiccated, pickled or frozen. Being enclosed in bone also helps preservation. Over time the amino acids tend to racemize, and those with more functional groups are lost earlier. Protein will still degrade on the timescale of a million years. DNA degrades rapidly, lasting only about four years in water. Cellulose and chitin have a half-life in water at 25 °C of about 4.7 million years. Enzymes can accelerate this by a factor of 10¹⁷. About 10¹¹ tons of chitin are produced each year, but it is almost all degraded.
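To see what a 10¹⁷-fold enzymatic speed-up implies for the 4.7-million-year half-life quoted above, a rough conversion (this arithmetic is supplied here, not taken from the article):

half_life_years = 4.7e6
seconds_per_year = 3.156e7
uncatalysed_s = half_life_years * seconds_per_year  # ~1.5 x 10^14 seconds
catalysed_s = uncatalysed_s / 1e17                  # apply the enzymatic factor
print(uncatalysed_s, catalysed_s)  # ~1.5e14 s versus ~1.5e-3 s: milliseconds with enzymes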

Lignin is efficiently degraded only by fungi, the white rots and brown rots, which require oxygen.

Lipids are hydrolysed to fatty acids over long time periods. Plant cuticle waxes are very difficult to degrade, and may survive over geological time periods.

Preservation

More organic matter is preserved in sediments if there is high primary production or the sediment is fine-grained. A lack of oxygen helps preservation greatly, and that in turn is caused by a large supply of organic matter. Soil does not usually preserve organic matter; it would need to be acidified or waterlogged, as in a bog. Rapid burial ensures the material reaches an oxygen-free depth, but also dilutes the organic matter. A low-energy environment ensures the sediment is not stirred up and oxygenated. Salt marshes and mangroves meet some of these requirements, but unless sea level is rising they will not have a chance to accumulate much. Coral reefs are very productive, but are well oxygenated, and recycle everything before it is buried.

Sphagnum bog

In dead Sphagnum, sphagnan, a polysaccharide containing D-lyxo-5-hexosulouronic acid, is a major remaining substance. It makes the bog very acidic, so that bacteria cannot grow. Not only that, the plant ensures there is no available nitrogen. Holocellulose also absorbs any digestive enzymes present. Together these lead to major accumulation of peat under Sphagnum bogs.

Mantle

Earth's mantle is a significant reservoir of carbon, containing more than the crust, oceans, biosphere, and atmosphere put together. The figure is estimated very roughly at 10²² kg. The carbon concentration in the mantle is highly variable, differing by more than a factor of 100 between different parts.

The form carbon takes depends on its oxidation state, which depends on the oxygen fugacity of the environment. Carbon dioxide and carbonate are found where the oxygen fugacity is high. Lower oxygen fugacity results in diamond formation, first in eclogite, then peridotite, and lastly in fluid water mixtures. At even lower oxygen fugacity, methane is stable in contact with water, and even lower, metallic iron and nickel form along with carbides. Iron carbides include Fe3C and Fe7C3.

Minerals that contain carbon include calcite and its higher-density polymorphs. Other significant carbon minerals include magnesium and iron carbonates. Dolomite is stable above 100 km depth. Below 100 km, dolomite reacts with orthopyroxene (found in peridotite) to yield magnesite (a magnesium–iron carbonate). Below 200 km, carbon dioxide is reduced by ferrous iron (Fe2+), forming diamond and ferric iron (Fe3+). Even deeper, pressure-induced disproportionation of iron minerals produces more ferric iron and metallic iron. The metallic iron combines with carbon to form the mineral cohenite, with formula Fe3C; cohenite also contains some nickel substituting for iron. This form of carbon is called "carbide". Diamond forms in the mantle below 150 km depth, but because it is so durable, it can survive eruptions to the surface in kimberlites, lamproites, or ultramafic lamprophyres.

Xenoliths can come from the mantle, and different compositions come from different depths. Above 90 km (3.2 GPa) spinel peridotite occurs; below this, garnet peridotite is found.

Inclusions trapped in diamond can reveal the material and conditions much deeper in the mantle. Large gem diamonds are usually formed in the transition zone of the mantle (410 to 660 km deep) and crystallise from a molten iron–nickel–carbon solution that also contains sulfur and trace amounts of hydrogen, chromium, phosphorus and oxygen. Carbon atoms constitute about 12% of the melt (about 3% by mass). Inclusions of the crystallised metallic melt are sometimes trapped in diamonds. Diamond can be made to precipitate from the liquid metal by increasing the pressure or by adding sulfur.

Fluid inclusions in crystals from the mantle most often contain liquid carbon dioxide, but can also include carbon oxysulfide, methane and carbon monoxide.

Material is added from the crust by subduction. This includes the major carbon-containing sediments such as limestone or coal. Each year 2×10¹¹ kg of CO2 is transferred from the crust to the mantle by subduction (about 1.7 tonnes of carbon per second).
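Converting that subduction flux into a per-second carbon rate (the molar-mass conversion is supplied here):

co2_per_year_kg = 2e11         # CO2 subducted per year, as quoted above
carbon_fraction = 12.0 / 44.0  # mass fraction of carbon in CO2
seconds_per_year = 3.156e7
print(co2_per_year_kg * carbon_fraction / seconds_per_year)
# ~1.7e3 kg/s, i.e. about 1.7 tonnes of carbon per second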

Upwelling mantle material can add to the crust at mid-ocean ridges. Fluids can extract carbon from the mantle and erupt in volcanoes. At 330 km depth, a liquid consisting of carbon dioxide and water can form. It is highly corrosive, and dissolves incompatible elements from the solid mantle, including uranium, thorium, potassium, helium and argon. The fluids can then go on to cause metasomatism or reach the surface in carbonatite eruptions. The total mid-ocean-ridge and hot-spot volcanic emissions of carbon dioxide match the loss due to subduction: 2×10¹¹ kg of CO2 per year.

In slowly convecting mantle rocks, diamond that rises above 150 km will gradually turn into graphite or be oxidised to carbon dioxide or carbonate minerals.

Core

Earth's core is believed to be mostly an alloy of iron and nickel. Its density indicates that it also contains a significant amount of lighter elements. Elements such as hydrogen would be stable in the Earth's core; however, the conditions during the core's formation would not have been suitable for hydrogen's inclusion. Carbon is a very likely constituent of the core. Preferential partitioning of the carbon isotope 12C into the metallic core during its formation may explain why there seems to be more 13C at the surface and in the mantle of the Earth compared with other Solar System bodies (−5‰ compared with −20‰). The difference can also help constrain the carbon proportion of the core.

The outer core has a density around 11 g cm⁻³ and a mass of 1.3×10²⁴ kg. It contains roughly 10²² kg of carbon. Carbon dissolved in liquid iron affects the solution of other elements. Dissolved carbon changes lead from a siderophile to a lithophile; it has the opposite effect on tungsten and molybdenum, causing more tungsten or molybdenum to dissolve in the metallic phase. The measured amounts of these elements in rocks, compared with Solar System abundances, can be explained by a carbon content of 0.6% in the core.

The inner core is about 1,221 km in radius. It has a density of 13 g cm⁻³, a total mass of 9×10²² kg, and a surface area of 18,000,000 square kilometres. Experiments with mixtures under pressure and temperature attempt to reproduce the known properties of the inner and outer core. Carbides are among the first phases to precipitate from a molten metal mix, so the inner core may be mostly the iron carbides Fe7C3 or Fe3C. At atmospheric pressure (100 kPa) the iron–Fe3C eutectic point is at 4.1% carbon. This percentage decreases as pressure increases to around 50 GPa; above that pressure the percentage of carbon at the eutectic increases. The pressure on the inner core ranges from 330 GPa to 360 GPa at the centre of the Earth, and the temperature at the inner core surface is about 6000 K. The material of the inner core must be stable at the pressure and temperature found there, and denser than the outer core liquid. Extrapolations show that either Fe3C or Fe7C3 matches the requirements; Fe7C3 is 8.4% carbon, and Fe3C is 6.7% carbon. The inner core is growing by about 1 mm per year, adding about 18 cubic kilometres per year. This amounts to about 18×10¹² kg of carbon added to the inner core every year; in total it contains about 8×10²¹ kg of carbon.
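These growth figures can be cross-checked against each other (the radius, density and growth rate are as quoted above; the carbon fraction of ~7.4%, between the Fe3C and Fe7C3 values, is an assumption made for the check):

import math

radius_m = 1.221e6        # inner-core radius, as quoted
growth_m_per_year = 1e-3  # ~1 mm of new inner core per year
density_kg_m3 = 13000.0   # inner-core density, as quoted
carbon_fraction = 0.074   # assumed, between Fe3C (6.7%) and Fe7C3 (8.4%)

shell_volume_m3 = 4.0 * math.pi * radius_m ** 2 * growth_m_per_year
print(shell_volume_m3 / 1e9)  # ~18.7 km^3 of new inner core per year
print(shell_volume_m3 * density_kg_m3 * carbon_fraction)  # ~1.8e13 kg of carbon per year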

High pressure experimentation

In order to determine the fate of natural carbon-containing substances deep in the Earth, experiments have been conducted to see what happens when high pressures, and sometimes high temperatures, are applied. Such substances include carbon dioxide, carbon monoxide, graphite, methane and other hydrocarbons such as benzene, carbon dioxide–water mixtures, and carbonate minerals such as calcite, magnesium carbonate, or ferrous carbonate. Under very high pressures carbon may take on a higher coordination number than the four found in sp3 compounds like diamond, or the three found in carbonates. Perhaps carbon can substitute into silicates, or form a silicon oxycarbide. Carbides may also be possible.

Carbon

At 15 GPa, graphite changes to a hard transparent form that is not diamond. Diamond is very resistant to pressure, but at about 1 TPa (1,000 GPa) it transforms to a BC-8 form.

Carbides

Carbides are predicted to be more likely deeper in the mantle, as experiments have shown a much lower oxygen fugacity for high-pressure iron silicates. Cohenite remains stable to over 187 GPa, but is predicted to take a denser orthorhombic Cmcm form in the inner core.

Carbon dioxide

Below 0.3 GPa, carbon dioxide is stable at room temperature in the same form as dry ice. Over 0.5 GPa, carbon dioxide forms a number of different molecular solid forms. At pressures over 40 GPa and high temperatures, it forms a covalent solid containing CO4 tetrahedra, with the same structure as β-cristobalite; this is called phase V, or CO2-V. When CO2-V is subjected to higher temperatures or higher pressures, experiments show it breaks down to form diamond and oxygen. Along the mantle geotherm, carbon dioxide would be a liquid up to a pressure of 33 GPa, would adopt the solid CO2-V form from there to 43 GPa, and deeper than that would break down into diamond and fluid oxygen.

Carbonyls

Under high pressure, carbon monoxide forms a high-energy polycarbonyl covalent solid; however, this is not expected to be present inside the Earth.

Hydrocarbons

Under 1.59 GPa of pressure at 25 °C, methane converts to a cubic solid in which the molecules are rotationally disordered; over 5.25 GPa the molecules become locked into position and can no longer spin. Other hydrocarbons under high pressure have hardly been studied.

Carbonates

Calcite changes to calcite-II and calcite-III at pressures of 1.5 and 2.2 GPa. Siderite undergoes a chemical change at 10 GPa and 1800 K to form Fe4O5. Dolomite decomposes at 7 GPa and below 1000 °C to yield aragonite and magnesite; however, there are forms of iron-containing dolomite stable at higher pressures and temperatures. Over 130 GPa, aragonite transforms into sp3 tetrahedrally bonded carbon in a covalent network with a C222₁ structure. Magnesite can survive 80 GPa, but at more than 100 GPa (as at a depth of 1800 km) it changes to forms with three-membered rings of CO4 tetrahedra ((C3O9)⁶⁻). If iron is present in this mineral, at these pressures it will convert to magnetite and diamond. Melted carbonates with sp3 carbon are predicted to be very viscous.

Some minerals contain both silicate and carbonate, such as spurrite and tilleyite, but their high-pressure forms have not been studied. There have been attempts to make a silicon carbonate. Six-coordinated silicates mixed with carbonate should not exist on Earth, but may exist on more massive planets.

Operator (computer programming)

From Wikipedia, the free encyclopedia