
Monday, December 9, 2024

Atmospheric model

From Wikipedia, the free encyclopedia
A 96-hour forecast of 850 mbar geopotential height and temperature from the Global Forecast System

In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth (or other planetary body), or regional (limited-area), covering only part of the Earth. Atmospheric models also differ in how they compute vertical fluid motions; some types of models are thermotropic, barotropic, hydrostatic, and non-hydrostatic. These model types are differentiated by their assumptions about the atmosphere, which must balance computational speed with the model's fidelity to the atmosphere it is simulating.

Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.

Types

Thermotropic

The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated using the 500 mb (15 inHg) and 1,000 mb (30 inHg) geopotential height surfaces and the average thermal wind between them.

Barotropic

Barotropic models assume the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height. In other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper-level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow arctic highs) and warm-core lows (such as tropical cyclones). A barotropic model tries to solve a simplified form of atmospheric dynamics based on the assumption that the atmosphere is in geostrophic balance; that is, that the Rossby number of the air in the atmosphere is small. If the assumption is made that the atmosphere is divergence-free, the curl of the Euler equations reduces into the barotropic vorticity equation. This latter equation can be solved over a single layer of the atmosphere. Since the atmosphere at a height of approximately 5.5 kilometres (3.4 mi) is mostly divergence-free, the barotropic model best approximates the state of the atmosphere at a geopotential height corresponding to that altitude, which corresponds to the atmosphere's 500 mb (15 inHg) pressure surface.

Hydrostatic

Hydrostatic models filter out vertically moving acoustic waves from the vertical momentum equation, which significantly increases the time step that can be used within the model's run. This is known as the hydrostatic approximation. Hydrostatic models use either pressure or sigma-pressure vertical coordinates. Pressure coordinates intersect topography while sigma coordinates follow the contour of the land. The hydrostatic assumption is reasonable as long as the horizontal grid spacing is not too small; at fine resolutions, where vertical accelerations become significant, the assumption fails. Models which use the entire vertical momentum equation are known as nonhydrostatic. A nonhydrostatic model can be solved anelastically, meaning it solves the complete continuity equation for air assuming it is incompressible, or elastically, meaning it solves the complete continuity equation for air and is fully compressible. Nonhydrostatic models use altitude or sigma-altitude for their vertical coordinates. Altitude coordinates can intersect land while sigma-altitude coordinates follow the contours of the land.
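
As a rough illustration of the hydrostatic approximation, here is a minimal sketch that integrates the hydrostatic balance dp/dz = -ρg for an isothermal air column using the ideal gas law; the temperature, layer spacing, and surface pressure are arbitrary illustrative values rather than settings from any particular model.

import numpy as np

# Hydrostatic balance: the vertical momentum equation reduces to
# dp/dz = -rho * g, integrated here for an isothermal column with
# density from the ideal gas law, rho = p / (R_d * T).
g = 9.81            # gravitational acceleration, m s^-2
R_d = 287.0         # gas constant for dry air, J kg^-1 K^-1
T = 250.0           # assumed constant temperature, K
p_surface = 1000e2  # surface pressure, Pa

z = np.arange(0.0, 20000.0, 100.0)   # heights, m
p = np.empty_like(z)
p[0] = p_surface
for k in range(1, z.size):
    rho = p[k - 1] / (R_d * T)       # density of the layer below
    p[k] = p[k - 1] - rho * g * (z[k] - z[k - 1])

print(f"pressure near 5.5 km: {p[np.searchsorted(z, 5500.0)] / 100:.0f} hPa")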

History

The ENIAC main control panel at the Moore School of Electrical Engineering

The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who utilized procedures developed by Vilhelm Bjerknes. It was not until the advent of the computer and computer simulation that computation time was reduced to less than the forecast period itself. ENIAC created the first computer forecasts in 1950, and more powerful computers later increased the size of initial datasets and included more complicated versions of the equations of motion. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of global forecasting models led to the first climate models. The development of limited-area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s.

Because the output of forecast models based on atmospheric dynamics requires corrections near ground level, model output statistics (MOS) were developed in the 1970s and 1980s for individual forecast points (locations). Even with the increasing power of supercomputers, the forecast skill of numerical weather models only extends to about two weeks into the future, since the density and quality of observations—together with the chaotic nature of the partial differential equations used to calculate the forecast—introduce errors which double every five days. The use of model ensemble forecasts since the 1990s helps to define the forecast uncertainty and extend weather forecasting farther into the future than otherwise possible.

Initialization

A WP-3D Orion weather reconnaissance aircraft in flight.
Weather reconnaissance aircraft, such as this WP-3D Orion, provide data that is then used in numerical weather forecasts.

The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to 1 kilometer (0.6 mi) globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast.

A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.

Computation

An example of 500 mbar geopotential height prediction from a numerical weather prediction model.

A model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future, with each time increment known as a time step. The equations are then applied to this new atmospheric state to find new rates of change, and these new rates of change predict the atmosphere at a yet further time into the future. Time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified model is run six days into the future, the European Centre for Medium-Range Weather Forecasts model is run out to 10 days into the future, while the Global Forecast System model run by the Environmental Modeling Center is run 16 days into the future.
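
The time-stepping cycle described above can be sketched schematically: compute rates of change from the current state, step forward, and repeat until the forecast time is reached. The tendency function below is a stand-in for the full primitive-equation calculations and is purely illustrative.

import numpy as np

def tendency(state):
    # Placeholder for the model dynamics and physics: returns d(state)/dt.
    # A real model would evaluate the primitive equations here.
    return -1.0e-5 * state

def run_forecast(analysis, dt_seconds, forecast_hours):
    state = analysis.copy()
    n_steps = int(forecast_hours * 3600 / dt_seconds)
    for _ in range(n_steps):
        # Advance one time step (simple forward step for illustration).
        state = state + dt_seconds * tendency(state)
    return state

initial_conditions = np.array([1.0, 2.0, 3.0])   # from the analysis
forecast = run_forecast(initial_conditions, dt_seconds=600.0, forecast_hours=24.0)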

The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models use spectral methods for the horizontal dimensions and finite difference methods for the vertical dimension, while regional models and other global models usually use finite-difference methods in all three dimensions. The visual output produced by a model solution is known as a prognostic chart, or prog.
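
The difference between the two horizontal discretizations mentioned above can be illustrated on a toy problem: computing the derivative of a smooth periodic field with a spectral (FFT-based) method and with a second-order centred finite difference. This is only a sketch of the numerical idea, not of any operational model.

import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
f = np.sin(3.0 * x)

# Spectral derivative: multiply the Fourier coefficients by i*k.
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
df_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Second-order centred finite difference on the same periodic grid.
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

exact = 3.0 * np.cos(3.0 * x)
print("spectral max error:          ", np.max(np.abs(df_spectral - exact)))
print("finite-difference max error: ", np.max(np.abs(df_fd - exact)))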

Parameterization

Weather and climate model gridboxes have sides of between 5 kilometres (3.1 mi) and 300 kilometres (190 mi). A typical cumulus cloud has a scale of less than 1 kilometre (0.62 mi), and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized by schemes of varying sophistication. In the earliest models, if a column of air in a model gridbox was unstable (i.e., the bottom warmer than the top) then it would be overturned, and the air in that vertical column mixed. More sophisticated schemes add enhancements, recognizing that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sides between 5 kilometres (3.1 mi) and 25 kilometres (16 mi) can explicitly represent convective clouds, although they still need to parameterize cloud microphysics. The formation of large-scale (stratus-type) clouds is more physically based: they form when the relative humidity reaches some prescribed value. Still, sub-grid-scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical relative humidity of 70% for stratus-type clouds, and at or above 80% for cumuliform clouds, reflecting the sub-grid-scale variation that would occur in the real world.
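
A minimal sketch of the critical-relative-humidity idea, assuming a simple quadratic ramp between the critical value and saturation (the ramp shape is an illustrative choice, not a specific operational scheme):

def cloud_fraction(relative_humidity, rh_critical):
    """Diagnose a fractional cloud cover from gridbox-mean relative humidity:
    zero below the critical value, full cover at saturation, and a smooth
    ramp in between."""
    if relative_humidity <= rh_critical:
        return 0.0
    if relative_humidity >= 1.0:
        return 1.0
    return ((relative_humidity - rh_critical) / (1.0 - rh_critical)) ** 2

# 70% threshold for stratus-type cloud, 80% for cumuliform cloud:
print(cloud_fraction(0.85, 0.70))   # 0.25
print(cloud_fraction(0.85, 0.80))   # 0.0625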

The amount of solar radiation reaching ground level in rugged terrain, or due to variable cloudiness, is parameterized as this process occurs on the molecular scale. Also, the grid size of the models is large when compared to the actual size and roughness of clouds and topography. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere. Thus, they are important to parameterize.

Domains

The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models are also known as limited-area models, or LAMs. Regional models use finer grid spacing to explicitly resolve smaller-scale meteorological phenomena, since their smaller domain decreases computational demands. Regional models use a compatible global model to supply conditions at the edges of their domain (boundary conditions). Uncertainty and errors within LAMs are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as within the creation of the boundary conditions for the LAM itself.

The vertical coordinate is handled in various ways. Some models, such as Richardson's 1922 model, use geometric height (z) as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This simplification is possible because pressure decreases monotonically with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (15 inHg) level, and thus was essentially two-dimensional. High-resolution models, also called mesoscale models, such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates.
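
Sigma coordinates, mentioned above, are commonly defined as σ = p / p_surface, so the coordinate surfaces follow the terrain instead of intersecting it. A tiny illustration, with made-up surface pressures:

# Terrain-following (sigma) vertical coordinate: sigma = p / p_surface.
# The same sigma levels map to different pressures over low and high
# terrain, so no model level intersects the ground.
sigma_levels = [1.0, 0.9, 0.7, 0.5, 0.3, 0.1]

for name, p_surface_hpa in [("sea level", 1013.0), ("high plateau", 700.0)]:
    pressures_hpa = [round(s * p_surface_hpa, 1) for s in sigma_levels]
    print(name, pressures_hpa)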

Global versions

Some of the better known global numerical models are:

  • The Global Forecast System (GFS), run by the Environmental Modeling Center of NCEP
  • The Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF)
  • The Unified Model of the UK Met Office (UKMET)
  • The Global Environmental Multiscale Model (GEM) of the Meteorological Service of Canada (MSC)

Regional versions

Some of the better known regional numerical models are:

  • WRF The Weather Research and Forecasting model was developed cooperatively by NCEP, NCAR, and the meteorological research community. WRF has several configurations, including:
    • WRF-NMM The WRF Nonhydrostatic Mesoscale Model is the primary short-term weather forecast model for the U.S., replacing the Eta model.
    • WRF-ARW Advanced Research WRF developed primarily at the U.S. National Center for Atmospheric Research (NCAR)
  • HARMONIE-Climate (HCLIM) is a limited-area climate model based on the HARMONIE model developed by a large consortium of European weather forecasting and research institutes. It is a model system that, like WRF, can be run in many configurations, including at high resolution with the non-hydrostatic AROME physics or at lower resolutions with hydrostatic physics based on the ALADIN physical schemes. It has mostly been used in Europe and the Arctic for climate studies, including 3 km downscaling over Scandinavia and studies looking at extreme weather events.
  • RACMO was developed at the Royal Netherlands Meteorological Institute (KNMI) and is based on the dynamics of the HIRLAM model with physical schemes from the IFS.
    • RACMO2.3p2 is a polar version of the model, developed at Utrecht University, that is used in many studies to provide the surface mass balance of the polar ice sheets.
  • MAR (Modèle Atmosphérique Régional) is a regional climate model developed at the University of Grenoble in France and the University of Liège in Belgium.
  • HIRHAM5 is a regional climate model developed at the Danish Meteorological Institute and the Alfred Wegener Institute in Potsdam. It is also based on the HIRLAM dynamics, with physical schemes based on those in the ECHAM model. Like the RACMO model, HIRHAM has been used widely in many different parts of the world under the CORDEX scheme to provide regional climate projections. It also has a polar mode that has been used for polar ice sheet studies in Greenland and Antarctica.
  • NAM The term North American Mesoscale model refers to whatever regional model NCEP operates over the North American domain. NCEP began using this designation system in January 2005. Between January 2005 and May 2006 the Eta model used this designation. Beginning in May 2006, NCEP began to use the WRF-NMM as the operational NAM.
  • RAMS the Regional Atmospheric Modeling System developed at Colorado State University for numerical simulations of atmospheric meteorology and other environmental phenomena on scales from meters to hundreds of kilometers – now supported in the public domain
  • MM5 The Fifth Generation Penn State/NCAR Mesoscale Model
  • ARPS the Advanced Regional Prediction System developed at the University of Oklahoma is a comprehensive multi-scale nonhydrostatic simulation and prediction system that can be used for everything from regional-scale weather prediction down to tornado-scale simulation and prediction. Advanced radar data assimilation for thunderstorm prediction is a key part of the system.
  • HIRLAM, the High Resolution Limited Area Model, is developed by a European NWP research consortium co-funded by 10 European weather services. The mesoscale HIRLAM model is known as HARMONIE and is developed in collaboration with Météo-France and the ALADIN consortium.
  • GEM-LAM Global Environmental Multiscale Limited Area Model, the high resolution 2.5 km (1.6 mi) GEM by the Meteorological Service of Canada (MSC)
  • ALADIN The high-resolution limited-area hydrostatic and non-hydrostatic model developed and operated by several European and North African countries under the leadership of Météo-France
  • COSMO The COSMO Model, formerly known as LM, aLMo or LAMI, is a limited-area non-hydrostatic model developed within the framework of the Consortium for Small-Scale Modelling (Germany, Switzerland, Italy, Greece, Poland, Romania, and Russia).
  • Meso-NH The Meso-NH Model is a limited-area non-hydrostatic model developed jointly by the Centre National de Recherches Météorologiques and the Laboratoire d'Aérologie (France, Toulouse) since 1998. Its applications range from mesoscale to centimetre-scale weather simulations.

Model output statistics

Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions near the ground, statistical corrections were developed to attempt to resolve this problem. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations, and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models. The United States Air Force developed its own set of MOS based upon their dynamical weather model by 1983.

Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds.
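
The MOS idea can be sketched as a simple regression: fit a statistical relationship between archived model output (plus other predictors such as climatology) and the weather actually observed at a station, then apply that relationship to correct new forecasts. The data below are synthetic and the predictor choice is illustrative only.

import numpy as np

rng = np.random.default_rng(0)
n = 200
model_t2m = rng.uniform(0.0, 30.0, n)                 # raw forecast 2 m temperature
climatology = 15.0 + 10.0 * np.sin(2.0 * np.pi * np.arange(n) / 365.0)
observed = 0.9 * model_t2m + 0.1 * climatology + 1.5 + rng.normal(0.0, 1.0, n)

# Least-squares fit of: observed ~ a*model + b*climatology + c
predictors = np.column_stack([model_t2m, climatology, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(predictors, observed, rcond=None)

# Correct a new raw forecast for a day whose climatological mean is 15 degrees C.
raw_forecast = 22.0
corrected = coeffs @ np.array([raw_forecast, 15.0, 1.0])
print(f"raw {raw_forecast:.1f} degC -> MOS-corrected {corrected:.1f} degC")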

Applications

Climate modeling

In 1956, Norman Phillips developed a mathematical model that realistically depicted monthly and seasonal patterns in the troposphere. This was the first successful climate model. Several groups then began working to create general circulation models.[63] The first general circulation climate model combined oceanic and atmospheric processes and was developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, a component of the U.S. National Oceanic and Atmospheric Administration.

By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature. Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased.

By the early 1980s, the U.S. National Center for Atmospheric Research had developed the Community Atmosphere Model (CAM), which can be run by itself or as the atmospheric component of the Community Climate System Model. The latest update (version 3.1) of the standalone CAM was issued on 1 February 2006. In 1986, efforts began to initialize and model soil and vegetation types, resulting in more realistic forecasts. Coupled ocean-atmosphere climate models, such as the Hadley Centre for Climate Prediction and Research's HadCM3 model, are being used as inputs for climate change studies.

Limited area modeling

Model spread with Hurricane Ernesto (2006) within the National Hurricane Center limited area models

Air pollution forecasts depend on atmospheric models to provide fluid flow information for tracking the movement of pollutants. In 1970, a private company in the U.S. developed the regional Urban Airshed Model (UAM), which was used to forecast the effects of air pollution and acid rain. In the mid- to late-1970s, the United States Environmental Protection Agency took over the development of the UAM and then used the results from a regional air pollution study to improve it. Although the UAM was developed for California, it was used during the 1980s elsewhere in North America, Europe, and Asia.

The Movable Fine-Mesh model, which began operating in 1978, was the first tropical cyclone forecast model to be based on atmospheric dynamics. Despite the constantly improving dynamical model guidance made possible by increasing computational power, it was not until the 1980s that numerical weather prediction (NWP) showed skill in forecasting the track of tropical cyclones. And it was not until the 1990s that NWP consistently outperformed statistical or simple dynamical models. Predicting the intensity of tropical cyclones using NWP has also been challenging. As of 2009, dynamical guidance remained less skillful than statistical methods.

Idealized greenhouse model

From Wikipedia, the free encyclopedia
A schematic representation of a planet's radiation balance with its parent star and the rest of space. Thermal radiation absorbed and emitted by the idealized atmosphere can raise the equilibrium surface temperature.

The temperatures of a planet's surface and atmosphere are governed by a delicate balancing of their energy flows. The idealized greenhouse model is based on the fact that certain gases in the Earth's atmosphere, including carbon dioxide and water vapour, are transparent to the high-frequency solar radiation, but are much more opaque to the lower-frequency infrared radiation leaving Earth's surface. Thus heat is easily let in, but is partially trapped by these gases as it tries to leave. The atmosphere does not simply get hotter and hotter: by Kirchhoff's law of thermal radiation, the gases of the atmosphere must also re-emit the infrared energy that they absorb, and they do so at long infrared wavelengths, both upwards into space and downwards back towards the Earth's surface. In the long term, the planet's thermal inertia is surmounted and a new thermal equilibrium is reached when all energy arriving on the planet is leaving again at the same rate. In this steady-state model, the greenhouse gases cause the surface of the planet to be warmer than it would be without them, in order for a balanced amount of heat energy to finally be radiated out into space from the top of the atmosphere.

Essential features of this model were first published by Svante Arrhenius in 1896. It has since become a common introductory "textbook model" of the radiative heat transfer physics underlying Earth's energy balance and the greenhouse effect. The planet is idealized by the model as being functionally "layered" with regard to a sequence of simplified energy flows, but dimensionless (i.e. a zero-dimensional model) in terms of its mathematical space. The layers include a surface with constant temperature Ts and an atmospheric layer with constant temperature Ta. For diagrammatic clarity, a gap can be depicted between the atmosphere and the surface. Alternatively, Ts could be interpreted as a temperature representative of the surface and the lower atmosphere, and Ta could be interpreted as the temperature of the upper atmosphere, also called the skin temperature. In order to justify that Ta and Ts remain constant over the planet, strong oceanic and atmospheric currents can be imagined to provide plentiful lateral mixing. Furthermore, the temperatures are understood to be multi-decadal averages such that any daily or seasonal cycles are insignificant.

Simplified energy flows

The model will find the values of Ts and Ta that will allow the outgoing radiative power, escaping the top of the atmosphere, to be equal to the absorbed radiative power of sunlight. When applied to a planet like Earth, the outgoing radiation will be longwave and the sunlight will be shortwave. These two streams of radiation will have distinct emission and absorption characteristics. In the idealized model, we assume the atmosphere is completely transparent to sunlight. The planetary albedo αP is the fraction of the incoming solar flux that is reflected back to space (since the atmosphere is assumed totally transparent to solar radiation, it does not matter whether this albedo is imagined to be caused by reflection at the surface of the planet or at the top of the atmosphere or a mixture). The flux density of the incoming solar radiation is specified by the solar constant S0. For application to planet Earth, appropriate values are S0=1366 W m−2 and αP=0.30. Accounting for the fact that the surface area of a sphere is 4 times the area of its intercept (its shadow), the average incoming radiation is S0/4.

For longwave radiation, the surface of the Earth is assumed to have an emissivity of 1 (i.e. it is a black body in the infrared, which is realistic). The surface emits a radiative flux density F according to the Stefan–Boltzmann law:

    F = σTs⁴

where σ is the Stefan–Boltzmann constant. A key to understanding the greenhouse effect is Kirchhoff's law of thermal radiation. At any given wavelength the absorptivity of the atmosphere will be equal to the emissivity. Radiation from the surface could be in a slightly different portion of the infrared spectrum than the radiation emitted by the atmosphere. The model assumes that the average emissivity (absorptivity) is identical for either of these streams of infrared radiation, as they interact with the atmosphere. Thus, for longwave radiation, one symbol ε denotes both the emissivity and absorptivity of the atmosphere, for any stream of infrared radiation.

Idealized greenhouse model with an isothermal atmosphere. The blue arrows denote shortwave (solar) radiative flux density and the red arrow denotes longwave (terrestrial) radiative flux density. The radiation streams are shown with lateral displacement for clarity; they are collocated in the model. The atmosphere, which interacts only with the longwave radiation, is indicated by the layer within the dashed lines. A specific solution is depicted for ε=0.78 and αp=0.3, representing Planet Earth. The numbers in the parentheses indicate the flux densities as a percent of S0/4.
The equilibrium solution with ε=0.82. The increase by Δε=0.04 corresponds to doubling carbon dioxide and the associated positive feedback on water vapor.
The equilibrium solution with no greenhouse effect: ε=0

The infrared flux density out of the top of the atmosphere is computed as:

    F = εσTa⁴ + (1 - ε)σTs⁴

In the last term, ε represents the fraction of upward longwave radiation from the surface that is absorbed, the absorptivity of the atmosphere. The remaining fraction (1-ε) is transmitted to space through an atmospheric window. In the first term on the right, ε is the emissivity of the atmosphere, the adjustment of the Stefan–Boltzmann law to account for the fact that the atmosphere is not optically thick. Thus ε plays the role of neatly blending, or averaging, the two streams of radiation in the calculation of the outward flux density.

The energy balance solution

Zero net radiation leaving the top of the atmosphere requires:

    -(1 - αP)S0/4 + εσTa⁴ + (1 - ε)σTs⁴ = 0

Zero net radiation entering the surface requires:

    (1 - αP)S0/4 + εσTa⁴ - σTs⁴ = 0

Energy equilibrium of the atmosphere can be either derived from the two above equilibrium conditions, or independently deduced:

    εσTs⁴ = 2εσTa⁴

Note the important factor of 2, resulting from the fact that the atmosphere radiates both upward and downward. Thus the ratio of Ta to Ts is independent of ε:

    Ta/Ts = (1/2)^(1/4) ≈ 0.841

Thus Ta can be expressed in terms of Ts, and a solution is obtained for Ts in terms of the model input parameters:

    (1 - αP)S0/4 = (1 - ε/2)σTs⁴

or

    Ts = [ (1 - αP)S0 / (4σ(1 - ε/2)) ]^(1/4)

The solution can also be expressed in terms of the effective emission temperature Te, which is the temperature that characterizes the outgoing infrared flux density F, as if the radiator were a perfect radiator obeying F = σTe⁴. This is easy to conceptualize in the context of the model. Te is also the solution for Ts, for the case of ε=0, or no atmosphere:

    Te = [ (1 - αP)S0 / (4σ) ]^(1/4)

With the definition of Te:

    Ts = Te [1 / (1 - ε/2)]^(1/4)

For a perfect greenhouse, with no radiation escaping from the surface, or ε=1:

    Ts = 2^(1/4) Te ≈ 1.189 Te

Application to Earth

Using the parameters defined above to be appropriate for Earth,

    Te = 254.8 K (-18.3 °C)

For ε=1:

    Ts = 2^(1/4) Te = 303.0 K (29.9 °C)

For ε=0.78,

    Ts = 288.3 K (15.2 °C) and Ta = 242.5 K.

This value of Ts happens to be close to the published 287.2 K of the average global "surface temperature" based on measurements. ε=0.78 implies 22% of the surface radiation escapes directly to space, consistent with the statement of 15% to 30% escaping in the greenhouse effect.

The radiative forcing for doubling carbon dioxide is 3.71 W m−2, in a simple parameterization. This is also the value endorsed by the IPCC. From the equation for F,

    ΔF = Δε σ(Ta⁴ - Ts⁴)

Using the values of Ts and Ta for ε=0.78 allows for ΔF = -3.71 W m−2 with Δε = 0.019. Thus a change of ε from 0.78 to 0.80 is consistent with the radiative forcing from a doubling of carbon dioxide. For ε=0.80,

    Ts = 289.5 K (16.4 °C)

Thus this model predicts a global warming of ΔTs = 1.2 K for a doubling of carbon dioxide. A typical prediction from a GCM is 3 K surface warming, primarily because the GCM allows for positive feedback, notably from increased water vapor. A simple surrogate for including this feedback process is to posit an additional increase of Δε=.02, for a total Δε=.04, to approximate the effect of the increase in water vapor that would be associated with an increase in temperature. This idealized model then predicts a global warming of ΔTs = 2.4 K for a doubling of carbon dioxide, roughly consistent with the IPCC.
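
The numbers above follow directly from the energy balance solution, and a short script makes them easy to reproduce; this is only a sketch of the idealized model as described here, using the same solar constant, albedo and emissivity values quoted in the text.

# Single-layer idealized greenhouse model: solve for the surface and
# atmospheric temperatures given solar constant, albedo and emissivity.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1366.0              # solar constant, W m^-2
ALBEDO = 0.30            # planetary albedo

def greenhouse_temperatures(epsilon):
    absorbed = (1.0 - ALBEDO) * S0 / 4.0              # absorbed solar flux density
    ts = (absorbed / (SIGMA * (1.0 - epsilon / 2.0))) ** 0.25
    ta = ts / 2.0 ** 0.25                             # Ta/Ts = (1/2)^(1/4)
    return ts, ta

for eps in (0.0, 0.78, 0.80, 0.82, 1.0):
    ts, ta = greenhouse_temperatures(eps)
    print(f"epsilon = {eps:.2f}: Ts = {ts:.1f} K, Ta = {ta:.1f} K")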

Tabular summary with K, C, and F units

ε Ts (K) Ts (°C) Ts (°F)
0 254.8 -18.3 -1
0.78 288.3 15.2 59
0.80 289.5 16.4 61
0.82 290.7 17.6 64
1 303.0 29.9 86

Extensions

The one-level atmospheric model can be readily extended to a multiple-layer atmosphere. In this case the equations for the temperatures become a series of coupled equations. These simple energy-balance models always predict a decreasing temperature away from the surface, and all levels increase in temperature as greenhouse gases are added. Neither of these effects is fully realistic: in the real atmosphere temperatures increase above the tropopause, and temperatures in that layer are predicted (and observed) to decrease as GHGs are added. This is directly related to the non-greyness of the real atmosphere.

An interactive version of a model with 2 atmospheric layers, and which accounts for convection, is available online.

Planetary mass

From Wikipedia, the free encyclopedia

In astronomy, planetary mass is a measure of the mass of a planet-like astronomical object. Within the Solar System, planets are usually measured in the astronomical system of units, where the unit of mass is the solar mass (M☉), the mass of the Sun. In the study of extrasolar planets, the unit of measure is typically the mass of Jupiter (MJ) for large gas giant planets, and the mass of Earth (ME) for smaller rocky terrestrial planets.

The mass of a planet within the Solar System is an adjusted parameter in the preparation of ephemerides. There are three main ways in which planetary mass can be calculated:

  • If the planet has natural satellites, its mass can be calculated using Newton's law of universal gravitation to derive a generalization of Kepler's third law that includes the mass of the planet and its moon (see the sketch after this list). This permitted an early measurement of Jupiter's mass, as measured in units of the solar mass.
  • The mass of a planet can be inferred from its effect on the orbits of other planets. Between 1931 and 1948, flawed applications of this method led to incorrect calculations of the mass of Pluto.
  • The gravitational influence of a planet on the orbits of space probes can be used. Examples include the Voyager probes to the outer planets and the MESSENGER spacecraft to Mercury.
  • Also, numerous other methods can give reasonable approximations. For instance, Varuna, a potential dwarf planet, rotates very quickly upon its axis, as does the dwarf planet Haumea. Haumea has to have a very high density in order not to be ripped apart by centrifugal forces. Through some calculations, one can place a limit on the object's density. Thus, if the object's size is known, a limit on the mass can be determined. See the links in the aforementioned articles for more details on this.
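
As a sketch of the first method, a satellite's orbital period and semi-major axis give the parent planet's mass through Newton's form of Kepler's third law, M ≈ 4π²a³/(GT²) (neglecting the satellite's own mass). The values below for Io are approximate textbook numbers used purely for illustration.

import math

G = 6.674e-11          # Newtonian constant of gravitation, m^3 kg^-1 s^-2
a = 421_700e3          # Io's semi-major axis around Jupiter, m (approximate)
T = 1.769 * 86400.0    # Io's orbital period, s (approximate)

# Kepler's third law, generalized with Newton's law of gravitation.
jupiter_mass = 4.0 * math.pi ** 2 * a ** 3 / (G * T ** 2)
print(f"Jupiter mass ~ {jupiter_mass:.3e} kg")                       # roughly 1.9e27 kg
print(f"             ~ {jupiter_mass / 1.989e30:.6f} solar masses")  # about 1/1047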

Choice of units

The choice of solar mass, M, as the basic unit for planetary mass comes directly from the calculations used to determine planetary mass. In the most precise case, that of the Earth itself, the mass is known in terms of solar masses to twelve significant figures: the same mass, in terms of kilograms or other Earth-based units, is only known to five significant figures, which is less than a millionth as precise.

The difference comes from the way in which planetary masses are calculated. It is impossible to "weigh" a planet, and much less the Sun, against the sort of mass standards which are used in the laboratory. On the other hand, the orbits of the planets give a great range of observational data as to the relative positions of each body, and these positions can be compared to their relative masses using Newton's law of universal gravitation (with small corrections for General Relativity where necessary). To convert these relative masses to Earth-based units such as the kilogram, it is necessary to know the value of the Newtonian constant of gravitation, G. This constant is remarkably difficult to measure in practice, and its value is known to a relative precision of only 2.2×10−5.
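
A short calculation illustrates the point: the product GM for the Sun (the heliocentric gravitational constant) is known extremely well from orbital dynamics, so a mass expressed relative to the Sun can be converted to kilograms only by dividing by G, and the result inherits G's comparatively large uncertainty. The constants below are standard published values.

# Converting a relative planetary mass to kilograms requires G.
GM_SUN = 1.32712440018e20      # heliocentric gravitational constant, m^3 s^-2
G = 6.674e-11                  # Newtonian constant of gravitation
G_RELATIVE_UNCERTAINTY = 2.2e-5

earth_in_solar_masses = 3.00348959632e-6     # from the DE405 table below
earth_kg = earth_in_solar_masses * GM_SUN / G
print(f"Earth mass ~ {earth_kg:.4e} kg "
      f"(+/- {earth_kg * G_RELATIVE_UNCERTAINTY:.1e} kg from G alone)")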

The solar mass is quite a large unit on the scale of the Solar System: 1.9884(2)×1030 kg. The largest planet, Jupiter, is 0.09% the mass of the Sun, while the Earth is about three millionths (0.0003%) of the mass of the Sun.

When comparing the planets among themselves, it is often convenient to use the mass of the Earth (ME or M⊕) as a standard, particularly for the terrestrial planets. For the mass of gas giants, and also for most extrasolar planets and brown dwarfs, the mass of Jupiter (MJ) is a convenient comparison.

Planetary masses relative to the mass of Earth ME and Jupiter MJ
Planet Mercury Venus Earth Mars Jupiter Saturn Uranus Neptune
Earth mass ME 0.0553 0.815 1 0.1075 317.8 95.2 14.6 17.2
Jupiter mass MJ 0.000 17 0.002 56 0.003 15 0.000 34 1 0.299 0.046 0.054

Planetary mass and planet formation

Vesta is the second largest body in the asteroid belt after Ceres. This image from the Dawn spacecraft shows that it is not perfectly spherical.

The mass of a planet has consequences for its structure, especially while it is still in the process of formation. A body with enough mass can overcome its compressive strength and achieve a rounded shape (roughly hydrostatic equilibrium). Since 2006, such an object has been classified as a dwarf planet if it orbits around the Sun (that is, if it is not the satellite of another planet). The threshold depends on a number of factors, such as composition, temperature, and the presence of tidal heating. The smallest body that is known to be rounded is Saturn's moon Mimas, at about 1/160,000 the mass of Earth; on the other hand, bodies as large as the Kuiper belt object Salacia, at about 1/13,000 the mass of Earth, may not have overcome their compressive strengths. Smaller bodies like asteroids are classified as "small Solar System bodies".

A dwarf planet, by definition, is not massive enough to have gravitationally cleared its neighbouring region of planetesimals. The mass needed to do so depends on location: Mars clears its orbit in its current location, but would not do so if it orbited in the Oort cloud.

The smaller planets retain only silicates and metals, and are terrestrial planets like Earth or Mars. The interior structure of rocky planets is mass-dependent: for example, plate tectonics may require a minimum mass to generate sufficient temperatures and pressures for it to occur. Geophysical definitions would also include the dwarf planets and moons in the outer Solar System, which are like terrestrial planets except that they are composed of ice and rock rather than rock and metal: the largest such bodies are Ganymede, Titan, Callisto, Triton, and Pluto.

If the protoplanet grows by accretion to more than about twice the mass of Earth, its gravity becomes large enough to retain hydrogen in its atmosphere. In this case, it will grow into an ice giant or gas giant. As such, Earth and Venus are close to the maximum size a planet can usually grow to while still remaining rocky. If the planet then begins migration, it may move well within its system's frost line, and become a hot Jupiter orbiting very close to its star, then gradually losing small amounts of mass as the star's radiation strips its atmosphere.

The theoretical minimum mass a star can have, and still undergo hydrogen fusion at the core, is estimated to be about 75 MJ, though fusion of deuterium can occur at masses as low as 13 Jupiters.

Values from the DE405 ephemeris

The DE405/LE405 ephemeris from the Jet Propulsion Laboratory is a widely used ephemeris dating from 1998 and covering the whole Solar System. As such, the planetary masses form a self-consistent set, which is not always the case for more recent data (see below).

Planets and natural satellites | Planetary mass (relative to the Sun × 10−6) | Satellite mass (relative to the parent planet) | Absolute mass | Mean density
Mercury 0.16601 3.301×1023 kg 5.43 g/cm3
Venus 2.4478383 4.867×1024 kg 5.24 g/cm3
Earth/Moon system 3.04043263333 6.046×1024 kg 4.4309 g/cm3
  Earth 3.00348959632 5.972×1024 kg  5.514 g/cm3
Moon   1.23000383×10−2 7.348×1022 kg  3.344 g/cm3
Mars 0.3227151 6.417×1023 kg 3.91 g/cm3
Jupiter 954.79194 1.899×1027 kg 1.24 g/cm3
  Io   4.70×10−5 8.93×1022 kg  
Europa   2.53×10−5 4.80×1022 kg  
Ganymede   7.80×10−5 1.48×1023 kg  
Callisto   5.67×10−5 1.08×1023 kg  
Saturn 285.8860 5.685×1026 kg 0.62 g/cm3
  Titan   2.37×10−4 1.35×1023 kg  
Uranus 43.66244 8.682×1025 kg 1.24 g/cm3
  Titania   4.06×10−5 3.52×1021 kg  
Oberon   3.47×10−5 3.01×1021 kg  
Neptune 51.51389 1.024×1026 kg 1.61 g/cm3
  Triton   2.09×10−4 2.14×1022 kg  
Dwarf planets and asteroids
Pluto/Charon system 0.007396 1.471×1022 kg 2.06 g/cm3
Ceres 0.00047 9.3×1020 kg
Vesta 0.00013 2.6×1020 kg
Pallas 0.00010 2.0×1020 kg

Earth mass and lunar mass

Where a planet has natural satellites, its mass is usually quoted for the whole system (planet + satellites), as it is the mass of the whole system which acts as a perturbation on the orbits of other planets. The distinction is very slight, as natural satellites are much smaller than their parent planets (as can be seen in the table above, where only the largest satellites are even listed).

The Earth and the Moon form a case in point, partly because the Moon is unusually large (just over 1% of the mass of the Earth) in relation to its parent planet compared with other natural satellites. There are also very precise data available for the Earth–Moon system, particularly from the Lunar Laser Ranging experiment (LLR).

The geocentric gravitational constant – the product of the mass of the Earth times the Newtonian constant of gravitation – can be measured to high precision from the orbits of the Moon and of artificial satellites. The ratio of the two masses can be determined from the slight wobble in the Earth's orbit caused by the gravitational attraction of the Moon.

More recent values

The construction of a full, high-precision Solar System ephemeris is an onerous task. It is possible (and somewhat simpler) to construct partial ephemerides which only concern the planets (or dwarf planets, satellites, asteroids) of interest by "fixing" the motion of the other planets in the model. The two methods are not strictly equivalent, especially when it comes to assigning uncertainties to the results: however, the "best" estimates – at least in terms of quoted uncertainties in the result – for the masses of minor planets and asteroids usually come from partial ephemerides.

Nevertheless, new complete ephemerides continue to be prepared, most notably the EPM2004 ephemeris from the Institute of Applied Astronomy of the Russian Academy of Sciences. EPM2004 is based on 317014 separate observations between 1913 and 2003, more than seven times as many as DE405, and gave more precise masses for Ceres and five asteroids.

Planetary mass (relative to the Sun × 10−6), by source:

136199 Eris: 84.0(1.0)×10−4 (Brown & Schaller 2007)
134340 Pluto: 73.224(15)×10−4 (Tholen et al. 2008)
136108 Haumea: 20.1(2)×10−4 (Ragozzine & Brown 2009)
1 Ceres: 4.753(7)×10−4 (EPM2004); 4.72(3)×10−4 (Pitjeva & Standish 2009)
4 Vesta: 1.344(1)×10−4 (EPM2004); 1.35(3)×10−4 (Pitjeva & Standish 2009)
2 Pallas: 1.027(3)×10−4 (EPM2004); 1.03(3)×10−4 (Pitjeva & Standish 2009)
15 Eunomia: 0.164(6)×10−4 (Vitagliano & Stoss 2006)
3 Juno: 0.151(3)×10−4 (EPM2004)
7 Iris: 0.063(1)×10−4 (EPM2004)
324 Bamberga: 0.055(1)×10−4 (EPM2004)

IAU best estimates (2009)

A new set of "current best estimates" for various astronomical constants was approved by the 27th General Assembly of the International Astronomical Union (IAU) in August 2009.

Planet | Ratio of the solar mass to the planetary mass (including satellites) | Planetary mass (relative to the Sun × 10−6) | Mass (kg)
Mercury | 6023.6(3)×103 | 0.166014(8) | 3.3010(3)×1023
Venus | 408.523719(8)×103 | 2.4478383 | 4.8673×1024
Mars | 3098.70359(2)×103 | 0.3227151 | 6.4171×1023
Jupiter | 1.0473486(17)×103 | 954.7919(15) | 1.89852(19)×1027
Saturn | 3.4979018(1)×103 | 285.885670(8) | 5.6846(6)×1026
Uranus | 22.90298(3)×103 | 43.66244(6) | 8.6819(9)×1025
Neptune | 19.41226(3)×103 | 51.51384(8) | 1.02431(10)×1026

IAU current best estimates (2012)

The 2009 set of "current best estimates" was updated in 2012 by resolution B2 of the IAU XXVIII General Assembly.  Improved values were given for Mercury and Uranus (and also for the Pluto system and Vesta).

Planet | Ratio of the solar mass to the planetary mass (including satellites)
Mercury | 6023.65733(24)×103
Uranus | 22.902951(17)×103

Cognitive rehabilitation therapy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cognitive_rehabilitation_therapy     ...