
Friday, October 30, 2015

Solar irradiance


From Wikipedia, the free encyclopedia

Solar irradiance (also insolation, from Latin insolare, to expose to the sun)[1][2] is the power per unit area produced by the Sun in the form of electromagnetic radiation. Irradiance may be measured in space or at the Earth's surface after atmospheric absorption and scattering. Total solar irradiance (TSI) is a measure of the solar radiative power per unit area, normal to the rays, incident on the Earth's upper atmosphere. The solar constant is a conventional measure of mean TSI at a distance of one astronomical unit (AU). Irradiance is a function of distance from the Sun, the solar cycle, and cross-cycle changes.[3] Irradiance on Earth is most intense at points directly facing (normal to) the Sun.


Annual mean insolation at the top of Earth's atmosphere (TOA) and at the planet's surface

Units

The unit recommended by the World Meteorological Organization is the megajoule per square metre (MJ/m2) or joule per square millimetre (J/mm2).[4]

An alternative unit of measure is the langley (1 thermochemical calorie per square centimetre, or 41,840 J/m2) per unit time.

The solar energy business uses the watt-hour per square metre (Wh/m2), a measure of energy received over a recording period; divided by the length of that period, it becomes an average irradiance in W/m2.

Insolation can be measured in space, at the edge of the atmosphere or at a terrestrial object.
Insolation can also be expressed in Suns, where one Sun equals 1,000 W/m2 at the point of arrival; insolation in kWh/m2/day is then numerically equal to equivalent full-sun hours per day.[5]

Absorption and reflection

Solar irradiance spectrum above atmosphere and at surface

Reaching an object, part of the irradiance is absorbed and the remainder reflected. Usually the absorbed radiation is converted to thermal energy, increasing the object's temperature. Manmade or natural systems, however, can convert part of the absorbed radiation into another form such as electricity or chemical bonds, as in the case of photovoltaic cells or plants. The proportion of reflected radiation is the object's reflectivity or albedo.

Projection effect


One sunbeam one mile wide shines on the ground at a 90° angle, and another at a 30° angle. The oblique sunbeam distributes its light energy over twice as much area.

Insolation onto a surface is largest when the surface directly faces (is normal to) the Sun. As the angle between the surface normal and the Sun increases, the insolation is reduced in proportion to the cosine of that angle; see Effect of sun angle on climate.

In the figure, the angle shown is between the ground and the sunbeam rather than between the vertical direction and the sunbeam; hence the sine rather than the cosine is appropriate. A sunbeam one mile (1.6 km) wide arrives from directly overhead, and another at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads the light over twice the area, and half as much light falls on each square mile.
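As a rough numerical illustration of the projection effect (not part of the original article), the following Python sketch evaluates the sine factor for the two sunbeams in the example; the function name is ours.

    import math

    def relative_insolation(elevation_deg):
        """Fraction of overhead insolation received by level ground when the
        Sun stands at the given elevation angle above the horizon; by the
        projection effect this is proportional to sin(elevation)."""
        return math.sin(math.radians(elevation_deg))

    # The two sunbeams in the figure: directly overhead versus 30° elevation.
    print(relative_insolation(90))   # 1.0  (full intensity)
    print(relative_insolation(30))   # 0.5  (light spread over twice the area)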

This 'projection effect' is the main reason why Earth's polar regions are much colder than equatorial regions. On an annual average the poles receive less insolation than does the equator, because the poles are always angled more away from the sun than the tropics. At a lower angle the light must also travel through more atmosphere, which attenuates it (by absorption and scattering), further reducing insolation.

Categories


Solar potential – global horizontal irradiation

Direct insolation is measured at a given location with a surface element perpendicular to the Sun. It excludes diffuse insolation (radiation that is scattered or reflected by atmospheric components). Direct insolation is equal to the solar constant minus the atmospheric losses due to absorption and scattering. While the solar constant varies, losses depend on the time of day (the length of the light's path through the atmosphere, which depends on the solar elevation angle), cloud cover, moisture content and other atmospheric constituents. Insolation affects plant metabolism and animal behavior.[6]

Diffuse insolation is the contribution of light scattered by the atmosphere to total insolation.

Earth

Average annual solar radiation arriving at the top of the Earth's atmosphere is roughly 1366 W/m2.[7][8] The radiation is distributed across the electromagnetic spectrum, although most of it is visible light. The Sun's rays are attenuated as they pass through the atmosphere, leaving a maximum normal surface irradiance of approximately 1,000 W/m2 at sea level on a clear day.


A pyranometer, a component of a temporary remote meteorological station, measures insolation on Skagit Bay, Washington.

The actual figure varies with the Sun's angle and atmospheric circumstances. Ignoring clouds, the daily average irradiance for the Earth is approximately 6 kWh/m2 = 21.6 MJ/m2. The output of, for example, a photovoltaic panel, partly depends on the angle of the sun relative to the panel. One Sun is a unit of power flux, not a standard value for actual insolation. Sometimes this unit is referred to as a Sol, not to be confused with a sol, meaning one solar day.[9]

Solar potential maps

Top of the atmosphere


Spherical triangle for application of the spherical law of cosines for the calculation of the solar zenith angle Θ for an observer at latitude φ and longitude λ, from knowledge of the hour angle h and solar declination δ. (δ is the latitude of the subsolar point, and h is the relative longitude of the subsolar point.)

\overline{Q}^{\mathrm{day}}, the theoretical daily-average insolation at the top of the atmosphere, where θ is the polar angle of the Earth's orbit, with θ = 0 at the vernal equinox and θ = 90° at the summer solstice; φ is the latitude of the Earth. The calculation assumes conditions appropriate for 2000 A.D.: a solar constant of S0 = 1367 W m−2, obliquity ε = 23.4398°, longitude of perihelion ϖ = 282.895°, and eccentricity e = 0.016704. Contour labels (green) are in units of W m−2.

The distribution of solar radiation at the top of the atmosphere is determined by Earth's sphericity and orbital parameters. This applies to any unidirectional beam incident to a rotating sphere. Insolation is essential for numerical weather prediction and understanding seasons and climate change. Application to ice ages is known as Milankovitch cycles.

Distribution is based on a fundamental identity from Spherical trigonometry, the spherical law of cosines:
\cos(c) = \cos(a) \cos(b) + \sin(a) \sin(b) \cos(C) \,
 
where a, b and c are arc lengths, in radians, of the sides of a spherical triangle, and C is the angle at the vertex opposite the side of arc length c. Applied to the calculation of the solar zenith angle Θ, the spherical law of cosines gives:
C=h \,
c=\Theta \,
a=\tfrac{1}{2}\pi-\phi \,
b=\tfrac{1}{2}\pi-\delta \,
\cos(\Theta) = \sin(\phi) \sin(\delta) + \cos(\phi) \cos(\delta) \cos(h) \,
The separation of Earth from the sun can be denoted RE and the mean distance can be denoted R0, approximately 1 AU. The solar constant is denoted S0. The solar flux density (insolation) onto a plane tangent to the sphere of the Earth, but above the bulk of the atmosphere (elevation 100 km or greater) is:
Q = S_o \frac{R_o^2}{R_E^2}\cos(\Theta)\text{ when }\cos(\Theta)>0
and
Q=0\text{ when }\cos(\Theta)\le 0 \,
The average of Q over a day is the average of Q over one rotation, or the hour angle progressing from h = π to h = −π:
\overline{Q}^{\text{day}} = -\frac{1}{2\pi}{\int_{\pi}^{-\pi}Q\,dh}
Let h0 be the hour angle when Q becomes positive. This could occur at sunrise when \Theta=\tfrac{1}{2}\pi, or for h0 as a solution of
\sin(\phi) \sin(\delta) + \cos(\phi) \cos(\delta) \cos(h_o) = 0 \,
or
\cos(h_o)=-\tan(\phi)\tan(\delta)
If tan(φ)tan(δ) > 1, the sun does not set and is already up at h = π, so h_o = π. If tan(φ)tan(δ) < −1, the sun does not rise and \overline{Q}^{\mathrm{day}}=0.
The factor \frac{R_o^2}{R_E^2} is nearly constant over the course of a day, and can be taken outside the integral:
\int_\pi^{-\pi}Q\,dh = \int_{h_o}^{-h_o}Q\,dh = S_o\frac{R_o^2}{R_E^2}\int_{h_o}^{-h_o}\cos(\Theta)\, dh
 \int_\pi^{-\pi}Q\,dh = S_o\frac{R_o^2}{R_E^2}\left[ h \sin(\phi)\sin(\delta) + \cos(\phi)\cos(\delta)\sin(h) \right]_{h=h_o}^{h=-h_o}
 \int_\pi^{-\pi}Q\,dh = -2 S_o\frac{R_o^2}{R_E^2}\left[ h_o \sin(\phi) \sin(\delta) + \cos(\phi) \cos(\delta) \sin(h_o) \right]
 \overline{Q}^{\text{day}} =  \frac{S_o}{\pi}\frac{R_o^2}{R_E^2}\left[ h_o \sin(\phi) \sin(\delta) + \cos(\phi) \cos(\delta) \sin(h_o) \right]
Let θ be the conventional polar angle describing a planetary orbit. Let θ = 0 at the vernal equinox. The declination δ as a function of orbital position is
\sin \delta = \sin \varepsilon~\sin\theta \,
where ε is the obliquity. The conventional longitude of perihelion ϖ is defined relative to the vernal equinox, so for the elliptical orbit:
R_E=\frac{R_o}{1+e\cos(\theta-\varpi)}
or
\frac{R_o}{R_E}={1+e\cos(\theta-\varpi)}
With knowledge of ϖ, ε and e from astrodynamical calculations[10] and So from a consensus of observations or theory, \overline{Q}^{\mathrm{day}} can be calculated for any latitude φ and θ. Because of the elliptical orbit, and as a consequence of Kepler's second law, θ does not progress uniformly with time. Nevertheless, θ = 0° is exactly the time of the vernal equinox, θ = 90° is exactly the time of the summer solstice, θ = 180° is exactly the time of the autumnal equinox and θ = 270° is exactly the time of the winter solstice.
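The calculation above can be carried out directly. The following Python sketch (an illustrative implementation, not from the article) evaluates \overline{Q}^{\mathrm{day}} from the formulas in this section, using the 2000 A.D. constants quoted in the figure caption; the function and variable names are ours.

    import math

    S0    = 1367.0                   # solar constant, W m-2 (value quoted above)
    OBLIQ = math.radians(23.4398)    # obliquity epsilon
    PERIH = math.radians(282.895)    # longitude of perihelion, varpi
    ECC   = 0.016704                 # orbital eccentricity e

    def daily_mean_insolation(lat_deg, theta_deg):
        """Daily-average top-of-atmosphere insolation (W m-2) at latitude
        lat_deg for orbital position theta_deg (0 = vernal equinox,
        90 = summer solstice), following the expressions in this section."""
        phi   = math.radians(lat_deg)
        theta = math.radians(theta_deg)

        # Declination: sin(delta) = sin(epsilon) sin(theta)
        delta = math.asin(math.sin(OBLIQ) * math.sin(theta))

        # Distance factor Ro/RE = 1 + e cos(theta - varpi)
        dist = 1.0 + ECC * math.cos(theta - PERIH)

        # Sunrise hour angle: cos(h0) = -tan(phi) tan(delta),
        # with polar day and polar night handled explicitly.
        x = -math.tan(phi) * math.tan(delta)
        if x <= -1.0:
            h0 = math.pi          # Sun never sets
        elif x >= 1.0:
            return 0.0            # Sun never rises
        else:
            h0 = math.acos(x)

        return (S0 / math.pi) * dist**2 * (
            h0 * math.sin(phi) * math.sin(delta)
            + math.cos(phi) * math.cos(delta) * math.sin(h0))

    # Example: 65° N at the June solstice, about 480 W m-2 with these constants.
    print(round(daily_mean_insolation(65.0, 90.0), 1))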

Variation

Total irradiance

Total solar irradiance (TSI)[11] changes slowly on decadal and longer timescales. The variation during solar cycle 21 was about 0.1% (peak-to-peak).[12] In contrast to older reconstructions,[13] most recent TSI reconstructions point to an increase of only about 0.05% to 0.1% between the Maunder Minimum and the present.[14][15][16]

Ultraviolet irradiance

Ultraviolet irradiance (EUV) varies by approximately 1.5 percent from solar maxima to minima, for 200 to 300 nm wavelengths.[17] However, a proxy study estimated that UV has increased by 3.0% since the Maunder Minimum.[18]

Milankovitch cycles

Milankovitch variations (figure)

Some variations in insolation are not due to solar changes but rather to the Earth moving between its perihelion and aphelion, or to changes in the latitudinal distribution of radiation. These orbital changes or Milankovitch cycles have caused radiance variations of as much as 25% (locally; global average changes are much smaller) over long periods. The most recent significant event was an axial tilt of 24° during boreal summer near the Holocene climatic optimum.

Obtaining a time series for \overline{Q}^{\mathrm{day}} for a particular time of year, and a particular latitude, is a useful application in the theory of Milankovitch cycles. For example, at the summer solstice, the declination δ is equal to the obliquity ε. The distance from the sun is
\frac{R_o}{R_E} = 1+e\cos(\theta-\varpi) = 1+e\cos(\tfrac{\pi}{2}-\varpi) = 1 + e \sin(\varpi)
For this summer solstice calculation, the role of the elliptical orbit is entirely contained within the important product e \sin(\varpi), the precession index, whose variation dominates the variations in insolation at 65° N when eccentricity is large. For the next 100,000 years, with variations in eccentricity being relatively small, variations in obliquity dominate.

Measurement

The space-based TSI record comprises measurements from more than ten radiometers spanning three solar cycles.

Technique

All modern TSI satellite instruments employ active cavity electrical substitution radiometry. This technique applies measured electrical heating to maintain an absorptive blackened cavity in thermal equilibrium while incident sunlight passes through a precision aperture of calibrated area. The aperture is modulated via a shutter. Accuracy uncertainties of <0.01% are required to detect long term solar irradiance variations, because expected changes are in the range 0.05 to 0.15 W m−2 per century.[19]

Intertemporal calibration

In orbit, radiometric calibrations drift for reasons including solar degradation of the cavity, electronic degradation of the heater, surface degradation of the precision aperture and varying surface emissions and temperatures that alter thermal backgrounds. These calibrations require compensation to preserve consistent measurements.[19]

For various reasons, the sources do not always agree. The Solar Radiation and Climate Experiment/Total Irradiance Measurement (SORCE/TIM) TSI values are lower than prior measurements by the Earth Radiometer Budget Experiment (ERBE) on the Earth Radiation Budget Satellite (ERBS), VIRGO on the Solar Heliospheric Observatory (SoHO) and the ACRIM instruments on the Solar Maximum Mission (SMM), Upper Atmosphere Research Satellite (UARS) and ACRIMSat. Pre-launch ground calibrations relied on component rather than system level measurements, since irradiance standards lacked absolute accuracies.[19]

Measurement stability involves exposing different radiometer cavities to different accumulations of solar radiation to quantify exposure-dependent degradation effects. These effects are then compensated for in the final data. Observational overlap permits corrections for both absolute offsets and validation of instrumental drifts.[19]

Uncertainties of individual observations exceed irradiance variability (∼0.1%). Thus, instrument stability and measurement continuity are relied upon to compute real variations.

Long-term radiometer drifts can be mistaken for irradiance variations that can be misinterpreted as affecting climate. Examples include the issue of the irradiance increase between cycle minima in 1986 and 1996, evident only in the ACRIM composite (and not the model) and the low irradiance levels in the PMOD composite during the 2008 minimum.

Despite the fact that ACRIM I, ACRIM II, ACRIM III, VIRGO and TIM all track degradation with redundant cavities, notable and unexplained differences remain in irradiance and the modeled influences of sunspots and faculae.

Persistent inconsistencies

Disagreement among overlapping observations indicates unresolved drifts that suggest the TSI record is not sufficiently stable to discern solar changes on decadal time scales. Only the ACRIM composite shows irradiance increasing by ∼1 W m−2 between 1986 and 1996; this change is also absent in the model.[19]

Recommendations to resolve the instrument discrepancies include validating optical measurement accuracy by comparing ground-based instruments to laboratory references, such as those at the National Institute of Standards and Technology (NIST); validating aperture area calibrations at NIST using spares from each instrument; and applying diffraction corrections from the view-limiting aperture.[19]

For ACRIM, NIST determined that diffraction from the view-limiting aperture contributes a 0.13% signal not accounted for in the three ACRIM instruments. This correction lowers the reported ACRIM values, bringing ACRIM closer to TIM. In ACRIM and all other instruments, the aperture is deep inside the instrument, with a larger view-limiting aperture at the front. Depending on edge imperfections this can directly scatter light into the cavity. This design admits two to three times the amount of light intended to be measured; if not completely absorbed or scattered, this additional light produces erroneously high signals. In contrast, TIM's design places the precision aperture at the front so that only desired light enters.[19]

Variations from other sources likely include an annual cycle that is nearly in phase with the Sun-Earth distance in ACRIM III data and 90-day spikes in the VIRGO data coincident with SoHO spacecraft maneuvers that were most apparent during the 2008 solar minimum.

TSI Radiometer Facility

TIM's high absolute accuracy creates new opportunities for measuring climate variables. TSI Radiometer Facility (TRF) is a cryogenic radiometer that operates in a vacuum with controlled light sources. L-1 Standards and Technology (LASP) designed and built the system, completed in 2008. It was calibrated for optical power against the NIST Primary Optical Watt Radiometer, a cryogenic radiometer that maintains the NIST radiant power scale to an uncertainty of 0.02% (1σ). As of 2011 TRF was the only facility that approached the desired <0.01% uncertainty for pre-launch validation of solar radiometers measuring irradiance (rather than merely optical power) at solar power levels and under vacuum conditions.[19]

TRF encloses both the reference radiometer and the instrument under test in a common vacuum system that contains a stationary, spatially uniform illuminating beam. A precision aperture with area calibrated to 0.0031% (1σ) determines the beam's measured portion. The test instrument's precision aperture is positioned in the same location, without optically altering the beam, for direct comparison to the reference. Variable beam power provides linearity diagnostics, and variable beam diameter diagnoses scattering from different instrument components.[19]

The Glory/TIM and PICARD/PREMOS flight instrument absolute scales are now traceable to the TRF in both optical power and irradiance. The resulting high accuracy reduces the consequences of any future gap in the solar irradiance record.[19]

Difference relative to TRF[19]

Instrument          Irradiance,       Irradiance,      Difference       Measured         Residual        Uncertainty
                    view-limiting     precision        attributable     optical power    irradiance
                    aperture          aperture         to scatter       error            agreement
                    overfilled        overfilled       error
SORCE/TIM ground    NA                −0.037%          NA               −0.037%          0.000%          0.032%
Glory/TIM flight    NA                −0.012%          NA               −0.029%          0.017%          0.020%
PREMOS-1 ground     −0.005%           −0.104%          0.098%           −0.049%          −0.104%         ∼0.038%
PREMOS-3 flight     0.642%            0.605%           0.037%           0.631%           −0.026%         ∼0.027%
VIRGO-2 ground      0.897%            0.743%           0.154%           0.730%           0.013%          ∼0.025%

2011 reassessment

The most probable value of TSI representative of solar minimum is 1360.8 ± 0.5 W m−2, lower than the earlier accepted value of 1365.4 ± 1.3 W m−2 established in the 1990s. The new value came from SORCE/TIM and radiometric laboratory tests. Scattered light is a primary cause of the higher irradiance values measured by earlier satellites, in which the precision aperture is located behind a larger, view-limiting aperture. TIM uses a view-limiting aperture that is smaller than the precision aperture, which precludes this spurious signal. The new estimate reflects better measurement rather than a change in solar output.[19]

A regression model-based split of the relative proportion of sunspot and facular influences from SORCE/TIM data accounts for 92% of observed variance and tracks the observed trends to within TIM's stability band. This agreement provides further evidence that TSI variations are primarily due to solar surface magnetic activity.[19]

Instrument inaccuracies add significant uncertainty in determining Earth's energy balance. The energy imbalance has been variously measured (during a deep solar minimum of 2005–2010) as +0.58 ± 0.15 W m−2,[20] +0.60 ± 0.17 W m−2[21] and +0.85 W m−2. Estimates from space-based measurements range from +3 to 7 W m−2. SORCE/TIM's lower TSI value reduces this discrepancy by 1 W m−2. The difference between the new lower TIM value and earlier TSI measurements corresponds to a climate forcing of −0.8 W m−2, which is comparable to the energy imbalance.[19]

2014 reassessment

In 2014 a new ACRIM composite was developed using the updated ACRIM3 record. It added corrections for scattering and diffraction revealed during recent testing at TRF and two algorithm updates. The algorithm updates more accurately account for instrument thermal behavior and parsing of shutter cycle data. These corrected a component of the quasi-annual signal and increased the signal to noise ratio, respectively. The net effect of these corrections decreased the average ACRIM3 TSI value without affecting the trending in the ACRIM Composite TSI.[22]

Differences between ACRIM and PMOD TSI composites are evident, but the most significant is the solar minimum-to-minimum trends during solar cycles 21-23. ACRIM established an increase of +0.037%/decade from 1980 to 2000 and a decrease thereafter. PMOD instead presents a steady decrease since 1978. Significant differences can also be seen during the peak of solar cycles 21 and 22. These arise from the fact that ACRIM uses the original TSI results published by the satellite experiment teams while PMOD significantly modifies some results to conform them to specific TSI proxy models. The implications of increasing TSI during the global warming of the last two decades of the 20th century are that solar forcing may be a significantly larger factor in climate change than represented in the CMIP5 general circulation climate models.[22]

Applications

Buildings

In construction, insolation is an important consideration when designing a building for a particular site.[23]


Insolation variation by month; 1984–1993 averages for January (top) and April (bottom)

The projection effect can be used to design buildings that are cool in summer and warm in winter, by providing vertical windows on the equator-facing side of the building (the south face in the northern hemisphere, or the north face in the southern hemisphere): this maximizes insolation in the winter months when the Sun is low in the sky and minimizes it in the summer when the Sun is high. (The Sun's north/south path through the sky spans 47 degrees through the year).

Solar power

Insolation figures are used as an input to worksheets to size solar power systems.[24] Because (except for asphalt solar collectors)[25] panels are almost always mounted at an angle[26] towards the sun, insolation must be adjusted to prevent estimates that are inaccurately low for winter and inaccurately high for summer.[27] In many countries the figures can be obtained from an insolation map or from insolation tables that reflect data over the prior 30–50 years. Photovoltaic panels are rated under standard conditions to determine the Wp rating (watts peak),[28] which can then be used with insolation to determine the expected output, adjusted by factors such as tilt, tracking and shading (which can be included to create the installed Wp rating).[29] Insolation values range from 800–950 kWh/(kWp·y) in Norway up to 2,900 kWh/(kWp·y) in Australia.
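As a simple illustration of how such figures are used (the function name and the 4 kWp array below are hypothetical, not from the article), a site's specific yield in kWh/(kWp·y) multiplies directly against the installed watts-peak rating:

    def annual_pv_output_kwh(wp_rating_w, specific_yield_kwh_per_kwp_year):
        """Rough expected annual energy: nameplate watts-peak times the site's
        insolation figure expressed in kWh/(kWp·y), as described above."""
        return (wp_rating_w / 1000.0) * specific_yield_kwh_per_kwp_year

    # A hypothetical 4 kWp array at sites near the two extremes quoted above.
    print(annual_pv_output_kwh(4000, 900))    # ~3600 kWh/y (Norway-like site)
    print(annual_pv_output_kwh(4000, 2900))   # ~11600 kWh/y (Australia-like site)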

Climate research

Irradiance plays a part in climate modeling and weather forecasting. A non-zero average global net radiation at the top of the atmosphere is indicative of Earth's thermal disequilibrium as imposed by climate forcing.

The impact of the lower 2014 TSI value on climate models is unknown. A few tenths of a percent change in the absolute TSI level is typically considered to be of minimal consequence for climate simulations. The new measurements require climate model parameter adjustments.

Experiments with GISS Model 3 investigated the sensitivity of model performance to the TSI absolute value during present and pre-industrial epochs, and describe, for example, how the irradiance reduction is partitioned between the atmosphere and surface and the effects on outgoing radiation.[19]

Assessing the impact of long-term irradiance changes on climate requires greater instrument stability[19] combined with reliable global surface temperature observations to quantify climate response processes to radiative forcing on decadal time scales. The observed 0.1% irradiance increase imparts 0.22 W m−2 climate forcing, which suggests a transient climate response of 0.6 °C per W m−2. This response is larger by a factor of 2 or more than in the IPCC-assessed 2008 models, possibly appearing in the models' heat uptake by the ocean.[19]

Space travel

Insolation is the primary variable affecting equilibrium temperature in spacecraft design and planetology.

Solar activity and irradiance measurement is a concern for space travel. For example, the American space agency, NASA, launched its Solar Radiation and Climate Experiment (SORCE) satellite with Solar Irradiance Monitors.[3]

Civil engineering

In civil engineering and hydrology, numerical models of snowmelt runoff use observations of insolation. This permits estimation of the rate at which water is released from a melting snowpack. Field measurement is accomplished using a pyranometer.

Conversion factors (multiply a value in the top-row unit by the factor to obtain the value in the side-column unit)

                    W/m2        kW·h/(m2·day)   sun hours/day   kWh/(m2·y)   kWh/(kWp·y)
W/m2                1           41.66666        41.66666        0.1140796    0.1521061
kW·h/(m2·day)       0.024       1               1               0.0027379    0.0036505
sun hours/day       0.024       1               1               0.0027379    0.0036505
kWh/(m2·y)          8.765813    365.2422        365.2422        1            1.333333
kWh/(kWp·y)         6.574360    273.9316        273.9316        0.75         1
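The conversions in the table reduce to a few constants. The following Python helpers (illustrative only; names are ours) reproduce the main entries:

    HOURS_PER_DAY  = 24.0
    HOURS_PER_YEAR = 24.0 * 365.2422   # about 8765.81 hours

    def w_per_m2_to_kwh_per_m2_day(p):
        """Average irradiance (W/m2) to daily insolation (kWh/m2/day); the
        result also equals 'sun hours per day' at 1 sun = 1000 W/m2."""
        return p * HOURS_PER_DAY / 1000.0

    def kwh_per_m2_day_to_w_per_m2(e):
        """Daily insolation (kWh/m2/day) back to average irradiance (W/m2)."""
        return e * 1000.0 / HOURS_PER_DAY

    def w_per_m2_to_kwh_per_m2_year(p):
        """Average irradiance (W/m2) to annual insolation (kWh/m2/y)."""
        return p * HOURS_PER_YEAR / 1000.0

    # Checks against the table: 1 kWh/(m2·day) is about 41.67 W/m2,
    # and 1 W/m2 sustained for a year is about 8.77 kWh/m2.
    print(round(kwh_per_m2_day_to_w_per_m2(1.0), 2))    # 41.67
    print(round(w_per_m2_to_kwh_per_m2_year(1.0), 2))   # 8.77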

Kinetic theory of gases


From Wikipedia, the free encyclopedia


The temperature of an ideal monatomic gas is proportional to the average kinetic energy of its atoms. The size of helium atoms relative to their spacing is shown to scale under 1950 atmospheres of pressure. The atoms have a certain average speed, slowed down here two trillion-fold from room temperature.
Kinetic theory defines temperature in its own way, not identical with the thermodynamic definition.[1]

Under a microscope, the molecules making up a liquid are too small to be visible, but the jittering motion of pollen grains or dust particles can be seen. Known as Brownian motion, it results directly from collisions between the grains or particles and liquid molecules. As analyzed by Albert Einstein in 1905, this experimental evidence for kinetic theory is generally seen as having confirmed the concrete material existence of atoms and molecules.

Assumptions

The theory for ideal gases makes the following assumptions:
  • The gas consists of very small particles known as molecules. Their size is such that the summed volume of the individual gas molecules is negligible compared to the volume of the smallest open ball containing all of them. This is equivalent to stating that the average distance separating the gas particles is large compared to their size.
  • These particles have the same mass.
  • The number of molecules is so large that statistical treatment can be applied.
  • These molecules are in constant, random, and rapid motion.
  • The rapidly moving particles constantly collide among themselves and with the walls of the container. All these collisions are perfectly elastic. This means the molecules are considered to be perfectly spherical in shape and elastic in nature.
  • Except during collisions, the interactions among molecules are negligible. (That is, they exert no forces on one another.)
This implies:
1. Relativistic effects are negligible.
2. Quantum-mechanical effects are negligible. This means that the inter-particle distance is much larger than the thermal de Broglie wavelength and the molecules are treated as classical objects.
3. Because of the above two, their dynamics can be treated classically. This means the equations of motion of the molecules are time-reversible.
  • The average kinetic energy of the gas particles depends only on the absolute temperature of the system. The kinetic theory has its own definition of temperature, not identical with the thermodynamic definition.
  • The time during collision of molecule with the container's wall is negligible as compared to the time between successive collisions.
  • Because they have mass, the gas molecules will be affected by gravity.
More modern developments relax these assumptions and are based on the Boltzmann equation. These can accurately describe the properties of dense gases, because they include the volume of the molecules. The necessary assumptions are the absence of quantum effects, molecular chaos and small gradients in bulk properties. Expansions to higher orders in the density are known as virial expansions.

An important book on kinetic theory is that by Chapman and Cowling.[1] An important approach to the subject is called Chapman–Enskog theory.[2] There have been many modern developments and there is an alternative approach developed by Grad based on moment expansions.[3] In the other limit, for extremely rarefied gases, the gradients in bulk properties are not small compared to the mean free paths. This is known as the Knudsen regime and expansions can be performed in the Knudsen number.

Properties

Pressure and kinetic energy

Pressure is explained by kinetic theory as arising from the force exerted by molecules or atoms impacting on the walls of a container. Consider a gas of N molecules, each of mass m, enclosed in a cuboidal container of volume V=L3. When a gas molecule collides with the wall of the container perpendicular to the x coordinate axis and bounces off in the opposite direction with the same speed (an elastic collision), then the momentum lost by the particle and gained by the wall is:
\Delta p = p_{i,x} - p_{f,x} = p_{i,x} - (-p_{i,x}) = 2 p_{i,x} = 2 m v_x\,
where vx is the x-component of the initial velocity of the particle.

The particle impacts one specific side wall once every
\Delta t = \frac{2L}{v_x}
(where L is the distance between opposite walls).

The force due to this particle is:
F = \frac{\Delta p}{\Delta t} = \frac{m v_x^2}{L}.
The total force on the wall is
F = \frac{Nm\overline{v_x^2}}{L}
where the bar denotes an average over the N particles. Since the particles are assumed to move in random directions, the average of the squared velocity along each of three mutually perpendicular directions must be equal (even though the components of individual particles are arbitrary). That is,
 \overline{v_x^2} = \overline{v^2}/3 .
We can rewrite the force as
F = \frac{Nm\overline{v^2}}{3L}.
This force is exerted on an area L2. Therefore the pressure of the gas is
P = \frac{F}{L^2} = \frac{Nm\overline{v^2}}{3V}
where V=L3 is the volume of the box. The ratio n=N/V is the number density of the gas (the mass density ρ=nm is less convenient for theoretical derivations on atomic level). Using n, we can rewrite the pressure as
 P =  \frac{n m \overline{v^2}}{3}.
This is a first non-trivial result of the kinetic theory because it relates pressure, a macroscopic property, to the average (translational) kinetic energy per molecule {1 \over 2} m\overline{v^2} which is a microscopic property.
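A minimal numerical sketch of this result in Python (the particle count, box size and speed below are assumed values, not from the article):

    m = 6.65e-27      # particle mass in kg, roughly a helium atom (assumed)
    N = 1.0e23        # number of particles in the box (assumed)
    L = 0.1           # box edge length in m (assumed)
    V = L**3          # container volume, m^3

    v_rms   = 1370.0          # assumed root-mean-square speed, m/s
    mean_v2 = v_rms**2        # mean square speed

    n = N / V                         # number density
    P = n * m * mean_v2 / 3.0         # pressure from P = n m <v^2> / 3

    print(P)    # about 4.2e5 Pa for these illustrative numbers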

Temperature and kinetic energy

Rewriting the above result for the pressure as PV = {Nm\overline{v^2}\over 3} , we may combine it with the ideal gas law
PV = N k_B T \qquad (1)
where \displaystyle k_B is the Boltzmann constant and \displaystyle T the absolute temperature defined by the ideal gas law, to obtain
k_B T  =   {m\overline{v^2}\over 3} ,
which leads to the expression of the average kinetic energy of a molecule,
   \displaystyle     \frac {1} {2} m\overline{v^2} =  \frac {3} {2}  k_B T.
The kinetic energy of the system is N times that of a molecule, namely  K= \frac {1} {2} N m \overline{v^2} . Then the temperature \displaystyle T takes the form
T = \frac{m\overline{v^2}}{3 k_B} \qquad (2)
which becomes
T = \frac{2}{3} \frac{K}{N k_B} \qquad (3)
Eq.(3) is one important result of the kinetic theory: The average molecular kinetic energy is proportional to the ideal gas law's absolute temperature. From Eq.(1) and Eq.(3), we have

PV = \frac{2}{3} K \qquad (4)
Thus, the product of pressure and volume per mole is proportional to the average (translational) molecular kinetic energy.

Eq.(1) and Eq.(4) are called the "classical results", which could also be derived from statistical mechanics; for more details see [4].

Since there are \displaystyle 3N degrees of freedom in a monatomic-gas system with \displaystyle N particles, the kinetic energy per degree of freedom per molecule is

\frac{K}{3N} = \frac{k_B T}{2} \qquad (5)
In the kinetic energy per degree of freedom, the constant of proportionality with temperature is 1/2 times the Boltzmann constant. In addition, the temperature will decrease when the pressure drops to a certain point. This result is related to the equipartition theorem.

As noted in the article on heat capacity, diatomic gases should have 7 degrees of freedom, but the lighter gases act as if they have only 5.

Thus the kinetic energy per kelvin (monatomic ideal gas) is:
  • per mole: 12.47 J
  • per molecule: 20.7 yJ = 129 μeV.
At standard temperature (273.15 K), we get:
  • per mole: 3406 J
  • per molecule: 5.65 zJ = 35.2 meV.
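These figures follow directly from (3/2)k_B and (3/2)R; a short Python check (illustrative, using standard values of the constants):

    k_B = 1.380649e-23       # Boltzmann constant, J/K
    N_A = 6.02214076e23      # Avogadro constant, 1/mol
    R   = k_B * N_A          # gas constant, J/(mol·K)
    eV  = 1.602176634e-19    # joules per electronvolt

    T = 273.15               # standard temperature, K

    print(1.5 * R)                         # ~12.47 J/(mol·K)
    print(1.5 * R * T)                     # ~3406 J per mole at 273.15 K
    print(1.5 * k_B)                       # ~2.07e-23 J/K (20.7 yJ, ~129 µeV)
    print(1.5 * k_B * T)                   # ~5.65e-21 J (5.65 zJ)
    print(1.5 * k_B * T / eV * 1000.0)     # ~35 meV per molecule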

Collisions with container

One can calculate the number of atomic or molecular collisions with a wall of a container per unit area per unit time.

Assuming an ideal gas, a derivation[5] results in an equation for total number of collisions per unit time per area:
A = \frac{1}{4}\frac{N}{V} v_{avg} = \frac{n}{4} \sqrt{\frac{8 k_{B} T}{\pi m}} . \,
This quantity is also known as the "impingement rate" in vacuum physics.

Speed of molecules

From the kinetic energy formula it can be shown that
v_\mathrm{rms} = \sqrt {{3 k_{B} T}\over{m}}
with v in m/s, T in kelvins, and m the molecular mass (kg). The most probable speed is 81.6% of the rms speed, and the mean speed is 92.1% of it (for an isotropic distribution of speeds).
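As a worked example (nitrogen at 300 K; the numbers are illustrative, not from the article), the rms, mean and most probable speeds, together with the impingement rate from the previous subsection at roughly atmospheric density, can be evaluated as follows:

    import math

    k_B  = 1.380649e-23     # Boltzmann constant, J/K
    m_N2 = 4.65e-26         # mass of an N2 molecule, kg (illustrative case)
    T    = 300.0            # temperature, K

    v_rms  = math.sqrt(3.0 * k_B * T / m_N2)              # root-mean-square speed
    v_mean = math.sqrt(8.0 * k_B * T / (math.pi * m_N2))  # mean speed (~92.1% of rms)
    v_prob = math.sqrt(2.0 * k_B * T / m_N2)              # most probable speed (~81.6% of rms)

    print(round(v_rms), round(v_mean), round(v_prob))     # ~517, ~476, ~422 m/s

    # Impingement rate A = (n/4) v_mean from the previous subsection,
    # for a number density corresponding to about 1 atm at 300 K.
    n = 101325.0 / (k_B * T)      # ideal-gas number density, m^-3
    A = n * v_mean / 4.0
    print(A)                      # ~2.9e27 collisions per m^2 per s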

Transport properties

The kinetic theory of gases deals not only with gases in thermodynamic equilibrium, but also very importantly with gases not in thermodynamic equilibrium. This means considering what are known as 'transport properties', such as viscosity and thermal conductivity.

History

In approximately 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed, on a small scale, of rapidly moving atoms all bouncing off each other.[6] This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotelian ideas were dominant.


Hydrodynamica front cover

In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic.[7]:36–37

Other pioneers of the kinetic theory, whose work was largely neglected by their contemporaries, were Mikhail Lomonosov (1747),[8] Georges-Louis Le Sage (ca. 1780, published 1818),[9] John Herapath (1816)[10] and John James Waterston (1843),[11] who connected their research with the development of mechanical explanations of gravitation. In 1856 August Krönig (probably after reading a paper by Waterston) created a simple gas-kinetic model, which considered only the translational motion of the particles.[12]

In 1857 Rudolf Clausius, according to his own words independently of Krönig, developed a similar but much more sophisticated version of the theory, which included translational and, in contrast to Krönig, also rotational and vibrational molecular motions. In the same work he introduced the concept of the mean free path of a particle.[13] In 1859, after reading a paper by Clausius, James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics.[14] In his 1873 thirteen-page article 'Molecules', Maxwell states: "we are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases."[15] In 1871, Ludwig Boltzmann generalized Maxwell's achievement and formulated the Maxwell–Boltzmann distribution. The logarithmic connection between entropy and probability was also first stated by him.

In the beginning of the twentieth century, however, atoms were considered by many physicists to be purely hypothetical constructs, rather than real objects. An important turning point was Albert Einstein's (1905)[16] and Marian Smoluchowski's (1906)[17] papers on Brownian motion, which succeeded in making certain accurate quantitative predictions based on the kinetic theory.

Thursday, October 29, 2015

Green Climate Fund


From Wikipedia, the free encyclopedia
Green Climate Fund
Abbreviation GCF
Formation 2010
Legal status Active
Headquarters Songdo, Incheon, South Korea
Website www.gcfund.org

The Green Climate Fund (GCF) is a fund within the framework of the UNFCCC founded as a mechanism to assist developing countries in adaptation and mitigation practices to counter climate change. The GCF is based in the new Songdo district of Incheon, South Korea. It is governed by a Board of 24 members and initially supported by a Secretariat.

‘The Green Climate Fund will support projects, programmes, policies and other activities in developing country Parties using thematic funding windows’.[1] It is intended to be the centrepiece of efforts to raise Climate Finance of $100 billion a year by 2020. This is not an official figure for the size of the Fund itself, however. Disputes also remain as to whether the funding target will be based on public sources, or whether "leveraged" private finance will be counted towards the total.[2] Only a fraction of this sum had been pledged as of July 2013, mostly to cover start-up costs.

According to the Climate & Development Knowledge Network, at the third meeting of the Board in Berlin, Germany, in March 2013, members agreed on how to move forward with the fund’s Business Model Framework (BMF). They identified the need to assess various options for how nations could access the fund, approaches for involving the private sector, plus ways to measure results and ensure requests for monies are country-driven.[3] At the fourth Board meeting in Songdo, South Korea, in June 2013, Hela Cheikhrouhou, a Tunisian national, was selected to become the Fund's first Executive Director.[4] "Resource mobilisation" (establishing a process for funding pledges) is expected to be the most contentious issue for the fifth Board meeting in Paris, France, in October 2013.[5]

History

The Copenhagen Accord, established during the 15th Conference of the Parties (COP-15) in Copenhagen in 2009, mentioned the "Copenhagen Green Climate Fund". The fund was formally established during the 2010 United Nations Climate Change Conference in Cancun and is a fund within the UNFCCC framework.[6] Its governing instrument was adopted at the 2011 UN Climate Change Conference (COP 17) in Durban, South Africa.[7]

Organization

During COP-16 in Cancun, the matter of governing the GCF was entrusted to the newly founded Green Climate Fund Board, and the World Bank was chosen as the temporary trustee.[6] To develop a design for the functioning of the GCF, the ‘Transitional Committee for the Green Climate Fund’ was established in Cancun too. The committee met four times throughout the year 2011, and submitted a report to the 17th COP in Durban, South Africa. Based on this report, the COP decided that the ‘GCF would become an operating entity of the financial mechanism’ of the UNFCCC,[8] and that on COP-18 in 2012, the necessary rules should be adopted to ensure that the GCF ‘is accountable to and functions under the guidance of the COP’.[8] Researchers at the Overseas Development Institute state that without this last minute agreement on a governing instrument for the GCF, the "African COP" would have been considered a failure.[9] Furthermore, the GCF Board was tasked with developing rules and procedures for the disbursement of funds, ensuring that these should be consistent with the national objectives of the countries where projects and programmes will be taking place. The GCF Board was also charged with establishing an independent secretariat and the permanent trustee of the GCF.[8]

Sources of finance

The Green Climate Fund is intended to be the centrepiece of Long Term Financing under the UNFCCC, which has set itself a goal of raising $100 billion per year by 2020. Uncertainty over where this money would come from led UN Secretary-General Ban Ki-moon to establish a High Level Advisory Group on Climate Change Financing (AGF) in February 2010.
There is no formal connection between this Panel and the GCF, although its report is one source for debates on "resource mobilisation" for the GCF, an item that will be discussed at the Fund's October 2013 Board meeting.[10]

The lack of pledged funds and potential reliance on the private sector is controversial and has been criticized by developing countries.[11]

Pledges to the fund reached $10.2 billion on May 28, 2015.[12]

Issues

The process of designing the GCF has raised several issues. These include ongoing questions on how funds will be raised,[13] the role of the private sector,[14] the level of "country ownership" of resources,[15] and the transparency of the Board itself.[16] In addition, questions have been raised about the need for yet another new international climate institution which may further fragment public dollars that are put toward mitigation and adaptation annually.[17]

The Fund is also pledged to offer "balanced" support to adaptation and mitigation, although there is some concern amongst developing countries that inadequate adaptation financing will be offered, in particular if the fund is reliant on "leveraging" private sector finance.[18]

Role of the private sector

One of the most controversial aspects of the GCF concerns the creation of the Fund's Private Sector Facility (PSF). Many of the developed countries represented on the GCF board advocate a PSF that appeals to capital markets, in particular the pension funds and other institutional investors that control trillions of dollars that pass through Wall Street and other financial centers. They hope that the Fund will ultimately use a broad range of financial instruments.[19]

However, several developing countries and non-governmental organizations have suggested that the PSF should focus on "pro-poor climate finance" that addresses the difficulties faced by micro-, small-, and medium-sized enterprises in developing countries. This emphasis on encouraging the domestic private sector is also written into the GCF’s Governing Instrument, its founding document.[20]

Additionality of funds

The Cancun agreements clearly specify that the funds provided to the developing countries as climate finance, including through the GCF, should be ‘new’ and ‘additional’ to existing development aid.[6]
The condition that funds be new means that pledges should come on top of those made in previous years. As for additionality, there is no strict definition of the term, which has already led to serious problems in evaluating the additionality of emission reductions in CDM projects, producing counterproductive outcomes and even fraud.[21][22]

A lack of stakeholder involvement

Using the money in the right way in order to bring about actual change on the ground is one of the biggest challenges ahead. Many academics argue that, to do this efficiently, all stakeholders should be involved in the process instead of using a top-down approach. They point to the fact that, without their input, it is harder to achieve the targets set, and projects may even miss their actual purpose.[18][23][24][25][26] A group of researchers associated with the Australian National University[27] calls for the foundation of so-called 'National Implementing Entities' (NIE) in each country, which would become responsible for 'the implementation of sub-national projects'.[27] This would avoid national governments getting too involved, because in the past they 'often hindered the flow of international support to subnational scale reform for sustainable development'.[27] Overall, this view on the need for more stakeholder involvement can be framed within the movement in environmental governance calling for a shift from traditional ways of government to governance.[28] The Climate & Development Knowledge Network is funding a research project that aims to help the GCF Board by analysing how best to allocate resources among countries. The project will research and present four case studies of how federal or central government money is presently distributed to sub-national entities. Chosen for the diversity of their underlying political systems, these are: China, India, Switzerland and the USA.[29]

Failure to ban fossil fuel funding under climate finance

At its board meeting in South Korea held in March 2015, the GCF declined to adopt an explicit ban on fossil fuel projects; Japan, China, and Saudi Arabia opposed the ban. "It's like a torture convention that doesn't forbid torture," Karen Orenstein, a campaigner for Friends of the Earth US who attended the meeting, told the Guardian. "Honestly it should be a no-brainer at this point."[30][31]

Accredited entities

Wednesday, October 28, 2015

van der Waals force


From Wikipedia, the free encyclopedia


Geckos can stick to walls and ceilings because of Van der Waals forces; see the section below.

In physical chemistry, the van der Waals force (or van der Waals interaction), named after Dutch scientist Johannes Diderik van der Waals, is the sum of the attractive or repulsive forces between molecules (or between parts of the same molecule) other than those due to covalent bonds or to the electrostatic interaction of ions with one another, with neutral molecules, or with charged molecules.[1] The resulting van der Waals forces can be attractive or repulsive.[2]

The term includes:
  • the force between two permanent dipoles (Keesom force);
  • the force between a permanent dipole and a corresponding induced dipole (Debye force);
  • the force between two instantaneously induced dipoles (London dispersion force).
It is also sometimes used loosely as a synonym for the totality of intermolecular forces. Van der Waals forces are relatively weak compared to covalent bonds, but play a fundamental role in fields as diverse as supramolecular chemistry, structural biology, polymer science, nanotechnology, surface science, and condensed matter physics. Van der Waals forces define many properties of organic compounds, including their solubility in polar and non-polar media.

In low molecular weight alcohols, the hydrogen-bonding properties of the polar hydroxyl group dominate other weaker van der Waals interactions. In higher molecular weight alcohols, the properties of the nonpolar hydrocarbon chain(s) dominate and define the solubility. Van der Waals forces quickly vanish at longer distances between interacting molecules.

In 2012, the first direct measurements of the strength of the van der Waals force for a single organic molecule bound to a metal surface were made via atomic force microscopy and corroborated with density functional calculations.[3]

Definition


Attractive interactions resulting from dipole-dipole interaction of two hydrogen chloride molecules

Van der Waals forces include attractions and repulsions between atoms, molecules, and surfaces, as well as other intermolecular forces. They differ from covalent and ionic bonding in that they are caused by correlations in the fluctuating polarizations of nearby particles (a consequence of quantum dynamics[4]).

Intermolecular forces have four major contributions:
  1. A repulsive component resulting from the Pauli exclusion principle that prevents the collapse of molecules.
  2. Attractive or repulsive electrostatic interactions between permanent charges (in the case of molecular ions), dipoles (in the case of molecules without inversion center), quadrupoles (all molecules with symmetry lower than cubic), and in general between permanent multipoles. The electrostatic interaction is sometimes called the Keesom interaction or Keesom force after Willem Hendrik Keesom.
  3. Induction (also known as polarization), which is the attractive interaction between a permanent multipole on one molecule with an induced multipole on another. This interaction is sometimes called Debye force after Peter J.W. Debye.
  4. Dispersion (usually named after Fritz London), which is the attractive interaction between any pair of molecules, including non-polar atoms, arising from the interactions of instantaneous multipoles.
Returning to nomenclature, different texts refer to different things using the term "van der Waals force." Some texts describe the van der Waals force as the totality of forces (including repulsion); others mean all the attractive forces (and then sometimes distinguish van der Waals-Keesom, van der Waals-Debye, and van der Waals-London).

All intermolecular/van der Waals forces are anisotropic (except those between two noble gas atoms), which means that they depend on the relative orientation of the molecules. The induction and dispersion interactions are always attractive, irrespective of orientation, but the electrostatic interaction changes sign upon rotation of the molecules. That is, the electrostatic force can be attractive or repulsive, depending on the mutual orientation of the molecules. When molecules are in thermal motion, as they are in the gas and liquid phase, the electrostatic force is averaged out to a large extent, because the molecules thermally rotate and thus probe both repulsive and attractive parts of the electrostatic force. Sometimes this effect is expressed by the statement that "random thermal motion around room temperature can usually overcome or disrupt them" (which refers to the electrostatic component of the van der Waals force). Clearly, the thermal averaging effect is much less pronounced for the attractive induction and dispersion forces.

The Lennard-Jones potential is often used as an approximate model for the isotropic part of a total (repulsion plus attraction) van der Waals force as a function of distance.

Van der Waals forces are responsible for certain cases of pressure broadening (van der Waals broadening) of spectral lines and the formation of van der Waals molecules. The London-van der Waals forces are related to the Casimir effect for dielectric media, the former being the microscopic description of the latter bulk property. The first detailed calculations of this were done in 1955 by E. M. Lifshitz.[5] A more general theory of van der Waals forces has also been developed.[6][7]

The main characteristics of van der Waals forces are:[8]
  • They are weaker than normal covalent and ionic bonds.
  • Van der Waals forces are additive and cannot be saturated.
  • They have no directional characteristic.
  • They are all short-range forces, so only interactions between the nearest particles need to be considered (rather than all particles). The van der Waals attraction is greater when the molecules are closer together.
  • Van der Waals forces are independent of temperature, except for dipole–dipole interactions.

London dispersion force

London dispersion forces, named after the German-American physicist Fritz London, are weak intermolecular forces that arise from the interactive forces between instantaneous multipoles in molecules without permanent multipole moments. These forces dominate the interaction of non-polar molecules, and are often more significant than Keesom and Debye forces in polar molecules. London dispersion forces are also known as dispersion forces, London forces, or instantaneous dipole–induced dipole forces. The strength of London dispersion forces is proportional to the polarizability of the molecule, which in turn depends on the total number of electrons and the area over which they are spread. Any connection between the strength of London dispersion forces and mass is coincidental.

Van der Waals forces between macroscopic objects

For macroscopic bodies with known volumes and numbers of atoms or molecules per unit volume, the total van der Waals force is often computed based on the "microscopic theory" as the sum over all interacting pairs. It is necessary to integrate over the total volume of the object, which makes the calculation dependent on the objects' shapes. For example, the van der Waals' interaction energy between spherical bodies of radii R1 and R2 and with smooth surfaces was approximated in 1937 by Hamaker[9] (using London's famous 1937 equation for the dispersion interaction energy between atoms/molecules[10] as the starting point) by:
U(z;R_{1},R_{2}) = -\frac{A}{6}\left(\frac{2R_{1}R_{2}}{z^2 - (R_{1} + R_{2})^2} + \frac{2R_{1}R_{2}}{z^2 - (R_{1} - R_{2})^2} + \ln\left[\frac{z^2-(R_{1}+R_{2})^2}{z^2-(R_{1}-R_{2})^2}\right]\right) \qquad (1)
where A is the Hamaker coefficient, which is a constant (~10−19 − 10−20 J) that depends on the material properties (it can be positive or negative in sign depending on the intervening medium), and z is the center-to-center distance; i.e., the sum of R1, R2, and r (the distance between the surfaces): \ z = R_{1} + R_{2} + r.

In the limit of close-approach, the spheres are sufficiently large compared to the distance between them; i.e., \ r \ll R_{1} or R_{2}, so that equation (1) for the potential energy function simplifies to:
U(r;R_{1},R_{2}) = -\frac{A R_{1} R_{2}}{(R_{1}+R_{2})\,6r} \qquad (2)
The van der Waals force between two spheres of constant radii (R1 and R2 are treated as parameters) is then a function of separation, since the force on an object is the negative of the derivative of the potential energy function, F_{VW}(r) = -\frac{d}{dr}U(r). This yields:
F_{VW}(r) = -\frac{A R_{1} R_{2}}{(R_{1}+R_{2})\,6r^2} \qquad (3)
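A quick numerical sketch in Python of the close-approach expressions (2) and (3) for two equal spheres (the Hamaker coefficient, radii and separation below are assumed, typical-order values):

    A  = 1.0e-19        # Hamaker coefficient, J (assumed, typical order of magnitude)
    R1 = 1.0e-6         # sphere radius, m (assumed 1 µm particle)
    R2 = 1.0e-6         # sphere radius, m (assumed 1 µm particle)
    r  = 1.0e-9         # surface-to-surface separation, m (assumed)

    U = -A * R1 * R2 / ((R1 + R2) * 6.0 * r)       # interaction energy, eq. (2)
    F = -A * R1 * R2 / ((R1 + R2) * 6.0 * r**2)    # force, eq. (3); negative = attractive

    print(U)    # about -8.3e-18 J
    print(F)    # about -8.3e-9 N, a nanonewton-scale attraction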
The van der Waals forces between objects with other geometries using the Hamaker model have been published in the literature.[11][12][13]

From the expression above, it is seen that the van der Waals force decreases with decreasing size of the bodies (R). Nevertheless, the strength of inertial forces, such as gravity and drag/lift, decreases to a greater extent. Consequently, the van der Waals forces become dominant for collections of very small particles such as very fine-grained dry powders (where there are no capillary forces present), even though the force of attraction is smaller in magnitude than it is for larger particles of the same substance. Such powders are said to be cohesive, meaning they are not as easily fluidized or pneumatically conveyed as their more coarse-grained counterparts. Generally, free flow occurs with particles greater than about 250 μm.

The van der Waals force of adhesion is also dependent on the surface topography. If there are surface asperities, or protuberances, that result in a greater total area of contact between two particles or between a particle and a wall, this increases the van der Waals force of attraction as well as the tendency for mechanical interlocking.

The microscopic theory assumes pairwise additivity. It neglects many-body interactions and retardation. A more rigorous approach accounting for these effects, called the "macroscopic theory" was developed by Lifshitz in 1956.[14] Langbein derived a much more cumbersome "exact" expression in 1970 for spherical bodies within the framework of the Lifshitz theory[15] while a simpler macroscopic model approximation had been made by Derjaguin as early as 1934.[16] Expressions for the van der Waals forces for many different geometries using the Lifshitz theory have likewise been published.

Use by geckos and spiders


Gecko climbing a glass surface

The ability of geckos – which can hang on a glass surface using only one toe – to climb on sheer surfaces has been attributed to the van der Waals forces between these surfaces and the spatulae, or microscopic projections, which cover the hair-like setae found on their footpads.[17][18] A later study suggested that capillary adhesion might play a role,[19] but that hypothesis has been rejected by more recent studies.[20][21][22] Efforts were made in 2008 to create a dry glue that exploits the effect,[23] and in 2011 an adhesive tape based on similar principles was demonstrated.[24] In 2011, a paper was published relating the effect to both velcro-like hairs and the presence of lipids in gecko footprints.[25]

Some spiders have convergently evolved similar setae on their scopulae or scopula pads, enabling them to climb or hang upside-down from extremely smooth surfaces such as glass or porcelain.[26]

In modern technology

In May 2014, DARPA demonstrated the latest iteration of its Geckskin by having a 100 kg researcher (saddled with 20 kg of recording gear) scale an 8m tall glass wall using only two climbing paddles. Tests are ongoing, but DARPA hopes one day to make the technology available for military use.

Memory and trauma

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Memory_and_trauma ...