
Tuesday, April 7, 2015

Greenhouse gas


From Wikipedia, the free encyclopedia
Greenhouse effect schematic showing energy flows between space, the atmosphere, and Earth's surface. Energy influx and emittance are expressed in watts per square meter (W/m2).

A greenhouse gas (sometimes abbreviated GHG) is a gas in an atmosphere that absorbs and emits radiation within the thermal infrared range. This process is the fundamental cause of the greenhouse effect.[1] The primary greenhouse gases in the Earth's atmosphere are water vapor, carbon dioxide, methane, nitrous oxide, and ozone. Greenhouse gases greatly affect the temperature of the Earth; without them, Earth's surface would average about 33 °C colder, which is about 59 °F below the present average of 14 °C (57 °F).[2][3][4]

Since the beginning of the Industrial Revolution (taken as the year 1750), the burning of fossil fuels and extensive clearing of native forests have contributed to a 40% increase in the atmospheric concentration of carbon dioxide, from 280 ppm in 1750 to 392.6 ppm in 2012.[5][6] It has now reached 400 ppm in the northern hemisphere. This increase has occurred despite the uptake of a large portion of the emissions by various natural "sinks" involved in the carbon cycle.[7][8] Anthropogenic carbon dioxide (CO2) emissions (i.e., emissions produced by human activities) come from combustion of carbon-based fuels, principally wood, coal, oil, and natural gas.[9] If greenhouse gas emissions continue at current rates, Earth System Models project that the Earth's surface temperature could exceed historical analogs as early as 2047, affecting most ecosystems on Earth and the livelihoods of over 3 billion people worldwide.[10] Greenhouse gases also drive ocean bio-geochemical changes with broad ramifications in marine systems.[11]

In the Solar System, the atmospheres of Venus, Mars, and Titan also contain gases that cause a greenhouse effect, though Titan's atmosphere has an anti-greenhouse effect that reduces the warming.

Gases in Earth's atmosphere

Greenhouse gases

Atmospheric absorption and scattering at different wavelengths of electromagnetic waves. The largest absorption band of carbon dioxide is in the infrared.

Greenhouse gases are those that can absorb and emit infrared radiation,[1] but not radiation in or near the visible spectrum. In order, the most abundant greenhouse gases in Earth's atmosphere are:
  • water vapor (H2O)
  • carbon dioxide (CO2)
  • methane (CH4)
  • nitrous oxide (N2O)
  • ozone (O3)
  • chlorofluorocarbons (CFCs)
  • hydrochlorofluorocarbons (HCFCs)

Atmospheric concentrations of greenhouse gases are determined by the balance between sources (emissions of the gas from human activities and natural systems) and sinks (the removal of the gas from the atmosphere by conversion to a different chemical compound).[12] The proportion of an emission remaining in the atmosphere after a specified time is the "airborne fraction" (AF). More precisely, the annual AF is the ratio of the atmospheric increase in a given year to that year's total emissions. For CO2 the AF over the last 50 years (1956–2006) has been increasing at 0.25 ± 0.21%/year.[13]
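
To make the airborne-fraction bookkeeping concrete, here is a minimal Python sketch of the annual AF calculation; the 2.13 GtC-per-ppm conversion factor is a standard approximation, and the input figures are invented for illustration.

    # Annual airborne fraction (AF): ratio of the atmospheric increase in a
    # given year to that year's total emissions. A minimal sketch; the input
    # numbers below are illustrative assumptions, not measured values.

    PPM_TO_GTC = 2.13  # approx. gigatonnes of carbon per 1 ppm of atmospheric CO2

    def airborne_fraction(delta_ppm, emissions_gtc):
        """Atmospheric increase (ppm) over total emissions (GtC), as a fraction."""
        return (delta_ppm * PPM_TO_GTC) / emissions_gtc

    # Example: a 2 ppm rise in a year with ~9.5 GtC of total emissions
    print(round(airborne_fraction(2.0, 9.5), 2))  # ~0.45, i.e., about 45% stays airborne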

Non-greenhouse gases

Although contributing to many other physical and chemical reactions, the major atmospheric constituents, nitrogen (N2), oxygen (O2), and argon (Ar), are not greenhouse gases. This is because molecules containing two atoms of the same element, such as N2 and O2, and monatomic molecules, such as argon, have no net change in their dipole moment when they vibrate and hence are almost totally unaffected by infrared radiation. Although molecules containing two atoms of different elements, such as carbon monoxide (CO) or hydrogen chloride (HCl), absorb IR, these molecules are short-lived in the atmosphere owing to their reactivity and solubility. Because they do not contribute significantly to the greenhouse effect, they are usually omitted when discussing greenhouse gases.

Indirect radiative effects

The false colors in this image represent levels of carbon monoxide in the lower atmosphere, ranging from about 390 parts per billion (dark brown pixels), to 220 parts per billion (red pixels), to 50 parts per billion (blue pixels).[14]

Some gases have indirect radiative effects (whether or not they are greenhouse gases themselves). This happens in two main ways. One way is that when they break down in the atmosphere they produce another greenhouse gas. For example, methane and carbon monoxide (CO) are oxidized to give carbon dioxide (and methane oxidation also produces water vapor; that will be considered below). Oxidation of CO to CO2 directly produces an unambiguous increase in radiative forcing, although the reason is subtle. The peak of the thermal IR emission from the Earth's surface is very close to a strong vibrational absorption band of CO2 (667 cm−1). By contrast, the single CO vibrational band absorbs IR only at much higher frequencies (2145 cm−1), where the ~300 K thermal emission of the surface is at least a factor of ten lower. Oxidation of methane to CO2, which requires reactions with the OH radical, produces an instantaneous reduction in radiative forcing, since CO2 is a weaker greenhouse gas than methane; but CO2 has a longer lifetime. As described below, this is not the whole story, since the oxidations of CO and CH4 are intertwined: both consume OH radicals. In any case, the calculation of the total radiative effect needs to include both the direct and indirect forcing.

A second type of indirect effect happens when chemical reactions in the atmosphere involving these gases change the concentrations of greenhouse gases. For example, the destruction of non-methane volatile organic compounds (NMVOCs) in the atmosphere can produce ozone. The size of the indirect effect can depend strongly on where and when the gas is emitted.[15]

Methane has a number of indirect effects in addition to forming CO2. Firstly, the main chemical that destroys methane in the atmosphere is the hydroxyl radical (OH). Methane reacts with OH, so more methane means that the concentration of OH goes down. Effectively, methane increases its own atmospheric lifetime and therefore its overall radiative effect. The second effect is that the oxidation of methane can produce ozone. Thirdly, as well as making CO2, the oxidation of methane produces water; this is a major source of water vapor in the stratosphere, which is otherwise very dry. CO and NMVOCs also produce CO2 when they are oxidized. They remove OH from the atmosphere, and this leads to higher concentrations of methane. The surprising effect of this is that the global warming potential of CO is three times that of CO2.[16] The same process that converts NMVOCs to carbon dioxide can also lead to the formation of tropospheric ozone. Halocarbons have an indirect effect because they destroy stratospheric ozone. Finally, hydrogen can lead to ozone production and increased CH4, as well as producing water vapor in the stratosphere.[15]

Contribution of clouds to Earth's greenhouse effect

The major non-gas contributor to the Earth's greenhouse effect, clouds, also absorb and emit infrared radiation and thus have an effect on radiative properties of the greenhouse gases. Clouds are water droplets or ice crystals suspended in the atmosphere.[17][18]

Impacts on the overall greenhouse effect

Schmidt et al. (2010)[19] analysed how individual components of the atmosphere contribute to the total greenhouse effect. They estimated that water vapor accounts for about 50% of the Earth's greenhouse effect, with clouds contributing 25%, carbon dioxide 20%, and the minor greenhouse gases and aerosols accounting for the remaining 5%. In the study, the reference model atmosphere is for 1980 conditions. Image credit: NASA.[20]

The contribution of each gas to the greenhouse effect is affected by the characteristics of that gas, its abundance, and any indirect effects it may cause. For example, the direct radiative effect of a mass of methane is about 72 times stronger than that of the same mass of carbon dioxide over a 20-year time frame,[21] but it is present in much smaller concentrations, so that its total direct radiative effect is smaller, in part due to its shorter atmospheric lifetime. On the other hand, in addition to its direct radiative impact, methane has a large, indirect radiative effect because it contributes to ozone formation. Shindell et al. (2005)[22] argue that the contribution to climate change from methane is at least double previous estimates as a result of this effect.[23]

When ranked by their direct contribution to the greenhouse effect, the most important are:[17]

Compound                 Formula   Contribution (%)
Water vapor and clouds   H2O       36–72%
Carbon dioxide           CO2       9–26%
Methane                  CH4       4–9%
Ozone                    O3        3–7%

In addition to the main greenhouse gases listed above, other greenhouse gases include sulfur hexafluoride, hydrofluorocarbons and perfluorocarbons (see IPCC list of greenhouse gases). Some greenhouse gases are not often listed. For example, nitrogen trifluoride has a high global warming potential (GWP) but is only present in very small quantities.[24]

Proportion of direct effects at a given moment

It is not possible to state that a certain gas causes an exact percentage of the greenhouse effect. This is because some of the gases absorb and emit radiation at the same frequencies as others, so that the total greenhouse effect is not simply the sum of the influence of each gas. The higher ends of the ranges quoted are for each gas alone; the lower ends account for overlaps with the other gases.[17][18] In addition, some gases such as methane are known to have large indirect effects that are still being quantified.[25]

Atmospheric lifetime

Aside from water vapor, which has a residence time of about nine days,[26] major greenhouse gases are well mixed and take many years to leave the atmosphere.[27] Although it is not easy to know with precision how long it takes greenhouse gases to leave the atmosphere, there are estimates for the principal greenhouse gases. Jacob (1999)[28] defines the lifetime τ of an atmospheric species X in a one-box model as the average time that a molecule of X remains in the box. Mathematically, τ is the ratio of the mass m (in kg) of X in the box to its removal rate, which is the sum of the flow of X out of the box (Fout), chemical loss of X (L), and deposition of X (D) (all in kg/s): τ = m / (Fout + L + D).[28] If input of this gas into the box stopped, its concentration would decay exponentially, falling to about 37% (1/e) of its initial value after a time τ.
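
As a rough illustration of Jacob's one-box definition, the following Python sketch computes a lifetime from assumed removal fluxes and shows the exponential decay after input stops; all numbers are invented.

    import math

    # One-box atmospheric lifetime (Jacob 1999): tau = m / (F_out + L + D),
    # with mass m in kg and removal terms in kg/s. Numbers are illustrative.

    def lifetime(m, f_out, chem_loss, deposition):
        return m / (f_out + chem_loss + deposition)

    def remaining_fraction(t, tau):
        """Fraction left after time t if all input stops (exponential decay)."""
        return math.exp(-t / tau)

    tau = lifetime(m=5.0e12, f_out=1.0e4, chem_loss=4.0e4, deposition=1.0e4)
    print(tau / (3600 * 24 * 365))       # lifetime in years (~2.6 here)
    print(remaining_fraction(tau, tau))  # ~0.37: about 37% remains after one lifetime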

The atmospheric lifetime of a species therefore measures the time required to restore equilibrium following a sudden increase or decrease in its concentration in the atmosphere. Individual atoms or molecules may be lost or deposited to sinks such as the soil, the oceans and other waters, or vegetation and other biological systems, reducing the excess to background concentrations. The average time taken to achieve this is the mean lifetime.

Carbon dioxide has a variable atmospheric lifetime, which cannot be specified precisely.[29] The atmospheric lifetime of CO2 is estimated to be of the order of 30–95 years.[30] This figure accounts for CO2 molecules being removed from the atmosphere by mixing into the ocean, photosynthesis, and other processes. However, it excludes the balancing fluxes of CO2 into the atmosphere from the geological reservoirs, which have slower characteristic rates.[31] While more than half of the CO2 emitted is removed from the atmosphere within a century, some fraction (about 20%) of emitted CO2 remains in the atmosphere for many thousands of years.[32][33][34] Similar issues apply to other greenhouse gases, many of which have longer mean lifetimes than CO2. For example, N2O has a mean atmospheric lifetime of 114 years.[21]

Radiative forcing

The Earth absorbs some of the radiant energy received from the sun, reflects some of it as light and reflects or radiates the rest back to space as heat.[35] The Earth's surface temperature depends on this balance between incoming and outgoing energy.[35] If this energy balance is shifted, the Earth's surface could become warmer or cooler, leading to a variety of changes in global climate.[35]

A number of natural and man-made mechanisms can affect the global energy balance and force changes in the Earth's climate.[35] Greenhouse gases are one such mechanism.[35] Greenhouse gases in the atmosphere absorb and re-emit some of the outgoing energy radiated from the Earth's surface, causing that heat to be retained in the lower atmosphere.[35] As explained above, some greenhouse gases remain in the atmosphere for decades or even centuries, and therefore can affect the Earth's energy balance over a long time period.[35] Factors that influence Earth's energy balance can be quantified in terms of "radiative climate forcing."[35] Positive radiative forcing indicates warming (for example, by increasing incoming energy or decreasing the amount of energy that escapes to space), while negative forcing is associated with cooling.[35]

Global warming potential

The global warming potential (GWP) depends on both the efficiency of the molecule as a greenhouse gas and its atmospheric lifetime. GWP is measured relative to the same mass of CO2 and evaluated for a specific timescale. Thus, if a gas has a high (positive) radiative forcing but also a short lifetime, it will have a large GWP on a 20-year scale but a small one on a 100-year scale. Conversely, if a molecule has a longer atmospheric lifetime than CO2 its GWP will increase with the timescale considered. Carbon dioxide is defined to have a GWP of 1 over all time periods.

Methane has an atmospheric lifetime of 12 ± 3 years. The 2007 IPCC report lists the GWP as 72 over a time scale of 20 years, 25 over 100 years and 7.6 over 500 years.[21] A 2014 analysis, however, states that although methane’s initial impact is about 100 times greater than that of CO2, because of the shorter atmospheric lifetime, after six or seven decades, the impact of the two gases is about equal, and from then on methane’s relative role continues to decline.[36] The decrease in GWP at longer times is because methane is degraded to water and CO2 through chemical reactions in the atmosphere.

Examples of the atmospheric lifetime and GWP relative to CO2 for several greenhouse gases are given in the following table:[21]

Atmospheric lifetime and GWP relative to CO2 at different time horizons for various greenhouse gases.

Gas name               Chemical formula   Lifetime (years)   GWP 20-yr   GWP 100-yr   GWP 500-yr
Carbon dioxide         CO2                see above          1           1            1
Methane                CH4                12                 72          25           7.6
Nitrous oxide          N2O                114                289         298          153
CFC-12                 CCl2F2             100                11 000      10 900       5 200
HCFC-22                CHClF2             12                 5 160       1 810        549
Tetrafluoromethane     CF4                50 000             5 210       7 390        11 200
Hexafluoroethane       C2F6               10 000             8 630       12 200       18 200
Sulfur hexafluoride    SF6                3 200              16 300      22 800       32 600
Nitrogen trifluoride   NF3                740                12 300      17 200       20 700

The use of CFC-12 (except some essential uses) has been phased out due to its ozone-depleting properties.[37] The phase-out of less active HCFC compounds will be completed in 2030.[38]
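
Using GWP values such as those in the table above, emissions of different gases can be placed on a common CO2-equivalent scale. A minimal Python sketch, with an invented one-tonne inventory:

    # Convert emissions of several gases to CO2-equivalents for a chosen time
    # horizon, using the 100-year GWP values from the table above.
    GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298, "SF6": 22800}

    def co2_equivalent(emissions_tonnes, gwp=GWP_100):
        """Sum of mass-weighted GWPs: tonnes of each gas -> tonnes CO2-eq."""
        return sum(mass * gwp[gas] for gas, mass in emissions_tonnes.items())

    # Illustrative inventory: 1 tonne of each gas
    print(co2_equivalent({"CO2": 1.0, "CH4": 1.0, "N2O": 1.0}))  # 324.0 tonnes CO2-eq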

Natural and anthropogenic sources

Top: Increasing atmospheric carbon dioxide levels as measured in the atmosphere and reflected in ice cores. Bottom: The amount of net carbon increase in the atmosphere, compared to carbon emissions from burning fossil fuel.
This diagram shows a simplified representation of the contemporary global carbon cycle. Changes are measured in gigatons of carbon per year (GtC/y). Canadell et al. (2007) estimated the growth rate of global average atmospheric CO2 for 2000–2006 as 1.93 parts-per-million per year (4.1 petagrams of carbon per year).[39]

Aside from purely human-produced synthetic halocarbons, most greenhouse gases have both natural and human-caused sources. During the pre-industrial Holocene, concentrations of existing gases were roughly constant. In the industrial era, human activities have added greenhouse gases to the atmosphere, mainly through the burning of fossil fuels and clearing of forests.[40][41]

The 2007 Fourth Assessment Report compiled by the IPCC (AR4) noted that "changes in atmospheric concentrations of greenhouse gases and aerosols, land cover and solar radiation alter the energy balance of the climate system", and concluded that "increases in anthropogenic greenhouse gas concentrations is very likely to have caused most of the increases in global average temperatures since the mid-20th century".[42] In AR4, "most of" is defined as more than 50%.

Abbreviations used in the two tables below: ppm = parts-per-million; ppb = parts-per-billion; ppt = parts-per-trillion; W/m2 = watts per square metre

Current greenhouse gas concentrations[5]

Gas                       Pre-1750 tropospheric   Recent tropospheric       Absolute increase     Percentage increase   Increased radiative
                          concentration[43]       concentration[44]         since 1750            since 1750            forcing (W/m2)[45]
Carbon dioxide (CO2)      280 ppm[46]             395.4 ppm[47]             115.4 ppm             41.2%                 1.88
Methane (CH4)             700 ppb[48]             1893 ppb / 1762 ppb[49]   1193 ppb / 1062 ppb   170.4% / 151.7%       0.49
Nitrous oxide (N2O)       270 ppb[45][50]         326 ppb / 324 ppb[49]     56 ppb / 54 ppb       20.7% / 20.0%         0.17
Tropospheric ozone (O3)   237 ppb[43]             337 ppb[43]               100 ppb               42%                   0.4[51]
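
The CO2 forcing entry in the table can be reproduced approximately with the widely used simplified expression ΔF = 5.35 ln(C/C0) W/m2 from Myhre et al. (1998); note that this formula is brought in here for illustration and is not part of the table's source. A Python sketch:

    import math

    # Simplified CO2 radiative forcing: dF = 5.35 * ln(C / C0) W/m2
    # (Myhre et al. 1998). C0 is the pre-industrial concentration.
    def co2_forcing(c_ppm, c0_ppm=280.0):
        return 5.35 * math.log(c_ppm / c0_ppm)

    print(round(co2_forcing(395.4), 2))  # ~1.85 W/m2, close to the 1.88 in the table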

400,000 years of ice core data
Ice cores provide evidence for greenhouse gas concentration variations over the past 800,000 years (see the following section). Both CO2 and CH4 vary between glacial and interglacial phases, and concentrations of these gases correlate strongly with temperature. Direct data do not exist for periods earlier than those represented in the ice core record, a record that indicates CO2 mole fractions stayed within a range of 180 ppm to 280 ppm throughout the last 800,000 years, until the increase of the last 250 years. However, various proxies and modeling suggest larger variations in past epochs; 500 million years ago CO2 levels were likely 10 times higher than now.[53] Indeed, higher CO2 concentrations are thought to have prevailed throughout most of the Phanerozoic eon, with concentrations four to six times current concentrations during the Mesozoic era, and ten to fifteen times current concentrations during the early Palaeozoic era until the middle of the Devonian period, about 400 Ma.[54][55][56] The spread of land plants is thought to have reduced CO2 concentrations during the late Devonian, and plant activities as both sources and sinks of CO2 have since been important in providing stabilising feedbacks.[57] Earlier still, a 200-million year period of intermittent, widespread glaciation extending close to the equator (Snowball Earth) appears to have been ended suddenly, about 550 Ma, by a colossal volcanic outgassing that raised the CO2 concentration of the atmosphere abruptly to 12%, about 350 times modern levels, causing extreme greenhouse conditions and carbonate deposition as limestone at the rate of about 1 mm per day.[58] This episode marked the close of the Precambrian eon, and was succeeded by the generally warmer conditions of the Phanerozoic, during which multicellular animal and plant life evolved. No volcanic carbon dioxide emission of comparable scale has occurred since. In the modern era, emissions to the atmosphere from volcanoes are only about 1% of emissions from human sources.[58][59][60]

Ice cores

Measurements from Antarctic ice cores show that before industrial emissions started atmospheric CO2 mole fractions were about 280 parts per million (ppm), and stayed between 260 and 280 during the preceding ten thousand years.[61] Carbon dioxide mole fractions in the atmosphere have gone up by approximately 35 percent since the 1900s, rising from 280 parts per million by volume to 387 parts per million in 2009. One study using evidence from stomata of fossilized leaves suggests greater variability, with carbon dioxide mole fractions above 300 ppm during the period seven to ten thousand years ago,[62] though others have argued that these findings more likely reflect calibration or contamination problems rather than actual CO2 variability.[63][64] Because of the way air is trapped in ice (pores in the ice close off slowly to form bubbles deep within the firn) and the time period represented in each ice sample analyzed, these figures represent averages of atmospheric concentrations of up to a few centuries rather than annual or decadal levels.

Changes since the Industrial Revolution

Recent year-to-year increase of atmospheric CO2.
Major greenhouse gas trends.

Since the beginning of the Industrial Revolution, the concentrations of most of the greenhouse gases have increased. For example, the mole fraction of carbon dioxide has increased by about 36%, from 280 ppm to 380 ppm, i.e., 100 ppm over modern pre-industrial levels. The first 50 ppm increase took place in about 200 years, from the start of the Industrial Revolution to around 1973;[citation needed] the next 50 ppm increase took place in only about 33 years, from 1973 to 2006.[65]
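
A quick Python sketch checks the arithmetic behind the stated speed-up:

    # Average CO2 growth rates implied by the text: 50 ppm in ~200 years
    # versus the next 50 ppm in ~33 years.
    early = 50 / 200   # ppm per year, Industrial Revolution to ~1973
    late = 50 / 33     # ppm per year, 1973 to 2006
    print(round(early, 2), round(late, 2))  # 0.25 vs ~1.52 ppm/yr
    print(round(late / early, 1))           # roughly a six-fold faster rise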

Recent data also shows that the concentration is increasing at a higher rate. In the 1960s, the average annual increase was only 37% of what it was in 2000 through 2007.[66]

Today, the stock of carbon in the atmosphere increases by more than 3 million tonnes per annum (0.04%) compared with the existing stock. This increase is the result of human activities: burning fossil fuels, deforestation, and forest degradation in tropical and boreal regions.[67]

The other greenhouse gases produced from human activity show similar increases in both amount and rate of increase. Many observations are available online in a variety of Atmospheric Chemistry Observational Databases.

Anthropogenic greenhouse gases

This graph shows changes in the annual greenhouse gas index (AGGI) between 1979 and 2011.[68] The AGGI measures the levels of greenhouse gases in the atmosphere based on their ability to cause changes in the Earth's climate.[68]
This bar graph shows global greenhouse gas emissions by sector from 1990 to 2005, measured in carbon dioxide equivalents.[69]
Modern global CO2 emissions from the burning of fossil fuels.

Since about 1750 human activity has increased the concentration of carbon dioxide and other greenhouse gases. Measured atmospheric concentrations of carbon dioxide are currently 100 ppm higher than pre-industrial levels.[70] Natural sources of carbon dioxide are more than 20 times greater than sources due to human activity,[71] but over periods longer than a few years natural sources are closely balanced by natural sinks, mainly photosynthesis of carbon compounds by plants and marine plankton. As a result of this balance, the atmospheric mole fraction of carbon dioxide remained between 260 and 280 parts per million for the 10,000 years between the end of the last glacial maximum and the start of the industrial era.[72]

It is likely that anthropogenic (i.e., human-induced) warming, such as that due to elevated greenhouse gas levels, has had a discernible influence on many physical and biological systems.[73] Future warming is projected to have a range of impacts, including sea level rise,[74] increased frequencies and severities of some extreme weather events,[74] loss of biodiversity,[75] and regional changes in agricultural productivity.[75]

The main sources of greenhouse gases due to human activity are:
  • burning of fossil fuels and deforestation leading to higher carbon dioxide concentrations in the air. Land use change (mainly deforestation in the tropics) accounts for up to one third of total anthropogenic CO2 emissions.[72]
  • livestock enteric fermentation and manure management,[76] paddy rice farming, land use and wetland changes, pipeline losses, and covered vented landfill emissions leading to higher methane atmospheric concentrations. Many of the newer style fully vented septic systems that enhance and target the fermentation process also are sources of atmospheric methane.
  • use of chlorofluorocarbons (CFCs) in refrigeration systems, and use of CFCs and halons in fire suppression systems and manufacturing processes.
  • agricultural activities, including the use of fertilizers, that lead to higher nitrous oxide (N2O) concentrations.
The seven sources of CO2 from fossil fuel combustion are (with percentage contributions for 2000–2004):[77]

Seven main fossil fuel combustion sources                                        Contribution (%)
Liquid fuels (e.g., gasoline, fuel oil)                                          36%
Solid fuels (e.g., coal)                                                         35%
Gaseous fuels (e.g., natural gas)                                                20%
Cement production                                                                3%
Flaring gas industrially and at wells                                            < 1%
Non-fuel hydrocarbons                                                            < 1%
"International bunker fuels" of transport not included in national inventories[78]   4%

Carbon dioxide, methane, nitrous oxide (N2O) and three groups of fluorinated gases (sulfur hexafluoride (SF6), hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs)) are the major anthropogenic greenhouse gases,[79]:147[80] and are regulated under the Kyoto Protocol international treaty, which came into force in 2005.[81] Emissions limitations specified in the Kyoto Protocol expire in 2012.[81] The Cancún agreement, agreed in 2010, includes voluntary pledges made by 76 countries to control emissions.[82] At the time of the agreement, these 76 countries were collectively responsible for 85% of annual global emissions.[82]

Although CFCs are greenhouse gases, they are regulated by the Montreal Protocol, which was motivated by CFCs' contribution to ozone depletion rather than by their contribution to global warming. Note that ozone depletion has only a minor role in greenhouse warming though the two processes often are confused in the media.

Sectors

Tourism
According to UNEP, global tourism is closely linked to climate change. Tourism is a significant contributor to the increasing concentrations of greenhouse gases in the atmosphere. Tourism accounts for about 50% of traffic movements. Rapidly expanding air traffic contributes about 2.5% of the production of CO2. The number of international travelers is expected to increase from 594 million in 1996 to 1.6 billion by 2020, adding greatly to the problem unless steps are taken to reduce emissions.[83]

Role of water vapor


Increasing water vapor in the stratosphere at Boulder, Colorado.

Water vapor accounts for the largest percentage of the greenhouse effect, between 36% and 66% for clear sky conditions and between 66% and 85% when including clouds.[18] Water vapor concentrations fluctuate regionally, but human activity does not significantly affect water vapor concentrations except at local scales, such as near irrigated fields. The atmospheric concentration of water vapor is highly variable and depends largely on temperature, from less than 0.01% in extremely cold regions up to 3% by mass in saturated air at about 32 °C (see relative humidity).[84]

The average residence time of a water molecule in the atmosphere is only about nine days, compared to years or centuries for other greenhouse gases such as CH4 and CO2.[85] Thus, water vapor responds to and amplifies effects of the other greenhouse gases. The Clausius–Clapeyron relation establishes that more water vapor will be present per unit volume at elevated temperatures. This and other basic principles indicate that warming associated with increased concentrations of the other greenhouse gases also will increase the concentration of water vapor (assuming that the relative humidity remains approximately constant; modeling and observational studies find that this is indeed so). Because water vapor is a greenhouse gas, this results in further warming and so is a "positive feedback" that amplifies the original warming. Eventually other earth processes offset these positive feedbacks, stabilizing the global temperature at a new equilibrium and preventing the loss of Earth's water through a Venus-like runaway greenhouse effect.[86]
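
The Clausius–Clapeyron scaling can be illustrated numerically. The Python sketch below uses a Magnus-type empirical fit for the saturation vapor pressure of water; the particular coefficients are a common textbook approximation, assumed here for illustration only.

    import math

    # Magnus-type approximation for saturation vapor pressure over water (hPa),
    # with temperature t_c in degrees Celsius. A common empirical fit, assumed
    # here for illustration.
    def saturation_vapor_pressure(t_c):
        return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

    # At constant relative humidity, water vapor content scales with e_s(T):
    e15, e16 = saturation_vapor_pressure(15.0), saturation_vapor_pressure(16.0)
    print(round(100 * (e16 / e15 - 1), 1))  # ~6-7% more water vapor per 1 degC of warming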

Direct greenhouse gas emissions

Between 1970 and 2004, GHG emissions (measured in CO2-equivalent)[87] increased at an average rate of 1.6% per year, with CO2 emissions from the use of fossil fuels growing at a rate of 1.9% per year.[88][89] Total anthropogenic emissions at the end of 2009 were estimated at 49.5 gigatonnes CO2-equivalent.[90]:15 These emissions include CO2 from fossil fuel use and from land use, as well as emissions of methane, nitrous oxide and other GHGs covered by the Kyoto Protocol.

At present, the primary source of CO2 emissions is the burning of coal, natural gas, and petroleum for electricity and heat.[91]

Regional and national attribution of emissions

This figure shows the relative fraction of anthropogenic greenhouse gases coming from each of eight categories of sources, as estimated by the Emission Database for Global Atmospheric Research version 3.2, fast track 2000 project. These values are intended to provide a snapshot of global annual greenhouse gas emissions in the year 2000. The top panel shows the sum over all anthropogenic greenhouse gases, weighted by their global warming potential over the next 100 years. This consists of 72% carbon dioxide, 18% methane, 8% nitrous oxide and 1% other gases. Lower panels show the comparable information for each of these three primary greenhouse gases, with the same coloring of sectors as used in the top chart. Segments with less than 1% fraction are not labeled.

There are several different ways of measuring GHG emissions, for example, see World Bank (2010)[92]:362 for tables of national emissions data. Some variables that have been reported[93] include:
  • Definition of measurement boundaries: Emissions can be attributed geographically, to the area where they were emitted (the territory principle), or by the activity principle to the territory that produced the emissions. These two principles result in different totals when measuring, for example, electricity importation from one country to another, or emissions at an international airport.
  • Time horizon of different GHGs: Contribution of a given GHG is reported as a CO2 equivalent. The calculation to determine this takes into account how long that gas remains in the atmosphere. This is not always known accurately and calculations must be regularly updated to reflect new information.
  • What sectors are included in the calculation (e.g., energy industries, industrial processes, agriculture etc.): There is often a conflict between transparency and availability of data.
  • The measurement protocol itself: This may be via direct measurement or estimation. The four main methods are the emission factor-based method, mass balance method, predictive emissions monitoring systems, and continuous emissions monitoring systems. These methods differ in accuracy, cost, and usability.
These different measures are sometimes used by different countries to assert various policy/ethical positions on climate change (Banuri et al., 1996, p. 94).[94] This use of different measures leads to a lack of comparability, which is problematic when monitoring progress towards targets. There are arguments for the adoption of a common measurement tool, or at least the development of communication between different tools.[93]

Emissions may be measured over long time periods. This measurement type is called historical or cumulative emissions. Cumulative emissions give some indication of who is responsible for the build-up in the atmospheric concentration of GHGs (IEA, 2007, p. 199).[95]

The national accounts balance is positively related to carbon emissions. The national accounts balance shows the difference between exports and imports. For many richer nations, such as the United States, the accounts balance is negative because more goods are imported than exported. This is mostly because it is cheaper to produce goods outside of developed countries, leading the economies of developed countries to become increasingly dependent on services rather than goods. A positive accounts balance means that more production is occurring in a country, so more operating factories raise carbon emission levels (Holtz-Eakin, 1995, pp. 85, 101).[96]

Emissions may also be measured across shorter time periods. Emissions changes may, for example, be measured against a base year of 1990. 1990 was used in the United Nations Framework Convention on Climate Change (UNFCCC) as the base year for emissions, and is also used in the Kyoto Protocol (some gases are also measured from the year 1995).[79]:146,149 A country's emissions may also be reported as a proportion of global emissions for a particular year.

Another measurement is of per capita emissions. This divides a country's total annual emissions by its mid-year population.[92]:370 Per capita emissions may be based on historical or annual emissions (Banuri et al., 1996, pp. 106–107).[94]

Land-use change

Greenhouse gas emissions from agriculture, forestry and other land use, 1970-2010.

Land-use change, e.g., the clearing of forests for agricultural use, can affect the concentration of GHGs in the atmosphere by altering how much carbon flows out of the atmosphere into carbon sinks.[97] Accounting for land-use change can be understood as an attempt to measure "net" emissions, i.e., gross emissions from all GHG sources minus the removal of emissions from the atmosphere by carbon sinks (Banuri et al., 1996, pp. 92–93).[94]

There are substantial uncertainties in the measurement of net carbon emissions.[98] Additionally, there is controversy over how carbon sinks should be allocated between different regions and over time (Banuri et al., 1996, p. 93).[94] For instance, concentrating on more recent changes in carbon sinks is likely to favour those regions that have deforested earlier, e.g., Europe.

Greenhouse gas intensity

Greenhouse gas intensity in the year 2000, including land-use change.
Carbon intensity of GDP (using PPP) for different regions, 1982-2011.
Carbon intensity of GDP (using MER) for different regions, 1982-2011.

Greenhouse gas intensity is a ratio between greenhouse gas emissions and another metric, e.g., gross domestic product (GDP) or energy use. The terms "carbon intensity" and "emissions intensity" are also sometimes used.[99] GHG intensities may be calculated using market exchange rates (MER) or purchasing power parity (PPP) (Banuri et al., 1996, p. 96).[94] Calculations based on MER show large differences in intensities between developed and developing countries, whereas calculations based on PPP show smaller differences.
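
A small Python sketch makes the MER-versus-PPP point concrete; the emission and GDP figures are invented solely to show how the choice of denominator shifts the intensity.

    # Greenhouse gas intensity = emissions / GDP. The same emissions divided by
    # MER-based vs PPP-based GDP can give very different intensities for a
    # developing economy. All numbers below are made up for illustration.
    emissions_mt = 500.0   # Mt CO2-eq per year
    gdp_mer_bn = 400.0     # GDP in billions of US$ at market exchange rates
    gdp_ppp_bn = 1200.0    # GDP in billions of US$ at purchasing power parity

    print(round(emissions_mt / gdp_mer_bn, 2))  # 1.25 kt CO2-eq per million $ (MER)
    print(round(emissions_mt / gdp_ppp_bn, 2))  # 0.42 kt CO2-eq per million $ (PPP)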

Cumulative and historical emissions

Cumulative energy-related CO2 emissions between the years 1850–2005 grouped into low-income, middle-income, high-income, the EU-15, and the OECD countries.
Cumulative energy-related CO2 emissions between the years 1850–2005 for individual countries.
Map of cumulative per capita anthropogenic atmospheric CO2 emissions by country. Cumulative emissions include land use change, and are measured between the years 1950 and 2000.
Regional trends in annual CO2 emissions from fuel combustion between 1971 and 2009.
Regional trends in annual per capita CO2 emissions from fuel combustion between 1971 and 2009.

Cumulative anthropogenic (i.e., human-emitted) emissions of CO2 from fossil fuel use are a major cause of global warming,[100] and give some indication of which countries have contributed most to human-induced climate change.[101]:15

Top-5 historic CO2 contributors by region over the years 1800 to 1988 (in %)
Region               Industrial CO2   Total CO2
OECD North America   33.2             29.7
OECD Europe          26.1             16.6
Former USSR          14.1             12.5
China                5.5              6.0
Eastern Europe       5.5              4.8

The table above is based on Banuri et al. (1996, p. 94).[94] Overall, developed countries accounted for 83.8% of industrial CO2 emissions over this time period, and 67.8% of total CO2 emissions. Developing countries accounted for 16.2% of industrial CO2 emissions over this time period, and 32.2% of total CO2 emissions. The estimate of total CO2 emissions includes biotic carbon emissions, mainly from deforestation. Banuri et al. (1996, p. 94)[94] calculated per capita cumulative emissions based on then-current population. The ratio in per capita emissions between industrialized countries and developing countries was estimated at more than 10 to 1.

Including biotic emissions brings about the same controversy mentioned earlier regarding carbon sinks and land-use change (Banuri et al., 1996, pp. 93–94).[94] The actual calculation of net emissions is very complex, and is affected by how carbon sinks are allocated between regions and the dynamics of the climate system.

Non-OECD countries accounted for 42% of cumulative energy-related CO2 emissions between 1890–2007.[102]:179–180 Over this time period, the US accounted for 28% of emissions; the EU, 23%; Russia, 11%; China, 9%; other OECD countries, 5%; Japan, 4%; India, 3%; and the rest of the world, 18%.[102]:179–180

Changes since a particular base year

Between 1970 and 2004, global growth in annual CO2 emissions was driven by North America, Asia, and the Middle East.[103] The sharp acceleration in CO2 emissions since 2000 to more than a 3% increase per year (more than 2 ppm per year) from 1.1% per year during the 1990s is attributable to the lapse of formerly declining trends in carbon intensity of both developing and developed nations. China was responsible for most of the global growth in emissions during this period. Localised plummeting emissions associated with the collapse of the Soviet Union have been followed by slow emissions growth in this region due to more efficient energy use, made necessary by the increasing proportion of it that is exported.[77] In comparison, methane has not increased appreciably, and N2O by 0.25% per year.

Using different base years for measuring emissions has an effect on estimates of national contributions to global warming.[101]:17–18[104] This can be calculated by dividing a country's highest contribution to global warming starting from a particular base year, by that country's minimum contribution to global warming starting from a particular base year. Choosing between different base years of 1750, 1900, 1950, and 1990 has a significant effect for most countries.[101]:17–18 Within the G8 group of countries, it is most significant for the UK, France and Germany. These countries have a long history of CO2 emissions (see the section on Cumulative and historical emissions).

Annual emissions


Per capita anthropogenic greenhouse gas emissions by country for the year 2000 including land-use change.

Annual per capita emissions in the industrialized countries are typically as much as ten times the average in developing countries.[79]:144 Due to China's fast economic development, its annual per capita emissions are quickly approaching the levels of those in the Annex I group of the Kyoto Protocol (i.e., the developed countries excluding the USA).[105] Other countries with fast-growing emissions are South Korea, Iran, and Australia. On the other hand, annual per capita emissions of the EU-15 and the USA are gradually decreasing over time.[105] Emissions in Russia and Ukraine have decreased fastest since 1990 due to economic restructuring in these countries.[106]

Energy statistics for fast growing economies are less accurate than those for the industrialized countries. For China's annual emissions in 2008, the Netherlands Environmental Assessment Agency estimated an uncertainty range of about 10%.[105]

The GHG footprint, or greenhouse gas footprint, refers to the amount of GHG that are emitted during the creation of products or services. It is more comprehensive than the commonly used carbon footprint, which measures only carbon dioxide, one of many greenhouse gases.

Top emitter countries

Bar graph of annual per capita CO2 emissions from fuel combustion for 140 countries in 2009.
Bar graph of cumulative energy-related per capita CO2 emissions between 1850–2008 for 185 countries.

Annual

In 2009, the annual top ten emitting countries accounted for about two-thirds of the world's annual energy-related CO2 emissions.[107]

Top-10 annual energy-related CO2 emitters for the year 2009[108]
Country                  % of global total    Tonnes of GHG
                         annual emissions     per capita
People's Rep. of China   23.6                 5.13
United States            17.9                 16.9
India                    5.5                  1.37
Russian Federation       5.3                  10.8
Japan                    3.8                  8.6
Germany                  2.6                  9.2
Islamic Rep. of Iran     1.8                  7.3
Canada                   1.8                  15.4
Korea                    1.8                  10.6
United Kingdom           1.6                  7.5

Cumulative

Top-10 cumulative energy-related CO2 emitters between 1850–2008[109]
Country              % of world total   Metric tonnes CO2 per person
United States        28.5               1,132.7
China                9.36               85.4
Russian Federation   7.95               677.2
Germany              6.78               998.9
United Kingdom       5.73               1,127.8
Japan                3.88               367
France               2.73               514.9
India                2.52               26.7
Canada               2.17               789.2
Ukraine              2.13               556.4

Embedded emissions

One way of attributing greenhouse gas (GHG) emissions is to measure the embedded emissions (also referred to as "embodied emissions") of goods that are being consumed. Emissions are usually measured according to production, rather than consumption.[110] For example, in the main international treaty on climate change (the UNFCCC), countries report on emissions produced within their borders, e.g., the emissions produced from burning fossil fuels.[102]:179[111]:1 Under a production-based accounting of emissions, embedded emissions on imported goods are attributed to the exporting country rather than to the importing country. Under a consumption-based accounting of emissions, embedded emissions on imported goods are attributed to the importing country rather than to the exporting country.

Davis and Caldeira (2010)[111]:4 found that a substantial proportion of CO2 emissions are traded internationally. The net effect of trade was to export emissions from China and other emerging markets to consumers in the US, Japan, and Western Europe. Based on annual emissions data from the year 2004, and on a per-capita consumption basis, the top-5 emitting countries were found to be (in tCO2 per person, per year): Luxembourg (34.7), the US (22.0), Singapore (20.2), Australia (16.7), and Canada (16.6).[111]:5 Carbon Trust research revealed that approximately 25% of all CO2 emissions from human activities "flow" (i.e., are imported or exported) from one country to another. Major developed economies were found to be typically net importers of embodied carbon emissions, with UK consumption emissions 34% higher than production emissions, and Germany (29%), Japan (19%) and the USA (13%) also significant net importers of embodied emissions.[112]
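
Consumption-based accounting reduces to a simple identity: consumption emissions equal production emissions minus emissions embedded in exports plus emissions embedded in imports. In the Python sketch below, the numbers are hypothetical, chosen only to reproduce a 34% consumption-over-production gap like the UK figure cited above.

    # Production- vs consumption-based emission accounting.
    # consumption = production - embedded-in-exports + embedded-in-imports
    # All figures are illustrative, not actual national data.
    def consumption_emissions(production, embedded_exports, embedded_imports):
        return production - embedded_exports + embedded_imports

    production_mt = 500.0  # Mt CO2, hypothetical
    consumption_mt = consumption_emissions(production_mt,
                                           embedded_exports=50.0,
                                           embedded_imports=220.0)
    print(consumption_mt)                                   # 670.0 Mt
    print(round(100 * (consumption_mt / production_mt - 1)))  # 34% above production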

Effect of policy

Governments have taken action to reduce GHG emissions (climate change mitigation). Assessments of policy effectiveness have included work by the Intergovernmental Panel on Climate Change,[113] International Energy Agency,[114][115] and United Nations Environment Programme.[116] Policies implemented by governments have included[117][118][119] national and regional targets to reduce emissions, promoting energy efficiency, and support for renewable energy.

Countries and regions listed in Annex I of the United Nations Framework Convention on Climate Change (UNFCCC) (i.e., the OECD and former planned economies of the Soviet Union) are required to submit periodic assessments to the UNFCCC of actions they are taking to address climate change.[119]:3 Analysis by the UNFCCC (2011)[119]:8 suggested that policies and measures undertaken by Annex I Parties may have produced emission savings of 1.5 thousand Tg CO2-eq in the year 2010, with most savings made in the energy sector. The projected emissions saving of 1.5 thousand Tg CO2-eq is measured against a hypothetical "baseline" of Annex I emissions, i.e., projected Annex I emissions in the absence of policies and measures. The total projected Annex I saving of 1.5 thousand Tg CO2-eq does not include emissions savings in seven of the Annex I Parties.[119]:8

Projections

A wide range of projections of future GHG emissions have been produced.[120] Rogner et al. (2007)[121] assessed the scientific literature on GHG projections, and concluded that unless energy policies changed substantially, the world would continue to depend on fossil fuels until 2025–2030.[88] Projections suggest that more than 80% of the world's energy will come from fossil fuels. This conclusion was based on "much evidence" and "high agreement" in the literature.[88] Projected annual energy-related CO2 emissions in 2030 were 40–110% higher than in 2000, with two-thirds of the increase originating in developing countries.[88] Projected annual per capita emissions in developing country regions remained substantially lower (2.8–5.1 tonnes CO2) than those in developed country regions (9.6–15.1 tonnes CO2).[122] Projections consistently showed an increase in annual world GHG emissions (the "Kyoto" gases,[123] measured in CO2-equivalent) of 25–90% by 2030, compared to 2000.[88]

Relative CO2 emission from various fuels

One liter of gasoline, when used as a fuel, produces 2.32 kg (about 1300 liters or 1.3 cubic meters) of carbon dioxide, a greenhouse gas. One US gallon produces 19.4 lb (1,291.5 gallons or 172.65 cubic feet) of CO2.[124][125][126]
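
The 2.32 kg figure follows from carbon stoichiometry: each kilogram of fuel carbon burns to 44/12 kilograms of CO2. The Python sketch below assumes typical, approximate values of 0.74 kg/L for gasoline density and 87% carbon content.

    # CO2 from burning gasoline: each kg of carbon yields 44/12 kg of CO2
    # (molar masses: CO2 = 44 g/mol, C = 12 g/mol). Density and carbon
    # fraction below are typical assumed values for gasoline.
    DENSITY_KG_PER_L = 0.74
    CARBON_MASS_FRACTION = 0.87

    def co2_per_liter():
        carbon_kg = DENSITY_KG_PER_L * CARBON_MASS_FRACTION
        return carbon_kg * (44.0 / 12.0)

    print(round(co2_per_liter(), 2))  # ~2.36 kg CO2 per liter, near the 2.32 kg cited
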
Mass of carbon dioxide emitted per quantity of energy for various fuels[127]
Fuel name   CO2 emitted (lbs/10^6 Btu)   CO2 emitted (g/MJ)
Natural gas 117 50.30
Liquefied petroleum gas 139 59.76
Propane 139 59.76
Aviation gasoline 153 65.78
Automobile gasoline 156 67.07
Kerosene 159 68.36
Fuel oil 161 69.22
Tires/tire derived fuel 189 81.26
Wood and wood waste 195 83.83
Coal (bituminous) 205 88.13
Coal (sub-bituminous) 213 91.57
Coal (lignite) 215 92.43
Petroleum coke 225 96.73
Tar-sand Bitumen [citation needed] [citation needed]
Coal (anthracite) 227 97.59

Life-cycle greenhouse-gas emissions of energy sources

A 2011 IPCC literature review of the CO2 emissions of numerous energy sources found that the CO2 emission values falling within the 50th percentile of all total life-cycle emissions studies were as follows.[128]

Lifecycle greenhouse gas emissions by electricity source.

Technology      Description                                         50th percentile (g CO2/kWhe)
Hydroelectric   reservoir                                           4
Ocean energy    wave and tidal                                      8
Wind            onshore                                             12
Nuclear         various generation II reactor types                 16
Biomass         various                                             18
Solar thermal   parabolic trough                                    22
Geothermal      hot dry rock                                        45
Solar PV        polycrystalline silicon                             46
Natural gas     various combined cycle turbines without scrubbing   469
Coal            various generator types without scrubbing           1001
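
Median values like these can be combined into an average intensity for an electricity mix by weighting each source's figure by its generation share. A Python sketch with an invented mix:

    # Average life-cycle intensity of a hypothetical electricity mix, using the
    # 50th-percentile g CO2/kWh values from the table above.
    INTENSITY = {"coal": 1001, "gas": 469, "nuclear": 16, "wind": 12, "hydro": 4}

    def mix_intensity(shares):
        """Generation shares (fractions summing to 1) -> g CO2 per kWh."""
        return sum(INTENSITY[src] * share for src, share in shares.items())

    mix = {"coal": 0.40, "gas": 0.25, "nuclear": 0.20, "wind": 0.10, "hydro": 0.05}
    print(round(mix_intensity(mix)))  # ~522 g CO2/kWh for this assumed mix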

Removal from the atmosphere ("sinks")

Natural processes

Greenhouse gases can be removed from the atmosphere by various processes, as a consequence of:
  • a physical change (condensation and precipitation remove water vapor from the atmosphere).
  • a chemical reaction within the atmosphere. For example, methane is oxidized by reaction with naturally occurring hydroxyl radical, OH· and degraded to CO2 and water vapor (CO2 from the oxidation of methane is not included in the methane Global warming potential). Other chemical reactions include solution and solid phase chemistry occurring in atmospheric aerosols.
  • a physical exchange between the atmosphere and the other compartments of the planet. An example is the mixing of atmospheric gases into the oceans.
  • a chemical change at the interface between the atmosphere and the other compartments of the planet. This is the case for CO2, which is reduced by photosynthesis of plants, and which, after dissolving in the oceans, reacts to form carbonic acid and bicarbonate and carbonate ions (see ocean acidification).
  • a photochemical change. Halocarbons are dissociated by UV light releasing Cl· and F· as free radicals in the stratosphere with harmful effects on ozone (halocarbons are generally too stable to disappear by chemical reaction in the atmosphere).

Negative emissions

A number of technologies remove greenhouse gas emissions from the atmosphere. Most widely analysed are those that remove carbon dioxide from the atmosphere, either to geologic formations, as with bio-energy with carbon capture and storage[129][130][131] and carbon dioxide air capture,[131] or to the soil, as in the case of biochar.[131] The IPCC has pointed out that many long-term climate scenario models require large-scale man-made negative emissions to avoid serious climate change.[132]

History of scientific research

In the late 19th century, scientists experimentally discovered that N2 and O2 do not absorb infrared radiation (called, at that time, "dark radiation"). In contrast, water (both as true vapor and condensed in the form of microscopic droplets suspended in clouds), CO2, and other poly-atomic gaseous molecules do absorb infrared radiation. In the early 20th century, researchers realized that greenhouse gases in the atmosphere made the Earth's overall temperature higher than it would be without them. During the late 20th century, a scientific consensus evolved that increasing concentrations of greenhouse gases in the atmosphere cause a substantial rise in global temperatures and changes to other parts of the climate system,[133] with consequences for the environment and for human health.

Infrared spectroscopy


From Wikipedia, the free encyclopedia

Infrared spectroscopy (IR spectroscopy) is the spectroscopy that deals with the infrared region of the electromagnetic spectrum, that is, light with a longer wavelength and lower frequency than visible light. It covers a range of techniques, mostly based on absorption spectroscopy. As with all spectroscopic techniques, it can be used to identify and study chemicals. For a given sample, which may be solid, liquid, or gaseous, the method or technique of infrared spectroscopy uses an instrument called an infrared spectrometer (or spectrophotometer) to produce an infrared spectrum. A basic IR spectrum is essentially a graph of infrared light absorbance (or transmittance) on the vertical axis vs. frequency or wavelength on the horizontal axis. Typical units of frequency used in IR spectra are reciprocal centimeters (sometimes called wave numbers), with the symbol cm−1. Units of IR wavelength are commonly given in micrometers (formerly called "microns"), symbol μm, which are related to wave numbers in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below.

The infrared portion of the electromagnetic spectrum is usually divided into three regions; the near-, mid- and far- infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14000–4000 cm−1 (0.8–2.5 μm wavelength) can excite overtone or harmonic vibrations. The mid-infrared, approximately 4000–400 cm−1 (2.5–25 μm) may be used to study the fundamental vibrations and associated rotational-vibrational structure. The far-infrared, approximately 400–10 cm−1 (25–1000 μm), lying adjacent to the microwave region, has low energy and may be used for rotational spectroscopy. The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties.
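
The reciprocal relation between wavenumber and wavelength mentioned above is wavelength (μm) = 10,000 / wavenumber (cm−1). A short Python sketch converts the approximate region boundaries (the published boundaries are conventions, so the round-trip values differ slightly from the quoted wavelengths):

    # Wavenumber (cm^-1) <-> wavelength (um): lambda_um = 10_000 / nu_cm1,
    # since 1 cm = 10,000 um.
    def cm1_to_um(nu_cm1):
        return 10_000.0 / nu_cm1

    for nu in (14_000, 4_000, 400, 10):   # region boundaries from the text
        print(nu, "cm^-1 ->", round(cm1_to_um(nu), 2), "um")
    # 14000 -> 0.71, 4000 -> 2.5, 400 -> 25.0, 10 -> 1000.0 um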

Theory


Sample IR spectrum reading; this one is from bromomethane (CH3Br), showing peaks around 3000, 1300, and 1000 cm−1 (on the horizontal axis).

Infrared spectroscopy exploits the fact that molecules absorb specific frequencies that are characteristic of their structure. These absorptions are resonant frequencies, i.e. the frequency of the absorbed radiation matches the transition energy of the bond or group that vibrates. The energies are determined by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling.

In particular, in the Born–Oppenheimer and harmonic approximations, i.e. when the molecular Hamiltonian corresponding to the electronic ground state can be approximated by a harmonic oscillator in the neighborhood of the equilibrium molecular geometry, the resonant frequencies are associated with the normal modes corresponding to the molecular electronic ground state potential energy surface. The resonant frequencies are also related to the strength of the bond and the mass of the atoms at either end of it. Thus, the frequency of the vibrations is associated with a particular normal mode of motion and a particular bond type.

Number of vibrational modes

In order for a vibrational mode in a molecule to be "IR active", it must be associated with changes in the dipole. A permanent dipole is not necessary, as the rule requires only a change in dipole moment.[1]

A molecule can vibrate in many ways, and each way is called a vibrational mode. For a molecule with N atoms, linear molecules have 3N − 5 vibrational modes, whereas nonlinear molecules have 3N − 6 vibrational modes (also called vibrational degrees of freedom). As an example, H2O, a non-linear molecule, has 3 × 3 − 6 = 3 vibrational modes.
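
The 3N − 5 / 3N − 6 counting rule is easy to encode; a minimal Python sketch:

    # Vibrational degrees of freedom: 3N - 5 for linear molecules,
    # 3N - 6 for nonlinear molecules (N = number of atoms).
    def vibrational_modes(n_atoms, linear):
        return 3 * n_atoms - (5 if linear else 6)

    print(vibrational_modes(2, linear=True))    # N2, CO: 1 mode
    print(vibrational_modes(3, linear=True))    # CO2: 4 modes
    print(vibrational_modes(3, linear=False))   # H2O: 3 modes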

Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. CO, absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra.

The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate in nine different ways. Six of these involve only the CH2 portion: symmetric and antisymmetric stretching, scissoring, rocking, wagging and twisting, as shown below. (Note that because CH2 is attached to X2, it has 6 modes, unlike H2O, which has only 3. The rocking, wagging, and twisting modes do not exist for H2O, since they are rigid-body translations and no relative displacements exist.)

Figure: the six CH2 vibrational modes: symmetrical stretching, antisymmetrical stretching, scissoring, rocking, wagging, and twisting.

These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms.

Special effects

The simplest and most important IR bands arise from the "normal modes," the simplest distortions of the molecule. In some cases, "overtone bands" are observed. These bands arise from the absorption of a photon that leads to a doubly excited vibrational state. Such bands appear at approximately twice the energy of the normal mode. Some vibrations, so-called "combination modes," involve more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in the energy and intensity of the bands.

Practical IR spectroscopy

The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of the IR is the same as the vibrational frequency of a bond, absorption occurs. Examination of the transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This can be achieved by scanning the wavelength range using a monochromator. Alternatively, the whole wavelength range is measured at once using a Fourier transform instrument and then a transmittance or absorbance spectrum is generated using a dedicated procedure. Analysis of the position, shape and intensity of peaks in this spectrum reveals details about the molecular structure of the sample.

This technique works almost exclusively on samples with covalent bonds. Simple spectra are obtained from samples with few IR-active bonds and high levels of purity. More complex molecular structures lead to more absorption bands and more complex spectra. The technique has been used for the characterization of very complex mixtures[citation needed]. Complications from infrared fluorescence are rare.

Sample preparation

Gaseous samples require a sample cell with a long pathlength to compensate for their diluteness. The pathlength of the sample cell depends on the concentration of the compound of interest. A simple glass tube 5 to 10 cm long, equipped with infrared-transparent windows at both ends, can be used for concentrations down to several hundred ppm. Sample gas concentrations well below ppm can be measured with a White's cell, in which the infrared light is guided by mirrors through the gas. White's cells are available with optical pathlengths from 0.5 m up to a hundred meters.

Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used).[2] The plates are transparent to the infrared light and do not introduce any lines onto the spectra.

Solid samples can be prepared in a variety of ways. One common method is to crush the sample with an oily mulling agent (usually Nujol) in a marble or agate mortar with a pestle. A thin film of the mull is smeared onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely, to remove scattering effects from large crystals. This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass.[2] A third technique is the "cast film" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care must be taken to ensure that the film is not too thick, otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 µm) film from a solid sample. This is one of the most important ways of analysing failed plastic products, for example, because the integrity of the solid is preserved.

In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into a sample cup, which is inserted into the photoacoustic cell, which is then sealed for the measurement. The sample may be a single solid piece, a powder, or essentially any other form. For example, a piece of rock can be inserted into the sample cup and its spectrum measured.

Spectra obtained from different sample preparation methods will look slightly different from each other owing to differences in the samples' physical states.

Comparing to a reference


Schematics of a two-beam absorption spectrometer. A beam of infrared light is produced, passed through an interferometer (not shown), and then split into two separate beams. One is passed through the sample, the other through a reference. Both beams are reflected back towards a detector, but first they pass through a splitter, which quickly alternates which of the two beams enters the detector. The two signals are then compared and a printout is obtained. This "two-beam" setup gives accurate spectra even if the intensity of the light source drifts over time.

To take the infrared spectrum of a sample, it is necessary to measure both the sample and a "reference" (or "control"). This is because each measurement is affected by not only the light-absorption properties of the sample, but also the properties of the instrument (for example, what light source is used, what infrared detector is used, etc.). The reference measurement makes it possible to eliminate the instrument influence. Mathematically, the sample transmission spectrum is divided by the reference transmission spectrum.

The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is to simply remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just show the properties of the solute (at least approximately).

A common way to compare to a reference is sequentially: first measure the reference, then replace the reference by the sample and measure the sample. This technique is not perfectly reliable: if the infrared lamp is a bit brighter during the reference measurement and a bit dimmer during the sample measurement, the result will be distorted. More elaborate methods, such as a "two-beam" setup (see figure), can correct for these types of effects to give very accurate results. The standard addition method can also be used to statistically cancel these errors.
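The arithmetic of the reference correction is a pointwise ratio, from which an absorbance spectrum follows as A = −log10(T). A minimal numpy sketch with made-up single-beam intensities (not data from any real instrument):

    import numpy as np

    # Hypothetical single-beam intensities (arbitrary units) on the same
    # wavenumber grid: one scan through the sample, one through the reference.
    i_reference = np.array([100.0, 98.0, 95.0, 97.0])
    i_sample = np.array([90.0, 60.0, 85.0, 95.0])

    transmittance = i_sample / i_reference       # dimensionless, 0..1
    absorbance = -np.log10(transmittance)        # A = -log10(T)

    print(transmittance)
    print(absorbance)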

FTIR

An interferogram from an FTIR measurement. The horizontal axis is the position of the mirror, and the vertical axis is the amount of light detected. This is the "raw data" which can be Fourier transformed to get the actual spectrum.

Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra. Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an "interferogram", represents light output as a function of mirror position. A data-processing technique called the Fourier transform turns this raw data into the desired result (the sample's spectrum): light output as a function of infrared wavelength (or equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference.
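A toy numpy sketch of that processing step, using an idealized noiseless interferogram built from two monochromatic lines (real instruments also apply apodization, phase correction and other steps omitted here):

    import numpy as np

    # Idealized interferogram: two monochromatic lines (toy model).
    # x is the optical path difference in cm; wavenumbers are in cm^-1.
    n = 4000
    dx = 1.0e-4                                  # 1 um sampling step, in cm
    x = np.arange(n) * dx
    lines = [(1000.0, 1.0), (1600.0, 0.5)]       # (wavenumber, amplitude)

    interferogram = sum(a * np.cos(2 * np.pi * w * x) for w, a in lines)

    # The Fourier transform of the interferogram recovers the spectrum.
    spectrum = np.abs(np.fft.rfft(interferogram))
    wavenumbers = np.fft.rfftfreq(n, d=dx)       # frequency axis in cm^-1

    for w, a in lines:
        k = np.argmin(np.abs(wavenumbers - w))
        print(f"peak at {wavenumbers[k]:.0f} cm^-1, height {spectrum[k]:.0f}")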

There is an alternate method for taking spectra (the "dispersive" or "scanning monochromator" method), where one wavelength at a time passes through the sample. The dispersive method is more common in UV-Vis spectroscopy, but is less practical in the infrared than the FTIR method. One reason that FTIR is favored is called "Fellgett's advantage" or the "multiplex advantage": The information at all frequencies is collected simultaneously, improving both speed and signal-to-noise ratio. Another is called "Jacquinot's Throughput Advantage": A dispersive measurement requires detecting much lower light levels than an FTIR measurement.[3] There are other advantages, as well as some disadvantages,[3] but virtually all modern infrared spectrometers are FTIR instruments.

Absorption bands

IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of intensity and position (frequency). The positions of these bands are summarized in correlation tables as shown below.

[Correlation chart of characteristic IR absorption band positions; wavenumbers in cm−1.]

Badger's rule

For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency. In such cases further information can be gleaned about the strength of a bond, relying on the empirical guideline called Badger's rule. Originally published by Richard Badger in 1934,[4] this rule states that the strength of a bond correlates with the frequency of its vibrational mode: an increase in bond strength leads to a corresponding increase in vibrational frequency, and vice versa.
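Inverting the harmonic spring relation ν = (1/2πc)√(k/μ), used in the isotope section below, gives k = μ(2πcν)², the force constant that serves as the bond-strength proxy Badger's rule refers to. A minimal Python sketch using the well-known CO fundamental near 2143 cm−1 (the values and function name are illustrative):

    import math

    C = 2.99792458e10                            # speed of light, cm/s
    AMU = 1.66053906660e-27                      # atomic mass unit, kg

    def force_constant(wavenumber, m_a, m_b):
        """k (N/m) from a harmonic stretch wavenumber (cm^-1) and masses (amu)."""
        mu = m_a * m_b / (m_a + m_b) * AMU       # reduced mass, kg
        omega = 2 * math.pi * C * wavenumber     # angular frequency, rad/s
        return mu * omega ** 2

    # CO fundamental near 2143 cm^-1: a strong bond, hence a stiff spring.
    print(f"{force_constant(2143.0, 12.000, 15.995):.0f} N/m")   # ~1855 N/m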

Uses and applications

Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and industry. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers.

It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. It can be used in determining the blood alcohol content of a suspected drunk driver.

A useful way of analysing solid samples without the need for cutting samples uses ATR or attenuated total reflectance spectroscopy. Using this approach, samples are pressed against the face of a single crystal. The infrared radiation passes through the crystal and only interacts with the sample at the interface between the two materials.

Advances in computer filtering and processing of results have made it possible to measure samples in solution accurately (water produces a broad absorbance across the range of interest, which would render the spectra unreadable without this computer treatment).

Some instruments also automatically identify the substance being measured by comparing it against a store of thousands of reference spectra.

Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Modern research instruments can take infrared measurements across the range of interest as frequently as 32 times a second. This can be done whilst simultaneous measurements are made using other techniques. This makes the observations of chemical reactions and processes quicker and more accurate.

Infrared spectroscopy has also been successfully utilized in the field of semiconductor microelectronics:[5] for example, infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, silicon nitride, etc.

The instruments are now small, and can be transported, even for use in field trials.

In February 2014, NASA announced a greatly upgraded database, based on IR spectroscopy, for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.[6]

Isotope effects

The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy. For example, the O–O stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm−1 for ν(16O–16O) and ν(18O–18O), respectively.

By treating the O–O bond as a spring, the wavenumber of absorbance, ν, can be calculated:
\nu = \frac{1}{2 \pi c} \sqrt{\frac{k}{\mu}}
where k is the spring constant for the bond, c is the speed of light, and μ is the reduced mass of the A–B system:
\mu = \frac{m_A m_B}{m_A + m_B}
(m_i is the mass of atom i).

The reduced masses for 16O–16O and 18O–18O can be approximated as 8 and 9 atomic mass units, respectively. Thus
\frac{\nu(^{16}O)}{\nu(^{18}O)} = \sqrt{\frac{9}{8}} \approx \frac{832}{788},
where \nu is the wavenumber (frequency divided by the speed of light).
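The same arithmetic is easy to reproduce; a minimal Python sketch of the isotope-ratio estimate above:

    def reduced_mass(m_a, m_b):
        return m_a * m_b / (m_a + m_b)

    mu16 = reduced_mass(16, 16)                  # 8.0 amu
    mu18 = reduced_mass(18, 18)                  # 9.0 amu

    # Same force constant assumed, so the wavenumbers scale as sqrt(1/mu).
    print((mu18 / mu16) ** 0.5)                  # ~1.061
    print(832 / 788)                             # ~1.056, the measured ratio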

The effect of isotopes, both on the vibration and on the decay dynamics, has been found to be stronger than previously thought. In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves the symmetric stretch mode, with a strong isotope dependence. For example, it was shown that for a natural silicon sample the lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is changed to 29Si, the lifetime increases to 19 ps; likewise, with 30Si it becomes 27 ps.[7]

Two-dimensional IR

Two-dimensional infrared correlation spectroscopy analysis combines multiple samples of infrared spectra to reveal more complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution is enhanced. The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral changes at two different wavenumbers.
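For the perturbation-based 2D correlation analysis described above, the synchronous spectrum is essentially a covariance of the dynamic spectra, while the asynchronous spectrum is obtained via the Hilbert–Noda transformation; these are Noda's standard definitions, not a method specific to this article. A minimal numpy sketch (array shapes and names are illustrative):

    import numpy as np

    def two_d_correlation(y):
        """Synchronous and asynchronous 2D correlation spectra (Noda's rules).

        y: (m, n) array of m spectra recorded under a varying perturbation,
           with the perturbation-mean spectrum already subtracted.
        """
        m = y.shape[0]
        sync = y.T @ y / (m - 1)
        # Hilbert-Noda matrix: N[j, k] = 1 / (pi * (k - j)), 0 on the diagonal.
        idx = np.arange(m)
        diff = idx[None, :] - idx[:, None]
        with np.errstate(divide="ignore"):
            noda = np.where(diff == 0, 0.0, 1.0 / (np.pi * diff))
        asyn = y.T @ (noda @ y) / (m - 1)
        return sync, asyn

    # Toy usage: 9 spectra of 3 "wavenumbers" under a growing perturbation.
    rng = np.random.default_rng(0)
    y = rng.standard_normal((9, 3))
    y -= y.mean(axis=0)                          # subtract the mean spectrum
    sync, asyn = two_d_correlation(y)
    print(sync.shape, asyn.shape)                # (3, 3) (3, 3)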


Pulse Sequence used to obtain a two-dimensional Fourier transform infrared spectrum. The time period \tau_1 is usually referred to as the coherence time and the second time period \tau_2 is known as the waiting time. The excitation frequency is obtained by Fourier transforming along the \tau_1 axis.

Nonlinear two-dimensional infrared spectroscopy[8][9] is the infrared version of correlation spectroscopy. Nonlinear two-dimensional infrared spectroscopy is a technique that has become available with the development of femtosecond infrared laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during which the system is allowed to relax. The typical waiting time lasts from zero to several picoseconds, and the duration can be controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored technique and is becoming increasingly popular for fundamental research.

As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes. In contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. These excitations result in excited-state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct techniques, COSY and NOESY, are frequently used. The cross peaks in the former are related to scalar coupling, while in the latter they are related to spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy, analogs have been drawn to these 2DNMR techniques.
Nonlinear two-dimensional infrared spectroscopy with zero waiting time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been used for determination of the secondary structure content of proteins.[10]

Molecular vibration


From Wikipedia, the free encyclopedia

A molecular vibration occurs when atoms in a molecule are in periodic motion while the molecule as a whole has constant translational and rotational motion. The frequency of the periodic motion is known as a vibration frequency, and the typical frequencies of molecular vibrations range from less than 10¹² Hz to approximately 10¹⁴ Hz.

In general, a molecule with N atoms has 3N – 6 normal modes of vibration, but a linear molecule has 3N – 5 such modes, as rotation about its molecular axis cannot be observed.[1] A diatomic molecule has one normal mode of vibration. The normal modes of vibration of polyatomic molecules are independent of each other but each normal mode will involve simultaneous vibrations of different parts of the molecule such as different chemical bonds.

A molecular vibration is excited when the molecule absorbs a quantum of energy, E, corresponding to the vibration's frequency, ν, according to the relation E = hν (where h is Planck's constant). A fundamental vibration is excited when one such quantum of energy is absorbed by the molecule in its ground state. When two quanta are absorbed the first overtone is excited, and so on to higher overtones.
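For a band quoted in wavenumbers (cm−1), the frequency is the wavenumber times the speed of light, so E = hν = hc × (wavenumber). A small worked example in Python, using the familiar CO stretch near 2143 cm−1 as the input:

    H = 6.62607015e-34                           # Planck constant, J*s
    C = 2.99792458e10                            # speed of light, cm/s
    EV = 1.602176634e-19                         # joules per electronvolt

    wavenumber = 2143.0                          # cm^-1, roughly the CO stretch
    energy = H * C * wavenumber                  # E = h * nu = h * c * wavenumber
    print(f"{energy:.3e} J = {energy / EV:.3f} eV")   # ~4.26e-20 J, ~0.266 eV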

To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental.
Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, as the potential energy of the molecule is more like a Morse potential.

The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules.

Vibrational excitation can occur in conjunction with electronic excitation (vibronic transition), giving vibrational fine structure to electronic transitions, particularly with molecules in the gas state.

Simultaneous excitation of a vibration and rotations gives rise to vibration-rotation spectra.

Vibrational coordinates

The coordinate of a normal vibration is a combination of changes in the positions of atoms in the molecule. When the vibration is excited the coordinate changes sinusoidally with a frequency ν, the frequency of the vibration.

Internal coordinates

Internal coordinates are of the following types, illustrated with reference to the planar molecule ethylene:
  • Stretching: a change in the length of a bond, such as C-H or C-C.
  • Bending: a change in the angle between two bonds, such as the HCH angle in a methylene group.
  • Rocking: a change in the angle between a group of atoms, such as a methylene group, and the rest of the molecule.
  • Wagging: a change in the angle between the plane of a group of atoms, such as a methylene group, and a plane through the rest of the molecule.
  • Twisting: a change in the angle between the planes of two groups of atoms, such as a change in the angle between the two methylene groups.
  • Out-of-plane: a change in the angle between any one of the C-H bonds and the plane defined by the remaining atoms of the ethylene molecule. Another example is in BF3 when the boron atom moves in and out of the plane of the three fluorine atoms.
In a rocking, wagging or twisting coordinate the bond lengths within the groups involved do not change. The angles do. Rocking is distinguished from wagging by the fact that the atoms in the group stay in the same plane.

In ethene there are 12 internal coordinates: 4 C-H stretching, 1 C-C stretching, 2 H-C-H bending, 2 CH2 rocking, 2 CH2 wagging, 1 twisting. Note that the H-C-C angles cannot be used as internal coordinates as the angles at each carbon atom cannot all increase at the same time.

Vibrations of a methylene group (-CH2-) in a molecule for illustration

The atoms in a CH2 group, commonly found in organic compounds, can vibrate in six different ways: symmetric and asymmetric stretching, scissoring, rocking, wagging and twisting as shown here:

[Animations of the six CH2 vibrational modes: symmetrical stretching, asymmetrical stretching, scissoring (bending), rocking, wagging, and twisting.]

(These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms).

Symmetry-adapted coordinates

Symmetry-adapted coordinates may be created by applying a projection operator to a set of internal coordinates.[2]
The projection operator is constructed with the aid of the character table of the molecular point group. For example, the four (un-normalised) C-H stretching coordinates of the molecule ethene are given by
Q_{s1} = q_1 + q_2 + q_3 + q_4
Q_{s2} = q_1 + q_2 - q_3 - q_4
Q_{s3} = q_1 - q_2 + q_3 - q_4
Q_{s4} = q_1 - q_2 - q_3 + q_4
where q_1 to q_4 are the internal coordinates for stretching of each of the four C-H bonds.
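Numerically, applying these combinations is just a matrix acting on the vector of internal coordinates; a minimal numpy sketch with hypothetical displacement values:

    import numpy as np

    # Rows: the un-normalised combinations Q_s1..Q_s4 given above, acting on
    # the four C-H stretching coordinates q1..q4 of ethene.
    projection = np.array([
        [1,  1,  1,  1],   # Q_s1
        [1,  1, -1, -1],   # Q_s2
        [1, -1,  1, -1],   # Q_s3
        [1, -1, -1,  1],   # Q_s4
    ])

    q = np.array([0.01, 0.02, -0.01, 0.00])      # hypothetical displacements
    print(projection @ q)                        # symmetry-adapted coordinates

    # Rows belonging to different irreducible representations are orthogonal:
    print(projection @ projection.T)             # 4 * identity matrix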

Illustrations of symmetry-adapted coordinates for most small molecules can be found in Nakamoto.[3]

Normal coordinates

The normal coordinates, denoted as Q, refer to the positions of atoms away from their equilibrium positions, with respect to a normal mode of vibration. Each normal mode is assigned a single normal coordinate, and so the normal coordinate refers to the "progress" along that normal mode at any given time. Formally, normal modes are determined by solving a secular determinant, and then the normal coordinates (over the normal modes) can be expressed as a summation over the cartesian coordinates (over the atom positions). The advantage of working in normal modes is that they diagonalize the matrix governing the molecular vibrations, so each normal mode is an independent molecular vibration, associated with its own spectrum of quantum mechanical states. If the molecule possesses symmetries, it will belong to a point group, and the normal modes will "transform as" an irreducible representation under that group. The normal modes can then be qualitatively determined by applying group theory and projecting the irreducible representation onto the cartesian coordinates. For example, when this treatment is applied to CO2, it is found that the C=O stretches are not independent, but rather there is an O=C=O symmetric stretch and an O=C=O asymmetric stretch.
  • symmetric stretching: the sum of the two C-O stretching coordinates; the two C-O bond lengths change by the same amount and the carbon atom is stationary. Q = q1 + q2
  • asymmetric stretching: the difference of the two C-O stretching coordinates; one C-O bond length increases while the other decreases. Q = q1 - q2
When two or more normal coordinates belong to the same irreducible representation of the molecular point group (colloquially, have the same symmetry) there is "mixing" and the coefficients of the combination cannot be determined a priori. For example, in the linear molecule hydrogen cyanide, HCN, the two stretching vibrations are
  1. principally C-H stretching with a little C-N stretching; Q1 = q1 + a q2 (a << 1)
  2. principally C-N stretching with a little C-H stretching; Q2 = b q1 + q2 (b << 1)
The coefficients a and b are found by performing a full normal coordinate analysis by means of the Wilson GF method.[4]

Newtonian mechanics


The HCl molecule as an anharmonic oscillator vibrating at energy level E3. Here D0 is the dissociation energy, r0 the bond length, and U the potential energy. Energy is expressed in wavenumbers. The hydrogen chloride molecule is attached to the coordinate system to show bond length changes on the curve.

Perhaps surprisingly, molecular vibrations can be treated using Newtonian mechanics to calculate the correct vibration frequencies. The basic assumption is that each vibration can be treated as though it corresponds to a spring. In the harmonic approximation the spring obeys Hooke's law: the force required to extend the spring is proportional to the extension. The proportionality constant is known as a force constant, k. The anharmonic oscillator is considered elsewhere.[5]
\mathrm{Force} = -kQ
By Newton’s second law of motion this force is also equal to a reduced mass, μ, times acceleration.
 \mathrm{Force} = \mu \frac{d^2Q}{dt^2}
Since this is one and the same force the ordinary differential equation follows.
\mu \frac{d^2Q}{dt^2} + k Q = 0
The solution to this equation of simple harmonic motion is
Q(t) = A \cos(2 \pi \nu t); \qquad \nu = \frac{1}{2\pi}\sqrt{\frac{k}{\mu}}.
A is the maximum amplitude of the vibration coordinate Q. It remains to define the reduced mass, μ. In general, the reduced mass of a diatomic molecule, AB, is expressed in terms of the atomic masses, mA and mB, as
\frac{1}{\mu} = \frac{1}{m_A}+\frac{1}{m_B}.
The use of the reduced mass ensures that the centre of mass of the molecule is not affected by the vibration. In the harmonic approximation the potential energy of the molecule is a quadratic function of the normal coordinate. It follows that the force-constant is equal to the second derivative of the potential energy.
k=\frac{\partial ^2V}{\partial Q^2}
When two or more normal vibrations have the same symmetry, a full normal coordinate analysis must be performed (see GF method). The vibration frequencies, νi, are obtained from the eigenvalues, λi, of the matrix product GF. G is a matrix of numbers derived from the masses of the atoms and the geometry of the molecule,[4] and F is a matrix derived from force-constant values. Details concerning the determination of the eigenvalues can be found elsewhere.[6]
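In the diatomic case the GF problem collapses to 1 × 1, with G = 1/μ and F = k, and reproduces the frequency formula above. A minimal numpy sketch for HCl, assuming the literature force constant of about 516 N/m:

    import numpy as np

    C = 2.99792458e10                            # speed of light, cm/s
    AMU = 1.66053906660e-27                      # atomic mass unit, kg

    # Diatomic HCl: G and F are 1x1, with G = 1/mu and F = k.
    mu = (1.008 * 34.969) / (1.008 + 34.969) * AMU   # reduced mass, kg
    k = 516.0                                    # force constant, N/m (literature value)

    G = np.array([[1.0 / mu]])
    F = np.array([[k]])
    lam = np.linalg.eigvals(G @ F).real          # lambda = (2*pi*nu)^2

    wavenumbers = np.sqrt(lam) / (2 * np.pi * C)
    print(wavenumbers)                           # ~2990 cm^-1 (HCl harmonic frequency)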

Quantum mechanics

In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by
E_n = h\left(n + \frac{1}{2}\right)\nu = h\left(n + \frac{1}{2}\right)\frac{1}{2\pi}\sqrt{\frac{k}{\mu}},
where n is a quantum number that can take values of 0, 1, 2 ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this vibrational quantum number is often designated as v.[7][8]

The difference in energy when n (or v) changes by 1 is therefore equal to h\nu, the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level n to level n+1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency \nu (in the harmonic oscillator approximation).

See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one,
\Delta n = \pm 1
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states n=2 and n=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band.
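Quantitatively, with anharmonic term values G(v) = ωe(v + ½) − ωexe(v + ½)², the fundamental (1 ← 0) lies at ωe − 2ωexe while the hot band (2 ← 1) lies at ωe − 4ωexe. A small Python check using HCl's literature spectroscopic constants:

    def term_value(v, we, wexe):
        """Anharmonic vibrational term G(v), in cm^-1."""
        return we * (v + 0.5) - wexe * (v + 0.5) ** 2

    we, wexe = 2990.9, 52.8                      # HCl constants, cm^-1 (literature values)

    fundamental = term_value(1, we, wexe) - term_value(0, we, wexe)
    hot_band = term_value(2, we, wexe) - term_value(1, we, wexe)
    overtone = term_value(2, we, wexe) - term_value(0, we, wexe)

    print(fundamental)   # ~2885 cm^-1
    print(hot_band)      # ~2780 cm^-1: slightly lower, the "hot band"
    print(overtone)      # ~5665 cm^-1: a little less than twice the fundamental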

Intensities

In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate.[9] The intensity of Raman bands depends on polarizability.

Watch scientist challenge the scare-promoting Food Babe

April 7, 2015
 
Original link:  http://geneticliteracyproject.org/2015/04/watch-scientist-challenge-the-scare-promoting-food-babe/
 

If you don’t know who Vani Hari is by now, just ask Subway. Under her nom de plume, Food Babe, she and her legion of followers pounded the fast food company until it removed a harmless chemical with a scary sounding name from their bread.

That’s Hari’s stock-in-trade as a self-proclaimed consumer advocate–demonizing benign ingredients, focusing on their seeming “yuck” factor but ignoring the science. In the case of the Subway fiasco, for example, although azodicarbonamide, a dough conditioner, is perfectly harmless and widely used in foods, it’s also used in yoga mats. Guess who won that ‘public debate’?

Hari often recounts her self-authored narrative, her personal journey from unhealthy and frumpy to beautiful “babe.” She claims to have achieved this transformation by rejecting standard American fare and its reliance on unhealthy ingredients in favor of an organic diet. Her book, The Food Babe Way, hit shelves in February with the promise of revealing food-industry secrets and helping readers improve their health and waistlines with exclusive tips and tricks. It shot to the top of many bestseller lists–but the science community sees her less as a defender of the culinary downtrodden than as a fearmonger and promoter of chemophobia and dangerous misinformation.

Alison Bernstein, AKA Mommy Ph.D.–co-author of this piece–attended one of Hari’s book promotion appearances last month at the Marcus Jewish Community Center in Atlanta. Bernstein, who recently launched the Scientists Are People campaign to showcase the humanity of oft-demonized scientists, made waves in the science-based food community when she challenged Hari during the Atlanta event’s Q&A.

The event started out in typical Hari fashion–all Food Babe, all the time, replete with her fabulous tale of transformation from sickly, homely waif to the organic world’s superwoman. For years, she said, she felt lousy because she was sleep deprived and regularly gorged on fast food and candy. She was overweight and plagued with eczema. Voila. Now she’s bikini ready–and all by eliminating “chemicals and GMOs.”

Hari’s typical narrative drew on misguided sympathy. She recounted growing up as a child of Indian immigrants. Wanting to fit in, she rejected her mother’s traditional home cooking, which she “shunned as a child,” complaining that it looked and tasted funny. Instead, she and her brother subsisted on Wendy’s, Burger King, McDonalds, microwavable Salisbury steak, and Betty Crocker meals.

https://www.youtube.com/watch?feature=player_embedded&v=EnUDQmNr0p4

Hari’s food misadventures continued as an adult. A peripatetic businesswoman, she would dine at high-end restaurants such as Morton’s and Ruth’s Chris on her company’s expense account. All of that changed, she said, after she suffered from appendicitis. It was a wake-up call. She examined her eating patterns and learned, she said, that she had been “duped” by the food industry. She changed her habits and launched her crusade. Now she asks people to follow the Food Babe Way (which many experts say promotes orthorexia).

Some of what she advocates is just food nutrition 101. Dieticians, physicians, and scientists argue that switching to a diet high in produce, getting adequate sleep (by leaving a high pressure job), and cutting calories and junk food would improve anyone’s health and mood, Food Babe Way or not. Her transformation had nothing to do with avoiding specific chemicals or any specific food.

Hari vigorously disagrees. In Atlanta, following her script, she claimed that an organic diet is the best to avoid “harmful” chemicals. She not only touted eating produce–a good thing–she said it is imperative to purchase organic–something science does not support. She cited the Consumers Union and Environmental Working Group as her sources for her recommendations; both use flawed methodology to arrive at their lists of so-called “safe” and “dangerous” produce. And she promoted the common misconception that organic farming doesn’t use pesticides–or that the natural ones that are used are necessarily safer than targeted, synthetic alternatives.

https://www.youtube.com/watch?feature=player_embedded&v=oJ0nkPXgOhQ

Bernstein corrected Hari’s misinformation during the Q&A session:

https://www.youtube.com/watch?feature=player_embedded&v=S0z2eeq_c_4

When Bernstein challenged Hari’s claim that organic farming doesn’t use pesticides, Hari first falsely contradicted her, then changed the subject and finally entered attack mode, accusing Bernstein of being one of those “people who don’t want the [pro-organic] message spread.” Bernstein, she implied, wasn’t an independent scientist–she is–but a shill for the food industry.

It was surprising that Hari even answered questions. At most events–such as an appearance earlier this year at the University of Florida–Hari refused to respond to audience queries. Taking questions implies a commitment to dialogue. Rather, Hari motivates by instilling fear, uncertainty and doubt. She demonizes what she doesn’t understand, and has become quite wealthy in the process. Indeed, the hypocrisy of her calling science advocates “shills” is astonishing.

Bernstein observed that Hari courts her audience by making it seem as if she is empowering them: knowledge is power is her narrative. As per her M.O., Food Babe gushed about empowering consumers with information about the dangerous ingredients that haunt their foods. But is arming consumers with misinformation empowering? Or does it exploit their fears of the unknown? People can only make wise decisions when they have access to accurate information. Instilling unfounded fears of food is the opposite of empowering.

https://www.youtube.com/watch?feature=player_embedded&v=rbePYkc4bXk

To be truly empowered, consumers must have access to accurate information. Even Hari admitted that truth has not been her stock-in-trade. “It was just a hobby,” she said of her early blog. “It wasn’t this well-researched facts kind of thing. It was just my opinion about things.” She claims to have cleaned up her act, but medical and science experts disagree.

Hari’s perspective is a mixture of the mystical and misinformation. At one point, she described the body as acidic in the morning, advising drinking lemon water to combat the acidity because “lemon water is very alkaline”. Nothing about what she said is correct. The body’s balance between acidity and alkalinity is referred to as acid-base balance. The pH of the human body is naturally regulated, using different mechanisms to control the blood’s acid-base balance. Lemon juice is not alkaline (basic); it’s acidic. Food does not alter the body’s pH.

https://www.youtube.com/watch?feature=player_embedded&v=ftq4nJqyRqo

Hari also fails to assess the quality of the information that she passes on to her unquestioning followers. And she makes it seem that she, and only she, has cracked the wall of deception built by Food Inc. Most of the supposedly top-secret information she claims to reveal is publicly available at university and government websites, including the FDA, USDA, EPA and NIH, as well as non-government websites for the organic industry like OMRI, and various publicly available science publications.

With notoriety comes responsibility. Spreading misinformation and fear is irresponsible and distracting.
The best place for consumers to get accurate information is from those with training and experience in food, farming and nutrition. At the MJCCA event, she repeatedly described scientists as people who hoard information and intentionally deceive the public about “chemicals.” She painted farmers as uncaring, and only out for profit. In contrast, she positioned herself as the anti-expert, rejecting “people who say this is too complicated, you can’t understand this, you need an advanced degree, they don’t want to empower the individual.”

Hari’s assertion that scientists hoard knowledge–even as a scientist was engaging her in open discourse–was baffling. Hari’s fans are hungry for real information. After the Atlanta Q&A, a small crowd formed around Bernstein to ask questions about Parkinson’s disease, pesticide toxicity and where to find accurate information. Many of them had no idea that much of this information is publicly available.

The huge number of scientists, science communicators and farmers who actively, and often selflessly, share medical and food information on social media, and cordially engage with the public, serves as proof that Hari’s characterization is wrong, even offensive. She apparently believes she needs to demonize trained experts to convince people to listen to her.

Alison Bernstein is a scientist studying Parkinson’s disease. She lives in Atlanta, GA with her husband, 2 kids and 2 cats. Follow her on her Mommy Ph.D. Facebook page and on Twitter @mommyphd2.

Kavin Senapathy is a contributor at Genetic Literacy Project and other sites. She is a mother of two and a freelance writer who works for a genomics and bioinformatics R&D company in Madison, WI. Opinions expressed are her own and do not reflect her employer’s. Follow Kavin on her science advocacy Facebook page, and Twitter @ksenapathy
