Sunday, October 12, 2014

Greenhouse gas

From Wikipedia, the free encyclopedia

Greenhouse effect schematic showing energy flows between space, the atmosphere, and Earth's surface. Energy influx and emittance are expressed in watts per square meter (W/m2).

A greenhouse gas (sometimes abbreviated GHG) is a gas in an atmosphere that absorbs and emits radiation within the thermal infrared range. This process is the fundamental cause of the greenhouse effect.[1] The primary greenhouse gases in the Earth's atmosphere are water vapor, carbon dioxide, methane, nitrous oxide, and ozone. Greenhouse gases greatly affect the temperature of the Earth; without them, Earth's surface would average about 33 °C colder, which is about 59 °F below the present average of 14 °C (57 °F).[2][3][4]

Since the beginning of the Industrial Revolution (taken as the year 1750), the burning of fossil fuels and extensive clearing of native forests have contributed to a 40% increase in the atmospheric concentration of carbon dioxide, from 280 parts per million (ppm) to 392.6 ppm in 2012,[5][6] and the concentration has now reached 400 ppm in the northern hemisphere. This increase has occurred despite the uptake of a large portion of the emissions by various natural "sinks" involved in the carbon cycle.[7][8] Anthropogenic carbon dioxide (CO2) emissions (i.e., emissions produced by human activities) come from combustion of carbon-based fuels, principally wood, coal, oil, and natural gas.[9] Under ongoing greenhouse gas emissions, available Earth System Models project that the Earth's surface temperature could exceed historical analogs as early as 2047, affecting most ecosystems on Earth and the livelihoods of over 3 billion people worldwide.[10] Greenhouse gases also drive ocean bio-geochemical changes with broad ramifications in marine systems.[11]

In the Solar System, the atmospheres of Venus, Mars, and Titan also contain gases that cause a greenhouse effect, though Titan's atmosphere has an anti-greenhouse effect which reduces the warming.

Gases in Earth's atmosphere

Greenhouse gases

Atmospheric absorption and scattering at different wavelengths of electromagnetic waves. The largest absorption band of carbon dioxide is in the infrared.

Greenhouse gases are those that can absorb and emit infrared radiation,[1] but not radiation in or near the visible spectrum. In order, the most abundant greenhouse gases in Earth's atmosphere are water vapor, carbon dioxide, methane, nitrous oxide, and ozone.

Atmospheric concentrations of greenhouse gases are determined by the balance between sources (emissions of the gas from human activities and natural systems) and sinks (the removal of the gas from the atmosphere by conversion to a different chemical compound).[12] The proportion of an emission remaining in the atmosphere after a specified time is the "airborne fraction" (AF). More precisely, the annual AF is the ratio of the atmospheric increase in a given year to that year's total emissions. For CO2, the AF over the last 50 years (1956–2006) has been increasing at 0.25 ± 0.21% per year.[13]
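The annual airborne fraction defined above is a simple ratio; a minimal sketch in Python (the emission and uptake figures below are illustrative, not measured values):

```python
def airborne_fraction(atmospheric_increase_gtc, total_emissions_gtc):
    """Annual airborne fraction (AF): the atmospheric CO2 increase
    in a given year divided by that year's total emissions."""
    return atmospheric_increase_gtc / total_emissions_gtc

# Illustrative values: a 4 GtC rise in the atmospheric stock against
# 9 GtC of total emissions gives an AF of about 0.44, i.e. roughly
# 44% of that year's emissions stayed airborne.
print(round(airborne_fraction(4.0, 9.0), 2))  # 0.44
```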

Non-greenhouse gases

Although contributing to many other physical and chemical reactions, the major atmospheric constituents, nitrogen (N2), oxygen (O2), and argon (Ar), are not greenhouse gases. This is because molecules containing two atoms of the same element, such as N2 and O2, and monatomic molecules, such as argon, have no net change in their dipole moment when they vibrate and hence are almost totally unaffected by infrared radiation. Although molecules containing two atoms of different elements, such as carbon monoxide (CO) or hydrogen chloride (HCl), absorb IR, these molecules are short-lived in the atmosphere owing to their reactivity and solubility. Because they do not contribute significantly to the greenhouse effect, they are usually omitted when discussing greenhouse gases.

Indirect radiative effects

The false colors in this image represent levels of carbon monoxide in the lower atmosphere, ranging from about 390 parts per billion (dark brown pixels), to 220 parts per billion (red pixels), to 50 parts per billion (blue pixels).[14]

Some gases have indirect radiative effects (whether or not they are greenhouse gases themselves). This happens in two main ways. One way is that when they break down in the atmosphere they produce another greenhouse gas. For example, methane and carbon monoxide (CO) are oxidized to give carbon dioxide (and methane oxidation also produces water vapor; that will be considered below). Oxidation of CO to CO2 directly produces an unambiguous increase in radiative forcing, although the reason is subtle. The peak of the thermal IR emission from the Earth's surface is very close to a strong vibrational absorption band of CO2 (667 cm−1). On the other hand, the single CO vibrational band only absorbs IR at much higher frequencies (2145 cm−1), where the ~300 K thermal emission of the surface is at least a factor of ten lower. Oxidation of methane to CO2, which requires reactions with the OH radical, produces an instantaneous reduction in radiative forcing, since CO2 is a weaker greenhouse gas than methane; but CO2 has a longer lifetime. As described below, this is not the whole story, since the oxidations of CO and CH4 are intertwined: both consume OH radicals. In any case, the calculation of the total radiative effect needs to include both the direct and indirect forcing.

A second type of indirect effect happens when chemical reactions in the atmosphere involving these gases change the concentrations of greenhouse gases. For example, the destruction of non-methane volatile organic compounds (NMVOC) in the atmosphere can produce ozone. The size of the indirect effect can depend strongly on where and when the gas is emitted.[15]

Methane has a number of indirect effects in addition to forming CO2. First, the main chemical that destroys methane in the atmosphere is the hydroxyl radical (OH). Methane reacts with OH, so more methane means that the concentration of OH goes down. Effectively, methane increases its own atmospheric lifetime and therefore its overall radiative effect. Second, the oxidation of methane can produce ozone. Third, as well as making CO2, the oxidation of methane produces water; this is a major source of water vapor in the stratosphere, which is otherwise very dry. CO and NMVOC also produce CO2 when they are oxidized. They remove OH from the atmosphere, and this leads to higher concentrations of methane. The surprising effect of this is that the global warming potential of CO is three times that of CO2.[16] The same process that converts NMVOC to carbon dioxide can also lead to the formation of tropospheric ozone. Halocarbons have an indirect effect because they destroy stratospheric ozone. Finally, hydrogen can lead to ozone production and CH4 increases, as well as producing water vapor in the stratosphere.[15]

Contribution of clouds to Earth's greenhouse effect

The major non-gas contributor to the Earth's greenhouse effect, clouds, also absorb and emit infrared radiation and thus have an effect on radiative properties of the greenhouse gases. Clouds are water droplets or ice crystals suspended in the atmosphere.[17][18]

Impacts on the overall greenhouse effect

Schmidt et al. (2010)[19] analysed how individual components of the atmosphere contribute to the total greenhouse effect. They estimated that water vapor accounts for about 50% of the Earth's greenhouse effect, with clouds contributing 25%, carbon dioxide 20%, and the minor greenhouse gases and aerosols accounting for the remaining 5%. In the study, the reference model atmosphere is for 1980 conditions. Image credit: NASA.[20]

The contribution of each gas to the greenhouse effect is affected by the characteristics of that gas, its abundance, and any indirect effects it may cause. For example, the direct radiative effect of a mass of methane is about 72 times stronger than that of the same mass of carbon dioxide over a 20-year time frame,[21] but methane is present in much smaller concentrations, so its total direct radiative effect is smaller, in part due to its shorter atmospheric lifetime. On the other hand, in addition to its direct radiative impact, methane has a large indirect radiative effect because it contributes to ozone formation. Shindell et al. (2005)[22] argue that the contribution to climate change from methane is at least double previous estimates as a result of this effect.[23]

When ranked by their direct contribution to the greenhouse effect, the most important are:[17]

Compound                 Formula   Contribution (%)
Water vapor and clouds   H2O       36–72%
Carbon dioxide           CO2       9–26%
Methane                  CH4       4–9%
Ozone                    O3        3–7%

In addition to the main greenhouse gases listed above, other greenhouse gases include sulfur hexafluoride, hydrofluorocarbons and perfluorocarbons (see IPCC list of greenhouse gases). Some greenhouse gases are not often listed. For example, nitrogen trifluoride has a high global warming potential (GWP) but is only present in very small quantities.[24]

Proportion of direct effects at a given moment

It is not possible to state that a certain gas causes an exact percentage of the greenhouse effect. This is because some of the gases absorb and emit radiation at the same frequencies as others, so that the total greenhouse effect is not simply the sum of the influence of each gas. The higher ends of the ranges quoted are for each gas alone; the lower ends account for overlaps with the other gases.[17][18] In addition, some gases such as methane are known to have large indirect effects that are still being quantified.[25]

Atmospheric lifetime

Aside from water vapor, which has a residence time of about nine days,[26] major greenhouse gases are well mixed and take many years to leave the atmosphere.[27] Although it is not easy to know with precision how long it takes greenhouse gases to leave the atmosphere, there are estimates for the principal greenhouse gases. Jacob (1999)[28] defines the lifetime τ of an atmospheric species X in a one-box model as the average time that a molecule of X remains in the box. Mathematically, τ can be defined as the ratio of the mass m (in kg) of X in the box to its removal rate, which is the sum of the flow of X out of the box (Fout), chemical loss of X (L), and deposition of X (D) (all in kg/s): τ = m / (Fout + L + D).[28] If one stopped pouring any of this gas into the box, then after a time τ its concentration would fall to about 37% (1/e) of its initial value.
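A minimal numerical sketch of this one-box picture (all masses and removal rates below are hypothetical, chosen only to make the arithmetic visible):

```python
import math

def lifetime(mass_kg, f_out, chem_loss, deposition):
    """One-box lifetime: tau = m / (F_out + L + D), with the burden m
    in kg and every removal term in kg/s."""
    return mass_kg / (f_out + chem_loss + deposition)

def remaining_fraction(t_s, tau_s):
    """Fraction of the initial burden left after time t under
    first-order (exponential) removal."""
    return math.exp(-t_s / tau_s)

# Hypothetical burden of 1e12 kg removed at a combined 32e3 kg/s.
tau = lifetime(1.0e12, 20.0e3, 10.0e3, 2.0e3)
# After one lifetime, 1/e (about 37%) of the gas remains.
print(round(remaining_fraction(tau, tau), 2))  # 0.37
```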

The atmospheric lifetime of a species therefore measures the time required to restore equilibrium following a sudden increase or decrease in its concentration in the atmosphere. Individual atoms or molecules may be lost or deposited to sinks such as the soil, the oceans and other waters, or vegetation and other biological systems, reducing the excess to background concentrations. The average time taken to achieve this is the mean lifetime.

Carbon dioxide has a variable atmospheric lifetime, which cannot be specified precisely.[29] The atmospheric lifetime of CO2 is estimated to be of the order of 30–95 years.[30] This figure accounts for CO2 molecules being removed from the atmosphere by mixing into the ocean, photosynthesis, and other processes. However, it excludes the balancing fluxes of CO2 into the atmosphere from the geological reservoirs, which have slower characteristic rates.[31] While more than half of the CO2 emitted is removed from the atmosphere within a century, some fraction (about 20%) of emitted CO2 remains in the atmosphere for many thousands of years.[32][33][34] Similar issues apply to other greenhouse gases, many of which have longer mean lifetimes than CO2. For example, N2O has a mean atmospheric lifetime of 114 years.[21]

Radiative forcing

The Earth absorbs some of the radiant energy received from the sun, reflects some of it as light and reflects or radiates the rest back to space as heat.[35] The Earth's surface temperature depends on this balance between incoming and outgoing energy.[35] If this energy balance is shifted, the Earth's surface could become warmer or cooler, leading to a variety of changes in global climate.[35]

A number of natural and man-made mechanisms can affect the global energy balance and force changes in the Earth's climate.[35] Greenhouse gases are one such mechanism.[35] Greenhouse gases in the atmosphere absorb and re-emit some of the outgoing energy radiated from the Earth's surface, causing that heat to be retained in the lower atmosphere.[35] As explained above, some greenhouse gases remain in the atmosphere for decades or even centuries, and therefore can affect the Earth's energy balance over a long time period.[35] Factors that influence Earth's energy balance can be quantified in terms of "radiative climate forcing."[35] Positive radiative forcing indicates warming (for example, by increasing incoming energy or decreasing the amount of energy that escapes to space), while negative forcing is associated with cooling.[35]

Global warming potential

The global warming potential (GWP) depends on both the efficiency of the molecule as a greenhouse gas and its atmospheric lifetime. GWP is measured relative to the same mass of CO2 and evaluated for a specific timescale. Thus, if a gas has a high (positive) radiative forcing but also a short lifetime, it will have a large GWP on a 20-year scale but a small one on a 100-year scale. Conversely, if a molecule has a longer atmospheric lifetime than CO2, its GWP will increase with the timescale considered. Carbon dioxide is defined to have a GWP of 1 over all time periods.

Methane has an atmospheric lifetime of 12 ± 3 years and a GWP of 72 over 20 years, 25 over 100 years, and 7.6 over 500 years. The decrease in GWP at longer times is because methane is degraded to water and CO2 through chemical reactions in the atmosphere.
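The horizon dependence can be seen in a toy calculation. Assuming (as a simplification) that a gas decays exponentially with lifetime tau, its time-integrated forcing per unit emission over horizon H is a·tau·(1 − exp(−H/tau)). The efficiency value below is illustrative, and a real GWP would also divide by CO2's integrated forcing, which this sketch does not model:

```python
import math

def integrated_forcing(a, tau, horizon):
    """Time-integrated forcing of a 1 kg pulse of a gas with radiative
    efficiency a and exponential lifetime tau (years), over the given
    horizon (years): a * tau * (1 - exp(-horizon / tau))."""
    return a * tau * (1.0 - math.exp(-horizon / tau))

# Illustrative efficiency a = 1; lifetime 12 years as in the text.
short = integrated_forcing(1.0, 12.0, 20)
long_ = integrated_forcing(1.0, 12.0, 500)
# The integral saturates near a * tau once the horizon far exceeds the
# lifetime, so a short-lived gas gains almost nothing after ~100 years
# while CO2's (longer-lived) denominator keeps growing -- hence
# methane's GWP falls from 72 (20-yr) to 7.6 (500-yr).
print(round(short, 2), round(long_, 2))
```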

Examples of the atmospheric lifetime and GWP relative to CO2 for several greenhouse gases are given in the following table:[21]

Atmospheric lifetime and GWP relative to CO2 at different time horizons for various greenhouse gases.

Gas name              Chemical  Lifetime   GWP 20-yr  GWP 100-yr  GWP 500-yr
                      formula   (years)
Carbon dioxide        CO2       See above  1          1           1
Methane               CH4       12         72         25          7.6
Nitrous oxide         N2O       114        289        298         153
CFC-12                CCl2F2    100        11 000     10 900      5 200
HCFC-22               CHClF2    12         5 160      1 810       549
Tetrafluoromethane    CF4       50 000     5 210      7 390       11 200
Hexafluoroethane      C2F6      10 000     8 630      12 200      18 200
Sulfur hexafluoride   SF6       3 200      16 300     22 800      32 600
Nitrogen trifluoride  NF3       740        12 300     17 200      20 700
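The GWP values in the table are what make cross-gas comparisons possible: multiplying a mass of emitted gas by its GWP gives the mass of CO2 with the same warming effect over that horizon. A small sketch using the 100-year column:

```python
# 100-year GWPs taken from the table above.
GWP_100 = {"CO2": 1, "CH4": 25, "N2O": 298, "SF6": 22800}

def co2_equivalent(mass_tonnes, gas):
    """CO2-equivalent mass over a 100-year horizon."""
    return mass_tonnes * GWP_100[gas]

# One tonne of SF6 has the same 100-year warming effect as
# 22 800 tonnes of CO2.
print(co2_equivalent(1, "SF6"))  # 22800
```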

The use of CFC-12 (except for some essential uses) has been phased out due to its ozone-depleting properties.[36] The phasing-out of less active HCFC compounds will be completed in 2030.[37]

Natural and anthropogenic sources

Top: Increasing atmospheric carbon dioxide levels as measured in the atmosphere and reflected in ice cores. Bottom: The amount of net carbon increase in the atmosphere, compared to carbon emissions from burning fossil fuel.

This diagram shows a simplified representation of the contemporary global carbon cycle. Changes are measured in gigatons of carbon per year (GtC/y). Canadell et al. (2007) estimated the growth rate of global average atmospheric CO2 for 2000–2006 as 1.93 parts per million per year (4.1 petagrams of carbon per year).[38] Image credit: U.S. Department of Energy Genomic Science program[39]

Aside from purely human-produced synthetic halocarbons, most greenhouse gases have both natural and human-caused sources. During the pre-industrial Holocene, concentrations of existing gases were roughly constant. In the industrial era, human activities have added greenhouse gases to the atmosphere, mainly through the burning of fossil fuels and clearing of forests.[40][41]

The 2007 Fourth Assessment Report compiled by the IPCC (AR4) noted that "changes in atmospheric concentrations of greenhouse gases and aerosols, land cover and solar radiation alter the energy balance of the climate system", and concluded that increases in anthropogenic greenhouse gas concentrations are very likely to have caused most of the increases in global average temperatures since the mid-20th century.[42] In AR4, "most of" is defined as more than 50%.

Abbreviations used in the two tables below: ppm = parts-per-million; ppb = parts-per-billion; ppt = parts-per-trillion; W/m2 = watts per square metre
Current greenhouse gas concentrations[5]

Gas                      Pre-1750            Recent                   Absolute increase    Percentage increase  Increased radiative
                         tropospheric        tropospheric             since 1750           since 1750           forcing (W/m2)[45]
                         concentration[43]   concentration[44]
Carbon dioxide (CO2)     280 ppm[46]         395.4 ppm[47]            115.4 ppm            41.2%                1.88
Methane (CH4)            700 ppb[48]         1893 ppb / 1762 ppb[49]  1193 ppb / 1062 ppb  170.4% / 151.7%      0.49
Nitrous oxide (N2O)      270 ppb[45][50]     326 ppb / 324 ppb[49]    56 ppb / 54 ppb      20.7% / 20.0%        0.17
Tropospheric ozone (O3)  237 ppb[43]         337 ppb[43]              100 ppb              42%                  0.4[51]
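The "percentage increase" column follows directly from the two concentration columns; a quick check of the CO2 row:

```python
def pct_increase(pre_1750, recent):
    """Percentage increase since 1750: 100 * (recent - pre) / pre."""
    return 100.0 * (recent - pre_1750) / pre_1750

# CO2 row of the table: 280 ppm pre-1750, 395.4 ppm recent.
print(round(pct_increase(280.0, 395.4), 1))  # 41.2
```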
Relevant to radiative forcing and/or ozone depletion; all of the following have no natural sources and hence zero pre-industrial amounts[5]

Gas                                      Recent tropospheric      Increased radiative
                                         concentration            forcing (W/m2)
CFC-11 (trichlorofluoromethane) (CCl3F)  236 ppt / 234 ppt        0.061
CFC-12 (CCl2F2)                          527 ppt / 527 ppt        0.169
CFC-113 (Cl2FC-CClF2)                    74 ppt / 74 ppt          0.022
HCFC-22 (CHClF2)                         231 ppt / 210 ppt        0.046
HCFC-141b (CH3CCl2F)                     24 ppt / 21 ppt          0.0036
HCFC-142b (CH3CClF2)                     23 ppt / 21 ppt          0.0042
Halon 1211 (CBrClF2)                     4.1 ppt / 4.0 ppt        0.0012
Halon 1301 (CBrClF3)                     3.3 ppt / 3.3 ppt        0.001
HFC-134a (CH2FCF3)                       75 ppt / 64 ppt          0.0108
Carbon tetrachloride (CCl4)              85 ppt / 83 ppt          0.0143
Sulfur hexafluoride (SF6)                7.79 ppt / 7.39 ppt[52]  0.0043
Other halocarbons                        Varies by substance      0.02 (collectively)
Halocarbons in total                                              0.3574
400,000 years of ice core data

Ice cores provide evidence for greenhouse gas concentration variations over the past 800,000 years (see the following section). Both CO2 and CH4 vary between glacial and interglacial phases, and concentrations of these gases correlate strongly with temperature. Direct data do not exist for periods earlier than those represented in the ice core record, a record that indicates CO2 mole fractions stayed within a range of 180 ppm to 280 ppm throughout the last 800,000 years, until the increase of the last 250 years. However, various proxies and modeling suggest larger variations in past epochs; 500 million years ago CO2 levels were likely 10 times higher than now.[53] Indeed, higher CO2 concentrations are thought to have prevailed throughout most of the Phanerozoic eon, with concentrations four to six times current concentrations during the Mesozoic era, and ten to fifteen times current concentrations during the early Palaeozoic era until the middle of the Devonian period, about 400 Ma.[54][55][56] The spread of land plants is thought to have reduced CO2 concentrations during the late Devonian, and plant activities as both sources and sinks of CO2 have since been important in providing stabilising feedbacks.[57] Earlier still, a 200-million-year period of intermittent, widespread glaciation extending close to the equator (Snowball Earth) appears to have been ended suddenly, about 550 Ma, by a colossal volcanic outgassing that raised the CO2 concentration of the atmosphere abruptly to 12%, about 350 times modern levels, causing extreme greenhouse conditions and carbonate deposition as limestone at the rate of about 1 mm per day.[58] This episode marked the close of the Precambrian eon, and was succeeded by the generally warmer conditions of the Phanerozoic, during which multicellular animal and plant life evolved. No volcanic carbon dioxide emission of comparable scale has occurred since. In the modern era, emissions to the atmosphere from volcanoes are only about 1% of emissions from human sources.[58][59][60]

Ice cores

Measurements from Antarctic ice cores show that before industrial emissions started, atmospheric CO2 mole fractions were about 280 parts per million (ppm), and stayed between 260 and 280 during the preceding ten thousand years.[61] Carbon dioxide mole fractions in the atmosphere have gone up by approximately 35 percent since the 1900s, rising from 280 parts per million by volume to 387 parts per million in 2009. One study using evidence from stomata of fossilized leaves suggests greater variability, with carbon dioxide mole fractions above 300 ppm during the period seven to ten thousand years ago,[62] though others have argued that these findings more likely reflect calibration or contamination problems rather than actual CO2 variability.[63][64] Because of the way air is trapped in ice (pores in the ice close off slowly to form bubbles deep within the firn) and the time period represented in each ice sample analyzed, these figures represent averages of atmospheric concentrations of up to a few centuries rather than annual or decadal levels.

Changes since the Industrial Revolution

Recent year-to-year increase of atmospheric CO2.

Major greenhouse gas trends.

Since the beginning of the Industrial Revolution, the concentrations of most of the greenhouse gases have increased. For example, the mole fraction of carbon dioxide has increased from 280 ppm to 380 ppm, a rise of about 36%, or 100 ppm over modern pre-industrial levels. The first 50 ppm increase took place in about 200 years, from the start of the Industrial Revolution to around 1973;[citation needed] the next 50 ppm increase took place in about 33 years, from 1973 to 2006.[65]

Recent data also shows that the concentration is increasing at a higher rate. In the 1960s, the average annual increase was only 37% of what it was in 2000 through 2007.[66]
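The acceleration described above can be made concrete: the same 50 ppm rise, compressed from roughly 200 years into 33, means the average rate of increase rose about sixfold:

```python
def avg_rate_ppm_per_year(ppm_rise, years):
    """Average annual CO2 increase over a period, in ppm/year."""
    return ppm_rise / years

first = avg_rate_ppm_per_year(50, 200)   # Industrial Revolution to ~1973
second = avg_rate_ppm_per_year(50, 33)   # 1973 to 2006
# Prints the two rates and their ratio (about six).
print(round(first, 2), round(second, 2), round(second / first, 1))
```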

Today, the stock of carbon in the atmosphere increases by more than 3 million tonnes per annum (0.04%) compared with the existing stock.[clarification needed] This increase is the result of human activities: burning fossil fuels, and deforestation and forest degradation in tropical and boreal regions.[67]

The other greenhouse gases produced from human activity show similar increases in both amount and rate of increase. Many observations are available online in a variety of Atmospheric Chemistry Observational Databases.

Anthropogenic greenhouse gases

This graph shows changes in the annual greenhouse gas index (AGGI) between 1979 and 2011.[68] The AGGI measures the levels of greenhouse gases in the atmosphere based on their ability to cause changes in the Earth's climate.[68]
This bar graph shows global greenhouse gas emissions by sector from 1990 to 2005, measured in carbon dioxide equivalents.[69]
Modern global anthropogenic carbon emissions.

Since about 1750 human activity has increased the concentration of carbon dioxide and other greenhouse gases. Measured atmospheric concentrations of carbon dioxide are currently 100 ppm higher than pre-industrial levels.[70] Natural sources of carbon dioxide are more than 20 times greater than sources due to human activity,[71] but over periods longer than a few years natural sources are closely balanced by natural sinks, mainly photosynthesis of carbon compounds by plants and marine plankton. As a result of this balance, the atmospheric mole fraction of carbon dioxide remained between 260 and 280 parts per million for the 10,000 years between the end of the last glacial maximum and the start of the industrial era.[72]

It is likely that anthropogenic (i.e., human-induced) warming, such as that due to elevated greenhouse gas levels, has had a discernible influence on many physical and biological systems.[73] Future warming is projected to have a range of impacts, including sea level rise,[74] increased frequencies and severities of some extreme weather events,[74] loss of biodiversity,[75] and regional changes in agricultural productivity.[75]

The main sources of greenhouse gases due to human activity are:
  • burning of fossil fuels and deforestation, leading to higher carbon dioxide concentrations in the air. Land use change (mainly deforestation in the tropics) accounts for up to one third of total anthropogenic CO2 emissions.[72]
  • livestock enteric fermentation and manure management,[76] paddy rice farming, land use and wetland changes, pipeline losses, and covered vented landfill emissions, leading to higher methane atmospheric concentrations. Many of the newer style fully vented septic systems that enhance and target the fermentation process are also sources of atmospheric methane.
  • use of chlorofluorocarbons (CFCs) in refrigeration systems, and use of CFCs and halons in fire suppression systems and manufacturing processes.
  • agricultural activities, including the use of fertilizers, that lead to higher nitrous oxide (N2O) concentrations.
The seven sources of CO2 from fossil fuel combustion are (with percentage contributions for 2000–2004):[77]

Seven main fossil fuel combustion sources  Contribution (%)
Liquid fuels (e.g., gasoline, fuel oil)    36%
Solid fuels (e.g., coal)                   35%
Gaseous fuels (e.g., natural gas)          20%
Cement production                          3%
Flaring gas industrially and at wells      < 1%
Non-fuel hydrocarbons                      < 1%
"International bunker fuels" of transport
not included in national inventories[78]   4%
Carbon dioxide, methane, nitrous oxide (N2O) and three groups of fluorinated gases (sulfur hexafluoride (SF6), hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs)) are the major anthropogenic greenhouse gases,[79]:147[80] and are regulated under the Kyoto Protocol international treaty, which came into force in 2005.[81] Emissions limitations specified in the Kyoto Protocol expired in 2012.[81] The Cancún agreement, agreed in 2010, includes voluntary pledges made by 76 countries to control emissions.[82] At the time of the agreement, these 76 countries were collectively responsible for 85% of annual global emissions.[82]

Although CFCs are greenhouse gases, they are regulated by the Montreal Protocol, which was motivated by CFCs' contribution to ozone depletion rather than by their contribution to global warming. Note that ozone depletion has only a minor role in greenhouse warming, though the two processes are often confused in the media.

Sectors

Tourism
According to UNEP, global tourism is closely linked to climate change. Tourism is a significant contributor to the increasing concentrations of greenhouse gases in the atmosphere. Tourism accounts for about 50% of traffic movements. Rapidly expanding air traffic contributes about 2.5% of the production of CO2. The number of international travelers is expected to increase from 594 million in 1996 to 1.6 billion by 2020, adding greatly to the problem unless steps are taken to reduce emissions.[83]

Role of water vapor

Increasing water vapor in the stratosphere at Boulder, Colorado.

Water vapor accounts for the largest percentage of the greenhouse effect, between 36% and 66% for clear-sky conditions and between 66% and 85% when including clouds.[18] Water vapor concentrations fluctuate regionally, but human activity does not significantly affect water vapor concentrations except at local scales, such as near irrigated fields. The atmospheric concentration of water vapor is highly variable and depends largely on temperature, from less than 0.01% in extremely cold regions up to 3% by mass in saturated air at about 32 °C.[84]

The average residence time of a water molecule in the atmosphere is only about nine days, compared to years or centuries for other greenhouse gases such as CH4 and CO2.[85] Thus, water vapor responds to and amplifies effects of the other greenhouse gases. The Clausius–Clapeyron relation establishes that more water vapor will be present per unit volume at elevated temperatures. This and other basic principles indicate that warming associated with increased concentrations of the other greenhouse gases will also increase the concentration of water vapor (assuming that the relative humidity remains approximately constant; modeling and observational studies find that this is indeed so). Because water vapor is a greenhouse gas, this results in further warming: a "positive feedback" that amplifies the original warming. Eventually, other Earth processes offset these positive feedbacks, stabilizing the global temperature at a new equilibrium and preventing the loss of Earth's water through a Venus-like runaway greenhouse effect.[86]
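The Clausius–Clapeyron behavior can be illustrated with the empirical Magnus approximation for saturation vapor pressure (an assumed stand-in for the exact relation; the coefficients are the commonly quoted Magnus fit, not taken from this article):

```python
import math

def saturation_vapor_pressure_hpa(temp_c):
    """Saturation vapor pressure over liquid water (hPa) via the
    Magnus approximation: 6.112 * exp(17.62*T / (243.12 + T))."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Each degree of warming near 20 C raises the amount of water vapor
# that saturated air can hold by roughly 6-7% -- the amplifying
# feedback described above.
ratio = saturation_vapor_pressure_hpa(21.0) / saturation_vapor_pressure_hpa(20.0)
print(round((ratio - 1.0) * 100, 1))
```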

Direct greenhouse gas emissions

Between 1970 and 2004, GHG emissions (measured in CO2-equivalent)[87] increased at an average rate of 1.6% per year, with CO2 emissions from the use of fossil fuels growing at a rate of 1.9% per year.[88][89] Total anthropogenic emissions at the end of 2009 were estimated at 49.5 gigatonnes CO2-equivalent.[90]:15 These emissions include CO2 from fossil fuel use and from land use, as well as emissions of methane, nitrous oxide and other GHGs covered by the Kyoto Protocol.

At present, the primary source of CO2 emissions is the burning of coal, natural gas, and petroleum for electricity and heat.[91]

Regional and national attribution of emissions

This figure shows the relative fraction of man-made greenhouse gases coming from each of eight categories of sources, as estimated by the Emission Database for Global Atmospheric Research version 3.2, fast track 2000 project. These values are intended to provide a snapshot of global annual greenhouse gas emissions in the year 2000. The top panel shows the sum over all man-made greenhouse gases, weighted by their global warming potential over the next 100 years. This consists of 72% carbon dioxide, 18% methane, 8% nitrous oxide and 1% other gases. Lower panels show the comparable information for each of these three primary greenhouse gases, with the same coloring of sectors as used in the top chart. Segments with less than 1% fraction are not labeled.[92]

There are several different ways of measuring GHG emissions, for example, see World Bank (2010)[93]:362 for tables of national emissions data. Some variables that have been reported[94] include:
  • Definition of measurement boundaries: Emissions can be attributed geographically, to the area where they were emitted (the territory principle), or by the activity principle to the territory that produced the emissions. These two principles result in different totals when measuring, for example, electricity importation from one country to another, or emissions at an international airport.
  • Time horizon of different GHGs: Contribution of a given GHG is reported as a CO2 equivalent. The calculation to determine this takes into account how long that gas remains in the atmosphere. This is not always known accurately, and calculations must be regularly updated to reflect new information.
  • What sectors are included in the calculation (e.g., energy industries, industrial processes, agriculture etc.): There is often a conflict between transparency and availability of data.
  • The measurement protocol itself: This may be via direct measurement or estimation. The four main methods are the emission factor-based method, mass balance method, predictive emissions monitoring systems, and continuous emissions monitoring systems. These methods differ in accuracy, cost, and usability.
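Of the four methods listed, the emission factor-based method is the simplest to sketch: emissions are estimated as activity data times a per-unit emission factor (the fuel quantity and factor below are hypothetical, for illustration only):

```python
def estimated_emissions(activity, emission_factor):
    """Emission factor-based method: emissions = activity data
    (e.g. tonnes of fuel burned) * emission factor (tonnes CO2
    per unit of activity)."""
    return activity * emission_factor

# Hypothetical inventory line: 1000 t of coal at an assumed factor
# of 2.4 t CO2 per tonne of fuel.
print(estimated_emissions(1000, 2.4))  # 2400.0
```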
These different measures are sometimes used by different countries to assert various policy/ethical positions on climate change (Banuri et al., 1996, p. 94).[95] This use of different measures leads to a lack of comparability, which is problematic when monitoring progress towards targets. There are arguments for the adoption of a common measurement tool, or at least the development of communication between different tools.[94]

Emissions may be measured over long time periods. This measurement type is called historical or cumulative emissions. Cumulative emissions give some indication of who is responsible for the build-up in the atmospheric concentration of GHGs (IEA, 2007, p. 199).[96]

The national accounts balance is positively related to carbon emissions. The national accounts balance shows the difference between exports and imports. For many richer nations, such as the United States, the accounts balance is negative because more goods are imported than exported. This is mostly because it is cheaper to produce goods outside of developed countries, which has led the economies of developed countries to become increasingly dependent on services rather than goods. A positive accounts balance means that more production is occurring in a country, so more working factories increase carbon emission levels (Holtz-Eakin, 1995, pp. 85–101).[97]

Emissions may also be measured across shorter time periods. Emissions changes may, for example, be measured against a base year of 1990. 1990 was used in the United Nations Framework Convention on Climate Change (UNFCCC) as the base year for emissions, and is also used in the Kyoto Protocol (some gases are also measured from the year 1995).[79]:146,149 A country's emissions may also be reported as a proportion of global emissions for a particular year.

Another measurement is of per capita emissions. This divides a country's total annual emissions by its mid-year population.[93]:370 Per capita emissions may be based on historical or annual emissions (Banuri et al., 1996, pp. 106–107).[95]

Greenhouse gas intensity and land-use change

Greenhouse gas intensity in the year 2000, including land-use change.
Cumulative energy-related CO2 emissions between the years 1850–2005 grouped into low-income, middle-income, high-income, the EU-15, and the OECD countries.
Cumulative energy-related CO2 emissions between the years 1850–2005 for individual countries.
Map of cumulative per capita anthropogenic atmospheric CO2 emissions by country. Cumulative emissions include land-use change, and are measured between the years 1950 and 2000.
Regional trends in annual CO2 emissions from fuel combustion between 1971 and 2009.
Regional trends in annual per capita CO2 emissions from fuel combustion between 1971 and 2009.

The first figure shown opposite is based on data from the World Resources Institute, and shows a measurement of GHG emissions for the year 2000 according to greenhouse gas intensity and land-use change. Herzog et al. (2006, p. 3) defined greenhouse gas intensity as GHG emissions divided by economic output.[98] GHG intensities are subject to uncertainty over whether they are calculated using market exchange rates (MER) or purchasing power parity (PPP) (Banuri et al., 1996, p. 96).[95] Calculations based on MER suggest large differences in intensities between developed and developing countries, whereas calculations based on PPP show smaller differences.
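The MER-versus-PPP sensitivity can be made concrete with a small sketch. All numbers below are hypothetical; the point is only that the same emissions divided by two different GDP valuations yield two different intensities, which is why MER-based comparisons show larger developed/developing gaps than PPP-based ones.

```python
# Greenhouse gas intensity = GHG emissions / economic output
# (per the Herzog et al. definition in the text). The intensity
# depends on how output is valued: a developing country's GDP is
# typically smaller in market-exchange-rate (MER) dollars than in
# purchasing-power-parity (PPP) dollars, so its MER intensity is higher.
def intensity(emissions_mt_co2e, gdp_billion_usd):
    """Tonnes CO2e emitted per million dollars of output."""
    return emissions_mt_co2e * 1e6 / (gdp_billion_usd * 1e3)

emissions = 500.0   # Mt CO2e, hypothetical
gdp_mer = 400.0     # billion USD at market exchange rates, hypothetical
gdp_ppp = 1200.0    # billion USD at purchasing power parity, hypothetical

print(intensity(emissions, gdp_mer))  # 1250.0 t CO2e per $1M (MER basis)
print(intensity(emissions, gdp_ppp))  # ~416.7 t CO2e per $1M (PPP basis)
```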

Land-use change, e.g., the clearing of forests for agricultural use, can affect the concentration of GHGs in the atmosphere by altering how much carbon flows out of the atmosphere into carbon sinks.[99] Accounting for land-use change can be understood as an attempt to measure "net" emissions, i.e., gross emissions from all GHG sources minus the removal of emissions from the atmosphere by carbon sinks (Banuri et al., 1996, pp. 92–93).[95]

There are substantial uncertainties in the measurement of net carbon emissions.[100] Additionally, there is controversy over how carbon sinks should be allocated between different regions and over time (Banuri et al., 1996, p. 93).[95] For instance, concentrating on more recent changes in carbon sinks is likely to favour those regions that have deforested earlier, e.g., Europe.

Cumulative and historical emissions

Cumulative anthropogenic (i.e., human-emitted) emissions of CO2 from fossil fuel use are a major cause of global warming,[101] and give some indication of which countries have contributed most to human-induced climate change.[102]:15
Top-5 historic CO2 contributors by region over the years 1800 to 1988 (in %)

Region               Industrial CO2   Total CO2
OECD North America        33.2           29.7
OECD Europe               26.1           16.6
Former USSR               14.1           12.5
China                      5.5            6.0
Eastern Europe             5.5            4.8

The table above is based on Banuri et al. (1996, p. 94).[95] Overall, developed countries accounted for 83.8% of industrial CO2 emissions over this time period, and 67.8% of total CO2 emissions. Developing countries accounted for 16.2% of industrial CO2 emissions over this time period, and 32.2% of total CO2 emissions. The estimate of total CO2 emissions includes biotic carbon emissions, mainly from deforestation. Banuri et al. (1996, p. 94)[95] calculated per capita cumulative emissions based on then-current population. The ratio in per capita emissions between industrialized countries and developing countries was estimated at more than 10 to 1.

Including biotic emissions brings about the same controversy mentioned earlier regarding carbon sinks and land-use change (Banuri et al., 1996, pp. 93–94).[95] The actual calculation of net emissions is very complex, and is affected by how carbon sinks are allocated between regions and the dynamics of the climate system.

Non-OECD countries accounted for 42% of cumulative energy-related CO2 emissions between 1890–2007.[103]:179–180 Over this time period, the US accounted for 28% of emissions; the EU, 23%; Russia, 11%; China, 9%; other OECD countries, 5%; Japan, 4%; India, 3%; and the rest of the world, 18%.[103]:179–180

Changes since a particular base year

Between 1970 and 2004, global growth in annual CO2 emissions was driven by North America, Asia, and the Middle East.[104] The sharp acceleration in CO2 emissions since 2000, to an increase of more than 3% per year (more than 2 ppm per year) from 1.1% per year during the 1990s, is attributable to the lapse of formerly declining trends in the carbon intensity of both developing and developed nations. China was responsible for most of the global growth in emissions during this period. Localised plummeting emissions associated with the collapse of the Soviet Union have been followed by slow emissions growth in this region due to more efficient energy use, made necessary by the increasing proportion of it that is exported.[77] In comparison, methane has not increased appreciably, and N2O has increased by only 0.25% per year.

Using different base years for measuring emissions has an effect on estimates of national contributions to global warming.[102]:17–18[105] This can be calculated by dividing a country's highest contribution to global warming starting from a particular base year, by that country's minimum contribution to global warming starting from a particular base year. Choosing between different base years of 1750, 1900, 1950, and 1990 has a significant effect for most countries.[102]:17–18 Within the G8 group of countries, it is most significant for the UK, France and Germany. These countries have a long history of CO2 emissions (see the section on Cumulative and historical emissions).
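The max-over-min calculation described above can be sketched directly. The contribution shares below are hypothetical, invented only to show the arithmetic; a country whose emissions began early shows a large spread across base years, so its ratio is high.

```python
# Sensitivity of a country's estimated contribution to the choice of
# base year, computed as the text describes: the country's maximum
# contribution share (over candidate base years) divided by its
# minimum share. Shares are hypothetical, for illustration only.
def base_year_sensitivity(shares_by_base_year):
    """Ratio of max to min contribution share across base years."""
    values = shares_by_base_year.values()
    return max(values) / min(values)

# Hypothetical shares (%) of cumulative global emissions for an
# early-industrializing country, by base year:
early_industrializer = {1750: 9.0, 1900: 7.0, 1950: 5.0, 1990: 3.0}
print(base_year_sensitivity(early_industrializer))  # 3.0
```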

Annual emissions

Per capita anthropogenic greenhouse gas emissions by country for the year 2000 including land-use change.

Annual per capita emissions in the industrialized countries are typically as much as ten times the average in developing countries.[79]:144 Due to China's fast economic development, its annual per capita emissions are quickly approaching the levels of those in the Annex I group of the Kyoto Protocol (i.e., the developed countries excluding the USA).[106] Other countries with fast growing emissions are South Korea, Iran, and Australia. On the other hand, annual per capita emissions of the EU-15 and the USA are gradually decreasing over time.[106] Emissions in Russia and Ukraine have decreased fastest since 1990 due to economic restructuring in these countries.[107]

Energy statistics for fast growing economies are less accurate than those for the industrialized countries. For China's annual emissions in 2008, the Netherlands Environmental Assessment Agency estimated an uncertainty range of about 10%.[106]

The GHG footprint, or greenhouse gas footprint, refers to the amount of GHG that are emitted during the creation of products or services. It is more comprehensive than the commonly used carbon footprint, which measures only carbon dioxide, one of many greenhouse gases.

Top emitters

Bar graph of annual per capita CO2 emissions from fuel combustion for 140 countries in 2009.
Bar graph of cumulative energy-related per capita CO2 emissions between 1850–2008 for 185 countries.

Annual

In 2009, the top ten emitting countries accounted for about two-thirds of the world's annual energy-related CO2 emissions.[108]
Top-10 annual energy-related CO2 emitters for the year 2009[109]

Country                  % of global annual emissions   Tonnes of GHG per capita
People's Rep. of China              23.6                        5.13
United States                       17.9                       16.9
India                                5.5                        1.37
Russian Federation                   5.3                       10.8
Japan                                3.8                        8.6
Germany                              2.6                        9.2
Islamic Rep. of Iran                 1.8                        7.3
Canada                               1.8                       15.4
Korea                                1.8                       10.6
United Kingdom                       1.6                        7.5

Cumulative

Top-10 cumulative energy-related CO2 emitters between 1850–2008[110]

Country              % of world total   Metric tonnes CO2 per person
United States             28.5                1,132.7
China                      9.36                  85.4
Russian Federation         7.95                 677.2
Germany                    6.78                 998.9
United Kingdom             5.73               1,127.8
Japan                      3.88                 367
France                     2.73                 514.9
India                      2.52                  26.7
Canada                     2.17                 789.2
Ukraine                    2.13                 556.4

Embedded emissions

One way of attributing greenhouse gas (GHG) emissions is to measure the embedded emissions (also referred to as "embodied emissions") of goods that are being consumed. Emissions are usually measured according to production, rather than consumption.[111] For example, in the main international treaty on climate change (the UNFCCC), countries report on emissions produced within their borders, e.g., the emissions produced from burning fossil fuels.[103]:179[112]:1 Under a production-based accounting of emissions, embedded emissions on imported goods are attributed to the exporting, rather than the importing, country. Under a consumption-based accounting of emissions, embedded emissions on imported goods are attributed to the importing, rather than the exporting, country.
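The difference between the two accounting conventions is simple arithmetic: a country's consumption-based total equals its production-based total minus the emissions embedded in its exports plus those embedded in its imports. The figures below are hypothetical, chosen only to show the two directions in which the adjustment can move a total.

```python
# Production-based vs consumption-based emission accounting (sketch).
#   consumption = production - embedded_in_exports + embedded_in_imports
def consumption_based(production, embedded_exports, embedded_imports):
    """Reassign emissions embedded in traded goods to the importer."""
    return production - embedded_exports + embedded_imports

# A net exporter of embodied carbon vs a net importer (hypothetical Mt CO2):
exporter = consumption_based(production=8000, embedded_exports=2000, embedded_imports=500)
importer = consumption_based(production=500, embedded_exports=50, embedded_imports=200)
print(exporter)  # 6500: below its production-based total
print(importer)  # 650: above its production-based total
```

This is the sense in which, as the Carbon Trust figures in the text indicate, a country's consumption emissions can run tens of percent above its production emissions.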

Davis and Caldeira (2010)[112]:4 found that a substantial proportion of CO2 emissions are traded internationally. The net effect of trade was to export emissions from China and other emerging markets to consumers in the US, Japan, and Western Europe. Based on annual emissions data from the year 2004, and on a per-capita consumption basis, the top-5 emitting countries were found to be (in tCO2 per person, per year): Luxembourg (34.7), the US (22.0), Singapore (20.2), Australia (16.7), and Canada (16.6).[112]:5 Carbon Trust research revealed that approximately 25% of all CO2 emissions from human activities 'flow' (i.e., are imported or exported) from one country to another. Major developed economies were found to be typically net importers of embodied carbon emissions, with UK consumption emissions 34% higher than production emissions, and Germany (29%), Japan (19%), and the USA (13%) also significant net importers of embodied emissions.[113]

Effect of policy

Governments have taken action to reduce GHG emissions (climate change mitigation). Assessments of policy effectiveness have included work by the Intergovernmental Panel on Climate Change,[114] International Energy Agency,[115][116] and United Nations Environment Programme.[117] Policies implemented by governments have included[118][119][120] national and regional targets to reduce emissions, promoting energy efficiency, and support for renewable energy.

Countries and regions listed in Annex I of the United Nations Framework Convention on Climate Change (UNFCCC) (i.e., the OECD and former planned economies of the Soviet Union) are required to submit periodic assessments to the UNFCCC of actions they are taking to address climate change.[120]:3 Analysis by the UNFCCC (2011)[120]:8 suggested that policies and measures undertaken by Annex I Parties may have produced emission savings of 1.5 thousand Tg CO2-eq in the year 2010, with most savings made in the energy sector. This projected saving is measured against a hypothetical "baseline" of Annex I emissions, i.e., projected Annex I emissions in the absence of policies and measures. The total projected Annex I saving of 1.5 thousand Tg CO2-eq does not include emissions savings in seven of the Annex I Parties.[120]:8

Projections

A wide range of projections of future GHG emissions have been produced.[121] Rogner et al. (2007)[122] assessed the scientific literature on GHG projections and concluded that unless energy policies changed substantially, the world would continue to depend on fossil fuels until 2025–2030, with projections suggesting that more than 80% of the world's energy will come from fossil fuels. This conclusion was based on "much evidence" and "high agreement" in the literature.[88] Projected annual energy-related CO2 emissions in 2030 were 40–110% higher than in 2000, with two-thirds of the increase originating in developing countries.[88] Projected annual per capita emissions in developing country regions nonetheless remained substantially lower (2.8–5.1 tonnes CO2) than those in developed country regions (9.6–15.1 tonnes CO2).[123] Projections consistently showed an increase in annual world GHG emissions (the "Kyoto" gases,[124] measured in CO2-equivalent) of 25–90% by 2030, compared to 2000.[88]

Relative CO2 emission from various fuels

One liter of gasoline, when used as a fuel, produces 2.32 kg (about 1,300 liters or 1.3 cubic meters) of carbon dioxide, a greenhouse gas. One US gallon produces 19.4 lb (1,291.5 gallons or 172.65 cubic feet).[125][126][127]
Mass of carbon dioxide emitted per quantity of energy for various fuels[128]

Fuel name                   CO2 emitted (lbs/10^6 Btu)   CO2 emitted (g/MJ)
Natural gas                          117                      50.30
Liquefied petroleum gas              139                      59.76
Propane                              139                      59.76
Aviation gasoline                    153                      65.78
Automobile gasoline                  156                      67.07
Kerosene                             159                      68.36
Fuel oil                             161                      69.22
Tires/tire-derived fuel              189                      81.26
Wood and wood waste                  195                      83.83
Coal (bituminous)                    205                      88.13
Coal (sub-bituminous)                213                      91.57
Coal (lignite)                       215                      92.43
Petroleum coke                       225                      96.73
Tar-sand bitumen               [citation needed]        [citation needed]
Coal (anthracite)                    227                      97.59
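As a cross-check, the table's two columns express the same quantity in different units and can be converted into one another with standard factors (1 lb = 453.592 g; 1 Btu = 1055.06 J). This sketch shows the arithmetic only; it is not the source's own method.

```python
# Convert an emission factor from lbs CO2 per 10^6 Btu to g CO2 per MJ.
# 1 lb = 453.592 g and 10^6 Btu = 1055.06 MJ, so the conversion
# factor is 453.592 / 1055.06 ≈ 0.430.
LB_TO_G = 453.592
MMBTU_TO_MJ = 1055.06  # 1 Btu = 1055.06 J

def lbs_per_mmbtu_to_g_per_mj(x):
    return x * LB_TO_G / MMBTU_TO_MJ

print(round(lbs_per_mmbtu_to_g_per_mj(117), 2))  # natural gas: 50.3
print(round(lbs_per_mmbtu_to_g_per_mj(227), 2))  # anthracite: 97.59
```

Applied to each row, this reproduces the g/MJ column to within rounding, confirming the two columns are internally consistent.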

Life-cycle greenhouse-gas emissions of energy sources

A 2011 IPCC literature review of the CO2 emissions of numerous energy sources found the following 50th-percentile values across all total life-cycle emissions studies.[129]
Lifecycle greenhouse gas emissions by electricity source.

Technology      Description                                         50th percentile (g CO2/kWhe)
Hydroelectric   reservoir                                                      4
Wind            onshore                                                       12
Nuclear         various generation II reactor types                           16
Biomass         various                                                       18
Solar thermal   parabolic trough                                              22
Geothermal      hot dry rock                                                  45
Solar PV        polycrystalline silicon                                       46
Natural gas     various combined cycle turbines without scrubbing            469
Coal            various generator types without scrubbing                   1001
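One common use of such per-source figures is estimating the average lifecycle intensity of an electricity mix by weighting each source's intensity by its share of generation. The intensities below are the 50th-percentile values from the table; the generation shares are invented for illustration.

```python
# Generation-weighted average lifecycle intensity of an electricity mix.
# Intensities (g CO2/kWh) are the table's 50th-percentile values;
# the mix shares are hypothetical.
INTENSITY = {"coal": 1001, "gas": 469, "nuclear": 16, "wind": 12, "hydro": 4}

def mix_intensity(shares):
    """Weighted average intensity; shares must sum to 1."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return sum(share * INTENSITY[src] for src, share in shares.items())

mix = {"coal": 0.4, "gas": 0.3, "nuclear": 0.2, "wind": 0.05, "hydro": 0.05}
print(mix_intensity(mix))  # ~545.1 g CO2 per kWh for this hypothetical mix
```

The sketch makes the table's main point quantitative: shifting generation share from coal and gas toward the low-carbon rows reduces the mix's intensity by orders of magnitude more than any choice among the low-carbon sources themselves.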

Removal from the atmosphere ("sinks")

Natural processes

Greenhouse gases can be removed from the atmosphere by various processes, as a consequence of:
  • a physical change (condensation and precipitation remove water vapor from the atmosphere).
  • a chemical reaction within the atmosphere. For example, methane is oxidized by reaction with the naturally occurring hydroxyl radical, OH·, and degraded to CO2 and water vapor (CO2 from the oxidation of methane is not included in methane's global warming potential). Other chemical reactions include solution and solid-phase chemistry occurring in atmospheric aerosols.
  • a physical exchange between the atmosphere and the other compartments of the planet. An example is the mixing of atmospheric gases into the oceans.
  • a chemical change at the interface between the atmosphere and the other compartments of the planet. This is the case for CO2, which is reduced by photosynthesis of plants, and which, after dissolving in the oceans, reacts to form carbonic acid and bicarbonate and carbonate ions (see ocean acidification).
  • a photochemical change. Halocarbons are dissociated by UV light, releasing Cl· and F· as free radicals in the stratosphere with harmful effects on ozone (halocarbons are generally too stable to disappear by chemical reaction in the atmosphere).

Negative emissions

A number of technologies remove greenhouse gas emissions from the atmosphere. The most widely analysed are those that remove carbon dioxide, either to geologic formations, as in bio-energy with carbon capture and storage[130][131][132] and carbon dioxide air capture,[132] or to the soil, as in the case of biochar.[132] The IPCC has pointed out that many long-term climate scenario models require large-scale man-made negative emissions to avoid serious climate change.[133]

History of scientific research

In the late 19th century, scientists experimentally discovered that N2 and O2 do not absorb infrared radiation (called, at that time, "dark radiation"), whereas water (both as true vapor and condensed in the form of microscopic droplets suspended in clouds), CO2, and other poly-atomic gaseous molecules do absorb infrared radiation. In the early 20th century, researchers realized that greenhouse gases in the atmosphere made the Earth's overall temperature higher than it would be without them. During the late 20th century, a scientific consensus evolved that increasing concentrations of greenhouse gases in the atmosphere cause a substantial rise in global temperatures and changes to other parts of the climate system,[134] with consequences for the environment and for human health.

Plant communities in Holy Land can cope with climate change of 'biblical' dimensions

Oct 09, 2014
Read more at: http://phys.org/news/2014-10-holy-cope-climate-biblical-dimensions.html#jCp

An international research team comprised of German, Israeli and American ecologists, including Dr. Claus Holzapfel, Dept. of Biological Sciences, Rutgers University-Newark, has conducted unique long-term experiments in Israel to test predictions of climate change, and has concluded that plant communities in the Holy Land can cope with climate change of "biblical" dimensions. Their findings appear in the current issue of Nature Communications.

When taking global climate change into account, many scientists predict dire ecological consequences around the world. The Middle East in particular has been thought to be vulnerable, since east Mediterranean ecosystems not only are hotspots of biodiversity, but also contain many of the wild ancestors of important crop plants and therefore harbor a rich genetic reservoir for them.

In a region with the lowest per-capita water availability, rainfall is predicted to decrease further in the near future, and could spell extreme hardship for the function of these unique ecosystems and possibly endanger the survival of important genetic resources.

For nine years the research team of German, Israeli, and American ecologists subjected extremely species-rich plant communities to experimental drought designed to correspond to predicted future climate scenarios. For this, the study used four different ecosystems aligned along a steep, natural aridity gradient that ranges from extreme desert (3–4" annual rainfall) to moist Mediterranean woodland (32").


The recently published study demonstrates that, in contrast to predicted changes, no measurable changes were seen in the vegetation even after nine years of rainfall manipulation. None of the crucial vegetation characteristics (neither species richness and composition, nor density or biomass, a particularly important trait for these ecosystems, which are traditionally used as rangelands) changed appreciably under the rainfall manipulations.

These conclusions were reached regardless of whether the sites were subjected to more or less rain.

"Based on our study, the going hypothesis that all arid regions will react strongly to climate change needs to be amended," stated Dr. Katja Tielbörger (University of Tübingen in Germany), the lead author of the study.

One of the reasons for the high resilience of the ecosystems studied is likely the high natural variability in rainfall for which the region has been known throughout history. The climate scenarios tested included a decrease of rainfall to about 30% of the current values. That amount of rainfall seems to fall within the natural "comfort zone" of wild-growing plants. Archeological sources (and similar descriptions in the Bible) speak of such dramatic variation in climate over the course of centuries.

The team of scientists implemented a novel experimental approach in which irrigation and rain-out shelters were used not only to compare plots with changed climate within a site with un-manipulated controls, but the placing of sites along the steep aridity gradient also allowed testing the long-standing assumption that with climate change, species will track their climate zone and their ranges will simply shift.

Such shifts, commonly assumed by numerous climate-envelope models, have now for the first time been scientifically tested and have not been confirmed.

"Our experiment is likely the most extensive climate change study ever done, because of the number of sites involved, the long duration of experimental manipulations, and the immense species richness", stated Dr. Claus Holzapfel of Rutgers University-Newark, adding: "These facts add to the robustness of our results."

The study serves to decrease the "doomsday" scenario of climate change for the arid Middle East, despite the fact that the conclusions reached by the research team are only applicable to the specific regions studied.
The authors of the study caution that these results should not be used to address global issues of climate change. However, the researchers maintain that their results are important for understanding and countering specific consequences of climate change in the Middle East.


More information: Katja Tielbörger, Mark. C. Bilton, Johannes Metz, Jaime Kigel, Claus Holzapfel, Edwin Lebrija-Trejos, Irit Konsens, Hadas A. Parag, Marcelo Sternberg: Middle-Eastern plant communities tolerate nine years of drought in a large-scale climate change experiment. Nature Communications Oct. 2014 www.nature.com/ncomms/2014/141… 2/pdf/ncomms6102.pdf

Journal reference: Nature Communications

Provided by Rutgers University

Monday, October 6, 2014

Everyone calm down, there is no “bee-pocalypse”





Shawn Regan
July 10, 2013
Original link:  http://qz.com/101585/everyone-calm-down-there-is-no-bee-pocalypse/
The media is abuzz once again with stories about dying bees. According to a new report from the USDA, scientists have been unable to pinpoint the cause of colony collapse disorder (CCD), the mysterious affliction causing honey bees to disappear from their hives. Possible factors include parasites, viruses, and a form of pesticide known as neonicotinoids. Whatever the cause, the results of a recent beekeeper survey suggest that the problem is not going away. For yet another year, nearly one-third of US honey bee colonies did not make it through the winter.

Given the variety of crops that rely on honey bees for pollination, the colony collapse story is an important one. But if you were to rely on media reports alone, you might believe that honey bees are in short supply. NPR recently declared that we may have reached “a crisis point for crops.” Others warned of an impending “beepocalypse” or a “beemageddon.”

In a rush to identify the culprit of the disorder, many journalists have made exaggerated claims about the impacts of CCD. Most have uncritically accepted that continued bee losses would be a disaster for America’s food supply. Others speculate about the coming of a second “silent spring.” Worse yet, many depict beekeepers as passive, unimaginative onlookers that stand idly by as their colonies vanish.

This sensational reporting has confused rather than informed discussions over CCD. Yes, honey bees are dying in above average numbers, and it is important to uncover what’s causing the losses, but it hardly spells disaster for bees or America’s food supply.

Consider the following facts about honey bees and CCD.

For starters, US honey bee colony numbers are stable, and they have been since before CCD hit the scene in 2006. In fact, colony numbers were higher in 2010 than any year since 1999. How can this be? Commercial beekeepers, far from being passive victims, have actively rebuilt their colonies in response to increased mortality from CCD. Although average winter mortality rates have increased from around 15% before 2006 to more than 30%, beekeepers have been able to adapt to these changes and maintain colony numbers.


Source: USDA NASS Honey Production Report
Rebuilding colonies is a routine part of modern beekeeping. The most common method involves splitting healthy colonies into multiple hives. The new hives, known as “nucs,” require a new queen bee, which can be purchased readily from commercial queen breeders for about $15-$25 each. Many beekeepers split their hives late in the year in anticipation of winter losses. The new hives quickly produce a new brood and often replace more bees than are lost over the winter. Other methods of rebuilding colonies include buying packaged bees (about $55 for 12,000 worker bees and a fertilized queen) or replacing the queen to improve the health of the hive.

“The state of the honey bee population—numbers, vitality, and economic output—are the products of not just the impact of disease but also the economic decisions made by beekeepers and farmers,” economists Randal Rucker and Walter Thurman write in a summary of their working paper on the impacts of CCD. Searching through a number of economic measures, the researchers came to a surprising conclusion: CCD has had almost no discernible economic impact.

But you don’t need to rely on their study to see that CCD has had little economic effect. Data on colonies and honey production are publicly available from the USDA. Like honey bee numbers, US honey production has shown no pattern of decline since CCD was first detected. In 2010, honey production was 14% greater than it was in 2006. (To be clear, US honey production and colony numbers are lower today than they were 30 years ago, but as Rucker and Thurman explain, this gradual decline happened prior to 2006 and cannot be attributed to CCD).


Source: USDA NASS Honey Production Report
What about the prices of queen bees and packaged bees? Because of higher winter losses, beekeepers are forced to purchase more packaged queen and worker bees to rebuild their lost hives. Yet even these prices seem unaffected. Commercial queen breeders are able to rear large numbers of queen bees quickly, often in less than a month, putting little to no upward pressure on bee prices following CCD.

And what about the prices consumers pay for crops pollinated by honey bees? Are these skyrocketing along with fears of the beepocalypse? Rucker and Thurman find that the cost of CCD on almonds, one of the most important crops from a honey bee pollinating perspective, is trivial. The implied increase in the shelf price of a pound of Smokehouse Almonds is a mere 2.8 cents, and the researchers consider that to be an upper-bound estimate of the impact on fruits and vegetables.

There is, however, one measure that has been significantly affected by CCD—and that’s the pollination fees beekeepers charge almond producers. These fees have more than doubled in recent years, though the fees began rising a few years before CCD was reported. Rucker and Thurman attribute a portion of this increase to the onset of CCD. But even this impact has a bright side: For many beekeepers, the increase in almond pollination fees has more than offset the costs they have incurred rebuilding their lost colonies.

Overcoming CCD is not without its challenges, but beekeepers have thus far proven themselves adept at navigating such changing conditions. Honey bees have long been afflicted with a variety of diseases. The Varroa mite, a blood-thirsty bee parasite, has been a scourge of beekeepers since the 1980s. While CCD has resulted in larger and more mysterious losses, the resourcefulness of beekeepers remains.

Hannah Nordhaus, author of The Beekeeper’s Lament, warned that the scare stories evoked by CCD should serve as a cautionary tale to environmental journalists. “By engaging in simplistic and sometimes misleading environmental narratives—by exaggerating the stakes and brushing over the inconvenient facts that stand in the way of foregone conclusions­­—we do our field, and our subjects, a disservice,” she wrote in her 2011 essay “An Environmental Journalist’s Lament.”

“The overblown response to CCD in the media stems from a failure to appreciate the resilience of markets in accommodating shocks of various sorts,” write Rucker and Thurman. The ability of beekeepers and other market forces to adapt has kept food on the shelves, honey in the cupboard, and honey bees buzzing. Properly understood, the story of CCD is not one of doom and gloom, but one of the triumph and perseverance of beekeepers.

Sunday, September 21, 2014

Functionalism (philosophy of mind)


From Wikipedia, the free encyclopedia
Functionalism is a theory of the mind in contemporary philosophy, developed largely as an alternative to both the identity theory of mind and behaviorism. Its core idea is that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role – that is, they are causal relations to other mental states, sensory inputs, and behavioral outputs.[1] Functionalism is a theoretical level between the physical implementation and behavioral output.[2] Therefore, it is different from its predecessors of Cartesian dualism (advocating independent mental and physical substances) and Skinnerian behaviorism and physicalism (declaring only physical substances) because it is only concerned with the effective functions of the brain, through its organization or its "software programs".

Since mental states are identified by a functional role, they are said to be realized on multiple levels; in other words, they are able to be manifested in various systems, even perhaps computers, so long as the system performs the appropriate functions. Just as computers are physical devices with an electronic substrate that perform computations on inputs to give outputs, so brains are physical devices with a neural substrate that perform computations on inputs to produce behaviors.
While functionalism has its advantages, there have been several arguments against it, claiming that it is an insufficient account of the mind.

Multiple realizability

An important part of some accounts of functionalism is the idea of multiple realizability. Since, according to standard functionalist theories, mental states are the corresponding functional role, mental states can be sufficiently explained without taking into account the underlying physical medium (e.g. the brain, neurons, etc.) that realizes such states; one need only take into account the higher-level functions in the cognitive system. Since mental states are not limited to a particular medium, they can be realized in multiple ways, including, theoretically, within non-biological systems, such as computers. In other words, a silicon-based machine could, in principle, have the same sort of mental life that a human being has, provided that its cognitive system realized the proper functional roles. Thus, mental states are individuated much like a valve; a valve can be made of plastic or metal or whatever material, as long as it performs the proper function (say, controlling the flow of liquid through a tube by blocking and unblocking its pathway).
However, some functionalist theories combine with the identity theory of mind and thereby deny multiple realizability. Such Functional Specification Theories (FSTs) (Levin, § 3.4), as they are called, were most notably developed by David Lewis[3] and David Malet Armstrong.[4]
According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential. What often drives this view is the belief that if we were to encounter an alien race with a cognitive system composed of significantly different material from humans' (e.g., silicon-based) but one that performed the same functions as human mental states (e.g., they tend to yell "Yowzas!" when poked with sharp objects, etc.), then we would say that their type of mental state is perhaps similar to ours, but too different to say it is the same. For some, this may be a disadvantage to FSTs. Indeed, one of Hilary Putnam's[5][6] arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.

Types of functionalism

Machine-state functionalism


Artistic representation of a Turing machine.

The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam.[5][6] This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" or computers capable of computing any given algorithm which were developed by Alan Turing (called Universal Turing machines).

In non-technical terms, a Turing machine can be visualized as an indefinitely long tape divided into squares (the memory), with a box-shaped scanning device that sits over and scans one square of the memory at a time. Each square is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:
  • Halt: Do nothing.
  • R: move one square to the right.
  • L: move one square to the left.
  • B: erase whatever is on the square.
  • 1: erase whatever is on the square and print a '1'.
An extremely simple example is a Turing machine which writes out the sequence '111' after scanning three blank squares and then halts, as specified by the following machine table:


     State One                  State Two                  State Three
B    write 1; stay in state 1   write 1; stay in state 2   write 1; stay in state 3
1    go right; go to state 2    go right; go to state 3    [halt]

This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and also go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, the machine halts.
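The machine table above can be rendered as a short program, which makes the functional point vivid: each state is defined entirely by its transitions, so the table is the whole specification. This is an illustrative sketch (the state names and tape encoding are choices of this presentation, not Putnam's formalism):

```python
# Transition table for the three-state machine described above.
# Key: (state, scanned symbol) -> (symbol to write, head move, next state).
# None means halt.
TABLE = {
    ("state1", "B"): ("1", 0, "state1"),   # write 1; stay in state 1
    ("state1", "1"): ("1", +1, "state2"),  # go right; go to state 2
    ("state2", "B"): ("1", 0, "state2"),   # write 1; stay in state 2
    ("state2", "1"): ("1", +1, "state3"),  # go right; go to state 3
    ("state3", "B"): ("1", 0, "state3"),   # write 1; stay in state 3
    ("state3", "1"): None,                 # halt
}

def run(tape, state="state1", head=0):
    """Run the machine on a list of tape squares until it halts."""
    while True:
        action = TABLE[(state, tape[head])]
        if action is None:
            return tape
        symbol, move, state = action
        tape[head] = symbol
        head += move

print("".join(run(["B", "B", "B"])))  # -> 111
```

Note that nothing in the program depends on what the states are "made of"; `"state1"` could be replaced by any token whatsoever, so long as the transitions are preserved. That is exactly the point the next paragraph makes.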

The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.

According to machine-state functionalism, the nature of a mental state is just like the nature of the automaton states described above. Just as state one simply is the state in which, given an input B, such and such happens, so being in pain is the state which disposes one to cry "ouch", become distracted, wonder what the cause is, and so forth.

Psychofunctionalism

A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psychofunctionalism.

The fundamental idea of psychofunctionalism is that psychology is an irreducibly complex science and that the terms that we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions, and further, that such a redefinition would not be desirable or salient were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of the kidney is to filter it and to maintain certain chemical balances and so on—this is what accounts for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of the mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role that is designated for them within our best scientific psychological theory. If some mental state which is postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist. On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for explanation of human behavior but which are not foreseen by ordinary folk psychological language, then these entities or states do exist.

Analytic functionalism

A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur, not by intrinsic properties of the phonemes that compose them. In the case of ordinary language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:
Mental state M is the state that is caused by P and causes Q.
For example, the state of pain is caused by sitting on a tack and causes loud cries, and higher order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the submental states and the (largely fictitious) propositional attitudes they describe.
Hence, its proponents are known as analytic or conceptual functionalists. The essential difference between analytic and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in the determination of which mental state terms and concepts are genuine and which functional identifications may be considered to be genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.

Homuncular functionalism

Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics.
Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher-level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher-level M1 and the set of mental facts that occur at the lower-level M2. Given the transitivity of supervenience, if M1 supervenes on M2, and M2 supervenes on P (physical base), then M1 and M2 both supervene on P, even though they are (allegedly) totally different sets of mental facts.

Since mind-mind supervenience seemed to have become acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.

Functionalism and physicalism

There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.

In the case of David Lewis, there is a distinction in the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's Mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.

Criticism

China brain

Ned Block[7] argues against the functionalist proposal of multiple realizability, where hardware implementation is irrelevant because only the functional level is important. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron (forming what has come to be called a "Blockhead"). According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. However, Block argues, this is patently absurd, so there must be something wrong with the thesis of functionalism since it would allow this to be a legitimate description of a mind.
Some functionalists believe China would have qualia but that due to its size it is impossible to imagine China being conscious.[8] Indeed, it may be the case that we are constrained by our theory of mind[9] and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, either qualia will exist across all hardware or will not exist at all and are illusory.[10]

The Chinese room

The Chinese room argument by John Searle[11] is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding through the use of a purely functional system. In short, Searle describes a person who only speaks English who is in a room with only Chinese symbols in baskets and a rule book in English for moving the symbols around. The person is then ordered by people outside of the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside of the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply based on these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules; that is, functionalism is an insufficient theory of the mind.
As noted above, in connection with Block's Chinese nation, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "system reply"); that is, the system does know Chinese. Of course, Searle responds that there is nothing more than syntax going on at the higher-level as well, so this reply is subject to the same initial problems. Furthermore, Searle suggests the man in the room could simply memorize the rules and symbol relations. Again, though he would convincingly mimic communication, he would be aware only of the symbols and rules, not of the meaning behind them.

Inverted spectrum

Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, most specifically proposed as an objection to functionalism by Ned Block.[7][12] This thought experiment involves supposing that there is a person, call her Jane, who is born with a condition which makes her see the opposite spectrum of light that is normally perceived. Unlike "normal" people, Jane sees the color violet as yellow, orange as blue, and so forth. So, suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, one can see that all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signs just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical, yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.[13]
David Chalmers tries to show[14] that even though mental content cannot be fully accounted for in functional terms, there is nevertheless a nomological correlation between mental states and functional states in this world. A silicon-based robot, for example, whose functional profile matched our own, would have to be fully conscious. His argument for this claim takes the form of a reductio ad absurdum. The general idea is that since it would be very unlikely for a conscious human being to experience a change in its qualia which it utterly fails to notice, mental content and functional profile appear to be inextricably bound together, at least in the human case. If the subject's qualia were to change, we would expect the subject to notice, and therefore his functional profile to follow suit. A similar argument is applied to the notion of absent qualia. In this case, Chalmers argues that it would be very unlikely for a subject to experience a fading of his qualia which he fails to notice and respond to. This, coupled with the independent assertion that a conscious being's functional profile just could be maintained, irrespective of its experiential state, leads to the conclusion that the subject of these experiments would remain fully conscious. The problem with this argument, however, as Brian G. Crabb (2005) has observed, is that it begs the central question: How could Chalmers know that the functional profile can be preserved, for example while the conscious subject's brain is being supplanted with a silicon substitute, unless he already assumes that the subject's possibly changing qualia would not be a determining factor? And while changing or fading qualia in a conscious subject might force changes in its functional profile, this tells us nothing about the case of a permanently inverted or unconscious robot. A subject with inverted qualia from birth would have nothing to notice or adjust to.
Similarly, an unconscious functional simulacrum of ourselves (a zombie) would have no experiential changes to notice or adjust to. Consequently, Crabb argues, Chalmers' "fading qualia" and "dancing qualia" arguments fail to establish that cases of permanently inverted or absent qualia are nomologically impossible.

A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so. (Functionalism says that mental states are produced by the functional relations in the brain.) This same type of problem—that there is no argument, just an antithetical assumption at their base—can also be said of both the Chinese room and the Chinese nation arguments. Notice, however, that Crabb's response to Chalmers does not commit this fallacy: His point is the more restricted observation that even if inverted or absent qualia turn out to be nomologically impossible, and it is perfectly possible that we might subsequently discover this fact by other means, Chalmers' argument fails to demonstrate that they are impossible.

Twin Earth

The Twin Earth thought experiment, introduced by Hilary Putnam,[15] is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H₂O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H₂O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H₂O, while your duplicate's is of XYZ.
Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.

Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of input and outputs to include the objects that are the causes of mental representations in the external world.

The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, and thus the Twin Earth argument does not constitute a genuine argument: the assumption entails a flat denial of functionalism itself (which would say that the two waters would not produce different mental states, because the functional relationships would remain unchanged).

Meaning holism

Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor[12] referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counter-intuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.

Another possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic-synthetic distinction, as many seek to avoid).

Triviality arguments

Hilary Putnam,[16] John Searle,[17] and others[18][19] have offered arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that either functionalism reduces to behaviorism, or to complete triviality and therefore a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the states of a physical system always differ at least slightly from one another, such a mapping will always exist, so any system is a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input-output behavior is still required.
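The mapping step of this triviality argument can be made concrete with a toy construction. The sketch below is illustrative only (the state names and trajectories are invented for the example, and no author's exact formalization is implied): if a system's physical states never repeat, a function mapping them onto any desired sequence of "mental" states trivially exists.

```python
# Toy version of the triviality construction: because the physical
# states in the trajectory are pairwise distinct, we can always define
# a well-formed function sending each physical state to whatever
# abstract state the functional account demands at that time step.

def trivial_realization(physical_trajectory, abstract_trajectory):
    """Build a state-to-state mapping; distinctness of the physical
    states guarantees the mapping is a well-defined function."""
    assert len(set(physical_trajectory)) == len(physical_trajectory)
    return dict(zip(physical_trajectory, abstract_trajectory))

# A "rock" whose microstates never exactly repeat...
rock_states = ["p0", "p1", "p2", "p3"]
# ...mapped onto an arbitrary sequence of mental-state labels:
mind_states = ["pain", "yell ouch", "distraction", "relief"]

mapping = trivial_realization(rock_states, mind_states)
print(mapping["p1"])  # -> yell ouch
```

The construction succeeds for any target sequence whatsoever, which is precisely the objection: a realization relation this permissive does no explanatory work.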

Peter Godfrey-Smith has argued further[20] that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in unique states at each moment and at each possible input, such a mapping will always exist so there will be a transducer layer which will produce whatever physical behavior is desired.

Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.[20]

Hard problem of consciousness

The hard problem of consciousness is the problem of explaining how and why we have qualia or phenomenal experiences — how sensations acquire characteristics, such as colours and tastes.[1] David Chalmers, who introduced the term "hard problem" of consciousness,[2] contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".[3]

The existence of a "hard problem" is controversial and has been disputed by some philosophers.[4][5] Providing an answer to this question could lie in understanding the roles that physical processes play in creating consciousness and the extent to which these processes create our subjective qualities of experience.[3]

Several questions about consciousness must be resolved in order to acquire a full understanding of it. These questions include, but are not limited to, whether being conscious could be wholly described in physical terms, such as the aggregation of neural processes in the brain. If consciousness cannot be explained exclusively by physical events, it must transcend the capabilities of physical systems and require an explanation of nonphysical means. For philosophers who assert that consciousness is nonphysical in nature, there remains a question about what outside of physical theory is required to explain consciousness.

Formulation of the problem

Chalmers' formulation

In Facing Up to the Problem of Consciousness, Chalmers set out his formulation of the problem, distinguishing the hard problem of experience from a set of comparatively easy problems of explaining cognitive functions.[3]

Easy problems

Chalmers contrasts the Hard Problem with a number of (relatively) Easy Problems that consciousness presents. He emphasizes that what the easy problems have in common is that they all represent some ability, or the performance of some function or behavior:
  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

Other formulations

Various formulations of the "hard problem" include:
  • "How is it that some organisms are subjects of experience?"
  • "Why does awareness of sensory information exist at all?"
  • "Why do qualia exist?"
  • "Why is there a subjective component to experience?"
  • "Why aren't we philosophical zombies?"
James Trefil notes that "it is the only major question in the sciences that we don't even know how to ask."[6]

Historical predecessors

The hard problem has scholarly antecedents considerably earlier than Chalmers.

Gottfried Leibniz wrote, as an example also known as Leibniz's gap:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.[7]
Isaac Newton wrote in a letter to Henry Oldenburg:
to determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.[8]
T.H. Huxley remarked:
how it is that any thing so remarkable as a state of consciousness comes about as the result of irritating nervous tissue, is just as unaccountable as the appearance of the Djin when Aladdin rubbed his lamp.[9]

Responses

Scientific attempts

There have been scientific attempts to explain the subjective aspects of consciousness, a project related to the binding problem in neuroscience. Many eminent theorists, including Francis Crick and Roger Penrose, have worked in this field. Nevertheless, even as sophisticated accounts are given, it is unclear whether such theories address the hard problem. Eliminative materialist philosopher Patricia Smith Churchland has famously remarked of Penrose's theories that "Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules."[10]

Consciousness is fundamental or elusive

Some philosophers, including David Chalmers and Alfred North Whitehead, argue that conscious experience is a fundamental constituent of the universe, a form of panpsychism sometimes referred to as panexperientialism. Chalmers argues that a "rich inner life" is not logically reducible to the functional properties of physical processes. He states that consciousness must be described using nonphysical means, by positing a fundamental ingredient capable of explaining phenomena that have not been explained by physical means. Use of this fundamental property, Chalmers argues, is necessary to explain certain functions of the world, much like other fundamental features, such as mass and time, and to explain significant principles in nature.

Thomas Nagel has posited that experiences are essentially subjective (accessible only to the individual undergoing them), while physical states are essentially objective (accessible to multiple individuals). So at this stage, we have no idea what it could even mean to claim that an essentially subjective state just is an essentially non-subjective state. In other words, we have no idea of what reductivism really amounts to.[11]

New mysterianism, such as that of Colin McGinn, proposes that the human mind, in its current form, will not be able to explain consciousness.[12]

Deflationary accounts

Some philosophers, such as Daniel Dennett,[4] Stanislas Dehaene,[5] and Peter Hacker,[13] oppose the idea that there is a hard problem. These theorists argue that once we really come to understand what consciousness is, we will realize that the hard problem is unreal. For instance, Dennett asserts that the so-called hard problem will be solved in the process of answering the easy ones.[4] In contrast with Chalmers, he argues that consciousness is not a fundamental feature of the universe and instead will eventually be fully explained by natural phenomena. Instead of involving the nonphysical, he says, consciousness merely plays tricks on people so that it appears nonphysical—in other words, it simply seems like it requires nonphysical features to account for its powers. In this way, Dennett compares consciousness to stage magic and its capability to create extraordinary illusions out of ordinary things.[14]

To show how people might be commonly fooled into overstating the powers of consciousness, Dennett describes a normal phenomenon called change blindness, a visual process that involves failure to detect scenery changes in a series of alternating images.[15] He uses this concept to argue that the overestimation of the brain's visual processing implies that our conception of our consciousness is likely not as pervasive as we make it out to be. He claims that this error of making consciousness more mysterious than it is could be a misstep in any developments toward an effective explanatory theory. Critics such as Galen Strawson reply that, in the case of consciousness, even a mistaken experience retains the essential face of experience that needs to be explained, contra Dennett.

To address the question of the hard problem, or how and why physical processes give rise to experience, Dennett states that the phenomenon of having experience is nothing more than the performance of functions or the production of behavior, which can also be referred to as the easy problems of consciousness.[4] He states that consciousness itself is driven simply by these functions, and to strip them away would wipe out any ability to identify thoughts, feelings, and consciousness altogether. So, unlike Chalmers and other dualists, Dennett says that the easy problems and the hard problem cannot be separated from each other. To him, the hard problem of experience is included among—not separate from—the easy problems, and therefore they can only be explained together as a cohesive unit.[14]

Dehaene's argument has similarities with those of Dennett. He says Chalmers' 'easy problems of consciousness' are actually the hard problems, and that the 'hard problems' are based only upon intuitions that, according to Dehaene, are continually shifting as understanding evolves. "Once our intuitions are educated ... Chalmers' hard problem will evaporate" and "qualia ... will be viewed as a peculiar idea of the prescientific era, much like vitalism ... [Just as science dispatched vitalism] the science of consciousness will eat away at the hard problem of consciousness until it vanishes."[5]

Like Dennett, Peter Hacker argues that the hard problem is fundamentally incoherent and that "consciousness studies," as it exists today, is "literally a total waste of time":[13]
"The whole endeavour of the consciousness studies community is absurd – they are in pursuit of a chimera. They misunderstand the nature of consciousness. The conception of consciousness which they have is incoherent. The questions they are asking don't make sense. They have to go back to the drawing board and start all over again."
Critics of Dennett's approach, such as David Chalmers and Thomas Nagel, argue that Dennett's argument misses the point of the inquiry by merely re-defining consciousness as an external property and ignoring the subjective aspect completely. This has led detractors to refer to Dennett's book Consciousness Explained as Consciousness Ignored or Consciousness Explained Away.[4] Dennett discussed this at the end of his book with a section entitled Consciousness Explained or Explained Away?[15]

Glenn Carruthers and Elizabeth Schier argue that the main arguments for the existence of a hard problem (philosophical zombies, Mary's room, and Nagel's bats) are only persuasive if one already assumes that "consciousness must be independent of the structure and function of mental states, i.e. that there is a hard problem." Hence, the arguments beg the question. The authors suggest that "instead of letting our conclusions on the thought experiments guide our theories of consciousness, we should let our theories of consciousness guide our conclusions from the thought experiments."[16] Contrary to this line of argument, Chalmers says: "Some may be led to deny the possibility [of zombies] in order to make some theory come out right, but the justification of such theories should ride on the question of possibility, rather than the other way round".[17]:96

A notable deflationary account is the Higher-Order Thought theories of consciousness.[18][19] Peter Carruthers discusses "recognitional concepts of experience", that is, "a capacity to recognize [a] type of experience when it occurs in one's own mental life", and suggests such a capacity does not depend upon qualia.[20] The most common argument against deflationary accounts and eliminative materialism is the argument from qualia: that conscious experiences are irreducible to physical states, or that current popular definitions of "physical" are incomplete. Deflationists reply that one and the same reality can appear in different ways, and that the numerical difference of these ways is consistent with a unitary mode of existence of the reality. Critics of the deflationary approach object that qualia are a case where a single reality cannot have multiple appearances. As John Searle points out: "where consciousness is concerned, the existence of the appearance is the reality."[21]

Massimo Pigliucci distances himself from eliminativism, but he insists that the hard problem is still misguided, resulting from a "category mistake":[22]
Of course an explanation isn't the same as an experience, but that’s because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.
