
Thursday, August 10, 2023

Maser

From Wikipedia, the free encyclopedia
First prototype ammonia maser in front of its inventor Charles H. Townes. The ammonia nozzle is at left in the box, the four brass rods at center are the quadrupole state selector, and the resonant cavity is at right. The 24 GHz microwaves exit through the vertical waveguide Townes is adjusting. At bottom are the vacuum pumps.
A hydrogen radio frequency discharge, the first element inside a hydrogen maser (see description below)

A maser (/ˈmeɪzər/; acronym of microwave amplification by stimulated emission of radiation) is a device that produces coherent electromagnetic waves (microwaves), through amplification by stimulated emission. The first maser was built by Charles H. Townes, James P. Gordon, and Herbert J. Zeiger at Columbia University in 1953. Townes, Nikolay Basov and Alexander Prokhorov were awarded the 1964 Nobel Prize in Physics for theoretical work leading to the maser. Masers are also used as the timekeeping device in atomic clocks, and as extremely low-noise microwave amplifiers in radio telescopes and deep-space spacecraft communication ground stations.

Modern masers can be designed to generate electromagnetic waves at not only microwave frequencies but also radio and infrared frequencies. For this reason, Townes suggested replacing "microwave" with "molecular" as the first word in the acronym "maser".

The laser works by the same principle as the maser but produces higher frequency coherent radiation at visible wavelengths. The maser was the forerunner of the laser, inspiring theoretical work by Townes and Arthur Leonard Schawlow that led to the invention of the laser in 1960 by Theodore Maiman. When the coherent optical oscillator was first imagined in 1957, it was originally called the "optical maser". This was ultimately changed to laser, for "light amplification by stimulated emission of radiation". Gordon Gould is credited with creating this acronym in 1957.

History

The theoretical principles governing the operation of a maser were first described by Joseph Weber of the University of Maryland, College Park at the Electron Tube Research Conference in June 1952 in Ottawa, with a summary published in the June 1953 Transactions of the Institute of Radio Engineers Professional Group on Electron Devices, and simultaneously by Nikolay Basov and Alexander Prokhorov from Lebedev Institute of Physics, at an All-Union Conference on Radio-Spectroscopy held by the USSR Academy of Sciences in May 1952, subsequently published in October 1954.

Independently, Charles Hard Townes, James P. Gordon, and H. J. Zeiger built the first ammonia maser at Columbia University in 1953. This device used stimulated emission in a stream of energized ammonia molecules to produce amplification of microwaves at a frequency of about 24.0 gigahertz. Townes later worked with Arthur L. Schawlow to describe the principle of the optical maser, or laser, of which Theodore H. Maiman created the first working model in 1960.

For their research in the field of stimulated emission, Townes, Basov and Prokhorov were awarded the Nobel Prize in Physics in 1964.

Technology

The maser is based on the principle of stimulated emission proposed by Albert Einstein in 1917. When atoms have been induced into an excited energy state, they can amplify radiation at a frequency particular to the element or molecule used as the masing medium (similar to what occurs in the lasing medium in a laser).

By putting such an amplifying medium in a resonant cavity, feedback is created that can produce coherent radiation.

21st-century developments

In 2012, a research team from the National Physical Laboratory and Imperial College London developed a solid-state maser that operated at room temperature by using optically pumped, pentacene-doped p-Terphenyl as the amplifier medium. It produced pulses of maser emission lasting for a few hundred microseconds.

In 2018, a research team from Imperial College London and University College London demonstrated continuous-wave maser oscillation using synthetic diamonds containing nitrogen-vacancy defects.

Uses

Masers serve as high precision frequency references. These "atomic frequency standards" are one of the many forms of atomic clocks. Masers were also used as low-noise microwave amplifiers in radio telescopes, though these have largely been replaced by amplifiers based on FETs.

During the early 1960s, the Jet Propulsion Laboratory developed a maser to provide ultra-low-noise amplification of S-band microwave signals received from deep space probes. This maser used deeply refrigerated helium to chill the amplifier down to a temperature of 4 kelvin. Amplification was achieved by exciting a ruby comb with a 12.0 gigahertz klystron. In the early years, it took days to chill and remove the impurities from the hydrogen lines. Refrigeration was a two-stage process with a large Linde unit on the ground, and a crosshead compressor within the antenna. The final injection was at 21 MPa (3,000 psi) through a 150 μm (0.006 in) micrometer-adjustable entry to the chamber. The whole system noise temperature looking at cold sky (2.7 kelvin in the microwave band) was 17 kelvin; this gave such a low noise figure that the Mariner IV space probe could send still pictures from Mars back to the Earth even though the output power of its radio transmitter was only 15 watts, and hence the total signal power received was only −169 decibels with respect to a milliwatt (dBm).
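As a rough sanity check on the received power quoted above, a level in dBm can be converted to watts with a short helper (the function below is illustrative, not from the source):

```python
def dbm_to_watts(dbm: float) -> float:
    """Convert a power level in dBm (decibels relative to 1 milliwatt) to watts."""
    return 10 ** (dbm / 10) / 1000  # 10^(dBm/10) milliwatts, then milliwatts -> watts

# Mariner IV downlink: -169 dBm received from a 15 W transmitter at Mars
received = dbm_to_watts(-169)
print(f"{received:.3g} W")  # on the order of 1e-20 W
```

The spread between 15 W transmitted and roughly 10⁻²⁰ W received is what made such low amplifier noise temperatures essential.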

Hydrogen maser

A hydrogen maser.

The hydrogen maser is used as an atomic frequency standard. Together with other kinds of atomic clocks, these help make up the International Atomic Time standard ("Temps Atomique International" or "TAI" in French). This is the international time scale coordinated by the International Bureau of Weights and Measures. Norman Ramsey and his colleagues first conceived of the maser as a timing standard. More recent masers are practically identical to their original design. Maser oscillations rely on the stimulated emission between two hyperfine energy levels of atomic hydrogen.

Here is a brief description of how they work:

  • First, a beam of atomic hydrogen is produced. This is done by submitting the gas at low pressure to a high-frequency radio wave discharge (see the picture on this page).
  • The next step is "state selection"—in order to get some stimulated emission, it is necessary to create a population inversion of the atoms. This is done in a way that is very similar to the Stern–Gerlach experiment. After passing through an aperture and a magnetic field, many of the atoms in the beam are left in the upper energy level of the lasing transition. From this state, the atoms can decay to the lower state and emit some microwave radiation.
  • A high Q factor (quality factor) microwave cavity confines the microwaves and reinjects them repeatedly into the atom beam. The stimulated emission amplifies the microwaves on each pass through the beam. This combination of amplification and feedback is what defines all oscillators. The resonant frequency of the microwave cavity is tuned to the frequency of the hyperfine energy transition of hydrogen: 1,420,405,752 hertz.
  • A small fraction of the signal in the microwave cavity is coupled into a coaxial cable and then sent to a coherent radio receiver.
  • The microwave signal coming out of the maser is very weak, a few picowatts. The frequency of the signal is fixed and extremely stable. The coherent receiver is used to amplify the signal and change the frequency. This is done using a series of phase-locked loops and a high performance quartz oscillator.
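As a side note on the numbers above, the hyperfine transition frequency fixes the wavelength of the maser output; a minimal sketch:

```python
C = 299_792_458.0            # speed of light in vacuum, m/s
F_HYDROGEN = 1_420_405_752   # hydrogen hyperfine transition frequency, Hz (from the text)

wavelength = C / F_HYDROGEN  # wavelength = c / f
print(f"{wavelength * 100:.1f} cm")  # the well-known 21 cm hydrogen line
```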

Astrophysical masers

Maser-like stimulated emission has also been observed in nature from interstellar space, and it is frequently called "superradiant emission" to distinguish it from laboratory masers. Such emission is observed from molecules such as water (H2O), hydroxyl radicals (•OH), methanol (CH3OH), formaldehyde (HCHO), silicon monoxide (SiO), and carbodiimide (HNCNH). Water molecules in star-forming regions can undergo a population inversion and emit radiation at about 22.0 GHz, creating the brightest spectral line in the radio universe. Some water masers also emit radiation from a rotational transition at a frequency of 96 GHz.

Extremely powerful masers, associated with active galactic nuclei, are known as megamasers and are up to a million times more powerful than stellar masers.

Terminology

The meaning of the term maser has changed slightly since its introduction. Initially the acronym was universally given as "microwave amplification by stimulated emission of radiation", which described devices which emitted in the microwave region of the electromagnetic spectrum.

The principle and concept of stimulated emission has since been extended to more devices and frequencies. Thus, the original acronym is sometimes modified, as suggested by Charles H. Townes, to "molecular amplification by stimulated emission of radiation." Some have asserted that Townes's efforts to extend the acronym in this way were primarily motivated by the desire to increase the importance of his invention, and his reputation in the scientific community.

When the laser was developed, Townes and Schawlow and their colleagues at Bell Labs pushed the use of the term optical maser, but this was largely abandoned in favor of laser, coined by their rival Gordon Gould. In modern usage, devices that emit in the X-ray through infrared portions of the spectrum are typically called lasers, and devices that emit in the microwave region and below are commonly called masers, regardless of whether they emit microwaves or other frequencies.

Gould originally proposed distinct names for devices that emit in each portion of the spectrum, including grasers (gamma ray lasers), xasers (x-ray lasers), uvasers (ultraviolet lasers), lasers (visible lasers), irasers (infrared lasers), masers (microwave masers), and rasers (RF masers). Most of these terms never caught on, however, and all have now become (apart from in science fiction) obsolete except for maser and laser.

Wednesday, August 9, 2023

Carbon budget


From Wikipedia, the free encyclopedia
Carbon budget and emission reduction scenarios needed to reach the two-degree target agreed to in the Paris Agreement (without net negative emissions, based on peak emissions)

A carbon budget is a concept used in climate policy to help set emissions reduction targets in a fair and effective way. It denotes "the maximum amount of cumulative net global anthropogenic carbon dioxide (CO2) emissions that would result in limiting global warming to a given level". When expressed relative to the pre-industrial period it is referred to as the total carbon budget, and when expressed from a recent specified date it is referred to as the remaining carbon budget.

A carbon budget consistent with keeping warming below a specified limit is also referred to as an emissions budget, an emissions quota, or allowable emissions. An emissions budget may also be associated with objectives for other related climate variables, such as radiative forcing or sea level rise.

Total or remaining carbon budgets are calculated by combining estimates of various contributing factors, including scientific evidence and value judgments or choices.

Global carbon budgets can be further divided into national emissions budgets, so that countries can set specific climate mitigation goals. Emissions budgets are relevant to climate change mitigation because they indicate a finite amount of carbon dioxide that can be emitted over time before resulting in dangerous levels of global warming. Change in global temperature is independent of the geographic location of these emissions, and is largely independent of their timing.

Carbon budgets are applicable to the global level. To translate these global carbon budgets to the country level, a set of value judgments have to be made on how to distribute the total and remaining carbon budget. This involves the consideration of aspects of equity and fairness between countries as well as other methodological choices. There are many differences between nations, including but not limited to population, level of industrialisation, national emissions histories, and mitigation capabilities. For this reason, scientists have made attempts to allocate global carbon budgets among countries using methods that follow various principles of equity.

Definition

The IPCC Sixth Assessment Report defines carbon budget as the following two concepts:

  • "An assessment of carbon cycle sources and sinks on a global level, through the synthesis of evidence for fossil fuel and cement emissions, emissions and removals associated with land use and land-use change, ocean and natural land sources and sinks of carbon dioxide (CO2), and the resulting change in atmospheric CO2 concentration. This is referred to as the global carbon budget."; or
  • "The maximum amount of cumulative net global anthropogenic CO2 emissions that would result in limiting global warming to a given level with a given probability, taking into account the effect of other anthropogenic climate forcers. This is referred to as the total carbon budget when expressed starting from the pre-industrial period, and as the remaining carbon budget when expressed from a recent specified date."

Global carbon budgets can be further divided into national emissions budgets, so that countries can set specific climate mitigation goals.

An emissions budget may be distinguished from an emissions target: an emissions target may be set internationally or nationally in accordance with objectives other than a specific global temperature, and is commonly applied to the annual emissions of a single year.

Estimations

Recent and currently remaining carbon budget

Historical (unrestrained) carbon budget: Cumulative contributions to the global carbon budget since 1850 illustrate how source and sink components have been out of balance, causing an approximately 50% rise in atmospheric CO2.
 
Fossil CO2 emissions: global; territorial; by fuel type (incl cement); per capita

Several organisations provide annual updates to the remaining carbon budget, including the Global Carbon Project, the Mercator Research Institute on Global Commons and Climate Change (MCC) and the CONSTRAIN project. In March 2022, before formal publication of the 'Global Carbon Budget 2021' preprint, scientists reported, based on Carbon Monitor (CM) data, that after the record declines caused by the COVID-19 pandemic in 2020, global CO2 emissions rebounded sharply by 4.8% in 2021, indicating that at the current trajectory the carbon budget for a ⅔ likelihood of limiting warming to 1.5 °C would be used up within 9.5 years.

In April 2022, the reviewed and officially published 'Global Carbon Budget 2021' concluded that fossil CO2 emissions rebounded from pandemic levels by around +4.8% relative to 2020 emissions, returning to 2019 levels.

It identifies three major issues for improving the reliability and accuracy of monitoring; shows that China and India surpassed their 2019 emission levels (by 5.7% and 3.2%) while the EU and the US stayed beneath theirs (by 5.3% and 4.5%); quantifies various changes and trends; for the first time provides model estimates linked to the official country GHG inventory reports; and suggests that the remaining carbon budget as of 1 January 2022 for a 50% likelihood of limiting global warming to 1.5 °C (albeit with a temporary exceedance to be expected) is 120 GtC (420 GtCO2), or 11 years of 2021 emission levels.

This does not mean that 11 years necessarily remain in which to cut emissions; rather, it means that if emissions stayed constant at 2021 levels instead of increasing, the budget would be exhausted after 11 years, in the hypothetical scenario that all emissions then ceased in the 12th year. (The 50% likelihood may be described as a kind of minimum plausibility requirement, as lower likelihoods would make the 1.5 °C goal "unlikely".) Moreover, other trackers show (or highlight) different amounts of carbon budget left: the MCC, as of May 2022, shows '7 years 1 month left', and different likelihoods have different carbon budgets, with an 83% likelihood corresponding to 6.6 ±0.1 years left (ending in 2028) according to CM data.
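The "years left" figures quoted by these trackers are, at heart, a remaining budget divided by an assumed constant annual emissions rate. A hedged sketch (the ~38 GtCO2/yr annual figure is our assumption, implied by "420 GtCO2, or 11 years of 2021 emission levels"):

```python
remaining_budget_gtco2 = 420  # 50% likelihood of limiting warming to 1.5 °C, from 1 Jan 2022
annual_emissions_gtco2 = 38   # assumed constant at roughly 2021 levels (our estimate)

# Years until the budget is exhausted, assuming emissions neither rise nor fall
years_left = remaining_budget_gtco2 / annual_emissions_gtco2
print(f"about {years_left:.0f} years at constant emissions")
```

If emissions keep rising, as they did in 2021, the budget is exhausted sooner than this simple division suggests.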

Carbon budget in gigatonnes and factors

Estimating the remaining carbon budget at the global level depends on climate science and value judgments or choices. To translate a global budget to the national level, further value judgments and choices have to be made. Figure from the CONSTRAIN Zero In On Report.

The finding of an almost linear relationship between global temperature rise and cumulative carbon dioxide emissions has encouraged the estimation of global emissions budgets in order to remain below dangerous levels of warming. From the pre-industrial period to 2019, approximately 2390 gigatonnes of CO2 (GtCO2) had already been emitted globally.

Scientific estimations of the remaining global emissions budgets/quotas differ due to varied methodological approaches, and considerations of thresholds. Estimations might not include all amplifying climate change feedbacks, although the most authoritative carbon budget assessments by the IPCC do account explicitly for these. The IPCC assesses the size of remaining carbon budgets using estimates of past warming caused by human activities, the amount of warming per cumulative unit of CO2 emissions (also known as the Transient Climate Response to cumulative Emissions of carbon dioxide, or TCRE), the amount of warming that could still occur once all emissions of CO2 are halted (known as the Zero Emissions Commitment), and the impact of Earth system feedbacks that would otherwise not be covered; and vary according to the global temperature target that is chosen, the probability of staying below that target, and the emission of other non-CO2 greenhouse gases (GHGs). This approach was first applied in the 2018 Special report on Global Warming of 1.5°C by the IPCC, and was also used in its 2021 Working Group I Contribution to the Sixth Assessment Report.
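The near-linear TCRE relation described above can be illustrated numerically. The TCRE value used here (0.45 °C of warming per 1000 GtCO2 of cumulative emissions) is an assumed best-estimate figure for illustration only:

```python
TCRE = 0.45 / 1000  # assumed: °C of warming per GtCO2 of cumulative emissions

def warming_from_emissions(cumulative_gtco2: float) -> float:
    """Approximate CO2-induced warming via the linear TCRE relation."""
    return TCRE * cumulative_gtco2

# Cumulative emissions to 2019 quoted in the text: ~2390 GtCO2
print(f"{warming_from_emissions(2390):.2f} °C")  # roughly 1.08 °C
```

The result is broadly consistent with observed warming to date, which is why the linear relation is so useful for budget estimation; the IPCC's actual budgets additionally account for non-CO2 forcers, the zero-emissions commitment, and Earth system feedbacks, as described above.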

Carbon budget estimates depend on the likelihood or probability of avoiding a temperature limit, and the assumed warming that is projected to be caused by non-CO2 emissions. The values for the carbon budget estimates in the following table are drawn from the latest assessment of the Physical Science Basis of climate change by the Working Group I Contribution to the IPCC Sixth Assessment Report. These estimates assume non-CO2 emissions are also reduced in line with deep decarbonisation scenarios that reach global net zero CO2 emissions. Carbon budget estimates thus depend on how successful society is in reducing non-CO2 emissions together with carbon dioxide emissions. The IPCC Sixth Assessment Report estimated that remaining carbon budgets can be 220 Gt CO2 higher or lower depending on how successful non-CO2 emissions are reduced.

Estimated carbon budgets in GtCO2 from 2020 with likelihoods

Global warming relative to 1850-1900    17%     33%     50%     66%     83%
1.5 °C                                   900     650     500     400     300
1.7 °C                                  1450    1050     850     700     550
2.0 °C                                  2300    1700    1350    1150     900
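For reference, the table above can be held as a simple lookup structure; the dictionary layout below is our own, with values taken directly from the table:

```python
# Remaining budgets (GtCO2 from 2020), keyed by (warming limit in °C, likelihood in %)
BUDGETS = {
    (1.5, 17): 900,  (1.5, 33): 650,  (1.5, 50): 500,  (1.5, 66): 400,  (1.5, 83): 300,
    (1.7, 17): 1450, (1.7, 33): 1050, (1.7, 50): 850,  (1.7, 66): 700,  (1.7, 83): 550,
    (2.0, 17): 2300, (2.0, 33): 1700, (2.0, 50): 1350, (2.0, 66): 1150, (2.0, 83): 900,
}

# 50% chance of staying below 1.5 °C of warming
print(BUDGETS[(1.5, 50)], "GtCO2")
```

Note how the budget shrinks as the required likelihood rises: demanding an 83% chance of staying below 1.5 °C leaves only 300 GtCO2.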

National emissions budgets

Carbon budgets are applicable to the global level. To translate these global carbon budgets to the country level, a set of value judgments have to be made on how to distribute the total and remaining carbon budget. In light of the many differences between nations, including but not limited to population, level of industrialisation, national emissions histories, and mitigation capabilities, scientists have made attempts to allocate global carbon budgets among countries using methods that follow various principles of equity. Allocating national emissions budgets is comparable to sharing the effort to reduce global emissions, underlined by some assumptions of state-level responsibility of climate change. Many authors have conducted quantitative analyses which allocate emissions budgets, often simultaneously addressing disparities in historical GHG emissions between nations.

One guiding principle that is used to allocate global emissions budgets to nations is the principle of "common but differentiated responsibilities and respective capabilities" that is included in the United Nations Framework Convention on Climate Change (UNFCCC). This principle is not defined in further detail in the UNFCCC but is broadly understood to recognize nations' different cumulative historical contributions to global emissions as well as their different development stages. From this perspective, those countries with greater emissions during a set time period (for example, since the pre-industrial era to the present) are the most responsible for addressing excess emissions, as are countries that are richer. Thus, their national emissions budgets have to be smaller than those from countries that have polluted less in the past, or are poorer. The concept of national historical responsibility for climate change has prevailed in the literature since the early 1990s and has been part of the key international agreements on climate change (UNFCCC, the Kyoto Protocol and the Paris Agreement). Consequently, those countries with the highest cumulative historical emissions have the most responsibility to take the strongest actions and help developing countries to mitigate their emissions and adapt to climate change. This principle is recognized in international treaties and has been part of the diplomatic strategies by developing countries, that argue that they need larger emissions budgets to reduce inequity and achieve sustainable development.

Another common equity principle for calculating national emissions budgets is the "egalitarian" principle. This principle stipulates individuals should have equal rights, and therefore emissions budgets should be distributed proportionally according to state populations. Some scientists have thus reasoned the use of national per-capita emissions in national emissions budget calculations. This principle may be favoured by nations with larger or rapidly growing populations, but raises the question whether individuals can have a right to pollute.

A third equity principle that has been employed in national budget calculations considers national sovereignty. The "sovereignty" principle highlights the equal right of nations to pollute. The grandfathering method for calculating national emissions budgets uses this principle. Grandfathering allocates these budgets proportionally according to emissions at a particular base year, and has been used under international regimes such as the Kyoto Protocol and the early phase of the European Union Emissions Trading Scheme (EU ETS). This principle is often favoured by developed countries, as it allocates larger emissions budgets to them. However, recent publications highlight that grandfathering is unsupported as an equity principle, as it "creates 'cascading biases' against poorer states" and "is not a 'standard of equity'". Other scholars have highlighted that "to treat states as the owners of emission rights has morally problematic consequences".

Pathways to stay within carbon budget

The steps that can be taken to stay within one's carbon budget are explained within the concept of climate change mitigation.

Climate change mitigation is action to limit climate change by reducing emissions of greenhouse gases or removing those gases from the atmosphere. The recent rise in global average temperature is mostly due to emissions from burning fossil fuels such as coal, oil, and natural gas. Mitigation can reduce emissions by transitioning to sustainable energy sources, conserving energy, and increasing efficiency. It is possible to remove carbon dioxide (CO2) from the atmosphere by enlarging forests, restoring wetlands and using other natural and technical processes. Experts call these processes carbon sequestration. Governments and companies have pledged to reduce emissions to prevent dangerous climate change in line with international negotiations to limit warming by reducing emissions.

Solar energy and wind power have the greatest potential for mitigation at the lowest cost compared to a range of other options. The availability of sunshine and wind is variable. But it is possible to deal with this through energy storage and improved electrical grids. These include long-distance electricity transmission, demand management and diversification of renewables. It is possible to reduce emissions from infrastructure that directly burns fossil fuels, such as vehicles and heating appliances, by electrifying the infrastructure. If the electricity comes from renewable sources instead of fossil fuels this will reduce emissions. Using heat pumps and electric vehicles can improve energy efficiency. If industrial processes must create carbon dioxide, carbon capture and storage can reduce net emissions.

Greenhouse gas emissions from agriculture include methane as well as nitrous oxide. It is possible to cut emissions from agriculture by reducing food waste, switching to a more plant-based diet, by protecting ecosystems and by improving farming processes. Changing energy sources, industrial processes and farming methods can reduce emissions. So can changes in demand, for instance in diets or the way we build and travel in cities.

Climate change mitigation policies include carbon pricing through carbon taxes and carbon emissions trading, easing regulations for renewable energy deployment, reductions of fossil fuel subsidies, divestment from fossil fuels, and subsidies for clean energy. Current policies are estimated to produce global warming of about 2.7 °C by 2100. This warming is significantly above the 2015 Paris Agreement's goal of limiting global warming to well below 2 °C and preferably to 1.5 °C. Globally, limiting warming to 2 °C may result in higher economic benefits than economic costs.

Photosynthetic reaction centre

From Wikipedia, the free encyclopedia
Electron micrograph of a 2D crystal of the LH1-Reaction center photosynthetic unit.

A photosynthetic reaction center is a complex of several proteins, pigments and other co-factors that together execute the primary energy conversion reactions of photosynthesis. Molecular excitations, either originating directly from sunlight or transferred as excitation energy via light-harvesting antenna systems, give rise to electron transfer reactions along the path of a series of protein-bound co-factors. These co-factors are light-absorbing molecules (also named chromophores or pigments) such as chlorophyll and pheophytin, as well as quinones. The energy of the photon is used to excite an electron of a pigment. The free energy created is then used, via a chain of nearby electron acceptors, for a transfer of hydrogen atoms (as protons and electrons) from H2O or hydrogen sulfide towards carbon dioxide, eventually producing glucose. These electron transfer steps ultimately result in the conversion of the energy of photons to chemical energy.

Transforming light energy into charge separation

Reaction centers are present in all green plants, algae, and many bacteria. A variety of light-harvesting complexes exists across the photosynthetic species. Green plants and algae have two different types of reaction centers that are part of larger supercomplexes known as P700 in Photosystem I and P680 in Photosystem II. The structures of these supercomplexes are large, involving multiple light-harvesting complexes. The reaction center found in Rhodopseudomonas bacteria is currently the best understood, since it was the first reaction center of known structure and has fewer polypeptide chains than the examples in green plants.

A reaction center is laid out in such a way that it captures the energy of a photon using pigment molecules and turns it into a usable form. Once the light energy has been absorbed directly by the pigment molecules, or passed to them by resonance transfer from a surrounding light-harvesting complex, they release electrons into an electron transport chain and pass energy to a hydrogen donor such as H2O to extract electrons and protons from it. In green plants, the electron transport chain has many electron acceptors including pheophytin, quinone, plastoquinone, cytochrome bf, and ferredoxin, which result finally in the reduced molecule NADPH, while the energy used to split water results in the release of oxygen. The passage of the electron through the electron transport chain also results in the pumping of protons (hydrogen ions) from the chloroplast's stroma and into the lumen, resulting in a proton gradient across the thylakoid membrane that can be used to synthesize ATP using the ATP synthase molecule. Both the ATP and NADPH are used in the Calvin cycle to fix carbon dioxide into triose sugars.

In bacteria

Classification

Two classes of reaction centres are recognized. Type I centres, found in green sulfur bacteria, Heliobacteria, and plant/cyanobacterial PS-I, use iron-sulfur clusters as electron acceptors. Type II centres, found in Chloroflexus, purple bacteria, and plant/cyanobacterial PS-II, use quinones. Not only do all members within each class share common ancestry, but the two classes also appear to be related through their common structure. This section deals with the type II system found in purple bacteria.

Structure

Schematic of reaction center in the membrane, with Cytochrome C at top
Bacterial photosynthetic reaction center.

The bacterial photosynthetic reaction center has been an important model to understand the structure and chemistry of the biological process of capturing light energy. In the 1960s, Roderick Clayton was the first to purify the reaction center complex from purple bacteria. However, the first crystal structure (upper image at right) was determined in 1984 by Hartmut Michel, Johann Deisenhofer and Robert Huber for which they shared the Nobel Prize in 1988. This was also significant for being the first 3D crystal structure of any membrane protein complex.

Four different subunits were found to be important for the function of the photosynthetic reaction center. The L and M subunits, shown in blue and purple in the image of the structure, both span the lipid bilayer of the plasma membrane. They are structurally similar to one another, both having 5 transmembrane alpha helices. Four bacteriochlorophyll b (BChl-b) molecules, two bacteriopheophytin b molecules (BPh) molecules, two quinones (QA and QB), and a ferrous ion are associated with the L and M subunits. The H subunit, shown in gold, lies on the cytoplasmic side of the plasma membrane. A cytochrome subunit, not shown here, contains four c-type hemes and is located on the periplasmic surface (outer) of the membrane. The latter sub-unit is not a general structural motif in photosynthetic bacteria. The L and M subunits bind the functional and light-interacting cofactors, shown here in green.

Reaction centers from different bacterial species may contain slightly altered bacterio-chlorophyll and bacterio-pheophytin chromophores as functional co-factors. These alterations cause shifts in the colour of light that can be absorbed. The reaction center contains two pigments that serve to collect and transfer the energy from photon absorption: BChl and Bph. BChl roughly resembles the chlorophyll molecule found in green plants, but, due to minor structural differences, its peak absorption wavelength is shifted into the infrared, with wavelengths as long as 1000 nm. Bph has the same structure as BChl, but the central magnesium ion is replaced by two protons. This alteration causes both an absorbance maximum shift and a lowered redox potential.

Mechanism

The light reaction

The process starts when light is absorbed by two BChl molecules that lie near the periplasmic side of the membrane. This pair of chlorophyll molecules, often called the "special pair", absorbs photons at 870 nm or 960 nm, depending on the species, and is thus called P870 (in Rhodobacter sphaeroides) or P960 (in Blastochloris viridis), with P standing for "pigment". Once P absorbs a photon, it ejects an electron, which is transferred through another BChl molecule to the BPh in the L subunit. This initial charge separation yields a positive charge on P and a negative charge on the BPh. This process takes place in about 10 picoseconds (10⁻¹¹ seconds).

The charges on P⁺ and BPh⁻ could undergo charge recombination in this state, which would waste the energy and convert it into heat. Several features of the reaction center structure serve to prevent this. First, the transfer of an electron from BPh⁻ to P960⁺ is relatively slow compared to two other redox reactions in the reaction center. The faster reactions involve the transfer of an electron from BPh⁻ (BPh⁻ is oxidized to BPh) to the electron acceptor quinone (QA), and the transfer of an electron to P960⁺ (P960⁺ is reduced to P960) from a heme in the cytochrome subunit above the reaction center.

The high-energy electron that resides on the tightly bound quinone molecule QA is transferred to an exchangeable quinone molecule QB. This molecule is loosely associated with the protein and is fairly easy to detach. Two electrons are required to fully reduce QB to QH2, taking up two protons from the cytoplasm in the process. The reduced quinone QH2 diffuses through the membrane to another protein complex (cytochrome bc1-complex) where it is oxidized. In the process the reducing power of the QH2 is used to pump protons across the membrane to the periplasmic space. The electrons from the cytochrome bc1-complex are then transferred through a soluble cytochrome c intermediate, called cytochrome c2, in the periplasm to the cytochrome subunit.

In Cyanobacteria and plants

Cyanobacteria, the precursors of the chloroplasts found in green plants, possess both photosystems, each with its own type of reaction center. Combining the two systems allows oxygen to be produced.

Oxygenic photosynthesis

In 1772, the chemist Joseph Priestley carried out a series of experiments relating to the gases involved in respiration and combustion. In his first experiment, he lit a candle and placed it under an upturned jar. After a short period of time, the candle burned out. He then carried out a similar experiment with a mouse in the same confined space where the candle had burned. He found that the mouse died a short time after the candle had been extinguished. However, he could revivify the foul air by placing green plants in it and exposing them to light. Priestley's observations were some of the first experiments that demonstrated the activity of a photosynthetic reaction center.

In 1779, Jan Ingenhousz carried out more than 500 experiments spread over four months in an attempt to understand what was really going on. He wrote up his discoveries in a book entitled Experiments upon Vegetables. Ingenhousz took green plants and immersed them in water inside a transparent tank. He observed many bubbles rising from the surface of the leaves whenever the plants were exposed to light. Ingenhousz collected the gas given off by the plants and performed several different tests in an attempt to determine its identity. The test that finally revealed the identity of the gas was placing a smouldering taper into the gas sample and seeing it relight. This proved the gas was oxygen, or, as Joseph Priestley had called it, 'de-phlogisticated air'.

In 1932, Robert Emerson and his student, William Arnold, used a repetitive flash technique to precisely measure small quantities of oxygen evolved by chlorophyll in the alga Chlorella. Their experiment proved the existence of a photosynthetic unit. Gaffron and Wohl later interpreted the experiment and realized that the energy of light absorbed by the many pigments of a photosynthetic unit is transferred to a common reaction center. This reaction occurs at the reaction center of Photosystem II and takes place in cyanobacteria, algae and green plants.

Photosystem II

Cyanobacteria photosystem II, Monomer, PDB 2AXT.

Photosystem II is the photosystem that generates the two electrons that will eventually reduce NADP+ in ferredoxin-NADP-reductase. Photosystem II is present on the thylakoid membranes inside chloroplasts, the site of photosynthesis in green plants. The structure of Photosystem II is remarkably similar to the bacterial reaction center, and it is theorized that they share a common ancestor.

The core of Photosystem II consists of two subunits referred to as D1 and D2. These two subunits are similar to the L and M subunits present in the bacterial reaction center. Photosystem II differs from the bacterial reaction center in that it has many additional subunits that bind additional chlorophylls to increase efficiency. The overall reaction catalyzed by Photosystem II is:

2Q + 2H2O + light → O2 + 2QH2

Q represents the oxidized form of plastoquinone while QH2 represents its reduced form. This process of reducing quinone is comparable to that which takes place in the bacterial reaction center. Photosystem II obtains electrons by oxidizing water in a process called photolysis. Molecular oxygen is a byproduct of this process, and it is this reaction that supplies the atmosphere with oxygen. The fact that the oxygen from green plants originates from water was first deduced by the Canadian-born American biochemist Martin David Kamen. He used a stable isotope of oxygen, ¹⁸O, to trace the path of the oxygen from water to gaseous molecular oxygen. This reaction is catalyzed by a reactive center in Photosystem II containing four manganese ions.

Electron transport in PS2.

The reaction begins with the excitation of a pair of chlorophyll molecules similar to those in the bacterial reaction center. Due to the presence of chlorophyll a, as opposed to bacteriochlorophyll, Photosystem II absorbs light at a shorter wavelength. The pair of chlorophyll molecules at the reaction center is often referred to as P680. When a photon has been absorbed, the resulting high-energy electron is transferred to a nearby pheophytin molecule. This is above and to the right of the pair on the diagram and is coloured grey. The electron travels from the pheophytin molecule through two plastoquinone molecules, the first tightly bound, the second loosely bound. The tightly bound molecule is shown above the pheophytin molecule and is colored red; the loosely bound molecule is to its left and is also colored red. This flow of electrons is similar to that of the bacterial reaction center. Two electrons, together with the uptake of two protons, are required to fully reduce the loosely bound plastoquinone molecule to QH2.

One difference between Photosystem II and the bacterial reaction center is the source of the electron that neutralizes the positively charged pair of chlorophyll a molecules. In the bacterial reaction center, the electron is obtained from a reduced haem group in a cytochrome subunit or from a water-soluble cytochrome c protein; in Photosystem II, it is obtained from water.

Every time P680 absorbs a photon, it gives off an electron to pheophytin and thereby gains a positive charge. After this photoinduced charge separation, P680⁺ is a very strong oxidant. It extracts electrons from water molecules that are bound at the manganese center directly below the pair. This center, below and to the left of the pair in the diagram, contains four manganese ions, a calcium ion, a chloride ion, and a tyrosine residue. Manganese is adept at these reactions because it is capable of existing in four oxidation states (Mn²⁺, Mn³⁺, Mn⁴⁺ and Mn⁵⁺) and because it forms strong bonds with oxygen-containing molecules such as water. Oxidizing two molecules of water to form one oxygen molecule requires four electrons. The water molecules oxidized at the manganese center are the source of the electrons that reduce the two molecules of Q to QH2. To date, this water-splitting catalytic center has not been reproduced by any man-made catalyst.

Photosystem I

After the electron has left Photosystem II it is transferred to a cytochrome b6f complex and then to plastocyanin, a blue copper protein and electron carrier. The plastocyanin complex carries the electron that will neutralize the pair in the next reaction center, Photosystem I.

As with Photosystem II and the bacterial reaction center, a pair of chlorophyll a molecules initiates photoinduced charge separation. This pair is referred to as P700, where 700 refers to the wavelength at which the chlorophyll molecules absorb light maximally. The P700 lies in the center of the protein. Once photoinduced charge separation has been initiated, the electron travels down a pathway through a chlorophyll a molecule situated directly above the P700, through a quinone molecule situated directly above that, through three 4Fe-4S clusters, and finally to an interchangeable ferredoxin complex. Ferredoxin is a soluble protein containing a 2Fe-2S cluster coordinated by four cysteine residues. The positive charge on the high-energy P700⁺ is neutralized by the transfer of an electron from plastocyanin, which is in turn re-reduced by the cytochrome b6f complex as QH2 is oxidized back to Q. Thus the overall reaction catalyzed by Photosystem I is:

Pc(Cu⁺) + Fd[ox] + light → Pc(Cu²⁺) + Fd[red]

The cooperation between Photosystems I and II creates an electron and proton flow from H2O to NADP+, producing NADPH needed for glucose synthesis. This pathway is called the 'Z-scheme' because the redox diagram from H2O to NADP+ via P680 and P700 resembles the letter Z.

Ecophysiology

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Ecophysiology

Ecophysiology (from Greek οἶκος, oikos, "house(hold)"; φύσις, physis, "nature, origin"; and -λογία, -logia), environmental physiology or physiological ecology is a biological discipline that studies the response of an organism's physiology to environmental conditions. It is closely related to comparative physiology and evolutionary physiology. Ernst Haeckel's coinage bionomy is sometimes employed as a synonym.

Plants

Plant ecophysiology is concerned largely with two topics: mechanisms (how plants sense and respond to environmental change) and scaling or integration (how the responses to highly variable conditions—for example, gradients from full sunlight to 95% shade within tree canopies—are coordinated with one another), and how their collective effect on plant growth and gas exchange can be understood on this basis.

In many cases, animals are able to escape unfavourable and changing environmental factors such as heat, cold, drought or floods, while plants are unable to move away and therefore must endure the adverse conditions or perish (animals go places, plants grow places). Plants are therefore phenotypically plastic and have an impressive array of genes that aid in acclimating to changing conditions. It is hypothesized that this large number of genes can be partly explained by plant species' need to live in a wider range of conditions.

Light

Light is the food of plants, i.e. the form of energy that plants use to build themselves and reproduce. The organs that harvest light in plants are leaves, and the process through which light is converted into biomass is photosynthesis. The response of photosynthesis to light intensity is described by the light response curve of net photosynthesis (PI curve), whose shape is typically modelled as a non-rectangular hyperbola. Three quantities of the light response curve are particularly useful in characterising a plant's response to light intensity. The inclined asymptote has a positive slope representing the efficiency of light use, called the quantum efficiency; the x-intercept is the light intensity at which biochemical assimilation (gross assimilation) balances leaf respiration so that the net CO2 exchange of the leaf is zero, called the light compensation point; and the horizontal asymptote represents the maximum assimilation rate. Sometimes, after the maximum is reached, assimilation declines due to processes collectively known as photoinhibition.
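The non-rectangular hyperbola can be sketched in a few lines of code. The parameter values below (quantum efficiency, maximum assimilation rate, curvature, dark respiration) are illustrative assumptions, not measurements from the text.

```python
import math

def net_assimilation(I, phi=0.05, A_max=20.0, theta=0.7, R_d=1.0):
    """Net CO2 assimilation (umol m-2 s-1) under the non-rectangular
    hyperbola model. I is irradiance (umol photons m-2 s-1); phi is the
    quantum efficiency (initial slope), A_max the light-saturated gross
    rate, theta the curvature, and R_d dark respiration. All parameter
    values are illustrative assumptions."""
    gross = (phi * I + A_max
             - math.sqrt((phi * I + A_max) ** 2
                         - 4.0 * theta * phi * I * A_max)) / (2.0 * theta)
    return gross - R_d

# The three descriptors from the text: the initial slope approximates phi
# (quantum efficiency), net assimilation crosses zero near the light
# compensation point (about R_d / phi here), and the curve saturates
# toward the horizontal asymptote A_max - R_d.
for irradiance in (0, 20, 100, 500, 2000):
    print(irradiance, round(net_assimilation(irradiance), 2))
```

In darkness the model returns −R_d (respiration only), and with these assumed parameters the light compensation point falls near 20 µmol photons m⁻² s⁻¹.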

As with most abiotic factors, light intensity (irradiance) can be both suboptimal and excessive. Suboptimal light (shade) typically occurs at the base of a plant canopy or in an understory environment. Shade tolerant plants have a range of adaptations to help them survive the altered quantity and quality of light typical of shade environments.

Excess light occurs at the top of canopies and on open ground when cloud cover is low and the sun's zenith angle is low; this typically occurs in the tropics and at high altitudes. Excess light incident on a leaf can result in photoinhibition and photodestruction. Plants adapted to high-light environments have a range of adaptations to avoid or dissipate the excess light energy, as well as mechanisms that reduce the amount of injury caused.

Light intensity is also an important component in determining the temperature of plant organs (energy budget).

Temperature

In response to extremes of temperature, plants can produce various proteins. These protect them from the damaging effects of ice formation and falling rates of enzyme catalysis at low temperatures, and from enzyme denaturation and increased photorespiration at high temperatures. As temperatures fall, production of antifreeze proteins and dehydrins increases. As temperatures rise, production of heat shock proteins increases. Metabolic imbalances associated with temperature extremes result in the build-up of reactive oxygen species, which can be countered by antioxidant systems. Cell membranes are also affected by changes in temperature, which can cause the membrane to lose its fluid properties and become a gel in cold conditions, or to become leaky in hot conditions. This can affect the movement of compounds across the membrane. To prevent these changes, plants can change the composition of their membranes. In cold conditions, more unsaturated fatty acids are placed in the membrane, and in hot conditions, more saturated fatty acids are inserted.

Infrared image showing the importance of transpiration in keeping leaves cool.

Plants can avoid overheating by minimising the amount of sunlight absorbed and by enhancing the cooling effects of wind and transpiration. Plants can reduce light absorption using reflective leaf hairs, scales, and waxes. These features are so common in warm dry regions that these habitats can be seen to form a 'silvery landscape' as the light scatters off the canopies. Some species, such as Macroptilium purpureum, can move their leaves throughout the day so that they are always orientated to avoid the sun (paraheliotropism). Knowledge of these mechanisms has been key to breeding for heat stress tolerance in agricultural plants.

Plants can avoid the full impact of low temperature by altering their microclimate. For example, Raoulia plants found in the uplands of New Zealand are said to resemble 'vegetable sheep' as they form tight cushion-like clumps to insulate the most vulnerable plant parts and shield them from cooling winds. The same principle has been applied in agriculture by using plastic mulch to insulate the growing points of crops in cool climates in order to boost plant growth.

Water

Too much or too little water can damage plants. If there is too little water then tissues will dehydrate and the plant may die. If the soil becomes waterlogged then the soil will become anoxic (low in oxygen), which can kill the roots of the plant.

The ability of plants to access water depends on the structure of their roots and on the water potential of the root cells. When soil water content is low, plants can alter their water potential to maintain a flow of water into the roots and up to the leaves (Soil plant atmosphere continuum). This remarkable mechanism allows plants to lift water as high as 120 m by harnessing the gradient created by transpiration from the leaves.
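A back-of-envelope calculation makes the 120 m figure concrete: merely holding a static water column of that height against gravity requires a pressure difference of about 1.2 MPa, before frictional losses in the xylem are counted. This is illustrative arithmetic, not a value from the article.

```python
# Pressure needed to hold a 120 m water column against gravity (rho * g * h).
rho = 1000.0    # density of water, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
height = 120.0  # tallest-tree scale from the text, m

pressure_pa = rho * g * height
print(f"{pressure_pa / 1e6:.2f} MPa")  # about 1.18 MPa
```

The transpiration-driven water potential gradient from leaf to soil must therefore exceed this figure, which is roughly twelve atmospheres.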

In very dry soil, plants close their stomata to reduce transpiration and prevent water loss. The closing of the stomata is often mediated by chemical signals from the root (i.e., abscisic acid). In irrigated fields, the fact that plants close their stomata in response to drying of the roots can be exploited to 'trick' plants into using less water without reducing yields (see partial rootzone drying). The use of this technique was largely developed by Dr Peter Dry and colleagues in Australia.

If drought continues, the plant tissues will dehydrate, resulting in a loss of turgor pressure that is visible as wilting. As well as closing their stomata, most plants can also respond to drought by altering their water potential (osmotic adjustment) and increasing root growth. Plants that are adapted to dry environments (Xerophytes) have a range of more specialized mechanisms to maintain water and/or protect tissues when desiccation occurs.

Waterlogging reduces the supply of oxygen to the roots and can kill a plant within days. Plants cannot avoid waterlogging, but many species overcome the lack of oxygen in the soil by transporting oxygen to the root from tissues that are not submerged. Species that are tolerant of waterlogging develop specialised roots near the soil surface and aerenchyma to allow the diffusion of oxygen from the shoot to the root. Roots that are not killed outright may also switch to less oxygen-hungry forms of cellular respiration. Species that are frequently submerged have evolved more elaborate mechanisms that maintain root oxygen levels, such as the aerial roots seen in mangrove forests.

However, for many terminally overwatered houseplants, the initial symptoms of waterlogging can resemble those due to drought. This is particularly true for flood-sensitive plants that show drooping of their leaves due to epinasty (rather than wilting).

CO2 concentration

CO2 is vital for plant growth, as it is the substrate for photosynthesis. Plants take in CO2 through stomatal pores on their leaves. At the same time as CO2 enters the stomata, moisture escapes. This trade-off between CO2 gain and water loss is central to plant productivity. The trade-off is all the more critical as Rubisco, the enzyme used to capture CO2, is efficient only when there is a high concentration of CO2 in the leaf. Some plants overcome this difficulty by concentrating CO2 within their leaves using C4 carbon fixation or Crassulacean acid metabolism. However, most species use C3 carbon fixation and must open their stomata to take in CO2 whenever photosynthesis is taking place.

Plant Productivity in a Warming World

The concentration of CO2 in the atmosphere is rising due to deforestation and the combustion of fossil fuels. This would be expected to increase the efficiency of photosynthesis and possibly increase the overall rate of plant growth. This possibility has attracted considerable interest in recent years, as an increased rate of plant growth could absorb some of the excess CO2 and reduce the rate of global warming. Extensive experiments growing plants under elevated CO2 using Free-Air Concentration Enrichment have shown that photosynthetic efficiency does indeed increase. Plant growth rates also increase, by an average of 17% for above-ground tissue and 30% for below-ground tissue. However, detrimental impacts of global warming, such as increased instances of heat and drought stress, mean that the overall effect is likely to be a reduction in plant productivity. Reduced plant productivity would be expected to accelerate the rate of global warming. Overall, these observations point to the importance of avoiding further increases in atmospheric CO2 rather than risking runaway climate change.

Wind

Wind has three very different effects on plants.

  • It affects the exchanges of mass (water evaporation, CO2) and of energy (heat) between the plant and the atmosphere by renewing the air at the contact with the leaves (convection).
  • It is sensed by the plant as a signal that drives a wind-acclimation syndrome known as thigmomorphogenesis, leading to modified growth and development and eventually to wind hardening.
  • Its drag force can damage the plant (leaf abrasion, breakage of branches and stems, windthrow and toppling in trees, and lodging in crops).

Exchange of mass and energy

Wind influences the way leaves regulate moisture, heat, and carbon dioxide. When no wind is present, a layer of still air builds up around each leaf. This is known as the boundary layer and in effect insulates the leaf from the environment, providing an atmosphere rich in moisture and less prone to convective heating or cooling. As wind speed increases, the leaf environment becomes more closely linked to the surrounding environment. It may become difficult for the plant to retain moisture as it is exposed to dry air. On the other hand, a moderately high wind allows the plant to cool its leaves more easily when exposed to full sunlight. Plants are not entirely passive in their interaction with wind. Plants can make their leaves less vulnerable to changes in wind speed, by coating their leaves in fine hairs (trichomes) to break up the airflow and increase the boundary layer. In fact, leaf and canopy dimensions are often finely controlled to manipulate the boundary layer depending on the prevailing environmental conditions.
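The thinning of the boundary layer with wind speed can be illustrated with a flat-plate scaling commonly used in plant physiology, in which mean boundary-layer thickness grows with the square root of leaf dimension divided by wind speed. The 4.0 coefficient and the 5 cm leaf size below are illustrative assumptions.

```python
import math

def boundary_layer_mm(leaf_dim_m, wind_m_s):
    """Approximate mean leaf boundary-layer thickness in millimetres,
    using the flat-plate scaling delta ~ 4.0 * sqrt(l / v). The 4.0
    coefficient is an empirical value; treat the results as illustrative."""
    return 4.0 * math.sqrt(leaf_dim_m / wind_m_s)

# A 5 cm leaf: near-still air versus a stiff breeze.
for wind in (0.1, 1.0, 5.0):
    print(f"{wind:4.1f} m/s -> {boundary_layer_mm(0.05, wind):.2f} mm")
```

Going from near-still air to a 5 m/s breeze thins the layer several-fold, which is why wind couples the leaf so much more tightly to ambient humidity and temperature, and why trichomes that thicken the boundary layer reduce that coupling.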

Acclimation

Plants can sense the wind through the deformation of their tissues. This signal inhibits the elongation and stimulates the radial expansion of their shoots, while increasing the development of their root system. This syndrome of responses, known as thigmomorphogenesis, results in shorter, stockier plants with strengthened stems, as well as improved anchorage. It was once believed that this occurred mostly in very windy areas, but it has since been found to happen even in areas with moderate winds, so that wind-induced signals are now recognized as a major ecological factor.

Trees have a particularly well-developed capacity to reinforce their trunks when exposed to wind. From the practical side, this realisation prompted arboriculturalists in the UK in the 1960s to move away from the practice of staking young amenity trees to offer artificial support.

Wind damage

Wind can damage most of the organs of a plant. Leaf abrasion (due to the rubbing of leaves and branches or to the effect of airborne particles such as sand) and leaf or branch breakage are rather common phenomena that plants have to accommodate. In the more extreme cases, plants can be mortally damaged or uprooted by wind. This has been a major selective pressure acting on terrestrial plants. Today, it is one of the major threats to agriculture and forestry, even in temperate zones. It is worse for agriculture in hurricane-prone regions, such as the banana-growing Windward Islands in the Caribbean.

When this type of disturbance occurs in natural systems, the only solution is to ensure that there is an adequate stock of seeds or seedlings to quickly take the place of the mature plants that have been lost, although in many cases a successional stage will be needed before the ecosystem can be restored to its former state.

Animals

Humans

The environment can have major influences on human physiology. Environmental effects on human physiology are numerous; one of the most carefully studied is the alteration of thermoregulation in the body due to outside stresses. This is necessary because, for enzymes to function, blood to flow, and various body organs to operate, temperature must remain at consistent, balanced levels.

Thermoregulation

To achieve this, the body alters three main mechanisms to maintain a constant, normal body temperature.

The hypothalamus plays an important role in thermoregulation. It connects to thermal receptors in the dermis and detects changes in the temperature of the surrounding blood to decide whether to stimulate internal heat production or evaporative cooling.

There are two main types of stresses that can be experienced due to extreme environmental temperatures: heat stress and cold stress.

Heat stress is physiologically combated in four ways: radiation, conduction, convection, and evaporation. Cold stress is physiologically combated by shivering, accumulation of body fat, circulatory adaptations (that provide an efficient transfer of heat to the epidermis), and increased blood flow to the extremities.

There is one part of the body fully equipped to deal with cold stress. The respiratory system protects itself against damage by warming the incoming air to 80-90 degrees Fahrenheit before it reaches the bronchi. This means that not even the most frigid of temperatures can damage the respiratory tract.

In both types of temperature-related stress, it is important to remain well-hydrated. Hydration reduces cardiovascular strain, enhances the ability of energy processes to occur, and reduces feelings of exhaustion.

Altitude

Extreme temperatures are not the only obstacles that humans face. High altitudes also pose serious physiological challenges for the body. Some of these effects are reduced arterial oxygen partial pressure, the rebalancing of the acid-base content in body fluids, increased hemoglobin, increased RBC synthesis, enhanced circulation, and increased levels of the glycolysis byproduct 2,3-diphosphoglycerate, which promotes off-loading of O2 by hemoglobin in hypoxic tissues.

Environmental factors can play a huge role in the human body's fight for homeostasis. However, humans have found ways to adapt, both physiologically and tangibly.

Scientists

George A. Bartholomew (1919–2006) was a founder of animal physiological ecology. He served on the faculty at UCLA from 1947 to 1989, and almost 1,200 individuals can trace their academic lineages to him. Knut Schmidt-Nielsen (1915–2007) was also an important contributor to this specific scientific field as well as comparative physiology.

Hermann Rahn (1912–1990) was an early leader in the field of environmental physiology. Starting out in the field of zoology with a Ph.D. from University of Rochester (1933), Rahn began teaching physiology at the University of Rochester in 1941. It is there that he partnered with Wallace O. Fenn to publish A Graphical Analysis of the Respiratory Gas Exchange in 1955. This paper included the landmark O2-CO2 diagram, which formed the basis for much of Rahn's future work. Rahn's research into applications of this diagram led to the development of aerospace medicine and advancements in hyperbaric breathing and high-altitude respiration. Rahn later joined the University at Buffalo in 1956 as the Lawrence D. Bell Professor and Chairman of the Department of Physiology. As Chairman, Rahn surrounded himself with outstanding faculty and made the University an international research center in environmental physiology.
