
Tuesday, January 26, 2016

Solar cell


From Wikipedia, the free encyclopedia


A conventional crystalline silicon solar cell. Electrical contacts made from busbars (the larger strips) and fingers (the smaller ones) are printed on the silicon wafer.

A solar cell, or photovoltaic cell, is an electrical device that converts the energy of light directly into electricity by the photovoltaic effect, which is a physical and chemical phenomenon.[1] It is a form of photoelectric cell, defined as a device whose electrical characteristics, such as current, voltage, or resistance, vary when exposed to light. Solar cells are the building blocks of photovoltaic modules, otherwise known as solar panels.

Solar cells are described as being photovoltaic irrespective of whether the source is sunlight or artificial light. They are used as photodetectors (for example, infrared detectors), detecting light or other electromagnetic radiation near the visible range, or measuring light intensity.
The operation of a photovoltaic (PV) cell requires three basic attributes:
  • The absorption of light, generating either electron-hole pairs or excitons.
  • The separation of charge carriers of opposite types.
  • The separate extraction of those carriers to an external circuit.
In contrast, a solar thermal collector supplies heat by absorbing sunlight, for the purpose of either direct heating or indirect electrical power generation from heat. A "photoelectrolytic cell" (photoelectrochemical cell), on the other hand, refers either to a type of photovoltaic cell (like that developed by Edmond Becquerel and modern dye-sensitized solar cells), or to a device that splits water directly into hydrogen and oxygen using only solar illumination.

Applications


From a solar cell to a PV system. Diagram of the possible components of a photovoltaic system

Assemblies of solar cells are used to make solar modules which generate electrical power from sunlight, as distinguished from a "solar thermal module" or "solar hot water panel". A solar array is an assembly of such modules that generates electrical power on a larger scale.

Cells, modules, panels and systems

Multiple solar cells in an integrated group, all oriented in one plane, constitute a solar photovoltaic panel or solar photovoltaic module. Photovoltaic modules often have a sheet of glass on the sun-facing side, allowing light to pass while protecting the semiconductor wafers. Solar cells are usually connected in series in modules, creating an additive voltage. Connecting cells in parallel yields a higher current; however, problems such as shadow effects can shut down the weaker (less illuminated) parallel string (a number of series connected cells) causing substantial power loss and possible damage because of the reverse bias applied to the shadowed cells by their illuminated partners. Strings of series cells are usually handled independently and not connected in parallel, though (as of 2014) individual power boxes are often supplied for each module, and are connected in parallel. Although modules can be interconnected to create an array with the desired peak DC voltage and loading current capacity, using independent MPPTs (maximum power point trackers) is preferable. Otherwise, shunt diodes can reduce shadowing power loss in arrays with series/parallel connected cells.[citation needed]
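
To make the series/parallel arithmetic above concrete, here is a minimal sketch (in Python, with illustrative cell values that are assumptions, not figures from this article) of how open-circuit voltage adds across series-connected cells while short-circuit current adds across parallel strings.

    # Sketch: how series and parallel connections combine cell voltage and current.
    # The cell parameters below are illustrative assumptions, not values from the article.
    CELL_VOC = 0.6   # open-circuit voltage of one cell, volts (assumed, typical c-Si)
    CELL_ISC = 8.0   # short-circuit current of one cell, amps (assumed)

    def module_ratings(cells_in_series, strings_in_parallel):
        """Series connection adds voltages; parallel connection adds currents."""
        return cells_in_series * CELL_VOC, strings_in_parallel * CELL_ISC

    voc, isc = module_ratings(cells_in_series=60, strings_in_parallel=1)
    print(f"60 cells in series: ~{voc:.1f} V open-circuit, ~{isc:.1f} A short-circuit")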

Typical PV system prices in 2013 in selected countries (USD)
  USD/W           Australia   China   France   Germany   Italy   Japan   United Kingdom   United States
  Residential        1.8       1.5      4.1      2.4      2.8     4.2         2.8              4.9
  Commercial         1.7       1.4      2.7      1.8      1.9     3.6         2.4              4.5
  Utility-scale      2.0       1.4      2.2      1.4      1.5     2.9         1.9              3.3
Source: IEA – Technology Roadmap: Solar Photovoltaic Energy report, 2014 edition[2]:15
Note: DOE – Photovoltaic System Pricing Trends reports lower prices for the U.S.[3]

History

The photovoltaic effect was experimentally demonstrated first by French physicist Edmond Becquerel. In 1839, at age 19, he built the world's first photovoltaic cell in his father's laboratory. Willoughby Smith first described the "Effect of Light on Selenium during the passage of an Electric Current" in a 20 February 1873 issue of Nature. In 1883 Charles Fritts built the first solid state photovoltaic cell by coating the semiconductor selenium with a thin layer of gold to form the junctions; the device was only around 1% efficient.
In 1888 Russian physicist Aleksandr Stoletov built the first cell based on the outer photoelectric effect discovered by Heinrich Hertz in 1887.[4]

In 1905 Albert Einstein proposed a new quantum theory of light and explained the photoelectric effect in a landmark paper, for which he received the Nobel Prize in Physics in 1921.[5]

Vadim Lashkaryov discovered p–n junctions in Cu2O and silver sulphide protocells in 1941.[6]
Russell Ohl patented the modern junction semiconductor solar cell in 1946[7] while working on the series of advances that would lead to the transistor.

The first practical photovoltaic cell was publicly demonstrated on 25 April 1954 at Bell Laboratories.[8] The inventors were Daryl Chapin, Calvin Souther Fuller and Gerald Pearson.[9]

Solar cells gained prominence with their incorporation onto the 1958 Vanguard I satellite.

Improvements were gradual over the next two decades. However, this success was also the reason that costs remained high, because space users were willing to pay for the best possible cells, leaving no reason to invest in lower-cost, less-efficient solutions. The price was determined largely by the semiconductor industry; their move to integrated circuits in the 1960s led to the availability of larger boules at lower relative prices. As their price fell, the price of the resulting cells did as well. These effects lowered 1971 cell costs to some $100 per watt.[10]

Space applications

Solar cells were first used in a prominent application when they were proposed and flown on the Vanguard satellite in 1958, as an alternative to the primary battery power source. By adding cells to the outside of the body, mission time could be extended with no major changes to the spacecraft or its power systems. In 1959 the United States launched Explorer 6, featuring large wing-shaped solar arrays, which became a common feature in satellites. These arrays consisted of 9600 Hoffman solar cells.

By the 1960s, solar cells were (and still are) the main power source for most Earth-orbiting satellites and a number of probes into the solar system, since they offered the best power-to-weight ratio. This success was possible because, for space applications, power system costs could be high: space users had few other power options and were willing to pay for the best possible cells. The space power market drove the development of higher efficiencies in solar cells up until the National Science Foundation "Research Applied to National Needs" program began to push development of solar cells for terrestrial applications.

In the early 1990s the technology used for space solar cells diverged from the silicon technology used for terrestrial panels, with the spacecraft application shifting to gallium arsenide-based III-V semiconductor materials, which then evolved into the modern III-V multijunction photovoltaic cell used on spacecraft.

Price reductions


Dr. Elliot Berman testing various solar arrays manufactured by his company, Solar Power Corporation.

In late 1969 Elliot Berman joined Exxon's task force, which was looking for projects 30 years in the future, and in April 1973 he founded Solar Power Corporation (SPC), at that time a wholly owned subsidiary of Exxon.[11][12][13] The group had concluded that electrical power would be much more expensive by 2000, and felt that this increase in price would make alternative energy sources more attractive. He conducted a market study and concluded that a price of about $20 per watt would create significant demand.[11] The team eliminated the steps of polishing the wafers and coating them with an anti-reflective layer, relying on the rough-sawn wafer surface. The team also replaced the expensive materials and hand wiring used in space applications with a printed circuit board on the back, acrylic plastic on the front, and silicone glue between the two, "potting" the cells.[14] Solar cells could be made using cast-off material from the electronics market. By 1973 they announced a product, and SPC convinced Tideland Signal to use its panels to power navigational buoys, initially for the U.S. Coast Guard.[12]

Research into solar power for terrestrial applications became prominent with the U.S. National Science Foundation's Advanced Solar Energy Research and Development Division within the "Research Applied to National Needs" program, which ran from 1969 to 1977,[15] and funded research on developing solar power for ground electrical power systems. A 1973 conference, the "Cherry Hill Conference", set forth the technology goals required and outlined an ambitious project for achieving them, kicking off an applied research program that would be ongoing for several decades.[16] The program was eventually taken over by the Energy Research and Development Administration (ERDA),[17] which was later merged into the U.S. Department of Energy.

Following the 1973 oil crisis oil companies used their higher profits to start (or buy) solar firms, and were for decades the largest producers. Exxon, ARCO, Shell, Amoco (later purchased by BP) and Mobil all had major solar divisions during the 1970s and 1980s. Technology companies also participated, including General Electric, Motorola, IBM, Tyco and RCA.[18]

Declining costs and exponential growth

Price per watt history for conventional (c-Si) solar cells since 1977
Swanson's law – the learning curve of solar PV
Growth of photovoltaics – Worldwide total installed PV capacity

Swanson's law is an observation similar to Moore's Law that states that solar cell prices fall 20% for every doubling of industry capacity. It was featured in an article in the British weekly newspaper The Economist.[19]
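
A minimal numeric sketch of Swanson's law as stated above: price per watt falls about 20% for each doubling of cumulative industry capacity. The starting price and capacity figures below are illustrative assumptions.

    import math

    # Swanson's law: ~20% price reduction per doubling of cumulative capacity.
    LEARNING_RATE = 0.20

    def price_after_growth(initial_price, initial_capacity, final_capacity):
        """Price per watt after capacity grows from initial_capacity to final_capacity."""
        doublings = math.log2(final_capacity / initial_capacity)
        return initial_price * (1 - LEARNING_RATE) ** doublings

    # Illustrative: an 8-fold capacity increase is three doublings, so the price
    # falls to 0.8**3, or roughly 51% of its starting value.
    print(price_after_growth(initial_price=1.00, initial_capacity=10.0, final_capacity=80.0))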

Further improvements reduced production cost to under $1 per watt, with wholesale costs well under $2. Balance of system costs were then higher than the panels. Large commercial arrays could be built, as of 2010, at below $3.40 a watt, fully commissioned.[20][21]

As the semiconductor industry moved to ever-larger boules, older equipment became inexpensive. Cell sizes grew as equipment became available on the surplus market; ARCO Solar's original panels used cells 2 to 4 inches (50 to 100 mm) in diameter. Panels in the 1990s and early 2000s generally used 125 mm wafers; since 2008 almost all new panels use 150 mm cells. The widespread introduction of flat screen televisions in the late 1990s and early 2000s led to the wide availability of large, high-quality glass sheets to cover the panels.

During the 1990s, polysilicon ("poly") cells became increasingly popular. These cells offer less efficiency than their monosilicon ("mono") counterparts, but they are grown in large vats that reduce cost. By the mid-2000s, poly was dominant in the low-cost panel market, but more recently mono returned to widespread use.

Manufacturers of wafer-based cells responded to high silicon prices in 2004–2008 with rapid reductions in silicon consumption. In 2008, according to Jef Poortmans, director of IMEC's organic and solar department, cells used 8–9 grams (0.28–0.32 oz) of silicon per watt of power generation, with wafer thicknesses in the neighborhood of 200 microns.

First Solar is the largest thin film manufacturer in the world, using a CdTe cell sandwiched between two layers of glass. Crystalline silicon panels dominate worldwide markets and are mostly manufactured in China and Taiwan. By late 2011, a drop in European demand due to budgetary turmoil had pushed prices for crystalline solar modules down to about $1.09[21] per watt, sharply lower than in 2010. Prices continued to fall in 2012, reaching $0.62/watt by Q4 2012.[22]

Global installed PV capacity reached at least 177 gigawatts in 2014, enough to supply 1 percent of the world's total electricity consumption. Solar PV is growing fastest in Asia, with China and Japan currently accounting for half of worldwide deployment.[23]

Subsidies and grid parity

The price of solar panels fell steadily for 40 years, interrupted in 2004 when high subsidies in Germany drastically increased demand there and greatly increased the price of purified silicon (which is used in computer chips as well as solar panels). The recession of 2008 and the onset of Chinese manufacturing caused prices to resume their decline. In the four years after January 2008 prices for solar modules in Germany dropped from €3 to €1 per peak watt. During that same time production capacity surged with an annual growth of more than 50%. China increased market share from 8% in 2008 to over 55% in the last quarter of 2010.[28] In December 2012 the price of Chinese solar panels had dropped to $0.60/Wp (crystalline modules).[29]

Theory


Working mechanism of a solar cell

The solar cell works in several steps:
  • Photons in sunlight hit the solar panel and are absorbed by semiconducting materials, such as silicon.
  • Electrons are excited from their current molecular/atomic orbital. Once excited, an electron can either dissipate the energy as heat and return to its orbital or travel through the cell until it reaches an electrode. Current flows through the material to cancel the potential and this electricity is captured. The chemical bonds of the material are vital for this process to work, and usually silicon is used in two layers, one layer doped with boron, the other with phosphorus. These layers have different chemical electric charges and subsequently both drive and direct the current of electrons.[1]
  • An array of solar cells converts solar energy into a usable amount of direct current (DC) electricity.
  • An inverter can convert the power to alternating current (AC).
The most commonly known solar cell is configured as a large-area p-n junction made from silicon.
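
Since the cell is essentially a large-area p-n junction, its current-voltage behaviour is often approximated by a single-diode model; the sketch below is one such approximation, with illustrative (assumed) values for the photocurrent, saturation current and ideality factor rather than measured data.

    import math

    # Single-diode approximation of a p-n junction cell:
    # I = I_L - I_0 * (exp(V / (n * V_T)) - 1)
    V_THERMAL = 0.02585   # thermal voltage kT/q at ~300 K, in volts
    I_LIGHT = 8.0         # photogenerated current, A (assumed)
    I_SAT = 1e-9          # diode saturation current, A (assumed)
    N_IDEALITY = 1.2      # diode ideality factor (assumed)

    def cell_current(voltage):
        """Terminal current delivered by the cell at a given terminal voltage."""
        return I_LIGHT - I_SAT * (math.exp(voltage / (N_IDEALITY * V_THERMAL)) - 1.0)

    for v in (0.0, 0.3, 0.5, 0.6):
        print(f"V = {v:.2f} V -> I = {cell_current(v):6.2f} A")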

Efficiency


The Shockley–Queisser limit for the theoretical maximum efficiency of a solar cell. Semiconductors with a band gap between 1 and 1.5 eV (corresponding to near-infrared light) have the greatest potential to form an efficient single-junction cell. (The efficiency "limit" shown here can be exceeded by multijunction solar cells.)

Solar cell efficiency may be broken down into reflectance efficiency, thermodynamic efficiency, charge carrier separation efficiency and conductive efficiency. The overall efficiency is the product of these individual metrics.

A solar cell has a voltage-dependent efficiency curve, temperature coefficients, and allowable shadow angles.

Due to the difficulty in measuring these parameters directly, other parameters are substituted: thermodynamic efficiency, quantum efficiency, integrated quantum efficiency, open-circuit voltage (VOC) ratio, and fill factor. Reflectance losses are a portion of quantum efficiency under "external quantum efficiency". Recombination losses make up another portion of quantum efficiency, VOC ratio, and fill factor. Resistive losses are predominantly categorized under fill factor, but also make up minor portions of quantum efficiency and VOC ratio.

The fill factor is the ratio of the actual maximum obtainable power to the product of the open circuit voltage and short circuit current. This is a key parameter in evaluating performance. In 2009, typical commercial solar cells had a fill factor > 0.70. Grade B cells were usually between 0.4 and 0.7.[30] Cells with a high fill factor have a low equivalent series resistance and a high equivalent shunt resistance, so less of the current produced by the cell is dissipated in internal losses.
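
A short sketch of the fill-factor definition above: scan a measured I-V curve for the maximum power point and divide by the product of open-circuit voltage and short-circuit current. The I-V pairs here are hypothetical.

    # Fill factor: ratio of the maximum obtainable power to V_oc * I_sc.
    # The I-V pairs below are hypothetical measurements, for illustration only.
    iv_curve = [  # (voltage in V, current in A)
        (0.00, 8.00), (0.30, 7.95), (0.45, 7.80), (0.50, 7.40),
        (0.55, 6.20), (0.58, 4.00), (0.60, 0.00),
    ]

    v_oc = max(v for v, i in iv_curve if i == 0.0)   # open-circuit voltage
    i_sc = max(i for v, i in iv_curve if v == 0.0)   # short-circuit current
    p_max = max(v * i for v, i in iv_curve)          # maximum power point

    fill_factor = p_max / (v_oc * i_sc)
    print(f"P_max = {p_max:.2f} W, FF = {fill_factor:.2f}")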

Single p–n junction crystalline silicon devices are now approaching the theoretical limiting power efficiency of 33.7%, noted as the Shockley–Queisser limit in 1961. In the extreme, with an infinite number of layers, the corresponding limit is 86% using concentrated sunlight.[31]

In December 2014, a solar cell developed in a French-German collaboration achieved a new laboratory record of 46 percent efficiency.[32]

In 2014, three companies broke the record of 25.6% for a silicon solar cell. Panasonic's was the most efficient. The company moved the front contacts to the rear of the panel, eliminating shaded areas. In addition they applied thin silicon films to the (high quality silicon) wafer's front and back to eliminate defects at or near the wafer surface.[33]

In September 2015, the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE) announced the achievement of an efficiency above 20% for epitaxial wafer cells. The work on optimizing the atmospheric-pressure chemical vapor deposition (APCVD) in-line production chain was done in collaboration with NexWafe GmbH, a company spun off from Fraunhofer ISE to commercialize production.[34]

For triple-junction thin-film solar cells, the world record is 13.6%, set in June 2015.[35]

Reported timeline of solar cell energy conversion efficiencies (National Renewable Energy Laboratory)

Materials


Global market-share in terms of annual production by PV technology since 1990

Solar cells are typically named after the semiconducting material they are made of. These materials must have certain characteristics in order to absorb sunlight. Some cells are designed to handle sunlight that reaches the Earth's surface, while others are optimized for use in space. Solar cells can be made of a single layer of light-absorbing material (single-junction) or use multiple physical configurations (multi-junctions) to take advantage of various absorption and charge separation mechanisms.

Solar cells can be classified into first, second and third generation cells. The first generation cells (also called conventional, traditional or wafer-based cells) are made of crystalline silicon, the commercially predominant PV technology, which includes materials such as polysilicon and monocrystalline silicon. Second generation cells are thin film solar cells that include amorphous silicon, CdTe and CIGS cells, and are commercially significant in utility-scale photovoltaic power stations, building integrated photovoltaics or small stand-alone power systems. The third generation of solar cells includes a number of thin-film technologies often described as emerging photovoltaics; most of them have not yet been commercially applied and are still in the research or development phase. Many use organic materials, often organometallic compounds as well as inorganic substances. Although their efficiencies have been low and the stability of the absorber material has often been too short for commercial applications, a great deal of research is invested in these technologies because they promise to achieve the goal of producing low-cost, high-efficiency solar cells.

Crystalline silicon

By far, the most prevalent bulk material for solar cells is crystalline silicon (c-Si), also known as "solar grade silicon". Bulk silicon is separated into multiple categories according to crystallinity and crystal size in the resulting ingot, ribbon or wafer. These cells are entirely based around the concept of a p-n junction. Solar cells made of c-Si use wafers between 160 and 240 micrometers thick.

Monocrystalline silicon

Monocrystalline silicon (mono-Si) solar cells are more efficient and more expensive than most other types of cells. The corners of the cells look clipped, like an octagon, because the wafer material is cut from cylindrical ingots that are typically grown by the Czochralski process. Solar panels using mono-Si cells display a distinctive pattern of small white diamonds.

Epitaxial silicon

Epitaxial wafers can be grown on a monocrystalline silicon "seed" wafer by atmospheric-pressure CVD in a high-throughput inline process, and then detached as self-supporting wafers of some standard thickness (e.g., 250 µm) that can be manipulated by hand, and directly substituted for wafer cells cut from monocrystalline silicon ingots. Solar cells made with this technique can have efficiencies approaching those of wafer-cut cells, but at appreciably lower cost.[36]

Polycrystalline silicon 

Polycrystalline silicon, or multicrystalline silicon (multi-Si) cells are made from cast square ingots—large blocks of molten silicon carefully cooled and solidified. They consist of small crystals giving the material its typical metal flake effect. Polysilicon cells are the most common type used in photovoltaics and are less expensive, but also less efficient, than those made from monocrystalline silicon.

Ribbon silicon 

Ribbon silicon is a type of polycrystalline silicon—it is formed by drawing flat thin films from molten silicon and results in a polycrystalline structure. These cells are cheaper to make than multi-Si, due to a great reduction in silicon waste, as this approach does not require sawing from ingots.[37] However, they are also less efficient.

Mono-like-multi silicon (MLM) 

This form was developed in the 2000s and introduced commercially around 2009. Also called cast-mono, this design uses polycrystalline casting chambers with small "seeds" of mono material. The result is a bulk mono-like material that is polycrystalline around the outsides. When sliced for processing, the inner sections are high-efficiency mono-like cells (but square instead of "clipped"), while the outer edges are sold as conventional poly. This production method results in mono-like cells at poly-like prices.[38]

Thin film

Thin-film technologies reduce the amount of active material in a cell. Most designs sandwich active material between two panes of glass. Since silicon solar panels only use one pane of glass, thin film panels are approximately twice as heavy as crystalline silicon panels, although they have a smaller ecological impact (determined from life cycle analysis).[39] The majority of film panels have 2-3 percentage points lower conversion efficiencies than crystalline silicon.[40] Cadmium telluride (CdTe), copper indium gallium selenide (CIGS) and amorphous silicon (a-Si) are three thin-film technologies often used for outdoor applications. As of December 2013, CdTe cost per installed watt was $0.59 as reported by First Solar. CIGS technology laboratory demonstrations reached 20.4% conversion efficiency as of December 2013. The lab efficiency of GaAs thin film technology topped 28%.[citation needed] The quantum efficiency of thin film solar cells is also lower due to a reduced number of collected charge carriers per incident photon. Most recently, CZTS solar cells have emerged as a less-toxic thin-film technology, achieving ~12% efficiency.[41] Thin-film deployment continues to grow because the technology is silent, renewable, and harnesses solar energy, the most abundant energy source on Earth.[42]

Cadmium telluride

Cadmium telluride is the only thin film material so far to rival crystalline silicon in cost/watt. However cadmium is highly toxic and tellurium (anion: "telluride") supplies are limited. The cadmium present in the cells would be toxic if released. However, release is impossible during normal operation of the cells and is unlikely during fires in residential roofs.[43] A square meter of CdTe contains approximately the same amount of Cd as a single C cell nickel-cadmium battery, in a more stable and less soluble form.[43]

Copper indium gallium selenide

Copper indium gallium selenide (CIGS) is a direct band gap material. It has the highest efficiency (~20%) among all commercially significant thin film materials (see CIGS solar cell). Traditional methods of fabrication involve vacuum processes including co-evaporation and sputtering. Recent developments at IBM and Nanosolar attempt to lower the cost by using non-vacuum solution processes.[44]

Silicon thin film 

Silicon thin-film cells are mainly deposited by chemical vapor deposition (typically plasma-enhanced, PE-CVD) from silane gas and hydrogen gas. Depending on the deposition parameters, this can yield amorphous silicon (a-Si or a-Si:H), protocrystalline silicon or nanocrystalline silicon (nc-Si or nc-Si:H), also called microcrystalline silicon.[45]

Amorphous silicon is the most well-developed thin film technology to-date. An amorphous silicon (a-Si) solar cell is made of non-crystalline or microcrystalline silicon. Amorphous silicon has a higher bandgap (1.7 eV) than crystalline silicon (c-Si) (1.1 eV), which means it absorbs the visible part of the solar spectrum more strongly than the higher power density infrared portion of the spectrum. The production of a-Si thin film solar cells uses glass as a substrate and deposits a very thin layer of silicon by plasma-enhanced chemical vapor deposition (PECVD).

Protocrystalline silicon with a low volume fraction of nanocrystalline silicon is optimal for high open circuit voltage.[46] Nc-Si has about the same bandgap as c-Si and nc-Si and a-Si can advantageously be combined in thin layers, creating a layered cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nc-Si.

Gallium arsenide thin film

The semiconductor material gallium arsenide (GaAs) is also used for single-crystalline thin film solar cells. Although GaAs cells are very expensive, they hold the world's record in efficiency for a single-junction solar cell at 28.8%.[47] GaAs is more commonly used in multijunction photovoltaic cells for concentrated photovoltaics (CPV, HCPV) and for solar panels on spacecraft, as the industry favours efficiency over cost for space-based solar power.

Multijunction cells


Dawn's 10 kW triple-junction gallium arsenide solar array at full extension

Multi-junction cells consist of multiple thin films, each essentially a solar cell, grown on top of one another, typically using metalorganic vapour phase epitaxy. Each layer has a different band gap energy to allow it to absorb electromagnetic radiation over a different portion of the spectrum. Multi-junction cells were originally developed for special applications such as satellites and space exploration, but are now used increasingly in terrestrial concentrator photovoltaics (CPV), an emerging technology that uses lenses and curved mirrors to concentrate sunlight onto small but highly efficient multi-junction solar cells. By concentrating sunlight up to a thousand times, high-concentration photovoltaics (HCPV) has the potential to outcompete conventional solar PV in the future.[48]:21,26
Tandem solar cells based on monolithic, series-connected, gallium indium phosphide (GaInP), gallium arsenide (GaAs), and germanium (Ge) p–n junctions are increasing sales, despite cost pressures.[49] Between December 2006 and December 2007, the cost of 4N gallium metal rose from about $350 per kg to $680 per kg. Additionally, germanium metal prices rose substantially, to $1000–1200 per kg that year. Those materials include gallium (4N, 6N and 7N Ga), arsenic (4N, 6N and 7N) and germanium, pyrolytic boron nitride (pBN) crucibles for growing crystals, and boron oxide; these products are critical to the entire substrate manufacturing industry.[citation needed]
A triple-junction cell, for example, may consist of the semiconductors GaAs, Ge, and GaInP2.[50] Triple-junction GaAs solar cells were used as the power source of the Dutch four-time World Solar Challenge winners Nuna in 2003, 2005 and 2007 and by the Dutch solar cars Solutra (2005), Twente One (2007) and 21Revolution (2009).[citation needed] GaAs based multi-junction devices are the most efficient solar cells to date. On 15 October 2012, triple junction metamorphic cells reached a record high of 44%.[51]

Research in solar cells

Perovskite solar cells

Perovskite solar cells are solar cells that include a perovskite-structured material as the active layer. Most commonly, this is a solution-processed hybrid organic-inorganic tin or lead halide based material. Efficiencies have increased from below 10% at their first usage in 2009 to over 20% in 2014, making them a very rapidly advancing technology and a hot topic in the solar cell field.[52] Perovskite solar cells are also forecast to be extremely cheap to scale up, making them a very attractive option for commercialisation.

Liquid inks

In 2014, researchers at the California NanoSystems Institute discovered that using kesterite and perovskite improved electric power conversion efficiency for solar cells.[53]

Upconversion and downconversion

Photon upconversion is the process of using two low-energy (e.g., infrared) photons to produce one higher-energy photon; downconversion is the process of using one high-energy photon (e.g., ultraviolet) to produce two lower-energy photons. Either of these techniques could be used to produce higher efficiency solar cells by allowing solar photons to be more efficiently used. The difficulty, however, is that the conversion efficiency of existing phosphors exhibiting up- or down-conversion is low, and their response is typically narrow-band.

One upconversion technique is to incorporate lanthanide-doped materials (Er3+, Yb3+, Ho3+ or a combination), taking advantage of their luminescence to convert infrared radiation to visible light. The upconversion process occurs when two infrared photons are absorbed by rare-earth ions to generate a (high-energy) absorbable photon. For example, the energy transfer upconversion process (ETU) consists of successive transfer processes between excited ions in the near infrared. The upconverter material could be placed below the solar cell to absorb the infrared light that passes through the silicon. Useful ions are most commonly found in the trivalent state. Er3+ ions have been the most widely used. Er3+ ions absorb solar radiation around 1.54 µm. Two Er3+ ions that have absorbed this radiation can interact with each other through an upconversion process. The excited ion emits light above the Si bandgap that is absorbed by the solar cell and creates an additional electron–hole pair that can generate current. However, the increased efficiency was small. In addition, fluoroindate glasses have low phonon energy and have been proposed as a suitable matrix doped with Ho3+ ions.[54]
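
As a rough check on the energetics described above, the sketch below converts the 1.54 µm wavelength absorbed by Er3+ ions into photon energy and compares two such photons against the ~1.1 eV band gap of crystalline silicon quoted later in this article.

    # Photon energy E = h * c / wavelength, expressed in electronvolts.
    H_PLANCK = 6.626e-34   # J s
    C_LIGHT = 2.998e8      # m/s
    EV = 1.602e-19         # J per eV

    def photon_energy_ev(wavelength_m):
        return H_PLANCK * C_LIGHT / wavelength_m / EV

    ir_photon = photon_energy_ev(1.54e-6)   # Er3+ absorption near 1.54 micrometres
    si_bandgap = 1.1                        # crystalline silicon band gap, eV

    print(f"one 1.54 um photon: {ir_photon:.2f} eV")
    print(f"two combined: {2 * ir_photon:.2f} eV (exceeds the {si_bandgap} eV Si band gap)")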

Light-absorbing dyes

Dye-sensitized solar cells (DSSCs) are made of low-cost materials and do not need elaborate manufacturing equipment, so they can be made in a DIY fashion. In bulk they should be significantly less expensive than older solid-state cell designs. DSSCs can be engineered into flexible sheets, and although their conversion efficiency is less than that of the best thin-film cells, their price/performance ratio may be high enough to allow them to compete with fossil fuel electrical generation.

Typically a ruthenium metalorganic dye (Ru-centered) is used as a monolayer of light-absorbing material. The dye-sensitized solar cell depends on a mesoporous layer of nanoparticulate titanium dioxide to greatly amplify the surface area (200–300 m2/g TiO2, as compared to approximately 10 m2/g of flat single crystal). The photogenerated electrons from the light absorbing dye are passed on to the n-type TiO2 and the holes are absorbed by an electrolyte on the other side of the dye. The circuit is completed by a redox couple in the electrolyte, which can be liquid or solid. This type of cell allows more flexible use of materials and is typically manufactured by screen printing or ultrasonic nozzles, with the potential for lower processing costs than those used for bulk solar cells. However, the dyes in these cells also suffer from degradation under heat and UV light and the cell casing is difficult to seal due to the solvents used in assembly. The first commercial shipment of DSSC solar modules occurred in July 2009 from G24i Innovations.[55]

Quantum dots

Quantum dot solar cells (QDSCs) are based on the Gratzel cell, or dye-sensitized solar cell architecture, but employ low band gap semiconductor nanoparticles, fabricated with crystallite sizes small enough to form quantum dots (such as CdS, CdSe, Sb2S3, PbS, etc.), instead of organic or organometallic dyes as light absorbers. QD's size quantization allows for the band gap to be tuned by simply changing particle size. They also have high extinction coefficients and have shown the possibility of multiple exciton generation.[56]

In a QDSC, a mesoporous layer of titanium dioxide nanoparticles forms the backbone of the cell, much like in a DSSC. This TiO2 layer can then be made photoactive by coating with semiconductor quantum dots using chemical bath deposition, electrophoretic deposition or successive ionic layer adsorption and reaction. The electrical circuit is then completed through the use of a liquid or solid redox couple. The efficiency of QDSCs has increased[57] to over 5% shown for both liquid-junction[58] and solid state cells.[59] In an effort to decrease production costs, the Prashant Kamat research group[60] demonstrated a solar paint made with TiO2 and CdSe that can be applied using a one-step method to any conductive surface with efficiencies over 1%.[61]

Organic/polymer solar cells

Organic solar cells and polymer solar cells are built from thin films (typically 100 nm) of organic semiconductors including polymers, such as polyphenylene vinylene and small-molecule compounds like copper phthalocyanine (a blue or green organic pigment) and carbon fullerenes and fullerene derivatives such as PCBM.

They can be processed from liquid solution, offering the possibility of a simple roll-to-roll printing process, potentially leading to inexpensive, large-scale production. In addition, these cells could be beneficial for some applications where mechanical flexibility and disposability are important. Current cell efficiencies are, however, very low, and practical devices are essentially non-existent.
Energy conversion efficiencies achieved to date using conductive polymers are very low compared to inorganic materials. However, Konarka Power Plastic reached an efficiency of 8.3%[62] and organic tandem cells in 2012 reached 11.1%.[citation needed]

The active region of an organic device consists of two materials, one electron donor and one electron acceptor. When a photon is converted into an electron hole pair, typically in the donor material, the charges tend to remain bound in the form of an exciton, separating when the exciton diffuses to the donor-acceptor interface, unlike most other solar cell types. The short exciton diffusion lengths of most polymer systems tend to limit the efficiency of such devices. Nanostructured interfaces, sometimes in the form of bulk heterojunctions, can improve performance.[63]

In 2011, MIT and Michigan State researchers developed solar cells with a power efficiency close to 2% with a transparency to the human eye greater than 65%, achieved by selectively absorbing the ultraviolet and near-infrared parts of the spectrum with small-molecule compounds.[64][65] Researchers at UCLA more recently developed an analogous polymer solar cell, following the same approach, that is 70% transparent and has a 4% power conversion efficiency.[66][67][68] These lightweight, flexible cells can be produced in bulk at a low cost and could be used to create power generating windows.

In 2013, researchers announced polymer cells with some 3% efficiency. They used block copolymers, self-assembling organic materials that arrange themselves into distinct layers. The research focused on P3HT-b-PFTBT that separates into bands some 16 nanometers wide.[69][70]

Adaptive cells

Adaptive cells change their absorption/reflection characteristics in response to environmental conditions. An adaptive material responds to the intensity and angle of incident light. At the part of the cell where the light is most intense, the cell surface changes from reflective to adaptive, allowing the light to penetrate the cell. The other parts of the cell remain reflective, increasing the retention of the absorbed light within the cell.[71]

In 2014 a system was developed that combined an adaptive surface with a glass substrate that redirects the absorbed light to a light absorber on the edges of the sheet. The system also included an array of fixed lenses/mirrors to concentrate light onto the adaptive surface. As the day continues, the concentrated light moves along the surface of the cell. That surface switches from reflective to adaptive when the light is most concentrated and back to reflective after the light moves along.[71]

Manufacture


Solar cells share some of the same processing and manufacturing techniques as other semiconductor devices. However, the stringent requirements for cleanliness and quality control of semiconductor fabrication are more relaxed for solar cells, lowering costs.

Polycrystalline silicon wafers are made by wire-sawing block-cast silicon ingots into 180 to 350 micrometer wafers. The wafers are usually lightly p-type-doped. A surface diffusion of n-type dopants is performed on the front side of the wafer. This forms a p–n junction a few hundred nanometers below the surface.

Anti-reflection coatings are then typically applied to increase the amount of light coupled into the solar cell. Silicon nitride has gradually replaced titanium dioxide as the preferred material, because of its excellent surface passivation qualities. It prevents carrier recombination at the cell surface. A layer several hundred nanometers thick is applied using PECVD. Some solar cells have textured front surfaces that, like anti-reflection coatings, increase the amount of light reaching the wafer. Such surfaces were first applied to single-crystal silicon, followed by multicrystalline silicon somewhat later.

A full area metal contact is made on the back surface, and a grid-like metal contact made up of fine "fingers" and larger "bus bars" is screen-printed onto the front surface using a silver paste. This is an evolution of the so-called "wet" process for applying electrodes, first described in a US patent filed in 1981 by Bayer AG.[72] The rear contact is formed by screen-printing a metal paste, typically aluminium. Usually this contact covers the entire rear, though some designs employ a grid pattern. The paste is then fired at several hundred degrees Celsius to form metal electrodes in ohmic contact with the silicon. Some companies use an additional electro-plating step to increase efficiency. After the metal contacts are made, the solar cells are interconnected by flat wires or metal ribbons, and assembled into modules or "solar panels". Solar panels have a sheet of tempered glass on the front, and a polymer encapsulation on the back.

Manufacturers and certification

Solar cell production by region[73]

National Renewable Energy Laboratory tests and validates solar technologies. Three reliable groups certify solar equipment: UL and IEEE (both U.S. standards) and IEC.

Solar cells are manufactured in volume in Japan, Germany, China, Taiwan, Malaysia and the United States, whereas Europe, China, the U.S., and Japan have dominated (94% or more as of 2013) in installed systems.[74] Other nations are acquiring significant solar cell production capacity.

Global PV cell/module production increased by 10% in 2012 despite a 9% decline in solar energy investments according to the annual "PV Status Report" released by the European Commission's Joint Research Centre. Between 2009 and 2013 cell production has quadrupled.[74][75][76]

China

Due to heavy government investment, China has become the dominant force in solar cell manufacturing. Chinese companies produced solar cells/modules with a capacity of ~23 GW in 2013 (60% of global production).[74]

Malaysia

In 2014, Malaysia was the world's third largest manufacturer of photovoltaics equipment, behind China and the European Union.[77]

United States

Solar cell production in the U.S. has suffered due to the global financial crisis, but recovered partly due to the falling price of quality silicon.[78][79]

Sunday, January 24, 2016

General Circulation Model


From Wikipedia, the free encyclopedia


This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5).

A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCMs and OGCMs) are key components of global climate models, along with sea ice and land-surface components.

GCMs and global climate models are used for weather forecasting, understanding the climate and forecasting climate change.

Versions designed for decade to century time scale climate applications were originally created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey.[1] These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.

Terminology

The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modelling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically.

History

In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model.[2][3] Following Phillips's work, several groups began working to create GCMs.[4] The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.[1] By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined.[5] In 1996, efforts began to model soil and vegetation types.[6] Later the Hadley Centre for Climate Prediction and Research's HadCM3 model coupled ocean-atmosphere elements.[4] The role of gravity waves was added in the mid-1980s. Gravity waves are required to simulate regional and global scale circulations accurately.[7]

Atmospheric and oceanic models

Atmospheric (AGCMs) and oceanic GCMs (OGCMs) can be coupled to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of submodels such as a sea ice model or a model for evapotranspiration over land, AOGCMs become the basis for a full climate model.[8]

Trends

A recent trend in GCMs is to apply them as components of Earth system models, e.g. by coupling ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon CTM may allow a GCM to better predict anthropogenic changes in carbon dioxide concentrations. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the possible effects of climate change on the ozone hole to be studied.[9]

Climate prediction uncertainties depend on uncertainties in chemical, physical and social models (see IPCC scenarios below).[10] Significant uncertainties and unknowns remain, especially regarding the future course of human population, industry and technology.

Structure

Three-dimensional (more properly four-dimensional) GCMs apply discrete equations for fluid motion and integrate these forward in time. They contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.

A simple general circulation model (SGCM) consists of a dynamic core that relates properties such as temperature to others such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are most attenuated. Such models may be used to study atmospheric processes, but are not suitable for climate projections.
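
As a sketch of the scale-dependent friction mentioned above, the snippet below damps spectral amplitudes at a rate that grows as the fourth power of wavenumber (a hyperdiffusion-style assumption for illustration, not any specific model's formulation), so the highest-wavenumber waves are attenuated most.

    import math

    # Scale-dependent friction: damp each spectral amplitude at a rate that grows
    # with wavenumber (here ~k**4), so the highest-wavenumber atmospheric waves
    # are attenuated most strongly.
    NU = 1.0e-5   # damping coefficient (illustrative, non-dimensional units)
    DT = 1.0      # time step (illustrative)

    def damp(amplitudes, steps):
        """Apply exp(-NU * k**4 * t) damping to amplitudes keyed by wavenumber k."""
        return {k: a * math.exp(-NU * k**4 * steps * DT) for k, a in amplitudes.items()}

    spectrum = {1: 1.0, 5: 1.0, 20: 1.0}   # equal initial amplitudes
    print(damp(spectrum, steps=100))       # k=20 is damped almost to zero, k=1 barely at all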

Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) using imposed sea surface temperatures (SSTs).[11] They may include atmospheric chemistry.
AGCMs consist of a dynamical core which integrates the equations of fluid motion, typically for:
  • surface pressure
  • horizontal components of velocity in layers
  • temperature and water vapor in layers
  • radiation, split into solar/short wave and terrestrial/infra-red/long wave
  • parameterisations for processes (such as convection) that occur on scales too small to be resolved directly
A GCM contains prognostic equations that are a function of time (typically winds, temperature, moisture, and surface pressure) together with diagnostic equations that are evaluated from them for a specific time period. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. Pressure is used to compute the pressure gradient force in the time-dependent equation for the winds.
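
A sketch of that diagnostic step: given a predicted surface pressure and a layer temperature, the hydrostatic relation gives pressure at any height. For simplicity the example assumes an isothermal layer, which is an illustrative assumption rather than what a full GCM would do.

    import math

    # Diagnose pressure at height z from the predicted surface pressure, using the
    # hydrostatic relation dp/dz = -p * g / (R * T) with a constant (assumed) temperature.
    G = 9.81        # gravitational acceleration, m/s^2
    R_DRY = 287.0   # gas constant for dry air, J/(kg K)

    def pressure_at_height(p_surface_hpa, temperature_k, height_m):
        """Isothermal hydrostatic solution: p(z) = p_s * exp(-g * z / (R * T))."""
        return p_surface_hpa * math.exp(-G * height_m / (R_DRY * temperature_k))

    # Example: 1000 hPa at the surface, 250 K layer temperature, 5 km height.
    print(f"{pressure_at_height(1000.0, 250.0, 5000.0):.0f} hPa")   # roughly 505 hPa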

OGCMs model the ocean (with fluxes from the atmosphere imposed) and may contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables.

AOGCMs (e.g. HadCM3, GFDL CM2.X) combine the two submodels. They remove the need to specify fluxes across the interface of the ocean surface. These models are the basis for model predictions of future climate, such as are discussed by the IPCC. AOGCMs internalise as many processes as possible. They have been used to provide predictions at a regional scale. While the simpler models are generally susceptible to analysis and their results are easier to understand, AOGCMs may be nearly as hard to analyse as the climate itself.

Grid

The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude/longitude grid). However, non-rectangular grids (e.g., icosahedral) and grids of variable resolution[12] are more often used.[13] The LMDz model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for the El Niño Southern Oscillation (ENSO). Spectral models generally use a gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: HadCM3, for example, uses 3.75 degrees in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 x 72 for some variables), and has 19 vertical levels. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u, v, T, Q), though a full count would give more (clouds; soil levels). HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 x 0.83 degrees respectively.[14] These resolutions are lower than is typically used for weather forecasting.[15] Ocean resolutions tend to be higher, for example HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal.
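
The "approximately 500,000 basic variables" figure quoted above follows directly from the grid dimensions; the short sketch below reproduces the arithmetic for HadCM3's atmospheric grid.

    # Reproduce the "approximately 500,000 basic variables" figure for HadCM3's atmosphere.
    lon_points = int(360 / 3.75)        # 96 points around each latitude circle
    lat_points = int(180 / 2.5) + 1     # 73 points from pole to pole
    levels = 19                         # vertical levels
    vars_per_point = 4                  # u, v, T, Q

    total = lon_points * lat_points * levels * vars_per_point
    print(lon_points, lat_points, total)   # 96 73 532608 -> roughly 500,000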

For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. Some experiments use geodesic grids[16] and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere.[17]
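
A sketch of why converging gridlines cause trouble: on a latitude-longitude grid the east-west spacing shrinks as the cosine of latitude, so the largest time step allowed by the CFL condition shrinks with it. The wave speed used below is an illustrative assumption.

    import math

    # CFL-limited time step on a latitude-longitude grid: dt <= dx / c, where the
    # east-west spacing dx = R * cos(lat) * dlon shrinks toward the poles.
    EARTH_RADIUS = 6.371e6      # m
    WAVE_SPEED = 300.0          # m/s, fast wave speed (illustrative assumption)
    DLON = math.radians(3.75)   # HadCM3-like longitudinal spacing

    def max_timestep(latitude_deg):
        dx = EARTH_RADIUS * math.cos(math.radians(latitude_deg)) * DLON
        return dx / WAVE_SPEED

    for lat in (0, 60, 85):
        print(f"latitude {lat:2d} deg: largest stable time step ~ {max_timestep(lat):6.0f} s")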

Flux buffering

Some early versions of AOGCMs required an ad hoc process of "flux correction" to achieve a stable climate. This resulted from separately prepared ocean and atmospheric models that each used an implicit flux from the other component different from what that component could produce. Such a model failed to match observations. However, if the fluxes were 'corrected', the factors that led to these unrealistic fluxes might go unrecognised, which could affect model sensitivity. As a result, the vast majority of models used in the current round of IPCC reports do not use flux corrections. The model improvements that now make flux corrections unnecessary include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between atmosphere and ocean submodels. Improved models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate projections.[18]

Convection

Moist convection releases latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be handled via parameters. This has been done since the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used,[19] although a variety of different schemes are now in use.[20][21][22] Clouds are also typically handled with a parameter, for a similar lack of scale. Limited understanding of clouds has limited the success of this strategy, but not due to some inherent shortcoming of the method.[23]

Software

Most models include software to diagnose a wide range of variables for comparison with observations or study of atmospheric processes. An example is the 1.5-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from surface and lowest-model-layer temperatures. Other software is used for creating plots and animations.

Projections

Projected annual mean surface air temperature from 1970-2100, based on SRES emissions scenario A1B, using the NOAA GFDL CM2.1 climate model (credit: NOAA Geophysical Fluid Dynamics Laboratory).[24]

Coupled AOGCMs use transient climate simulations to project/predict climate changes under various scenarios. These can be idealised scenarios (most commonly, atmospheric CO2 increasing at 1%/yr) or based on recent history (usually the "IS92a" or more recently the SRES scenarios). Which scenarios are most realistic remains uncertain.
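
For the idealised 1%/yr scenario mentioned above, compounding at 1% per year doubles atmospheric CO2 in roughly 70 years; the short calculation below checks this.

    import math

    # Compounded 1% per year increase in CO2: doubling time = ln(2) / ln(1.01).
    doubling_years = math.log(2) / math.log(1.01)
    print(f"CO2 doubles after about {doubling_years:.0f} years")   # ~70 years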

The 2001 IPCC Third Assessment Report Figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which atmospheric CO2 increased at 1% per year.[25] Figure 9.5 shows the response of a smaller number of models to more recent trends. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C.

Future scenarios do not include unknown events – for example, volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to greenhouse gas (GHG) forcing in the long term, but large volcanic eruptions, for example, can exert a substantial temporary cooling effect.
Human GHG emissions are a model input, although it is possible to include an economic/technological submodel to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model that reflects vegetation and oceanic processes to calculate such levels.

Emissions scenarios

In the 21st century, changes in global mean temperature are projected to vary across the world
Projected change in annual mean surface air temperature from the late 20th century to the middle 21st century, based on SRES emissions scenario A1B (credit: NOAA Geophysical Fluid Dynamics Laboratory).[24]

For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–99) of 1.8 °C to 4.0 °C.[26] Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of 1.1 to 6.4 °C.[26]

In 2008 a study made climate projections using several emission scenarios.[27] In a scenario where global emissions start to decrease by 2010 and then declined at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible, although less likely.

Another no-reduction scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with a different model, the predicted median warming was 4.1 °C.[28]

Model accuracy


SST errors in HadCM3

North American precipitation from various models.

Temperature predictions from some climate models assuming the SRES A2 emissions scenario.

AOGCMs internalise as many processes as are sufficiently understood. However, they are still under development and significant uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedbacks. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when driven by observed changes in greenhouse gases and aerosols. Agreement improves by including both natural and anthropogenic forcings.[29][30]

Imperfect models may nevertheless produce useful results. GCMs are capable of reproducing the general features of the observed global temperature over the past century.[29]

A debate over how to reconcile climate model predictions that upper-air (tropospheric) warming should be greater than surface warming with observations that appeared to show otherwise[31] was resolved in favour of the models, following data revisions.

Cloud effects are a significant area of uncertainty in climate models. Clouds have competing effects on climate. They cool the surface by reflecting sunlight into space; they warm it by increasing the amount of infrared radiation transmitted from the atmosphere to the surface.[32] In the 2001 IPCC report possible changes in cloud cover were highlighted as a major uncertainty in predicting climate.[33][34]

Climate researchers around the world use climate models to understand the climate system. Thousands of papers have been published about model-based studies. Part of this research is to improve the models.

In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However, the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either possibility indicates that progress is required in order to monitor and predict such changes.[35]

A more complete discussion of climate models is provided in the IPCC's Third Assessment Report.[36]
  • The model mean exhibits good agreement with observations.
  • The individual models often exhibit worse agreement with observations.
  • Many of the non-flux adjusted models suffered from unrealistic climate drift up to about 1 °C/century in global mean surface temperature.
  • The errors in model-mean surface air temperature rarely exceed 1 °C over the oceans and 5 °C over the continents; precipitation and sea level pressure errors are relatively greater but the magnitudes and patterns of these quantities are recognisably similar to observations.
  • Surface air temperature is particularly well simulated, with nearly all models closely matching the observed magnitude of variance and exhibiting a correlation > 0.95 with the observations.
  • Simulated variance of sea level pressure and precipitation is within ±25% of observed.
  • All models have shortcomings in their simulations of the present day climate of the stratosphere, which might limit the accuracy of predictions of future climate change.
    • There is a tendency for the models to show a global mean cold bias at all levels.
    • There is a large scatter in the tropical temperatures.
    • The polar night jets in most models are inclined poleward with height, in noticeable contrast to an equatorward inclination of the observed jet.
    • There is a differing degree of separation in the models between the winter sub-tropical jet and the polar night jet.
  • For nearly all models the r.m.s. error in zonal- and annual-mean surface air temperature is small compared with its natural variability.
    • There are problems in simulating natural seasonal variability.[citation needed]
      • In flux-adjusted models, seasonal variations are simulated to within 2 K of observed values over the oceans. The corresponding average over non-flux-adjusted models shows errors up to about 6 K in extensive ocean areas.
      • Near-surface land temperature errors are substantial in the average over flux-adjusted models, which systematically underestimates (by about 5 K) temperature in areas of elevated terrain. The corresponding average over non-flux-adjusted models forms a similar error pattern (with somewhat increased amplitude) over land.
      • In Southern Ocean mid-latitudes, the non-flux-adjusted models overestimate the magnitude of January-minus-July temperature differences by ~5 K due to an overestimate of summer (January) near-surface temperature. This error is common to five of the eight non-flux-adjusted models.
      • Over Northern Hemisphere mid-latitude land areas, zonal mean differences between July and January temperatures simulated by the non-flux-adjusted models show a greater spread (positive and negative) about observed values than results from the flux-adjusted models.
      • The ability of coupled GCMs to simulate a reasonable seasonal cycle is a necessary condition for confidence in their prediction of long-term climatic changes (such as global warming), but it is not a sufficient condition unless the seasonal cycle and long-term changes involve similar climatic processes.
  • Coupled climate models do not simulate clouds and some related hydrological processes (in particular those involving upper tropospheric humidity) with reasonable accuracy. Problems in the simulation of clouds and upper tropospheric humidity remain worrisome because the associated processes account for most of the uncertainty in climate model simulations of anthropogenic change.
The precise magnitude of future changes in climate is still uncertain;[37] for the end of the 21st century (2071 to 2100), under SRES scenario A2, the change in global average surface air temperature (SAT) projected by AOGCMs relative to 1961 to 1990 is +3.0 °C (+5.4 °F), with a range of +1.3 to +4.5 °C (+2.3 to +8.1 °F).

The IPCC's Fifth Assessment Report asserted "...very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period." However, the report also observed that the rate of warming over the period 1998–2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models.[38]

Relation to weather forecasting

The global climate models used for climate projections are similar in structure to (and often share computer code with) numerical models for weather prediction, but are nonetheless logically distinct.
Most weather forecasting is done on the basis of interpreting numerical model results. Since forecasts are short, typically a few days or a week, such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast; typically these are taken from the output of a previous forecast, blended with observations. Because weather forecasts must be produced within a few hours and cover only about a week, the models can be run at higher resolution than in climate mode. Currently the ECMWF model runs at 40 km (25 mi) resolution[39] as opposed to the 100-to-200 km (62-to-124 mi) scale used by typical climate model runs. Often local models are run using global model results for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a mesoscale model with an 11 km (6.8 mi) resolution[40] covering the UK, and various agencies in the US employ models such as the NGM and NAM models. Like most global numerical weather prediction models, such as the GFS, global climate models are often spectral models[41] instead of grid models. Spectral models are often used for global models because some computations in modeling can be performed faster, thus reducing run times.
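
Spectral models exploit the fact that horizontal derivatives become simple multiplications of the transform coefficients. The sketch below is purely illustrative, a periodic one-dimensional Fourier example rather than code from ECMWF, the GFS, or any other system; real global models use spherical-harmonic transforms, but the principle is the same.

```python
import numpy as np

# Periodic 1-D domain standing in for a latitude circle (illustrative only).
n = 128                                        # number of grid points (assumed)
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x)    # a smooth test field

# Grid-point approach: centred finite differences.
dx = x[1] - x[0]
dudx_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

# Spectral approach: multiply each Fourier coefficient by i*k.
k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi      # angular wavenumbers
dudx_sp = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

exact = 3.0 * np.cos(3.0 * x) - 3.5 * np.sin(7.0 * x)
print("max error, finite differences:", np.abs(dudx_fd - exact).max())
print("max error, spectral:          ", np.abs(dudx_sp - exact).max())
```

For a smooth field at the same resolution, the spectral derivative is accurate essentially to machine precision, which is one reason transform methods are attractive for global models.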

Computations

Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface and ice.

All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature.
The most talked-about models of recent years relate temperature to emissions of greenhouse gases. These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes.[42]

Three-dimensional (more properly, four-dimensional, since time is also treated) GCMs discretise the equations for fluid motion and energy transfer and integrate them forward in time. They also contain parametrisations for processes, such as convection, that occur on scales too small to be resolved directly.
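
As a toy illustration of what "discretise and integrate over time" means, the sketch below advances a single one-dimensional advection equation with a first-order upwind space discretisation and a forward Euler time step. The grid spacing, wind speed and time step are arbitrary assumptions (chosen so the CFL number is 0.5); a real GCM dynamical core solves the full three-dimensional equations and adds parametrised processes as extra tendency terms at each step.

```python
import numpy as np

# Toy 1-D advection: dq/dt + c * dq/dx = 0 on a periodic domain.
nx, c = 200, 10.0            # grid points, constant "wind" in m/s (assumed)
dx = 1000.0                  # grid spacing in metres (assumed)
dt = 0.5 * dx / c            # time step chosen so the CFL number is 0.5

x = np.arange(nx) * dx
q = np.exp(-((x - 50_000.0) / 5_000.0) ** 2)   # initial tracer blob

for _ in range(400):         # integrate forward in time
    # First-order upwind finite difference for the spatial derivative (c > 0).
    dqdx = (q - np.roll(q, 1)) / dx
    q = q - c * dt * dqdx    # forward Euler step

print("tracer maximum after advection:", q.max())  # < 1.0: upwind diffusion
```

In this kind of loop, parametrisations such as convection would appear as additional source terms added to the tendency at every time step.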

Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat[43]) combine the two models.
Models range in complexity:
  • A simple radiant heat transfer model treats the earth as a single point and averages outgoing energy (a minimal numerical sketch of such a model appears below)
  • This can be expanded vertically (radiative-convective models), or horizontally
  • Finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
  • Box models treat flows across and within ocean basins.
Other submodels can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems.
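
The simplest entry in the list above, a radiant heat transfer model that treats the earth as a single point, can be written down in a few lines. The sketch below is a generic zero-dimensional energy balance model; the albedo, effective emissivity and mixed-layer heat capacity are round illustrative assumptions, not values from any particular published model.

```python
# Zero-dimensional energy balance: C dT/dt = S0*(1-albedo)/4 - eps*sigma*T^4
S0 = 1361.0          # solar constant, W/m^2
albedo = 0.30        # planetary albedo (assumed round value)
eps = 0.61           # effective emissivity mimicking the greenhouse effect (assumed)
sigma = 5.670e-8     # Stefan-Boltzmann constant, W/m^2/K^4
C = 4.0e8            # heat capacity of roughly a 100 m ocean mixed layer, J/m^2/K (assumed)

T = 255.0            # initial global mean temperature, K
dt = 86400.0         # one-day time step, s
for _ in range(365 * 50):                      # integrate for 50 model years
    imbalance = S0 * (1 - albedo) / 4.0 - eps * sigma * T**4
    T += dt * imbalance / C                    # any imbalance changes temperature

print(f"temperature after 50 model years: {T:.1f} K")
```

With these assumed numbers the temperature relaxes toward roughly 288 K, and changing either the absorbed or the emitted term shifts that equilibrium, which is the sense in which "any imbalance results in a change in temperature" above.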

Other climate models

Earth-system models of intermediate complexity (EMICs)

The Climber-3 model uses a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and a time step of half a day. Its oceanic submodel is MOM-3 (Modular Ocean Model), with a 3.75° × 3.75° grid and 24 vertical levels.[44]

Radiative-convective models (RCM)

One-dimensional radiative-convective models were used to verify basic climate assumptions in the 1980s and 1990s.[45]

Friday, January 22, 2016

Cloud forcing


From Wikipedia, the free encyclopedia

Cloud forcing (sometimes described as cloud radiative forcing) is, in meteorology, the difference between the radiation budget components for average cloud conditions and cloud-free conditions. Much of the interest in cloud forcing relates to its role as a feedback process in the present period of global warming.

All global climate models used for climate change projections include the effects of water vapor and cloud forcing. The models include the effects of clouds on both incoming (solar) and emitted (terrestrial) radiation.

Clouds increase the global reflection of solar radiation from 15% to 30%, reducing the amount of solar radiation absorbed by the Earth by about 44 W/m². This cooling is offset somewhat by the greenhouse effect of clouds which reduces the outgoing longwave radiation by about 31 W/m². Thus the net cloud forcing of the radiation budget is a loss of about 13 W/m².[1] If the clouds were removed with all else remaining the same, the Earth would gain this last amount in net radiation and begin to warm up. These numbers should not be confused with the usual radiative forcing concept, which is for the change in forcing related to climate change.
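
Restating the bookkeeping above as a single expression, with the convention that a negative forcing cools the planet:

\Delta F_{\mathrm{cloud}} = \Delta F_{\mathrm{SW}} + \Delta F_{\mathrm{LW}} \approx (-44\ \mathrm{W/m^2}) + (+31\ \mathrm{W/m^2}) \approx -13\ \mathrm{W/m^2}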

Without the inclusion of clouds, water vapor alone contributes 36% to 70% of the greenhouse effect on Earth. When water vapor and clouds are considered together, the contribution is 66% to 85%. The ranges come about because there are two ways to compute the influence of water vapor and clouds: the lower bounds are the reduction in the greenhouse effect if water vapor and clouds are removed from the atmosphere leaving all other greenhouse gases unchanged, while the upper bounds are the greenhouse effect introduced if water vapor and clouds are added to an atmosphere with no other greenhouse gases.[2] The two values differ because of overlap in the absorption and emission by the various greenhouse gases. Trapping of the long-wave radiation due to the presence of clouds reduces the radiative forcing of the greenhouse gases compared to the clear-sky forcing. However, the magnitude of the effect due to clouds varies for different greenhouse gases. Relative to clear skies, clouds reduce the global mean radiative forcing due to CO2 by about 15%,[3] that due to CH4 and N2O by about 20%,[3] and that due to the halocarbons by up to 30%.[4][5][6] Clouds remain one of the largest uncertainties in future projections of climate change by global climate models, owing to the physical complexity of cloud processes and the small scale of individual clouds relative to the size of the model computational grid.

Atmospheric thermodynamics


From Wikipedia, the free encyclopedia

Atmospheric thermodynamics is the study of heat to work transformations (and the reverse) in the earth's atmospheric system in relation to weather or climate. Following the fundamental laws of classical thermodynamics, atmospheric thermodynamics studies such phenomena as properties of moist air, formation of clouds, atmospheric convection, boundary layer meteorology, and vertical stabilities in the atmosphere. Atmospheric thermodynamic diagrams are used as tools in the forecasting of storm development. Atmospheric thermodynamics forms a basis for cloud microphysics and convection parameterizations in numerical weather models, and is used in many climate considerations, including convective-equilibrium climate models.

Overview

The atmosphere is an example of a non-equilibrium system.[1] Atmospheric thermodynamics focuses on water and its transformations. Areas of study include the law of energy conservation, the ideal gas law, specific heat capacities, adiabatic processes (in which entropy is conserved), and moist adiabatic processes. Most tropospheric gases are treated as ideal gases, and water vapor is considered one of the most important trace components of air.

Advanced topics include the phase transitions of water, homogeneous and heterogeneous nucleation, the effect of dissolved substances on cloud condensation, and the role of supersaturation in the formation of ice crystals and cloud droplets. Considerations of moist air and cloud theories typically involve various temperatures, such as the equivalent potential temperature, wet-bulb temperature, and virtual temperature. Connected areas are energy, momentum, and mass transfer, turbulent interaction between air particles in clouds, convection, the dynamics of tropical cyclones, and the large-scale dynamics of the atmosphere.
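
As a concrete illustration of two of the temperature-like quantities mentioned above, the sketch below computes the potential temperature (the temperature a parcel would have if brought dry-adiabatically to a 1000 hPa reference pressure) and an approximate virtual temperature. The constants are standard dry-air values; the sample parcel (15 °C at 850 hPa with 8 g/kg of water vapor) is an arbitrary assumption for illustration.

```python
# Potential temperature and virtual temperature for a sample moist parcel.
Rd = 287.05        # gas constant for dry air, J/(kg K)
cp = 1004.0        # specific heat of dry air at constant pressure, J/(kg K)
p0 = 1000.0        # reference pressure, hPa

def potential_temperature(T_k, p_hpa):
    """Temperature (K) the parcel would have if brought adiabatically to p0."""
    return T_k * (p0 / p_hpa) ** (Rd / cp)

def virtual_temperature(T_k, mixing_ratio):
    """Approximate virtual temperature (K) for a water-vapor mixing ratio in kg/kg."""
    return T_k * (1.0 + 0.61 * mixing_ratio)

# Sample parcel: 15 degC at 850 hPa with 8 g/kg of water vapor (assumed values).
T, p, r = 288.15, 850.0, 0.008
print("potential temperature:", round(potential_temperature(T, p), 1), "K")
print("virtual temperature:  ", round(virtual_temperature(T, r), 1), "K")
```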

The major role of atmospheric thermodynamics is expressed in terms of adiabatic and diabatic forces acting on air parcels, which are included in the primitive equations of air motion either as grid-resolved processes or as subgrid parameterizations. These equations form the basis for numerical weather and climate prediction.

History

In the early 19th century thermodynamicists such as Sadi Carnot, Rudolf Clausius, and Émile Clapeyron developed mathematical models of the dynamics of fluids and vapors related to the combustion and pressure cycles of atmospheric steam engines; one example is the Clausius–Clapeyron equation. In 1873, the thermodynamicist Willard Gibbs published "Graphical Methods in the Thermodynamics of Fluids."

Thermodynamic diagrams developed in the 19th century are still used to calculate quantities such as convective available potential energy (CAPE) and air stability.

These foundations naturally began to be applied towards the development of theoretical models of atmospheric thermodynamics, which drew the attention of the best minds. Papers on atmospheric thermodynamics appeared in the 1860s that treated such topics as dry and moist adiabatic processes. In 1884 Heinrich Hertz devised the first atmospheric thermodynamic diagram (emagram).[2] The term "pseudo-adiabatic process" was coined by Wilhelm von Bezold to describe air as it is lifted, expands, cools, and eventually precipitates its water vapor; in 1888 he published a voluminous work entitled "On the thermodynamics of the atmosphere".[3]

In 1911 Alfred Wegener published the book "Thermodynamik der Atmosphäre" (Leipzig, J. A. Barth). From here the development of atmospheric thermodynamics as a branch of science began to take root. The term "atmospheric thermodynamics" itself can be traced to Frank W. Very's 1919 publication "The radiant properties of the earth from the standpoint of atmospheric thermodynamics" (Occasional Scientific Papers of the Westwood Astrophysical Observatory). By the late 1970s various textbooks on the subject began to appear. Today, atmospheric thermodynamics is an integral part of weather forecasting.

Chronology

  • 1751 Charles Le Roy recognized the dew point temperature as the point of saturation of air
  • 1782 Jacques Charles made a hydrogen-balloon flight measuring temperature and pressure in Paris
  • 1784 The concept of the variation of temperature with height was suggested
  • 1801–1803 John Dalton developed his laws of vapour pressures
  • 1804 Joseph Louis Gay-Lussac made a balloon ascent to study weather
  • 1805 Pierre Simon Laplace developed his law of pressure variation with height
  • 1841 James Pollard Espy published a paper on the convection theory of cyclone energy
  • 1889 Hermann von Helmholtz and Wilhelm von Bezold used the concept of potential temperature; von Bezold used the adiabatic lapse rate and the pseudoadiabat
  • 1893 Richard Assmann constructed the first aerological sonde (pressure-temperature-humidity)
  • 1894 Wilhelm von Bezold used the concept of equivalent temperature
  • 1926 Sir Napier Shaw introduced the tephigram
  • 1933 Tor Bergeron published a paper on the "Physics of Clouds and Precipitation", describing precipitation from supercooled clouds (due to condensational growth of ice crystals in the presence of water drops)
  • 1946 Vincent J. Schaefer and Irving Langmuir performed the first cloud-seeding experiment
  • 1986 Kerry Emanuel conceptualized the tropical cyclone as a Carnot heat engine

Applications

Hadley Circulation

The Hadley circulation can be considered as a heat engine.[4] It involves the rising of warm, moist air in the equatorial region and the descent of colder air in the subtropics, corresponding to a thermally driven direct circulation with consequent net production of kinetic energy. The thermodynamic efficiency of the Hadley system, considered as a heat engine, was relatively constant over the 1979–2010 period, averaging 2.6%. Over the same interval, the power generated by the Hadley regime rose at an average rate of about 0.54 TW per year; this reflects an increase in energy input to the system consistent with the observed trend in tropical sea surface temperatures.

Tropical cyclone Carnot cycle


Air is moistened as it travels toward a convective system. Ascending motion in a deep convective core produces expansion, cooling, and condensation of the air. The upper-level outflow, visible as an anvil cloud, eventually descends, conserving mass (figure by Robert Simmon).

The thermodynamic structure of a hurricane can be modelled as a heat engine[5] running between a sea surface temperature of about 300 K and a tropopause temperature of about 200 K. Parcels of air traveling close to the surface take up moisture and warm; the ascending air expands and cools, releasing moisture (rain) during condensation. The release of latent heat during condensation provides the mechanical energy for the hurricane. Either a decreasing temperature in the upper troposphere or an increasing temperature of the atmosphere close to the surface will increase the maximum winds observed in hurricanes. When applied to hurricane dynamics, this framework defines a Carnot heat engine cycle and predicts maximum hurricane intensity.
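
Reading the paragraph above as a heat engine, a minimal sketch of the idealised Carnot efficiency implied by the two quoted reservoir temperatures is shown below; the surface heat-input value used to turn that efficiency into a work rate is an illustrative assumption, not a number from the text.

```python
# Carnot efficiency of the hurricane heat engine from the two temperatures above.
T_sea = 300.0         # K, warm reservoir (sea surface), from the text
T_tropopause = 200.0  # K, cold reservoir (outflow near the tropopause), from the text

eta_carnot = 1.0 - T_tropopause / T_sea
print(f"idealised Carnot efficiency: {eta_carnot:.2f}")   # ~0.33

# With an assumed surface heat input of, say, 500 W per square metre of ocean
# (illustrative number, not from the text), the mechanical power available to
# the storm per square metre would be at most:
heat_input = 500.0    # W/m^2, assumed for illustration
print(f"upper bound on work rate: {eta_carnot * heat_input:.0f} W/m^2")
```

The roughly one-third figure is an idealised upper bound; real storms convert heat to wind far less efficiently because of dissipation and non-ideal thermodynamics.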

Water vapor and global climate change

The Clausius–Clapeyron relation shows that the water-holding capacity of the atmosphere increases by roughly 7% per degree Celsius increase in temperature (the exact rate varies from about 6% to 8% over typical atmospheric temperatures, and does not directly depend on other parameters such as pressure or density). This water-holding capacity, or "equilibrium vapor pressure", can be approximated using the August-Roche-Magnus formula

e_s(T) = 6.1094 \exp \left( \frac{17.625\,T}{T + 243.04} \right)

where e_s(T) is the equilibrium (saturation) vapor pressure in hPa and T is the temperature in degrees Celsius. This shows that when the atmospheric temperature increases (e.g., due to greenhouse gases) the absolute humidity should also increase roughly exponentially, assuming a constant relative humidity. However, this purely thermodynamic argument is the subject of considerable debate, because convective processes might cause extensive drying through increased areas of subsidence, the efficiency of precipitation could be influenced by the intensity of convection, and cloud formation is related to relative humidity.[citation needed]
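
The sketch below evaluates the August-Roche-Magnus expression given above and checks the quoted sensitivity by comparing the saturation vapor pressure at two nearby temperatures; the sample temperatures are arbitrary.

```python
import math

def saturation_vapor_pressure(T_celsius):
    """August-Roche-Magnus approximation, result in hPa (formula from the text)."""
    return 6.1094 * math.exp(17.625 * T_celsius / (T_celsius + 243.04))

for T in (0.0, 15.0, 30.0):
    e_s = saturation_vapor_pressure(T)
    growth = saturation_vapor_pressure(T + 1.0) / e_s - 1.0
    print(f"T = {T:4.1f} degC  e_s = {e_s:6.2f} hPa  "
          f"increase per degC = {100 * growth:.1f} %")
```

The printed sensitivity falls from about 7.5% per °C near 0 °C to about 6% per °C at 30 °C, which is the temperature dependence noted above.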
