
Wednesday, January 16, 2019

Dark energy (updated)

From Wikipedia, the free encyclopedia

In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe. Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate.
 
Assuming that the standard model of cosmology is correct, the best current measurements indicate that dark energy contributes 68% of the total energy in the present-day observable universe. The mass–energy of dark matter and ordinary (baryonic) matter contribute 27% and 5%, respectively, and other components such as neutrinos and photons contribute a very small amount. The density of dark energy is very low (~7 × 10⁻³⁰ g/cm³), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the mass–energy of the universe because it is uniform across space.

Two proposed forms for dark energy are the cosmological constant, representing a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to the zero-point radiation of space, i.e. the vacuum energy. Scalar fields that change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.

History of discovery and previous speculation

Einstein's cosmological constant

The "cosmological constant" is a constant term that can be added to Einstein's field equation of general relativity. If considered as a "source term" in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or "vacuum energy". 

The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution of the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Einstein gave the cosmological constant the symbol Λ (capital lambda). Einstein stated that the cosmological constant required that "empty space takes the role of gravitating negative masses which are distributed all over the interstellar space".

The mechanism was an example of fine-tuning, and it was later realized that Einstein's static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and not static at all. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder.

Inflationary dark energy

Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe. 

Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al. and in Perlmutter et al., and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the CMB, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters.

The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael Turner in 1998.

Change in expansion over time

Diagram representing the accelerated expansion of the universe due to dark energy.
 
High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations. 
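
For reference, the equation of state referred to here is usually summarized by the dimensionless ratio w = p/(ρc²) of pressure to energy density: non-relativistic matter has w ≈ 0, radiation has w = 1/3, and a cosmological constant has exactly w = −1. Measuring how close w is to −1, and whether it varies with redshift, is how the surveys described below characterize dark energy.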

As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.

Nature

The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain matters of speculation. Dark energy is thought to be very homogeneous and not very dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is quite rarefied and diffuse (roughly 10⁻²⁷ kg/m³), it is unlikely to be detectable in laboratory experiments. The reason dark energy can have such a profound effect on the universe, making up 68% of the universal density in spite of being so dilute, is that it uniformly fills otherwise empty space.
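
A rough back-of-the-envelope check of these numbers, as a minimal Python sketch; the Hubble constant value is an assumption (roughly the Planck value), and exact figures differ slightly between surveys:

import math

# Rough check that ~68% of the critical density is of order 10^-27 kg/m^3,
# assuming H0 ~ 67.7 km/s/Mpc (an illustrative value).
G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.7 * 1000 / 3.086e22        # Hubble constant converted to 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_de = 0.68 * rho_crit                   # dark energy share

print(f"critical density  ~ {rho_crit:.2e} kg/m^3")   # ~ 8.6e-27
print(f"dark energy share ~ {rho_de:.2e} kg/m^3")     # ~ 5.9e-27, i.e. ~6e-30 g/cm^3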

Independently of its actual nature, dark energy would need to have a strong negative pressure (repulsive action), like radiation pressure in a metamaterial, to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure and viscosity. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion".
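
In FLRW cosmology this statement is captured by the acceleration equation, ä/a = −(4πG/3)(ρ + 3p/c²): a component with pressure p < −ρc²/3 (equation of state w < −1/3) makes the right-hand side positive and hence accelerates the expansion. The sketch below is only a numerical illustration of that sign change; the density values are placeholders, not measured quantities:

import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def accel_over_a(components):
    """Return a-double-dot / a in s^-2: sum of -(4*pi*G/3)*(rho + 3p/c^2)
    over (rho [kg/m^3], w) pairs, with p = w * rho * c^2."""
    total = 0.0
    for rho, w in components:
        p = w * rho * c**2
        total += -(4 * math.pi * G / 3) * (rho + 3 * p / c**2)
    return total

matter_only = [(8.6e-27, 0.0)]                               # pressureless matter
with_lambda = [(0.3 * 8.6e-27, 0.0), (0.7 * 8.6e-27, -1.0)]  # 30% matter + 70% w = -1

print(accel_over_a(matter_only))   # negative -> expansion decelerates
print(accel_over_a(with_lambda))   # positive -> expansion accelerates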

Technical definition

In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. Matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³, while radiation is anything which scales as the inverse fourth power of the scale factor (ρ ∝ a⁻⁴). This can be understood intuitively: for an ordinary particle in a square box, doubling the length of a side of the box decreases the density (and hence energy density) by a factor of eight (2³). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift.

The final component, dark energy, is an intrinsic property of space, and so has a constant energy density regardless of the volume under consideration (ρ ∝ a⁰). Thus, unlike ordinary matter, it does not get diluted with the expansion of space.
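
These scalings combine into the first Friedmann equation for a flat universe, H(a)² = H0² (Ωr a⁻⁴ + Ωm a⁻³ + ΩΛ), which is what Lambda-CDM fits actually use. A small sketch of how each component dilutes; the parameter values are approximate, Planck-like numbers used only for illustration:

# How each component's density scales with the scale factor a (a = 1 today),
# and the resulting expansion rate H(a) in a flat Lambda-CDM toy model.
H0 = 67.7                        # km/s/Mpc, illustrative value
Om, Or, OL = 0.31, 9e-5, 0.69    # matter, radiation, dark energy fractions (approximate)

def densities(a):
    return {"matter": Om * a**-3, "radiation": Or * a**-4, "dark energy": OL * a**0}

def H(a):
    return H0 * sum(densities(a).values()) ** 0.5

for a in (0.1, 0.5, 1.0):
    print(a, densities(a), round(H(a), 1))
# Dark energy stays constant while matter and radiation dilute away,
# so it inevitably dominates at late times (large a).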

Evidence of existence

The evidence for dark energy is indirect but comes from three independent sources:
  • Distance measurements and their relation to redshift, which suggest the universe has expanded more in the last half of its life.
  • The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature).
  • Measures of large-scale wave-patterns of mass density in the universe.

Supernovae

A Type Ia supernova (bright spot on the bottom-left) near a galaxy
 
In 1998, the High-Z Supernova Search Team published observations of Type Ia ("one-A") supernovae. In 1999, the Supernova Cosmology Project followed by suggesting that the expansion of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery.

Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model. Some people argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the local measured Hubble constant.

Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme and consistent luminosity.
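
The underlying arithmetic is the distance modulus relation m − M = 5 log10(d_L / 10 pc): comparing the apparent magnitude m of a Type Ia supernova of known absolute magnitude M gives its luminosity distance d_L, and supernovae at a given redshift turned out to be fainter (more distant) than a decelerating, matter-only universe predicts. The sketch below illustrates that comparison; the Hubble constant and density parameters are illustrative values, not fitted results:

import math

c = 299792.458   # speed of light, km/s
H0 = 70.0        # km/s/Mpc, illustrative value

def luminosity_distance(z, Om, OL, steps=10000):
    """Flat-universe luminosity distance in Mpc via a simple midpoint integration."""
    dz = z / steps
    integral = sum(dz / math.sqrt(Om * (1 + (i + 0.5) * dz)**3 + OL)
                   for i in range(steps))
    return (1 + z) * (c / H0) * integral

def distance_modulus(d_mpc):
    return 5 * math.log10(d_mpc * 1e6 / 10)   # m - M, with the distance in units of 10 pc

z = 0.5
d_matter = luminosity_distance(z, 1.0, 0.0)   # matter-only (Einstein-de Sitter) universe
d_lcdm = luminosity_distance(z, 0.3, 0.7)     # universe with dark energy

print(round(d_matter), round(d_lcdm))          # roughly 2360 vs 2830 Mpc for these parameters
print(round(distance_modulus(d_lcdm) - distance_modulus(d_matter), 2))
# ~0.4 mag: supernovae at z = 0.5 appear fainter in the dark-energy model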

Recent observations of supernovae are consistent with a universe made up 71.3% of dark energy and 27.4% of a combination of dark matter and baryonic matter.

Cosmic microwave background

Estimated division of total energy in the universe into matter, dark matter and dark energy based on five years of WMAP data.
 
The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass-energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the CMB spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter. Work done in 2013 based on the Planck spacecraft observations of the CMB gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter.

Large-scale structure

The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density.

A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence for the existence of dark energy, although the exact physics behind it remains unknown. The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids of ~150 Mpc diameter, surrounded by galaxies, the voids were used as standard rulers to estimate distances to galaxies as far as 2,000 Mpc (redshift 0.6), allowing an accurate estimate of the speeds of galaxies from their redshift and distance. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10. This provides confirmation of cosmic acceleration independent of supernovae.
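
As a sanity check on the quoted numbers, the comoving distance to redshift 0.6 in a flat Lambda-CDM model is indeed of order 2,000 Mpc. A minimal sketch, assuming illustrative parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3):

import math

c, H0 = 299792.458, 70.0     # km/s and km/s/Mpc, illustrative values
Om, OL = 0.3, 0.7

def comoving_distance(z, steps=10000):
    dz = z / steps
    return (c / H0) * sum(dz / math.sqrt(Om * (1 + (i + 0.5) * dz)**3 + OL)
                          for i in range(steps))

d = comoving_distance(0.6)
print(round(d))                          # ~2200 Mpc, the same order as the 2,000 Mpc quoted above
print(round(math.degrees(150 / d), 1))   # a 150 Mpc BAO ruler subtends roughly 4 degrees at that distance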

Late-time integrated Sachs-Wolfe effect

Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe. It was reported at high significance in 2008 by Ho et al. and Giannantonio et al.

Observational Hubble constant data

A new approach to testing evidence of dark energy through observational Hubble constant data (OHD) has gained significant attention in recent years. The Hubble constant, H(z), is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers". In this way, the approach provides standard clocks in the universe. The core of the idea is the measurement of the differential age evolution of these cosmic chronometers as a function of redshift, which provides a direct estimate of the Hubble parameter.
The reliance on a differential quantity, Δz/Δt, minimizes many common issues and systematic effects; and as a direct measurement of the Hubble parameter, rather than of its integral (as with supernovae and baryon acoustic oscillations, BAO), it provides more information and is computationally appealing. For these reasons, it has been widely used to examine the accelerated cosmic expansion and to study the properties of dark energy.
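
Concretely, for passively evolving galaxies the Hubble parameter follows from H(z) = −(1/(1+z)) dz/dt, so measuring the small age difference between two galaxy samples at nearby redshifts gives H directly. A toy sketch; the redshifts and ages below are made-up illustrative numbers, not real data:

# Cosmic-chronometer idea: H(z) = -1/(1+z) * dz/dt, estimated from the differential
# age of passively evolving galaxies at two nearby redshifts.
SEC_PER_GYR = 3.156e16
KM_PER_MPC = 3.086e19

def hubble_from_chronometers(z1, age1_gyr, z2, age2_gyr):
    """Return H at the mean redshift in km/s/Mpc from two (redshift, age) samples."""
    z_mean = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / ((age2_gyr - age1_gyr) * SEC_PER_GYR)   # 1/s
    H = -dz_dt / (1 + z_mean)                                   # 1/s
    return H * KM_PER_MPC                                       # km/s/Mpc

# Hypothetical inputs: galaxies at z = 0.40 are ~9.5 Gyr old, at z = 0.45 ~9.1 Gyr old.
print(round(hubble_from_chronometers(0.40, 9.5, 0.45, 9.1), 1))
# about 86 km/s/Mpc for these made-up numbers, the right ballpark for H(z ~ 0.4)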

Theories of dark energy

Dark energy's status as a hypothetical force with unknown properties makes it a very active target of research. The problem is attacked from a great variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data. 

The equation of state of dark energy as a function of redshift for four common models: A, the CPL model; B, the Jassal model; C, the Barboza & Alcaniz model; D, the Wetterich model.

Cosmological constant

Estimated distribution of matter and energy in the universe
 
The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence Lambda-CDM model). Since energy and mass are related according to the equation E = mc², Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum.

The cosmological constant has negative pressure equal in magnitude to its energy density and so causes the expansion of the universe to accelerate. The reason a cosmological constant has negative pressure can be seen from classical thermodynamics. In general, energy must be lost from inside a container (the container must do work on its environment) in order for the volume to increase. Specifically, a change in volume dV requires work done equal to a change of energy −P dV, where P is the pressure. But the amount of energy in a container full of vacuum actually increases when the volume increases, because the energy is equal to ρV, where ρ is the energy density of the cosmological constant. Therefore, P is negative and, in fact, P = −ρ.
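
The argument fits in one line: if the vacuum energy in a volume V is E = ρV with ρ constant, then dE = ρ dV, while thermodynamics requires dE = −P dV, so P = −ρ (working in units where pressure and energy density are measured in the same units, i.e. c = 1). A toy finite-difference check, with an arbitrary placeholder value for ρ:

# Finite-difference check that constant energy density implies P = -rho (with c = 1).
rho = 6e-10        # arbitrary vacuum energy density (energy per unit volume)
V, dV = 1.0, 1e-6  # a volume and a small change in volume

def vacuum_energy(vol):
    return rho * vol               # energy of vacuum scales with the volume

dE = vacuum_energy(V + dV) - vacuum_energy(V)   # energy change when the volume grows
P = -dE / dV                                    # thermodynamics: dE = -P dV

print(P, -rho)                     # both ~ -6e-10: pressure equals minus the energy density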

There are two major advantages to the cosmological constant. The first is that it is simple. Einstein had in fact introduced this term in his original formulation of general relativity in order to obtain a static universe. Although he later discarded the term after Hubble found that the universe is expanding, a nonzero cosmological constant can act as dark energy, without otherwise changing the Einstein field equations. The other advantage is that there is a natural explanation for its origin. Most quantum field theories predict vacuum fluctuations that would give the vacuum this sort of energy. This is related to the Casimir effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation).

A major outstanding problem is that the same quantum field theories predict a huge cosmological constant, more than 100 orders of magnitude too large. This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero, which does not help because supersymmetry must be broken. 

Nonetheless, the cosmological constant is the most economical solution to the problem of cosmic acceleration. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.

Quintessence

In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength.

No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.

The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called "tracker" behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.

In 2004, when scientists fit the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that this scenario requires at least two degrees of freedom in the dark energy model. This scenario is the so-called quintom scenario.

Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy such as a negative kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.

Interacting dark energy

This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance, or postulate that cold dark matter decays into dark energy. Another class of theories that unifies dark matter and dark energy comprises covariant theories of modified gravity. These theories alter the dynamics of spacetime such that the modified dynamics accounts for what has been attributed to the presence of dark energy and dark matter.

Variable dark energy models

The density of dark energy might have varied in time over the history of the universe. Modern observational data allow for estimates of the present density. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy in the history of the universe and constrain the parameters of its equation of state. To that end, several models have been proposed. One of the most popular is the Chevallier–Polarski–Linder (CPL) model. Other common models are those of Barboza & Alcaniz (2008), Jassal et al. (2005), Wetterich (2004), and Oztas et al. (2018).
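
For reference, the CPL parametrization writes the equation of state as w(a) = w0 + wa(1 − a), i.e. w(z) = w0 + wa·z/(1 + z), reducing to a cosmological constant for w0 = −1, wa = 0. A minimal sketch; the evolving-model parameter values below are arbitrary examples, not fitted results:

# Chevallier-Polarski-Linder (CPL) equation of state: w(a) = w0 + wa * (1 - a),
# with scale factor a = 1/(1+z).
def w_cpl(z, w0=-1.0, wa=0.0):
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

for z in (0.0, 0.5, 1.0, 3.0):
    # cosmological constant vs. an arbitrary evolving example (w0 = -0.9, wa = 0.2)
    print(z, w_cpl(z), round(w_cpl(z, -0.9, 0.2), 3))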

Observational skepticism

Some alternatives to dark energy aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy doesn't actually exist and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the supernova sample size used wasn't large enough.

Other mechanisms driving acceleration

Modified gravity

The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity also eliminates the need for dark energy. There are very many such theories, and research is ongoing. The measurement of the speed of gravity in the first gravitational wave measured by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations to dark energy.

Astrophysicist Ethan Siegel states that, while such alternatives gain a lot of mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists, and that none of the competing theories successfully explain observations to the same level of precision as standard dark energy.

Implications for the fate of the universe

Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant). 

Projections into the future can differ radically for different models of dark energy. For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light. This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object. Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually. However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us. Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away.
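
The quoted 16 billion light-year figure is the comoving distance to the cosmological event horizon, which for a flat Lambda-CDM universe can be computed as d = (c/H0) ∫₁^∞ da / sqrt(Ωm a + ΩΛ a⁴), with a = 1 today. A rough numerical sketch; the parameter values are illustrative and the exact answer depends on H0 and ΩΛ:

import math

# Comoving distance to the cosmological event horizon in flat Lambda-CDM.
c, H0 = 299792.458, 67.7        # km/s and km/s/Mpc, illustrative values
Om, OL = 0.31, 0.69
MPC_TO_GLY = 3.262e-3           # 1 Mpc ~ 3.262 million light years

def event_horizon_gly(a_max=500.0, steps=500000):
    da = (a_max - 1.0) / steps
    total = 0.0
    for i in range(steps):
        a = 1.0 + (i + 0.5) * da                      # midpoint rule over the scale factor
        total += da / math.sqrt(Om * a + OL * a**4)
    return (c / H0) * total * MPC_TO_GLY

print(round(event_horizon_gly(), 1))   # ~16 billion light years, consistent with the figure quoted above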

As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely. Planet Earth, the Milky Way, and the Local Group of which the Milky Way is a part would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration.

There are other, more speculative ideas about the future of the universe. The phantom energy model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". It is also possible the universe may never have an end and continue in its present state forever. On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility that gravity might yet rule the day and lead to a universe that contracts in on itself in a "Big Crunch", or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (10¹²) years. While none of these are supported by observations, they are not ruled out.

In philosophy of science

In philosophy of science, dark energy is an example of an "auxiliary hypothesis", an ad hoc postulate that is added to a theory in response to observations that falsify it. It has been argued that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper.

Sewers could help clean the atmosphere

January 16, 2019 by John Sullivan, Princeton University
 
Researchers at Princeton have concluded that sewer plants serving municipalities worldwide offer a major option for capturing carbon dioxide and other greenhouse gases.
 
Sewage treatment—an unglamorous backbone of urban living—could offer a cost-effective way to combat climate change by flushing greenhouse gases from the atmosphere.
 
In an article analyzing several possible technical approaches in the journal Nature Sustainability on Dec. 18, 2018, researchers at Princeton University concluded that sewer plants serving municipalities worldwide offer a major option for capturing carbon dioxide and other greenhouse gases. Although cautioning that research and development is needed before the systems could be deployed, the team identified several potentially viable paths to using sewage as a carbon sink—that is, sewer plants could clean the atmosphere as they clean water.

"The could play a big role in tackling ," said senior author Jason Ren, professor of civil and and the Andlinger Center for Energy and the Environment. "It is a very exciting idea because people always think about energy or transportation, but water has not been considered as a major factor in carbon reduction."

Sewer plants are massive industrial operations that use a variety of techniques to remove pollutants before wastewater returns to the environment. Although most people never think about the systems, the volume of water is staggering. New York City, for example, runs 14 sewer plants and processes 1.3 billion gallons of water daily (enough to fill about 22,000 Olympic pools.)
 
In the past few years, researchers have proposed methods to use that wastewater to capture enough carbon to offset the amount generated to power heavy equipment used to run sewer plants. They discovered that some techniques not only would allow the plants to balance their own emissions (cleaning water requires considerable energy use), they could also absorb extra carbon that operators pumped into the sewage as it moved through the plants.
 
"If you consider it as a resource, you could convert part of the waste material including the CO2 into products," Ren said. "You could actually make money."
 
Generally, the operators would use pipes to pump carbon dioxide gas into the sewer water in the plants. They would then use a variety of techniques to convert the gas into carbonate minerals, biofuels or a sludge-based fertilizer called biochar.
 
The researchers reviewed a range of techniques including:
 
Microbial electrolytic carbon capture
 
This technique uses a combination of bacteria and a low electrical charge to change the water's alkalinity and, with the addition of silicates, convert carbon dioxide to solid carbonate and bicarbonate. In addition to the solids, which can be used by industry, the process creates large amounts of hydrogen gas. The researchers noted that this technique is currently used in the laboratory and additional work is needed to show whether it is economical and applicable at the industrial level.
 
Microbial electrosynthesis
 
Microbial electrosynthesis is similar to the microbial electrolytic technique except that the process relies on bacteria to directly capture carbon dioxide and convert it into other organic compounds such as ethanol or formic acid. The researchers noted that the technology is promising but major breakthroughs are needed to fully develop the process.
 
Microalgae cultivation
 
Microalgae cultivation could be used as a complement to other processes. Algae and bacteria use the carbon dioxide, nitrogen and phosphorus in the wastewater to grow. Operators then harvest the algae, which can be used as animal feed, for soil treatment or in biofuel production. The researchers said work is going forward on identifying the best local microbial communities, small and intensive bioreactors, and efficient techniques for separating solids and liquids.
 
Biochar production
 
This method converts wastewater sludge and microalgae into material that improves soil's ability to retain water and nutrients. The technique, which removes pathogens, is usually self-sufficient in terms of energy, although most biochar is now made from dry plants. The researchers said using wastewater sludge to make biochar may require more energy or additional steps to account for the additional water content.
 
Ren said that in many locations, sewer plants are already located near industrial facilities that emit large amounts of carbon dioxide, such as power plants, cement factories and refineries. He said using the sewer systems to capture the carbon could provide an economic return for these companies in the form of carbon credits. He also said the technique could be used by industries that already run their own wastewater treatment systems, such as oil and gas producers, brewers, and distillers. When analyzing the potential environmental and economic benefits of such operations, the researchers found that millions of tons of CO2 could be captured and utilized, while billions of dollars in revenue could be generated in both the U.S. and China, the world's two largest CO2 emitters.
 
The researchers cautioned that while many techniques are promising, "the concept is still in its infancy." They said that full use of the technology will require the work not only of scientists but also of regulators, investors and industry.
 
Jerald Schnoor, an engineering professor at the University of Iowa, said national leaders should consider wastewater treatment as part of efforts to decrease the country's carbon footprint in coming decades.
 
"Wastewater treatment is one of the largest energy users and greenhouse gas emitters of a municipal spreadsheet," said Schnoor, who was not involved in this research. "Technologies exist at pilot scale to achieve zero carbon and energy footprints, but they are not proven to be scalable or cost-effective at the current time. As the country now embarks on 'green infrastructure' initiatives, as being discussed by the new Congress, this should be a high priority."
More information: Lu Lu et al. Wastewater treatment for carbon capture and utilization, Nature Sustainability (2018). DOI: 10.1038/s41893-018-0187-9

Read more at: https://phys.org/news/2019-01-sewers-atmosphere.html#jCp

Energy subsidies

From Wikipedia, the free encyclopedia

Energy subsidies are measures that keep prices for consumers below market levels or for producers above market levels, or reduce costs for consumers and producers. Energy subsidies may be direct cash transfers to producers, consumers, or related bodies, as well as indirect support mechanisms, such as tax exemptions and rebates, price controls, trade restrictions, and limits on market access. They may also include energy conservation subsidies. The development of today's major modern energy industries has relied on substantial subsidy support.
 
The elimination of energy subsidies is widely seen as one of the most effective ways of reducing global carbon emissions.

Overview

Main arguments for energy subsidies are:
  • Security of supply – subsidies are used to ensure adequate domestic supply by supporting indigenous fuel production in order to reduce import dependency, or supporting overseas activities of national energy companies.
  • Environmental improvement – subsidies are used to reduce pollution, including different emissions, and to fulfill international obligations (e.g. Kyoto Protocol).
  • Economic benefits – subsidies in the form of reduced prices are used to stimulate particular economic sectors or segments of the population, e.g. alleviating poverty and increasing access to energy in developing countries.
  • Employment and social benefits – subsidies are used to maintain employment, especially in periods of economic transition.
Main arguments against energy subsidies are:
  • Some energy subsidies counter the goal of sustainable development: they may lead to higher consumption and waste, exacerbate the harmful effects of energy use on the environment, create a heavy burden on government finances, weaken the potential for economies to grow, and undermine private and public investment in the energy sector. Also, most benefits from fossil fuel subsidies in developing countries go to the richest 20% of households.
  • Energy subsidies can impede the expansion of distribution networks and the development of more environmentally benign energy technologies, and they do not always help the people who need them most.
  • A study conducted by the World Bank finds that subsidies to the large commercial businesses that dominate the energy sector are not justified. However, under some circumstances it is reasonable to use subsidies to promote access to energy for the poorest households in developing countries. Energy subsidies should encourage access to modern energy sources, not cover the operating costs of companies. A study conducted by the World Resources Institute finds that energy subsidies often go to capital-intensive projects at the expense of smaller or distributed alternatives.
Types of energy subsidies are:
  • Direct financial transfers – grants to producers; grants to consumers; low-interest or preferential loans to producers.
  • Preferential tax treatments – rebates or exemption on royalties, duties, producer levies and tariffs; tax credit; accelerated depreciation allowances on energy supply equipment.
  • Trade restrictions – quota, technical restrictions and trade embargoes.
  • Energy-related services provided by government at less than full cost – direct investment in energy infrastructure; public research and development.
  • Regulation of the energy sector – demand guarantees and mandated deployment rates; price controls; market-access restrictions; preferential planning consent and controls over access to resources.
  • Failure to impose external costs – environmental external costs; energy security risks and price volatility costs.
  • Depletion Allowance – allows a deduction from gross income of up to ~27% for the depletion of exhaustible resources (oil, gas, minerals).
Overall, energy subsidies require coordination and integrated implementation, especially in light of globalization and increased interconnectedness of energy policies, thus their regulation at the World Trade Organization is often seen as necessary.

Impact of fossil fuel subsidies

The degree and impact of fossil fuel subsidies are extensively studied. Because fossil fuels are a leading contributor to climate change through greenhouse gases, fossil fuel subsidies increase emissions and exacerbate climate change. The OECD's inventory in 2015 determined an overall value of $160bn–$200bn per year between 2010 and 2014, relying on the WTO's 1994 definition of a subsidy as a “financial contribution by a government” which “confers a benefit” on its recipient. This is the only internationally agreed definition of the term.

A 2016 IMF study estimated that global fossil fuel subsidies were $5.3 trillion in 2015, which represents 6.5% of global GDP. The study found that "China was the biggest subsidizer in 2013 ($1.8 trillion), followed by the United States ($0.6 trillion), and Russia, the European Union, and India (each with about $0.3 trillion)." The authors estimated that the elimination of "subsidies would have reduced global carbon emissions in 2013 by 21% and fossil fuel air pollution deaths 55%, while raising revenue of 4%, and social welfare by 2.2%, of global GDP." This study is controversial for its radical break with previous definitions of subsidies, redefining externalities as subsidies, as well as for an excessively broad accounting of social costs as oil externalities. The externalities accounted for are broad enough that oil companies not paying for automobile accidents is counted as a subsidy.

According to the International Energy Agency, the elimination of fossil fuel subsidies worldwide would be one of the most effective ways of reducing greenhouse gases and battling global warming. In May 2016, the G7 nations set for the first time a deadline for ending most fossil fuel subsidies, saying government support for coal, oil and gas should end by 2025.

According to the OECD, subsidies supporting fossil fuels, particularly coal and oil, represent greater threats to the environment than subsidies to renewable energy. Subsidies to nuclear power contribute to unique environmental and safety issues, related mostly to the risk of high-level environmental damage, although nuclear power contributes positively to the environment in the areas of air pollution and climate change. According to Fatih Birol, Chief Economist at the International Energy Agency, without a phase-out of fossil fuel subsidies, countries will not reach their climate targets.

A 2010 study by Global Subsidies Initiative compared global relative subsidies of different energy sources. Results show that fossil fuels receive 0.8 US cents per kWh of energy they produce (although it should be noted that the estimate of fossil fuel subsidies applies only to consumer subsidies and only within non-OECD countries), nuclear energy receives 1.7 cents / kWh, renewable energy (excluding hydroelectricity) receives 5.0 cents / kWh and bio-fuels receive 5.1 cents / kWh in subsidies.

In 2011, IEA chief economist Fatih Birol said the current $409 billion equivalent of fossil fuel subsidies were encouraging wasteful use of energy, and that cutting these subsidies is the biggest single policy item that would help renewable energies gain more market share and reduce CO2 emissions.

Impact of renewable energy subsidies

Global renewable energy subsidies reached $88 billion in 2011. According to the OECD, subsidies to renewable energy are generally considered more environmentally beneficial than fossil fuel subsidies, although the full range of environmental effects should be taken into account.

IEA position on subsidies

According to the International Energy Agency (IEA) (2011), energy subsidies artificially lower the price of energy paid by consumers, raise the price received by producers or lower the cost of production. "Fossil fuels subsidies costs generally outweigh the benefits. Subsidies to renewables and low-carbon energy technologies can bring long-term economic and environmental benefits". In November 2011, an IEA report entitled Deploying Renewables 2011 said "subsidies in green energy technologies that were not yet competitive are justified in order to give an incentive to investing into technologies with clear environmental and energy security benefits". The IEA's report disagreed with claims that renewable energy technologies are only viable through costly subsidies and not able to produce energy reliably to meet demand. "A portfolio of renewable energy technologies is becoming cost-competitive in an increasingly broad range of circumstances, in some cases providing investment opportunities without the need for specific economic support," the IEA said, and added that "cost reductions in critical technologies, such as wind and solar, are set to continue."

Fossil-fuel consumption subsidies were $409 billion in 2010, about half of which went to oil products. Renewable-energy subsidies were $66 billion in 2010 and, according to the IEA, will reach $250 billion by 2035. Renewable energy is subsidized in order to compete in the market, increase its volume, and develop the technology to the point where the subsidies become unnecessary. Eliminating fossil-fuel subsidies could bring economic and environmental benefits: phasing out fossil-fuel subsidies by 2020 would cut primary energy demand by 5%. Since the start of 2010, at least 15 countries have taken steps to phase out fossil-fuel subsidies. According to the IEA, onshore wind may become competitive in the European Union around 2020.

According to the IEA, the phase-out of fossil fuel subsidies, which exceed $500 billion annually, would reduce greenhouse gas emissions by 10% by 2050.

Subsidies by country

The International Energy Agency estimates that governments subsidized fossil fuels by US$548 billion in 2013. Ten countries accounted for almost three-quarters of this figure. At their meeting in September 2009, the G-20 countries committed to "rationalize and phase out over the medium term inefficient fossil fuel subsidies that encourage wasteful consumption". The 2010s have seen many countries reducing energy subsidies: in July 2014 Ghana abolished all diesel and gasoline subsidies, while in the same month Egypt raised diesel prices by 63% as part of a raft of reforms intended to remove subsidies within five years.

Public energy subsidies in Finland in 2013 were €700 million for fossil energy and €60 million for renewable energy (mainly wood and wind).

United States

Congressional Budget Office estimated allocation of energy-related tax preferences, by type of fuel or technology, 2016
 
According to Congressional Budget Office testimony, roughly three-fourths of the projected cost of tax preferences for energy in 2016 was for renewable energy and energy efficiency: an estimated $10.9 billion was directed toward renewable energy, and $2.7 billion went to energy efficiency or electricity transmission. Fossil fuels accounted for most of the remaining cost of energy-related tax preferences, an estimated $4.6 billion.

According to a 2015 estimate by the Obama administration, the US oil industry benefited from subsidies of about $4.6 billion per year. A 2017 study by researchers at the Stockholm Environment Institute, published in the journal Nature Energy, estimated that nearly half of U.S. oil production would be unprofitable without subsidies.

Allocation of subsidies in the United States

On March 13, 2013, Terry M. Dinan, senior advisor at the Congressional Budget Office, testified before the Subcommittee on Energy of the Committee on Science, Space, and Technology in the U.S. House of Representatives that federal energy tax subsidies would cost $16.4 billion that fiscal year, broken down as follows:
  • Renewable energy: $7.3 billion (45 percent)
  • Energy efficiency: $4.8 billion (29 percent)
  • Fossil fuels: $3.2 billion (20 percent)
  • Nuclear energy: $1.1 billion (7 percent)
In addition, Dinan testified that the U.S. Department of Energy would spend an additional $3.4 billion on financial support for energy technologies and energy efficiency, broken down as follows (the percentage shares in both lists are recomputed in the short sketch after this list):
  • Energy efficiency and renewable energy: $1.7 billion (51 percent)
  • Nuclear energy: $0.7 billion (22 percent)
  • Fossil energy research & development: $0.5 billion (15 percent)
  • Advanced Research Projects Agency—Energy: $0.3 billion (8 percent)
  • Electricity delivery and energy reliability: $0.1 billion (4 percent)
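The arithmetic behind the stated percentages can be checked directly. The following minimal Python sketch (not part of the testimony; the dollar amounts are copied from the two lists above) recomputes the shares. Note that the DOE amounts as listed sum to $3.3 billion rather than $3.4 billion and give slightly different shares, presumably because the figures are rounded to the nearest $0.1 billion.

  # Recompute the percentage shares in Dinan's 2013 testimony from the rounded
  # dollar amounts listed above (all figures in $ billion).
  tax_subsidies = {
      "Renewable energy": 7.3,
      "Energy efficiency": 4.8,
      "Fossil fuels": 3.2,
      "Nuclear energy": 1.1,
  }
  doe_spending = {
      "Energy efficiency and renewable energy": 1.7,
      "Nuclear energy": 0.7,
      "Fossil energy research & development": 0.5,
      "Advanced Research Projects Agency-Energy": 0.3,
      "Electricity delivery and energy reliability": 0.1,
  }

  for label, breakdown in [("Energy tax subsidies", tax_subsidies),
                           ("DOE spending", doe_spending)]:
      total = sum(breakdown.values())
      print(f"{label}: ${total:.1f} billion total")
      for name, amount in breakdown.items():
          # Each category's share of the list's own total, in percent.
          print(f"  {name}: ${amount:.1f} billion ({100 * amount / total:.0f}%)")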
A 2011 study by the consulting firm Management Information Services, Inc. (MISI) estimated the total historical federal subsidies for various energy sources over the years 1950–2010. The study found that oil, natural gas, and coal received $369 billion, $121 billion, and $104 billion (2010 dollars), respectively, or 70% of total energy subsidies over that period. Oil, natural gas, and coal benefited most from percentage depletion allowances and other tax-based subsidies, but oil also benefited heavily from regulatory subsidies such as exemptions from price controls and higher-than-average rates of return allowed on oil pipelines. The MISI report found that non-hydro renewable energy (primarily wind and solar) benefited from $74 billion in federal subsidies, or 9% of the total, largely in the form of tax policy and direct federal expenditures on research and development (R&D). Nuclear power benefited from $73 billion in federal subsidies, 9% of the total, largely in the form of R&D, while hydro power received $90 billion in federal subsidies, 12% of the total. 

Congressional Budget Office testimony delivered March 29, 2017, showing the historic trend of energy-related tax preferences
 
A 2009 study by the Environmental Law Institute assessed the size and structure of U.S. energy subsidies in 2002–08. The study estimated that subsidies to fossil fuel-based sources totaled about $72 billion over this period and subsidies to renewable fuel sources totaled $29 billion. The study did not assess subsidies supporting nuclear energy. 

The three largest fossil fuel subsidies were:
  1. Foreign tax credit ($15.3 billion)
  2. Credit for production of non-conventional fuels ($14.1 billion)
  3. Oil and Gas exploration and development expense ($7.1 billion)
The three largest renewable fuel subsidies were:
  1. Alcohol Credit for Fuel Excise Tax ($11.6 billion)
  2. Renewable Electricity Production Credit ($5.2 billion)
  3. Corn-Based Ethanol ($5.0 billion)
In the United States, the federal government has paid US$74 billion for energy subsidies to support R&D for nuclear power ($50 billion) and fossil fuels ($24 billion) from 1973 to 2003. During this same time frame, renewable energy technologies and energy efficiency received a total of US$26 billion. It has been suggested that a subsidy shift would help to level the playing field and support growing energy sectors, namely solar power, wind power, and bio-fuels. However, many of the "subsidies" available to the oil and gas industries are general business opportunity credits, available to all US businesses (particularly the foreign tax credit mentioned above). The value of industry-specific (oil, gas, and coal) subsidies in 2006 was estimated by the Texas State Comptroller to be $6.25 billion, about 60% of the amount calculated by the Environmental Law Institute. The balance of federal subsidies, which the comptroller valued at $7.4 billion, came from shared credits and deductions, and oil defense (spending on the Strategic Petroleum Reserve, energy infrastructure security, etc.).

Critics allege that the most important subsidies to the nuclear industry have not involved cash payments, but rather the shifting of construction costs and operating risks from investors to taxpayers and ratepayers, burdening them with an array of risks ranging from cost overruns and defaults to accidents and nuclear waste management. Critics claim that this approach distorts market choices, which they believe would otherwise favor less risky energy investments.

Many energy analysts, such as Clint Wilder, Ron Pernick and Lester Brown, have suggested that energy subsidies need to be shifted away from mature and established industries and towards high growth clean energy. They also suggest that such subsidies need to be reliable, long-term and consistent, to avoid the periodic difficulties that the wind industry has had in the United States.

A 2012 study authored by researchers at the Breakthrough Institute, Brookings Institution, and World Resources Institute estimated that between 2009 and 2014 the federal government would spend $150 billion on clean energy through a combination of direct spending and tax expenditures. Renewable electricity (mainly wind, solar, geothermal, hydro, and tidal energy) would account for the largest share of this expenditure, 32.1%, while spending on liquid biofuels would account for the next largest share, 16.1%. Spending on other forms of clean energy, including energy efficiency, electric vehicles and advanced batteries, high-speed rail, grid and transportation electrification, nuclear, and advanced fossil fuel technologies, would account for the remaining 51.8%. Moreover, the report found that, absent federal action, annual spending on clean energy would decline by 75%, from $44.3 billion in 2009 to $11.0 billion in 2014.
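As a quick check on these figures, the following minimal Python sketch (numbers copied from the paragraph above, not from the report itself) converts the percentage shares into dollar amounts and reproduces the projected 75% decline:

  # Figures quoted from the 2012 Breakthrough Institute / Brookings / WRI estimate.
  total_spending = 150.0  # $ billion of federal clean-energy spending, 2009-2014

  shares_percent = {
      "Renewable electricity": 32.1,
      "Liquid biofuels": 16.1,
      "Other clean energy (efficiency, EVs, nuclear, etc.)": 51.8,
  }
  for name, pct in shares_percent.items():
      print(f"{name}: ~${total_spending * pct / 100:.1f} billion")

  # Projected decline in annual clean-energy spending absent further federal action.
  spending_2009, spending_2014 = 44.3, 11.0  # $ billion
  decline = 100 * (spending_2009 - spending_2014) / spending_2009
  print(f"Projected decline, 2009 to 2014: {decline:.0f}%")  # prints 75%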

United States government role in the development of new energy industries

From civilian nuclear power to hydro, wind, solar, and shale gas, the United States federal government has played a central role in the development of new energy industries.

America's nuclear power industry, which currently supplies about 20% of the country's electricity, has its origins in the Manhattan Project to develop atomic weapons during World War II. From 1942 to 1945, the United States invested $20 billion (2003 dollars) into a massive nuclear research and deployment initiative. But the achievement of the first nuclear weapon test in 1945 marked the beginning, not the end, of federal involvement in nuclear technologies. President Dwight D. Eisenhower's “Atoms for Peace” address in 1953 and the 1954 Atomic Energy Act committed the United States to develop peaceful uses for nuclear technology, including commercial energy generation. The new National Laboratory system, established by the Manhattan Project, was maintained and expanded, and the government poured money into nuclear energy research and development. Recognizing that research was not sufficient to spur the development of a nascent, capital-intensive industry, the federal government created financial incentives to spur the deployment of nuclear energy. For example, the 1957 Price Anderson Act limited the liability of nuclear energy firms in case of serious accident and helped firms secure capital with federal loan guarantees. In the favorable environment created by such incentives, more than 100 nuclear plants were built in the United States by 1973.

Commercial wind power, today one of the fastest growing energy sectors, was also enabled through government support. In the 1980s, the federal government pursued two different R&D efforts for wind turbine development. The first was a “big science” effort by NASA and the Department of Energy (DOE) to use U.S. expertise in high-technology research and products to develop new large-scale wind turbines for electricity generation, largely from scratch. A second, more successful R&D effort, sponsored by the DOE, focused on component innovations for smaller turbines that used the operational experience of existing turbines to inform future research agendas. Joint research projects between the government and private firms produced a number of innovations that helped increase the efficiency of wind turbines, including twisted blades and special-purpose airfoils. Publicly funded R&D was coupled with efforts to build a domestic market for new turbines. At the federal level, this included tax credits and the passage of the Public Utilities Regulatory Policy Act (PURPA), which required that utilities purchase power from some small renewable energy generators at avoided cost. Both federal and state support for wind turbine development helped drive costs down considerably, but policy incentives at both the federal and state level were discontinued at the end of the decade. However, after a nearly five-year federal policy hiatus in the late 1980s, the U.S. government enacted new policies to support the industry in the early 1990s. The National Renewable Energy Laboratory (NREL) continued its support for wind turbine R&D, and also launched the Advanced Wind Turbine Program (AWTP). The goal of the AWTP was to reduce the cost of wind power to rates that would be competitive in the U.S. market. Policymakers also introduced new mechanisms to spur demand for new wind turbines and boost the domestic market, including a 1.5 cents per kilowatt-hour tax credit (adjusted over time for inflation) in the 1992 Energy Policy Act. Today the wind industry's main subsidy support comes from the federal production tax credit.

The development of commercial solar power was also dependent on government support. Solar PV technology was born in the United States, when Daryl Chapin, Calvin Fuller, and Gerald Pearson at Bell Labs first demonstrated the silicon solar photovoltaic cell in 1954. The first cells recorded efficiencies of four percent, far lower than the 25 percent efficiencies typical of some silicon crystalline cells today. With the cost out of reach for most applications, developers of the new technology had to look elsewhere for an early market. As it turned out, solar PV did make economic sense in one market segment: aerospace. The United States Army and Air Force viewed the technology as an ideal power source for a top-secret project on earth-orbiting satellites. The government contracted with Hoffman Electronics to provide solar cells for its new space exploration program. The first commercial satellite, the Vanguard I, launched in 1958, was equipped with both silicon solar cells and chemical batteries. By 1965, NASA was using almost a million solar PV cells. Strong government demand and early research support for solar cells paid off in the form of dramatic declines in the cost of the technology and improvements in its performance. From 1956 to 1973, the price of PV cells declined from $300 to $20 per watt. Beginning in the 1970s, as costs were declining, manufacturers began producing solar PV cells for terrestrial applications. Solar PV found a new niche in areas distant from power lines where electricity was needed, such as oil rigs and Coast Guard lighthouses. The government continued to support the industry through the 1970s and early 1980s with new R&D efforts under Presidents Richard Nixon and Gerald Ford, both Republicans, and President Jimmy Carter, a Democrat. As a direct result of government involvement in solar PV development, 13 of the 14 top innovations in PV over the past three decades were developed with the help of federal dollars, nine of which were fully funded by the public sector.

More recently than nuclear, wind, or solar, the development of the shale gas industry and the subsequent shale gas boom in the United States were also enabled through government support. The history of shale gas fracking in the United States was punctuated by the successive developments of massive hydraulic fracturing (MHF), microseismic imaging, horizontal drilling, and other key innovations that, when combined, made the once unreachable energy resource technically recoverable. Along each stage of the innovation pipeline – from basic research to applied R&D to cost-sharing on demonstration projects to tax policy support for deployment – public-private partnerships and federal investments helped push hydraulic fracturing in shale into full commercial competitiveness. Through a combination of federally funded geologic research beginning in the 1970s, public-private collaboration on demonstration projects and R&D priorities, and tax policy support for unconventional technologies, the federal government played a key role in the development of shale gas in the United States.

Investigations have uncovered the crucial role of the government in the development of other energy technologies and industries, including aviation and jet engines, synthetic fuels, advanced natural gas turbines, and advanced diesel internal combustion engines.

Venezuela

In Venezuela, energy subsidies were equivalent to about 8.9 percent of the country's GDP in 2012: fuel subsidies accounted for 7.1 percent and electricity subsidies for 1.8 percent. To fund this, the government spent about 85 percent of its tax revenue on these subsidies. It is estimated that the subsidies have caused Venezuela to consume 20 percent more energy than it otherwise would. The fuel subsidies benefit mainly the richest part of the population, who consume the most energy. The subsidies kept the pump price of gasoline at about US$0.01 per liter from 1996 until President Nicolás Maduro reduced the national subsidy in 2016, raising the price to roughly US$0.60 per liter (6 bolívars per liter in the local currency). Fuel consumption has increased overall since the 1996 policy began, even though oil production has fallen by more than 350,000 barrels a day since 2008. PDVSA, the Venezuelan state oil company, has been losing money on these domestic sales since the policies were enacted. The losses can also be attributed to the 2005 Petrocaribe agreement, under which Venezuela sells petroleum to many surrounding countries at a reduced or preferential price, essentially a subsidy by Venezuela for countries that are party to the agreement.

The subsidizing of fossil fuels and the consequent low price of fuel at the pump have given rise to a large black market: criminal groups smuggle fuel out of Venezuela into adjacent nations (mainly Colombia), where fuel is much more expensive and the profits are correspondingly large. Although this problem is well known in Venezuela and insecurity in the region continues to rise, the state has not yet lowered or eliminated these fossil fuel subsidies.

Russia

Russia is one of the world's energy powerhouses. It holds the world's largest natural gas reserves (27% of the total), the second-largest coal reserves, and the eighth-largest oil reserves. As of 2015, Russia was the world's third-largest energy subsidizer. The country subsidizes electricity and natural gas as well as oil extraction. Approximately 60% of the subsidies go to natural gas, with the remainder spent on electricity (including under-pricing of gas delivered to power stations). For oil extraction, the government grants tax exemptions and duty reductions amounting to about $22 billion a year; some of these also apply to natural gas extraction, though the majority is allocated to oil. In 2013 Russia offered its first subsidies to renewable power generators. Russia's large subsidies are costly, and lowering domestic subsidies has been recommended as a way to help the economy. However, the potential elimination of energy subsidies carries a risk of social unrest, which makes Russian authorities reluctant to remove them.

European Union

In February 2011 and January 2012, the UK Energy Fair group, supported by other organisations and environmentalists, lodged formal complaints with the European Union's Directorate-General for Competition, alleging that the UK government was providing unlawful state aid, in the form of subsidies for the nuclear power industry, in breach of European Union competition law.

One of the largest subsidies is the cap on liabilities for nuclear accidents which the nuclear power industry has negotiated with governments. “Like car drivers, the operators of nuclear plants should be properly insured,” said Gerry Wolff, coordinator of the Energy Fair group. The group calculates that, "if nuclear operators were fully insured against the cost of nuclear disasters like those at Chernobyl and Fukushima, the price of nuclear electricity would rise by at least €0.14 per kWh and perhaps as much as €2.36, depending on assumptions made". According to the most recent statistics, subsidies for fossil fuels in Europe are exclusively allocated to coal (€10 billion) and natural gas (€6 billion). Oil products do not receive any subsidies.
