
Thursday, February 10, 2022

Carbon capture and storage

From Wikipedia, the free encyclopedia

Global proposed vs. implemented annual CO2 sequestration. More than 75% of proposed gas processing projects have been implemented, with corresponding figures for other industrial projects and power plant projects being about 60% and 10%, respectively.

Carbon capture and storage (CCS) or carbon capture and sequestration is the process of capturing carbon dioxide (CO2) before it enters the atmosphere, transporting it, and storing it (carbon sequestration) for centuries or millennia. Usually the CO2 is captured from large point sources, such as a coal-fired power plant, a chemical plant or a biomass power plant, and then stored in an underground geological formation. The aim is to prevent the release of CO2 from heavy industry and thereby mitigate the effects of climate change. Although CO2 has been injected into geological formations for several decades for various purposes, including enhanced oil recovery, the long-term storage of CO2 is a relatively new concept. Carbon capture and utilization (CCU) and CCS are sometimes discussed collectively as carbon capture, utilization, and sequestration (CCUS), because CCS is a relatively expensive process that yields a product with intrinsically low value (i.e. CO2). Carbon capture therefore makes more economic sense when combined with a utilization process in which the cheap CO2 is used to produce high-value chemicals that offset the high costs of capture operations.

CO2 can be captured directly from an industrial source, such as a cement kiln, using a variety of technologies, including absorption, adsorption, chemical looping, membrane gas separation and gas hydration. As of 2020, about one thousandth of global CO2 emissions were captured by CCS. Most projects are industrial.

Storage of the CO2 is envisaged either in deep geological formations, or in the form of mineral carbonates. Pyrogenic carbon capture and storage (PyCCS) is also being researched. Geological formations are currently considered the most promising sequestration sites. The US National Energy Technology Laboratory (NETL) reported that North America has enough storage capacity for more than 900 years' worth of CO2 at current production rates. A general problem is that long-term predictions about submarine or underground storage security are very difficult and uncertain, and there is still the risk that some CO2 might leak into the atmosphere. Opponents point out that many CCS projects have failed to deliver on promised emissions reductions. Additionally, opponents argue that rather than focusing on carbon removal, carbon capture and storage is a justification for indefinite fossil fuel usage disguised as marginal emission reductions. One of the most well-known failures is the FutureGen program, a partnership between the US federal government and coal energy production companies that was intended to demonstrate "clean coal" but never succeeded in producing any carbon-free electricity from coal.

Capture

Capturing CO2 is most cost-effective at point sources, such as large carbon-based energy facilities, industries with major CO2 emissions (e.g. cement production, steelmaking), natural gas processing, synthetic fuel plants and fossil fuel-based hydrogen production plants. Extracting CO2 from air is possible, although the lower concentration of CO2 in air compared to combustion sources complicates the engineering and therefore makes the process more expensive.

Impurities in CO2 streams, such as sulfur compounds and water, can have a significant effect on their phase behavior and could substantially increase pipeline and well corrosion. Where such impurities exist, especially with air capture, a scrubbing separation process is needed to clean the flue gas first. It is possible to capture approximately 65% of the CO2 embedded in the flue gas and sequester it in a solid form.

Broadly, three different technologies exist: post-combustion, pre-combustion, and oxyfuel combustion:

  • In post-combustion capture, the CO2 is removed after combustion of the fossil fuel; this is the scheme that would apply to fossil-fuel power plants. CO2 is captured from flue gases at power stations or other point sources. The technology is well understood and is currently used in other industrial applications, although at a smaller scale than required in a commercial-scale station. Post-combustion capture is most popular in research because fossil fuel power plants can be retrofitted to include CCS technology in this configuration.
  • The technology for pre-combustion capture is widely applied in fertilizer, chemical, gaseous fuel (H2, CH4), and power production. In these cases, the fossil fuel is partially oxidized, for instance in a gasifier. The CO from the resulting syngas (CO and H2) reacts with added steam (H2O) and is shifted into CO2 and H2. The resulting CO2 can be captured from a relatively pure exhaust stream, and the H2 can be used as fuel; the CO2 is thus removed before combustion. Several advantages and disadvantages apply versus post-combustion capture. Because the CO2 is captured before the gas expands to atmospheric pressure, i.e. from pressurized gas, this approach is standard in almost all industrial CO2 capture processes, at the same scale as required for power plants.
  • In oxy-fuel combustion the fuel is burned in pure oxygen instead of air. To limit the resulting flame temperatures to levels common during conventional combustion, cooled flue gas is recirculated and injected into the combustion chamber. The flue gas consists mainly of CO2 and water vapour, the latter of which is condensed through cooling. The result is an almost pure CO2 stream. Power plant processes based on oxyfuel combustion are sometimes referred to as "zero emission" cycles, because the CO2 stored is not a fraction removed from the flue gas stream (as in the cases of pre- and post-combustion capture) but the flue gas stream itself. A certain fraction of the CO2 inevitably ends up in the condensed water. To warrant the label "zero emission" the water would thus have to be treated or disposed of appropriately.

Separation technologies

The major technologies proposed for carbon capture are:

Absorption, or carbon scrubbing with amines is the dominant capture technology. It is the only carbon capture technology so far that has been used industrially.

CO2 adsorbs to a MOF (metal–organic framework) through physisorption or chemisorption, depending on the porosity and selectivity of the MOF, leaving behind a CO2-poor gas stream. The CO2 is then stripped off the MOF using temperature swing adsorption (TSA) or pressure swing adsorption (PSA) so the MOF can be reused. Adsorbents and absorbents require regeneration steps in which the CO2 is removed from the sorbent or solution that collected it from the flue gas, so that the sorbent or solution can be reused. Monoethanolamine (MEA) solutions, the leading amine for capturing CO2, have a heat capacity between 3–4 J/(g·K) since they are mostly water. Higher heat capacities add to the energy penalty in the solvent regeneration step. Thus, to optimize a MOF for carbon capture, low heat capacities and heats of adsorption are desired. Additionally, high working capacity and high selectivity are desirable in order to capture as much CO2 as possible. However, an energy trade-off complicates selectivity and energy expenditure: as the amount of CO2 captured increases, the energy, and therefore cost, required to regenerate increases. A drawback of MOFs for CCS is the limitation imposed by their chemical and thermal stability. Research is attempting to optimize MOF properties for CCS. Metal reservoirs are another limiting factor.
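To see why the heat capacity mentioned above matters, the sensible-heat portion of sorbent regeneration can be sketched with rough, illustrative numbers; the circulation rates and temperature swing below are assumptions, not plant data:

```python
# Rough estimate of the sensible-heat portion of solvent regeneration,
# illustrating why a high heat capacity adds to the energy penalty.
# All numbers are illustrative assumptions, not measured plant data.

def sensible_heat_per_kg_co2(cp_j_per_g_k, delta_t_k, solvent_per_co2_kg):
    """Heat (MJ) to warm the solvent circulated per kg of CO2 captured."""
    return cp_j_per_g_k * delta_t_k * solvent_per_co2_kg * 1000 / 1e6  # J -> MJ

# Assumed: aqueous MEA, cp ~3.8 J/(g*K), heated ~40 K in the stripper,
# ~20 kg solvent circulated per kg CO2 (illustrative loading).
mea = sensible_heat_per_kg_co2(3.8, 40, 20)

# A hypothetical low-heat-capacity solid sorbent, cp ~1.0 J/(g*K),
# 10 kg sorbent per kg CO2 over the same temperature swing.
mof = sensible_heat_per_kg_co2(1.0, 40, 10)

print(f"aqueous amine sensible heat ≈ {mea:.1f} MJ/kg CO2")
print(f"low-cp sorbent sensible heat ≈ {mof:.1f} MJ/kg CO2")
```

Even in this crude comparison, the mostly-water amine solution spends several times more heat simply warming the solvent than a hypothetical low-heat-capacity sorbent would, before the heat of desorption is even counted.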

About two thirds of CCS cost is attributed to capture, making it the limit to CCS deployment. Optimizing capture would significantly increase CCS feasibility since the transport and storage steps of CCS are rather mature.

An alternate method is chemical looping combustion (CLC). Looping uses a metal oxide as a solid oxygen carrier. Metal oxide particles react with a solid, liquid or gaseous fuel in a fluidized bed combustor, producing solid metal particles and a mixture of CO2 and water vapor. The water vapor is condensed, leaving pure CO2 , which can then be sequestered. The solid metal particles are circulated to another fluidized bed where they react with air, producing heat and regenerating metal oxide particles for return to the combustor. A variant of chemical looping is calcium looping, which uses the alternating carbonation and then calcination of a calcium oxide based carrier.

CCS could reduce CO2 emissions from smokestacks by 85–90% or more, but it has no net effect on CO2 emissions due to the mining and transport of coal (something that is frequently overlooked when considering “green” alternatives such as batteries). It will actually "increase such emissions and of air pollutants per unit of net delivered power and will increase all ecological, land-use, air-pollution, and water-pollution impacts from coal mining, transport, and processing, because CCS requires 25% more energy, thus 25% more coal combustion, than does a system without CCS".

A 2019 study found CCS plants to be less effective than renewable electricity. The electrical energy returned on energy invested (EROEI) ratios of both production methods were estimated, accounting for their operational and infrastructural energy costs. Renewable electricity production included solar and wind with sufficient energy storage, plus dispatchable electricity production. Thus, rapid expansion of scalable renewable electricity and storage would be preferable over fossil-fuel with CCS. The study did not consider whether both options could be pursued in parallel.

In 2021 the company High Hopes proposed using high-altitude balloons to capture CO2 cryogenically, using hydrogen to cool the already cold high-altitude air enough to produce dry ice that is returned to earth for sequestration.

In sorption-enhanced water gas shift (SEWGS) technology, a pre-combustion carbon capture process based on solid adsorption is combined with the water gas shift reaction (WGS) to produce a high-pressure hydrogen stream. The CO2 stream produced can be stored or used for other industrial processes.

Transport

After capture, the CO2 must be transported to suitable storage sites. Pipelines are the cheapest form of transport. Ships can be utilized where pipelines are infeasible, and for long enough distances ships may be cheaper than a pipeline. These methods are used for transporting CO2 for other applications. Rail and tanker truck cost about twice as much as pipelines or ships.

For example, approximately 5,800 km of CO2 pipelines operated in the US in 2008, and a 160 km pipeline in Norway, used to transport CO2 to oil production sites where it is injected into older fields to extract oil. This injection is called enhanced oil recovery. Pilot programs are in development to test long-term storage in non-oil producing geologic formations. In the United Kingdom, the Parliamentary Office of Science and Technology envisages pipelines as the main UK transport.

Sequestration

Various approaches have been conceived for permanent storage. These include gaseous storage in deep geological formations (including saline formations and exhausted gas fields), and solid storage by reaction of CO2 with metal oxides to produce stable carbonates. It was once suggested that CO2 could be stored in the oceans, but this would exacerbate ocean acidification and was banned under the London and OSPAR conventions.

Geological storage

Geo-sequestration involves injecting CO2, generally in supercritical form, into underground geological formations. Oil fields, gas fields, saline formations, unmineable coal seams, and saline-filled basalt formations have been suggested as storage sites. Physical (e.g., highly impermeable caprock) and geochemical trapping mechanisms prevent the CO2 from escaping to the surface.

Unmineable coal seams can be used because CO2 molecules attach to the coal surface. Technical feasibility depends on the coal bed's permeability. As it adsorbs CO2, the coal releases previously adsorbed methane, which can be recovered (enhanced coal bed methane recovery). Methane revenues can offset a portion of the cost, although burning the resultant methane produces another stream of CO2 to be sequestered.

Saline formations contain mineralized brines and have yet to produce benefit to humans, although saline aquifers have occasionally been used for storage of chemical waste. The main advantage of saline aquifers is their large potential storage volume and their ubiquity. The major disadvantage is that relatively little is known about them. To keep the cost of storage acceptable, geophysical exploration may be limited, resulting in larger uncertainty about the aquifer structure. Unlike storage in oil fields or coal beds, no side product offsets the storage cost. Trapping mechanisms such as structural trapping, residual trapping, solubility trapping and mineral trapping may immobilize the CO2 underground and reduce leakage risks.

Enhanced oil recovery

CO2 is occasionally injected into an oil field as an enhanced oil recovery technique, but because CO2 is released when the oil is burned, it is not carbon neutral.

Algae/bacteria

CO2 can be physically supplied to algae or bacteria that can degrade it. The CO2-metabolizing bacterium Clostridium thermocellum would ultimately be an ideal organism to exploit for this purpose.

Mineral storage

CO2 can react exothermically with metal oxides to produce stable carbonates (e.g. calcite, magnesite). This process (CO2-to-stone) occurs naturally over periods of years and is responsible for much surface limestone. Olivine is one such candidate mineral. The reaction rate can be accelerated with a catalyst, by increasing temperatures and/or pressures, or by mineral pre-treatment, although these methods can require additional energy. The IPCC estimates that a power plant equipped with CCS using mineral storage would need 60–180% more energy than one without. Theoretically, up to 22% of crustal mineral mass is able to form carbonates.

Principal metal oxides of Earth's crust
Earthen oxide   Percent of crust   Carbonate   Enthalpy change (kJ/mol)
SiO2                 59.71         —           —
Al2O3                15.41         —           —
CaO                   4.90         CaCO3       −179
MgO                   4.36         MgCO3       −118
Na2O                  3.55         Na2CO3      −322
FeO                   3.52         FeCO3       −85
K2O                   2.80         K2CO3       −393.5
Fe2O3                 2.63         FeCO3       112
All carbonate-forming oxides: 21.76% of crust
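As a worked example of the figures above, the CaO row (ΔH = −179 kJ/mol) implies the following mass and heat balance per tonne of CO2 mineralized; this is a stoichiometric sketch that ignores reaction kinetics and any pre-treatment energy:

```python
# Worked example of mineral carbonation from the table above:
# CaO + CO2 -> CaCO3, ΔH = -179 kJ/mol (exothermic).

M_CO2, M_CAO = 44.01, 56.08   # molar masses, g/mol
DH = 179                      # magnitude of heat released, kJ/mol

mol_per_tonne = 1e6 / M_CO2               # mol CO2 in one tonne
cao_tonnes = mol_per_tonne * M_CAO / 1e6  # tonnes CaO required
heat_gj = mol_per_tonne * DH / 1e6        # GJ released per tonne CO2

print(f"CaO needed per tonne CO2: {cao_tonnes:.2f} t")
print(f"heat released: {heat_gj:.1f} GJ per tonne CO2")
```

The calculation shows that mineralizing each tonne of CO2 needs more than a tonne of reactive oxide, which is why readily available fine-grained feedstocks such as mine tailings matter for this route.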

Ultramafic mine tailings are a readily available source of fine-grained metal oxides that can serve this purpose. Accelerating passive CO2 sequestration via mineral carbonation may be achieved through microbial processes that enhance mineral dissolution and carbonate precipitation.

Cost

Cost is a significant factor affecting CCS. The cost of CCS, plus any subsidies, must be less than the expected cost of emitting CO2 for a project to be considered economically favorable.

Capturing CO2 requires energy, and if that energy comes from fossil fuels then more fuel must be burned to produce a given net amount. In other words, the cost of CO2 captured does not fully account for the reduced efficiency of the plant with CCS. For this reason the cost of CO2 captured is always lower than the cost of CO2 avoided and does not describe the full cost of CCS. Some sources report the increase in the cost of electricity as a way to evaluate the economic impact of CCS.
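The distinction between cost per tonne captured and cost per tonne avoided can be made concrete with a small sketch; all plant figures below are illustrative assumptions, not data from a real plant:

```python
# Illustrative distinction between "cost of CO2 captured" and "cost of
# CO2 avoided". All figures are assumptions for demonstration only.

coe_ref, coe_ccs = 60.0, 90.0   # cost of electricity, $/MWh, without/with CCS
e_ref, e_ccs = 0.80, 0.12       # emitted CO2, t per net MWh
captured = 0.75                 # CO2 captured per net MWh of the CCS plant, t

cost_captured = (coe_ccs - coe_ref) / captured        # $/t captured
cost_avoided = (coe_ccs - coe_ref) / (e_ref - e_ccs)  # $/t avoided

# Because the CCS plant burns extra fuel per net MWh (efficiency loss),
# tonnes avoided < tonnes captured, so cost avoided > cost captured.
print(f"cost per tonne captured: ${cost_captured:.0f}")
print(f"cost per tonne avoided:  ${cost_avoided:.0f}")
```

With these assumed numbers the cost per tonne avoided comes out above the cost per tonne captured, which is the point of the paragraph above: quoting only the capture cost understates the full cost of CCS.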

CCS technology is expected to use between 10 and 40 percent of the energy produced by a power station. Energy for CCS is called an energy penalty. It has been estimated that about 60% of the penalty originates from the capture process, 30% comes from compression of CO2, and the remaining 10% comes from pumps and fans. CCS would increase the fuel requirement of a gas plant by about 15%. The cost of this extra fuel, as well as storage and other system costs, is estimated to increase the cost of energy from a power plant with CCS by 30–60%.
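The percentages quoted above can be combined into a back-of-the-envelope split; the overall 25% penalty used here is an assumed figure within the quoted 10–40% range:

```python
# Splitting an assumed overall energy penalty into the fractions quoted
# above (60% capture, 30% compression, 10% pumps and fans).

total_penalty = 0.25  # assumed fraction of gross output consumed by CCS

shares = {"capture": 0.60, "compression": 0.30, "pumps_and_fans": 0.10}
breakdown = {name: share * total_penalty for name, share in shares.items()}

for name, frac in breakdown.items():
    print(f"{name}: {frac:.3f} of gross output")

# Extra fuel that must be burned to restore the original net output:
extra_fuel = total_penalty / (1 - total_penalty)
print(f"extra fuel burn: {extra_fuel:.1%}")
```

Note how a 25% penalty on gross output translates into a larger (~33%) increase in fuel burned for the same net output, which is why penalty figures must be read carefully.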

Constructing CCS units is capital intensive. The additional costs of a large-scale CCS demonstration project are estimated to be €0.5–1.1 billion per project over the project lifetime. Other applications are possible. CCS trials for coal-fired plants in the early 21st century were economically unviable in most countries, including China, in part because revenue from enhanced oil recovery collapsed with the 2020 oil price collapse.

As of 2018 a carbon price of at least 100 euros per tonne CO2 was estimated to make industrial CCS viable together with carbon tariffs. By 2021 the EU Allowance had only reached 60 euros and the Carbon Border Adjustment Mechanism had not yet been implemented. However, a company making small modules claims it can get well below that price through mass production by 2022.

According to UK government estimates made in the late 2010s, carbon capture (without storage) would add about 7 GBP per MWh to the cost of electricity from a gas-fired power plant by 2025; however, most CO2 will need to be stored, so in total the increase in cost for gas- or biomass-generated electricity is around 50%.

Business models

Possible business models for industrial carbon capture include:

  • Contract for Difference (CfD) with CO2 certificate strike price
  • Cost Plus open book
  • Regulated Asset Base (RAB)
  • Tradeable tax credits for CCS
  • Tradeable CCS certificates + obligation
  • Creation of low carbon market

Governments have provided various types of funding for CCS demonstration projects, including tax credits, allocations and grants.

Clean Development Mechanism

One alternative could be funding through the Clean Development Mechanism of the Kyoto Protocol. At COP16 in 2010, the Subsidiary Body for Scientific and Technological Advice, at its thirty-third session, issued a draft document recommending the inclusion of CCS in geological formations in Clean Development Mechanism project activities. At COP17 in Durban, a final agreement was reached enabling CCS projects to receive support through the Clean Development Mechanism.

Environmental effects

Alkaline solvents

CO2 can be captured with alkaline solvents at low temperatures in the absorber and released at higher temperatures in a desorber. Chilled ammonia CCS plants emit ammonia. "Functionalized ammonia" emits less ammonia, but amines may form secondary amines, which emit volatile nitrosamines through a side reaction with nitrogen dioxide, present in any flue gas. Alternative amines with little to no vapor pressure can avoid these emissions. Nevertheless, practically 100% of the remaining sulfur dioxide from the plant is washed out of the flue gas, along with dust/ash.

Gas-fired power plants

The extra energy requirements deriving from CCS for natural gas combined cycle (NGCC) plants range from 11 to 22%. Fuel use and environmental problems (e.g., methane emissions) arising from gas extraction increase accordingly. Plants equipped with selective catalytic reduction systems for nitrogen oxides produced during combustion require proportionally greater amounts of ammonia.

Coal-fired power plants

A 2020 study concluded that half as much CCS might be installed in coal-fired plants as in gas-fired: these would be mainly in China and India.

For super-critical pulverized coal (PC) plants, CCS' energy requirements range from 24 to 40%, while for coal-based gasification combined cycle (IGCC) systems it is 14–25%. Fuel use and environmental problems arising from coal extraction increase accordingly. Plants equipped with flue-gas desulfurization (FGD) systems for sulfur dioxide control require proportionally greater amounts of limestone, and systems equipped with selective catalytic reduction systems for nitrogen oxides produced during combustion require proportionally greater amounts of ammonia.

Leakage

Long term retention

The IPCC estimates that leakage risks at properly managed sites are comparable to those associated with current hydrocarbon activity. It recommends that limits be set on the amount of leakage that can take place. However, this finding is contested given the lack of experience. CO2 could be trapped for millions of years, and although some leakage may occur, appropriate storage sites are likely to retain over 99% for over 1,000 years.

Mineral storage is not regarded as presenting any leakage risks.

Norway's Sleipner gas field is the oldest industrial-scale retention project. An environmental assessment conducted after ten years of operation concluded that geosequestration was the most definite permanent geological storage method:

Available geological information shows absence of major tectonic events after the deposition of the Utsira formation [saline reservoir]. This implies that the geological environment is tectonically stable and a site suitable for CO2 storage. The solubility trapping [is] the most permanent and secure form of geological storage.

In March 2009 StatoilHydro issued a study documenting the slow spread of CO2 in the formation after more than 10 years operation.

Gas leakage into the atmosphere may be detected via atmospheric gas monitoring, and can be quantified directly via eddy covariance flux measurements.

Sudden leakage hazards

Transmission pipelines may leak or rupture. Pipelines can be fitted with remotely controlled valves that can limit the release quantity to one pipe section. For example, a severed 19" pipeline section 8 km long could release its 1,300 tonnes in about 3–4 min. At the storage site, the injection pipe can be fitted with non-return valves to prevent an uncontrolled release from the reservoir in case of upstream pipeline damage.
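The 1,300-tonne inventory quoted above can be sanity-checked from the pipe geometry; the dense-phase CO2 density used here is an assumption:

```python
import math

# Sanity check of the figure quoted above: CO2 inventory of a severed
# 19-inch pipeline section 8 km long. Dense-phase CO2 density is assumed.

diameter_m = 19 * 0.0254  # 19 inches in metres
length_m = 8_000
density = 900             # kg/m3, assumed dense-phase (supercritical) CO2

volume = math.pi * (diameter_m / 2) ** 2 * length_m  # internal volume, m3
tonnes = volume * density / 1000                     # CO2 mass, tonnes

print(f"internal volume ≈ {volume:.0f} m3")
print(f"CO2 inventory ≈ {tonnes:.0f} t")  # consistent with ~1,300 t
```

The result lands close to the quoted 1,300 tonnes, supporting the plausibility of the release scenario described in the text.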

Large-scale releases present asphyxiation risk. In the 1953 Menzengraben mining accident, several thousand tonnes were released and asphyxiated a person 300 meters away. Malfunction of a CO2 industrial fire suppression system in a large warehouse released 50 t CO2 after which 14 people collapsed on the nearby public road. In the Berkel en Rodenrijs incident in December 2008 a modest release from a pipeline under a bridge killed some ducks sheltering there.

Monitoring

Monitoring allows leak detection with enough warning to minimize the amount lost, and to quantify the leak size. Monitoring can be done at both the surface and subsurface levels.

Subsurface

Subsurface monitoring can directly and/or indirectly track the reservoir's status. One direct method involves drilling deep enough to collect a sample. This drilling can be expensive due to the rock's physical properties. It also provides data only at a specific location.

One indirect method sends sound or electromagnetic waves into the reservoir, which reflect back for interpretation. This approach provides data over a much larger region, although with less precision.

Both direct and indirect monitoring can be done intermittently or continuously.

Seismic

Seismic monitoring is a type of indirect monitoring. It is done by creating seismic waves either at the surface using a seismic vibrator, or inside a well using a spinning eccentric mass. These waves propagate through geological layers and reflect back, creating patterns that are recorded by seismic sensors placed on the surface or in boreholes. It can identify migration pathways of the CO2 plume.

Examples of seismic monitoring of geological sequestration are the Sleipner sequestration project, the Frio CO2 injection test and the CO2CRC Otway Project. Seismic monitoring can confirm the presence of CO2 in a given region and map its lateral distribution, but is not sensitive to the concentration.

Tracer

Organic chemical tracers, containing neither radioactive nor cadmium components, can be used during the injection phase of a CCS project in which CO2 is injected into an existing oil or gas field, whether for EOR, pressure support or storage. Tracers and methodologies are compatible with CO2 while remaining unique and distinguishable from the CO2 itself and from other molecules present in the subsurface. Using laboratory methods with extremely high tracer detectability, regular sampling at the producing wells will detect whether injected CO2 has migrated from the injection point to the producing well. A small tracer amount is therefore sufficient to monitor large-scale subsurface flow patterns. For this reason, tracer methodology is well suited to monitoring the state and possible movement of CO2 in CCS projects, and can act as an assurance that the CO2 is contained in the desired subsurface location. In the past, this technology has been used to monitor and study movements in CCS projects in Algeria (Mathieson et al., "In Salah CO2 Storage JIP: CO2 sequestration monitoring and verification technologies applied at Krechba, Algeria", Energy Procedia 4:3596–3603), in the Netherlands (Vandeweijer et al., "Monitoring the CO2 injection site: K12B", Energy Procedia 4 (2011) 5471–5478) and in Norway (Snøhvit).

Surface

Eddy covariance is a surface monitoring technique that measures the flux of CO2 from the ground's surface. It involves measuring CO2 concentrations as well as vertical wind velocities using an anemometer. This provides a measure of the vertical CO2 flux. Eddy covariance towers could potentially detect leaks, after accounting for the natural carbon cycle, such as photosynthesis and plant respiration. An example of eddy covariance techniques is the Shallow Release test. Another similar approach is to use accumulation chambers for spot monitoring. These chambers are sealed to the ground with an inlet and outlet flow stream connected to a gas analyzer. They also measure vertical flux. Monitoring a large site would require a network of chambers.
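The covariance calculation at the heart of eddy covariance can be sketched in a few lines; the wind and concentration samples below are synthetic, not field data:

```python
# Minimal sketch of an eddy-covariance flux estimate: the vertical CO2
# flux is the time-averaged covariance of vertical wind speed and CO2
# concentration fluctuations. Sample data below are synthetic.

def eddy_flux(w, c):
    """Covariance of vertical wind w (m/s) and CO2 density c (mg/m3),
    giving a flux in mg m^-2 s^-1 (positive = upward transport)."""
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    return sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / n

# Synthetic high-frequency samples: updrafts carry slightly CO2-enriched
# air, as they would above a surface leak.
w = [0.3, -0.2, 0.5, -0.4, 0.1, -0.3]
c = [705.0, 699.0, 708.0, 697.0, 702.0, 698.0]

print(f"upward CO2 flux ≈ {eddy_flux(w, c):.2f} mg m^-2 s^-1")
```

A sustained positive flux beyond what the local carbon cycle explains would be the signature a monitoring tower looks for when screening a storage site for leaks.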

InSAR

InSAR monitoring involves a satellite sending signals down to the Earth's surface, where they are reflected back to the satellite's receiver, allowing the satellite to measure the distance to that point. CO2 injection into deep sublayers of geological sites creates high pressures. These pressurized layers affect the layers above and below them, changing the surface landscape. In areas of stored CO2, the ground's surface often rises due to the high pressures, and these changes correspond to a measurable change in the distance from the satellite.

Carbon capture and utilization (CCU)

Comparison between sequestration and utilization of captured carbon dioxide

Carbon capture and utilization (CCU) is the process of capturing carbon dioxide (CO2) to be recycled for further usage. Carbon capture and utilization may offer a response to the global challenge of significantly reducing greenhouse gas emissions from major stationary (industrial) emitters. CCU differs from carbon capture and storage (CCS) in that CCU neither aims at nor results in permanent geological storage of carbon dioxide. Instead, CCU aims to convert the captured carbon dioxide into more valuable substances or products, such as plastics, concrete or biofuel, while retaining the carbon neutrality of the production processes.

Captured CO2 can be converted to several products: one group is alcohols, such as methanol, for use as biofuels and other alternative and renewable sources of energy. Other commercial products include plastics, concrete and reactants for various chemical syntheses. Some of these chemicals can in turn be transformed back into electricity, making CO2 not only a feedstock but also an ideal energy carrier.

Although CCU does not add net carbon to the atmosphere, there are several important considerations. Because CO2 is a thermodynamically stable form of carbon, manufacturing products from it is energy intensive. The availability of other raw materials to create a product should also be considered before investing in CCU.

Considering the different potential options for capture and utilization, research suggests that those involving chemicals, fuels and microalgae have limited potential for CO2 removal, while those that involve construction materials and agricultural use can be more effective.

The profitability of CCU depends partly on the carbon price of CO2 being released into the atmosphere. Using captured CO2 to create useful commercial products could make carbon capture financially viable.

Social acceptance

Multiple studies indicate that risk and benefit perception are the most essential components of social acceptance.

Risk perception is mostly related to the concerns on its safety issues in terms of hazards from its operations and the possibility of CO2 leakage which may endanger communities, commodities, and the environment in the vicinity of the infrastructure. Other perceived risks relate to tourism and property values.

People who are already affected by climate change, such as drought, tend to be more supportive of CCS. Locally, communities are sensitive to economic factors, including job creation, tourism or related investment.

Experience is another relevant feature. Several field studies concluded that people already involved in, or accustomed to, industry are likely to accept the technology. In the same way, communities that have been negatively affected by any industrial activity are also less supportive of CCS.

Few members of the public know about CCS. This can allow misconceptions that lead to less approval. No strong evidence links knowledge of CCS and public acceptance. However, one study found that communicating information about monitoring tends to have a negative impact on attitudes. Conversely, approval seems to be reinforced when CCS is compared to natural phenomena.

Due to the lack of knowledge, people rely on organizations that they trust. In general, non-governmental organizations and researchers experience higher trust than stakeholders and governments. Opinions amongst NGOs are mixed. Moreover, the link between trust and acceptance is at best indirect. Instead, trust has an influence on the perception of risks and benefits.

CCS is embraced by the shallow ecology worldview, which promotes the search for solutions to the effects of climate change in lieu of, or in addition to, addressing the causes. This involves the use of advancing technology, and CCS acceptance is common among techno-optimists. CCS is an "end-of-pipe" solution that reduces atmospheric CO2 instead of minimizing the use of fossil fuels.

On 21 January 2021, Elon Musk announced he was donating $100m for a prize for best carbon capture technology.

Environmental justice

Carbon capture facilities are often designed to be located near existing oil and gas infrastructure. In areas such as the Gulf coast, new facilities would exacerbate already existing industrial pollution, putting communities of color at greater risk.

A 2021 DeSmog Blog story highlighted, "CCS hubs are likely be sites in communities already being impacted by the climate crisis like Lake Charles and those along the Mississippi River corridor, where most of the state carbon pollution is emitted from fossil fuel power plants. Exxon, for example, is backing a carbon storage project in Houston’s shipping channel, another environmental justice community."

Political debate

CCS has been discussed by political actors at least since the start of the UNFCCC negotiations at the beginning of the 1990s, and remains a very divisive issue. CCS was included in the Kyoto Protocol, and this inclusion was a precondition for the signing of the treaty by the United States, Norway, Russia and Canada.

Some environmental groups raised concerns over leakage given the long storage time required, comparing CCS to storing radioactive waste from nuclear power stations.

Other controversies arose from the use of CCS by policy makers as a tool to fight climate change. In the IPCC’s Fourth Assessment Report in 2007, a possible pathway to keep the increase of global temperature below 2 °C included the use of negative emission technologies (NETs).

Carbon emission status-quo

Opponents claimed that CCS could legitimize the continued use of fossil fuels, as well as obviate commitments on emission reduction.

Norway's example shows that CCS and other carbon removal technologies gained traction partly because they allowed the country to pursue its interests in the petroleum industry. Norway was a pioneer in emission mitigation and established a CO2 tax in 1991. However, strong growth in Norway's petroleum sector made domestic emission cuts increasingly difficult throughout the 1990s, and the country's successive governments struggled to pursue ambitious mitigation policies. The compromise was to pursue ambitious emission-cut targets without disrupting the economy, achieved by relying extensively on the Kyoto Protocol's flexible mechanisms regarding carbon sinks, whose scope could extend beyond national borders.

Environmental NGOs

Environmental NGOs are not in widespread agreement about CCS as a potential climate mitigation tool.

The main disagreement among NGOs is whether CCS will reduce CO2 emissions or just perpetuate the use of fossil fuels.

For instance, Greenpeace is strongly against CCS. According to the organization, the use of the technology will keep the world dependent on fossil fuels. In 2008, Greenpeace published "False Hope: Why Carbon Capture and Storage Won't Save the Climate" to explain its position. The solution it advocates is the reduction of fossil fuel usage. Greenpeace claimed that CCS could lead to a doubling of coal plant costs.

On the other hand, BECCS is used in some IPCC scenarios to help meet mitigation targets. Adopting the IPCC argument that CO2 emissions need to be reduced by 2050 to avoid dramatic consequences, the Bellona Foundation justified CCS as a mitigation action. They claimed fossil fuels are unavoidable for the near term and consequently, CCS is the quickest way to reduce CO2 emissions.

Example projects

According to the Global CCS Institute, in 2020 there was about 40 million tons CO2 per year capacity of CCS in operation and 50 million tons per year in development. In contrast, the world emits about 38 billion tonnes of CO2 every year, so CCS captured about one thousandth of the 2020 CO2 emissions.
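The "one thousandth" figure follows directly from the numbers above; a quick sanity check of the arithmetic (a sketch using only the figures quoted in the text):

```python
# Global CCS capture capacity vs. global CO2 emissions, 2020 (figures from the text)
ccs_operating_mt = 40            # Mt CO2/yr of CCS capacity in operation
global_emissions_mt = 38_000     # ~38 billion tonnes = 38,000 Mt CO2/yr emitted

fraction = ccs_operating_mt / global_emissions_mt
print(f"{fraction:.4f}")  # ~0.0011, i.e. roughly one thousandth of emissions
```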

Algeria

In Salah injection

In Salah was an operational onshore gas field with CO2 injection. CO2 was separated from produced gas and reinjected into the Krechba geologic formation at a depth of 1,900m. Since 2004, about 3.8 Mt of CO2 has been captured during natural gas extraction and stored. Injection was suspended in June 2011 due to concerns about the integrity of the seal, fractures and leakage into the caprock, and movement of CO2 outside of the Krechba hydrocarbon lease.

NET Power Facility, La Porte, Texas

Australia

In the early 2020s the government allocated over A$300 million for CCS both onshore and offshore.

Canada

Canadian governments committed $1.8 billion to fund CCS projects over the 2008–2018 period. The main programs are the federal government's Clean Energy Fund, Alberta's Carbon Capture and Storage fund, and programs of the governments of Saskatchewan, British Columbia, and Nova Scotia. Canada works closely with the United States through the U.S.–Canada Clean Energy Dialogue launched by the Obama administration in 2009.

Alberta

Alberta committed $170 million in 2013/2014 – and a total of $1.3 billion over 15 years – to fund two large-scale CCS projects.

The CAN$1.2 billion Alberta Carbon Trunk Line Project (ACTL), pioneered by Enhance Energy, became fully operational in June 2020. It is now the world's largest carbon capture and storage system, consisting of a 240 km pipeline that collects industrial CO2 emissions from the Agrium fertilizer plant and the North West Sturgeon Refinery in Alberta. The captured CO2 is then delivered to the mature Clive oil reservoir for use in enhanced oil recovery (EOR) and permanent storage. At full capacity, the system can capture 14.6 million tonnes of CO2 per year, equivalent to the emissions of more than 2.6 million cars.
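The cars comparison can be back-checked from the figures given; the per-car emission rate it implies is an assumption of the source, not a measured value:

```python
# Back-of-envelope check of the ACTL "2.6 million cars" equivalence (source figures)
capacity_t_per_year = 14.6e6     # 14.6 Mt CO2/yr at full pipeline capacity
cars_equivalent = 2.6e6          # "2.6 million cars plus"

per_car_t = capacity_t_per_year / cars_equivalent
print(f"{per_car_t:.1f} t CO2 per car per year")  # ~5.6 t/yr implied per car
```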

The Quest Carbon Capture and Storage Project was developed by Shell for use in the Athabasca Oil Sands Project. It is cited as being the world's first commercial-scale CCS project. Construction began in 2012 and ended in 2015. The capture unit is located at the Scotford Upgrader in Alberta, Canada, where hydrogen is produced to upgrade bitumen from oil sands into synthetic crude oil. The steam methane units that produce the hydrogen emit CO2 as a byproduct. The capture unit captures the CO2 from the steam methane unit using amine absorption technology, and the captured CO2 is then transported to Fort Saskatchewan where it is injected into a porous rock formation called the Basal Cambrian Sands. From 2015 to 2018, the project stored 3 Mt CO2 at a rate of 1 Mtpa.

Saskatchewan

Boundary Dam Power Station Unit 3 Project

Boundary Dam Power Station, owned by SaskPower, is a coal-fired station originally commissioned in 1959. In 2010, SaskPower committed to retrofitting the lignite-fired Unit 3 with a carbon capture unit; the project was completed in 2014. The retrofit used a post-combustion amine absorption technology. The captured CO2 was to be sold to Cenovus for enhanced oil recovery (EOR) in the Weyburn field, and any CO2 not used for EOR was planned to be used by the Aquistore project and stored in deep saline aquifers. Many complications kept Unit 3 and this project from operating as much as expected, but between August 2017 and August 2018 Unit 3 was online an average of 65% of each day. The project has a nameplate capture capacity of 1 Mtpa. The other units are to be phased out by 2024, and the future of the one retrofitted unit is unclear.

Great Plains Synfuel Plant and Weyburn-Midale Project

The Great Plains Synfuel Plant, owned by Dakota Gas, is a coal gasification operation that produces synthetic natural gas and various petrochemicals from coal. The plant began operation in 1984, while CCS began in 2000, when Dakota Gas retrofitted the plant and planned to sell the CO2 to Cenovus and Apache Energy for EOR in the Weyburn and Midale fields in Canada. The Midale fields are injected with 0.4 Mtpa and the Weyburn fields with 2.4 Mtpa, for a total injection capacity of 2.8 Mtpa. The Weyburn-Midale Carbon Dioxide Project (or IEA GHG Weyburn-Midale CO2 Monitoring and Storage Project) was conducted there, and injection continued even after the study concluded. Between 2000 and 2018, over 30 Mt CO2 was injected.

China

As of 2019, coal accounted for around 60% of China's energy production. The majority of CO2 emissions come from coal-fired power plants or coal-to-chemical processes (e.g. the production of synthetic ammonia, methanol, fertilizer, natural gas, and coal-to-liquids). According to the IEA, around 385 of China's 900 gigawatts of coal-fired power capacity are near locations suitable for CCS. As of 2017, three CCS facilities were operational or in late stages of construction, drawing CO2 from natural gas processing or petrochemical production. At least eight more facilities are in early planning and development, most targeting power plant emissions with EOR as the injection destination.

China's largest carbon capture and storage plant, at the Guohua Jinjie coal power station, was completed in January 2021. The project is expected to prevent 150,000 tonnes of carbon dioxide emissions annually, operating at a 90% capture rate.

CNPC Jilin Oil Field

China's first carbon capture project was at the Jilin oil field in Songyuan, Jilin Province. It started as a pilot EOR project in 2009 and developed into a commercial operation for the China National Petroleum Corporation (CNPC), with the final development phase completed in 2018. The source of CO2 is the nearby Changling gas field, from which natural gas with about 22.5% CO2 is extracted. After separation at the natural gas processing plant, the CO2 is transported to Jilin via pipeline and injected for a 37% enhancement in oil recovery at the low-permeability oil field. At commercial capacity, the facility injects 0.6 Mt CO2 per year, and it has injected a cumulative total of over 1.1 million tonnes over its lifetime.

Sinopec Qilu Petrochemical CCS Project

Sinopec is developing a carbon capture unit whose first phase was to be operational in 2019. The facility is located in Zibo City, Shandong Province, where a fertilizer plant produces CO2 from coal/coke gasification. The CO2 is to be captured by cryogenic distillation and transported via pipeline to the nearby Shengli oil field for EOR. Construction of the first phase began by 2018 and was expected to capture and inject 0.4 Mt CO2 per year.

Yanchang Integrated CCS Project

Yanchang Petroleum is developing carbon capture facilities at two coal-to-chemical plants in Yulin City, Shaanxi Province. The first capture plant, capable of capturing 50,000 tonnes per year, was finished in 2012. Construction on the second plant started in 2014 and was expected to finish in 2020, with a capacity of 360,000 tonnes per year. The CO2 will be transported to the Ordos Basin, one of China's largest coal, oil, and gas-producing regions, with a series of low- and ultra-low-permeability oil reservoirs. Lack of water has limited the use of water injection for EOR, so the CO2 is injected instead to increase production.

Germany

The German industrial area of Schwarze Pumpe, about 4 kilometres (2.5 mi) south of the city of Spremberg, was home to the world's first demonstration CCS coal plant, the Schwarze Pumpe power station. The mini pilot plant used an Alstom-built oxy-fuel boiler and was also equipped with a flue gas cleaning facility to remove fly ash and sulfur dioxide. The Swedish company Vattenfall AB invested some €70 million in the two-year project, which began operation on 9 September 2008. The power plant, rated at 30 megawatts, was a pilot project to serve as a prototype for future full-scale power plants. 240 tonnes a day of CO2 were trucked 350 kilometers (220 mi) to be injected into an empty gas field. Germany's BUND group called it a "fig leaf": for each tonne of coal burned, 3.6 tonnes of CO2 is produced. The CCS program at Schwarze Pumpe ended in 2014 due to nonviable costs and energy use.
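The "3.6 tonnes of CO2 per tonne of coal" figure is close to the stoichiometric maximum for burning pure carbon (C + O2 → CO2); a sketch of that upper bound, treating the coal as pure carbon, which real lignite is not:

```python
# Stoichiometric CO2 yield per tonne of carbon burned: C + O2 -> CO2
M_C = 12.011                 # atomic mass of carbon, g/mol
M_O = 15.999                 # atomic mass of oxygen, g/mol
M_CO2 = M_C + 2 * M_O        # ~44.01 g/mol for CO2

t_co2_per_t_carbon = M_CO2 / M_C
print(f"{t_co2_per_t_carbon:.2f}")  # ~3.66, consistent with the ~3.6 quoted
```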

German utility RWE operates a pilot-scale CO2 scrubber at the lignite-fired Niederaußem power station, built in cooperation with BASF (supplier of the scrubbing solvent) and Linde (engineering).

Netherlands

Developed in the Netherlands, an electrocatalytic process based on a copper complex reduces CO2 to oxalic acid.

Norway

In Norway, the CO2 Technology Centre (TCM) at Mongstad began construction in 2009 and was completed in 2012. It includes two capture technology plants (one advanced amine and one chilled ammonia), each capturing flue gas from two sources: a gas-fired power plant and a refinery cracker (whose flue gas is similar to coal-fired power plant flue gas).

In addition to this, the Mongstad site was also planned to host a full-scale CCS demonstration plant. The project was delayed to 2014, then 2018, and then indefinitely, while its cost rose to US$985 million. In October 2011, Aker Solutions wrote off its investment in Aker Clean Carbon, declaring the carbon sequestration market to be "dead".

On 1 October 2013, Norway asked Gassnova, its Norwegian state enterprise for carbon capture and storage, not to sign any contracts for carbon capture and storage outside Mongstad.

In 2015 Norway was reviewing feasibility studies and hoping to have a full-scale carbon capture demonstration project by 2020.

In 2020, it announced "Longship" ("Langskip" in Norwegian). This 2.7 billion CCS project will capture and store the carbon emissions of Norcem's cement factory in Brevik, and it also plans to fund Fortum Oslo Varme's waste incineration facility. Finally, it will fund the transport and storage project "Northern Lights", a joint project between Equinor, Shell and Total, which will transport liquid CO2 from capture facilities to a terminal at Øygarden in Vestland County. From there, CO2 will be pumped through pipelines to a reservoir beneath the seabed.

Sleipner CO2 Injection

Sleipner is a fully operational offshore gas field where CO2 injection began in 1996. CO2 is separated from produced gas and reinjected into the Utsira saline aquifer (800–1000 m below the ocean floor), above the hydrocarbon reservoir zones. The aquifer extends much further north of the Sleipner facility, which sits at its southern extreme; its large size is why 600 billion tonnes of CO2 are expected to be stored there, long after the Sleipner natural gas project has ended. The Sleipner facility was the first project to inject its captured CO2 into a geological feature purely for storage rather than for economically motivated EOR.

United Arab Emirates

Abu Dhabi

After the success of their pilot plant operation in November 2011, the Abu Dhabi National Oil Company and Abu Dhabi Future Energy Company moved to create the first commercial CCS facility in the iron and steel industry. CO2 is a byproduct of the iron making process. It is transported via a 50 km pipeline to Abu Dhabi National Oil Company oil reserves for EOR. The facility's capacity is 800,000 tonnes per year. As of 2013, more than 40% of gas emitted by the crude oil production process is recovered within the oil fields for EOR.

United Kingdom

The 2020 budget allocated 800 million pounds to attempt to create CCS clusters by 2030, to capture CO2 from heavy industry and a gas-fired power station and store it under the North Sea. The Crown Estate is responsible for storage rights on the UK continental shelf and it has facilitated work on offshore CO2 storage technical and commercial issues.

A trial of bio-energy with carbon capture and storage (BECCS) at a wood-fired unit in Drax power station in the UK started in 2019. If successful, it could remove one tonne of CO2 per day from the atmosphere.

In the UK CCS is under consideration to help with industry and heating decarbonization.

United States

In addition to individual carbon capture and sequestration projects, various programs work to research, develop, and deploy CCS technologies on a broad scale. These include the National Energy Technology Laboratory's (NETL) Carbon Sequestration Program, regional carbon sequestration partnerships and the Carbon Sequestration Leadership Forum (CSLF).

In September 2020, the U.S. Department of Energy awarded $72 million in federal funding to support the development and advancement of carbon capture technologies. Under this cost-shared program, DOE awarded $51 million to nine new projects for coal and natural gas power and industrial sources.

The nine projects were selected to complete initial engineering design studies for carbon capture technologies at power and industrial sites. The projects selected are:

  1. Enabling Production of Low Carbon Emissions Steel Through CO2 Capture from Blast Furnace Gases — ArcelorMittal USA
  2. LH CO2MENT Colorado Project — Electricore
  3. Engineering Design of a Polaris Membrane CO2 Capture System at a Cement Plant — Membrane Technology and Research (MTR) Inc.
  4. Engineering Design of a Linde-BASF Advanced Post-Combustion CO2 Capture Technology at a Linde Steam Methane Reforming H2 Plant — Praxair
  5. Initial Engineering and Design for CO2 Capture from Ethanol Facilities — University of North Dakota Energy & Environmental Research Center
  6. Chevron Natural Gas Carbon Capture Technology Testing Project — Chevron USA, Inc.
  7. Engineering-scale Demonstration of Transformational Solvent on NGCC Flue Gas — ION Clean Energy Inc.
  8. Engineering-Scale Test of a Water-Lean Solvent for Post-Combustion Capture — Electric Power Research Institute Inc.
  9. Engineering Scale Design and Testing of Transformational Membrane Technology for CO2 Capture — Gas Technology Institute (GTI)

$21 million was also awarded to 18 projects for technologies that remove CO2 from the atmosphere. The focus was on developing new materials for use in direct air capture, with field testing also to be completed. The projects:

  1. Direct Air Capture Using Novel Structured Adsorbents — Electricore
  2. Advanced Integrated Reticular Sorbent-Coated System to Capture CO2 from the Atmosphere — GE Research
  3. MIL-101(Cr)-Amine Sorbents Evaluation Under Realistic Direct Air Capture Conditions — Georgia Tech Research Corporation
  4. Demonstration of a Continuous-Motion Direct Air Capture System — Global Thermostat Operations, LLC
  5. Experimental Demonstration of Alkalinity Concentration Swing for Direct Air Capture of CO2 — Harvard University
  6. High-Performance, Hybrid Polymer Membrane for CO2 Separation from Ambient Air — InnoSense, LLC
  7. Transformational Sorbent Materials for a Substantial Reduction in the Energy Requirement for Direct Air Capture of CO2 — InnoSepra, LLC
  8. A Combined Water and CO2 Direct Air Capture System — IWVC, LLC
  9. TRAPS: Tunable, Rapid-uptake, AminoPolymer Aerogel Sorbent for Direct Air Capture of CO2 — Palo Alto Research Center
  10. Direct Air Capture Using Trapped Small Amines in Hierarchical Nanoporous Capsules on Porous Electrospun Hollow Fibers — Rensselaer Polytechnic Institute
  11. Development of Advanced Solid Sorbents for Direct Air Capture — RTI International
  12. Direct Air Capture Recovery of Energy for CCUS Partnership (DAC RECO2UP) — Southern States Energy Board
  13. Membrane Adsorbents Comprising Self-Assembled Inorganic Nanocages (SINCs) for Super-fast Direct Air Capture Enabled by Passive Cooling — SUNY
  14. Low Regeneration Temperature Sorbents for Direct Air Capture of CO2 — Susteon Inc.
  15. Next Generation Fiber-Encapsulated Nanoscale Hybrid Materials for Direct Air Capture with Selective Water Rejection — The Trustees of Columbia University in the City of New York
  16. Gradient Amine Sorbents for Low Vacuum Swing CO2 Capture at Ambient Temperature — The University of Akron
  17. Electrochemically-Driven CO2 Separation — University of Delaware
  18. Development of Novel Materials for Direct Air Capture of CO2 — University of Kentucky Research Foundation

Kemper Project

The Kemper Project is a gas-fired power plant under construction in Kemper County, Mississippi. It was originally planned as a coal-fired plant. Mississippi Power, a subsidiary of Southern Company, began construction in 2010. Had it become operational as a coal plant, the Kemper Project would have been a first-of-its-kind electricity plant to employ gasification and carbon capture technologies at this scale. The emission target was to reduce CO2 to the same level an equivalent natural gas plant would produce. However, in June 2017 the proponents – Southern Company and Mississippi Power – announced that the plant would only burn natural gas.

Construction was delayed and the scheduled opening was pushed back over two years, while the cost increased to $6.6 billion, three times the original estimate. According to a Sierra Club analysis, Kemper is the most expensive power plant ever built per watt of electricity it will generate.

In October 2021, the coal gasification portion of the plant was demolished.

Terrell Natural Gas Processing Plant

Opened in 1972, the Terrell plant in Texas, United States was the oldest operating industrial CCS project as of 2017. CO2 is captured during gas processing and transported primarily via the Val Verde pipeline, eventually being injected at the Sharon Ridge oil field and other secondary sinks for use in EOR. The facility captures an average of 0.4 to 0.5 million tonnes of CO2 per year.

Enid Fertilizer

Beginning in 1982, the facility owned by the Koch Nitrogen company is the second oldest large scale CCS facility still in operation. The CO2 that is captured is a high purity byproduct of nitrogen fertilizer production. The process is made economical by transporting the CO2 to oil fields for EOR.

Shute Creek Gas Processing Facility

7 million metric tonnes of CO2 are recovered annually from ExxonMobil's Shute Creek gas processing plant near La Barge, Wyoming, and transported by pipeline to various oil fields for EOR. Started in 1986, as of 2017 this project had the second largest CO2 capture capacity in the world.

Petra Nova

The Petra Nova project is a billion dollar endeavor undertaken by NRG Energy and JX Nippon to partially retrofit their jointly owned W.A. Parish coal-fired power plant with post-combustion carbon capture. The plant, located in Thompsons, Texas (just outside of Houston), entered commercial service in 1977. Carbon capture began on 10 January 2017. W.A. Parish unit 8 generates 240 MW, and 90% of its CO2 (1.4 million tonnes per year) was captured. The CO2 (99% purity) is compressed and piped about 82 miles to the West Ranch Oil Field, Texas, for EOR. The field has a capacity of 60 million barrels of oil and increased its production from 300 to 4,000 barrels per day. On May 1, 2020, NRG shut down Petra Nova, citing low oil prices during the COVID-19 pandemic. The plant had also reportedly suffered frequent outages and missed its carbon sequestration goal by 17% over its first three years of operation. In 2021 the plant was mothballed.

Illinois Industrial

The Illinois Industrial Carbon Capture and Storage project is dedicated to geological CO2 storage. The project received a $171 million investment from the DOE and over $66 million from the private sector. The CO2 is a byproduct of the fermentation process of corn ethanol production and is stored 7,000 feet underground in the Mt. Simon Sandstone saline aquifer. Sequestration began in April 2017 with a carbon capture capacity of 1 Mt/a.

NET Power Demonstration Facility

The NET Power Demonstration Facility is an oxy-combustion natural gas power plant that operates by the Allam power cycle. Due to its unique design, the plant is able to reduce its air emissions to zero by producing a near pure stream of CO2. The plant first fired in May 2018.

Century Plant

Occidental Petroleum, along with SandRidge Energy, operates a West Texas hydrocarbon gas processing plant and related pipeline infrastructure that provides CO2 for Enhanced Oil Recovery (EOR). With a CO2 capture capacity of 8.4 Mt/a, the Century plant is the largest single industrial source CO2 capture facility in the world.

Developing projects

ANICA - Advanced Indirectly Heated Carbonate Looping Process

The ANICA Project is focused on developing economically feasible carbon capture technology for lime and cement plants, which are responsible for 5% of total anthropogenic carbon dioxide emissions. In 2019, a consortium of 12 partners from Germany, the United Kingdom and Greece began working on integrating the indirectly heated carbonate looping (IHCaL) process into cement and lime production. The project aims to lower the energy penalty and CO2 avoidance costs for CO2 capture from lime and cement plants.

Port of Rotterdam CCUS Backbone Initiative

Expected in 2021, the Port of Rotterdam CCUS Backbone Initiative aimed to implement a "backbone" of shared CCS infrastructure for use by businesses located around the Port of Rotterdam in Rotterdam, Netherlands. The project is overseen by the Port of Rotterdam, natural gas company Gasunie, and EBN. It intends to capture and sequester 2 million tonnes of CO2 per year and to increase this number in future years. Although dependent on the participation of companies, the goal of this project is to greatly reduce the carbon footprint of the industrial sector of the Port of Rotterdam and establish a successful CCS infrastructure in the Netherlands following the recently canceled ROAD project. CO2 captured from local chemical plants and refineries will be sequestered in the North Sea seabed. The possibility of a CCU initiative has also been considered, in which the captured CO2 would be sold to horticultural firms, who would use it to speed up plant growth, as well as to other industrial users.

Climeworks Direct Air Capture Plant and CarbFix2 Project

Climeworks opened the first commercial direct air capture plant in Zürich, Switzerland. Their process involves capturing CO2 directly from ambient air using a patented filter, isolating the captured CO2 at high heat, and finally transporting it to a nearby greenhouse as a fertilizer. The plant is built near a waste recovery facility that uses its excess heat to power the Climeworks plant.

Climeworks is also working with Reykjavik Energy on the CarbFix2 project with funding from the European Union. This project, located in Hellisheidi, Iceland, uses direct air capture technology to geologically store CO2 in conjunction with a large geothermal power plant. Once CO2 is captured using Climeworks' filters, it is heated using heat from the geothermal plant and bound to water. The geothermal plant then pumps the carbonated water into underground rock formations, where the CO2 reacts with basaltic bedrock and forms carbonate minerals.

OPEN100

The OPEN100 project, launched in 2020 by the Energy Impact Center (EIC), is the world's first open-source blueprint for nuclear power plant deployment. The Energy Impact Center and OPEN100 aim to reverse climate change by 2040 and believe that nuclear power is the only feasible energy source to power CCS without the compromise of releasing new CO2.

This project intends to bring together researchers, designers, scientists, engineers, think tanks, etc. to help compile research and designs that will eventually evolve into a blueprint that is available to the public and can be utilized in the development of future nuclear plants.

History of water supply and sanitation

Aqueduct in Petra, Jordan.

The history of water supply and sanitation is the story of a logistical challenge: providing clean water and sanitation systems since the dawn of civilization. Where water resources, infrastructure or sanitation systems were insufficient, diseases spread and people fell sick or died prematurely.

Major human settlements could initially develop only where fresh surface water was plentiful, such as near rivers or natural springs. Throughout history, people have devised systems to make getting water into their communities and households and disposing of (and later also treating) wastewater more convenient.

The historical focus of sewage treatment was on the conveyance of raw sewage to a natural body of water, e.g. a river or ocean, where it would be diluted and dissipated. Early human habitations were often built next to water sources. Rivers would often serve as a crude form of natural sewage disposal.

Over the millennia, technology has dramatically increased the distances across which water can be relocated. Furthermore, treatment processes to purify drinking water and to treat wastewater have been improved.

Prehistory

Skara Brae, a Neolithic village in Orkney, Scotland, occupied c. 3180–2500 BC, with home furnishings including water-flushing toilets

During the Neolithic era, humans dug the first permanent water wells, from where vessels could be filled and carried by hand. Wells dug around 6500 BC have been found in the Jezreel Valley. The size of human settlements was largely dependent on nearby available water.

A primitive indoor two-channel stone drainage system, lined with tree bark and carrying fresh water and wastewater, appears to have featured in the houses of Skara Brae and the Barnhouse Settlement from around 3000 BCE. A cell-like enclave in a number of Skara Brae houses has been suggested to have functioned as an early indoor latrine.

Wastewater reuse activities

Wastewater reuse is an ancient practice, which has been applied since the dawn of human history, and is connected to the development of sanitation provision. Reuse of untreated municipal wastewater has been practiced for many centuries with the objective of diverting human waste outside of urban settlements. Likewise, land application of domestic wastewater is an old and common practice, which has gone through different stages of development.

Domestic wastewater was used for irrigation by prehistoric civilizations (e.g. Mesopotamian, Indus valley, and Minoan) since the Bronze Age (ca. 3200-1100 BC). Thereafter, wastewater was used for disposal, irrigation, and fertilization purposes by Hellenic civilizations and later by Romans in areas surrounding cities (e.g. Athens and Rome).

Bronze and early Iron Ages

Ancient Americas

In ancient Peru, the Nazca people employed a system of interconnected wells and an underground watercourse known as puquios.

Ancient Near East

Mesopotamia

The Mesopotamians introduced the world to clay sewer pipes around 4000 BCE, with the earliest examples found in the Temple of Bel at Nippur and at Eshnunna; they were used to remove wastewater from sites and to capture rainwater in wells. The city of Uruk demonstrates the first examples of brick-constructed latrines, from 3200 BCE. Clay pipes were later used in the Hittite city of Hattusa; they had easily detachable and replaceable segments and allowed for cleaning.

Ancient Persia

The first sanitation systems within prehistoric Iran were built near the city of Zabol. Persian qanats and ab anbars have been used for water supply and cooling.

Ancient Egypt

The Pyramid of Sahure and the adjoining temple complex at Abusir, built c. 2400 BCE, were discovered to have a network of copper drainage pipes.

Ancient East Asia

Ancient China

A Chinese ceramic model of a well with a water pulley system, excavated from a tomb of the Han Dynasty (202 BC - 220 AD) period

Some of the earliest evidence of water wells is located in China. The Neolithic Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text The Book of Changes, originally a divination text of the Western Zhou dynasty (1046–771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. Archaeological evidence and old Chinese documents reveal that the prehistoric and ancient Chinese had the aptitude and skills to dig deep water wells for drinking water as early as 6000 to 7000 years ago. A well excavated at the Hemudu excavation site was believed to have been built during the Neolithic era; it was cased with four rows of logs, with a square frame attached to them at the top of the well. Sixty additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation. Plumbing is also known to have been used in East Asia since the Qin and Han dynasties of China.

Indus Valley Civilization

A large well and bathing platforms at Harappa, remains of the city's final phase of occupation from 2200 to 1900 BC.

The Indus Valley Civilization in Asia shows early evidence of public water supply and sanitation. The system the Indus developed and managed included a number of advanced features. A typical example is the Indus city of Lothal (c. 2350 BCE), where all houses had their own private toilet connected to a covered sewer network constructed of brickwork held together with a gypsum-based mortar. The network emptied either into the surrounding water bodies or into cesspits, the latter of which were regularly emptied and cleaned.

The urban areas of the Indus Valley civilization included public and private baths. Sewage was disposed through underground drains built with precisely laid bricks, and a sophisticated water management system with numerous reservoirs was established. In the drainage systems, drains from houses were connected to wider public drains. Many of the buildings at Mohenjo-daro had two or more stories. Water from the roof and upper storey bathrooms was carried through enclosed terracotta pipes or open chutes that emptied out onto the street drains.

The earliest evidence of urban sanitation was seen in Harappa, Mohenjo-daro, and the recently discovered Rakhigarhi of Indus Valley civilization. This urban plan included the world's first urban sanitation systems. Within the city, individual homes or groups of homes obtained water from wells. From a room that appears to have been set aside for bathing, waste water was directed to covered drains, which lined the major streets.

Devices such as shadoofs were used to lift water to ground level. Ruins from the Indus Valley Civilization like Mohenjo-daro in Pakistan and Dholavira in Gujarat in India had settlements with some of the ancient world's most sophisticated sewage systems. They included drainage channels, rainwater harvesting, and street ducts.

Stepwells have mainly been used in the Indian subcontinent.

Ancient Mediterranean

Ancient Greece

The ancient Greek civilization of Crete, known as the Minoan civilization, was the first civilization to use underground clay pipes for sanitation and water supply. Their capital, Knossos, had a well-organized water system for bringing in clean water, taking out waste water, and storm sewage canals for overflow when there was heavy rain. It also saw one of the first uses of a flush toilet, dating back to the 18th century BC. The Minoan civilization had stone sewers that were periodically flushed with clean water. In addition to sophisticated water and sewer systems, they devised elaborate heating systems. The ancient Greeks of Athens and Asia Minor also used indoor plumbing, including pressurized showers. The Greek inventor Heron used pressurized piping for firefighting purposes in the city of Alexandria. The Maya were the third-earliest civilization to employ a system of indoor plumbing using pressurized water.

An inverted siphon system, along with glass-covered clay pipes, was used for the first time in the palaces of Crete, Greece. It is still in working condition after about 3,000 years.

Roman Empire

In ancient Rome, the Cloaca Maxima, considered a marvel of engineering, discharged into the Tiber. Public latrines were built over the Cloaca Maxima.

Beginning in the Roman era a water wheel device known as a noria supplied water to aqueducts and other water distribution systems in major cities in Europe and the Middle East.

The Roman Empire had indoor plumbing, meaning a system of aqueducts and pipes that terminated in homes and at public wells and fountains for people to use. Rome and other nations used lead pipes; while commonly thought to be the cause of lead poisoning in the Roman Empire, the combination of running water which did not stay in contact with the pipe for long and the deposition of precipitation scale actually mitigated the risk from lead pipes.

Roman towns and garrisons in Britain between 43 and 410 AD had complex sewer networks, sometimes constructed out of hollowed-out elm logs that were shaped so that they butted together, with the downstream pipe providing a socket for the upstream pipe.

Medieval and early modern ages

Nepal

People waiting in line for water from Manga Hiti in the city of Patan, Nepal

In Nepal the construction of water conduits like drinking fountains and wells is considered a pious act.

A drinking water supply system was developed starting at least as early as 550 AD. This dhunge dhara or hiti system consists of carved stone fountains through which water flows uninterrupted from underground sources. These are supported by numerous ponds and canals that form an elaborate network of water bodies, created as a water resource during the dry season and to help alleviate the water pressure caused by the monsoon rains. After the introduction of modern piped water systems, starting in the late 19th century, this old system fell into disrepair and some parts of it have been lost forever. Nevertheless, many people in Nepal still rely on the old hitis on a daily basis.

In 2008 the dhunge dharas of the Kathmandu Valley produced 2.95 million litres of water per day.

Of the 389 stone spouts found in the Kathmandu Valley in 2010, 233 were still in use, serving about 10% of Kathmandu's population. 68 had gone dry, 45 were lost entirely and 43 were connected to the municipal water supply instead of their original source.

Islamic world

Islam stresses the importance of cleanliness and personal hygiene. Islamic hygienical jurisprudence, which dates back to the 7th century, has a number of elaborate rules. Taharah (ritual purity) involves performing wudu (ablution) for the five daily salah (prayers), as well as regularly performing ghusl (bathing), which led to bathhouses being built across the Islamic world.[39][40] Islamic toilet hygiene also requires washing with water after using the toilet, for purity and to minimize germs.[41]

In the Abbasid Caliphate (8th-13th centuries), its capital city of Baghdad (Iraq) had 65,000 baths, along with a sewer system. Cities of the medieval Islamic world had water supply systems powered by hydraulic technology that supplied drinking water along with much greater quantities of water for ritual washing, mainly in mosques and hammams (baths). Bathing establishments in various cities were rated by Arabic writers in travel guides. Medieval Islamic cities such as Baghdad, Córdoba (Islamic Spain), Fez (Morocco) and Fustat (Egypt) also had sophisticated waste disposal and sewage systems with interconnected networks of sewers. The city of Fustat also had multi-storey tenement buildings (with up to six floors) with flush toilets, which were connected to a water supply system, and flues on each floor carrying waste to underground channels.

Al-Karaji (c. 953–1029) wrote a book, The Extraction of Hidden Waters, which presented ground-breaking ideas and descriptions of hydrological and hydrogeological concepts such as components of the hydrological cycle, groundwater quality, and the driving factors of groundwater flow. He also gave an early description of a water filtration process.

Post-classical East Africa

In post-classical Kilwa, plumbing was prevalent in the stone houses of the residents. The Husuni Kubwa Palace, as well as other buildings for the ruling elite and the wealthy, included the luxury of indoor plumbing.

Medieval Europe

A 1939 conceptual illustration showing various ways that typhoid bacteria can contaminate a water well (center)
 
Waterworks (Wasserkunst) and fountain from 1602 in Wismar, Germany.

There is little record of other sanitation systems (apart from sanitation in ancient Rome) in most of Europe until the High Middle Ages. Unsanitary conditions and overcrowding were widespread throughout Europe and Asia during the Middle Ages. This resulted in pandemics such as the Plague of Justinian (541–542) and the Black Death (1347–1351), which killed tens of millions of people. Very high infant and child mortality prevailed in Europe throughout medieval times, due partly to deficiencies in sanitation.

In medieval European cities, small natural waterways used for carrying off wastewater were eventually covered over and functioned as sewers; London's River Fleet is such a system. Open drains, or gutters, for waste water run-off ran along the center of some streets. These were known as "kennels" (i.e., canals or channels), and in Paris were sometimes known as "split streets", as the waste water running along the middle physically split the streets into two halves. The first closed sewer constructed in Paris, 300 meters long, was designed by Hugues Aubird in 1370 on Rue Montmartre (Montmartre Street); its original purpose was less waste management than holding back the stench of the odorous waste water.

In Dubrovnik, then known by its Latin name Ragusa, the Statute of 1272 set out the parameters for the construction of septic tanks and channels for the removal of dirty water. The sewage system was built throughout the 14th and 15th centuries, and it is still operational today, with minor changes and repairs made in recent centuries.

Pail closets, outhouses, and cesspits were used to collect human waste. The use of human waste as fertilizer was especially important in China and Japan, where cattle manure was less available. However, most cities did not have a functioning sewer system before the Industrial era, relying instead on nearby rivers or occasional rain showers to wash away the sewage from the streets. In some places, waste water simply ran down the streets, which had stepping stones to keep pedestrians out of the muck, and eventually drained as runoff into the local watershed.

John Harington's toilet

In the 16th century, Sir John Harington invented a flush toilet as a device for Queen Elizabeth I (his godmother) that released wastes into cesspools.

After the adoption of gunpowder, municipal outhouses became an important source of raw material for the making of saltpeter in European countries.

In London, the contents of the city's outhouses were collected every night by commissioned wagons and delivered to nitre beds, where the waste was laid into specially designed soil beds to produce earth rich in mineral nitrates. The nitrate-rich earth was then further processed to produce saltpeter (potassium nitrate), the key oxidizing ingredient of gunpowder.

Classic and early modern Mesoamerica

The Classic Maya at Palenque had underground aqueducts and flush toilets; the Classic Maya even used household water filters made from locally abundant limestone carved into a porous cylinder, working in a manner strikingly similar to modern ceramic water filters.

In Spain and Spanish America, a community operated watercourse known as an acequia, combined with a simple sand filtration system, provided potable water.

Sewage farms for disposal and irrigation

"Sewage farms" (i.e. wastewater applied to land for disposal and agricultural use) were operated in Bunzlau (Silesia) in 1531, in Edinburgh (Scotland) in 1650, in Paris (France) in 1868, in Berlin (Germany) in 1876, and in different parts of the United States from 1871, where wastewater was used for beneficial crop production. In the following centuries, in many rapidly growing cities of Europe (e.g. in Germany and France) and in the United States, "sewage farms" were increasingly seen as a solution for the disposal of large volumes of wastewater, and some of them are still in operation today. Irrigation with sewage and other wastewater effluents also has a long history in China and India, and a large "sewage farm" was established in Melbourne, Australia, in 1897.

Modern age

Sewer systems

Many industrialized cities had incomplete public sanitation well into the 20th century. Outhouses in Brisbane, Australia, around 1950.

A significant development was the construction of a network of sewers to collect wastewater. In some cities, including Rome, Istanbul (Constantinople) and Fustat, networked ancient sewer systems continue to function today as collection systems for those cities' modernized sewer systems. Instead of flowing to a river or the sea, the pipes have been re-routed to modern sewer treatment facilities.

Basic sewer systems were used for waste removal in ancient Mesopotamia, where vertical shafts carried the waste away into cesspools. Similar systems existed in the Indus Valley civilization in modern-day Pakistan and India and in ancient Crete and Greece. In the Middle Ages the sewer systems built by the Romans fell into disuse, and waste was collected into cesspools that were periodically emptied by workers known as 'rakers', who would often sell it as fertilizer to farmers outside the city.

Archaeological discoveries have shown that some of the earliest sewer systems were developed in the third millennium BCE in the ancient cities of Harappa and Mohenjo-daro in present-day Pakistan. The primitive sewers were carved in the ground alongside buildings. This discovery reveals the conceptual understanding of waste disposal by early civilizations.

However, until the Enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the Romans were largely neglected throughout Europe. This began to change in the 17th and 18th centuries with a rapid expansion in waterworks and pumping systems.

The tremendous growth of cities in Europe and North America during the Industrial Revolution quickly led to crowding, which acted as a constant source for the outbreak of disease. As cities grew in the 19th century concerns were raised about public health. As part of a trend of municipal sanitation programs in the late 19th and 20th centuries, many cities constructed extensive gravity sewer systems to help control outbreaks of disease such as typhoid and cholera. Storm and sanitary sewers were necessarily developed along with the growth of cities. By the 1840s the luxury of indoor plumbing, which mixes human waste with water and flushes it away, eliminated the need for cesspools.

Modern sewerage systems were first built in the mid-nineteenth century as a reaction to the exacerbation of sanitary conditions brought on by heavy industrialization and urbanization. Baldwin Latham, a British civil engineer, contributed to the rationalization of sewerage and house drainage systems and was a pioneer of sanitary engineering. He developed the concept of the oval sewer pipe to facilitate drainage and to prevent sludge deposition and flooding. Due to the contaminated water supply, cholera outbreaks occurred in 1832, 1849 and 1855 in London, killing tens of thousands of people. This, combined with the Great Stink of 1858, when the smell of untreated human waste in the River Thames became overpowering, and the report into sanitation reform by the Royal Commissioner Edwin Chadwick, led the Metropolitan Commission of Sewers to appoint Joseph Bazalgette to construct a vast underground sewage system for the safe removal of waste. Contrary to Chadwick's recommendations, Bazalgette's system, and others later built in Continental Europe, did not pump the sewage onto farmland for use as fertilizer; it was simply piped to a natural waterway away from population centres and discharged into the environment.

Liverpool, London and other cities in the UK

The Great Stink of 1858 stimulated the construction of a sewer system for London. In this caricature in The Times, Michael Faraday reports to Father Thames on the state of the river.

As recently as the late 19th century, sewerage systems in some parts of the rapidly industrializing United Kingdom were so inadequate that water-borne diseases such as cholera and typhoid remained a risk.

From as early as 1535 there were efforts to stop polluting the River Thames in London, beginning with an Act passed that year prohibiting the dumping of excrement into the river. In the period leading up to the Industrial Revolution the River Thames was described as thick and black due to sewage, and it was even said that the river "smells like death." As Britain was the first country to industrialize, it was also the first to experience the disastrous consequences of major urbanization, and the first to construct a modern sewerage system to mitigate the resultant unsanitary conditions. During the early 19th century the River Thames was effectively an open sewer, leading to frequent outbreaks of cholera. Proposals to modernize the sewerage system had been made in 1856 but were neglected for lack of funds. After the Great Stink of 1858, however, Parliament recognized the urgency of the problem and resolved to create a modern sewerage system.

However, ten years earlier and 200 miles to the north, James Newlands, a Scottish engineer, was one of a celebrated trio of pioneering officers appointed under a private Act, the Liverpool Sanatory Act, by the Borough of Liverpool Health of Towns Committee. The other officers appointed under the Act were William Henry Duncan, Medical Officer for Health, and Thomas Fresh, Inspector of Nuisances (an early antecedent of the environmental health officer). One of five applicants for the post, Newlands was appointed Borough Engineer of Liverpool on 26 January 1847.

He made a careful and exact survey of Liverpool and its surroundings, involving approximately 3,000 geodetic observations and resulting in the construction of a contour map of the town and its neighbourhood, on a scale of one inch to 20 feet (6.1 m). From this elaborate survey Newlands proceeded to lay down a comprehensive system of outlet and contributory sewers, and main and subsidiary drains, to an aggregate extent of nearly 300 miles (480 km). He presented the details of this projected system to the Corporation in April 1848.

In July 1848 James Newlands' sewer construction programme began, and over the next 11 years 86 miles (138 km) of new sewers were built. Between 1856 and 1862 another 58 miles (93 km) were added. This programme was completed in 1869. Before the sewers were built, life expectancy in Liverpool was 19 years, and by the time Newlands retired it had more than doubled.

Joseph Bazalgette, a civil engineer and Chief Engineer of the Metropolitan Board of Works, was given responsibility for the work. He designed an extensive underground sewerage system that diverted waste to the Thames Estuary, downstream of the main center of population. Six main interceptor sewers, totaling almost 100 miles (160 km) in length, were constructed, some incorporating stretches of London's 'lost' rivers. Three of these sewers were north of the river, the southernmost, low-level one being incorporated in the Thames Embankment. The Embankment also allowed new roads, new public gardens, and the Circle Line of the London Underground.

The intercepting sewers, constructed between 1859 and 1865, were fed by 450 miles (720 km) of main sewers that, in turn, conveyed the contents of some 13,000 miles (21,000 km) of smaller local sewers. Construction of the interceptor system required 318 million bricks, 2.7 million cubic metres of excavated earth and 670,000 cubic metres of concrete. Gravity allowed the sewage to flow eastwards, but in places such as Chelsea, Deptford and Abbey Mills pumping stations were built to raise the water and provide sufficient flow. Sewers north of the Thames fed into the Northern Outfall Sewer, which led to a major treatment works at Beckton. South of the river, the Southern Outfall Sewer extended to a similar facility at Crossness. With only minor modifications, Bazalgette's engineering achievement remains the basis of sewerage design to the present day.

In Merthyr Tydfil, a large town in South Wales, most houses discharged their sewage to individual cesspits, which persistently overflowed, causing the pavements to be awash with foul sewage.

Paris, France

In 1802, Napoleon built the Ourcq canal, which brought 70,000 cubic meters of water a day to Paris, while the Seine received up to 100,000 cubic meters (3,500,000 cu ft) of wastewater per day. The Paris cholera epidemic of 1832 sharpened public awareness of the need for some sort of drainage system to deal with sewage and wastewater in a better and healthier way. Between 1865 and 1920, Eugène Belgrand led the development of a large-scale system for water supply and wastewater management. During this period approximately 600 kilometers of aqueducts were built to bring in potable spring water, freeing the poorer-quality water for flushing streets and sewers. By 1894 laws had been passed making drainage mandatory. The treatment of Paris sewage, though, was left to natural devices, as 5,000 hectares of land were used to spread the waste out to be naturally purified. Further, the lack of sewage treatment left Parisian sewage pollution concentrated downstream in the town of Clichy, effectively forcing residents to pack up and move elsewhere.

The 19th century brick-vaulted Paris sewers serve as a tourist attraction nowadays.

Hamburg and Frankfurt, Germany

The first comprehensive sewer system in a German city was built in Hamburg, Germany, in the mid-19th century.

In 1863, work began on the construction of a modern sewerage system for the rapidly growing city of Frankfurt am Main, based on design work by William Lindley. Twenty years after the system's completion, the death rate from typhoid had fallen from 80 to 10 per 100,000 inhabitants.

Map of the sewer system of Memphis, Tennessee in 1880

United States

The first sewer systems in the United States were built in the late 1850s in Chicago and Brooklyn.

In the United States, the first sewage treatment plant using chemical precipitation was built in Worcester, Massachusetts, in 1890.

Sewage treatment

Initially the gravity sewer systems discharged sewage directly to surface waters without treatment. Later, cities attempted to treat the sewage before discharge in order to prevent water pollution and waterborne diseases. During the half-century around 1900, these public health interventions succeeded in drastically reducing the incidence of water-borne diseases among the urban population, and were an important cause in the increases of life expectancy experienced at the time.

Application on agricultural land

Early techniques for sewage treatment involved land application of sewage on agricultural land. One of the first attempts at diverting sewage for use as farm fertilizer was made by the cotton mill owner James Smith in the 1840s. He experimented with a piped distribution system, initially proposed by James Vetch, that collected sewage from his factory and pumped it to outlying farms; his success was enthusiastically followed by Edwin Chadwick and supported by the organic chemist Justus von Liebig.

The idea was officially adopted by the Health of Towns Commission, and various schemes (known as sewage farms) were trialled by different municipalities over the next 50 years. At first, the heavier solids were channeled into ditches on the side of the farm and were covered over when full, but soon flat-bottomed tanks were employed as reservoirs for the sewage; the earliest patent was taken out by William Higgs in 1846 for "tanks or reservoirs in which the contents of sewers and drains from cities, towns and villages are to be collected and the solid animal or vegetable matters therein contained, solidified and dried..." Improvements to the design of the tanks included the introduction of the horizontal-flow tank in the 1850s and the radial-flow tank in 1905. These tanks had to be manually de-sludged periodically, until the introduction of automatic mechanical de-sludgers in the early 1900s.

Chemical treatment and sedimentation

As pollution of water bodies became a concern, cities attempted to treat the sewage before discharge. In the late 19th century some cities began to add chemical treatment and sedimentation systems to their sewers.

Odor was considered the major problem in waste disposal; to address it, sewage could be drained to a lagoon, or "settled" so that the solids could be removed and disposed of separately. This process is now called "primary treatment" and the settled solids are called "sludge". At the end of the 19th century, since primary treatment still left odor problems, it was discovered that bad odors could be prevented by introducing oxygen into the decomposing sewage. This was the beginning of the biological aerobic and anaerobic treatments which are fundamental to wastewater processing.

The precursor to the modern septic tank was the cesspool, in which the water was sealed off to prevent contamination and the solid waste was slowly liquified by anaerobic action; it was invented by L.H. Mouras in France in the 1860s. Donald Cameron, as City Surveyor for Exeter, patented an improved version in 1895, which he called a 'septic tank', 'septic' meaning 'bacterial'. These are still in worldwide use, especially in rural areas unconnected to large-scale sewage systems.

Biological treatment

It was not until the late 19th century that it became possible to treat the sewage by biologically decomposing the organic components through the use of microorganisms and removing the pollutants. Land treatment was also steadily becoming less feasible, as cities grew and the volume of sewage produced could no longer be absorbed by the farmland on the outskirts.

Edward Frankland conducted experiments at the sewage farm in Croydon, England, during the 1870s and was able to demonstrate that filtration of sewage through porous gravel produced a nitrified effluent (the ammonia was converted into nitrate) and that the filter remained unclogged over long periods of time. This established the then revolutionary possibility of biological treatment of sewage using a contact bed to oxidize the waste. This concept was taken up by the chief chemist for the London Metropolitan Board of Works, William Libdin, in 1887:

...in all probability the true way of purifying sewage...will be first to separate the sludge, and then turn into neutral effluent... retain it for a sufficient period, during which time it should be fully aerated, and finally discharge it into the stream in a purified condition. This is indeed what is aimed at and imperfectly accomplished on a sewage farm.

From 1885 to 1891 filters working on this principle were constructed throughout the UK and the idea was also taken up in the US at the Lawrence Experiment Station in Massachusetts, where Frankland's work was confirmed. In 1890 the LES developed a 'trickling filter' that gave a much more reliable performance.

Contact beds were developed in Salford, Lancashire, and by scientists working for the London County Council in the early 1890s. According to Christopher Hamlin, this was part of a conceptual revolution that replaced the philosophy that saw "sewage purification as the prevention of decomposition" with one that tried to facilitate the biological processes that destroy sewage naturally.

Contact beds were tanks containing an inert substance, such as stones or slate, that maximized the surface area available for the microbial growth to break down the sewage. The sewage was held in the tank until it was fully decomposed and it was then filtered out into the ground. This method quickly became widespread, especially in the UK, where it was used in Leicester, Sheffield, Manchester and Leeds. The bacterial bed was simultaneously developed by Joseph Corbett as Borough Engineer in Salford and experiments in 1905 showed that his method was superior in that greater volumes of sewage could be purified better for longer periods of time than could be achieved by the contact bed.

The Royal Commission on Sewage Disposal published its eighth report in 1912, which set what became the international standard for sewage discharge into rivers: the '20:30 standard', allowing 2 parts per hundred thousand of biochemical oxygen demand and 3 parts per hundred thousand of suspended solids.
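The Royal Commission's "parts per hundred thousand" units translate directly into the modern mg/L convention. A minimal sketch of the conversion, assuming water at roughly 1 kg per litre:

```python
# Convert the Royal Commission's "parts per hundred thousand" (by weight)
# into mg/L, assuming water at ~1 kg/L.

def parts_per_hundred_thousand_to_mg_per_l(parts: float) -> float:
    # 1 part per 100,000 by weight = 10 mg per kg = 10 mg/L in water.
    return parts * 10.0

bod_limit = parts_per_hundred_thousand_to_mg_per_l(2)  # biochemical oxygen demand
ss_limit = parts_per_hundred_thousand_to_mg_per_l(3)   # suspended solids
print(bod_limit, ss_limit)  # 20.0 30.0 -> the "20:30 standard" in mg/L
```

In other words, the 1912 limits correspond to 20 mg/L BOD and 30 mg/L suspended solids, figures still recognizable in many modern discharge consents.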

Activated sludge process

Most cities in the Western world added more expensive systems for sewage treatment in the early 20th century, after scientists at the University of Manchester discovered the sewage treatment process of activated sludge in 1913.

The Davyhulme Sewage Works Laboratory, where the activated sludge process was developed in the early 20th century.

The activated sludge process was discovered in 1913 in the United Kingdom by two engineers, Edward Ardern and W.T. Lockett, who were conducting research for the Manchester Corporation Rivers Department at Davyhulme Sewage Works. In 1912, Dr. Gilbert Fowler, a scientist at the University of Manchester, observed experiments being conducted at the Lawrence Experiment Station in Massachusetts involving the aeration of sewage in a bottle that had been coated with algae. Fowler's engineering colleagues, Ardern and Lockett, experimented on treating sewage in a draw-and-fill reactor, which produced a highly treated effluent. They aerated the waste-water continuously for about a month and were able to achieve complete nitrification of the sample material. Believing that the sludge had been activated (in a similar manner to activated carbon), they named the process activated sludge. Not until much later was it realized that what had actually occurred was a means to concentrate biological organisms, decoupling the liquid retention time (ideally low, for a compact treatment system) from the solids retention time (ideally fairly high, for an effluent low in BOD5 and ammonia).
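The decoupling of the two retention times can be illustrated numerically. The figures below (tank volume, flows, solids concentrations) are purely hypothetical, chosen for illustration rather than taken from the Davyhulme works:

```python
# Hypothetical activated-sludge plant: sludge recycling lets the solids
# stay in the system for days while the liquid passes through in hours.

V = 1000.0    # aeration tank volume, m^3 (assumed)
Q = 12000.0   # influent flow, m^3/day (assumed)
X = 3000.0    # mixed-liquor suspended solids, mg/L (assumed)
Qw = 50.0     # waste-sludge flow, m^3/day (assumed)
Xw = 8000.0   # waste-sludge solids concentration, mg/L (assumed)

hrt_hours = V / Q * 24.0        # hydraulic (liquid) retention time
srt_days = (V * X) / (Qw * Xw)  # solids retention time (effluent solids neglected)

print(f"HRT = {hrt_hours:.1f} h, SRT = {srt_days:.1f} days")
```

Here the liquid spends about 2 hours in the tank while the biomass is retained for about a week, which is exactly the concentration of biological organisms the passage describes.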

Their results were published in their seminal 1914 paper, and the first full-scale continuous-flow system was installed at Worcester two years later. In the aftermath of the First World War the new treatment method spread rapidly, especially to the USA, Denmark, Germany and Canada. By the late 1930s, the activated sludge treatment became a well-known biological wastewater treatment process in those countries where sewer systems and sewage treatment plants were common.

Toilets

With the onset of the Industrial Revolution and related advances in technology, the flush toilet began to emerge into its modern form. A flush toilet, however, needs to be connected to a sewer system; where this is not feasible or desired, dry toilets are an alternative option.

Water supply

Chelsea Waterworks, 1752. Two Newcomen beam engines pumped Thames water from a canal to reservoirs at Green Park and Hyde Park.

An ambitious engineering project to bring fresh water from Hertfordshire to London was undertaken by Hugh Myddleton, who oversaw the construction of the New River between 1609 and 1613. The New River Company became one of the largest private water companies of the time, supplying the City of London and other central areas. The first civic system of piped water in England was established in Derby in 1692, using wooden pipes, which was common for several centuries. The Derby Waterworks included waterwheel-powered pumps for raising water out of the River Derwent and storage tanks for distribution.

It was in the 18th century that a rapidly growing population fueled a boom in the establishment of private water supply networks in London. The Chelsea Waterworks Company was established in 1723 "for the better supplying the City and Liberties of Westminster and parts adjacent with water". The company created extensive ponds in the area bordering Chelsea and Pimlico using water from the tidal Thames. Other waterworks were established in London, including at West Ham in 1743, at Lea Bridge before 1767, Lambeth Waterworks Company in 1785, West Middlesex Waterworks Company in 1806 and Grand Junction Waterworks Company in 1811.

The S-bend pipe was invented by Alexander Cummings in 1775 but became known as the U-bend following the introduction of the U-shaped trap by Thomas Crapper in 1880. The first screw-down water tap was patented in 1845 by Guest and Chrimes, a brass foundry in Rotherham.

Water treatment

Sand filter

Sir Francis Bacon attempted to desalinate sea water by passing the flow through a sand filter. Although his experiment did not succeed, it marked the beginning of a new interest in the field.

The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland, John Gibb, installed an experimental filter, selling his unwanted surplus to the public. This method was refined in the following two decades by engineers working for private water companies, and it culminated in the first treated public water supply in the world, installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. This installation provided filtered water for every resident of the area, and the network design was widely copied throughout the United Kingdom in the ensuing decades.

The Metropolis Water Act 1852 introduced the regulation of the water supply companies in London, including minimum standards of water quality for the first time. The Act "made provision for securing the supply to the Metropolis of pure and wholesome water", and required that all water be "effectually filtered" from 31 December 1855. This was followed up with legislation for the mandatory inspection of water quality, including comprehensive chemical analyses, in 1858. This legislation set a worldwide precedent for similar state public health interventions across Europe. The Metropolitan Commission of Sewers was formed at the same time, water filtration was adopted throughout the country, and new water intakes on the Thames were established above Teddington Lock. Automatic pressure filters, where the water is forced under pressure through the filtration system, were innovated in 1899 in England.

Water chlorination

In what may have been one of the first attempts to use chlorine, William Soper used chlorinated lime to treat the sewage produced by typhoid patients in 1879.

In a paper published in 1894, Moritz Traube formally proposed the addition of chloride of lime (calcium hypochlorite) to water to render it "germ-free." Two other investigators confirmed Traube's findings and published their papers in 1895. Early attempts at implementing water chlorination at a water treatment plant were made in 1893 in Hamburg, Germany, and in 1897 the town of Maidstone, in Kent, England, was the first to have its entire water supply treated with chlorine.

Permanent water chlorination began in 1905, when a faulty slow sand filter and a contaminated water supply led to a serious typhoid fever epidemic in Lincoln, England. Dr. Alexander Cruickshank Houston used chlorination of the water to stem the epidemic. His installation fed a concentrated solution of chloride of lime to the water being treated. The chlorination of the water supply helped stop the epidemic and as a precaution, the chlorination was continued until 1911 when a new water supply was instituted.

Manual Control Chlorinator for the liquefaction of chlorine for water purification, early 20th century. From Chlorination of Water by Joseph Race, 1918.

The first continuous use of chlorine in the United States for disinfection took place in 1908 at Boonton Reservoir (on the Rockaway River), which served as the supply for Jersey City, New Jersey. Chlorination was achieved by controlled additions of dilute solutions of chloride of lime (calcium hypochlorite) at doses of 0.2 to 0.35 ppm. The treatment process was conceived by Dr. John L. Leal, and the chlorination plant was designed by George Warren Fuller. Over the next few years, chlorine disinfection systems using chloride of lime were rapidly installed in drinking water systems around the world.
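The quoted dose range lends itself to a quick back-of-the-envelope calculation, since 1 ppm in water corresponds to roughly 1 mg of chlorine per litre (water has a density of about 1 kg/L). A minimal sketch of the arithmetic follows; the daily flow figure is hypothetical, and only the 0.2 to 0.35 ppm range comes from the text above:

```python
# Estimate the mass of available chlorine needed to dose a daily
# water flow at a target concentration. 1 ppm ~= 1 mg per litre.

def chlorine_mass_kg(flow_litres_per_day: float, dose_ppm: float) -> float:
    """Mass of available chlorine (kg/day) for the given flow and dose."""
    mg_per_day = flow_litres_per_day * dose_ppm  # ppm == mg/L in water
    return mg_per_day / 1_000_000                # mg -> kg

# Hypothetical example: a supply of 150 million litres/day dosed at
# the 0.2 to 0.35 ppm range quoted for the Boonton Reservoir plant.
low = chlorine_mass_kg(150e6, 0.2)
high = chlorine_mass_kg(150e6, 0.35)
print(f"{low:.0f} to {high:.0f} kg of available chlorine per day")
```

At such low doses, even a very large municipal supply needs only tens of kilograms of available chlorine per day, which helps explain why chlorination spread so quickly once demonstrated.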

The technique of purification of drinking water by use of compressed liquefied chlorine gas was developed by a British officer in the Indian Medical Service, Vincent B. Nesfield, in 1903. According to his own account, "It occurred to me that chlorine gas might be found satisfactory ... if suitable means could be found for using it.... The next important question was how to render the gas portable. This might be accomplished in two ways: By liquefying it, and storing it in lead-lined iron vessels, having a jet with a very fine capillary canal, and fitted with a tap or a screw cap. The tap is turned on, and the cylinder placed in the amount of water required. The chlorine bubbles out, and in ten to fifteen minutes the water is absolutely safe. This method would be of use on a large scale, as for service water carts."

U.S. Army Major Carl Rogers Darnall, Professor of Chemistry at the Army Medical School, gave the first practical demonstration of this in 1910. Shortly thereafter, Major William J. L. Lyster of the Army Medical Department used a solution of calcium hypochlorite in a linen bag to treat water. For many decades, Lyster's method remained the standard for U.S. ground forces in the field and in camps, implemented in the form of the familiar Lyster Bag (also spelled Lister Bag). This work became the basis for present day systems of municipal water purification.

Fluoridation

Water fluoridation is a practice that has been carried out since the early 20th century for the purpose of decreasing tooth decay.

Trends

Sustainable Development Goal 6, formulated in 2015, includes targets on access to water supply and sanitation at a global level. In developing countries, self-supply of water and sanitation is used as an approach for incrementally improving water and sanitation services, financed mainly by the user. Decentralized wastewater systems are also growing in importance for achieving sustainable sanitation.

Understanding of health aspects

Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854.

A basic form of contagion theory dates back to medicine in the medieval Islamic world, where it was proposed by the Persian physician Ibn Sina (also known as Avicenna) in The Canon of Medicine (1025), the most authoritative medical textbook of the Middle Ages. He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt. The concept of invisible contagion was eventually widely accepted by Islamic scholars; in the Ayyubid Sultanate, such contaminants were referred to as najasat ("impure substances"). The fiqh scholar Ibn al-Haj al-Abdari (c. 1250–1336), while discussing Islamic diet and hygiene, gave advice and warnings about how contagion can contaminate water, food, and garments, and could spread through the water supply.

Long before studies had established the germ theory of disease, or any advanced understanding of the nature of water as a vehicle for transmitting disease, traditional beliefs had cautioned against the consumption of water, favoring instead processed beverages such as beer, wine and tea. For example, in the camel caravans that crossed Central Asia along the Silk Road, the explorer Owen Lattimore noted, "The reason we drank so much tea was because of the bad water. Water alone, unboiled, is never drunk. There is a superstition that it causes blisters on the feet."

One of the earliest understandings of waterborne diseases in Europe arose during the 19th century, as the Industrial Revolution swept across the continent. Waterborne diseases such as cholera were once wrongly explained by the miasma theory, the theory that bad air causes the spread of disease. However, people began to find a correlation between water quality and waterborne diseases, which led to purification methods such as sand filtration and the chlorination of drinking water.

Founders of microscopy, Antonie van Leeuwenhoek and Robert Hooke, used the newly invented microscope to observe for the first time small material particles that were suspended in the water, laying the groundwork for the future understanding of waterborne pathogens and waterborne diseases.

In the 19th century, Britain was a center of rapid urbanization, and as a result many health and sanitation problems manifested, such as cholera outbreaks and pandemics. This resulted in Britain playing a large role in the development of public health. Before the link between contaminated drinking water and diseases such as cholera was discovered, the miasma theory, which attributed illness to "bad airs", was used to explain these outbreaks. The investigations of the physician John Snow during the 1854 Broad Street cholera outbreak clarified the connection between waterborne diseases and polluted drinking water. Although the germ theory of disease had not yet been developed, Snow's observations led him to discount the prevailing miasma theory. His 1855 essay On the Mode of Communication of Cholera conclusively demonstrated the role of the water supply in spreading the cholera epidemic in Soho, using a dot distribution map and statistical proof to illustrate the connection between the quality of the water source and cholera cases. During the 1854 epidemic, he collected and analyzed data establishing that people who drank water from contaminated sources such as the Broad Street pump died of cholera at much higher rates than those who got their water elsewhere. His data convinced the local council to disable the pump, which promptly ended the outbreak.

Edwin Chadwick, in particular, played a key role in Britain's sanitation movement, using the miasma theory to back his plans for improving sanitation in Britain. Although Chadwick contributed to the development of public health in the 19th century, it was John Snow and William Budd who introduced the idea that cholera was the consequence of contaminated water, establishing that disease could be transmitted through drinking water.

People found that purifying and filtering water improved its quality and limited cases of waterborne disease. This was first demonstrated in the German town of Altona, which used a sand filtration system for its water supply: during the 1892 cholera epidemic, neighboring Hamburg, which did not filter its water, suffered a severe outbreak while Altona remained largely unaffected, providing evidence that water quality was linked to the disease. After this discovery, Britain and the rest of Europe adopted filtration of drinking water, as well as chlorination, to fight off waterborne diseases like cholera.
