
Saturday, February 12, 2022

Carbon emission trading

From Wikipedia, the free encyclopedia

Carbon emission trade - allowance prices from 2008

Emissions trading (ETS) for carbon dioxide (CO2) and other greenhouse gases (GHG) is a form of carbon pricing, also known as cap and trade (CAT). It is an approach to limiting climate change by creating a market with a limited number of allowances for emissions. This can reduce the competitiveness of fossil fuels and accelerate investment in low-carbon sources of energy such as wind power and photovoltaics. Fossil fuels are the main driver of climate change, accounting for 89% of all CO2 emissions and 68% of all GHG emissions.

Emissions trading works by setting a quantitative limit on the total emissions produced by all participating emitters; the price of allowances then adjusts automatically to meet this target. This is the main advantage over a fixed carbon tax. Under emissions trading, a polluter that emits more than its quota must purchase the right to emit more, while an entity that emits less than its quota can sell its surplus allowances. As a result, the most cost-effective carbon reduction methods are exploited first. ETS and carbon taxes are a common method for countries attempting to meet their pledges under the Paris Agreement.
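How a cap selects the cheapest reductions first can be made concrete with a small simulation. The following Python sketch uses entirely hypothetical firms, baseline emissions, and per-tonne abatement costs; it illustrates the mechanism only, not any real market.

```python
# Minimal sketch of least-cost abatement under a cap.
# Firms, baselines (tCO2), and abatement costs (EUR/tCO2) are hypothetical.
firms = {
    "power":  (150, 25),
    "cement": (120, 40),
    "steel":  (100, 60),
}
cap = 250  # total allowances issued (tCO2)

baseline = sum(e for e, _ in firms.values())  # 370 tCO2
required_abatement = baseline - cap           # 120 tCO2 must be cut

# The cheapest abatement is used first, wherever it sits.
abated, clearing_price = 0, 0.0
for name, (emissions, cost) in sorted(firms.items(), key=lambda kv: kv[1][1]):
    cut = min(emissions, required_abatement - abated)
    abated += cut
    clearing_price = cost  # price settles at the marginal cut's cost
    print(f"{name} abates {cut} tCO2 at EUR {cost}/tCO2")
    if abated >= required_abatement:
        break

print(f"allowance price settles near EUR {clearing_price}/tCO2")
```

Under these assumed numbers, the entire 120 tCO2 reduction comes from the cheapest source, and the allowance price settles near its marginal cost; a firm with more expensive abatement simply buys allowances instead.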

Carbon ETS are in operation in China, the European Union, and other countries. However, they are usually not harmonized with any defined carbon budgets, which are required to keep global warming below the critical thresholds of 1.5 °C or "well below" 2 °C. The existing schemes cover only a limited scope of emissions. The EU-ETS focuses on industry and large power generation, leaving the introduction of additional schemes for transport and private consumption to the member states. Though units are counted in tonnes of carbon dioxide equivalent, other potent GHGs such as methane (CH4) or nitrous oxide (N2O) from agriculture are usually not yet part of these schemes. Apart from that, an oversupply leads to low allowance prices with almost no effect on fossil fuel combustion. In September 2021, emission trade allowances (ETAs) covered a wide price range, from €7/tCO2 in China's new national carbon market to €63/tCO2 in the EU-ETS. Recent models of the social cost of carbon calculate damages of more than $3,000 per tonne of CO2 as a result of economic feedbacks and falling global GDP growth rates, while policy recommendations range from about $50 to $200.

History

The international community began the long process of building effective international and domestic measures to tackle GHG emissions (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, sulphur hexafluoride) in response to increasing assertions that global warming is happening due to man-made emissions and to the uncertainty over its likely consequences. That process began in Rio de Janeiro in 1992, when 160 countries agreed on the UN Framework Convention on Climate Change (UNFCCC). The necessary detail was left to be settled by the UN Conference of the Parties (COP).

The Kyoto Protocol of 1997 was the first major agreement to reduce greenhouse gases. 38 developed countries (Annex I countries) committed themselves to targets and timetables.

The resulting inflexible limitations on GHG growth could entail substantial costs if countries had to rely solely on their own domestic measures.

Economics

The economic problem with climate change is that the emitters of greenhouse gases (GHGs) do not face the full cost implications of their actions. The costs that fall on others are called external costs. In the case of climate change, GHG emissions affect the welfare of people now and in the future, as well as the natural environment. The social cost of carbon depends on the future development of emissions, which can be addressed with the dynamic price model of emissions trading.

Distribution of allowances

Emission allowances may be given away for free or auctioned. In the first case, the government receives no carbon revenue, and in the second it receives (on average) the full value of the permits. In either case, permits are equally scarce and just as valuable to market participants. Since the private market for trading permits determines the final price of permits (at the time they must be used to cover emissions), the price will be the same whether permits are free or auctioned. This point is generally well understood.

A second point about free permits (usually "grandfathered," i.e. given out in proportion to past emissions) has often been misunderstood. Companies that receive free permits treat them as if they had paid full price for them, because using carbon in production has the same cost under both arrangements. With auctioned permits, the cost is obvious. With free permits, the cost is the cost of not selling the permit at full value; this is termed an "opportunity cost." Since the cost of emissions is generally a marginal cost (increasing with output), the cost is passed on by raising the price of output (e.g. raising the price of gasoline or electricity).
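The equivalence of the two cases reduces to a few lines. This sketch uses a hypothetical permit price; the point is only that the marginal cost of emitting one tonne is the same whether the permit was bought at auction or received for free.

```python
# Hypothetical permit price set by the trading market (EUR/tCO2).
permit_price = 50.0

# Auctioned permit: emitting one tonne costs the price paid at auction.
cost_with_auctioned_permit = permit_price

# Free permit: emitting one tonne means forgoing the revenue the permit
# would fetch if sold instead -- an opportunity cost of the same size.
cost_with_free_permit = permit_price

assert cost_with_auctioned_permit == cost_with_free_permit
print(f"marginal cost of emitting one tonne: EUR {permit_price:.2f} either way")
```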

Windfall profits

A company that receives permits for free will pass on its opportunity cost in the form of higher product prices. Hence, if it sells the same amount of output as before the cap, with no change in production technology, the full value (at the market price) of the permits received for free becomes windfall profit. However, since the cap reduces output and often causes the company to incur costs to increase efficiency, windfall profits will be less than the full value of its free permits.

Generally speaking, if permits are allocated to emitters for free, they will profit from them. But if they must pay full price, or if carbon is taxed, their profits will be reduced. If the carbon price exactly equals the true social cost of carbon, then the long-run profit reduction will simply reflect the consequences of paying this new cost. If having to pay this cost is unexpected, then there will likely be a one-time loss that is due to the change in regulations and not simply due to paying the real cost of carbon. However, if there is advance notice of this change, or if the carbon price is introduced gradually, this one-time regulatory cost will be minimized. There has now been enough advance notice of carbon pricing that this effect should be negligible on average.

Carbon emission trading systems and markets

For emissions trading where greenhouse gases are regulated, one emissions permit is considered equivalent to one tonne of carbon dioxide (CO2) emissions. Other emissions permits are carbon credits, Kyoto units, assigned amount units, and Certified Emission Reduction units (CER). These permits can be sold privately or in the international market at the prevailing market price. These trade and settle internationally, and hence allow permits to be transferred between countries. Each international transfer is validated by the United Nations Framework Convention on Climate Change (UNFCCC). Each transfer of ownership within the European Union is additionally validated by the European Commission.

Emissions trading programmes such as the European Union Emissions Trading System (EU ETS) complement the country-to-country trading stipulated in the Kyoto Protocol by allowing private trading of permits. Under such programmes – which are generally co-ordinated with the national emissions targets provided within the framework of the Kyoto Protocol – a national or international authority allocates permits to individual companies based on established criteria, with a view to meeting national and/or regional Kyoto targets at the lowest overall economic cost.

Other greenhouse gases can also be traded, but are quoted as standard multiples of carbon dioxide with respect to their global warming potential. These features reduce the quota's financial impact on business, while ensuring that the quotas are met at a national and international level.
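Conversion into a common unit is a simple multiplication by each gas's global warming potential (GWP). The sketch below uses illustrative 100-year GWP values; a real scheme fixes the exact figures it recognizes (typically those of a specific IPCC assessment report).

```python
# CO2-equivalence of other greenhouse gases via global warming potential.
# The 100-year GWP values below are illustrative, not those of any one scheme.
GWP_100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2_equivalent_tonnes(gas: str, tonnes: float) -> float:
    """Tonnes of CO2e that `tonnes` of `gas` count for against a quota."""
    return tonnes * GWP_100[gas]

print(co2_equivalent_tonnes("CH4", 1.0))  # 28.0 -- one tonne of methane
print(co2_equivalent_tonnes("N2O", 2.0))  # 530.0
```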

Exchanges trading in UNFCCC-related carbon credits include the European Climate Exchange, NASDAQ OMX Commodities Europe, PowerNext, Commodity Exchange Bratislava and the European Energy Exchange. The Chicago Climate Exchange participated until 2010. NASDAQ OMX Commodities Europe listed a contract to trade offsets generated by CDM carbon projects, called Certified Emission Reductions. Many companies now engage in emissions abatement, offsetting, and sequestration programs to generate credits that can be sold on one of the exchanges. At least one private electronic market, CantorCO2e, was established in 2008. Carbon credits at Commodity Exchange Bratislava are traded on a special platform called Carbon place. Various proposals for linking international systems across markets are being investigated, coordinated by the International Carbon Action Partnership (ICAP).

China

The Chinese national carbon trading scheme is the largest in the world. It is an intensity-based trading system for carbon dioxide emissions in China, which started operating in 2021. The initial design of the system targets a scope of 3.5 billion tonnes of carbon dioxide emissions from 1,700 installations. China has made a voluntary pledge under the UNFCCC to lower CO2 emissions per unit of GDP by 40 to 45% by 2020 compared with 2005 levels.

In November 2011, China approved pilot tests of carbon trading in seven provinces and cities: Beijing, Chongqing, Shanghai, Shenzhen and Tianjin, as well as Guangdong Province and Hubei Province, with different prices in each region. The pilots were intended to test the waters and provide valuable lessons for the design of a national system. Their successes or failures would therefore have far-reaching implications for carbon market development in China, in terms of trust in a national carbon trading market. Some pilot regions began trading as early as 2013-2014; national trading was expected to start in 2017, or by 2020 at the latest.

The effort to start a national trading system has faced problems that took longer than expected to solve, mainly in the complicated process of collecting initial data to determine base levels of emissions. According to the initial design, eight sectors were to be included first in the trading system: chemicals, petrochemicals, iron and steel, non-ferrous metals, building materials, paper, power and aviation, but many of the companies involved lacked consistent data. Therefore, by the end of 2017, the allocation of emission quotas had started but was limited to the power sector, with gradual expansion planned, although the operation of the market was yet to begin. In this system, participating companies are required to meet target reduction levels, which will tighten gradually.

European Union

European Allowance prices from 2009

The European Union Emission Trading Scheme (or EU-ETS) is the largest multi-national greenhouse gas emissions trading scheme in the world. After voluntary trials in the UK and Denmark, Phase I began operation in January 2005 with all 15 member states of the European Union participating. The program caps the amount of carbon dioxide that can be emitted from large installations with a net heat supply in excess of 20 MW, such as power plants and carbon-intensive factories, and covers almost half (46%) of the EU's carbon dioxide emissions. Phase I permits participants to trade among themselves and in validated credits from the developing world through Kyoto's Clean Development Mechanism. Credits are gained by investing in clean technologies and low-carbon solutions, and by certain types of emission-saving projects around the world, to cover a proportion of their emissions.

During Phases I and II, allowances for emissions have typically been given free to firms, which has resulted in them getting windfall profits. Ellerman and Buchner (2008) suggested that during its first two years in operation, the EU-ETS turned an expected increase in emissions of 1%–2% per year into a small absolute decline. Grubb et al. (2009) suggested that a reasonable estimate for the emissions cut achieved during its first two years of operation was 50–100 MtCO2 per year, or 2.5%–5%.

A number of design flaws have limited the effectiveness of the scheme. In the initial 2005–07 period, emission caps were not tight enough to drive a significant reduction in emissions. The total allocation of allowances turned out to exceed actual emissions, which drove the carbon price down to zero in 2007. The oversupply arose because the EU's allocation of allowances was based on emissions data from the European Environment Agency in Copenhagen, which uses a horizontal, activity-based definition of emissions similar to that of the United Nations, whereas the EU-ETS transaction log in Brussels uses a vertical, installation-based measurement system. This caused an oversupply of 200 million tonnes (10% of the market) in the first phase of the EU-ETS and collapsing prices.

Phase II saw some tightening, but the use of JI and CDM offsets was allowed, with the result that no reductions in the EU will be required to meet the Phase II cap. For Phase II, the cap is expected to result in an emissions reduction in 2010 of about 2.4% compared to expected emissions without the cap (business-as-usual emissions). For Phase III (2013–20), the European Commission proposed a number of changes, including:

  • Setting an overall EU cap, with allowances then allocated;
  • Tighter limits on the use of offsets;
  • Unlimited banking of allowances between Phases II and III;
  • A move from free allocation of allowances to auctioning.

In January 2008, Norway, Iceland, and Liechtenstein joined the European Union Emissions Trading System (EU-ETS), according to a publication from the European Commission. The Norwegian Ministry of the Environment also released its draft National Allocation Plan, which provides a carbon cap-and-trade of 15 million tonnes of CO2, 8 million of which are set to be auctioned. According to the OECD Economic Survey of Norway 2010, the nation "has announced a target for 2008–12 10% below its commitment under the Kyoto Protocol and a 30% cut compared with 1990 by 2020." Based on figures for 2012 from the European Environment Agency, EU-15 emissions in 2012 were 15.1% below their base-year level, and averaged 11.8% below base-year levels during the 2008–2012 period. This means the EU-15 over-achieved its first Kyoto target by a wide margin.

A 2020 study found that the European Union Emissions Trading System successfully reduced CO2 emissions even though the prices for carbon were set at low prices.

India

Trading is set to begin in 2014 after a three-year rollout period. It is a mandatory energy efficiency trading scheme covering eight sectors responsible for 54 per cent of India's industrial energy consumption. India has pledged a 20 to 25 per cent reduction in emission intensity from 2005 levels by 2020. Under the scheme, annual efficiency targets will be allocated to firms. Tradable energy-saving permits will be issued depending on the amount of energy saved during a target year.

USA

As of 2017, there is no national emissions trading scheme in the United States. Having failed to get Congressional approval for such a scheme, President Barack Obama instead acted through the United States Environmental Protection Agency, attempting to adopt through rulemaking the Clean Power Plan, which does not feature emissions trading. The plan was subsequently challenged by the administration of President Donald Trump.

Concerned at the lack of federal action, several states on the east and west coasts have created sub-national cap-and-trade programs.

President Barack Obama in his proposed 2010 United States federal budget wanted to support clean energy development with a 10-year investment of US$15 billion per year, generated from the sale of greenhouse gas (GHG) emissions credits. Under the proposed cap-and-trade program, all GHG emissions credits would have been auctioned off, generating an estimated $78.7 billion in additional revenue in FY 2012, steadily increasing to $83 billion by FY 2019. The proposal was never made law.

The American Clean Energy and Security Act (H.R. 2454), a greenhouse gas cap-and-trade bill, was passed on 26 June 2009, in the House of Representatives by a vote of 219–212. The bill originated in the House Energy and Commerce Committee and was introduced by Representatives Henry A. Waxman and Edward J. Markey. The political advocacy organizations FreedomWorks and Americans for Prosperity, funded by brothers David and Charles Koch of Koch Industries, encouraged the Tea Party movement to focus on defeating the legislation. Although cap and trade also gained a significant foothold in the Senate via the efforts of Republican Lindsey Graham, Independent and former Democrat Joe Lieberman, and Democrat John Kerry, the legislation died in the Senate.

State and regional programs

In 2003, New York State proposed and attained commitments from nine Northeast states to form a cap-and-trade carbon dioxide emissions program for power generators, called the Regional Greenhouse Gas Initiative (RGGI). This program launched on January 1, 2009, with the aim to reduce the carbon "budget" of each state's electricity generation sector to 10% below their 2009 allowances by 2018.

Also in 2003, U.S. corporations were able to trade CO2 emission allowances on the Chicago Climate Exchange under a voluntary scheme. In August 2007, the Exchange announced a mechanism to create emission offsets for projects within the United States that cleanly destroy ozone-depleting substances.

In 2006, the California Legislature passed the California Global Warming Solutions Act, AB-32. Thus far, flexible mechanisms in the form of project-based offsets have been suggested for three main project types: manure management, forestry, and destruction of ozone-depleting substances. However, a ruling from Judge Ernest H. Goldsmith of San Francisco's Superior Court stated that the rules governing California's cap-and-trade system were adopted without a proper analysis of alternative methods to reduce greenhouse gas emissions. The tentative ruling, issued on January 24, 2011, argued that the California Air Resources Board had violated state environmental law by failing to consider such alternatives; had the decision been made final, the state would not have been allowed to implement its proposed cap-and-trade system until the Board fully complied with the California Environmental Quality Act. However, on June 24, 2011, the Superior Court's ruling was overturned by the Court of Appeals. By 2012, some emitters, namely electric utilities, industrial facilities and natural gas distributors, received allowances for free, while others had to buy them at auction. The California cap-and-trade program came into effect in 2013.

In 2014, the Texas legislature approved a 10% reduction in the Highly Reactive Volatile Organic Compound (HRVOC) emission limit. This was followed by a 5% reduction in each subsequent year until a total reduction of 25% was achieved in 2017.

In February 2007, five U.S. states and four Canadian provinces joined to create the Western Climate Initiative (WCI), a regional greenhouse gas emissions trading system. In July 2010, a meeting took place to further outline the cap-and-trade system. In November 2011, Arizona, Montana, New Mexico, Oregon, Utah and Washington withdrew from the WCI. As of 2021, only the U.S. state of California and the Canadian province of Quebec participate in the WCI.

In 1997, the State of Illinois adopted a trading program for volatile organic compounds in most of the Chicago area, called the Emissions Reduction Market System. Beginning in 2000, over 100 major sources of pollution in eight Illinois counties began trading pollution credits.

Australia

In 2003 the New South Wales (NSW) state government unilaterally established the New South Wales Greenhouse Gas Abatement Scheme to reduce emissions by requiring electricity generators and large consumers to purchase NSW Greenhouse Abatement Certificates (NGACs). This has prompted the rollout of free energy-efficient compact fluorescent lightbulbs and other energy-efficiency measures, funded by the credits. This scheme has been criticised by the Centre for Energy and Environmental Markets (CEEM) of the UNSW because of its lack of effectiveness in reducing emissions, its lack of transparency and its lack of verification of the additionality of emission reductions.

Both the incumbent Howard Coalition government and the Rudd Labor opposition promised to implement an emissions trading scheme (ETS) before the 2007 federal election. Labor won the election, with the new government proceeding to implement an ETS. The government introduced the Carbon Pollution Reduction Scheme, which the Liberals supported with Malcolm Turnbull as leader. Tony Abbott questioned an ETS, saying the best way to reduce emissions is with a "simple tax". Shortly before the carbon vote, Abbott defeated Turnbull in a leadership challenge, and from there on the Liberals opposed the ETS. This left the government unable to secure passage of the bill and it was subsequently withdrawn.

Julia Gillard defeated Rudd in a leadership challenge and, taking the government to the 2010 election, promised not to introduce a carbon tax but to look to legislate a price on carbon. In the first hung parliament result in 70 years, the government required the support of crossbenchers, including the Greens. One requirement for Greens support was a carbon price, which Gillard proceeded with in forming a minority government. Under the plan, a fixed carbon price would proceed to a floating-price ETS within a few years. The fixed price lent itself to characterisation as a carbon tax, and when the government proposed the Clean Energy Bill in February 2011, the opposition claimed it to be a broken election promise.

The bill was passed by the Lower House in October 2011 and the Upper House in November 2011. The Liberal Party vowed to overturn the bill if elected. The bill thus resulted in passage of the Clean Energy Act, which possessed a great deal of flexibility in its design and uncertainty over its future.

The Liberal/National coalition government elected in September 2013 promised to reverse the climate legislation of the previous government. In July 2014, the carbon tax was repealed, as was the Emissions Trading Scheme (ETS) that was to start in 2015.

Canada

The Canadian provinces of Quebec and Nova Scotia operate an emissions trading scheme. Quebec links its program with the US state of California through the Western Climate Initiative.

Japan

The Japanese city of Tokyo is like a country in its own right in terms of its energy consumption and GDP. Tokyo consumes as much energy as "entire countries in Northern Europe, and its production matches the GNP of the world's 16th largest country". A scheme to limit carbon emissions launched in April 2010 covers the top 1,400 emitters in Tokyo, and is enforced and overseen by the Tokyo Metropolitan Government. Phase 1, which is similar to Japan's voluntary scheme, ran until 2015. (Japan had an ineffective voluntary emissions reduction system for years, but no nationwide cap-and-trade program.) Emitters must cut their emissions by 6% or 8% depending on the type of organization; from 2011, those who exceed their limits must buy matching allowances or invest in renewable-energy certificates or offset credits issued by smaller businesses or branch offices. Polluters that fail to comply can be fined up to 500,000 yen and must surrender credits for 1.3 times their excess emissions. In its fourth year, emissions were reduced by 23% compared with base-year emissions. In Phase 2 (FY2015–FY2019), the target is expected to increase to 15%–17%. The aim is to cut Tokyo's carbon emissions by 25% from 2000 levels by 2020. These emission limits can be met by using technologies such as solar panels and advanced fuel-saving devices.
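As a worked illustration of the penalty rule just described, the sketch below computes a worst-case non-compliance cost from the 500,000 yen maximum fine and the 1.3x credit surcharge stated in the text; the credit price and excess volume are hypothetical inputs.

```python
# Worst-case cost of missing a Tokyo-style cap, per the figures above.
MAX_FINE_JPY = 500_000   # maximum fine stated in the text
SURCHARGE_FACTOR = 1.3   # credits owed per tonne of excess emissions

def worst_case_penalty(excess_tonnes: float, credit_price_jpy: float) -> float:
    """Maximum fine plus the cost of buying 1.3x credits for the excess."""
    return MAX_FINE_JPY + excess_tonnes * SURCHARGE_FACTOR * credit_price_jpy

# Hypothetical: 1,000 tonnes over the limit, credits at 2,000 yen/tonne.
print(worst_case_penalty(1_000, 2_000))  # 3,100,000 yen
```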

New Zealand

New Zealand Unit Prices

The New Zealand Emissions Trading Scheme (NZ ETS) is a partial-coverage, all-free-allocation, uncapped, highly internationally linked emissions trading scheme. The NZ ETS was first legislated in the Climate Change Response (Emissions Trading) Amendment Act 2008 in September 2008 under the Fifth Labour Government of New Zealand and then amended in November 2009 and in November 2012 by the Fifth National Government of New Zealand.

The NZ ETS covers forestry (a net sink), energy (43.4% of total 2010 emissions), industry (6.7% of total 2010 emissions) and waste (2.8% of total 2010 emissions) but not pastoral agriculture (47% of 2010 total emissions). Participants in the NZ ETS must surrender two emissions units (either an international 'Kyoto' unit or a New Zealand-issued unit) for every three tonnes of carbon dioxide equivalent emissions reported or they may choose to buy NZ units from the government at a fixed price of NZ$25.
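The surrender obligation quoted above (two units per three tonnes, with a NZ$25 fixed-price fallback) reduces to simple arithmetic. The sketch below just works through those figures from the text; the example emission volume is hypothetical.

```python
# Surrender obligation and price-cap cost under the figures quoted above.
FIXED_PRICE_NZD = 25.0  # fixed-price option per NZ unit

def units_owed(tonnes_co2e: float) -> float:
    # Two units surrendered per three tonnes reported, per the text.
    return tonnes_co2e * 2 / 3

def max_compliance_cost_nzd(tonnes_co2e: float) -> float:
    """Cost ceiling if every unit is bought at the fixed government price."""
    return units_owed(tonnes_co2e) * FIXED_PRICE_NZD

print(units_owed(30_000))               # 20000.0 units owed
print(max_compliance_cost_nzd(30_000))  # 500000.0 NZ dollars at most
```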

Individual sectors of the economy have different entry dates at which their obligations to report emissions and surrender emission units take effect. Forestry, which contributed net removals of 17.5 Mt of CO2e in 2010 (19% of NZ's 2008 emissions), entered the NZ ETS on 1 January 2008. The stationary energy, industrial processes and liquid fossil fuel sectors entered the NZ ETS on 1 July 2010. The waste sector (landfill operators) entered on 1 January 2013. Methane and nitrous oxide emissions from pastoral agriculture are not included in the NZ ETS. (From November 2009, agriculture was to enter the NZ ETS on 1 January 2015.)

The NZ ETS is highly linked to international carbon markets, as it allows the importing of most Kyoto Protocol emission units. However, as of June 2015, the scheme will effectively transition into a domestic scheme, with restricted access to international Kyoto units (CERs, ERUs and RMUs). The NZ ETS has a domestic unit, the 'New Zealand Unit' (NZU), which is issued by free allocation to emitters, with no auctions intended in the short term. Free allocation of NZUs varies between sectors. The commercial fishery sector (whose members are not participants) received a free allocation of units on a historic basis. Owners of pre-1990 forests have received a fixed free allocation of units. Free allocation to emissions-intensive industry is provided on an output-intensity basis: there is no set limit on the number of units that may be allocated to this sector, and the number of units allocated to eligible emitters is based on the average emissions per unit of output within a defined 'activity'. Bertram and Terry (2010, p 16) state that as the NZ ETS does not 'cap' emissions, it is not a cap-and-trade scheme as understood in the economics literature.

Some stakeholders have criticized the New Zealand Emissions Trading Scheme for its generous free allocations of emission units and the lack of a carbon price signal (the Parliamentary Commissioner for the Environment), and for being ineffective in reducing emissions (Greenpeace Aotearoa New Zealand).

The NZ ETS was reviewed in 2011 by an independent panel, which reported to the Government and the public in September 2011.

South Korea

South Korea's national emissions trading scheme officially launched on 1 January 2015, covering 525 entities from 23 sectors. With a three-year cap of 1.8687 billion tCO2e, it now forms the second largest carbon market in the world following the EU ETS. This amounts to roughly two-thirds of the country's emissions. The Korean emissions trading scheme is part of the Republic of Korea's efforts to reduce greenhouse gas emissions by 30% compared to the business-as-usual scenario by 2020.

United Kingdom

Businesses in the UK have come out strongly in support of emissions trading as a key tool to mitigate climate change, supported by NGOs. However, not all businesses favor a trading approach. On December 11, 2008, Rex Tillerson, the CEO of ExxonMobil, said a carbon tax is "a more direct, more transparent and more effective approach" than a cap-and-trade program, which, he said, "inevitably introduces unnecessary cost and complexity". He also said that he hoped that the revenues from a carbon tax would be used to lower other taxes so as to be revenue neutral.

Market trend

Emissions trading and carbon taxes around the world (2021): jurisdictions with a carbon tax implemented or scheduled, with carbon emission trading implemented or scheduled, and with carbon emission trading or a carbon tax under consideration.

Carbon emissions trading increased rapidly in 2021 with the start of the Chinese national carbon trading scheme. The increasing costs of permits on the EU ETS have had the effect of increasing costs of coal power.

A 2019 study by the American Council for an Energy-Efficient Economy (ACEEE) finds that efforts to put a price on greenhouse gas emissions are growing in North America: "In addition to carbon taxes in effect in Alberta, British Columbia and Boulder, Colorado, cap and trade programs are in effect in California, Quebec, Nova Scotia and the nine northeastern states that form the Regional Greenhouse Gas Initiative (RGGI). Several other states and provinces are currently considering putting a price on emissions."

Business reaction

23 multinational corporations came together in the G8 Climate Change Roundtable, a business group formed at the January 2005 World Economic Forum. The group included Ford, Toyota, British Airways, BP and Unilever. On June 9, 2005, the Group published a statement stating the need to act on climate change and stressing the importance of market-based solutions. It called on governments to establish "clear, transparent, and consistent price signals" through "creation of a long-term policy framework" that would include all major producers of greenhouse gases. By December 2007, this had grown to encompass 150 global businesses.

The position of the International Air Transport Association, whose 230 member airlines comprise 93% of all international traffic, is that trading should be based on "benchmarking", setting emissions levels based on industry averages, rather than "grandfathering", which would use individual companies' previous emissions levels to set their future permit allowances. It argues that grandfathering "would penalise airlines that took early action to modernise their fleets, while a benchmarking approach, if designed properly, would reward more efficient operations".

In 2021, shipowners said they were against being included in the EU ETS.

Voluntary surrender of units

There are examples of individuals and organisations purchasing tradable emission permits and 'retiring' (cancelling) them so they cannot be used by emitters to authorise emissions. Removing these credits from the carbon market lowers the effective 'cap' by reducing the number of credits available to emitters, and therefore further reduces emissions.

Criticisms

Critics of carbon trading, such as Carbon Trade Watch, argue that it places disproportionate emphasis on individual lifestyles and carbon footprints, distracting attention from the wider, systemic changes and collective political action that need to be taken to tackle climate change. Groups such as the Corner House have argued that the market will choose the easiest means to save a given quantity of carbon in the short term, which may differ from the pathway required to obtain sustained and sizable reductions over a longer period, and so a market-led approach is likely to reinforce technological lock-in. For instance, small cuts may often be achieved cheaply through investment in making a technology more efficient, where larger cuts would require scrapping the technology and using a different one. They also argue that emissions trading is undermining alternative approaches to pollution control with which it does not combine well, so that its overall effect is actually to stall significant change to less polluting technologies. In September 2010, the campaigning group FERN released "Trading Carbon: How it works and why it is controversial", which compiles many of the arguments against carbon trading.

The Financial Times published an article about cap-and-trade systems which argued that "Carbon markets create a muddle" and "...leave much room for unverifiable manipulation". Lohmann (2009) pointed out that emissions trading schemes create new uncertainties and risks, which can be commodified by means of derivatives, thereby creating a new speculative market.

In China, some companies started artificial production of greenhouse gases with the sole purpose of destroying them to gain carbon credits; similar practices occurred in India. The earned credits were then sold to companies in the US and Europe.

Proposals for alternative schemes to avoid the problems of cap-and-trade schemes include Cap and Share, which was considered by the Irish Parliament in 2008, and the Sky Trust schemes. Proponents of these schemes argue that cap-and-trade schemes inherently impact the poor and those in rural areas, who have less choice in energy consumption options.

Carbon trading has been criticised as a form of colonialism, in which rich countries maintain their levels of consumption while getting credit for carbon savings in inefficient industrial projects. Nations that have fewer financial resources may find that they cannot afford the permits necessary for developing an industrial infrastructure, thus inhibiting these countries' economic development.

The Kyoto Protocol's Clean Development Mechanism has been criticised for not promoting enough sustainable development.

Another criticism is the claimed possibility of non-existent emission reductions being recorded under the Kyoto Protocol due to the surplus of allowances that some countries possess. For example, Russia had a surplus of allowances due to its economic collapse following the end of the Soviet Union. Other countries could have bought these allowances from Russia, but this would not have reduced emissions; rather, it would simply have been a redistribution of emissions allowances. In practice, Kyoto Parties have so far chosen not to buy these surplus allowances.

Flexibility, and thus complexity, inherent in cap and trade schemes has resulted in a great deal of policy uncertainty surrounding these schemes. Such uncertainty has beset such schemes in Australia, Canada, China, the EU, India, Japan, New Zealand, and the US. As a result of this uncertainty, organizations have little incentive to innovate and comply, resulting in an ongoing battle of stakeholder contestation for the past two decades.

Lohmann (2006b) supported conventional regulation, green taxes, and energy policies that are "justice-based" and "community-driven." According to Carbon Trade Watch (2009), carbon trading has had a "disastrous track record." The effectiveness of the EU ETS was criticized, and it was argued that the CDM had routinely favoured "environmentally ineffective and socially unjust projects."

Annie Leonard's 2009 documentary The Story of Cap and Trade criticized carbon emissions trading for the unjust advantages that free permits give major polluters, for cheating in connection with carbon offsets, and for distracting from the search for other solutions.

Offsets

Forest campaigner Jutta Kill (2006) of the European environmental group FERN argued that offsets for emission reductions were not a substitute for actual cuts in emissions. Kill stated that "[carbon] in trees is temporary: Trees can easily release carbon into the atmosphere through fire, disease, climatic changes, natural decay and timber harvesting."

Permit supply level

Regulatory agencies run the risk of issuing too many emission credits, which can result in a very low price on emission permits. This reduces the incentive that permit-liable firms have to cut their emissions. On the other hand, issuing too few permits can result in an excessively high permit price. This is an argument for a hybrid instrument with a price floor, i.e., a minimum permit price, and a price ceiling, i.e., a limit on the permit price. However, a price ceiling (safety valve) removes the certainty of a particular quantity limit on emissions.
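Such a hybrid instrument amounts to clipping the market price to a band. The sketch below illustrates the idea with hypothetical floor and ceiling values; it is not a model of any particular scheme.

```python
# A hybrid "price collar": the permit price floats with the market but is
# clipped to a floor and a ceiling. All numbers here are hypothetical.
PRICE_FLOOR = 20.0    # minimum price, e.g. via an auction reserve price
PRICE_CEILING = 80.0  # maximum price, the "safety valve"

def collared_price(market_price: float) -> float:
    return min(max(market_price, PRICE_FLOOR), PRICE_CEILING)

for p in (5.0, 45.0, 120.0):
    print(p, "->", collared_price(p))
# 5.0 -> 20.0    oversupply: the floor preserves the abatement incentive
# 45.0 -> 45.0   normal range: the quantity cap binds as usual
# 120.0 -> 80.0  scarcity: the ceiling caps costs but relaxes the quantity limit
```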

Permit allocation versus auctioning

If polluters receive emission permits for free ("grandfathering"), this may be a reason for them not to cut their emissions because if they do they will receive fewer permits in the future.

This perverse incentive can be alleviated if permits are auctioned, i.e., sold to polluters, rather than giving them the permits for free. Auctioning is a method for distributing emission allowances in a cap-and-trade system whereby allowances are sold to the highest bidder. Revenues from auctioning go to the government and can be used for development of sustainable technology or to cut distortionary taxes, thus improving the efficiency of the overall cap policy.

On the other hand, allocating permits can be used as a measure to protect domestic firms that are internationally exposed to competition. This happens when domestic firms compete against other firms that are not subject to the same regulation. This argument in favor of allocation of permits has been used in the EU ETS, where industries that have been judged to be internationally exposed, e.g., cement and steel production, have been given permits for free.

Structuring issues

Corporate and governmental carbon emission trading schemes have been modified in ways that have been attributed to permitting money laundering to take place. The principal point here is that financial system innovations (outside banking) open up the possibility for unregulated (non-banking) transactions to take place in relatively unsupervised markets.

Public opinion

In the United States, most polling shows large support for emissions trading (often referred to as cap-and-trade). This majority support can be seen in polls conducted by The Washington Post/ABC News, Zogby International and Yale University. A Washington Post/ABC News poll revealed that majorities of the American people believe in climate change, are concerned about it, are willing to change their lifestyles and pay more to address it, and want the federal government to regulate greenhouse gases. They are, however, ambivalent on cap-and-trade.

More than three-quarters of respondents, 77.0%, reported they "strongly support" (51.0%) or "somewhat support" (26.0%) the EPA's decision to regulate carbon emissions. While 68.6% of respondents reported being "very willing" (23.0%) or "somewhat willing" (45.6%), another 26.8% reported being "somewhat unwilling" (8.8%) or "not at all willing" (18.0%) to pay higher prices for "Green" energy sources to support funding for programs that reduce the effect of global warming.

According to PolitiFact, it is a misconception that emissions trading is unpopular in the United States because of earlier polls from Zogby International and Rasmussen which misleadingly include "new taxes" in the questions (taxes aren't part of emissions trading) or high energy cost estimates.

United States biological defense program

From Wikipedia, the free encyclopedia

The United States biological defense program, in recent years also called the National Biodefense Strategy, refers to the collective effort by all levels of government, along with private enterprise and other stakeholders, in the United States to carry out biodefense activities.

Biodefense is a system of planned actions to counter and reduce the risk of biological threats and to prepare for, respond to, and recover from them if they happen. The National Defense Authorization Act (NDAA) of 2016 required high-level officials across the federal government to create a national biodefense strategy together. As a result, in 2018 the National Biodefense Strategy was released by President Donald J. Trump. In essence, the strategy comprises the U.S. biological defense program in that it is the official framework that provides a "single coordinated effort" to coordinate all biodefense activities across the federal government. To execute the strategy, the White House issued a Presidential Memorandum on the Support for National Biodefense, which puts in place the specific directives and rules for carrying out the plans written in the strategy. Notably, the National Biodefense Strategy elevated natural outbreaks to a vital component of the U.S. biological defense program for the first time, mostly because of the significant risk that natural outbreaks pose to civilian, animal and agricultural populations across the country.

The U.S. biological defense program began as a small defensive effort that paralleled the country's offensive biological weapons development and production program, active from 1943. Organizationally, the medical defense research effort was pursued first (1956–1969) by the U.S. Army Medical Unit (USAMU) and later, after the publicly known discontinuation of the offensive program, by the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID). Both of these units were located at Fort Detrick, Maryland, where the U.S. Army Biological Warfare Laboratories were headquartered. The current mission is multi-agency, not exclusively military, and is purely to develop defensive measures against bio-agents, as opposed to the former bio-weapons development program.

In 1951, due to biological warfare concerns arising from the Korean War, the US Centers for Disease Control and Prevention (CDC) created the Epidemic Intelligence Service (EIS), a hands-on two-year postgraduate training program in epidemiology, with a focus on field work.

Since the 2001 anthrax attacks, and the consequent expansion of federal bio-defense expenditures, USAMRIID has been joined at Fort Detrick by sister bio-defense agencies of the U.S. Department of Health and Human Services (NIAID's Integrated Research Facility) and the U.S. Department of Homeland Security (the National Biodefense Analysis and Countermeasures Center and the National Bioforensic Analysis Center). These—along with the much older Foreign Disease Weed Science Research Unit of the U.S. Department of Agriculture—now constitute the National Interagency Confederation for Biological Research (NICBR).

Broadly defined, the "United States biological defense program" now also encompasses all federal-level programs and efforts to monitor, prevent, and contain naturally occurring infectious disease outbreaks of widespread public health concern. These include efforts to forestall large-scale disasters such as flu pandemics and other "emerging infections" such as novel pathogens or those imported from other countries.

Overview

Biological agents have been used in warfare for centuries to produce death or disease in humans, animals, or plants. The United States officially began its biological warfare offensive program in 1941. During the next 28 years, the U.S. initiative evolved into an effective, military-driven research and acquisition program, shrouded in secrecy and, later, controversy. Most research and development was done at Fort Detrick, Maryland, while production and testing of bio-weapons occurred at Pine Bluff, Arkansas, and Dugway Proving Ground (DPG), Utah. Field testing was done secretly and successfully with simulants and actual agents disseminated over wide areas. A small defensive effort always paralleled the weapons development and production program. With the presidential decision in 1969 to halt offensive biological weapons production—and the agreement in 1972 at the international BWC never to develop, produce, stockpile, or retain biological agents or toxins—the program became entirely defensive, with medical and non-medical components. The U.S. biological defense research program exists today, conducting research to develop physical and medical countermeasures to protect service members and civilians from the threat of modern biological warfare.

Both the U.S. bio-weapons ban and the BWC restricted any work in the area of biological warfare to that which is defensive in nature. In reality, this gives BWC member-states wide latitude to conduct biological weapons research, because the BWC contains no provisions for monitoring enforcement. The treaty, essentially, is a gentlemen's agreement among members, backed by the long-prevailing view that biological warfare should not be used in battle.

In recent years, certain critics have claimed that the U.S. stance on biological warfare and the use of biological agents has differed from historical interpretations of the BWC. For example, it is said that the U.S. now maintains that Article I of the BWC (which explicitly bans bio-weapons) does not apply to "non-lethal" biological agents. The previous interpretation was stated to be in line with a definition laid out in Public Law 101-298, the Biological Weapons Anti-Terrorism Act of 1989. That law defined a biological agent as:

any micro-organism, virus, infectious substance, or biological product that may be engineered as a result of biotechnology, or any naturally occurring or bio-engineered component of any such microorganism, virus, infectious substance, or biological product, capable of causing death, disease, or other biological malfunction in a human, an animal, a plant, or another living organism; deterioration of food, water, equipment, supplies, or material of any kind ...

According to the Federation of American Scientists, U.S. work on non-lethal agents exceeds limitations in the BWC.

History

1950s

After World War II, and with the onset of Cold War tensions, the US continued its clandestine wartime bio-weapons program. The Korean War (1950–53) added justification for continuing the program, when the possible entry of the Soviet Union into the war was feared. Concerns over the Soviet Union were justified, for the Soviet Union would pronounce in 1956 that chemical and biological weapons would, indeed, be used for mass destruction in future wars. In October 1950, the US Secretary of Defense approved continuation of the program, based largely on the Soviet threat and a belief that the North Korean and Chinese communists would use biological weapons. With expansion of the biological warfare retaliatory program, the scope of the defensive program was nearly doubled. Data were obtained on personnel protection, decontamination, and immunization. Early detection research produced prototype alarms for use on the battlefield, but progress was slow, apparently limited by technology.

The U.S. Army Medical Unit, under the direction of The U.S. Army Surgeon General, began formal operations in 1956. One of the Unit's first missions was to manage all aspects of Project CD-22, the exposure of volunteers to aerosols containing a pathogenic strain of Coxiella burnetii, the etiologic agent of Q fever. The volunteers were closely monitored and antibiotic therapy was administered when appropriate. All volunteers recovered from Q fever with no adverse aftereffects. One year later, the Unit submitted to the U.S. Food and Drug Administration an Investigational New Drug application for a Q fever vaccine.

1960s

In the following decade, the US accumulated significant data on personnel protection, decontamination, and immunization; and, in the offensive program, on the potential for mosquitoes to be used as biological vectors. A new Department of Defense (DoD) Biological and Chemical Defense Planning Board was created in 1960 to establish program priorities and objectives. Preventive approaches toward infections of all kinds were funded under the auspices of biological warfare. As concern increased over the biological warfare threat during the Cold War, so did the budget for the program: to $38 million by fiscal year 1966.

The U.S. Army Chemical Corps was given the responsibility to conduct biological warfare research for all of the services. In 1962, the responsibility for the testing of promising biological warfare agents was given to a separate Testing and Evaluation Command (TEC). Depending on the particular program, different test centers were used, such as the Deseret Test Center at Fort Douglas, Utah, the headquarters for the new biological and chemical warfare testing organization. In response to increasing concerns over public safety and the environment, the TEC implemented a complex system of approval of its research programs that included the U.S. Army Chief of Staff, the Joint Chiefs of Staff, the Secretary of Defense, and the President of the United States.

During the last 10 years of the offensive research and development program (1959–69), many scientific advances were made that proved that biological warfare was clearly feasible, although dependent on careful planning, especially with regard to meteorological conditions. Large-scale fermentation, purification, concentration, stabilization, drying, and weaponization of pathogenic microorganisms could be done safely. Furthermore, modern principles of biosafety and containment were established at the Fort Detrick laboratories which have greatly facilitated biomedical research in general; still today, these are followed throughout the world. Arnold G. Wedum, M.D., Ph.D., a civilian scientist who was Director of Industrial Health and Safety at Fort Detrick, was the leader in the development of containment facilities.

During the 1960s, the US program underwent a philosophical change, and attention was now directed more towards biological agents that could incapacitate, but not kill. In 1964, research programs involved staphylococcal enterotoxins capable of causing food poisoning. Research initiatives also included new therapy and prophylaxis. Pathogens studied included the agents causing anthrax, glanders, brucellosis, melioidosis, plague, psittacosis, Venezuelan equine encephalitis, Q fever, coccidioidomycosis, and a variety of plant and animal pathogens.

Particular attention was directed at chemical and biological detectors during the 1960s. The first devices were primitive field alarms to detect chemicals. Although the development of sensitive biological warfare agent detectors was at a standstill, two systems were, nonetheless, investigated. The first was a monitor that detected increases in the number of particles sized 1 to 5 µm in diameter, based on the assumption that a biological agent attack would include airborne particles of this size. The second system involved the selective staining of particles collected from the air. Both systems lacked enough specificity and sensitivity to be of any practical use.

But in 1966, a research effort directed at detecting the presence of adenosine triphosphate (a chemical found only in living organisms) was begun. By using a fluorescent material found in fireflies, preliminary studies indicated that it was possible to detect the presence of a biological agent in the atmosphere. The important effort to find a satisfactory detection system continues today, for timely detection of a biological attack would allow the attacked force to use its protective masks effectively, and identification of the agent would allow any pre-treatment regimens to be instituted. The US Army also experimented with and developed highly effective barrier protective measures against both chemical and biological agents. Special impervious tents and personal protective equipment were developed, including individual gas masks even for military dogs.

During the late 1960s, funding for the biological warfare program decreased temporarily, to accommodate the accelerating costs of the Vietnam War. The budget for fiscal year 1969 was $31 million, decreasing to $11.8 million by fiscal year 1973. Although the offensive program had been stopped in 1969, both offensive and defensive programs continued to be defended. John S. Foster, Jr, Director of Defense Research and Engineering, responded to a query by Congressman Richard D. McCarthy:

It is the policy of the U.S. to develop and maintain a defensive chemical-biological (CB) capability so that our military forces could operate for some period of time in a toxic environment, if necessary; to develop and maintain a limited offensive capability in order to deter all use of CB weapons by the threat of retaliation in kind; and to continue a program of research and development in this area to minimize the possibility of technological surprise.

On 25 November 1969, President Richard Nixon visited Fort Detrick to announce a new policy on biological warfare. In two National Security Memoranda, the U.S. government renounced all development, production, and stockpiling of biological weapons and declared its intent to maintain only small research quantities of biological agents, such as are necessary for the development of vaccines, drugs, and diagnostics.

Ground was broken in 1967 for the construction of a new, modern laboratory building at Fort Detrick. The building would open in phases during 1971 and 1972. With the disestablishment of the biological warfare laboratories, the name of the U.S. Army Medical Unit, which was to have been housed in the new laboratories, was formally changed to U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) in 1969. The institute's new mission was stated in General Order 137, 10 November 1971 (since superseded):

Conducts studies related to medical defensive aspects of biological agents of military importance and develops appropriate biological protective measures, diagnostic procedures and therapeutic methods.

The emphasis now shifted away from offensive weapons to the development of vaccines, diagnostic systems, personal protection, chemoprophylaxis, and rapid detection systems.

1970s

After Nixon declared an end to the U.S. bio-weapons program, debate in the Army centered around whether or not toxin weapons were included in the president's declaration. Following Nixon's November 1969 order, scientists at Fort Detrick worked on one toxin, Staphylococcus enterotoxin type B (SEB), for several more months. Nixon ended the debate when he added toxins to the bio-weapons ban in February 1970.

In response to Nixon's 1969 decision, all antipersonnel biological warfare stocks were destroyed between 10 May 1971 and 1 May 1972. The laboratory at Pine Bluff Arsenal, Arkansas, was converted to a toxicological research laboratory, and was no longer under the direction or control of the DoD. Biological anticrop agents were destroyed by February 1973. Biological warfare demilitarization continued through the 1970s, with input provided by the U.S. Department of Health, Education and Welfare; U.S. Department of the Interior; U.S. Department of Agriculture; and the Environmental Protection Agency. Fort Detrick and other installations involved in the biological warfare program took on new identities, and their missions were changed to biological defense and the development of medical countermeasures. The necessary containment capability, Biosafety Levels 3 and 4 (BSL-3 and BSL-4) continued to be maintained at USAMRIID.

1980s

In 1984, the DoD requested funds for the construction of another biological aerosol test facility in Utah. The proposal submitted by the army called for BSL-4 containment, although maintaining that the BSL-4 inclusion was based on a possible need in the future and not on a current research effort. The proposal was not well received in Utah, where many citizens and government officials still recalled the secretive projects of the military: the areas on DPG still contaminated with anthrax spores, and the well-publicized accidental chemical poisoning of a flock of sheep in Skull Valley, Utah, in March 1968. Questions arose over the safety of the employees and the surrounding communities, and a suggestion was even made to shift all biological defense research to a civilian agency, such as the National Institutes of Health. The plan for a new facility was revised to utilize a BSL-3 facility, but not before the US Congress had instituted more surveillance, reporting, and control measures on the army to ensure compliance with the BWC.

1990s

In the 1990s, the US medical biological defense research effort (part of the U.S. Army's Biological Defense Research Program [BDRP]) was concentrated at USAMRIID at Fort Detrick. The army maintained state-of-the-art containment laboratory facilities there, with more than 10,000 ft² of BSL-4 and 50,000 ft² of BSL-3 laboratory space. BSL-4, the highest containment level, included laboratory suites that were isolated by internal walls and protected by rigorous entry restrictions, air-locks, negative-pressure air-handling systems, and filtration of all out-flow air through high-efficiency particulate air (HEPA) filters. Workers in BSL-4 laboratories also wore filtered positive-pressure total body suits, which isolated them from the internal air of the laboratory. BSL-3 laboratories had a similar design but did not require personnel to wear positive-pressure suits; workers in BSL-3 suites were instead protected immunologically by vaccines. U.S. governmental standards provided guidance as to which organisms could be handled under the various containment levels in laboratories such as USAMRIID.

The unique facilities available at USAMRIID also included a 16-bed clinical research ward capable of BSL-3 containment, and a 2-bed patient care isolation suite—the Medical Containment Suite (MCS), known as "The Slammer"—where ICU-level care could be provided under BSL-4 containment. Here, healthcare personnel wore the same positive-pressure suits as are worn in BSL-4 research laboratories. The level of patient isolation required depended on the infecting organism and the risk to healthcare providers. Patient care could thus be provided at BSL-4, but there was no patient-care category analogous to BSL-3; patients who became ill as a result of exposure to BSL-3 agents were to be cared for in an ordinary hospital room with barrier nursing procedures.

USAMRIID guidelines were prepared to determine which level of containment, BSL-4 isolation or barrier nursing care, would be employed for individual patients. Staff augmentation for BSL-4 critical care expertise came from the Walter Reed Army Medical Center (WRAMC), Washington, D.C., in accordance with a memorandum of agreement between the two institutions. Patients could be brought directly into the BSL-4 suite from the outside through specialized ports with unique patient-isolation equipment. (The MCS was decommissioned in December 2010.)

Additionally, starting in the 1970s, USAMRIID maintained a unique evacuation capability known as the Aeromedical Isolation Team (AIT). Led by a physician and a registered nurse, each of the two teams consisted of eight volunteers who trained intensively to provide an evacuation capability for casualties suspected of being infected with highly transmissible, life-threatening BSL-4 infectious diseases (e.g., hemorrhagic fever viruses). The unit used special adult-sized Vickers isolation units (Vickers Medical Containment Stretcher Transit Isolator). These units were aircraft transportable and isolated a patient placed inside from the external environment. The AIT could transport two patients simultaneously; it was not designed for mass-casualty situations. During the 1995 outbreak of Ebola fever in Zaire, the AIT remained on alert to evacuate any US citizens who might have become ill while working to control the disease in that country.

During this period, some biological defense research also continued at the U.S. Army Medical Research Institute of Chemical Defense, Edgewood Arsenal, Maryland, and the Walter Reed Army Institute of Research (WRAIR), Washington, D.C. USAMRIID and these sister laboratories conducted basic research in support of the medical component of the US biological defense research program, which developed strategies, products, information, procedures, and training for medical defense against biological warfare agents. The products included diagnostic reagents and procedures, drugs, vaccines, toxoids, and antitoxins. Emphasis was placed on protecting personnel before any potential exposure to the biological agent occurred.

In 1997, United States law formally defined weaponizable bio-agents as "Biological Select Agents or Toxins" (BSATs) — or simply Select Agents for short — which fall under the oversight of either the U.S. Department of Health and Human Services or the U.S. Department of Agriculture (or both) and which have the "potential to pose a severe threat to public health and safety".

In 1998, several DoD organizations consolidated to create the Defense Threat Reduction Agency (DTRA), headquartered in Fort Belvoir, Virginia. This agency is the DoD's official Combat Support Agency for countering weapons of mass destruction, including bio-agents. DTRA's main functions are threat reduction, threat control, combat support, and technology development. In the US national interest, DTRA supports projects at more than 14 locations around the world, including Russia, Kazakhstan, Azerbaijan, Uzbekistan, Georgia, and Ukraine.

In 1999, a "National Pharmaceutical Stockpile" — renamed the Strategic National Stockpile in 2002 — was created under the oversight of DHHS. In the same year, the Laboratory Response Network — a collaborative effort within the US federal government involving the Association of Public Health Laboratories and the Centers for Disease Control and Prevention — was established to facilitate the confirmatory diagnosis and typing of possible bio-agents. Also in 1999, President Bill Clinton issued Executive Order 13139, which provided that experimental anti-WMD drugs could be given to service members only with their informed consent; only the President may waive the informed-consent requirement.

2000s

Three secret DoD projects involving countermeasures against anthrax – code named Project Bacchus, Project Clear Vision and Project Jefferson – were publicly disclosed by The New York Times in 2001. (The projects were undertaken between 1997 and 2000 and focused on the concern that the old Soviet BW program was secretly continuing and had developed a genetically modified anthrax weapon.)

Since the September 11 attacks and the 2001 anthrax attacks, the US government has allocated nearly $50 billion to address the threat of biological weapons. Funding for bioweapons-related activities focuses primarily on research for and acquisition of medicines for defense. Biodefense funding also goes toward stockpiling protective equipment, increased surveillance and detection of bio-agents, and improving state and hospital preparedness. Significant funding goes to BARDA (Biomedical Advanced Research and Development Authority), part of DHHS. Funding for activities aimed at prevention has more than doubled since 2007 and is distributed among 11 federal agencies. Efforts toward cooperative international action are also part of this funding.

A "Select Agent Program" (SAP) was established to satisfy requirements of the USA PATRIOT Act of 2001 and the Public Health Security and Bioterrorism Preparedness and Response Act of 2002. The Centers for Disease Control and Prevention administers the SAP, which regulates the laboratories that may possess, use, or transfer Select Agents within the United States. The Project Bioshield Act was passed by Congress in 2004 calling for $5 billion for purchasing vaccines that would be used in the event of a bioterrorist attack. According to President George W. Bush:

Project BioShield will transform our ability to defend the nation in three essential ways. First, Project BioShield authorizes $5.6 billion over 10 years for the government to purchase and stockpile vaccines and drugs to fight anthrax, smallpox and other potential agents of bioterror. The DHHS has already taken steps to purchase 75 million doses of an improved anthrax vaccine for the Strategic National Stockpile. Under Project BioShield, HHS is moving forward with plans to acquire a safer, second generation smallpox vaccine, an antidote to botulinum toxin, and better treatments for exposure to chemical and radiological weapons. 

This was a ten-year program to acquire medical countermeasures to biological, chemical, radiological and nuclear agents for civilian use. A key element of the Act was to allow the stockpiling and distribution of vaccines that had not been tested for safety or efficacy in humans, because the efficacy of such agents cannot be tested directly in humans without also exposing them to the chemical, biological, or radioactive threat being treated. In such cases efficacy testing follows the US Food and Drug Administration "Animal Rule", under which pivotal efficacy studies are conducted in animals.

Since 2007, USAMRIID has been joined at Fort Detrick by sister bio-defense agencies of the U.S. Department of Health and Human Services (NIAID's Integrated Research Facility) and the U.S. Department of Homeland Security (the National Biodefense Analysis and Countermeasures Center and the National Bioforensic Analysis Center). These—along with the much older Foreign Disease Weed Science Research Unit of the U.S. Department of Agriculture—now constitute the National Interagency Confederation for Biological Research (NICBR).

2010s

In July 2012, the White House issued its guiding document on biosurveillance, the National Strategy for Biosurveillance.

2020s

In December 2019, Congress moved forward with a spending package that provided increases for several key U.S. biological defense programs, including the Strategic National Stockpile. The Centers for Disease Control and Prevention was slated to receive $8 billion, a $636 million increase over 2019, with a mandate written in the bill for CDC "to maintain a strong and central role in the medical countermeasures enterprise." Within the CDC budget, the Public Health and Social Services Emergency Fund, which prepares for "all public health emergencies" including bioterrorism and federal efforts against infectious diseases, was funded at $2.74 billion. Another change was a specific item in the budget for the Strategic National Stockpile, which directed $535 million for vaccines, medicines and diagnostic tools to fight Ebola, which had become an emerging threat.

Current status

In August 2019, the U.S. Government Accountability Office (GAO) issued a report that identified specific challenges that the United States faces in protecting the nation against biological events. The report focused on four specific vulnerabilities: assessment of "enterprise-wide threats", situational awareness and data integration, biodetection technologies, and lab safety and security.

Products currently being produced or under development through military research include vaccines against a number of biological threat agents.

Some vaccines also have applicability for diseases of domestic animals (e.g., Rift Valley fever and Venezuelan equine encephalitis). In addition, vaccines are provided to persons who may be occupationally exposed to such agents (e.g., laboratory workers, entomologists, and veterinary personnel) throughout government, industry, and academe.

USAMRIID also provides diagnostic and epidemiological support to federal, state, and local agencies and foreign governments. Examples of assistance rendered to civilian health efforts by the U.S. Army Medical Research and Materiel Command (USAMRMC) include:

  • The massive immunization program instituted during the Venezuelan equine encephalitis outbreak in the Americas in 1971;
  • The laboratory support provided to the U.S. Public Health Service during the outbreak of Legionnaires' disease in Philadelphia, Pennsylvania, in 1976;
  • The management of patients suspected of having African viral hemorrhagic fever in Sweden during the 1980s;
  • International support during the outbreak of Rift Valley fever in Mauritania in 1989;
  • Assistance with the outbreak of Ebola infections among monkeys imported to Reston (Virginia) in 1990 (→ Reston virus); and
  • Epidemiological and diagnostic support to the World Health Organization–Centers for Disease Control and Prevention field team that studied the Ebola outbreak in Zaire in 1995 (→ Zaire ebolavirus).

The current research effort applies new technological advances, such as genetic engineering and molecular modeling, toward the development of prevention and treatment of diseases of military significance. The program is conducted in compliance with requirements set forth by the U.S. Food and Drug Administration (FDA), U.S. Public Health Service, Nuclear Regulatory Commission, U.S. Department of Agriculture, Occupational Safety and Health Administration, and Biological Weapons Convention.

Friday, February 11, 2022

Catalytic reforming

From Wikipedia, the free encyclopedia

Catalytic reforming is a chemical process used to convert petroleum refinery naphthas distilled from crude oil (typically having low octane ratings) into high-octane liquid products called reformates, which are premium blending stocks for high-octane gasoline. The process converts low-octane linear hydrocarbons (paraffins) into branched alkanes (isoparaffins) and cyclic naphthenes, which are then partially dehydrogenated to produce high-octane aromatic hydrocarbons. The dehydrogenation also produces significant amounts of byproduct hydrogen gas, which is fed into other refinery processes such as hydrocracking. A side reaction is hydrogenolysis, which produces light hydrocarbons of lower value, such as methane, ethane, propane and butanes.

In addition to providing a gasoline blending stock, reformate is the main source of aromatic bulk chemicals such as benzene, toluene, xylene and ethylbenzene, which have diverse uses, most importantly as raw materials for conversion into plastics. However, the benzene content of reformate makes it carcinogenic, which has led to governmental regulations effectively requiring further processing to reduce its benzene content.

This process is quite different from and not to be confused with the catalytic steam reforming process used industrially to produce products such as hydrogen, ammonia, and methanol from natural gas, naphtha or other petroleum-derived feedstocks. Nor is this process to be confused with various other catalytic reforming processes that use methanol or biomass-derived feedstocks to produce hydrogen for fuel cells or other uses.

History

In the 1940s, Vladimir Haensel, a research chemist working for Universal Oil Products (UOP), developed a catalytic reforming process using a catalyst containing platinum. Haensel's process was subsequently commercialized by UOP in 1949 for producing high-octane gasoline from low-octane naphthas, and the UOP process became known as the Platforming process. The first Platforming unit was built in 1949 at the refinery of the Old Dutch Refining Company in Muskegon, Michigan.

In the years since then, many other versions of the process have been developed by some of the major oil companies and other organizations. Today, the large majority of gasoline produced worldwide is derived from the catalytic reforming process.

The other catalytic reforming versions that were subsequently developed all utilized a platinum and/or a rhenium catalyst.

Chemistry

Before describing the reaction chemistry of the catalytic reforming process as used in petroleum refineries, the typical naphthas used as catalytic reforming feedstocks will be discussed.

Typical naphtha feedstocks

A petroleum refinery includes many unit operations and unit processes. The first unit operation in a refinery is the continuous distillation of the petroleum crude oil being refined. The overhead liquid distillate is called naphtha and will become a major component of the refinery's gasoline (petrol) product after it is further processed through a catalytic hydrodesulfurizer to remove sulfur-containing hydrocarbons and a catalytic reformer to reform its hydrocarbon molecules into more complex molecules with a higher octane rating. The naphtha is a mixture of very many different hydrocarbon compounds. It has an initial boiling point of about 35 °C and a final boiling point of about 200 °C, and it contains paraffins, naphthenes (cyclic paraffins) and aromatic hydrocarbons ranging from those containing 6 carbon atoms to those containing about 10 or 11 carbon atoms.

The naphtha from the crude oil distillation is often further distilled to produce a "light" naphtha containing most (but not all) of the hydrocarbons with 6 or fewer carbon atoms and a "heavy" naphtha containing most (but not all) of the hydrocarbons with more than 6 carbon atoms. The heavy naphtha has an initial boiling point of about 140 to 150 °C and a final boiling point of about 190 to 205 °C. The naphthas derived from the distillation of crude oils are referred to as "straight-run" naphthas.

It is the straight-run heavy naphtha that is usually processed in a catalytic reformer because the light naphtha has molecules with 6 or fewer carbon atoms which, when reformed, tend to crack into butane and lower molecular weight hydrocarbons that are not useful as high-octane gasoline blending components. Also, the molecules with 6 carbon atoms tend to form aromatics, which is undesirable because governmental environmental regulations in a number of countries limit the amount of aromatics (most particularly benzene) that gasoline may contain.

There are a great many petroleum crude oil sources worldwide and each crude oil has its own unique composition or "assay". Also, not all refineries process the same crude oils and each refinery produces its own straight-run naphthas with their own unique initial and final boiling points. In other words, naphtha is a generic term rather than a specific term.

The table just below lists some fairly typical straight-run heavy naphtha feedstocks, available for catalytic reforming, derived from various crude oils. It can be seen that they differ significantly in their content of paraffins, naphthenes and aromatics:

Typical Heavy Naphtha Feedstocks

Crude oil name               Barrow Island   Mutineer-Exeter   CPC Blend    Draugen
Location                     Australia       Australia         Kazakhstan   North Sea
Initial boiling point, °C    149             140               149          150
Final boiling point, °C      204             190               204          180
Paraffins, liquid volume %   46              62                57           38
Naphthenes, liquid volume %  42              32                27           45
Aromatics, liquid volume %   12              6                 16           17

Some refinery naphthas include olefinic hydrocarbons, such as naphthas derived from the fluid catalytic cracking and coking processes used in many refineries. Some refineries may also desulfurize and catalytically reform those naphthas. However, for the most part, catalytic reforming is mainly used on the straight-run heavy naphthas, such as those in the above table, derived from the distillation of crude oils.

The reaction chemistry

There are many chemical reactions that occur in the catalytic reforming process, all of which occur in the presence of a catalyst and a high partial pressure of hydrogen. Depending upon the type or version of catalytic reforming used as well as the desired reaction severity, the reaction conditions range from temperatures of about 495 to 525 °C and from pressures of about 5 to 45 atm.

The commonly used catalytic reforming catalysts contain noble metals such as platinum and/or rhenium, which are very susceptible to poisoning by sulfur and nitrogen compounds. Therefore, the naphtha feedstock to a catalytic reformer is always pre-processed in a hydrodesulfurization unit which removes both the sulfur and the nitrogen compounds. Most catalysts require both the sulfur and nitrogen content to be lower than 1 ppm.

The four major catalytic reforming reactions are:

1: The dehydrogenation of naphthenes to convert them into aromatics, as exemplified in the conversion of methylcyclohexane (a naphthene) to toluene (an aromatic), as shown below:
   methylcyclohexane (C7H14) → toluene (C7H8) + 3 H2
2: The isomerization of normal paraffins to isoparaffins, as exemplified in the conversion of normal octane to 2,5-dimethylhexane (an isoparaffin), as shown below:
   n-octane (C8H18) → 2,5-dimethylhexane (C8H18)
3: The dehydrogenation and aromatization of paraffins to aromatics (commonly called dehydrocyclization), as exemplified in the conversion of normal heptane to toluene, as shown below:
   n-heptane (C7H16) → toluene (C7H8) + 4 H2
4: The hydrocracking of paraffins into smaller molecules, as exemplified by the cracking of normal heptane into isopentane and ethane, as shown below:
   n-heptane (C7H16) + H2 → isopentane (C5H12) + ethane (C2H6)

During the reforming reactions, the carbon number of the reactants remains unchanged, except for hydrocracking reactions which break down the hydrocarbon molecule into molecules with fewer carbon atoms.[11] The hydrocracking of paraffins is the only one of the above four major reforming reactions that consumes hydrogen. The isomerization of normal paraffins does not consume or produce hydrogen. However, both the dehydrogenation of naphthenes and the dehydrocyclization of paraffins produce hydrogen. The overall net production of hydrogen in the catalytic reforming of petroleum naphthas ranges from about 50 to 200 cubic meters of hydrogen gas (at 0 °C and 1 atm) per cubic meter of liquid naphtha feedstock. In United States customary units, that is equivalent to 300 to 1200 cubic feet of hydrogen gas (at 60 °F and 1 atm) per barrel of liquid naphtha feedstock. In many petroleum refineries, the net hydrogen produced in catalytic reforming supplies a significant part of the hydrogen used elsewhere in the refinery (for example, in hydrodesulfurization processes). The hydrogen is also necessary to hydrogenolyze any polymers that form on the catalyst.
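
The metric and US customary yield figures above are consistent with each other, which can be checked with a short unit conversion. Below is a minimal Python sketch (the conversion factors are standard; the gas volume is corrected from 0 °C to 60 °F with the ideal-gas law):

```python
# Convert a hydrogen yield from m3 of gas (0 degC, 1 atm) per m3 of
# liquid naphtha to standard ft3 (60 degF, 1 atm) per barrel.

M3_PER_BARREL = 0.158987   # one US oil barrel expressed in cubic metres
FT3_PER_M3 = 35.3147       # cubic feet per cubic metre
T_METRIC = 273.15          # 0 degC in kelvin
T_US = 288.71              # 60 degF in kelvin

def nm3_per_m3_to_scf_per_bbl(yield_nm3_per_m3: float) -> float:
    """Convert a hydrogen yield to US customary units."""
    gas_m3_per_bbl = yield_nm3_per_m3 * M3_PER_BARREL
    gas_m3_per_bbl *= T_US / T_METRIC   # ideal-gas temperature correction
    return gas_m3_per_bbl * FT3_PER_M3

for y in (50, 200):
    print(f"{y} m3/m3 -> {nm3_per_m3_to_scf_per_bbl(y):.0f} scf/bbl")
# Prints about 297 and 1187 scf/bbl, matching the quoted 300 to 1200 range.
```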

In practice, the higher the content of naphthenes in the naphtha feedstock, the better will be the quality of the reformate and the higher the production of hydrogen. Crude oils containing the best naphtha for reforming are typically from Western Africa or the North Sea, such as Bonny light oil or Norwegian Troll.
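
One common rule of thumb for comparing feedstocks in this respect is the N+2A index (naphthenes plus twice aromatics, in liquid volume percent); higher values indicate a naphtha that reforms more readily. The index is an industry convention rather than part of the text above; a minimal Python sketch applies it to the feedstocks in the earlier table:

```python
# N + 2A index: a rule-of-thumb measure of how "rich" a naphtha is
# as a catalytic reforming feed (compositions from the table above).
feedstocks = {
    "Barrow Island":   {"paraffins": 46, "naphthenes": 42, "aromatics": 12},
    "Mutineer-Exeter": {"paraffins": 62, "naphthenes": 32, "aromatics": 6},
    "CPC Blend":       {"paraffins": 57, "naphthenes": 27, "aromatics": 16},
    "Draugen":         {"paraffins": 38, "naphthenes": 45, "aromatics": 17},
}

for name, comp in feedstocks.items():
    index = comp["naphthenes"] + 2 * comp["aromatics"]
    print(f"{name:16s} N+2A = {index}")
# Draugen (N+2A = 79) is the richest of the four feeds;
# Mutineer-Exeter (N+2A = 44) is the leanest.
```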

Model reactions using lumping technique

Because the feedstock contains a very large number of components, the reactions are numerous and difficult to trace individually, and the temperature range is wide, the design and simulation of catalytic reformer reactors is complex. The lumping technique is used extensively to reduce this complexity, so that the lumps and reaction pathways properly describe the reforming system while the kinetic rate parameters do not depend on feedstock composition. In one recent study, naphtha is considered in terms of 17 hydrocarbon fractions with 15 reactions, in which C1 to C5 hydrocarbons are specified as light paraffins and the C6 to C8+ naphtha cuts are characterized as isoparaffins, normal paraffins, naphthenes and aromatics. Reactions in catalytic naphtha reforming are treated as elementary, and Hougen-Watson Langmuir-Hinshelwood type reaction rate expressions are used to describe the rate of each reaction. Rate equations of this type explicitly account for the interaction of chemical species with the catalyst and contain denominators with terms characteristic of the adsorption of the reacting species.
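
As an illustration of the general shape of such rate expressions (this is a generic textbook form, not a correlation from the study cited above), the reversible dehydrogenation of a naphthene, N ⇌ A + 3 H2, might be written as:

\[
r = \frac{k\left(p_{\mathrm{N}} - p_{\mathrm{A}}\,p_{\mathrm{H_2}}^{3}/K_{\mathrm{eq}}\right)}{1 + K_{\mathrm{N}}\,p_{\mathrm{N}} + K_{\mathrm{A}}\,p_{\mathrm{A}}}
\]

where k is the rate constant, K_eq the equilibrium constant, p_i the partial pressures, and K_N and K_A the adsorption constants of the naphthene and the aromatic. The adsorption terms in the denominator are what distinguish the Hougen-Watson form from a simple power-law rate.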

Process description

The most commonly used type of catalytic reforming unit has three reactors, each with a fixed bed of catalyst, and all of the catalyst is regenerated in situ during routine catalyst regeneration shutdowns which occur approximately once every 6 to 24 months. Such a unit is referred to as a semi-regenerative catalytic reformer (SRR).

Some catalytic reforming units have a spare or swing reactor, and each reactor can be individually isolated so that any one reactor can be undergoing in situ regeneration while the other reactors are in operation. When that reactor has been regenerated, it replaces another reactor which, in turn, is isolated so that it can then be regenerated. Such units, referred to as cyclic catalytic reformers, are not very common. Cyclic catalytic reformers serve to extend the period between required shutdowns.

The latest and most modern type of catalytic reformers are called continuous catalyst regeneration (CCR) reformers. Such units are defined by continuous in-situ regeneration of part of the catalyst in a special regenerator, and by continuous addition of the regenerated catalyst to the operating reactors. As of 2006, two CCR versions were available: UOP's CCR Platformer process and Axens' Octanizing process. The installation and use of CCR units is rapidly increasing.

Many of the earliest catalytic reforming units (in the 1950s and 1960s) were non-regenerative in that they did not perform in situ catalyst regeneration. Instead, when needed, the aged catalyst was replaced by fresh catalyst, and the aged catalyst was shipped to catalyst manufacturers either to be regenerated or to have its platinum content recovered. Very few, if any, catalytic reformers currently in operation are non-regenerative.

The process flow diagram below depicts a typical semi-regenerative catalytic reforming unit.

Schematic diagram of a typical semi-regenerative catalytic reformer unit in a petroleum refinery

The liquid feed (at the bottom left in the diagram) is pumped up to the reaction pressure (5–45 atm) and is joined by a stream of hydrogen-rich recycle gas. The resulting liquid–gas mixture is preheated by flowing through a heat exchanger. The preheated feed mixture is then totally vaporized and heated to the reaction temperature (495–520 °C) before the vaporized reactants enter the first reactor. As the vaporized reactants flow through the fixed bed of catalyst in the reactor, the major reaction is the dehydrogenation of naphthenes to aromatics (as described earlier herein) which is highly endothermic and results in a large temperature decrease between the inlet and outlet of the reactor. To maintain the required reaction temperature and the rate of reaction, the vaporized stream is reheated in the second fired heater before it flows through the second reactor. The temperature again decreases across the second reactor and the vaporized stream must again be reheated in the third fired heater before it flows through the third reactor. As the vaporized stream proceeds through the three reactors, the reaction rates decrease and the reactors therefore become larger. At the same time, the amount of reheat required between the reactors becomes smaller. Usually, three reactors are all that is required to provide the desired performance of the catalytic reforming unit.
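
The magnitude of the temperature decrease across a reactor can be illustrated with a simple adiabatic energy balance. The Python sketch below uses assumed, illustrative numbers (a heat of dehydrogenation of about 210 kJ per mole of naphthene converted and a vapour-stream heat capacity of 2.9 kJ/(kg·K)); these are placeholders for the idea, not design data:

```python
# Rough adiabatic temperature drop across a reforming reactor.
# All numeric values are illustrative assumptions, not design data.

DH_RXN = 210e3     # J per mol of naphthene dehydrogenated (assumed)
CP_STREAM = 2.9e3  # J/(kg*K) heat capacity of the vapour stream (assumed)

def outlet_temperature(t_in_c: float, mol_converted_per_kg: float) -> float:
    """Adiabatic outlet temperature, given the inlet temperature and the
    moles of naphthene converted per kg of total reactor feed."""
    delta_t = DH_RXN * mol_converted_per_kg / CP_STREAM
    return t_in_c - delta_t

# Example: 1.0 mol of naphthene converted per kg of feed in the first reactor
print(f"outlet = {outlet_temperature(515, 1.0):.0f} degC")  # about 443 degC
```

A drop of this size is why the stream must be reheated in a fired heater before each subsequent reactor.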

Some installations use three separate fired heaters as shown in the schematic diagram and some installations use a single fired heater with three separate heating coils.

The hot reaction products from the third reactor are partially cooled by flowing through the heat exchanger where the feed to the first reactor is preheated and then flow through a water-cooled heat exchanger before flowing through the pressure controller (PC) into the gas separator.

Most of the hydrogen-rich gas from the gas separator vessel returns to the suction of the recycle hydrogen gas compressor and the net production of hydrogen-rich gas from the reforming reactions is exported for use in the other refinery processes that consume hydrogen (such as hydrodesulfurization units and/or a hydrocracker unit).

The liquid from the gas separator vessel is routed into a fractionating column commonly called a stabilizer. The overhead offgas product from the stabilizer contains the byproduct methane, ethane, propane and butane gases produced by the hydrocracking reactions as explained in the above discussion of the reaction chemistry of a catalytic reformer, and it may also contain some small amount of hydrogen. That offgas is routed to the refinery's central gas processing plant for removal and recovery of propane and butane. The residual gas after such processing becomes part of the refinery's fuel gas system.

The bottoms product from the stabilizer is the high-octane liquid reformate that will become a component of the refinery's product gasoline. Reformate can be blended directly into the gasoline pool, but often it is separated into two or more streams. A common refining scheme consists of fractionating the reformate into two streams, light and heavy reformate. The light reformate has lower octane and can be used as isomerization feedstock if that unit is available. The heavy reformate is high in octane and low in benzene, hence it is an excellent blending component for the gasoline pool.

Benzene is often removed in a dedicated operation to reduce the benzene content of the reformate, as finished gasoline often has an upper limit on benzene content (in the EU this is 1% by volume). The benzene extracted can be marketed as feedstock for the chemical industry.

Catalysts and mechanisms

Most catalytic reforming catalysts contain platinum or rhenium on a silica or silica-alumina support base, and some contain both platinum and rhenium. Fresh catalyst is chlorided (chlorinated) prior to use.

The noble metals (platinum and rhenium) are considered to be catalytic sites for the dehydrogenation reactions, and the chlorinated alumina provides the acid sites needed for isomerization, cyclization and hydrocracking reactions. Great care must be exercised during chlorination: if the catalyst is not chlorinated (or is insufficiently chlorinated), the platinum and rhenium would be reduced almost immediately to the metallic state by the hydrogen in the vapour phase. Conversely, excessive chlorination could unduly depress the activity of the catalyst.

The activity (i.e., effectiveness) of the catalyst in a semi-regenerative catalytic reformer is reduced over time during operation by carbonaceous coke deposition and chloride loss. The activity of the catalyst can be periodically regenerated or restored by in situ high temperature oxidation of the coke followed by chlorination. As stated earlier herein, semi-regenerative catalytic reformers are regenerated about once every 6 to 24 months. The higher the severity of the reacting conditions (temperature), the higher the octane of the produced reformate, but also the shorter the duration of the cycle between two regenerations. The catalyst cycle duration is also highly dependent on the quality of the feedstock. However, regardless of the crude oil used in the refinery, all catalysts require the final boiling point of the naphtha feedstock to be no higher than 180 °C.
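
The trade-off between severity and cycle length can be pictured with a simple first-order deactivation model; the exponential form and the rate constants below are illustrative assumptions, not fitted plant data:

```python
import math

# First-order coke deactivation: activity a(t) = exp(-kd * t).
# The kd values are purely illustrative; real values depend on the
# feed quality and the operating severity.

def months_to_regeneration(kd_per_month: float,
                           min_activity: float = 0.5) -> float:
    """Time until activity decays to the level that triggers regeneration."""
    return -math.log(min_activity) / kd_per_month

for label, kd in (("low severity", 0.03), ("high severity", 0.10)):
    print(f"{label}: cycle of about {months_to_regeneration(kd):.0f} months")
# Low severity -> about 23 months; high severity -> about 7 months,
# consistent with the 6 to 24 month interval quoted above.
```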

Normally, the catalyst can be regenerated perhaps 3 or 4 times before it must be returned to the manufacturer for reclamation of the valuable platinum and/or rhenium content.

Weaknesses and competition

The sensitivity of catalytic reforming to contamination by sulfur and nitrogen requires hydrotreating the naphtha before it enters the reformer, adding to the cost and complexity of the process. Dehydrogenation, an important component of reforming, is a strongly endothermic reaction and, as such, requires the reactor vessel to be externally heated. This contributes both to the costs and to the emissions of the process. Catalytic reforming has a limited ability to process naphthas with a high content of normal paraffins, e.g. naphthas from gas-to-liquids (GTL) units. The reformate has a much higher content of benzene than is permitted by current regulations in many countries. This means that the reformate should either be further processed in an aromatics extraction unit or blended with appropriate hydrocarbon streams with a low content of aromatics. Catalytic reforming requires a whole range of other processing units at the refinery (apart from the distillation tower: a naphtha hydrotreater, usually an isomerization unit to process light naphtha, an aromatics extraction unit, etc.), which puts it out of reach for smaller (micro-)refineries.

Main licensors of catalytic reforming processes, UOP and Axens, constantly work on improving the catalysts, but the rate of improvement seems to be reaching its physical limits. This is driving the emergence of new technologies to process naphtha into gasoline by companies like Chevron Phillips Chemical (Aromax) and NGT Synthesis (Methaforming).

Economics

Catalytic reforming is profitable in that it converts long-chain hydrocarbons, for which there is limited demand despite high supply, into short-chain hydrocarbons which, owing to their use in petrol fuel, are in much greater demand. It can also be used to improve the octane rating of short-chain hydrocarbons by aromatizing them.

Operator (computer programming)

From Wikipedia, the free encyclopedia