
Friday, June 4, 2021

Economics of nuclear power plants

From Wikipedia, the free encyclopedia
 
EDF has said its third-generation EPR Flamanville 3 project (seen here in 2010) would be delayed until 2018 due to "both structural and economic reasons", and by 2012 the project's total cost had climbed to €11 billion. On 29 June 2019, it was announced that the start-up was once again being pushed back, making it unlikely before the end of 2022. In July 2020, the French Court of Audit finalised an eighteen-month in-depth analysis of the project, concluding that the total estimated cost reaches up to €19.1 billion, more than five times the original cost estimate. Similarly, the cost of the EPR being built at Olkiluoto, Finland, has escalated dramatically from €3 billion to over €12 billion, and the project is well behind schedule: originally due to commence operation in 2009, it is now unlikely to do so before 2022. The initial low cost forecasts for these megaprojects exhibited "optimism bias".

New nuclear power plants typically have high capital expenditure for building the plant. Fuel, operational, and maintenance costs are relatively small components of the total cost. The long service life and high capacity factor of nuclear power plants allow sufficient funds for ultimate plant decommissioning and waste storage and management to be accumulated, with little impact on the price per unit of electricity generated. Other groups disagree with these statements. Additionally, measures to mitigate climate change such as a carbon tax or carbon emissions trading, would favor the economics of nuclear power over fossil fuel power. Other groups argue that nuclear power is not the answer to climate change.

Nuclear power construction costs have varied significantly across the world and in time. Large and rapid increases in cost occurred during the 1970s, especially in the United States. There were no construction starts of nuclear power reactors between 1979 and 2012 in the United States, and since then more new reactor projects have gone into bankruptcy than have been completed. Recent cost trends in countries such as Japan and Korea have been very different, including periods of stability and decline in costs.

In more economically developed countries, a slowdown in electricity demand growth in recent years has made large-scale power infrastructure investments difficult. Very large upfront costs and long project cycles carry large risks, including political decision making and intervention such as regulatory ratcheting. In Eastern Europe, a number of long-established projects are struggling to find financing, notably Belene in Bulgaria and the additional reactors at Cernavoda in Romania, and some potential backers have pulled out. Where cheap gas is available and its future supply relatively secure, this also poses a major problem for clean energy projects. Former Exelon CEO John Rowe said in 2012 that new nuclear plants in the United States "don't make any sense right now" and would not be economic as long as gas prices remain low.

Bids for new nuclear power plants in China were estimated at between $2,800/kW and $3,500/kW, as China planned to accelerate its new build program after a pause following the Fukushima disaster. However, more recent reports indicated that China will fall short of its targets. While nuclear power in China has been cheaper than solar and wind power, these are getting cheaper while nuclear power costs are growing. Moreover, third generation plants are expected to be considerably more expensive than earlier plants. Comparison with other power generation methods therefore depends strongly on assumptions about construction timescales and capital financing for nuclear plants. Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date, all operating nuclear power plants were developed by state-owned or regulated utility monopolies, where many of the risks associated with political change and regulatory ratcheting were borne by consumers rather than suppliers. Many countries have since liberalized the electricity market, so that these risks, and the risk of cheap competition from subsidised energy sources emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers. This leads to a significantly different evaluation of the risk of investing in new nuclear power plants.

Two of the four EPRs under construction (the Olkiluoto Nuclear Power Plant in Finland and Flamanville in France), which are the latest new builds in Europe, are significantly behind schedule and substantially over cost. Following the 2011 Fukushima Daiichi nuclear disaster, costs are likely to go up for some types of currently operating and new nuclear power plants, due to new requirements for on-site spent fuel management and elevated design basis threats.

Overview

Olkiluoto 3 under construction in 2009. It is the first reactor of the EPR design, but problems with workmanship and supervision have created costly delays, which led to an inquiry by the Finnish nuclear regulator STUK. In December 2012, Areva estimated that the full cost of building the reactor would be about €8.5 billion, almost three times the original delivery price of €3 billion.

Although the price of new plants in China is lower than in the Western world, John Quiggin, an economics professor, maintains that the main problem with the nuclear option is that it is not economically viable. Professor of science and technology Ian Lowe has also challenged the economics of nuclear power. However, nuclear supporters continue to point to the historical success of nuclear power across the world, and they call for new reactors in their own countries, including proposed new but largely uncommercialised designs, as a source of new power. Nuclear supporters also point out that the IPCC climate panel endorses nuclear technology as a low-carbon, mature energy source whose output should be nearly quadrupled to help address soaring greenhouse gas emissions.

Some independent reviews have repeatedly concluded that nuclear power plants are necessarily very expensive, and anti-nuclear groups frequently produce reports stating that the costs of nuclear energy are prohibitively high.

In 2012 in Ontario, Canada, costs for nuclear generation stood at 5.9¢/kWh while hydroelectricity, at 4.3¢/kWh, cost 1.6¢ less than nuclear. By September 2015, the cost of solar in the United States dropped below nuclear generation costs, averaging 5¢/kWh. Solar costs continued to fall, and by February 2016, the City of Palo Alto, California, approved a power-purchase agreement (PPA) to purchase solar electricity for under 3.68¢/kWh, lower than even hydroelectricity. Utility-scale solar electricity generation newly contracted by Palo Alto in 2016 costs 2.22¢/kWh less than electricity from the already-completed Canadian nuclear plants, and the costs of solar energy generation continue to drop. However, solar power has very low capacity factors compared to nuclear, and solar power can only achieve so much market penetration before (expensive) energy storage and transmission become necessary.

Countries including Russia, India, and China, have continued to pursue new builds. Globally, around 50 nuclear power plants were under construction in 20 countries as of April 2020, according to the IAEA. China has 10 reactors under construction. According to the World Nuclear Association, the global trend is for new nuclear power stations coming online to be balanced by the number of old plants being retired.

In the United States, nuclear power faces competition from the low natural gas prices in North America. Former Exelon CEO John Rowe said in 2012 that new nuclear plants in the United States "don’t make any sense right now" and won't be economic as long as the natural gas glut persists. In 2016, Governor of New York Andrew Cuomo directed the New York Public Service Commission to consider ratepayer-financed subsidies similar to those for renewable sources to keep nuclear power stations profitable in the competition against natural gas.

A 2019 study by the economic think tank DIW found that nuclear power has not been profitable anywhere in the world. The study found that nuclear power has never been financially viable, that most plants were built while heavily subsidised by governments, often motivated by military purposes, and that nuclear power is not a good approach to tackling climate change. After reviewing trends in nuclear power plant construction since 1951, it concluded that a typical 1,000 MW nuclear power plant would incur an average economic loss of 4.8 billion euros ($7.7 billion AUD). This finding has been disputed by another study.

Capital costs

"The usual rule of thumb for nuclear power is that about two thirds of the generation cost is accounted for by fixed costs, the main ones being the cost of paying interest on the loans and repaying the capital..."

Capital cost, the building and financing of nuclear power plants, represents a large percentage of the cost of nuclear electricity. In 2014, the US Energy Information Administration estimated that for new nuclear plants going online in 2019, capital costs will make up 74% of the levelized cost of electricity; higher than the capital percentages for fossil-fuel power plants (63% for coal, 22% for natural gas), and lower than the capital percentages for some other nonfossil-fuel sources (80% for wind, 88% for solar PV).

Areva, the French nuclear engineering group, states that 70% of the cost of a kWh of nuclear electricity is accounted for by the fixed costs from the construction process. Some analysts argue (for example Steve Thomas, Professor of Energy Studies at the University of Greenwich in the UK, quoted in the book The Doomsday Machine by Martin Cohen and Andrew McKillop) that what is often not appreciated in debates about the economics of nuclear power is that the cost of equity, that is companies using their own money to pay for new plants, is generally higher than the cost of debt. Another advantage of borrowing may be that "once large loans have been arranged at low interest rates – perhaps with government support – the money can then be lent out at higher rates of return".

"One of the big problems with nuclear power is the enormous upfront cost. These reactors are extremely expensive to build. While the returns may be very great, they're also very slow. It can sometimes take decades to recoup initial costs. Since many investors have a short attention span, they don't like to wait that long for their investment to pay off."

Because of the large capital costs for the initial nuclear power plants built as part of a sustained build program, and the relatively long construction period before revenue is returned, servicing the capital costs of the first few nuclear power plants can be the most important factor determining the economic competitiveness of nuclear energy. The investment can contribute about 70% to 80% of the cost of electricity. Timothy Stone, businessman and nuclear expert, stated in 2017: "It has long been recognised that the only two numbers which matter in [new] nuclear power are the capital cost and the cost of capital." The discount rate chosen to cost a nuclear power plant's capital over its lifetime is arguably the most sensitive parameter to overall costs. Because of the long life of new nuclear power plants, most of the value of a new nuclear power plant is created for the benefit of future generations.

The recent liberalization of the electricity market in many countries has made the economics of nuclear power generation less attractive, and no new nuclear power plants have been built in a liberalized electricity market. Previously, a monopolistic provider could guarantee output requirements decades into the future. Private generating companies now have to accept shorter output contracts and the risks of future lower-cost competition, so they desire a shorter return on investment period. This favours generation plant types with lower capital costs or high subsidies, even if associated fuel costs are higher. A further difficulty is that, due to the large sunk costs but unpredictable future income from the liberalized electricity market, private capital is unlikely to be available on favourable terms, which is particularly significant for nuclear as it is capital-intensive. Industry consensus is that a 5% discount rate is appropriate for plants operating in a regulated utility environment where revenues are guaranteed by captive markets, and that a 10% discount rate is appropriate for a competitive deregulated or merchant plant environment; however, the independent MIT study (2003), which used a more sophisticated finance model distinguishing equity and debt capital, had a higher 11.5% average discount rate.
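To make the discount-rate sensitivity discussed above concrete, the following sketch converts an overnight construction cost into a per-kWh capital charge using a standard capital recovery factor. The overnight cost, lifetime and capacity factor are hypothetical round numbers chosen for illustration, not values taken from the MIT study or any other source cited here.

```python
# Illustrative only: how the discount rate drives the capital component of
# nuclear electricity cost. All inputs are hypothetical round numbers.

def capital_recovery_factor(rate, years):
    """Annual payment per unit of capital that repays it over `years` at `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

overnight_cost = 4000.0   # $/kW, assumed overnight construction cost
lifetime = 40             # years of cost recovery, assumed
capacity_factor = 0.90    # assumed

for rate in (0.05, 0.10):
    crf = capital_recovery_factor(rate, lifetime)
    annual_charge = overnight_cost * crf                # $/kW-year
    per_kwh = annual_charge / (capacity_factor * 8760)  # $/kWh
    print(f"discount rate {rate:.0%}: capital charge ~ ${per_kwh:.3f}/kWh")

# Roughly $0.030/kWh at 5% versus $0.052/kWh at 10% with these assumptions,
# i.e. moving from a regulated to a merchant discount rate nearly doubles
# the capital component of the generation cost.
```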

As states decline to finance nuclear power plants, the sector has become much more reliant on the commercial banking sector. According to research by the Dutch banking research group Profundo, commissioned by BankTrack, in 2008 private banks invested almost €176 billion in the nuclear sector. The leaders were BNP Paribas, with more than €13.5 billion in nuclear investments, and Citigroup and Barclays, roughly on par with over €11.4 billion each. Profundo added up investments in eighty companies across over 800 financial relationships with 124 banks in the following sectors: construction, electricity, mining, the nuclear fuel cycle and "other".

A 2016 study argued that while costs did increase in the past for reactors built in the past, this does not necessarily mean there is an inherent trend of cost escalation with nuclear power, as prior studies tended to examine a relatively small share of reactors built and that a full analysis shows that cost trends for reactors varied substantially by country and era.

Cost overruns

Construction delays can add significantly to the cost of a plant. Because a power plant does not earn income and currencies can inflate during construction, longer construction times translate directly into higher finance charges. Modern nuclear power plants are planned for construction in five years or less (42 months for CANDU ACR-1000, 60 months from order to operation for an AP1000, 48 months from first concrete to operation for an EPR and 45 months for an ESBWR) as opposed to over a decade for some previous plants. However, despite Japanese success with ABWRs, two of the four EPRs under construction (in Finland and France) are significantly behind schedule.
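As a rough illustration of why delays are so costly, the sketch below compounds construction spending up to the completion date, so the same overnight cost incurs far more financing charge over a ten-year build than over a five-year build. The overnight cost and interest rate are assumptions, not figures for any particular reactor.

```python
# Interest during construction (IDC), illustrative assumptions only.

def completed_cost(overnight_cost, build_years, rate):
    """Spend the overnight cost in equal annual instalments and compound each
    instalment at `rate` until completion at the end of `build_years`."""
    annual_spend = overnight_cost / build_years
    return sum(annual_spend * (1 + rate) ** (build_years - t)
               for t in range(1, build_years + 1))

overnight = 6_000_000_000   # $6 bn overnight cost, assumed
rate = 0.08                 # 8% financing cost during construction, assumed

for build_years in (5, 10):
    total = completed_cost(overnight, build_years, rate)
    print(f"{build_years}-year build: completed cost ~ ${total / 1e9:.1f} bn "
          f"({total / overnight - 1:.0%} added by financing)")
```

With these assumptions the financing surcharge roughly triples when the build stretches from five to ten years, which is the mechanism behind the overruns described in this section.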

In the United States many new regulations were put in place in the years before and again immediately after the Three Mile Island accident's partial meltdown, resulting in plant startup delays of many years. The NRC has new regulations in place now, and the next plants will have NRC Final Design Approval before the customer buys them, and a Combined Construction and Operating License will be issued before construction starts, guaranteeing that if the plant is built as designed then it will be allowed to operate—thus avoiding lengthy hearings after completion.

In Japan and France, construction costs and delays are significantly diminished because of streamlined government licensing and certification procedures. In France, one model of reactor was type-certified, using a safety engineering process similar to the process used to certify aircraft models for safety. That is, rather than licensing individual reactors, the regulatory agency certified a particular design and its construction process to produce safe reactors. U.S. law permits type-licensing of reactors, a process which is being used on the AP1000 and the ESBWR.

In Canada, cost overruns for the Darlington Nuclear Generating Station, largely due to delays and policy changes, are often cited by opponents of new reactors. Construction started in 1981 at an estimated cost of $7.4 billion (1993-adjusted CAD) and finished in 1993 at a cost of $14.5 billion. 70% of the price increase was due to interest charges incurred because of delays imposed to postpone units 3 and 4, 46% inflation over a four-year period, and other changes in financial policy. No new nuclear reactor has since been built in Canada, although a few have been or are being refurbished, and environmental assessment is complete for four new units at Darlington, with the Ontario government committed to keeping a nuclear baseload of 50%, or around 10 GW.

In the United Kingdom and the United States cost overruns on nuclear plants contributed to the bankruptcies of several utility companies. In the United States these losses helped usher in energy deregulation in the mid-1990s that saw rising electricity rates and power blackouts in California. When the UK began privatizing utilities, its nuclear reactors "were so unprofitable they could not be sold." Eventually in 1996, the government gave them away. But the company that took them over, British Energy, had to be bailed out in 2004 to the extent of 3.4 billion pounds.

Operating costs

In general, coal and nuclear plants have the same types of operating costs (operations and maintenance plus fuel costs). However, nuclear has lower fuel costs but higher operating and maintenance costs.

Fuel costs

Nuclear plants require fissile fuel. Generally, the fuel used is uranium, although other materials may be used. In 2005, prices on the world market for uranium averaged US$20/lb (US$44.09/kg). On 19 April 2007, prices reached US$113/lb (US$249.12/kg). By 2 July 2008, the price had dropped to US$59/lb.

Fuel costs account for about 28% of a nuclear plant's operating expenses. As of 2013, half the cost of reactor fuel was taken up by enrichment and fabrication, so that the cost of the uranium concentrate raw material was 14 percent of operating costs. Doubling the price of uranium would add about 10% to the cost of electricity produced in existing nuclear plants, and about half that much to the cost of electricity in future power plants. The cost of raw uranium contributes about $0.0015/kWh to the cost of nuclear electricity, while in breeder reactors the uranium cost falls to $0.000015/kWh.

As of 2008, mining activity was growing rapidly, especially from smaller companies, but putting a uranium deposit into production takes 10 years or more. The world's present measured resources of uranium, economically recoverable at a price of US$130/kg according to the industry groups Organisation for Economic Co-operation and Development (OECD), Nuclear Energy Agency (NEA) and International Atomic Energy Agency (IAEA), are enough to last for "at least a century" at current consumption rates.

According to the World Nuclear Association, "the world's present measured resources of uranium (5.7 Mt) in the cost category less than three times present spot prices and used only in conventional reactors, are enough to last for about 90 years. This represents a higher level of assured resources than is normal for most minerals. Further exploration and higher prices will certainly, on the basis of present geological knowledge, yield further resources as present ones are used up." The amount of uranium present in all currently known conventional reserves alone (excluding the huge quantities of currently-uneconomical uranium present in "unconventional" reserves such as phosphate/phosphorite deposits, seawater, and other sources) is enough to last over 200 years at current consumption rates. Fuel efficiency in conventional reactors has increased over time. Additionally, since 2000, 12–15% of world uranium requirements have been met by the dilution of highly enriched weapons-grade uranium from the decommissioning of nuclear weapons and related military stockpiles with depleted uranium, natural uranium, or partially-enriched uranium sources to produce low-enriched uranium for use in commercial power reactors. Similar efforts have been utilizing weapons-grade plutonium to produce mixed oxide (MOX) fuel, which is also produced from reprocessing used fuel. Other components of used fuel are currently less commonly utilized, but have a substantial capacity for reuse, especially so in next-generation fast neutron reactors. Over 35 European reactors are licensed to use MOX fuel, as well as Russian and American nuclear plants. Reprocessing of used fuel increases utilization by approximately 30%, while the widespread use of fast breeder reactors would allow for an increase of "50-fold or more" in utilization.

Waste disposal costs

All nuclear plants produce radioactive waste. To pay for the cost of storing, transporting and disposing of these wastes in a permanent location, in the United States a surcharge of a tenth of a cent per kilowatt-hour is added to electricity bills. In Canada, roughly one percent of electrical utility bills in provinces using nuclear power is diverted to fund nuclear waste disposal.

In 2009, the Obama administration announced that the Yucca Mountain nuclear waste repository would no longer be considered the answer for U.S. civilian nuclear waste. Currently, there is no plan for disposing of the waste and plants will be required to keep the waste on the plant premises indefinitely.

The disposal of low-level waste reportedly costs around £2,000/m³ in the UK. High-level waste costs somewhere between £67,000/m³ and £201,000/m³. The general split is roughly 80% low-level to 20% high-level waste, and one reactor produces roughly 12 m³ of high-level waste annually.
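A quick back-of-the-envelope check of these figures for a single reactor is shown below. The high-level volume and unit costs come from the text; the low-level volume is inferred from the stated 80/20 split and should be treated as an assumption.

```python
# Annual waste-disposal cost for one reactor, using the figures quoted above.

high_level_m3 = 12                          # m3 of high-level waste per reactor-year (from text)
low_level_m3 = 4 * high_level_m3            # assumed from the 80/20 volume split

low_level_cost = 2_000                      # GBP per m3 (from text)
high_level_cost_range = (67_000, 201_000)   # GBP per m3 (from text)

low_total = low_level_m3 * low_level_cost
high_low, high_high = (c * high_level_m3 for c in high_level_cost_range)

print(f"low-level disposal:  ~ GBP {low_total:,}/year")
print(f"high-level disposal: ~ GBP {high_low:,} to GBP {high_high:,}/year")
```

On these figures, routine disposal costs are on the order of a few million pounds per reactor-year, small compared with the capital costs discussed earlier.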

In Canada, the NWMO was created in 2002 to oversee long-term disposal of nuclear waste, and in 2007 it adopted the Adaptive Phased Management procedure. Long-term management is subject to change based on technology and public opinion, but it currently largely follows the recommendations for a centralized repository first extensively outlined by AECL in 1988. It was determined after extensive review that following these recommendations would safely isolate the waste from the biosphere. The location has not yet been determined, and the project is expected to cost between $9 and $13 billion CAD for construction and operation over 60–90 years, employing roughly a thousand people for the duration. Funding is available and has been collected since 1978 under the Canadian Nuclear Fuel Waste Management Program. Very long-term monitoring requires fewer staff, since within a few centuries high-level waste becomes less radiotoxic than naturally occurring uranium ore deposits.

The primary argument for pursuing IFR-style technology today is that it provides the best solution to the existing nuclear waste problem, because fast reactors can be fueled from the waste products of existing reactors as well as from the plutonium used in weapons, as was done at the now-discontinued EBR-II in Arco, Idaho, and is done in the BN-800 reactor, operating as of 2014. Depleted uranium (DU) waste can also be used as fuel in fast reactors. Waste produced by a fast-neutron reactor and a pyroprocessing electrorefiner would consist only of fission products, which are produced at a rate of about one tonne per GWe-year. This is 5% as much as present reactors produce, and it needs special custody for only 300 years instead of 300,000. Only 9.2% of fission products (strontium and caesium) contribute 99% of the radiotoxicity; at some additional cost, these could be separated, reducing the disposal problem by a further factor of ten.

Decommissioning

At the end of a nuclear plant's lifetime, the plant must be decommissioned. This entails either dismantling, safe storage or entombment. In the United States, the Nuclear Regulatory Commission (NRC) requires plants to finish the process within 60 years of closing. Since it may cost $500 million or more to shut down and decommission a plant, the NRC requires plant owners to set aside money when the plant is still operating to pay for the future shutdown costs.

Decommissioning a reactor that has undergone a meltdown is inevitably more difficult and expensive. Three Mile Island was decommissioned 14 years after its incident for $837 million. The cost of the Fukushima disaster cleanup is not yet known, but has been estimated at around $100 billion. Chernobyl has not yet been decommissioned; different estimates put the end date between 2013 and 2020.

Proliferation and terrorism

A 2011 report for the Union of Concerned Scientists stated that "the costs of preventing nuclear proliferation and terrorism should be recognized as negative externalities of civilian nuclear power, thoroughly evaluated, and integrated into economic assessments—just as global warming emissions are increasingly identified as a cost in the economics of coal-fired electricity".

"Construction of the ELWR was completed in 2013 and is optimized for civilian electricity production, but it has "dual-use" potential and can be modified to produce material for nuclear weapons."

Safety, security and accidents

2000 candles in memory of the Chernobyl disaster in 1986, at a commemoration 25 years after the nuclear accident, as well as for the Fukushima nuclear disaster of 2011.

Nuclear safety and security is a chief goal of the nuclear industry. Great care is taken so that accidents are avoided, and if unpreventable, have limited consequences. Accidents could stem from system failures related to faulty construction or pressure vessel embrittlement due to prolonged radiation exposure. As with any aging technology, risks of failure increase over time, and since many currently operating nuclear reactors were built in the mid 20th century, care must be taken to ensure proper operation. Many more recent reactor designs have been proposed, most of which include passive safety systems. These design considerations serve to significantly mitigate or totally prevent major accidents from occurring, even in the event of a system failure. Still, reactors must be designed, built, and operated properly to minimize accident risks. The Fukushima disaster represents one instance where these systems were not comprehensive enough, where the tsunami following the Tōhoku earthquake disabled the backup generators that were stabilizing the reactor. According to UBS AG, the Fukushima I nuclear accidents have cast doubt on whether even an advanced economy like Japan can master nuclear safety. Catastrophic scenarios involving terrorist attacks are also conceivable.

An interdisciplinary team from MIT estimated that given the expected growth of nuclear power from 2005 to 2055, at least four core damage incidents would be expected in that period (assuming only current designs were used – the number of incidents expected in that same time period with the use of advanced designs is only one). To date, there have been five core damage incidents in the world since 1970 (one at Three Mile Island in 1979; one at Chernobyl in 1986; and three at Fukushima-Daiichi in 2011), corresponding to the beginning of the operation of generation II reactors.

According to the Paul Scherrer Institute, the Chernobyl incident is the only nuclear power accident ever to have caused fatalities. The report that UNSCEAR presented to the UN General Assembly in 2011 states that 29 plant workers and emergency responders died from effects of radiation exposure, two died from causes related to the incident but unrelated to radiation, and one died from coronary thrombosis. It attributed fifteen cases of fatal thyroid cancer to the incident. It said there is no evidence the incident caused an ongoing increase in the incidence of solid tumors or blood cancers in Eastern Europe.

In terms of nuclear accidents, the Union of Concerned Scientists have claimed that "reactor owners ... have never been economically responsible for the full costs and risks of their operations. Instead, the public faces the prospect of severe losses in the event of any number of potential adverse scenarios, while private investors reap the rewards if nuclear plants are economically successful. For all practical purposes, nuclear power's economic gains are privatized, while its risks are socialized".

However, the problem of insurance costs for worst-case scenarios is not unique to nuclear power: hydroelectric power plants are similarly not fully insured against a catastrophic event such as the Banqiao Dam disaster, where 11 million people lost their homes and from 30,000 to 200,000 people died, or large dam failures in general. Private insurers base dam insurance premiums on worst-case scenarios, so insurance for major disasters in this sector is likewise provided by the state. In the US, insurance coverage for nuclear reactors is provided by the combination of operator-purchased private insurance and the primarily operator-funded Price Anderson Act.

Any effort to construct a new nuclear facility around the world, whether an existing design or an experimental future design, must deal with NIMBY or NIABY objections. Because of the high profiles of the Three Mile Island accident and Chernobyl disaster, relatively few municipalities welcome a new nuclear reactor, processing plant, transportation route, or deep geological repository within their borders, and some have issued local ordinances prohibiting the locating of such facilities there.

Nancy Folbre, an economics professor at the University of Massachusetts, has questioned the economic viability of nuclear power following the 2011 Japanese nuclear accidents:

The proven dangers of nuclear power amplify the economic risks of expanding reliance on it. Indeed, the stronger regulation and improved safety features for nuclear reactors called for in the wake of the Japanese disaster will almost certainly require costly provisions that may price it out of the market.

The cascade of problems at Fukushima, from one reactor to another, and from reactors to fuel storage pools, will affect the design, layout and ultimately the cost of future nuclear plants.

In 1986, Pete Planchon conducted a demonstration of the inherent safety of the Integral Fast Reactor. Safety interlocks were turned off. Coolant circulation was turned off. Core temperature rose from the usual 1000 degrees Fahrenheit to 1430 degrees within 20 seconds. The boiling temperature of the sodium coolant is 1621 degrees. Within seven minutes the reactor had shut itself down without action from the operators, without valves, pumps, computers, auxiliary power, or any moving parts. The temperature was below the operating temperature. The reactor was not damaged. The operators were not injured. There was no release of radioactive material. The reactor was restarted with coolant circulation but the steam generator disconnected. The same scenario recurred. Three weeks later, the operators at Chernobyl repeated the latter experiment, ironically in a rush to complete a safety test, using a very different reactor, with tragic consequences. Safety of the Integral Fast Reactor depends on the composition and geometry of the core, not efforts by operators or computer algorithms.

Insurance

Insurance available to the operators of nuclear power plants varies by nation. The worst case nuclear accident costs are so large that it would be difficult for the private insurance industry to carry the size of the risk, and the premium cost of full insurance would make nuclear energy uneconomic.

Nuclear power has largely worked under an insurance framework that limits or structures accident liabilities in accordance with the Paris convention on nuclear third-party liability, the Brussels supplementary convention, the Vienna convention on civil liability for nuclear damage, and in the United States the Price-Anderson Act. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity.


In Canada, the Canadian Nuclear Liability Act requires nuclear power plant operators to obtain $650 million (CAD) of liability insurance coverage per installation (regardless of the number of individual reactors present) starting in 2017 (up from the prior $75 million requirement established in 1976), increasing to $750 million in 2018, to $850 million in 2019, and finally to $1 billion in 2020. Claims beyond the insured amount would be assessed by a government appointed but independent tribunal, and paid by the federal government.

In the UK, the Nuclear Installations Act 1965 governs liability for nuclear damage for which a UK nuclear licensee is responsible. The limit for the operator is £140 million.

In the United States, the Price-Anderson Act has governed the insurance of the nuclear power industry since 1957. Owners of nuclear power plants are required to pay a premium each year for the maximum obtainable amount of private insurance ($450 million) for each licensed reactor unit. This primary or "first tier" insurance is supplemented by a second tier. In the event a nuclear accident incurs damages in excess of $450 million, each licensee would be assessed a prorated share of the excess up to $121,255,000. With 104 reactors currently licensed to operate, this secondary tier of funds contains about $12.61 billion. This results in a maximum combined primary+secondary coverage amount of up to $13.06 billion for a hypothetical single-reactor incident. If 15 percent of these funds are expended, prioritization of the remaining amount would be left to a federal district court. If the second tier is depleted, Congress is committed to determine whether additional disaster relief is required. In July 2005, Congress extended the Price-Anderson Act to newer facilities.
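The two tiers described above can be reproduced with a few lines of arithmetic; the figures below are the ones quoted in the paragraph.

```python
# Reproducing the Price-Anderson coverage figures quoted above.

primary_per_reactor = 450_000_000       # $ maximum private insurance per licensed unit
assessment_per_reactor = 121_255_000    # $ maximum retrospective assessment per unit
licensed_reactors = 104

second_tier = assessment_per_reactor * licensed_reactors
max_single_incident = primary_per_reactor + second_tier

print(f"second tier:        ${second_tier / 1e9:.2f} bn")          # ~ $12.61 bn
print(f"total (1 incident): ${max_single_incident / 1e9:.2f} bn")  # ~ $13.06 bn
```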

The Vienna Convention on Civil Liability for Nuclear Damage and the Paris Convention on Third Party Liability in the Field of Nuclear Energy put in place two similar international frameworks for nuclear liability. The limits under the conventions vary. The Vienna convention was amended in 2004 to increase the operator liability to €700 million per incident, but this modification has not yet been ratified.

Cost per kWh

The cost per unit of electricity produced (kWh) will vary according to country, depending on costs in the area, the regulatory regime and consequent financial and other risks, and the availability and cost of finance. Costs will also depend on geographic factors such as availability of cooling water, earthquake likelihood, and availability of suitable power grid connections. So it is not possible to accurately estimate costs on a global basis.

Commodity prices rose in 2008, and so all types of plants became more expensive than previously calculated. In June 2008, Moody's estimated that the cost of installing new nuclear capacity in the United States might exceed $7,000/kWe in final cost. In comparison, the reactor units already under construction in China have been reported to have substantially lower costs due to significantly lower labour rates.

In 2009, MIT updated its 2003 study, concluding that inflation and rising construction costs had increased the overnight cost of nuclear power plants to about $4,000/kWe, and thus increased the power cost to $0.084/kWh. The 2003 study had estimated the cost as $0.067/kWh.

A 2013 study indicates that the cost competitiveness of nuclear power is "questionable" and that public support will be required if new power stations are to be built within liberalized electricity markets.

In 2014, the US Energy Information Administration estimated the levelized cost of electricity from new nuclear power plants going online in 2019 to be $0.096/kWh before government subsidies, comparable to the cost of electricity from a new coal-fired power plant without carbon capture, but higher than the cost from natural gas-fired plants.

In 2019 the US EIA revised the levelized cost of electricity from new advanced nuclear power plants going online in 2023 to be $0.0775/kWh before government subsidies, using a regulated industry 4.3% cost of capital (WACC - pre-tax 6.6%) over a 30-year cost recovery period. Financial firm Lazard also updated its levelized cost of electricity report costing new nuclear at between $0.118/kWh and $0.192/kWh using a commercial 7.7% cost of capital (WACC - pre-tax 12% cost for the higher-risk 40% equity finance and 8% cost for the 60% loan finance) over a 40 year lifetime.
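The headline discount rates above can be read as weighted average costs of capital (WACC). The sketch below reproduces the Lazard-style merchant figure from the capital structure quoted in the paragraph; the corporate tax rate applied to the debt tax shield is an assumption chosen only to show the mechanics.

```python
# After-tax weighted average cost of capital (WACC), illustrative.

def after_tax_wacc(equity_share, cost_of_equity, cost_of_debt, tax_rate):
    debt_share = 1 - equity_share
    return equity_share * cost_of_equity + debt_share * cost_of_debt * (1 - tax_rate)

# Merchant financing quoted above: 40% equity at 12%, 60% debt at 8%.
wacc = after_tax_wacc(0.40, 0.12, 0.08, tax_rate=0.40)  # 40% tax rate is an assumption
print(f"merchant WACC ~ {wacc:.1%}")                    # ~ 7.7%, matching the quoted figure
```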

Comparisons with other power sources

Comparison of nuclear, coal and gas generating costs.

Generally, a nuclear power plant is significantly more expensive to build than an equivalent coal-fueled or gas-fueled plant. If natural gas is plentiful and cheap, the operating costs of conventional power plants are lower. Most forms of electricity generation produce some form of negative externality — costs imposed on third parties that are not directly paid by the producer — such as pollution, which negatively affects the health of those near and downwind of the power plant; generation costs often do not reflect these external costs.

A comparison of the "real" cost of various energy sources is complicated by a number of uncertainties:

  • The cost of climate change through emissions of greenhouse gases is hard to estimate. Carbon taxes may be enacted, or carbon capture and storage may become mandatory.
  • The cost of environmental damage caused by any energy source through land use (whether for mining fuels or for power generation), air and water pollution, solid waste production, manufacturing-related damages (such as from mining and processing ores or rare earth elements), etc.
  • The cost and political feasibility of disposal of the waste from reprocessed spent nuclear fuel is still not fully resolved. In the United States, the ultimate disposal costs of spent nuclear fuel are assumed by the U.S. government after producers pay a fixed surcharge.
  • Operating reserve requirements are different for different generation methods. When nuclear units shut down unexpectedly they tend to do so independently, so the "hot spinning reserve" must be at least the size of the largest unit. On the other hand, some renewable energy sources (such as solar/wind power) are intermittent power sources with uncontrollably varying outputs, so the grid will require a combination of demand response, extra long-range transmission infrastructure, and large-scale energy storage. (Some firm renewables such as hydroelectricity have a storage reservoir and can be used as reliable back-up power for other power sources.)
  • Potential governmental instabilities in the plant's lifetime. Modern nuclear reactors are designed for a minimum operational lifetime of 60 years (extendible to 100+ years), compared to the 40 years (extendible to 60+ years) that older reactors were designed for.
  • Actual plant lifetime (to date, no nuclear plant has been shut down solely due to reaching its licensed lifetime. Over 87 reactors in the United States have been granted extended operating licenses to 60 years of operation by the NRC as of December 2016, and subsequent license renewals could extend that to 80 years. Modern nuclear reactors are also designed to last longer than older reactors as outlined above, allowing for even further increased plant lifetimes.)
  • Due to the dominant role of initial construction costs and the multi-year construction time, the interest rate for the capital required (as well as the timeline that the plant is completed in) has a major impact on the total cost of building a new nuclear plant.

Lazard's report on the estimated levelized cost of energy by source (10th edition) estimated unsubsidized prices of $97–$136/MWh for nuclear, $50–$60/MWh for solar PV, $32–$62/MWh for onshore wind, and $82–$155/MWh for offshore wind.

However, the most important subsidies to the nuclear industry do not involve cash payments. Rather, they shift construction costs and operating risks from investors to taxpayers and ratepayers, burdening them with an array of risks ranging from cost overruns and defaults to accidents and nuclear waste management. This approach has remained remarkably consistent throughout the nuclear industry's history, and it distorts market choices that would otherwise favor less risky energy investments.

In 2011, Benjamin K. Sovacool said that: "When the full nuclear fuel cycle is considered — not only reactors but also uranium mines and mills, enrichment facilities, spent fuel repositories, and decommissioning sites — nuclear power proves to be one of the costliest sources of energy".

In 2014, the Brookings Institution published The Net Benefits of Low and No-Carbon Electricity Technologies, which states, after performing an energy and emissions cost analysis, that "The net benefits of new nuclear, hydro, and natural gas combined cycle plants far outweigh the net benefits of new wind or solar plants", with nuclear power determined to be the most cost-effective low-carbon power technology. Moreover, Paul Joskow of MIT maintains that the "levelized cost of electricity" (LCOE) metric is a poor means of comparing electricity sources, as it hides extra costs, such as the need to frequently operate back-up power stations, incurred due to the use of intermittent power sources such as wind energy, while the value of baseload power sources is understated.

In 2017, Amory Lovins published a focused response to these claims, particularly regarding "baseload" and "back-up" power, countering with statistics from operating grids.

Other economic issues

Kristin Shrader-Frechette analysed 30 papers on the economics of nuclear power for possible conflicts of interest. She found that, of the 30, 18 had been funded either by the nuclear industry or pro-nuclear governments and were pro-nuclear, 11 were funded by universities or non-profit non-government organisations and were anti-nuclear, and the remaining one had unknown sponsors and took a pro-nuclear stance. The pro-nuclear studies were accused of using cost-trimming methods such as ignoring government subsidies and using industry projections above empirical evidence wherever possible. The situation was compared to medical research, where 98% of industry-sponsored studies return positive results.

Nuclear power plants tend to be competitive in areas where other fuel resources are not readily available — France, most notably, has almost no native supplies of fossil fuels. France's nuclear power experience has also been one of paradoxically increasing rather than decreasing costs over time.

Making a massive investment of capital in a project with long-term recovery might affect a company's credit rating.

A Council on Foreign Relations report on nuclear energy argues that a rapid expansion of nuclear power may create shortages in building materials such as reactor-quality concrete and steel, skilled workers and engineers, and safety controls by skilled inspectors. This would drive up current prices. It may be easier to rapidly expand, for example, the number of coal power plants, without this having a large effect on current prices.

Existing nuclear plants generally have a somewhat limited ability to significantly vary their output in order to match changing demand (a practice called load following). However, many BWRs, some PWRs (mainly in France), and certain CANDU reactors (primarily those at Bruce Nuclear Generating Station) have various levels of load-following capabilities (sometimes substantial), which allow them to fill more than just baseline generation needs. Several newer reactor designs also offer some form of enhanced load-following capability. For example, the Areva EPR can slew its electrical output power between 990 and 1,650 MW at 82.5 MW per minute.
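For scale, the quoted EPR figures translate into a full ramp across the whole range in a few minutes, as the short sketch below shows using only the numbers in the paragraph.

```python
# Ramping the Areva EPR across its quoted output range at the quoted rate.

low_mw, high_mw = 990, 1650   # MW, from the text
ramp_rate = 82.5              # MW per minute, from the text

minutes = (high_mw - low_mw) / ramp_rate
print(f"full ramp takes about {minutes:.0f} minutes")   # about 8 minutes
```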

The number of companies that manufacture certain parts for nuclear reactors is limited, particularly the large forgings used for reactor vessels and steam systems. In 2010, only four companies (Japan Steel Works, China First Heavy Industries, Russia's OMZ Izhora and Korea's Doosan Heavy Industries) manufactured pressure vessels for reactors of 1100 MWe or larger. Some have suggested that this poses a bottleneck that could hamper expansion of nuclear power internationally; however, some Western reactor designs, such as CANDU-derived reactors that rely on individual pressurized fuel channels, require no large steel pressure vessel. The large forgings for steam generators — although still very heavy — can be produced by a far larger number of suppliers.

For a country with both a nuclear power industry and a nuclear arms industry, synergies between the two can favor a nuclear power plant with otherwise uncertain economics. For example, in the United Kingdom researchers have informed MPs that the government was using the Hinkley Point C project to cross-subsidise the UK military's nuclear-related activity by maintaining nuclear skills. In support of that, University of Sussex researchers Andy Stirling and Phil Johnstone stated that the costs of the Trident nuclear submarine programme would be prohibitive without "an effective subsidy from electricity consumers to military nuclear infrastructure".

Recent trends

Brunswick Nuclear Plant discharge canal
 

The nuclear power industry in Western nations has a history of construction delays, cost overruns, plant cancellations, and nuclear safety issues despite significant government subsidies and support. In December 2013, Forbes magazine reported that, in developed countries, "reactors are not a viable source of new power". Even where reactors might make economic sense, they are often not feasible because of nuclear power's "enormous costs, political and popular opposition, and regulatory uncertainty". This view echoes the statement of former Exelon CEO John Rowe, who said in 2012 that new nuclear plants "don't make any sense right now" and won't be economically viable in the foreseeable future. John Quiggin, an economics professor, also says the main problem with the nuclear option is that it is not economically viable; Quiggin argues that we need more efficient energy use and more renewable energy commercialization. Former NRC member Peter A. Bradford and Professor Ian Lowe have made similar statements. However, some "nuclear cheerleaders" and lobbyists in the West continue to champion reactors, often with proposed new but largely untested designs, as a source of new power.

Significant new build activity is occurring in developing countries like South Korea, India and China. China has 25 reactors under construction. However, according to a government research unit, China must not build "too many nuclear power reactors too quickly", in order to avoid a shortfall of fuel, equipment and qualified plant workers.

The 1.6 GWe EPR reactor being built at the Olkiluoto Nuclear Power Plant in Finland, a joint effort of the French AREVA and the German Siemens AG, will be the largest pressurized water reactor (PWR) in the world. The Olkiluoto project has been claimed to have benefited from various forms of government support and subsidies, including liability limitations, preferential financing rates, and export credit agency subsidies, but the European Commission's investigation did not find anything illegal in the proceedings. However, as of August 2009, the project was "more than three years behind schedule and at least 55% over budget, reaching a total cost estimate of €5 billion ($7 billion) or close to €3,100 ($4,400) per kilowatt". In 2007, the Finnish electricity consumers' interest group ElFi Oy evaluated the effect of Olkiluoto 3's delay on the average Nord Pool Spot market price of electricity at slightly over 6%, or €3/MWh. The delay is therefore costing the Nordic countries over 1.3 billion euros per year, as the reactor would replace more expensive methods of production and lower the price of electricity.
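The ElFi figures can be cross-checked with a one-line calculation: a price effect of about €3/MWh costing over €1.3 billion per year implies an annual volume on the order of total Nordic electricity consumption. The implied volume below is derived purely from the paragraph's own numbers.

```python
# Consistency check of the quoted ElFi estimate.

price_effect_eur_per_mwh = 3.0    # from the text
annual_cost_eur = 1.3e9           # from the text

implied_volume_twh = annual_cost_eur / price_effect_eur_per_mwh / 1e6  # MWh -> TWh
print(f"implied annual volume ~ {implied_volume_twh:.0f} TWh")          # ~ 430 TWh
```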

Russia has launched the world's first floating nuclear power plant. The £100 million vessel, the Akademik Lomonosov, is the first of seven plants (70 MWe per ship) that Moscow says will bring vital energy resources to remote Russian regions. Startup of the first of the ship's two reactors was announced in December 2018.

Following the Fukushima nuclear disaster in 2011, costs are likely to go up for currently operating and new nuclear power plants, due to increased requirements for on-site spent fuel management and elevated design basis threats. After Fukushima, the International Energy Agency halved its estimate of additional nuclear generating capacity built by 2035.

Many license applications filed with the U.S. Nuclear Regulatory Commission for proposed new reactors have been suspended or cancelled. As of October 2011, plans for about 30 new reactors in the United States had been reduced to 14. There are currently five new reactors under construction in the United States (Watts Bar 2, Summer 2, Summer 3, Vogtle 3, Vogtle 4). Matthew Wald of The New York Times has reported that "the nuclear renaissance is looking small and slow".

In 2013, four aging, uncompetitive reactors were permanently closed in the US: San Onofre 2 and 3 in California, Crystal River 3 in Florida, and Kewaunee in Wisconsin. The Vermont Yankee plant closed in 2014. New York State is seeking to close the Indian Point Nuclear Power Plant in Buchanan, 30 miles from New York City. The additional cancellation of five large reactor uprates (Prairie Island, 1 reactor; LaSalle, 2 reactors; and Limerick, 2 reactors), four of them by the largest nuclear company in the United States, suggests that the nuclear industry faces "a broad range of operational and economic problems".

As of July 2013, economist Mark Cooper had identified some US nuclear power plants that face particularly significant challenges to their continued operation due to regulatory policies: Palisades, Fort Calhoun (since closed for economic reasons), Nine Mile Point, Fitzpatrick, Ginna, Oyster Creek (likewise since closed), Vermont Yankee (likewise since closed), Millstone, Clinton and Indian Point. Cooper said the lesson for policy makers and economists is clear: "nuclear reactors are simply not competitive". A 2017 analysis by Bloomberg showed that over half of U.S. nuclear plants were running at a loss, above all those at single-unit sites.

Levelized cost of energy

From Wikipedia, the free encyclopedia

The levelized cost of energy (LCOE), or levelized cost of electricity, is a measure of the average net present cost of electricity generation for a generating plant over its lifetime. It is used for investment planning and to compare different methods of electricity generation on a consistent basis. The LCOE "represents the average revenue per unit of electricity generated that would be required to recover the costs of building and operating a generating plant during an assumed financial life and duty cycle", and is calculated as the ratio of all the discounted costs over the lifetime of an electricity-generating plant to the discounted sum of the actual energy amounts delivered. Inputs to LCOE are chosen by the estimator. They can include the cost of capital, decommissioning, "fuel costs, fixed and variable operations and maintenance costs, financing costs, and an assumed utilization rate."

Calculation

The LCOE is calculated as:

$$\mathrm{LCOE} = \frac{\sum_{t=1}^{n} \dfrac{I_t + M_t + F_t}{(1+r)^{t}}}{\sum_{t=1}^{n} \dfrac{E_t}{(1+r)^{t}}}$$

where:

I_t : investment expenditures in the year t
M_t : operations and maintenance expenditures in the year t
F_t : fuel expenditures in the year t
E_t : electrical energy generated in the year t
r : discount rate
n : expected lifetime of the system or power station
Note: caution must be taken when using formulas for the levelized cost, as they often embody unseen assumptions, neglect effects like taxes, and may be specified in real or nominal levelized cost. For example, other versions of the above formula do not discount the electricity stream.

Typically the LCOE is calculated over the design lifetime of a plant and given in currency per energy unit, for example EUR per kilowatt-hour or AUD per megawatt-hour.
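A minimal implementation of the formula above is sketched below, assuming costs and generation are supplied as per-year lists with the first year indexed as t = 1. The input numbers in the example are arbitrary placeholders, not estimates for any real plant.

```python
# Minimal LCOE calculation following the formula above.

def lcoe(investment, om, fuel, energy, r):
    """All arguments except `r` are per-year lists; energy is in MWh."""
    years = range(1, len(energy) + 1)
    discounted_costs = sum((investment[t - 1] + om[t - 1] + fuel[t - 1]) / (1 + r) ** t
                           for t in years)
    discounted_energy = sum(energy[t - 1] / (1 + r) ** t for t in years)
    return discounted_costs / discounted_energy   # currency per MWh

# Toy example: two years of construction followed by three years of operation.
inv  = [500e6, 500e6, 0, 0, 0]
om   = [0, 0, 20e6, 20e6, 20e6]
fuel = [0, 0, 5e6, 5e6, 5e6]
mwh  = [0, 0, 3.0e6, 3.0e6, 3.0e6]

print(f"LCOE ~ {lcoe(inv, om, fuel, mwh, r=0.07):.0f} per MWh")
```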

LCOE does not represent the cost of electricity for the consumer and is most meaningful from the investor's point of view. Care should be taken in comparing different LCOE studies and the sources of the information, as the LCOE for a given energy source is highly dependent on the assumptions, financing terms and technological deployment analyzed.

Thus, a key requirement for the analysis is a clear statement of the applicability of the analysis based on justified assumptions. In particular, for LCOE to be usable for rank-ordering energy-generation alternatives, caution must be taken to calculate it in "real" terms, i.e. including adjustment for expected inflation.

Considerations

There are potential limits to some levelized cost of electricity metrics for comparing energy generating sources. One of the most important potential limitations of LCOE is that it may not control for time effects associated with matching electricity production to demand. This can happen at two levels:

  • Dispatchability, the ability of a generating system to come online, go offline, or ramp up or down, quickly as demand swings.
  • The extent to which the availability profile matches or conflicts with the market demand profile.

In particular, if matching grid energy storage is not included in models for variable renewable energy sources such as solar and wind, which are otherwise not dispatchable, they may produce electricity when it is not needed in the grid. The value of this electricity may be lower than if it were produced at another time, or even negative. At the same time, intermittent sources can be competitive if they are available to produce when demand and prices are highest, such as solar during summertime mid-day peaks seen in hot countries where air conditioning is a major consumer. Some dispatchable technologies, such as most coal power plants, are incapable of fast ramping. Excess generation when not needed may force curtailments, thus reducing the revenue of an energy provider.

Another potential limitation of LCOE is that some analyses may not adequately consider indirect costs of generation. These can include environmental externalities or grid upgrades requirements. Intermittent power sources, such as wind and solar, may incur extra costs associated with needing to have storage or backup generation available.

The LCOE of energy efficiency and conservation (EEC) efforts can be calculated and included alongside the LCOE figures of other options, such as generation infrastructure, for comparison. If this is omitted or incomplete, LCOE may not give a comprehensive picture of the potential options available for meeting energy needs, or of any opportunity costs. Considering the LCOE only for utility-scale plants tends to maximize generation and risks overestimating the generation actually required once efficiency is accounted for, thus "lowballing" their LCOE. For solar systems installed at the point of end use, it is more economical to invest in EEC first, then solar. This results in a smaller required solar system than what would be needed without the EEC measures. However, designing a solar system on the basis of its LCOE without considering that of EEC would cause the smaller system's LCOE to increase, as the energy generation drops faster than the system cost. Every option should be considered, not just the LCOE of the energy source. LCOE is not as relevant to end-users as other financial considerations such as income, cashflow, mortgage, leases, rent, and electricity bills. Comparing solar investments in relation to these can make it easier for end-users to make a decision, as can cost-benefit calculations "and/or an asset's capacity value or contribution to peak on a system or circuit level".

Capacity factor

The assumed capacity factor has a significant impact on the calculation of LCOE, as it determines the actual amount of energy produced for a given installed capacity. Formulas that output cost per unit of energy ($/MWh) already account for the capacity factor, while formulas that output cost per unit of power ($/MW) do not.
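The sketch below shows the effect in isolation: the same fixed annual cost per MW of installed capacity spreads over very different amounts of energy depending on the capacity factor. The fixed cost and the two capacity factors are illustrative assumptions.

```python
# Effect of capacity factor on fixed cost per MWh, illustrative assumptions.

fixed_cost_per_mw_year = 400_000   # $ per MW-year, assumed
hours_per_year = 8760

for label, cf in (("high capacity factor plant", 0.90),
                  ("low capacity factor plant", 0.25)):
    cost_per_mwh = fixed_cost_per_mw_year / (hours_per_year * cf)
    print(f"{label}: capacity factor {cf:.0%} -> ~ ${cost_per_mwh:.0f}/MWh of fixed cost")
```

With these assumptions the same fixed cost works out to roughly $51/MWh at a 90% capacity factor but about $183/MWh at 25%, which is why the assumed capacity factor can dominate an LCOE comparison.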

Discount rate

The cost of capital expressed as the discount rate is one of the most contentious inputs into the LCOE equation, as it significantly impacts the outcome, and a number of comparisons assume arbitrary discount rate values with little transparency about why a specific value was selected. Comparisons that assume public funding, subsidies and a social cost of capital (see below) tend to choose low discount rates (3%), while comparisons prepared by private investment banks tend to assume high discount rates (7–15%) associated with commercial for-profit funding.

The differences in outcomes for different assumed discount rates are dramatic: for example, the NEA LCOE calculation for residential PV at a 3% discount rate produces $150/MWh, while at 10% it produces $250/MWh. An LCOE estimate prepared by Lazard (2020) for nuclear power, based on an unspecified methodology, produced $164/MWh, while the LCOE calculated by the investor for the actual Olkiluoto Nuclear Power Plant in Finland came out below 30 EUR/MWh.

A choice of a 10% discount rate results in the energy produced in year 20 being assigned an accounting value of just 15% of its nominal value, which nearly triples the LCOE. This approach, which is considered prudent from today's private financial investor's perspective, has been criticised as inappropriate for assessing public infrastructure that mitigates climate change, as it ignores the social cost of CO2 emissions for future generations and focuses on a short-term investment perspective only. The approach has been criticised equally by proponents of nuclear and renewable technologies, which require high initial investment but then have low operational cost and, most importantly, are low-carbon. According to the Social Cost of Carbon methodology, the discount rate for low-carbon technologies should be 1-3%.
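The arithmetic behind these claims can be illustrated with a short sketch: at a 10% discount rate, output in year 20 is weighted by 1/1.1^20 ≈ 0.15, i.e. about 15% of its nominal value, and the same hypothetical plant (the parameters below are arbitrary placeholders, not figures from this article) roughly doubles in LCOE when the discount rate moves from 3% to 10%:

def lcoe(capital, annual_cost, annual_mwh, lifetime, rate):
    # LCOE = discounted lifetime costs / discounted lifetime energy.
    disc_costs = capital + sum(annual_cost / (1 + rate) ** t
                               for t in range(1, lifetime + 1))
    disc_energy = sum(annual_mwh / (1 + rate) ** t
                      for t in range(1, lifetime + 1))
    return disc_costs / disc_energy

print("Year-20 weight at 10%:", round(1 / 1.1 ** 20, 3))   # ~0.149
for r in (0.03, 0.10):
    print(f"LCOE at {r:.0%}:", round(lcoe(6e9, 1.5e8, 8e6, 40, r), 1), "$/MWh")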

Levelized avoided cost of energy

The metric levelized avoided cost of energy (LACE) addresses some of the shortcomings of LCOE by considering the economic value that the source provides to the grid. The economic value takes into account the dispatchability of a resource, as well as the existing energy mix in a region.

Levelized cost of storage

The levelized cost of storage (LCOS) is the analogue of LCOE applied to electricity storage technologies, such as batteries. The distinction between the two metrics can become blurred when the LCOE of systems incorporating both generation and storage is considered.

Superconductivity

From Wikipedia, the free encyclopedia
 
A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Persistent electric current flows on the surface of the superconductor, acting to exclude the magnetic field of the magnet (Faraday's law of induction). This current effectively forms an electromagnet that repels the magnet.
 
Video of the Meissner effect in a high-temperature superconductor (black pellet) with a NdFeB magnet (metallic)
 
A high-temperature superconductor levitating above a magnet

Superconductivity is a set of physical properties observed in certain materials where electrical resistance vanishes and magnetic flux fields are expelled from the material. Any material exhibiting these properties is a superconductor. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source.

The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes. Like ferromagnetism and atomic spectral lines, superconductivity is a phenomenon which can only be explained by quantum mechanics. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor during its transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.

In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K (−183 °C). Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply available coolant liquid nitrogen boils at 77 K, and thus the existence of superconductivity at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.

Classification

There are many criteria by which superconductors are classified. The most common are:

Response to a magnetic field

A superconductor can be Type I, meaning it has a single critical field, above which all superconductivity is lost and below which the magnetic field is completely expelled from the superconductor; or Type II, meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points. These points are called vortices. Furthermore, in multicomponent superconductors it is possible to have a combination of the two behaviours. In that case the superconductor is of Type-1.5.

By theory of operation

A superconductor is conventional if it can be explained by the BCS theory or its derivatives, and unconventional otherwise. Alternatively, a superconductor is called unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the point group or space group of the system.

By critical temperature

A superconductor is generally considered high-temperature if it reaches a superconducting state above a temperature of 30 K (−243.15 °C), as in the initial discovery by Georg Bednorz and K. Alex Müller. The term may also refer to materials that transition to superconductivity when cooled using liquid nitrogen – that is, at Tc > 77 K – although this usage is generally only meant to emphasize that liquid nitrogen coolant is sufficient. Low-temperature superconductors are materials with a critical temperature below 30 K. One exception to this rule is the iron pnictide group of superconductors, which display behaviour and properties typical of high-temperature superconductors even though some of the group have critical temperatures below 30 K.

By material

"Top: Periodic table of superconducting elemental solids and their experimental critical temperature (T). Bottom: Periodic table of superconducting binary hydrides (0–300 GPa). Theoretical predictions indicated in blue and experimental results in red."

Superconductor material classes include chemical elements (e.g. mercury or lead), alloys (such as niobium–titanium, germanium–niobium, and niobium nitride), ceramics (YBCO and magnesium diboride), superconducting pnictides (like fluorine-doped LaOFeAs) or organic superconductors (fullerenes and carbon nanotubes; though perhaps these examples should be included among the chemical elements, as they are composed entirely of carbon).

Elementary properties of superconductors

Several physical properties of superconductors vary from material to material, such as the critical temperature, the value of the superconducting gap, the critical magnetic field, and the critical current density at which superconductivity is destroyed. On the other hand, there is a class of properties that are independent of the underlying material. The Meissner effect, the quantization of the magnetic flux, and permanent currents (i.e. the state of zero resistance) are the most important examples. The existence of these "universal" properties is rooted in the nature of the broken symmetry of the superconductor and the emergence of off-diagonal long range order. Superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details.

Off diagonal long range order is closely connected to the formation of Cooper pairs. An article by V.F. Weisskopf presents simple physical explanations for the formation of Cooper pairs, for the origin of the attractive force causing the binding of the pairs, for the finite energy gap, and for the existence of permanent currents.

Zero electrical DC resistance

Electric cables for accelerators at CERN. Both the massive and slim cables are rated for 12,500 A. Top: regular cables for LEP; bottom: superconductor-based cables for the LHC
 
Cross section of a preform superconductor rod from abandoned Texas Superconducting Super Collider (SSC).

The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero.

Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. In practice, currents injected in superconducting coils have persisted for more than 25 years (as of August 4, 2020) in superconducting gravimeters. In such instruments, the measurement principle is based on the monitoring of the levitation of a superconducting niobium sphere with a mass of 4 grams.

In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance and Joule heating.

The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is Boltzmann's constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
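As a rough numerical illustration of this argument, the sketch below compares the gap with the thermal energy kT, using the standard weak-coupling BCS estimate Δ(0) ≈ 1.76·kB·Tc (an assumption of the sketch, not a statement from this article) and niobium's critical temperature as an example:

from scipy.constants import k as k_B  # Boltzmann constant, J/K

def bcs_gap(tc_kelvin):
    # Approximate zero-temperature energy gap of a conventional superconductor.
    return 1.76 * k_B * tc_kelvin

tc = 9.3  # approximate critical temperature of niobium, in kelvin
gap = bcs_gap(tc)
for T in (1.0, 4.2, 9.3):
    # Well below Tc the gap exceeds k*T; near Tc the real gap closes to zero,
    # which this zero-temperature estimate does not capture.
    print(f"T = {T:4.1f} K: gap / kT = {gap / (k_B * T):.2f}")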

In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero.

Phase transition

Behavior of heat capacity (cv, blue) and resistivity (ρ, green) at the superconducting phase transition

In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H2S, although high pressures of approximately 90 gigapascals were required. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature above 90 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The basic physical mechanism responsible for the high critical temperature is not yet clear. However, it is clear that a two-electron pairing is involved, although the nature of the pairing (s-wave vs. d-wave) remains controversial.

Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
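The temperature dependence of the critical field is often summarized by the empirical approximation Hc(T) ≈ Hc(0)·(1 − (T/Tc)²); the short sketch below evaluates it for lead, using approximate textbook values purely for illustration:

def critical_field(T, Tc, Hc0):
    # Empirical approximation; returns the critical field in the units of Hc0.
    return 0.0 if T >= Tc else Hc0 * (1.0 - (T / Tc) ** 2)

Tc_lead = 7.2      # kelvin (approximate)
Hc0_lead = 0.080   # tesla, mu_0*Hc at T = 0 (approximate)
for T in (0.0, 2.0, 4.2, 7.0):
    print(f"T = {T:4.1f} K -> Hc ≈ {critical_field(T, Tc_lead, Hc0_lead):.3f} T")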

The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e^(−α/T) for some constant α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.

The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.

Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point. The results were strongly supported by Monte Carlo computer simulations.

Meissner effect

When a superconductor is placed in a weak external magnetic field H and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be completely ejected; instead, the field penetrates the superconductor only to a very small distance, characterized by a parameter λ, called the London penetration depth, beyond which it decays exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.

The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.

The Meissner effect is distinct from this—it is the spontaneous expulsion which occurs during transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.

The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided

∇²H = H / λ²

where H is the magnetic field and λ is the London penetration depth.

This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
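A minimal numerical illustration of this prediction, assuming a penetration depth of 100 nm (the order of magnitude quoted above) and an arbitrary surface field, shows how quickly the field dies away inside the material:

import math

lambda_L = 100e-9    # London penetration depth, metres (assumed)
B_surface = 0.01     # field at the surface, tesla (placeholder value)

for depth_nm in (0, 100, 300, 1000):
    x = depth_nm * 1e-9
    B = B_surface * math.exp(-x / lambda_L)   # B(x) = B_surface * exp(-x / lambda)
    print(f"{depth_nm:5d} nm inside: B ≈ {B:.2e} T")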

A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.

London moment

A spinning superconductor generates a magnetic field precisely aligned with its spin axis. This effect, the London moment, was put to good use in Gravity Probe B, which measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment, since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.

History of superconductivity

Heike Kamerlingh Onnes (right), the discoverer of superconductivity. Paul Ehrenfest, Hendrik Lorentz, Niels Bohr stand to his left.

Superconductivity was discovered on April 8, 1911 by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found. In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.

Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.

London constitutive equations

The theoretical model that was first conceived for superconductivity was completely classical: it is summarized by London constitutive equations. It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. A major triumph of the equations of this theory is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface.

The two constitutive equations for a superconductor by London are:

∂j_s/∂t = (n_s e² / m) E
∇ × j_s = −(n_s e² / m) B

where j_s is the superconducting current density, E and B are the electric and magnetic fields within the superconductor, e is the charge of the electron, m is the electron mass, and n_s is the density of superconducting carriers. The first equation follows from Newton's second law for superconducting electrons.

Conventional theories (1950s)

During the 1950s, theoretical condensed matter physicists arrived at an understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957).

In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg–Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.

Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron-phonon interaction as the microscopic mechanism responsible for superconductivity.

The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer. This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.

The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian. In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature.

Generalizations of BCS theory for conventional superconductors form the basis for understanding of the phenomenon of superfluidity, because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.

Further history

The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron. Two superconductors with greatly different values of critical magnetic field are combined to produce a fast, simple switch for computer elements.

Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. Much later, in 1955, G. B. Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. Then, in 1961, J. E. Kunzler, E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that, at 4.2 kelvin, niobium–tin, a compound consisting of three parts niobium and one part tin, was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 tesla. Despite being brittle and difficult to fabricate, niobium–tin has since proved extremely useful in supermagnets generating magnetic fields as high as 20 tesla. In 1962, T. G. Berlincourt and R. R. Hake discovered that more ductile alloys of niobium and titanium are suitable for applications up to 10 tesla. Promptly thereafter, commercial production of niobium–titanium supermagnet wire commenced at Westinghouse Electric Corporation and at Wah Chang Corporation. Although niobium–titanium boasts less-impressive superconducting properties than niobium–tin, it has nevertheless become the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. Both niobium–tin and niobium–titanium find wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and a host of other applications. Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total.

In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
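The value of the flux quantum follows directly from the defining constants; a one-line check (using SciPy's CODATA constants) gives roughly 2.07 × 10⁻¹⁵ Wb:

from scipy.constants import h, e  # Planck constant and elementary charge

phi_0 = h / (2 * e)
print(f"Phi_0 = {phi_0:.6e} Wb")  # approximately 2.067834e-15 Wb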

In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. The first development and study of a superconducting Bose–Einstein condensate (BEC) in 2020 suggests that there is a "smooth transition between" the BEC and Bardeen–Cooper–Schrieffer regimes.

High-temperature superconductivity

Timeline of superconducting materials, with colors representing different classes of materials.

Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in lanthanum barium copper oxide (LBCO), a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature above 90 K.

This temperature jump is particularly significant, since it allows liquid nitrogen as a refrigerant, replacing liquid helium. This can be important commercially because liquid nitrogen can be produced relatively cheaply, even on-site. Also, the higher temperatures help avoid some of the problems that arise at liquid helium temperatures, such as the formation of plugs of frozen air that can block cryogenic lines and cause unanticipated and potentially hazardous pressure buildup.

Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics. There are currently two main hypotheses: the resonating-valence-bond theory, and spin fluctuation, which has the most support in the research community. The second hypothesis proposes that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons.

In 2008, holographic superconductivity, which uses holographic duality or AdS/CFT correspondence theory, was proposed by Gubser, Hartnoll, Herzog, and Horowitz, as a possible explanation of high-temperature superconductivity in certain materials.

From about 1993, the highest-temperature superconductor known was a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ) with Tc = 133–138 K.

In February 2008, an iron-based family of high-temperature superconductors was discovered. Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1−xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K.

In 2014 and 2015, hydrogen sulfide (H2S) at extremely high pressures (around 150 gigapascals) was first predicted and then confirmed to be a high-temperature superconductor with a transition temperature of 80 K. Additionally, in 2019 it was discovered that lanthanum hydride (LaH10) becomes a superconductor at 250 K under a pressure of 170 gigapascals.

In 2018, a research team from the Department of Physics at the Massachusetts Institute of Technology discovered superconductivity in bilayer graphene with one layer twisted at an angle of approximately 1.1 degrees, after cooling and applying a small electric charge. Even though the experiments were not carried out in a high-temperature environment, the results correlate less with classical superconductors than with high-temperature superconductors, given that no foreign atoms need to be introduced. The superconductivity effect came about as a result of electrons twisted into vortices between the graphene layers, called "skyrmions". These act as a single particle and can pair up across the graphene's layers, leading to the basic conditions required for superconductivity.

In 2020, a room-temperature superconductor made from hydrogen, carbon and sulfur under pressures of around 270 gigapascals was described in a paper in Nature. This is currently the highest temperature at which any material has shown superconductivity.

Applications

Video of superconducting levitation of YBCO

Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, the beam-steering magnets used in particle accelerators and plasma confining magnets in some tokamaks. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries. They can also be used in large wind turbines to overcome the restrictions imposed by high electrical currents, with an industrial grade 3.6 megawatt superconducting windmill generator having been tested successfully in Denmark.

In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.

Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal- to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials.

Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines the lower weight and volume of superconducting generators could lead to savings in construction and tower costs, offsetting the higher costs for the generator and lowering the total levelized cost of electricity (LCOE).

Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, enhancing spintronic devices with superconducting materials, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current. Compared to traditional power lines, superconducting transmission lines are more efficient and require only a fraction of the space, which would not only lead to a better environmental performance but could also improve public acceptance for expansion of the electric grid.

 

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...