
Liquefied natural gas

Liquefied natural gas (LNG) is natural gas (predominantly methane, CH4, with some mixture of ethane, C2H6) that has been converted to liquid form for ease and safety of non-pressurized storage or transport. It takes up about 1/600th the volume of natural gas in the gaseous state (at standard conditions for temperature and pressure). It is odorless, colorless, non-toxic and non-corrosive. Hazards include flammability after vaporization into a gaseous state, freezing and asphyxia. The liquefaction process involves removal of certain components, such as dust, acid gases, helium, water, and heavy hydrocarbons, which could cause difficulty downstream. The natural gas is then condensed into a liquid at close to atmospheric pressure by cooling it to approximately −162 °C (−260 °F); maximum transport pressure is set at around 25 kPa (4 psi).
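
The 1/600 figure can be sanity-checked from densities alone. A minimal sketch, assuming representative values (liquid LNG about 450 kg/m3, methane gas about 0.72 kg/m3 at standard conditions):

```python
# Rough check of the ~1/600 volume reduction, using representative densities.
# Assumed values: LNG liquid ~450 kg/m3, methane gas ~0.72 kg/m3 at
# standard conditions (0 degC, 1 atm).
rho_liquid = 450.0   # kg/m3, liquefied natural gas
rho_gas = 0.72       # kg/m3, methane at standard conditions

reduction = rho_liquid / rho_gas
print(f"Volume reduction factor: ~1/{reduction:.0f}")  # ~1/625, close to 1/600
```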

In a typical LNG process, the gas is first extracted and transported to a processing plant, where it is purified by removing condensates such as water, oil and mud, as well as other gases such as CO2 and H2S. An LNG process train will also typically be designed to remove trace amounts of mercury from the gas stream, to prevent mercury from amalgamating with aluminium in the cryogenic heat exchangers. The gas is then cooled down in stages until it is liquefied. LNG is finally stored in storage tanks and can be loaded and shipped.

Natural gas is mainly converted into LNG to allow transport over seas where laying pipelines is not technically or economically feasible. LNG achieves a greater reduction in volume than compressed natural gas (CNG), so the (volumetric) energy density of LNG is 2.4 times greater than that of CNG (at 250 bar), or 60 percent of that of diesel fuel.[1] This makes LNG cost-efficient for marine transport over long distances; CNG carriers, however, can be used economically up to medium distances in marine transport.[2] Specially designed cryogenic sea vessels (LNG carriers) or cryogenic road tankers are used for its transport. LNG is principally used for transporting natural gas to markets, where it is regasified and distributed as pipeline natural gas. It can be used in natural gas vehicles, although it is more common to design vehicles to use compressed natural gas. Its relatively high cost of production and the need to store it in expensive cryogenic tanks have hindered widespread commercial use. Despite these drawbacks, on an energy basis LNG production is expected to hit 10% of global crude production by 2020 (see Trade below).

Specific energy content and energy density

The heating value depends on the source of gas that is used and the process that is used to liquefy the gas. It can vary by ±10 to 15 percent. A typical value of the higher heating value of LNG is approximately 50 MJ/kg or 21,500 BTU/lb.[3] A typical value of the lower heating value of LNG is 45 MJ/kg or 19,350 BTU/lb.

For comparison of different fuels, the heating value may be expressed in terms of energy per unit volume, known as the energy density, in MJ/liter. The density of LNG is roughly 0.41 kg/liter to 0.5 kg/liter, depending on temperature, pressure, and composition,[4] compared to 1.0 kg/liter for water. Using the median value of 0.45 kg/liter, typical energy density values are 22.5 MJ/liter (based on higher heating value) or 20.3 MJ/liter (based on lower heating value).
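
The energy density figures above follow directly from multiplying density by heating value. A minimal sketch using the values quoted in this section:

```python
# Energy density (MJ/L) = density (kg/L) x heating value (MJ/kg),
# using the representative values quoted above.
density = 0.45         # kg/L, median LNG density
hhv, lhv = 50.0, 45.0  # MJ/kg, higher and lower heating values

print(f"HHV basis: {density * hhv:.1f} MJ/L")   # 22.5 MJ/L
print(f"LHV basis: {density * lhv:.2f} MJ/L")   # 20.25, quoted as 20.3 above
```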

The (volume-based) energy density of LNG is approximately 2.4 times greater than that of CNG which makes it economical to transport natural gas by ship in the form of LNG. The energy density of LNG is comparable to propane and ethanol but is only 60 percent that of diesel and 70 percent that of gasoline.[5]

History

Experiments on the properties of gases started early in the seventeenth century. By the middle of the seventeenth century Robert Boyle had derived the inverse relationship between the pressure and the volume of gases. About the same time, Guillaume Amontons started looking into temperature effects on gas. Various gas experiments continued for the next 200 years. During that time there were efforts to liquefy gases. Many new facts on the nature of gases had been discovered. For example, early in the nineteenth century Cagniard de la Tour had shown there was a temperature above which a gas could not be liquefied. There was a major push in the mid to late nineteenth century to liquefy all gases. A number of scientists including Michael Faraday, James Joule, and William Thomson (Lord Kelvin), did experiments in this area. In 1886 Karol Olszewski liquefied methane, the primary constituent of natural gas. By 1900 all gases had been liquefied except helium which was liquefied in 1908.

The first large scale liquefaction of natural gas in the U.S. was in 1918 when the U.S. government liquefied natural gas as a way to extract helium, which is a small component of some natural gas. This helium was intended for use in British dirigibles for World War I. The liquid natural gas (LNG) was not stored, but regasified and immediately put into the gas mains.[6]

The key patents having to do with natural gas liquefaction date from 1915 and the mid-1930s. In 1915 Godfrey Cabot patented a method for storing liquid gases at very low temperatures. It consisted of a Thermos-bottle-type design, with a cold inner tank inside an outer tank, the tanks being separated by insulation. In 1937 Lee Twomey received patents for a process for large-scale liquefaction of natural gas. The intention was to store natural gas as a liquid so it could be used to shave peak energy loads during cold snaps. Because of its large volume, it is not practical to store natural gas as a gas near atmospheric pressure. However, if it can be liquefied it can be stored in a volume 600 times smaller. This is a practical way to store it, but the gas must be kept at −260 °F (−162 °C).

There are two processes for liquefying natural gas in large quantities. The first is the cascade process, in which the natural gas is cooled by another gas, which in turn has been cooled by still another gas, hence the name "cascade" process. There are usually two cascade cycles prior to the liquid natural gas cycle. The other method is the Linde process, a variation of which, the Claude process, is sometimes used. In this process, the gas is cooled regeneratively by continually passing it through an orifice until it is cooled to temperatures at which it liquefies. The cooling of gas by expanding it through an orifice was developed by James Joule and William Thomson and is known as the Joule–Thomson effect. Lee Twomey used the cascade process for his patents.

Commercial operations in the United States

The East Ohio Gas Company built a full-scale commercial liquid natural gas (LNG) plant in Cleveland, Ohio, in 1940, just after a successful pilot plant built by its sister company, Hope Natural Gas Company of West Virginia. This was the first such plant in the world. Originally it had three spheres, approximately 63 feet in diameter, containing LNG at −260 °F. Each sphere held the equivalent of about 50 million cubic feet of natural gas. A fourth tank, a cylinder, was added in 1942. It had an equivalent capacity of 100 million cubic feet of gas. The plant operated successfully for three years. The stored gas was regasified and put into the mains when cold snaps hit and extra capacity was needed. This precluded the denial of gas to some customers during a cold snap.

The Cleveland plant failed on October 20, 1944, when the cylindrical tank ruptured, spilling thousands of gallons of LNG over the plant and the nearby neighborhood. The gas evaporated and caught fire, causing 130 fatalities.[7] The fire delayed further implementation of LNG facilities for several years. However, over the next 15 years new research on low-temperature alloys and better insulation materials set the stage for a revival of the industry. It restarted in 1959 when a U.S. World War II Liberty ship, the Methane Pioneer, converted to carry LNG, made a delivery of LNG from the U.S. Gulf Coast to energy-starved Great Britain. In June 1964, the world's first purpose-built LNG carrier, the Methane Princess, entered service.[8] Soon after, a large natural gas field was discovered in Algeria. International trade in LNG quickly followed, as LNG was shipped to France and Great Britain from the Algerian fields. One more important attribute of LNG had now been exploited: once natural gas was liquefied, it could not only be stored more easily, but it could be transported. Thus energy could now be shipped over the oceans via LNG the same way it was shipped as oil.

The US LNG industry restarted in 1965 when a series of new plants were built in the U.S. Building continued through the 1970s. These plants were not only used for peak-shaving, as in Cleveland, but also for base-load supplies for places that had never had natural gas before. A number of import facilities were built on the East Coast in anticipation of the need to import energy via LNG. However, a boom in U.S. natural gas production (2010–2014), enabled by hydraulic fracturing ("fracking"), has led many of these import facilities to be reconsidered as export facilities. The first U.S. LNG export was completed in early 2016.[9]

Production

The natural gas fed into the LNG plant will be treated to remove water, hydrogen sulfide, carbon dioxide and other components that will freeze (e.g., benzene) under the low temperatures needed for storage or be destructive to the liquefaction facility. LNG typically contains more than 90 percent methane. It also contains small amounts of ethane, propane, butane, some heavier alkanes, and nitrogen. The purification process can be designed to give almost 100 percent methane. One of the risks of LNG is a rapid phase transition explosion (RPT), which occurs when cold LNG comes into contact with water.[10]

The most important infrastructure needed for LNG production and transportation is an LNG plant consisting of one or more LNG trains, each of which is an independent unit for gas liquefaction. The largest LNG train in operation is in Qatar; that operation recently reached a safety milestone, completing 12 years of operation on its offshore facilities without a lost-time incident.[11] The Qatar operation overtook Train 4 of Atlantic LNG in Trinidad and Tobago, which has a production capacity of 5.2 million metric tons per annum (mmtpa),[12] followed by the SEGAS LNG plant in Egypt with a capacity of 5 mmtpa. In July 2014, Atlantic LNG celebrated its 3,000th cargo of LNG at the company's liquefaction facility in Trinidad.[13] The Qatargas II plant has a production capacity of 7.8 mmtpa for each of its two trains. LNG sourced from Qatargas II will be supplied to Kuwait, following the signing of an agreement in May 2014 between Qatar Liquefied Gas Company and Kuwait Petroleum Corp.[13] LNG is loaded onto ships and delivered to a regasification terminal, where the LNG is allowed to expand and reconvert into gas. Regasification terminals are usually connected to a storage and pipeline distribution network to distribute natural gas to local distribution companies (LDCs) or independent power plants (IPPs).

LNG plant production

Information in the following table is derived in part from a publication by the U.S. Energy Information Administration.[14]

Plant Name Location Country Startup Date Capacity (mmtpa) Corporation
Gorgon Barrow Island Australia 2016 3 x 5 = 15 Chevron 47%
Ichthys Browse Basin Australia 2016 2 x 4.2 = 8.4 INPEX, Total S.A. 24%
Das Island I Trains 1–2 Abu Dhabi UAE 1977 1.7 x 2 = 3.4 ADGAS (ADNOC, BP, Total, Mitsui)
Das Island II Train 3 Abu Dhabi UAE 1994 2.6 ADGAS (ADNOC, BP, Total, Mitsui)
Arzew (CAMEL) GL4Z Trains 1–3 Algeria 1964 0.3 x 3 = 0.9 Sonatrach. Shutdown since April 2010.
Arzew GL1Z Trains 1–6 Algeria 1978 1.3 x 6 = 7.8 Sonatrach
Arzew GL2Z Trains 1–6 Algeria 1981 1.4 x 6 = 8.4 Sonatrach
Skikda GL1K Phase 1 & 2 Trains 1–6 Algeria 1972/1981 Total 6.0 Sonatrach
Skikda GL3Z Train 1 Skikda Algeria 2013 4.7 Sonatrach
Skikda GL3Z Train 2 Skikda Algeria 2013 4.5 Sonatrach
Angola LNG Soyo Angola 2013 5.2 Chevron
Lumut 1 Brunei 1972 7.2
Badak NGL A-B Bontang Indonesia 1977 4 Pertamina
Badak NGL C-D Bontang Indonesia 1986 4.5 Pertamina
Badak NGL E Bontang Indonesia 1989 3.5 Pertamina
Badak NGL F Bontang Indonesia 1993 3.5 Pertamina
Badak NGL G Bontang Indonesia 1998 3.5 Pertamina
Badak NGL H Bontang Indonesia 1999 3.7 Pertamina
Darwin LNG Darwin, NT Australia 2006 3.7 ConocoPhillips
Donggi Senoro LNG Luwuk Indonesia 2015 2 Mitsubishi, Pertamina, Medco
Sengkang LNG Sengkang Indonesia 2014 5 Energy World Corp.
Atlantic LNG Point Fortin Trinidad and Tobago 1999 Atlantic LNG
Atlantic LNG Point Fortin Trinidad and Tobago 2003 9.9 Atlantic LNG
SEGAS LNG Damietta Egypt 2004 5.5 SEGAS LNG
Egyptian LNG Idku Egypt 2005 7.2
Bintulu MLNG 1 Malaysia 1983 7.6
Bintulu MLNG 2 Malaysia 1994 7.8
Bintulu MLNG 3 Malaysia 2003 3.4
Nigeria LNG Nigeria 1999 23.5
Northwest Shelf Venture Karratha Australia 1984 16.3
Withnell Bay Karratha Australia 1989
Withnell Bay Karratha Australia 1995 (7.7)
Sakhalin II Russia 2009 9.6[15]
Yemen LNG Balhaf Yemen 2008 6.7
Tangguh LNG Project Papua Barat Indonesia 2009 7.6
Qatargas Train 1 Ras Laffan Qatar 1996 3.3
Qatargas Train 2 Ras Laffan Qatar 1997 3.3
Qatargas Train 3 Ras Laffan Qatar 1998 3.3
Qatargas Train 4 Ras Laffan Qatar 2009 7.8
Qatargas Train 5 Ras Laffan Qatar 2009 7.8
Qatargas Train 6 Ras Laffan Qatar 2010 7.8
Qatargas Train 7 Ras Laffan Qatar 2011 7.8
Rasgas Train 1 Ras Laffan Qatar 1999 3.3
Rasgas Train 2 Ras Laffan Qatar 2000 3.3
Rasgas Train 3 Ras Laffan Qatar 2004 4.7
Rasgas Train 4 Ras Laffan Qatar 2005 4.7
Rasgas Train 5 Ras Laffan Qatar 2006 4.7
Rasgas Train 6 Ras Laffan Qatar 2009 7.8
Rasgas Train 7 Ras Laffan Qatar 2010 7.8
Qalhat Oman 2000 7.3
Melkøya Hammerfest Norway 2007 4.2 Statoil
Equatorial Guinea 2007 3.4 Marathon Oil
Risavika Stavanger Norway 2010 0.3 Risavika LNG Production[16]
Dominion Cove Point LNG Lusby, Maryland United States 2018 5.2 Dominion Resources

World total production

Global LNG import trends, by volume (in red), and as a percentage of global natural gas imports (in black) (US EIA data)

Trends in the top five LNG-importing nations as of 2009 (US EIA data)

Year Capacity (Mmtpa) Notes
1990 50[17]
2002 130[18]
2007 160[17]
2014 246[19]

The LNG industry developed slowly during the second half of the last century because most LNG plants are located in remote areas not served by pipelines, and because of the large costs to treat and transport LNG. Constructing an LNG plant costs at least $1.5 billion per 1 mmtpa capacity, a receiving terminal costs $1 billion per 1 bcf/day throughput capacity and LNG vessels cost $200 million–$300 million.
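
To see how these unit costs combine for a complete chain, the sketch below totals them for a hypothetical project; the plant size, terminal throughput, and fleet size are illustrative assumptions, while the unit costs are the figures quoted above:

```python
# Order-of-magnitude cost of an LNG chain, using the unit costs quoted above.
# The plant size, terminal throughput, and fleet size are hypothetical.
plant_capacity_mmtpa = 5.0                        # hypothetical plant
plant_cost = 1.5e9 * plant_capacity_mmtpa         # $1.5bn per mmtpa

terminal_throughput_bcfd = 1.0                    # hypothetical terminal
terminal_cost = 1.0e9 * terminal_throughput_bcfd  # $1bn per bcf/day

ships = 4                                         # hypothetical fleet
ship_cost = ships * 250e6                         # $200m-$300m each; midpoint

total = plant_cost + terminal_cost + ship_cost
print(f"Indicative chain cost: ${total / 1e9:.1f} billion")  # ~$9.5 billion
```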

In the early 2000s, prices for constructing LNG plants, receiving terminals and vessels fell as new technologies emerged and more players invested in liquefaction and regasification. This tended to make LNG more competitive as a means of energy distribution, but increasing material costs and demand for construction contractors have put upward pressure on prices in the last few years. The standard price for a 125,000 cubic meter LNG vessel built in European and Japanese shipyards used to be US$250 million. When Korean and Chinese shipyards entered the race, increased competition reduced profit margins and improved efficiency—reducing costs by 60 percent. Costs in US dollars also declined due to the devaluation of the currencies of the world's largest shipbuilders: the Japanese yen and Korean won.

Since 2004, the large number of orders increased demand for shipyard slots, raising their price and increasing ship costs. The per-ton construction cost of an LNG liquefaction plant fell steadily from the 1970s through the 1990s, by approximately 35 percent. Recently, however, the cost of building liquefaction and regasification terminals doubled due to the increased cost of materials and a shortage of skilled labor, professional engineers, designers, managers and other white-collar professionals.

Due to natural gas shortage concerns in the northeastern U.S. and surplus natural gas in the rest of the country, many new LNG import and export terminals are being contemplated in the United States. Concerns about the safety of such facilities create controversy in some regions where they are proposed. One such location is in the Long Island Sound between Connecticut and Long Island. Broadwater Energy, an effort of TransCanada Corp. and Shell, wishes to build an LNG import terminal in the sound on the New York side. Local politicians, including the Suffolk County Executive, raised questions about the terminal. In 2005, New York Senators Chuck Schumer and Hillary Clinton also announced their opposition to the project.[20] Several import terminal proposals along the coast of Maine were also met with high levels of resistance and questions. On Sep. 13, 2013 the U.S. Department of Energy approved Dominion Cove Point's application to export up to 770 million cubic feet per day of LNG to countries that do not have a free trade agreement with the U.S.[21] In May 2014, the FERC concluded its environmental assessment of the Cove Point LNG project, which found that the proposed natural gas export project could be built and operated safely.[22] Another LNG terminal is currently proposed for Elba Island, Ga.[23] Plans for three LNG export terminals in the U.S. Gulf Coast region have also received conditional Federal approval.[21][24] In Canada, an LNG export terminal is under construction near Guysborough, Nova Scotia.[25]

Commercial aspects

Global Trade

In the commercial development of an LNG value chain, LNG suppliers first confirm sales to the downstream buyers and then sign long-term contracts (typically 20–25 years) with strict terms and structures for gas pricing. Only when the customers are confirmed and the development of a greenfield project is deemed economically feasible can the sponsors of an LNG project invest in its development and operation. Thus, the LNG liquefaction business has been limited to players with strong financial and political resources. Major international oil companies (IOCs) such as ExxonMobil, Royal Dutch Shell, BP, BG Group and Chevron, and national oil companies (NOCs) such as Pertamina and Petronas, are active players.

LNG is shipped around the world in specially constructed seagoing vessels. The trade of LNG is completed by signing an SPA (sale and purchase agreement) between a supplier and a receiving terminal, and by signing a GSA (gas sale agreement) between a receiving terminal and end-users. Most contract terms used to be DES or ex ship, holding the seller responsible for the transport of the gas. With low shipbuilding costs, and buyers preferring to ensure reliable and stable supply, however, contracts with FOB terms have increased. Under such terms the buyer, who often owns a vessel or signs a long-term charter agreement with independent carriers, is responsible for the transport.

LNG purchasing agreements used to be long-term, with relatively little flexibility in either price or volume. Once the annual contract quantity is confirmed, the buyer is obliged to take the product and pay for it, or to pay for it even if not taken, in what is referred to as a take-or-pay (TOP) obligation.

In the mid-1990s, LNG was a buyer's market. At the request of buyers, SPAs began to adopt some flexibility on volume and price. Buyers had more upward and downward flexibility in TOP, and short-term SPAs of less than 16 years came into effect. At the same time, alternative destinations for cargo and arbitrage were also allowed. By the turn of the 21st century, the market was again in favor of sellers. However, sellers have become more sophisticated and are now proposing sharing of arbitrage opportunities and moving away from S-curve pricing. There has been much discussion regarding the creation of an "OGEC" as a natural gas equivalent of OPEC. Russia and Qatar, the countries with the largest and third-largest natural gas reserves in the world, have supported such a move.[citation needed]

Until 2003, LNG prices closely followed oil prices. Since then, LNG prices in Europe and Japan have been lower than oil prices, although the link between LNG and oil remains strong. In contrast, prices in the US and the UK recently skyrocketed, then fell as a result of changes in supply and storage.[citation needed] In the late 1990s and early 2000s, the market shifted for buyers, but since 2003 and 2004 it has been a strong seller's market, with net-back as the best estimation for prices.[citation needed]

Research from QNB Group in 2014 shows that robust global demand is likely to keep LNG prices high for at least the next few years.[26]

The current surge in unconventional oil and gas production in the U.S. has resulted in lower domestic gas prices. This has led to discussions in Asia's oil-linked gas markets about importing gas priced off the Henry Hub index.[27] A recent high-level conference in Vancouver, the 2013 Pacific Energy Summit, convened policy makers and experts from Asia and the U.S. to discuss LNG trade relations between these regions.

Receiving terminals exist in about 18 countries, including India, Japan, Korea, Taiwan, China, Greece, Belgium, Spain, Italy, France, the UK, the US, Chile, and the Dominican Republic, among others. Plans exist for Argentina, Brazil, Uruguay, Canada, Ukraine and others to also construct new receiving (gasification) terminals.

LNG Project Screening

Base load (large-scale, >1 MTPA) LNG projects require natural gas reserves,[28] buyers[29] and financing. Using proven technology and a proven contractor is extremely important for both investors and buyers.[30] As a rule of thumb, about 1 tcf of gas reserves is required per mtpa of LNG capacity over 20 years.[28]
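
The reserves rule of thumb quoted above can be applied directly. A minimal sketch with a hypothetical plant size:

```python
# Reserves screening using the rule of thumb quoted above:
# ~1 tcf of gas reserves per mtpa of LNG capacity over a 20-year life.
capacity_mtpa = 5.0      # hypothetical plant capacity
project_life_years = 20  # plant/contract life assumed by the rule

reserves_tcf = 1.0 * capacity_mtpa  # tcf needed for a 20-year life
print(f"A {capacity_mtpa:.0f} mtpa plant needs ~{reserves_tcf:.0f} tcf "
      f"of dedicated reserves over {project_life_years} years")
```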

LNG is most cost-efficiently produced in relatively large facilities, due to economies of scale, at sites with marine access that allow regular large bulk shipments direct to market. This requires a secure gas supply of sufficient capacity. Ideally, facilities are located close to the gas source to minimize the cost of intermediate transport infrastructure and gas shrinkage (fuel loss in transport). The high cost of building large LNG facilities makes the progressive development of gas sources to maximize facility utilization essential, and makes the life extension of existing, financially depreciated LNG facilities cost-effective. Combined with lower sale prices due to large installed capacity and rising construction costs, this makes the economic screening and justification of new, and especially greenfield, LNG facilities challenging, even where they could be more environmentally friendly than existing facilities and satisfy all stakeholder concerns. Because of the high financial risk, it is usual to contractually secure gas supply/concessions and gas sales for extended periods before proceeding to an investment decision.

Uses

The primary use of LNG is to simplify transport of natural gas from the source to a destination. On the large scale, this is done when the source and the destination are across an ocean from each other. It can also be used when adequate pipeline capacity is not available. For large-scale transport uses, the LNG is typically regasified at the receiving end and pushed into the local natural gas pipeline infrastructure.

– LNG can also be used to meet peak demand when the normal pipeline infrastructure can meet most demand needs, but not the peak demand needs. These plants are typically called LNG peak-shaving plants, as their purpose is to shave off part of the peak demand from what is required out of the supply pipeline.

– LNG can be used to fuel internal combustion engines. LNG is in the early stages of becoming a mainstream fuel for transportation needs. It is being evaluated and tested for over-the-road trucking,[31] off-road,[32] marine, and train applications.[33] There are known problems with the fuel tanks and delivery of gas to the engine,[34] but despite these concerns the move to LNG as a transportation fuel has begun. LNG competes directly with compressed natural gas as a fuel for natural gas vehicles since the engine is identical. There may be applications where LNG trucks, buses, trains and boats could be cost effective in order to regularly distribute LNG energy together with general freight and/or passengers to smaller, isolated communities without a local gas source or access to pipelines.

Use of LNG to fuel large over-the-road trucks

China has been a leader in the use of LNG vehicles,[35] with over 100,000 LNG-powered vehicles on the road as of September 2014.[36]

In the United States, the beginnings of a public LNG fueling capability are being put in place. An alternative-fuelling-centre tracking site shows 84 public truck LNG fuel centres as of December 2016.[37] It is possible for large trucks to make cross-country trips such as Los Angeles to Boston and refuel at public refuelling stations every 500 miles. The 2013 National Trucker's Directory lists approximately 7,000 truckstops,[38] so approximately 1% of US truckstops have LNG available.
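
The coverage arithmetic in this paragraph is easy to reproduce; in the sketch below, the Los Angeles-to-Boston distance is an approximate assumption, while the station and truckstop counts are the figures cited above:

```python
# LNG fueling coverage, reproducing the figures quoted above.
lng_stations = 84   # public truck LNG fuel centres (Dec 2016)
truckstops = 7000   # 2013 National Trucker's Directory
print(f"Share with LNG: {lng_stations / truckstops:.1%}")  # ~1.2%

trip_miles = 3000   # assumed LA-to-Boston driving distance
range_miles = 500   # refueling interval cited above
print(f"Refueling stops needed: ~{trip_miles // range_miles}")  # ~6
```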

As of December 2014, LNG fuel and NGVs had not been taken up very quickly in Europe, and it is questionable whether LNG will ever become the fuel of choice among fleet operators.[39] In 2015, the Netherlands introduced LNG-powered trucks in the transport sector.[40] The Australian government is planning to develop an LNG highway to utilise locally produced LNG and replace the imported diesel fuel used by interstate haulage vehicles.[41]

In 2015, India also made a small beginning, transporting LNG by LNG-powered road tankers in Kerala state.[42] In 2017, Petronet LNG was setting up 20 LNG stations on the highways along the Indian west coast that connect Delhi with Thiruvananthapuram, covering a total distance of 4,500 km via Mumbai and Bengaluru.[43] Japan, the world's largest importer of LNG, is set to use LNG as a road transport fuel.[44]

Use of LNG to fuel high-horsepower/high-torque engines

In internal combustion engines, the volume of the cylinders is a common measure of engine power. Thus a 2000cc engine would typically be more powerful than an 1800cc engine, but that assumes a similar air-fuel mixture is used. If, via a turbocharger for example, the 1800cc engine used an air-fuel mixture that was significantly more energy dense, then it might produce more power than a 2000cc engine burning a less energy-dense air-fuel mixture. Unfortunately, turbochargers are both complex and expensive. Thus, for high-horsepower/high-torque engines, a fuel that can inherently be used to create a more energy-dense air-fuel mixture is preferred, because a smaller and simpler engine can produce the same power.

With traditional gasoline and diesel engines the energy density of the air-fuel mixture is limited because the liquid fuels do not mix well in the cylinder. Further, gasoline and diesel auto-ignite[45] at temperatures and pressures relevant to engine design. An important part of traditional engine design is designing the cylinders, compression ratios, and fuel injectors such that pre-ignition is avoided,[46] but at the same time as much fuel as possible can be injected, become well mixed, and still have time to complete the combustion process during the power stroke.

Natural gas does not auto-ignite at pressures and temperatures relevant to traditional gasoline and diesel engine design, thus providing more flexibility in the design of a natural gas engine. Methane, the main component of natural gas, has an autoignition temperature of 580 °C (1,076 °F),[47] whereas gasoline and diesel autoignite at approximately 250 °C (482 °F) and 210 °C (410 °F) respectively.

With a compressed natural gas (CNG) engine, the mixing of fuel and air is more effective since gases typically mix well in a short period of time, but at typical CNG compression pressures the fuel itself is less energy dense than gasoline or diesel, so the end result is a less energy-dense air-fuel mixture. Thus, for the same cylinder displacement, a non-turbocharged CNG-powered engine is typically less powerful than a similarly sized gasoline or diesel engine. For that reason turbochargers are popular on European CNG cars.[48] Despite that limitation, the 12-liter Cummins Westport ISX12G engine[49] is an example of a CNG-capable engine designed to pull tractor/trailer loads up to 80,000 lbs, showing CNG can be used in most if not all on-road truck applications. The original ISX G engines incorporated a turbocharger to enhance the air-fuel energy density.[50]
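
The power penalty described here can be made concrete with a back-of-the-envelope mixture calculation. All inputs below (stoichiometric ratios, heating values, gas densities) are textbook approximations rather than values from this article; the point is that premixed gaseous fuel displaces roughly a tenth of the intake air, so a CNG cylinder charge carries less energy than a gasoline charge of the same volume:

```python
# Back-of-envelope: energy per m3 of cylinder charge, premixed CNG vs
# gasoline. All inputs are textbook approximations (assumptions).
rho_air = 1.20   # kg/m3 at ~20 degC
rho_ch4 = 0.668  # kg/m3 at ~20 degC

# Gasoline (port injection): liquid fuel volume is negligible, so the
# cylinder fills with air; fuel mass = air mass / air-fuel ratio.
afr_gasoline, lhv_gasoline = 14.7, 44.0  # kg air/kg fuel, MJ/kg
e_gasoline = (rho_air / afr_gasoline) * lhv_gasoline  # MJ per m3 of charge

# Premixed methane: stoichiometric mixture is ~9.5% CH4 by volume, and the
# gaseous fuel displaces that fraction of the intake air.
ch4_vol_frac, lhv_ch4 = 0.095, 50.0
e_cng = ch4_vol_frac * rho_ch4 * lhv_ch4  # MJ per m3 of charge

print(f"gasoline charge: {e_gasoline:.2f} MJ/m3")  # ~3.59
print(f"CNG charge:      {e_cng:.2f} MJ/m3")       # ~3.17
print(f"ratio: {e_cng / e_gasoline:.2f}")          # ~0.88 -> ~12% less power
```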

LNG offers a unique advantage over CNG for more demanding high-horsepower applications by eliminating the need for a turbocharger. Because LNG boils at approximately −160 °C (−256 °F), a simple heat exchanger can convert a small amount of LNG to its gaseous form at extremely high pressure using little or no mechanical energy. A properly designed high-horsepower engine can leverage this extremely high-pressure, energy-dense gaseous fuel source to create a higher energy-density air-fuel mixture than can be efficiently created with a CNG-powered engine. Compared to CNG engines, the end result is more overall efficiency in high-horsepower engine applications when high-pressure direct injection technology is used. The Westport HPDI 2.0[51] fuel system is an example of a high-pressure direct injection technology that does not require a turbocharger if teamed with appropriate LNG heat-exchanger technology. The Volvo Trucks 13-liter LNG engine[52] is another example of an LNG engine leveraging advanced high-pressure technology.

Westport recommends CNG for engines 7 liters or smaller and LNG with direct injection for engines between 20 and 150 liters. For engines between 7 and 20 liters, either option is recommended. See slide 13 of their NGV Bruxelles – Industry Innovation Session presentation.[53]

High-horsepower engines in the oil drilling, mining, locomotive, and marine fields have been or are being developed.[54] Paul Blomerous has written a paper[55] concluding that as much as 40 million tonnes per annum of LNG (approximately 26.1 billion gallons/year or 71 million gallons/day) could be required just to meet the global needs of high-horsepower engines by 2025 to 2030.

As of the end of the first quarter of 2015, Prometheus Energy Group Inc claims to have delivered over 100 million gallons of LNG within the previous four years into the industrial market,[56] and is continuing to add new customers.

Use of LNG in maritime applications

LNG bunkering has been established in some ports via truck-to-ship fueling. This type of LNG fueling is straightforward to establish, assuming a supply of LNG is available, but it does not meet the needs of containerships and other vessels with large fuel capacity.

The container shipping company Maersk Group has decided to introduce LNG-fuelled container ships.[57] DEME Group has contracted Wärtsilä to power its new-generation 'Antigoon'-class dredger with dual-fuel (DF) engines.[58]

In 2014, Shell ordered a dedicated LNG bunker vessel,[59] planned to go into service in Rotterdam in the summer of 2017.[60]

The International Convention for the Prevention of Pollution from Ships (MARPOL), adopted by the IMO, mandates that from 2020 marine vessels shall not consume fuel (bunker fuel, diesel, etc.) with a sulphur content greater than 0.5% (0.1% in designated emission control areas). Large-scale replacement of high-sulphur bunker fuel with sulphur-free LNG is expected in the marine transport sector, as low-sulphur liquid fuels are costlier than LNG.[61]

Trade

The global trade in LNG has grown rapidly, from negligible in 1970 to what is expected to be a globally significant amount by 2020. As a reference, 2014 global production of crude oil was 92 million barrels per day,[62] or 186.4 quads/yr (quadrillion BTU/yr).

In 1970, global LNG trade was 3 billion cubic metres (bcm) (0.11 quads).[63] In 2011, it was 331 bcm (11.92 quads).[63] The U.S. started exporting LNG in February 2016. The Black & Veatch October 2014 forecast is that by 2020, the U.S. alone will export between 10 Bcf/d (3.75 quads/yr) and 14 Bcf/d (5.25 quads/yr).[64] E&Y projects global LNG demand could hit 400 mtpa (19.7 quads) by 2020.[65] If that occurs, the LNG market will be roughly 10% the size of the global crude oil market, and that does not count the vast majority of natural gas, which is delivered via pipeline directly from the well to the consumer.
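
The Bcf/d-to-quads conversions quoted above can be reproduced with the standard approximation of about 1,027 BTU per cubic foot of natural gas (an assumption; the exact figure varies with gas composition):

```python
# Converting the Black & Veatch forecast from Bcf/d to quads/yr.
# Assumes ~1,027 BTU per cubic foot of natural gas (composition-dependent).
btu_per_cf = 1027.0
quad = 1e15  # BTU

for bcf_per_day in (10, 14):
    btu_per_year = bcf_per_day * 1e9 * btu_per_cf * 365
    print(f"{bcf_per_day} Bcf/d = {btu_per_year / quad:.2f} quads/yr")
# 10 Bcf/d -> ~3.75 quads/yr; 14 Bcf/d -> ~5.25 quads/yr (as quoted above)
```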

In 2004, LNG accounted for 7 percent of the world’s natural gas demand.[66] The global trade in LNG, which has increased at a rate of 7.4 percent per year over the decade from 1995 to 2005, is expected to continue to grow substantially.[67] LNG trade is expected to increase at 6.7 percent per year from 2005 to 2020.[67]

Until the mid-1990s, LNG demand was heavily concentrated in Northeast Asia: Japan, South Korea and Taiwan. At the same time, Pacific Basin supplies dominated world LNG trade.[67] The worldwide interest in using natural gas-fired combined cycle generating units for electric power generation, coupled with the inability of North American and North Sea natural gas supplies to meet the growing demand, substantially broadened the regional markets for LNG. It also brought new Atlantic Basin and Middle East suppliers into the trade.[67]

By the end of 2011, there were 18 LNG exporting countries and 25 LNG importing countries. The three biggest LNG exporters in 2011 were Qatar (75.5 MT), Malaysia (25 MT) and Indonesia (21.4 MT). The three biggest LNG importers in 2011 were Japan (78.8 MT), South Korea (35 MT) and the UK (18.6 MT).[68] LNG trade volumes increased from 140 MT in 2005 to 158 MT in 2006, 165 MT in 2007 and 172 MT in 2008.[69] Global LNG production was 246 MT in 2014,[70] most of which was used in trade between countries.[71] Over the next several years there was a significant increase in the volume of LNG trade.[65] For example, about 59 MTPA of new LNG supply from six new plants came to market in 2009 alone, including:
  • Northwest Shelf Train 5: 4.4 MTPA
  • Sakhalin II: 9.6 MTPA
  • Yemen LNG: 6.7 MTPA
  • Tangguh: 7.6 MTPA
  • Qatargas: 15.6 MTPA
  • Rasgas Qatar: 15.6 MTPA
In 2006, Qatar became the world's biggest exporter of LNG.[63] As of 2012, Qatar is the source of 25 percent of the world's LNG exports.[63]

Investments in U.S. export facilities were increasing by 2013, spurred by growing shale gas production in the United States and a large price differential between natural gas prices in the U.S. and those in Europe and Asia. Cheniere Energy became the first company in the United States to receive permission to export LNG, and did so in 2016.[9]

Imports

In 1964, the UK and France made the first LNG trade, buying gas from Algeria and ushering in a new era of energy.

Today, only 19 countries export LNG.[63]

In 2013, the natural gas market was about 72 percent of the size of the crude oil market (measured on a heat-equivalent basis),[72] of which LNG forms a small but rapidly growing part. Much of this growth is driven by the need for clean fuel and some substitution effect due to the high price of oil (primarily in the heating and electricity generation sectors).

Japan, South Korea, Spain, France, Italy and Taiwan import large volumes of LNG due to their shortage of energy. In 2005, Japan imported 58.6 million tons of LNG, representing some 30 percent of the LNG trade around the world that year. Also in 2005, South Korea imported 22.1 million tons, and in 2004 Taiwan imported 6.8 million tons. These three major buyers account for approximately two-thirds of world LNG demand. In addition, Spain imported some 8.2 mmtpa in 2006, making it the third-largest importer. France also imported quantities similar to Spain's.[citation needed] Following the Fukushima Daiichi nuclear disaster in March 2011, Japan became a major importer, accounting for one third of the total.[73] European LNG imports fell by 30 percent in 2012, and were expected to fall further by 24 percent in 2013, as South American and Asian importers paid more.[74]

Cargo diversion

Under LNG SPAs, LNG is destined for pre-agreed destinations, and diversion of that LNG is not allowed. However, if the seller and buyer mutually agree, diversion of the cargo is permitted, subject to sharing the additional profit created by such a diversion. In the European Union and some other jurisdictions, it is not permitted to apply the profit-sharing clause in LNG SPAs.

Cost of LNG plants

For an extended period of time, design improvements in liquefaction plants and tankers had the effect of reducing costs.

In the 1980s, building an LNG liquefaction plant cost $350 per tpa (tonne per annum of capacity). In the 2000s, it was $200/tpa. In 2012, costs ran as high as $1,000/tpa, partly due to the increase in the price of steel.[63]

As recently as 2003, it was common to assume that this was a “learning curve” effect and would continue into the future. But this perception of steadily falling costs for LNG has been dashed in the last several years.[67]

The construction cost of greenfield LNG projects started to skyrocket in 2004, rising from about $400 per ton per year of capacity to $1,000 per ton per year of capacity in 2008.

The main reasons for the skyrocketing costs in the LNG industry are as follows:
  1. Low availability of EPC contractors as a result of the extraordinarily high level of ongoing petroleum projects worldwide.[15]
  2. High raw material prices as a result of a surge in demand for raw materials.
  3. Lack of skilled and experienced workforce in the LNG industry.[15]
  4. Devaluation of the US dollar.
The 2007–2008 global financial crisis caused a general decline in raw material and equipment prices, which somewhat lessened the construction cost of LNG plants. However, by 2012 this was more than offset by increasing demand for materials and labor for the LNG market.

Small-scale liquefaction plants

Small-scale liquefaction plants are suitable for peakshaving on natural gas pipelines, transportation fuel, or for deliveries of natural gas to remote areas not connected to pipelines.[75] They typically have a compact size, are fed from a natural gas pipeline, and are located close to the location where the LNG will be used. This proximity decreases transportation and LNG product costs for consumers.[76][77] It also avoids the additional greenhouse gas emissions generated during long transportation.

The small-scale LNG plant also allows localized peakshaving to occur—balancing the availability of natural gas during high and low periods of demand. It also makes it possible for communities without access to natural gas pipelines to install local distribution systems and have them supplied with stored LNG.[78]

LNG pricing

There are three major pricing systems in the current LNG contracts:
  • Oil-indexed contracts, used primarily in Japan, Korea, Taiwan and China;
  • Contracts indexed to oil, oil products and other energy carriers, used primarily in continental Europe;[79] and
  • Market-indexed contracts, used in the US and the UK.
The formula for an indexed price is as follows:

CP = BP + β X
  • BP: constant part or base price
  • β: gradient
  • X: indexation
The formula has been widely used in Asian LNG SPAs, where the base price represents various non-oil factors but is usually a constant determined by negotiation, set at a level that prevents LNG prices from falling below a certain floor. It thus does not move with oil price fluctuations.
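
As a numeric illustration of the formula, the sketch below prices a contract under hypothetical values of BP, β and the index X; none of these numbers come from an actual SPA:

```python
# Oil-indexed contract price CP = BP + beta * X, with hypothetical inputs.
def contract_price(bp, beta, x):
    """bp: base price ($/MMBtu), beta: gradient, x: crude index ($/bbl)."""
    return bp + beta * x

# Hypothetical example: $0.80 base, 0.1485 slope, crude index at $60/bbl.
print(f"CP = ${contract_price(0.80, 0.1485, 60.0):.2f}/MMBtu")  # ~$9.71
```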

Henry Hub Plus

Some LNG buyers have already signed contracts for future US-based cargos at Henry Hub-linked prices.[80] Cheniere Energy’s LNG export contract pricing consists of a fixed fee (liquefaction tolling fee) plus 115% of Henry Hub per MMBtu of LNG.[81] Tolling fees in the Cheniere contracts vary: $2.25/MMBtu with BG Group signed in 2011; $2.49/MMBtu with Spain's GNF signed in 2012; and $3.00/MMBtu with South Korea's Kogas and Centrica signed in 2013.[82]
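
Putting the Cheniere structure into numbers, delivered price = tolling fee + 115% of Henry Hub. In the sketch below the Henry Hub price is a hypothetical input, while the tolling fees are the contract figures quoted above:

```python
# Henry Hub-linked pricing: fixed tolling fee + 115% of Henry Hub.
def hh_plus(henry_hub, tolling_fee):
    return tolling_fee + 1.15 * henry_hub  # $/MMBtu

hh = 3.00  # hypothetical Henry Hub price, $/MMBtu
for buyer, fee in [("BG Group", 2.25), ("GNF", 2.49), ("Kogas/Centrica", 3.00)]:
    print(f"{buyer}: ${hh_plus(hh, fee):.2f}/MMBtu")
# BG Group: $5.70, GNF: $5.94, Kogas/Centrica: $6.45 at HH = $3.00
```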

Oil parity

Oil parity is the LNG price that would be equal to that of crude oil on a barrel-of-oil-equivalent (BOE) basis. If the LNG price exceeds the price of crude oil in BOE terms, the situation is called broken oil parity. A coefficient of 0.1724 results in full oil parity. In most cases the price of LNG is less than the price of crude oil in BOE terms. In 2009, several spot cargo deals, especially in East Asia, approached or even exceeded full oil parity.[83] In January 2016, the spot LNG price (US$5.461/mmbtu) broke oil parity as the Brent crude price (≤US$32/bbl) fell steeply.[84] By the end of June 2016, the LNG price had fallen to nearly 50% below its oil parity price, making it more economical than more-polluting diesel/gas oil in the transport sector.[85]
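
The 0.1724 coefficient is simply the reciprocal of roughly 5.8 MMBtu per barrel of oil equivalent (an assumption about the conversion factor in use). The sketch below checks the January 2016 figures quoted above:

```python
# Oil parity: LNG price ($/MMBtu) equal to crude on a BOE basis.
# 0.1724 = 1 / 5.8 (approx. MMBtu per barrel of oil equivalent).
PARITY_COEFF = 0.1724

brent = 32.0      # $/bbl, approximate Brent price in January 2016
parity = PARITY_COEFF * brent
spot_lng = 5.461  # $/MMBtu, spot LNG price quoted above

print(f"Oil parity at Brent ${brent}/bbl: ${parity:.2f}/MMBtu")  # ~$5.52
print(f"Spot LNG vs parity: {spot_lng / parity:.2f}")  # ~0.99, near parity
```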

S-curve

Many formulae include an S-curve, where the price formula is different above and below a certain oil price, to dampen the impact of high oil prices on the buyer and of low oil prices on the seller. Most LNG trade is governed by long-term contracts. When spot LNG prices are cheaper than long-term oil-indexed contract prices, the most profitable end use of LNG is to power mobile engines, replacing costly gasoline and diesel consumption.
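
A minimal sketch of an S-curve formula: the slope is reduced outside a band of oil prices, softening the pass-through to the buyer at high oil prices and to the seller at low ones. All breakpoints and slopes here are hypothetical:

```python
# Hypothetical S-curve: shallower slopes below $40/bbl and above $80/bbl
# soften the impact of extreme oil prices on buyer and seller.
def s_curve_price(oil, bp=0.8, mid_slope=0.1485, wing_slope=0.05,
                  lower=40.0, upper=80.0):
    if oil < lower:
        return bp + mid_slope * lower - wing_slope * (lower - oil)
    if oil > upper:
        return bp + mid_slope * upper + wing_slope * (oil - upper)
    return bp + mid_slope * oil

for oil in (20, 60, 100):
    print(f"oil ${oil}/bbl -> LNG ${s_curve_price(oil):.2f}/MMBtu")
# $20 -> $5.74, $60 -> $9.71, $100 -> $13.68: flatter in the wings
```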

JCC and ICP

In most East Asian LNG contracts, the price formula is indexed to a basket of crudes imported to Japan, called the Japan Crude Cocktail (JCC). In Indonesian LNG contracts, the price formula is linked to the Indonesian Crude Price (ICP).

Brent and other energy carriers

In continental Europe, the price formula indexation does not follow the same format, and it varies from contract to contract. Brent crude price (B), heavy fuel oil price (HFO), light fuel oil price (LFO), gas oil price (GO), coal price, electricity price and in some cases, consumer and producer price indexes are the indexation elements of price formulas.

Price review

There is usually a clause in LNG SPAs allowing the parties to trigger a price revision or price reopening. In some contracts there are two options for triggering a price revision: regular and special. Regular ones are dates agreed and defined in the LNG SPAs for the purpose of price review.

Quality of LNG

LNG quality is one of the most important issues in the LNG business. Any gas which does not conform to the agreed specifications in the sale and purchase agreement is regarded as “off-specification” (off-spec) or “off-quality” gas or LNG. Quality regulations serve three purposes:[86]
1 – to ensure that the gas distributed is non-corrosive and non-toxic, below the upper limits for H2S, total sulphur, CO2 and Hg content;
2 – to guard against the formation of liquids or hydrates in the networks, through maximum water and hydrocarbon dewpoints;
3 – to allow interchangeability of the gases distributed, via limits on the variation range for parameters affecting combustion: content of inert gases, calorific value, Wobbe index, Soot Index, Incomplete Combustion Factor, Yellow Tip Index, etc.
In the case of off-spec gas or LNG the buyer can refuse to accept the gas or LNG and the seller has to pay liquidated damages for the respective off-spec gas volumes.

The quality of gas or LNG is measured at delivery point by using an instrument such as a gas chromatograph.

The most important gas quality concerns involve the sulphur and mercury content and the calorific value. Because liquefaction facilities are sensitive to sulfur and mercury, the gas being sent to the liquefaction process must be carefully purified and tested to ensure the minimum possible concentration of these two elements before it enters the liquefaction plant; as a result, there is little concern about them downstream.

However, the main concern is the heating value of the gas. Natural gas markets can usually be divided into three markets in terms of heating value:[86]
  • Asia (Japan, Korea, Taiwan) where gas distributed is rich, with a gross calorific value (GCV) higher than 43 MJ/m3(n), i.e. 1,090 Btu/scf,
  • the UK and the US, where distributed gas is lean, with a GCV usually lower than 42 MJ/m3(n), i.e. 1,065 Btu/scf,
  • Continental Europe, where the acceptable GCV range is quite wide: approx. 39 to 46 MJ/m3(n), i.e. 990 to 1,160 Btu/scf.
There are several methods of adjusting the heating value of produced LNG to the desired level. To increase the heating value, injecting propane and butane is a solution. To decrease it, injecting nitrogen and extracting butane and propane are proven solutions. Blending with other gas or LNG is also possible; however, all of these solutions, while theoretically viable, can be costly and logistically difficult to manage at large scale. Lean LNG is priced lower per mmbtu than rich LNG.[87]
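
As an illustration of heating-value adjustment, the sketch below computes the gross calorific value of a nitrogen-diluted blend as a volume-weighted average; the component GCVs and dilution fractions are illustrative assumptions:

```python
# Volume-weighted GCV of a blend: diluting rich gas with inert nitrogen
# lowers the heating value. Component values are illustrative assumptions.
def blend_gcv(fractions_and_gcvs):
    """fractions_and_gcvs: list of (volume fraction, GCV in MJ/m3(n))."""
    return sum(frac * gcv for frac, gcv in fractions_and_gcvs)

rich_gcv = 44.0  # MJ/m3(n), hypothetical rich LNG send-out
n2_gcv = 0.0     # nitrogen contributes no heating value

for n2_frac in (0.0, 0.03, 0.06):
    gcv = blend_gcv([(1 - n2_frac, rich_gcv), (n2_frac, n2_gcv)])
    print(f"{n2_frac:.0%} N2 -> GCV {gcv:.1f} MJ/m3(n)")
# 0% -> 44.0; 3% -> 42.7; 6% -> 41.4: enough to move rich gas toward lean
```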

Liquefaction technology

There are several liquefaction processes available for large, baseload LNG plants (in order of prevalence):[88]
  1. AP-C3MR™ – designed by Air Products & Chemicals, Inc. (APCI)
  2. Cascade – designed by ConocoPhillips
  3. AP-X® – designed by Air Products & Chemicals, Inc. (APCI)
  4. AP-SMR™ (Single Mixed Refrigerant) – designed by Air Products & Chemicals, Inc. (APCI)
  5. MFC® (mixed fluid cascade) – designed by Linde
  6. PRICO® (SMR) – designed by Black & Veatch
  7. DMR (Dual Mixed Refrigerant)
  8. Liquefin – designed by Air Liquide
As of January 2016, global nominal LNG liquefaction capacity was 301.5 MTPA (million tonnes per annum), and liquefaction capacity under construction was 142 MTPA.[89]

The majority of these trains use either APCI AP-C3MR or Cascade technology for the liquefaction process. The other processes, used in a small minority of liquefaction plants, include Shell's DMR (dual-mixed refrigerant) technology and the Linde technology.

APCI technology is the most-used liquefaction process in LNG plants: of 100 liquefaction trains onstream or under construction, 86 trains with a total capacity of 243 MMTPA have been designed based on the APCI process. The Phillips Cascade process is the second most used, in 10 trains with a total capacity of 36.16 MMTPA. The Shell DMR process has been used in three trains with a total capacity of 13.9 MMTPA; and, finally, the Linde/Statoil process is used in the single 4.2 MMTPA train at Snøhvit.

Floating liquefied natural gas (FLNG) facilities float above an offshore gas field, and produce, liquefy, store and transfer LNG (and potentially LPG and condensate) at sea before carriers ship it directly to markets. The first FLNG facility is now in development by Shell,[90] due for completion in around 2017.[91]

Storage

LNG storage tank at EG LNG

Modern LNG storage tanks are typically of the full-containment type, with a prestressed concrete outer wall and a high-nickel steel inner tank, and extremely efficient insulation between the walls. Large tanks are of low aspect ratio (height to width) and cylindrical in design, with a domed steel or concrete roof. The storage pressure in these tanks is very low, less than 10 kPa (1.45 psig). Sometimes more expensive underground tanks are used for storage. Smaller quantities (say, 700 m3 (190,000 US gallons) and less) may be stored in horizontal or vertical, vacuum-jacketed pressure vessels. These tanks may be at pressures anywhere from less than 50 kPa to over 1,700 kPa (7 psig to 250 psig).

LNG must be kept cold to remain a liquid, independent of pressure. Despite efficient insulation, there will inevitably be some heat leakage into the LNG, resulting in vaporisation of the LNG. This boil-off gas acts to keep the LNG cold. The boil-off gas is typically compressed and exported as natural gas, or it is reliquefied and returned to storage.

Transportation

Tanker LNG Rivers, LNG capacity of 135,000 cubic metres

Interior of an LNG cargo tank

LNG is transported in specially designed ships with double hulls protecting the cargo systems from damage or leaks. There are several special leak test methods available to test the integrity of an LNG vessel's membrane cargo tanks.[92]

The tankers cost around US$200 million each.[63]

Transportation and supply is an important aspect of the gas business, since natural gas reserves are normally quite distant from consumer markets. Natural gas has far more volume than oil to transport, and most gas is transported by pipelines. There are natural gas pipeline networks in the former Soviet Union, Europe and North America. Natural gas is less dense than oil, even at high pressures. Natural gas will travel much faster than oil through a high-pressure pipeline, but can transmit only about a fifth of the amount of energy per day due to its lower density. Natural gas is usually liquefied to LNG at the end of the pipeline, prior to shipping.

Short LNG pipelines for use in moving product from LNG vessels to onshore storage are available. Longer pipelines, which allow vessels to offload LNG at a greater distance from port facilities are under development. This requires pipe in pipe technology due to requirements for keeping the LNG cold.[93]

LNG is transported by tanker truck,[94] railway tanker, and purpose-built ships known as LNG carriers. LNG is sometimes taken to lower cryogenic temperatures to increase tanker capacity. The first commercial ship-to-ship (STS) transfers were undertaken in February 2007 at the Flotta facility in Scapa Flow,[95] with 132,000 m3 of LNG being passed between the vessels Excalibur and Excelsior. Transfers have also been carried out by Exmar Shipmanagement, the Belgian gas tanker owner, in the Gulf of Mexico, involving the transfer of LNG from a conventional LNG carrier to an LNG regasification vessel (LNGRV). Prior to this commercial exercise, LNG had only ever been transferred between ships on a handful of occasions, as a necessity following an incident.[citation needed] SIGTTO, the Society of International Gas Tanker and Terminal Operators, is the responsible body for LNG operators around the world and seeks to disseminate knowledge regarding the safe transport of LNG at sea.[96]

Besides LNG vessels, LNG is also used in some aircraft.

Terminals

Liquefied natural gas is used to transport natural gas over long distances, often by sea. In most cases, LNG terminals are purpose-built ports used exclusively to export or import LNG.

Refrigeration

The insulation, as efficient as it is, will not keep LNG cold enough by itself. Inevitably, heat leakage will warm and vaporise the LNG. Industry practice is to store LNG as a boiling cryogen; that is, the liquid is stored at its boiling point for the pressure at which it is stored (atmospheric pressure). As the vapour boils off, the heat of vaporization is drawn from the remaining liquid, cooling it. Because the insulation is very efficient, only a relatively small amount of boil-off is necessary to maintain temperature. This phenomenon is also called auto-refrigeration.
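
To put auto-refrigeration in numbers: assuming a typical boil-off rate of about 0.1% of cargo per day (an industry rule of thumb, not a figure from this article), a 135,000 m3 carrier such as the LNG Rivers pictured above loses a calculable tonnage of LNG each day, which is exactly the gas available for compression into the pipeline network or for fuel:

```python
# Daily boil-off for a large LNG carrier, assuming a typical boil-off
# rate of ~0.1% of cargo volume per day (industry rule of thumb).
cargo_m3 = 135_000     # e.g. the LNG Rivers mentioned above
rho_lng = 450.0        # kg/m3, representative LNG density
boil_off_rate = 0.001  # fraction of cargo per day (assumed)

boil_off_kg_per_day = cargo_m3 * rho_lng * boil_off_rate
print(f"Boil-off: ~{boil_off_kg_per_day / 1000:.0f} tonnes/day")  # ~61 t/day
```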

Boil off gas from land based LNG storage tanks is usually compressed and fed to natural gas pipeline networks. Some LNG carriers use boil off gas for fuel.

Environmental concerns

Natural gas could be considered the most environmentally friendly fossil fuel, because it has the lowest CO2 emissions per unit of energy and because it is suitable for use in high-efficiency combined-cycle power stations. For an equivalent amount of heat, burning natural gas produces about 30 percent less carbon dioxide than burning petroleum and about 45 percent less than burning coal.[97] On a per-kilometre-transported basis, emissions from LNG are lower than from piped natural gas, which is a particular issue in Europe, where significant amounts of gas are piped several thousand kilometres from Russia. However, emissions from natural gas transported as LNG are higher than for natural gas produced locally to the point of combustion, as emissions associated with transport are lower for the latter.[citation needed]

However, on the West Coast of the United States, where up to three new LNG importation terminals were proposed prior to the U.S. fracking boom, environmental groups, such as Pacific Environment, Ratepayers for Affordable Clean Energy (RACE), and Rising Tide had moved to oppose them.[98] They claimed that, while natural gas power plants emit approximately half the carbon dioxide of an equivalent coal power plant, the natural gas combustion required to produce and transport LNG to the plants adds 20 to 40 percent more carbon dioxide than burning natural gas alone.[99] A 2015 peer reviewed study evaluated the full end to end life cycle of LNG produced in the U.S. and consumed in Europe or Asia.[100] It concluded that global CO2 production would be reduced due to the resulting reduction in other fossil fuels burned.

Green bordered white diamond symbol used on LNG-powered vehicles in China

Safety and accidents

Natural gas is a fuel and a combustible substance. To ensure safe and reliable operation, particular measures are taken in the design, construction and operation of LNG facilities.

In its liquid state, LNG is not explosive and cannot burn. For LNG to burn, it must first vaporize, then mix with air in the proper proportions (the flammable range is 5 percent to 15 percent), and then be ignited. In the case of a leak, LNG vaporizes rapidly, turning into a gas (methane plus trace gases) and mixing with air. If this mixture is within the flammable range, there is risk of ignition, which would create fire and thermal radiation hazards.

Gas venting from vehicles powered by LNG may create a flammability hazard if parked indoors for longer than a week. Additionally, due to its low temperature, refueling a LNG-powered vehicle requires training to avoid the risk of frostbite.[101][102]

LNG tankers have sailed over 100 million miles without a shipboard death or even a major accident.[103]

Several on-site accidents involving or related to LNG are listed below:
  • 1944, Oct. 20. The East Ohio Natural Gas Co. experienced a failure of an LNG tank in Cleveland, Ohio, US.[104] 128 people perished in the explosion and fire. The tank did not have a dike retaining wall, and it was made during World War II, when metal rationing was very strict. The steel of the tank was made with an extremely low amount of nickel, which meant the tank was brittle when exposed to the cryogenic nature of LNG. The tank ruptured, spilling LNG into the city sewer system. The LNG vaporized and turned into gas, which exploded and burned.
  • 1979, Oct. 6, Lusby, Maryland, US. A pump seal failed at the Cove Point LNG import facility, releasing natural gas vapors (not LNG), which entered an electrical conduit.[104] A worker switched off a circuit breaker, which ignited the gas vapors. The resulting explosion killed a worker, severely injured another and caused heavy damage to the building. A safety analysis was not required at the time, and none was performed during the planning, design or construction of the facility.[105] National fire codes were changed as a result of the accident.
  • 2004, Jan. 19, Skikda, Algeria. Explosion at Sonatrach LNG liquefaction facility.[104] 27 killed, 56 injured, three LNG trains destroyed, a marine berth damaged; 2004 production was down 76 percent for the year. Total loss was US$900 million. A steam boiler that was part of an LNG liquefaction train exploded, triggering a massive hydrocarbon gas explosion. The explosion occurred where propane and ethane refrigeration storage were located. Site distribution of the units caused a domino effect of explosions.[106][107] It remains unclear whether LNG, LNG vapour, or other hydrocarbon gases forming part of the liquefaction process initiated the explosions. One report, of the US Government Team Site Inspection of the Sonatrach Skikda LNG Plant in Skikda, Algeria, March 12–16, 2004, cited a leak of hydrocarbons from the refrigerant (liquefaction) process system.

Maxwell's demon

Schematic figure of Maxwell's demon

In the philosophy of thermal and statistical physics, Maxwell's demon is a thought experiment created by the physicist James Clerk Maxwell in which he suggested how the second law of thermodynamics might hypothetically be violated.[1] In the thought experiment, a demon controls a small door between two chambers of gas. As individual gas molecules approach the door, the demon quickly opens and shuts the door so that fast molecules pass into the other chamber, while slow molecules remain in the first chamber. Because faster molecules are hotter, the demon's behavior causes one chamber to warm up as the other cools, thus decreasing entropy and violating the second law of thermodynamics.

Origin and history of the idea

The thought experiment first appeared in a letter Maxwell wrote to Peter Guthrie Tait on 11 December 1867. It appeared again in a letter to John William Strutt in 1871, before it was presented to the public in Maxwell's 1872 book on thermodynamics titled Theory of Heat.[2]
In his letters and books, Maxwell described the agent opening the door between the chambers as a "finite being." William Thomson (Lord Kelvin) was the first to use the word "demon" for Maxwell's concept, in the journal Nature in 1874, and implied that he intended the mediating, rather than malevolent, connotation of the word.[3][4][5]

Original thought experiment

The second law of thermodynamics ensures (through statistical probability) that two bodies of different temperature, when brought into contact with each other and isolated from the rest of the Universe, will evolve to a thermodynamic equilibrium in which both bodies have approximately the same temperature.[6] The second law is also expressed as the assertion that in an isolated system, entropy never decreases.[6]

Maxwell conceived a thought experiment as a way of furthering the understanding of the second law. His description of the experiment is as follows:[6][7]
... if we conceive of a being whose faculties are so sharpened that he can follow every molecule in its course, such a being, whose attributes are as essentially finite as our own, would be able to do what is impossible to us. For we have seen that molecules in a vessel full of air at uniform temperature are moving with velocities by no means uniform, though the mean velocity of any great number of them, arbitrarily selected, is almost exactly uniform. Now let us suppose that such a vessel is divided into two portions, A and B, by a division in which there is a small hole, and that a being, who can see the individual molecules, opens and closes this hole, so as to allow only the swifter molecules to pass from A to B, and only the slower molecules to pass from B to A. He will thus, without expenditure of work, raise the temperature of B and lower that of A, in contradiction to the second law of thermodynamics.
In other words, Maxwell imagines one container divided into two parts, A and B.[6][8] Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could extract useful work from this temperature difference.

The demon must allow molecules to pass in both directions in order to produce only a temperature difference; one-way passage only of faster-than-average molecules from A to B will cause higher temperature and pressure to develop on the B side.
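
The sorting process, including the two-way passage described above, can be illustrated with a toy simulation (a hedged sketch; the thermal sampling, the speed threshold, and the counts are modelling assumptions, not part of Maxwell's description):

  import random

  # Toy Maxwell's demon: molecules are represented by speeds drawn from a
  # rough thermal distribution; the demon passes fast molecules A -> B and
  # slow molecules B -> A, in both directions as described above.
  random.seed(0)
  A = [abs(random.gauss(0.0, 1.0)) for _ in range(5000)]
  B = [abs(random.gauss(0.0, 1.0)) for _ in range(5000)]

  def temperature(v):
      # mean squared speed, proportional to temperature in this toy model
      return sum(x * x for x in v) / len(v)

  threshold = 1.0  # demon's assumed cutoff between "fast" and "slow"
  for _ in range(20000):
      if random.random() < 0.5 and A:       # a molecule approaches from A
          i = random.randrange(len(A))
          if A[i] > threshold:              # fast: demon opens the door
              B.append(A.pop(i))
      elif B:                               # a molecule approaches from B
          i = random.randrange(len(B))
          if B[i] < threshold:              # slow: demon opens the door
              A.append(B.pop(i))

  print("T_A ~", temperature(A), "T_B ~", temperature(B))  # B ends up hotter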

Criticism and development

Several physicists have presented calculations that show that the second law of thermodynamics will not actually be violated, if a more complete analysis is made of the whole system including the demon.[6][8][9] The essence of the physical argument is to show, by calculation, that any demon must "generate" more entropy segregating the molecules than it could ever eliminate by the method described. That is, it would take more thermodynamic work to gauge the speed of the molecules and selectively allow them to pass through the opening between A and B than the amount of energy gained by the difference of temperature caused by this.

One of the most famous responses to this question was suggested in 1929 by Leó Szilárd,[10] and later by Léon Brillouin.[6][8] Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Since the demon and the gas are interacting, we must consider the total entropy of the gas and the demon combined. The expenditure of energy by the demon will cause an increase in the entropy of the demon, which will be larger than the lowering of the entropy of the gas.

In 1960, Rolf Landauer raised an exception to this argument.[6][8][11] He realized that some measuring processes need not increase thermodynamic entropy as long as they were thermodynamically reversible. He suggested these "reversible" measurements could be used to sort the molecules, violating the Second Law. However, due to the connection between thermodynamic entropy and information entropy, this also meant that the recorded measurement must not be erased. In other words, to determine whether to let a molecule through, the demon must acquire information about the state of the molecule and either discard it or store it. Discarding it leads to an immediate increase in entropy, but the demon cannot store it indefinitely: in 1982, Charles Bennett showed that, however well prepared, eventually the demon will run out of information storage space and must begin to erase the information it has previously gathered.[8][12] Erasing information is a thermodynamically irreversible process that increases the entropy of a system. Although Bennett reached the same conclusion as Szilárd's 1929 paper, that a Maxwellian demon could not violate the second law because entropy would be created, he reached it for different reasons. Regarding Landauer's principle, the minimum energy dissipated by deleting information was experimentally measured by Eric Lutz et al. in 2012. Furthermore, Lutz et al. confirmed that in order to approach Landauer's limit, the system must asymptotically approach zero processing speed.[13]
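
Landauer's bound itself is a one-line calculation: erasing one bit at temperature T dissipates at least

  E_min = k_B T ln 2 ≈ (1.38 × 10^−23 J/K)(300 K)(0.693) ≈ 2.9 × 10^−21 J

at room temperature, which is the energy scale probed in the 2012 measurement mentioned above.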

John Earman and John D. Norton have argued that Szilárd and Landauer's explanations of Maxwell's demon begin by assuming that the second law of thermodynamics cannot be violated by the demon, and derive further properties of the demon from this assumption, including the necessity of consuming energy when erasing information, etc.[14][15] It would therefore be circular to invoke these derived properties to defend the second law from the demonic argument. Bennett later acknowledged the validity of Earman and Norton's argument, while maintaining that Landauer's principle explains the mechanism by which real systems do not violate the second law of thermodynamics.[16]

Recent progress

The argument by Landauer and Bennett addresses only the consistency between the second law of thermodynamics and the whole cyclic process of the entire system of a Szilard engine (a composite system of the engine and the demon); a more recent approach, based on the non-equilibrium thermodynamics of small fluctuating systems, has provided deeper insight into each information process within each subsystem. From this viewpoint, the measurement process is regarded as a process where the correlation (mutual information) between the engine and the demon increases, and the feedback process is regarded as a process where the correlation decreases. If the correlation changes, thermodynamic relations such as the second law of thermodynamics and the fluctuation theorem for each subsystem must be modified; for the case of external control, a second-law-like inequality[17] and a generalized fluctuation theorem[18] with mutual information are satisfied. These relations suggest that extra thermodynamic cost is needed to increase correlation (the measurement case) and, in contrast, that the second law can apparently be violated up to the consumption of correlation (the feedback case). For more general information processes, including biological information processing, both an inequality[19] and an equality[20] with mutual information hold.
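
In one common form (the Sagawa–Ueda bound; the notation here is an assumption for illustration, not quoted from the cited papers), the work extractable by feedback is bounded as

  W_ext ≤ −ΔF + k_B T · I

where ΔF is the free-energy change of the engine and I is the mutual information gained by the measurement. For a one-bit Szilard measurement, I = ln 2, recovering the familiar maximum of k_B T ln 2 of extracted work per cycle.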

Applications

Real-life versions of Maxwellian demons occur, but all such "real demons" have their entropy-lowering effects duly balanced by increase of entropy elsewhere. Molecular-sized mechanisms are no longer found only in biology; they are also the subject of the emerging field of nanotechnology. Single-atom traps used by particle physicists allow an experimenter to control the state of individual quanta in a way similar to Maxwell's demon.

If hypothetical mirror matter exists, Zurab Silagadze proposes that demons can be envisaged, "which can act like perpetuum mobiles of the second kind: extract heat energy from only one reservoir, use it to do work and be isolated from the rest of ordinary world. Yet the Second Law is not violated because the demons pay their entropy cost in the hidden (mirror) sector of the world by emitting mirror photons."[21]

Experimental work

In the February 2007 issue of Nature, David Leigh, a professor at the University of Edinburgh, announced the creation of a nano-device based on the Brownian ratchet popularized by Richard Feynman. Leigh's device is able to drive a chemical system out of equilibrium, but it must be powered by an external source (light in this case) and therefore does not violate thermodynamics.[22]

Previously, other researchers created a ring-shaped molecule which could be placed on an axle connecting two sites, A and B. Particles from either site would bump into the ring and move it from end to end. If a large collection of these devices were placed in a system, half of the devices had the ring at site A and half at B, at any given moment in time.

Leigh made a minor change to the axle so that if a light is shone on the device, the center of the axle will thicken, restricting the motion of the ring. It only keeps the ring from moving, however, if it is at A. Over time, therefore, the rings will be bumped from B to A and get stuck there, creating an imbalance in the system. In his experiments, Leigh was able to take a pot of "billions of these devices" from 50:50 equilibrium to a 70:30 imbalance within a few minutes.[23]

In 2009 Mark G. Raizen developed a laser atomic cooling technique which realizes the process Maxwell envisioned of sorting individual atoms in a gas into different containers based on their energy.[6][24][25] The new concept is a one-way wall for atoms or molecules that allows them to move in one direction, but not go back. The operation of the one-way wall relies on an irreversible atomic and molecular process of absorption of a photon at a specific wavelength, followed by spontaneous emission to a different internal state. The irreversible process is coupled to a conservative force created by magnetic fields and/or light. Raizen and collaborators proposed to use the one-way wall in order to reduce the entropy of an ensemble of atoms. In parallel, Gonzalo Muga and Andreas Ruschhaupt independently developed a similar concept. Their "atom diode" was not proposed for cooling, but rather to regulate flow of atoms. The Raizen Group demonstrated significant cooling of atoms with the one-way wall in a series of experiments in 2008. Subsequently, the operation of a one-way wall for atoms was demonstrated by Daniel Steck and collaborators later in 2008. Their experiment was based on the 2005 scheme for the one-way wall, and was not used for cooling. The cooling method realized by the Raizen Group was called "Single-Photon Cooling," because only one photon on average is required in order to bring an atom to near-rest. This is in contrast to other laser cooling techniques, which use the momentum of the photon and require a two-level cycling transition.

In 2006 Raizen, Muga, and Ruschhaupt showed in a theoretical paper that as each atom crosses the one-way wall, it scatters one photon, and information is provided about the turning point and hence the energy of that particle. The entropy increase of the radiation field scattered from a directional laser into a random direction is exactly balanced by the entropy reduction of the atoms as they are trapped with the one-way wall.

This technique is widely described as a "Maxwell's demon" because it realizes Maxwell's process of creating a temperature difference by sorting high and low energy atoms into different containers. However scientists have pointed out that it is not a true Maxwell's demon in the sense that it does not violate the second law of thermodynamics;[6][26] it does not result in a net decrease in entropy[6][26] and cannot be used to produce useful energy. This is because the process requires more energy from the laser beams than could be produced by the temperature difference generated. The atoms absorb low entropy photons from the laser beam and emit them in a random direction, thus increasing the entropy of the environment.[6][26]

Over the last 20 years macroscopic alternatives to Maxwell's Demon, known as Maxwell's Zombies, have been explored by a number of experiments worldwide.[27][28][29][30]

As metaphor

Daemons in computing, generally processes that run on servers to respond to users, are named for Maxwell's demon.[31] A machine powered by Maxwell's demon plays a role in Thomas Pynchon's novel The Crying of Lot 49.

Historian Henry Brooks Adams in his manuscript The Rule of Phase Applied to History attempted to use Maxwell's demon as a historical metaphor, though he misunderstood and misapplied the original principle.[32] Adams interpreted history as a process moving towards "equilibrium", but he saw militaristic nations (he felt Germany pre-eminent in this class) as tending to reverse this process, a Maxwell's demon of history. Adams made many attempts to respond to the criticism of his formulation from his scientific colleagues, but the work remained incomplete at Adams' death in 1918. It was only published posthumously.[33]

Sociologist Pierre Bourdieu incorporated Maxwell's demon into his work Raisons Pratiques as a metaphor for the socioeconomic inequality among students, as maintained by the school system, the economy, and families.

The demon is mentioned several times in The Cyberiad, a series of short stories by the noted science fiction writer Stanisław Lem. In the book the demon appears both in its original form and in a modified form where it uses its knowledge of all particles in the box in order to surmise general (but unfocused and random) facts about the rest of the universe.

The demon is implied in Larry Niven's short story "Unfinished Story Nr 2", within the context of a world of magic that depends on local concentrations of 'manna', a prerequisite for magic, such that magic is no longer possible after the manna in an area has been depleted.

The titular character in the webcomic Alice Grove (by Jeph Jacques) refers to herself and other characters with identical abilities as Maxwell's Demons. Through unexplained means, their bodies have been quantum-entangled with black holes, allowing them to shunt their personal entropy into those singularities. As such, they are capable of things considered impossible, including moving faster than humanly possible, surviving in total vacuum, and being impervious to most damage except when dealt by others of their type.

Monday, February 26, 2018

Copenhagen interpretation

From Wikipedia, the free encyclopedia

The Copenhagen interpretation is an expression of the meaning of quantum mechanics that was largely devised in the years 1925 to 1927 by Niels Bohr and Werner Heisenberg. It remains one of the most commonly taught interpretations of quantum mechanics.[1]

According to the Copenhagen interpretation, physical systems generally do not have definite properties prior to being measured, and quantum mechanics can only predict the probabilities that measurements will produce certain results. The act of measurement affects the system, causing the set of probabilities to reduce to only one of the possible values immediately after the measurement. This feature is known as wave function collapse.

There have been many objections to the Copenhagen interpretation over the years. These include: discontinuous jumps when there is an observation, the probabilistic element introduced upon observation, the subjectiveness of requiring an observer, the difficulty of defining a measuring device, and the necessity of invoking classical physics to describe the "laboratory" in which the results are measured.

Alternatives to the Copenhagen interpretation include the many-worlds interpretation, the De Broglie–Bohm (pilot-wave) interpretation, and quantum decoherence theories.

Background

Max Planck, Albert Einstein, and Niels Bohr postulated the occurrence of energy in discrete quantities (quanta) in order to explain phenomena such as the spectrum of black-body radiation, the photoelectric effect, and the stability and spectrum of atoms. These phenomena had eluded explanation by classical physics and even appeared to be in contradiction with it. While elementary particles show predictable properties in many experiments, they become thoroughly unpredictable in others, such as attempts to identify individual particle trajectories through a simple physical apparatus.

Classical physics draws a distinction between particles and waves. It also relies on continuity and determinism in natural phenomena. In the early twentieth century, newly discovered atomic and subatomic phenomena seemed to defy those conceptions. In 1925–1926, quantum mechanics was invented as a mathematical formalism that accurately describes the experiments, yet appears to reject those classical conceptions. Instead, it posits that probability and discontinuity are fundamental in the physical world. Classical physics also relies on causality. The standing of causality for quantum mechanics is disputed.

Quantum mechanics cannot easily be reconciled with everyday language and observation. Its interpretation has often seemed counter-intuitive to physicists, including its inventors.

The Copenhagen interpretation intends to indicate the proper ways of thinking and speaking about the physical meaning of the mathematical formulations of quantum mechanics and the corresponding experimental results. It offers due respect to discontinuity, probability, and a conception of wave–particle dualism. In some respects, it denies standing to causality.

Origin of the term

Werner Heisenberg had been an assistant to Niels Bohr at his institute in Copenhagen during part of the 1920s, when they helped originate quantum mechanical theory. In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930.[2] In the book's preface, Heisenberg wrote:
On the whole the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [i.e., Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics.
The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, apart from some informal popular lectures by Bohr and Heisenberg, which contradict each other on several important issues. It appears that the particular term, with its more definite sense, was coined by Heisenberg in the 1950s,[3] while criticizing alternate "interpretations" (e.g., David Bohm's[4]) that had been developed.[5] Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy.[6] Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense".[7]

Current status of the term

According to an opponent of the Copenhagen interpretation, John G. Cramer, "Despite an extensive literature which refers to, discusses, and criticizes the Copenhagen interpretation of quantum mechanics, nowhere does there seem to be any concise statement which defines the full Copenhagen interpretation."[8]

Principles

There is no uniquely definitive statement of the Copenhagen interpretation. It consists of the views developed by a number of scientists and philosophers during the second quarter of the 20th century. Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics. Bohr once distanced himself from what he considered to be Heisenberg's more subjective interpretation.[9]

Different commentators and researchers have associated various ideas with it. Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors.[10]

Some basic principles generally accepted as part of the interpretation include:
  1. A wave function Ψ represents the state of the system. It encapsulates everything that can be known about that system before an observation; there are no additional "hidden parameters".[11] The wavefunction evolves smoothly in time while isolated from other systems.
  2. The properties of the system are subject to a principle of incompatibility. Certain properties cannot be jointly defined for the same system at the same time. The incompatibility is expressed quantitatively by Heisenberg's uncertainty principle. For example, if a particle at a particular instant has a definite location, it is meaningless to speak of its momentum at that instant.
  3. During an observation, the system must interact with a laboratory device. When that device makes a measurement, the wave function of the system is said to collapse, or irreversibly reduce to an eigenstate of the observable that is registered.[12]
  4. The results provided by measuring devices are essentially classical, and should be described in ordinary language. This was particularly emphasized by Bohr, and was accepted by Heisenberg.[13]
  5. The description given by the wave function is probabilistic. This principle is called the Born rule, after Max Born.
  6. The wave function expresses a necessary and fundamental wave–particle duality. This should be reflected in ordinary language accounts of experiments. An experiment can show particle-like properties, or wave-like properties, according to the complementarity principle of Niels Bohr.[14]
  7. The inner workings of atomic and subatomic processes are necessarily and essentially inaccessible to direct observation, because the act of observing them would greatly affect them.
  8. When quantum numbers are large, they refer to properties which closely match those of the classical description. This is the correspondence principle of Bohr and Heisenberg.
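
As a concrete numerical illustration of principles 3 and 5 (a minimal sketch; the two-level state and its amplitudes are invented for the example), the Born rule assigns each outcome the squared magnitude of its amplitude, and a measurement selects one outcome at random accordingly:

  import random

  # Minimal sketch of the Born rule (principle 5) for a two-level system.
  # Invented example state: psi = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
  a, b = 3 / 5, 4 / 5                     # P(0) = 0.36, P(1) = 0.64
  probs = [abs(a) ** 2, abs(b) ** 2]

  def measure():
      # "Collapse" (principle 3): one outcome is selected with Born-rule
      # weights; afterwards the state is the matching eigenstate, not the
      # superposition.
      return random.choices([0, 1], weights=probs)[0]

  counts = [0, 0]
  for _ in range(100000):
      counts[measure()] += 1
  print(probs, [c / 100000 for c in counts])  # frequencies approach |amp|^2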

Metaphysics of the wave function

The Copenhagen interpretation denies that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such,[15][16] or anything more than a theoretical concept.

In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regards as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view that physics should look for 'really existing objects', making itself an ontic theory.[17]

The metaphysical question is sometimes asked: "Could quantum mechanics be extended by adding so-called 'hidden variables' to the mathematical formalism, to convert it from an epistemic to an ontic theory?" The Copenhagen interpretation answers this with a strong 'No'.[18] It is sometimes alleged, for example by J. S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes". That allegation has achieved mythical potency, but it is mistaken. Countering that myth, Max Jammer writes "Einstein never proposed a hidden variable theory."[19] Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty.[20][21]

Because it asserts that a wave function becomes 'real' only when the system is observed, the term "subjective" is sometimes proposed for the Copenhagen interpretation. This term is rejected by many Copenhagenists because the process of observation is mechanical and does not depend on the individuality of the observer.

Some authors have proposed that Bohr was influenced by positivism (or even pragmatism). On the other hand, Bohr and Heisenberg were not in complete agreement, and they held different views at different times. Heisenberg in particular was prompted to move towards realism.[22]

Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist." He suggested instead that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."[8]

Born rule

Max Born speaks of his probability interpretation as a "statistical interpretation" of the wave function,[23][24] and the Born rule is essential to the Copenhagen interpretation.[25]

Writers do not all follow the same terminology. The phrase "statistical interpretation", referring to the "ensemble interpretation", often indicates an interpretation of the Born rule somewhat different from the Copenhagen interpretation.[26][27] For the Copenhagen interpretation, it is axiomatic that the wave function exhausts all that can ever be known in advance about a particular occurrence of the system. The "statistical" or "ensemble" interpretation, on the other hand, is explicitly agnostic about whether the information in the wave function is exhaustive of what might be known in advance. It sees itself as more 'minimal' than the Copenhagen interpretation in its claims. It only goes as far as saying that on every occasion of observation, some actual value of some property is found, and that such values are found probabilistically, as detected by many occasions of observation of the same system. The many occurrences of the system are said to constitute an 'ensemble', and they jointly reveal the probability through these occasions of observation. Though they all have the same wave function, the elements of the ensemble might not be identical to one another in all respects, according to the 'agnostic' interpretations. They may, for all we know, beyond current knowledge and beyond the wave function, have individual distinguishing properties. For present-day science, the experimental significance of these various forms of Born's rule is the same, since they make the same predictions about the probability distribution of outcomes of observations, and the unobserved or unactualized potential properties are not accessible to experiment.

Nature of collapse

Those who hold to the Copenhagen interpretation are willing to say that a wave function involves the various probabilities that a given event will proceed to certain different outcomes. But when the apparatus registers one of those outcomes, no probabilities or superposition of the others linger.[28] According to Howard, wave function collapse is not mentioned in the writings of Bohr.[3]

Some argue that the concept of the collapse of a "real" wave function was introduced by Heisenberg and later developed by John von Neumann in 1932.[29] However, Heisenberg spoke of the wavefunction as representing available knowledge of a system, and did not use the term "collapse" per se, but instead termed it "reduction" of the wavefunction to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus (often called "measurement").[30]

In 1952 David Bohm developed decoherence, an explanatory mechanism for the appearance of wave function collapse. Bohm applied decoherence to Louis de Broglie's pilot wave theory, producing Bohmian mechanics,[31][32] the first successful hidden variables interpretation of quantum mechanics. Collapse was avoided by Hugh Everett in 1957 in his relative state interpretation.[33] Decoherence was largely[34] ignored until the 1980s.[35][36]

Non-separability of the wave function

The domain of the wave function is configuration space, an abstract object quite different from ordinary physical space–time. At a single "point" of configuration space, the wave function collects probabilistic information about several distinct particles, that respectively have physically space-like separation. So the wave function is said to supply a non-separable representation. This reflects a feature of the quantum world that was recognized by Einstein as early as 1905.

In 1927, Bohr drew attention to a consequence of non-separability. The evolution of the system, as determined by the Schrödinger equation, does not display particle trajectories through space–time. It is possible to extract trajectory information from such evolution, but not simultaneously to extract energy–momentum information. This incompatibility is expressed in the Heisenberg uncertainty principle. The two kinds of information have to be extracted on different occasions, because of the non-separability of the wave function representation. In Bohr's thinking, space–time visualizability meant trajectory information. Again, in Bohr's thinking, 'causality' referred to energy–momentum transfer; in his view, lack of energy–momentum knowledge meant lack of 'causality' knowledge. Therefore Bohr thought that knowledge respectively of 'causality' and of space–time visualizability were incompatible but complementary.[3]
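
In its standard quantitative form, this incompatibility reads Δx · Δp ≥ ħ/2: the statistical spreads of position and momentum in any state cannot both be made arbitrarily small, so trajectory information and energy–momentum information cannot be sharpened together.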

Wave–particle dilemma

The term Copenhagen interpretation is not well defined when one asks about the wave–particle dilemma, because Bohr and Heisenberg had different or perhaps disagreeing views on it.
According to Camilleri, Bohr thought that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, while, differing, Heisenberg thought that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.[37][38]

Alfred Landé was for a long time considered orthodox. He did, however, take the Heisenberg viewpoint, in so far as he thought that the wave function was always mathematically open to both interpretations. Eventually this led to his being considered unorthodox, partly because he did not accept Bohr's one-or-the-other view, preferring Heisenberg's always-both view. Another part of the reason for branding Landé unorthodox was that he cited, as did Heisenberg, the 1923 work[39] of old-quantum-theorist William Duane, which anticipated a quantum mechanical theorem that had not been recognized by Born. That theorem seems to make the always-both view, like the one adopted by Heisenberg, rather cogent. One might say "It's there in the mathematics", but that is not a physical statement that would have convinced Bohr. Perhaps the main reason for attacking Landé is that his work demystified the phenomenon of diffraction of particles of matter, such as buckyballs.[40]

Acceptance among physicists

Throughout much of the twentieth century the Copenhagen interpretation had overwhelming acceptance among physicists. Although astrophysicist and science writer John Gribbin described it as having fallen from primacy after the 1980s,[41] according to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997,[42] the Copenhagen interpretation remained the most widely accepted specific interpretation of quantum mechanics among physicists. In more recent polls conducted at various quantum mechanics conferences, varying results have been found.[43][44][45] In a 2017 article, physicist and Nobel laureate Steven Weinberg states that the Copenhagen interpretation "is now widely felt to be unacceptable."[46]

Consequences

The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes.

1. Schrödinger's cat
This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle. Thus a description of the cat during the course of the experiment—having been entangled with the state of a subatomic particle—becomes a "blur" of "living and dead cat." But this can't be accurate because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality."[47] How can the cat be both alive and dead?
The Copenhagen interpretation: The wave function reflects our knowledge of the system. The wave function (|dead⟩ + |alive⟩)/√2 means that, once the cat is observed, there is a 50% chance it will be dead, and a 50% chance it will be alive.
2. Wigner's friend
Wigner puts his friend in with the cat. The external observer believes the system is in the state (|dead⟩ + |alive⟩)/√2. His friend, however, is convinced that the cat is alive, i.e. for him, the cat is in the state |alive⟩. How can Wigner and his friend see different wave functions?
The Copenhagen interpretation: The answer depends on the positioning of the Heisenberg cut, which can be placed arbitrarily. If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement.
3. Double-slit diffraction
Light passes through double slits and onto a screen resulting in a diffraction pattern. Is light a particle or a wave?
The Copenhagen interpretation: Light is neither. A particular experiment can demonstrate particle (photon) or wave properties, but not both at the same time (Bohr's complementarity principle).
The same experiment can in theory be performed with any physical system: electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants, planets, etc. In practice it has been performed for light, electrons, buckminsterfullerene,[48][49] and some atoms. Due to the smallness of Planck's constant it is practically impossible to realize experiments that directly reveal the wave nature of any system bigger than a few atoms, but, in general, quantum mechanics considers all matter as possessing both particle and wave behaviors. Larger systems (like viruses, bacteria, cats, etc.) are considered "classical" ones, but only as an approximation, not exactly.
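This scale argument can be made concrete with the de Broglie relation λ = h/p (a standard back-of-the-envelope estimate, not taken from the article): an electron of mass 9.1 × 10^−31 kg moving at 10^6 m/s has λ = (6.6 × 10^−34 J·s)/(9.1 × 10^−25 kg·m/s) ≈ 7 × 10^−10 m, comparable to atomic spacings and hence readily revealed by diffraction, whereas a 0.1 kg ball at 10 m/s has λ ≈ 6.6 × 10^−34 m, far too small for any conceivable slit to resolve.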
4. EPR (Einstein–Podolsky–Rosen) paradox
Entangled "particles" are emitted in a single event. Conservation laws ensure that the measured spin of one particle must be the opposite of the measured spin of the other, so that if the spin of one particle is measured, the spin of the other particle is now instantaneously known. Because this outcome cannot be separated from quantum randomness, no information can be sent in this manner and there is no violation of either special relativity or the Copenhagen interpretation.
The Copenhagen interpretation: Assuming wave functions are not real, wave-function collapse is interpreted subjectively. The moment one observer measures the spin of one particle, he knows the spin of the other. However, another observer cannot benefit until the results of that measurement have been relayed to him, at less than or equal to the speed of light.
Copenhagenists claim that interpretations of quantum mechanics where the wave function is regarded as real have problems with EPR-type effects, since they imply that the laws of physics allow for influences to propagate at speeds greater than the speed of light. However, proponents of many worlds[50] and the transactional interpretation[51][52] (TI) maintain that the Copenhagen interpretation is fatally non-local.
The claim that EPR effects violate the principle that information cannot travel faster than the speed of light has been countered by noting that they cannot be used for signaling, because neither observer can control, or predetermine, what he observes, and therefore cannot manipulate what the other observer measures.
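
The no-signalling point can be illustrated with a toy simulation (a sketch under a deliberately simplified model: a shared value fixed at emission reproduces the perfect anti-correlation described above, though not the full quantum statistics tested by Bell inequalities):

  import random

  # Toy EPR sketch: each emission event carries one shared random value;
  # observer 1 reads it, observer 2 reads its opposite. Outcomes are
  # perfectly anti-correlated, yet each observer's own record is a fair
  # coin -- so no message can be encoded in the local outcomes alone.
  random.seed(1)
  alice, bob = [], []
  for _ in range(10000):
      spin = random.choice([+1, -1])  # conserved quantity shared at emission
      alice.append(spin)              # observer 1 measures spin
      bob.append(-spin)               # observer 2 measures the opposite

  anti = sum(a == -b for a, b in zip(alice, bob)) / len(alice)
  print("anti-correlation:", anti)                         # exactly 1.0
  print("Alice's mean outcome:", sum(alice) / len(alice))  # ~0: random locally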

Criticism

The completeness of quantum mechanics (principle 1 above) was attacked by the Einstein–Podolsky–Rosen thought experiment, which was intended to show that quantum mechanics could not be a complete theory.

Experimental tests of Bell's inequality using particles have supported the quantum mechanical prediction of entanglement.

The Copenhagen interpretation gives special status to measurement processes without clearly defining them or explaining their peculiar effects. In his article entitled "Criticism and Counterproposals to the Copenhagen Interpretation of Quantum Theory," Heisenberg counters the view of Alexandrov that (in Heisenberg's paraphrase) "the wave function in configuration space characterizes the objective state of the electron," saying:
Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory.[53]
Many physicists and philosophers have objected to the Copenhagen interpretation, both on the grounds that it is non-deterministic and that it includes an undefined measurement process that converts probability functions into non-probabilistic measurements. Einstein's comments "I, at any rate, am convinced that He (God) does not throw dice."[54] and "Do you really think the moon isn't there if you aren't looking at it?"[55] exemplify this. Bohr, in response, said, "Einstein, don't tell God what to do."[56]

Steven Weinberg in "Einstein's Mistakes", Physics Today, November 2005, page 31, said:
All this familiar story is true, but it leaves out an irony. Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wave function (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?
Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wave function, the Schrödinger equation, to observers and their apparatus.
The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe.[57]

E. T. Jaynes,[58] from a Bayesian point of view, argued that probability is a measure of a state of information about the physical world. Quantum mechanics under the Copenhagen interpretation interpreted probability as a physical phenomenon, which is what Jaynes called a mind projection fallacy.

Common criticisms of the Copenhagen interpretation often lead to the problem of a continuum of random occurrences: whether in time (as in subsequent measurements, which under certain interpretations of the measurement problem may happen continuously) or even in space. A recent experiment showed that a particle may leave a trace of the path it took when travelling as a wave, and that this trace exhibits equality of both paths.[59] If this result is elevated to a wave-only, non-transactional worldview and shown to be better, i.e., if a particle is in fact a continuum of points capable of acting independently but under a common wavefunction, it would tend to support theories such as Bohm's (with its guiding of the particle towards the centre of the orbital and its spreading of physical properties over it) rather than interpretations which presuppose full randomness, because with the latter it is problematic to demonstrate, universally and in all practical cases, how a particle can remain coherent in time despite non-zero probabilities of its individual points moving into regions distant from the centre of mass (through a continuum of different random determinations).[60] An alternative possibility would be to assume a finite number of instants/points within a given time or area, but theories which try to quantize space or time itself seem fatally incompatible with special relativity.

The view that particle diffraction logically guarantees the need for a wave interpretation has been questioned. A recent experiment has carried out the two-slit protocol with helium atoms.[61] The basic physics of quantal momentum transfer considered here was originally pointed out in 1923, by William Duane, before quantum mechanics was invented.[39] It was later recognized by Heisenberg[62] and by Pauling.[63] It was championed against orthodox ridicule by Alfred Landé.[64] It has also recently been considered by Van Vliet.[65][66] If the diffracting slits are considered as classical objects, theoretically ideally seamless, then a wave interpretation seems necessary, but if the diffracting slits are considered physically, as quantal objects exhibiting collective quantal motions, then the particle-only and wave-only interpretations seem perhaps equally valid.

Alternatives

The Ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". Although the Copenhagen interpretation is often confused with the idea that consciousness causes collapse, it defines an "observer" merely as that which collapses the wave function.[53] Quantum information theories are more recent, and have attracted growing support.[67][68]

Under realism and indeterminism, if the wave function is regarded as ontologically real and collapse is entirely rejected, a many-worlds theory results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Under realism and determinism (as well as non-localism), a hidden variable theory exists, e.g., the de Broglie–Bohm interpretation, which treats the wavefunction as real, position and momentum as definite and resulting from the expected values, and physical properties as spread in space. For an atemporal indeterministic interpretation that "makes no attempt to give a 'local' account on the level of determinate particles",[69] in which the conjugate ("advanced", or time-reversed) wavefunction of the relativistic version and the so-called "retarded", or time-forward, version[70] are both regarded as real, the transactional interpretation results.[69]

Many physicists have subscribed to the instrumentalist interpretation of quantum mechanics, a position often equated with eschewing all interpretation. It is summarized by the sentence "Shut up and calculate!". While this slogan is sometimes attributed to Paul Dirac[71] or Richard Feynman, it seems to be due to David Mermin.[72]

Philosophy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Philosoph...