
Saturday, August 12, 2023

Stellar nucleosynthesis

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Stellar_nucleosynthesis
Logarithm of the relative energy output (ε) of proton–proton (PP), CNO and Triple-α fusion processes at different temperatures (T). The dashed line shows the combined energy generation of the PP and CNO processes within a star. At the Sun's core temperature, the PP process is more efficient.

Stellar nucleosynthesis is the creation (nucleosynthesis) of chemical elements by nuclear fusion reactions within stars. Stellar nucleosynthesis has occurred since the original creation of hydrogen, helium and lithium during the Big Bang. As a predictive theory, it yields accurate estimates of the observed abundances of the elements. It explains why the observed abundances of elements change over time and why some elements and their isotopes are much more abundant than others. The theory was initially proposed by Fred Hoyle in 1946, who later refined it in 1954. Further advances were made, especially to nucleosynthesis by neutron capture of the elements heavier than iron, by Margaret and Geoffrey Burbidge, William Alfred Fowler and Fred Hoyle in their famous 1957 B2FH paper, which became one of the most heavily cited papers in astrophysics history.

Stars evolve because of changes in their composition (the abundance of their constituent elements) over their lifespans, first by burning hydrogen (main sequence star), then helium (horizontal branch star), and progressively burning higher elements. However, this does not by itself significantly alter the abundances of elements in the universe as the elements are contained within the star. Later in its life, a low-mass star will slowly eject its atmosphere via stellar wind, forming a planetary nebula, while a higher–mass star will eject mass via a sudden catastrophic event called a supernova. The term supernova nucleosynthesis is used to describe the creation of elements during the explosion of a massive star or white dwarf.

The advanced sequence of burning fuels is driven by gravitational collapse and its associated heating, resulting in the subsequent burning of carbon, oxygen and silicon. However, most of the nucleosynthesis in the mass range A = 28–56 (from silicon to nickel) is actually caused by the upper layers of the star collapsing onto the core, creating a compressional shock wave rebounding outward. The shock front briefly raises temperatures by roughly 50%, thereby causing furious burning for about a second. This final burning in massive stars, called explosive nucleosynthesis or supernova nucleosynthesis, is the final epoch of stellar nucleosynthesis.

A stimulus to the development of the theory of nucleosynthesis was the discovery of variations in the abundances of elements found in the universe. The need for a physical description was already inspired by the relative abundances of the chemical elements in the solar system. Those abundances, when plotted on a graph as a function of the atomic number of the element, have a jagged sawtooth shape that varies by factors of tens of millions (see history of nucleosynthesis theory). This suggested a natural process that is not random. A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the 20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun as a source of heat and light.

History

In 1920, Arthur Eddington, on the basis of the precise measurements of atomic masses by F.W. Aston and a preliminary suggestion by Jean Perrin, proposed that stars obtained their energy from nuclear fusion of hydrogen to form helium and raised the possibility that the heavier elements are produced in stars. This was a preliminary step toward the idea of stellar nucleosynthesis. In 1928 George Gamow derived what is now called the Gamow factor, a quantum-mechanical formula yielding the probability for two contiguous nuclei to overcome the electrostatic Coulomb barrier between them and approach each other closely enough to undergo nuclear reaction due to the strong nuclear force which is effective only at very short distances. In the following decade the Gamow factor was used by Atkinson and Houtermans and later by Edward Teller and Gamow himself to derive the rate at which nuclear reactions would occur at the high temperatures believed to exist in stellar interiors.

In 1939, in a Nobel lecture entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for reactions by which hydrogen is fused into helium. He defined two processes that he believed to be the sources of energy in stars. The first one, the proton–proton chain reaction, is the dominant energy source in stars with masses up to about the mass of the Sun. The second process, the carbon–nitrogen–oxygen cycle, which was also considered by Carl Friedrich von Weizsäcker in 1938, is more important in more massive main-sequence stars. These works concerned the energy generation capable of keeping stars hot. A clear physical description of the proton–proton chain and of the CNO cycle appears in a 1968 textbook. Bethe's two papers did not address the creation of heavier nuclei, however. That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble thermodynamically into iron. Hoyle followed that in 1954 with a paper describing how advanced fusion stages within massive stars would synthesize the elements from carbon to iron in mass.

Hoyle's theory was extended to other processes, beginning with the publication of the 1957 review paper "Synthesis of the Elements in Stars" by Burbidge, Burbidge, Fowler and Hoyle, more commonly referred to as the B2FH paper. This review paper collected and refined earlier research into a heavily cited picture that gave promise of accounting for the observed relative abundances of the elements; but it did not itself enlarge Hoyle's 1954 picture for the origin of primary nuclei as much as many assumed, except in the understanding of nucleosynthesis of those elements heavier than iron by neutron capture. Significant improvements were made by Alastair G. W. Cameron and by Donald D. Clayton. In 1957 Cameron presented his own independent approach to nucleosynthesis, informed by Hoyle's example, and introduced computers into time-dependent calculations of evolution of nuclear systems. Clayton calculated the first time-dependent models of the s-process in 1961 and of the r-process in 1965, as well as of the burning of silicon into the abundant alpha-particle nuclei and iron-group elements in 1968, and discovered radiogenic chronologies for determining the age of the elements.

Cross section of a supergiant showing nucleosynthesis and elements formed.

Key reactions

A version of the periodic table indicating the origins – including stellar nucleosynthesis – of the elements.

The most important reactions in stellar nucleosynthesis:

Hydrogen fusion


Proton–proton chain reaction
CNO-I cycle. The helium nucleus is released at the top-left step.

Hydrogen fusion (nuclear fusion of four protons to form a helium-4 nucleus) is the dominant process that generates energy in the cores of main-sequence stars. It is also called "hydrogen burning", which should not be confused with the chemical combustion of hydrogen in an oxidizing atmosphere. There are two predominant processes by which stellar hydrogen fusion occurs: the proton–proton chain and the carbon–nitrogen–oxygen (CNO) cycle. About ninety percent of all stars, with the exception of white dwarfs, fuse hydrogen by these two processes.
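As a quick consistency check (the atomic masses and conversion factor are standard values, not taken from the article), the energy released per helium-4 nucleus follows from the mass defect of four hydrogen atoms:

$$\Delta m = 4\,m(^{1}\mathrm{H}) - m(^{4}\mathrm{He}) = 4(1.007825\ \mathrm{u}) - 4.002603\ \mathrm{u} = 0.028697\ \mathrm{u}$$

$$E = \Delta m\,c^{2} \approx 0.028697 \times 931.494\ \mathrm{MeV} \approx 26.7\ \mathrm{MeV}$$

This is about 0.7% of the rest mass of the reactants; the slightly smaller 26.2 MeV figure quoted below for the proton–proton chain reflects the portion carried away by neutrinos.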

In the cores of lower-mass main-sequence stars such as the Sun, the dominant energy production process is the proton–proton chain reaction. This creates a helium-4 nucleus through a sequence of reactions that begin with the fusion of two protons to form a deuterium nucleus (one proton plus one neutron) along with an ejected positron and neutrino. In each complete fusion cycle, the proton–proton chain reaction releases about 26.2 MeV. The proton–proton chain reaction cycle is relatively insensitive to temperature; a 10% rise of temperature would increase energy production by this method by 46%. Hence, this hydrogen fusion process can occur in up to a third of the star's radius and occupy half the star's mass. For stars above 35% of the Sun's mass, the energy flux toward the surface is sufficiently low that energy transfer from the core region proceeds by radiative heat transfer rather than by convective heat transfer. As a result, there is little mixing of fresh hydrogen into the core or of fusion products outward.

In higher-mass stars, the dominant energy production process is the CNO cycle, which is a catalytic cycle that uses nuclei of carbon, nitrogen and oxygen as intermediaries and in the end produces a helium nucleus as with the proton–proton chain. During a complete CNO cycle, 25.0 MeV of energy is released. The difference in energy production of this cycle, compared to the proton–proton chain reaction, is accounted for by the energy lost through neutrino emission. The CNO cycle is very temperature sensitive: a 10% rise of temperature would produce a 350% rise in energy production. About 90% of the CNO cycle energy generation occurs within the inner 15% of the star's mass, hence it is strongly concentrated at the core. This results in such an intense outward energy flux that convective energy transfer becomes more important than radiative transfer. As a result, the core region becomes a convection zone, which stirs the hydrogen fusion region and keeps it well mixed with the surrounding proton-rich region. This core convection occurs in stars where the CNO cycle contributes more than 20% of the total energy. As the star ages and the core temperature increases, the region occupied by the convection zone slowly shrinks from 20% of the mass down to the inner 8% of the mass. The Sun produces on the order of 1% of its energy from the CNO cycle.

The type of hydrogen fusion process that dominates in a star is determined by the temperature dependency differences between the two reactions. The proton–proton chain reaction starts at temperatures of about 4×10⁶ K, making it the dominant fusion mechanism in smaller stars. A self-maintaining CNO chain requires a higher temperature of approximately 16×10⁶ K, but thereafter its efficiency increases more rapidly with rising temperature than that of the proton–proton reaction. Above approximately 17×10⁶ K, the CNO cycle becomes the dominant source of energy. This temperature is achieved in the cores of main-sequence stars with at least 1.3 times the mass of the Sun. The Sun itself has a core temperature of about 15.7×10⁶ K. As a main-sequence star ages, the core temperature will rise, resulting in a steadily increasing contribution from its CNO cycle.
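To see how these temperature sensitivities set the crossover, here is a small illustrative Python sketch; the power-law exponents are back-solved from the sensitivities quoted above (a 10% temperature rise giving +46% for the pp chain and +350% for the CNO cycle), and the two outputs are normalized to be equal at 17×10⁶ K, so the numbers are indicative only.

```python
import math

# Exponents n such that a 10% temperature rise (factor 1.1) reproduces
# the quoted sensitivity of each process.
n_pp = math.log(1.46) / math.log(1.1)   # ~4.0  (proton-proton chain)
n_cno = math.log(4.50) / math.log(1.1)  # ~15.8 (CNO cycle)

T_CROSS = 17e6  # K, temperature where the two outputs are taken as equal

def cno_over_pp(T):
    """Relative energy output of the CNO cycle vs the pp chain (toy model)."""
    return (T / T_CROSS) ** (n_cno - n_pp)

for T in (13e6, 15.7e6, 17e6, 20e6):
    print(f"T = {T/1e6:5.1f} MK  ->  CNO/pp = {cno_over_pp(T):5.2f}")
```

Because the toy model ignores composition and density, it does not reproduce the Sun's roughly 1% CNO contribution; it only illustrates how the much steeper CNO exponent makes the dominant mechanism switch quickly around the crossover temperature.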

Helium fusion

Main sequence stars accumulate helium in their cores as a result of hydrogen fusion, but the core does not become hot enough to initiate helium fusion. Helium fusion first begins when a star leaves the red giant branch after accumulating sufficient helium in its core to ignite it. In stars around the mass of the Sun, this begins at the tip of the red giant branch with a helium flash from a degenerate helium core, and the star moves to the horizontal branch where it burns helium in its core. More massive stars ignite helium in their core without a flash and execute a blue loop before reaching the asymptotic giant branch. Such a star initially moves away from the AGB toward bluer colours, then loops back again to what is called the Hayashi track. An important consequence of blue loops is that they give rise to classical Cepheid variables, of central importance in determining distances in the Milky Way and to nearby galaxies. Despite the name, stars on a blue loop from the red giant branch are typically not blue in colour but are rather yellow giants, possibly Cepheid variables. They fuse helium until the core is largely carbon and oxygen. The most massive stars become supergiants when they leave the main sequence and quickly start helium fusion as they become red supergiants. After the helium is exhausted in the core of a star, helium fusion will continue in a shell around the carbon–oxygen core.

In all cases, helium is fused to carbon via the triple-alpha process, i.e., three helium nuclei are transformed into carbon via 8Be. This can then form oxygen, neon, and heavier elements via the alpha process. In this way, the alpha process preferentially produces elements with even numbers of protons by the capture of helium nuclei. Elements with odd numbers of protons are formed by other fusion pathways.

Reaction rate

The reaction rate density between species A and B, having number densities n_A and n_B, is given by:

$$r = n_A \, n_B \, k$$

where k is the reaction rate constant of each single elementary binary reaction composing the nuclear fusion process:

$$k = \langle \sigma(v) \, v \rangle$$

Here, σ(v) is the cross-section at relative velocity v, and averaging is performed over all velocities.

Semi-classically, the cross section is proportional to $\pi\lambda^2$, where $\lambda = \hbar/(m_R v)$ is the de Broglie wavelength. Thus, semi-classically, the cross section is proportional to $\pi\hbar^2/(2 m_R E) \propto 1/E$.

However, since the reaction involves quantum tunneling, there is an exponential damping at low energies that depends on the Gamow factor $E_G$, giving an Arrhenius equation:

$$\sigma(E) = \frac{S(E)}{E} \, e^{-\sqrt{E_G/E}}$$

where S(E) depends on the details of the nuclear interaction, and has the dimension of an energy multiplied by a cross section.

One then integrates over all energies to get the total reaction rate, using the Maxwell–Boltzmann distribution and the relation $E = \tfrac{1}{2} m_R v^2$:

$$r = n_A \, n_B \, \sqrt{\frac{8}{\pi m_R}} \, \frac{1}{(kT)^{3/2}} \int_0^\infty S(E) \, e^{-\sqrt{E_G/E}} \, e^{-E/kT} \, dE$$

where $m_R = \frac{m_A m_B}{m_A + m_B}$ is the reduced mass.

Since this integration has an exponential damping at high energies of the form $e^{-E/kT}$ and at low energies from the Gamow factor $e^{-\sqrt{E_G/E}}$, the integral almost vanishes everywhere except around the peak, called the Gamow peak, at $E_0$, where:

$$\frac{\partial}{\partial E}\left(-\frac{E}{kT} - \sqrt{\frac{E_G}{E}}\right) = 0$$

Thus:

$$E_0 = \left(\frac{\sqrt{E_G}\, kT}{2}\right)^{2/3}$$

The exponent can then be approximated around $E_0$ as:

$$-\frac{E}{kT} - \sqrt{\frac{E_G}{E}} \approx -\frac{3E_0}{kT} - \frac{(E - E_0)^2}{\tfrac{4}{3} E_0 \, kT}$$

And the reaction rate is approximated as:

$$r \approx n_A \, n_B \, \frac{4\sqrt{2}}{\sqrt{3}} \, \sqrt{\frac{E_0}{m_R}} \, \frac{S(E_0)}{kT} \, e^{-3E_0/kT}$$
Values of S(E₀) are typically 10⁻³–10³ keV·b, but are damped by a huge factor when involving a beta decay, due to the relation between the intermediate bound state (e.g. diproton) half-life and the beta decay half-life, as in the proton–proton chain reaction. Note that typical core temperatures in main-sequence stars give kT of the order of keV.

Thus, the limiting reaction in the CNO cycle, proton capture by ¹⁴N, has S(E₀) ~ S(0) = 3.5 keV·b, while the limiting reaction in the proton–proton chain reaction, the creation of deuterium from two protons, has a much lower S(E₀) ~ S(0) = 4×10⁻²² keV·b. Incidentally, since the former reaction has a much higher Gamow factor, and due to the relative abundance of elements in typical stars, the two reaction rates are equal at a temperature value that is within the core temperature ranges of main-sequence stars.
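For a rough numerical sense of these scales, the short script below evaluates the Gamow peak for the two limiting reactions at the Sun's core temperature. The formulas follow the derivation above; the expression $E_G = 2 m_R c^2 (\pi \alpha Z_A Z_B)^2$ for the Gamow factor and the physical constants are standard values supplied here, not taken from the article.

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant
MP_C2 = 938.272      # proton rest energy, MeV
K_B = 8.617e-11      # Boltzmann constant, MeV/K

def gamow_peak(z1, z2, a1, a2, temperature):
    """Return (E_G, E_0) in keV for nuclei of charges z1, z2 and mass numbers a1, a2."""
    m_r_c2 = MP_C2 * a1 * a2 / (a1 + a2)                 # reduced-mass energy, MeV
    e_g = 2 * m_r_c2 * (math.pi * ALPHA * z1 * z2) ** 2  # Gamow factor, MeV
    kt = K_B * temperature                               # MeV
    e_0 = (math.sqrt(e_g) * kt / 2) ** (2 / 3)           # Gamow peak, MeV
    return e_g * 1e3, e_0 * 1e3

T_SUN = 15.7e6  # K
for label, z2, a2 in (("p + p  ", 1, 1), ("p + 14N", 7, 14)):
    e_g, e_0 = gamow_peak(1, z2, 1, a2, T_SUN)
    print(f"{label}: E_G = {e_g:8.0f} keV, E_0 = {e_0:4.1f} keV")
```

This gives roughly E₀ ≈ 6 keV for proton–proton fusion and E₀ ≈ 27 keV for proton capture on ¹⁴N: both far above kT ≈ 1.4 keV, which is why the reactions proceed only through the exponentially suppressed tail of the Maxwell–Boltzmann distribution.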

Electric power transmission

From Wikipedia, the free encyclopedia
Five-hundred-kilovolt (500 kV) three-phase electric power transmission lines at Grand Coulee Dam. Four circuits are shown. Two additional circuits are obscured by trees on the far right. The entire 7,079 MW nameplate generation capacity of the dam is accommodated by these six circuits.

Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation. The interconnected lines that facilitate this movement form a transmission network. This is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The combined transmission and distribution network is part of electricity delivery, known as the electrical grid.

Efficient long-distance transmission of electric power requires high voltages. This reduces the losses produced by strong currents. Transmission lines use either alternating current (AC) or direct current (DC). The voltage level is changed with transformers. The voltage is stepped up for transmission, then reduced for local distribution.

A wide area synchronous grid, known as an "interconnection" in North America, directly connects generators delivering AC power with the same relative frequency to many consumers. North America has four major interconnections: Western, Eastern, Quebec and Texas. One grid connects most of continental Europe.

Historically, transmission and distribution lines were often owned by the same company, but starting in the 1990s, many countries liberalized the regulation of the electricity market in ways that led to separate companies handling transmission and distribution.

System

A diagram of an electric power system. The transmission system is in blue.

Most North American transmission lines are high-voltage three-phase AC, although single phase AC is sometimes used in railway electrification systems. DC technology is used for greater efficiency over longer distances, typically hundreds of miles. High-voltage direct current (HVDC) technology is also used in submarine power cables (typically longer than 30 miles (50 km)), and in the interchange of power between grids that are not mutually synchronized. HVDC links stabilize power distribution networks where sudden new loads, or blackouts, in one part of a network might otherwise result in synchronization problems and cascading failures.

Electricity is transmitted at high voltages to reduce the energy loss due to resistance that occurs over long distances. Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher installation cost and greater operational limitations, but lowers maintenance costs. Underground transmission is more common in urban areas or environmentally sensitive locations.

Electrical energy must typically be generated at the same rate at which it is consumed. A sophisticated control system is required to ensure that power generation closely matches demand. If demand exceeds supply, the imbalance can cause generation plant(s) and transmission equipment to automatically disconnect or shut down to prevent damage. In the worst case, this may lead to a cascading series of shutdowns and a major regional blackout.

The US Northeast faced blackouts in 1965, 1977, 2003, and major blackouts in other US regions in 1996 and 2011. Electric transmission networks are interconnected into regional, national, and even continent-wide networks to reduce the risk of such a failure by providing multiple redundant, alternative routes for power to flow should such shutdowns occur. Transmission companies determine the maximum reliable capacity of each line (ordinarily less than its physical or thermal limit) to ensure that spare capacity is available in the event of a failure in another part of the network.

Overhead

A four-circuit, two-voltage power transmission line; "Bundled" 2-ways
 
A typical ACSR. The conductor consists of seven strands of steel surrounded by four layers of aluminium.

High-voltage overhead conductors are not covered by insulation. The conductor material is nearly always an aluminum alloy, formed of several strands and possibly reinforced with steel strands. Copper was sometimes used for overhead transmission, but aluminum is lighter, yields only marginally reduced performance and costs much less. Overhead conductors are supplied by several companies. Conductor material and shapes are regularly improved to increase capacity.

Conductor sizes range from 12 mm² (#6 American wire gauge) to 750 mm² (1,590,000 circular mils area), with varying resistance and current-carrying capacity. For large conductors (more than a few centimetres in diameter), much of the current flow is concentrated near the surface due to the skin effect. The center of the conductor carries little current but contributes weight and cost. Thus, multiple parallel cables (called bundle conductors) are used for higher capacity. Bundle conductors are used at high voltages to reduce energy loss caused by corona discharge.

Today, transmission-level voltages are usually 110 kV and above. Lower voltages, such as 66 kV and 33 kV, are usually considered subtransmission voltages, but are occasionally used on long lines with light loads. Voltages less than 33 kV are usually used for distribution. Voltages above 765 kV are considered extra high voltage and require different designs.

Overhead transmission wires depend on air for insulation, requiring that lines maintain minimum clearances. Adverse weather conditions, such as high winds and low temperatures, can interrupt transmission. Wind speeds as low as 23 knots (43 km/h) can permit conductors to encroach on operating clearances, resulting in a flashover and loss of supply. Oscillatory motion of the physical line is termed conductor gallop or flutter depending on the frequency and amplitude of oscillation.


Underground

Electric power can be transmitted by underground power cables. Underground cables take up less right-of-way than overhead lines, have lower visibility, and are less affected by weather. However, cables must be insulated. Cable and excavation costs are much higher than overhead construction. Faults in buried transmission lines take longer to locate and repair.

In some metropolitan areas, cables are enclosed by metal pipe and insulated with dielectric fluid (usually an oil) that is either static or circulated via pumps. If an electric fault damages the pipe and leaks dielectric, liquid nitrogen is used to freeze portions of the pipe to enable draining and repair. This extends the repair period and increases costs. The temperature of the pipe and surroundings are monitored throughout the repair period.

Underground lines are limited by their thermal capacity, which permits less overloading or re-rating than overhead lines. Long underground AC cables have significant capacitance, which reduces their ability to provide useful power beyond 50 miles (80 kilometres). DC cables are not limited in length by their capacitance.

History

New York City streets in 1890. Besides telegraph lines, multiple electric lines were required for each class of device requiring different voltages.

Commercial electric power was initially transmitted at the same voltage used by lighting and mechanical loads. This restricted the distance between generating plant and loads. In 1882, DC voltage could not easily be increased for long-distance transmission. Different classes of loads (for example, lighting, fixed motors, and traction/railway systems) required different voltages, and so used different generators and circuits.

Thus, generators were sited near their loads, a practice that later became known as distributed generation using large numbers of small generators.

Transmission of alternating current (AC) became possible after Lucien Gaulard and John Dixon Gibbs built what they called the secondary generator, an early transformer provided with 1:1 turn ratio and open magnetic circuit, in 1881.

The first long distance AC line was 34 kilometres (21 miles) long, built for the 1884 International Exhibition of Electricity in Turin, Italy. It was powered by a 2 kV, 130 Hz Siemens & Halske alternator and featured several Gaulard transformers with primary windings connected in series, which fed incandescent lamps. The system proved the feasibility of AC electric power transmission over long distances.

The first commercial AC distribution system entered service in 1885 in via dei Cerchi, Rome, Italy, for public lighting. It was powered by two Siemens & Halske alternators rated 30 hp (22 kW), 2 kV at 120 Hz and used 19 km of cables and 200 parallel-connected 2 kV to 20 V step-down transformers provided with a closed magnetic circuit, one for each lamp. A few months later it was followed by the first British AC system, serving Grosvenor Gallery. It also featured Siemens alternators and 2.4 kV to 100 V step-down transformers – one per user – with shunt-connected primaries.

Working to improve what he considered an impractical Gaulard-Gibbs design, electrical engineer William Stanley, Jr. developed the first practical series AC transformer in 1885. Working with the support of George Westinghouse, in 1886 he demonstrated a transformer-based AC lighting system in Great Barrington, Massachusetts. It was powered by a steam engine-driven 500 V Siemens generator. Voltage was stepped down to 100 volts using the Stanley transformer to power incandescent lamps at 23 businesses over 4,000 feet (1,200 m). This practical demonstration of a transformer and alternating current lighting system led Westinghouse to begin installing AC systems later that year.

In 1888 the first designs for an AC motor appeared. These were induction motors running on polyphase current, independently invented by Galileo Ferraris and Nikola Tesla. Westinghouse licensed Tesla's design. Practical three-phase motors were designed by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Widespread use of such motors was delayed many years by development problems and the scarcity of polyphase power systems needed to power them.

Westinghouse alternating current polyphase generators on display at the 1893 World's Fair in Chicago, part of their "Tesla Poly-phase System". Such polyphase innovations revolutionized transmission.

In the late 1880s and early 1890s smaller electric companies merged into larger corporations such as Ganz and AEG in Europe and General Electric and Westinghouse Electric in the US. These companies developed AC systems, but the technical difference between direct and alternating current systems required a much longer technical merger. Alternating current's economies of scale with large generating plants and long-distance transmission slowly added the ability to link all the loads. These included single phase AC systems, poly-phase AC systems, low voltage incandescent lighting, high-voltage arc lighting, and existing DC motors in factories and street cars. In what became a universal system, these technological differences were temporarily bridged via the rotary converters and motor-generators that allowed the legacy systems to connect to the AC grid. These stopgaps were slowly replaced as older systems were retired or upgraded.

The first transmission of single-phase alternating current using high voltage came in Oregon in 1890 when power was delivered from a hydroelectric plant at Willamette Falls to the city of Portland 14 miles (23 km) down river. The first three-phase alternating current using high voltage took place in 1891 during the international electricity exhibition in Frankfurt. A 15 kV transmission line, approximately 175 km long, connected Lauffen on the Neckar and Frankfurt.

Transmission voltages increased throughout the 20th century. By 1914, fifty-five transmission systems operating at more than 70 kV were in service. The highest voltage then used was 150 kV. Interconnecting multiple generating plants over a wide area reduced costs. The most efficient plants could be used to supply varying loads during the day. Reliability was improved and capital costs were reduced, because stand-by generating capacity could be shared over many more customers and a wider area. Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to further lower costs.

The 20th century's rapid industrialization made electrical transmission lines and grids critical infrastructure. Interconnection of local generation plants and small distribution networks was spurred by World War I, when large electrical generating plants were built by governments to power munitions factories.

Bulk transmission

A transmission substation decreases the voltage of incoming electricity, allowing it to connect from long-distance high-voltage transmission, to local lower voltage distribution. It also reroutes power to other transmission lines that serve local markets. This is the PacifiCorp Hale Substation, Orem, Utah, US.

These networks use components such as power lines, cables, circuit breakers, switches and transformers. The transmission network is usually administered on a regional basis by an entity such as a regional transmission organization or transmission system operator.

Transmission efficiency is improved at higher voltage and lower current. The reduced current reduces heating losses. Joule's first law states that energy losses are proportional to the square of the current. Thus, reducing the current by a factor of two lowers the energy lost to conductor resistance by a factor of four for any given size of conductor.

The optimum size of a conductor for a given voltage and current can be estimated by Kelvin's law for conductor size, which states that size is optimal when the annual cost of energy wasted in resistance is equal to the annual capital charges of providing the conductor. At times of lower interest rates and low commodity costs, Kelvin's law indicates that thicker wires are optimal. Otherwise, thinner conductors are indicated. Since power lines are designed for long-term use, Kelvin's law is used in conjunction with long-term estimates of the price of copper and aluminum as well as interest rates.
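As a sketch of how Kelvin's law is applied (every price below is a hypothetical placeholder, not an industry figure), one can balance the annual cost of resistive losses against the annualized capital cost of the conductor:

```python
import math

# Kelvin's-law sketch: choose the cross-section A at which the annual cost
# of I^2 R losses equals the annualized capital cost of the conductor metal.
RHO = 2.8e-8        # resistivity of aluminium, ohm*m
I = 500.0           # assumed constant RMS current, A
HOURS = 8760.0      # hours in a year
P_ENERGY = 0.05e-3  # price of lost energy, $/Wh (i.e. $0.05/kWh, assumed)
Q_CAPITAL = 1e4     # annualized conductor cost, $/(m^3*year) (assumed)

# Annual cost per metre of line: I^2*RHO/A*HOURS*P_ENERGY + Q_CAPITAL*A.
# Setting its derivative with respect to A to zero gives the optimum:
a_opt = I * math.sqrt(RHO * HOURS * P_ENERGY / Q_CAPITAL)
loss_cost = I**2 * RHO / a_opt * HOURS * P_ENERGY
capital_cost = Q_CAPITAL * a_opt
print(f"optimal A ≈ {a_opt*1e6:.0f} mm^2; "
      f"loss {loss_cost:.2f} vs capital {capital_cost:.2f} $/(m*yr)")
```

At the optimum the two annual cost terms come out equal, which is exactly Kelvin's criterion; higher energy prices or lower capital charges push the optimum toward thicker wire.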

Higher voltage is achieved in AC circuits by using a step-up transformer. High-voltage direct current (HVDC) systems require relatively costly conversion equipment that may be economically justified for particular projects such as submarine cables and longer distance high capacity point-to-point transmission. HVDC is necessary for sending energy between unsynchronized grids.

A transmission grid is a network of power stations, transmission lines, and substations. Energy is usually transmitted within a grid with three-phase AC. Single-phase AC is used only for distribution to end users since it is not usable for large polyphase induction motors. In the 19th century, two-phase transmission was used but required either four wires or three wires with unequal currents. Higher order phase systems require more than three wires, but deliver little or no benefit.

The synchronous grids of Europe

Because the price of generating capacity is high and energy demand is variable, it is often cheaper to import needed power than to generate it locally. Because loads often rise and fall together across large areas, power often comes from distant sources. Because of the economic benefits of load sharing, wide area transmission grids may span countries and even continents. Interconnections between producers and consumers enable power to flow even if some links are inoperative.

The slowly varying portion of demand is known as the base load and is generally served by large facilities with constant operating costs, termed firm power. Such facilities are nuclear, coal or hydroelectric, while other energy sources such as concentrated solar thermal and geothermal power have the potential to provide firm power. Renewable energy sources, such as solar photovoltaics, wind, wave, and tidal, are, due to their intermittency, not considered to be firm. The remaining or "peak" power demand, is supplied by peaking power plants, which are typically smaller, faster-responding, and higher cost sources, such as combined cycle or combustion turbine plants typically fueled by natural gas.

Long-distance transmission (hundreds of kilometers) is cheap and efficient, with costs of US$0.005–0.02 per kWh (compared to annual averaged large producer costs of US$0.01–0.025 per kWh, retail rates upwards of US$0.10 per kWh, and multiples of retail for instantaneous suppliers at unpredicted high demand moments). New York often buys over 1000 MW of low-cost hydropower from Canada. Local sources (even if more expensive and infrequently used) can protect the power supply from weather and other disasters that can disconnect distant suppliers.

A high-power electrical transmission tower, 230 kV, double-circuit, also double-bundled

Hydro and wind sources cannot be moved closer to big cities, and solar costs are lowest in remote areas where local power needs are nominal. Connection costs can determine whether any particular renewable alternative is economically realistic. Costs can be prohibitive for transmission lines, but high capacity, long distance super grid transmission network costs could be recovered with modest usage fees.

Grid input

At power stations, power is produced at a relatively low voltage between about 2.3 kV and 30 kV, depending on the size of the unit. The voltage is then stepped up by the power station transformer to a higher voltage (115 kV to 765 kV AC) for transmission.

In the United States, power transmission is, variously, 230 kV to 500 kV, with less than 230 kV or more than 500 kV as exceptions.

The Western Interconnection has two primary interchange voltages: 500 kV AC at 60 Hz, and ±500 kV (1,000 kV net) DC from North to South (Columbia River to Southern California) and Northeast to Southwest (Utah to Southern California). The 287.5 kV (Hoover Dam to Los Angeles line, via Victorville) and 345 kV (Arizona Public Service (APS) line) are local standards, both of which were implemented before 500 kV became practical.

Losses

Transmitting electricity at high voltage reduces the fraction of energy lost to Joule heating, which varies by conductor type, the current, and the transmission distance. For example, a 100 mi (160 km) span at 765 kV carrying 1000 MW of power can have losses of 0.5% to 1.1%. A 345 kV line carrying the same load across the same distance has losses of 4.2%. For a given amount of power, a higher voltage reduces the current and thus the resistive losses. For example, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the losses by a factor of 100, provided the same sized conductors are used in both cases. Even if the conductor size (cross-sectional area) is decreased ten-fold to match the lower current, the losses are still reduced ten-fold using the higher voltage.
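A short sketch of this arithmetic (the line resistance and power are invented for the example, so the percentages differ from the cited figures):

```python
import math

# Resistive loss for the same delivered power at two transmission voltages.
P = 1000e6  # transmitted power, W (1000 MW)
R = 10.0    # per-phase conductor resistance of the line, ohms (assumed)

for v in (345e3, 765e3):         # line-to-line voltage, V
    i = P / (math.sqrt(3) * v)   # per-phase current of a three-phase line
    loss = 3 * i**2 * R          # total I^2 R loss over the three phases
    print(f"{v/1e3:.0f} kV: I = {i:7.0f} A, loss = {100*loss/P:.2f}% of P")
```

With the same conductors, moving from 345 kV to 765 kV cuts the current by a factor of about 2.2 and the resistive loss by that factor squared, roughly 4.9.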

While power loss can also be reduced by increasing the wire's conductance (by increasing its cross-sectional area), larger conductors are heavier and more expensive. And since conductance is proportional to cross-sectional area, resistive power loss is only reduced proportionally with increasing cross-sectional area, providing a much smaller benefit than the squared reduction provided by multiplying the voltage.

Long-distance transmission is typically done with overhead lines at voltages of 115 to 1,200 kV. At higher voltages, where more than 2,000 kV exists between conductor and ground, corona discharge losses are so large that they can offset the lower resistive losses in the line conductors. Measures to reduce corona losses include larger conductor diameter, hollow cores or conductor bundles.

Factors that affect resistance and thus loss include temperature, spiraling, and the skin effect. Resistance increases with temperature. Spiraling, which refers to the way stranded conductors spiral about the center, also contributes to increases in conductor resistance. The skin effect causes the effective resistance to increase at higher AC frequencies. Corona and resistive losses can be estimated using a mathematical model.

US transmission and distribution losses were estimated at 6.6% in 1997, 6.5% in 2007 and 5% from 2013 to 2019. In general, losses are estimated from the discrepancy between power produced (as reported by power plants) and power sold; the difference constitutes transmission and distribution losses, assuming no utility theft occurs.

As of 1980, the longest cost-effective distance for DC transmission was 7,000 kilometres (4,300 miles). For AC it was 4,000 kilometres (2,500 miles), though US transmission lines are substantially shorter.

In any AC line, conductor inductance and capacitance can be significant. Currents that flow solely in reaction to these properties (which together with the resistance define the impedance) constitute reactive power flow, which transmits no power to the load. These reactive currents, however, cause extra heating losses. The ratio of real power transmitted to the load to apparent power (the product of a circuit's voltage and current, without reference to phase angle) is the power factor. As reactive current increases, reactive power increases and power factor decreases.

For transmission systems with low power factor, losses are higher than for systems with high power factor. Utilities add capacitor banks, reactors and other components (such as phase-shifters, static VAR compensators, and flexible AC transmission systems, FACTS) throughout the system to help compensate for the reactive power flow, reduce the losses in power transmission and stabilize system voltages. These measures are collectively called "reactive support".
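To make "reactive support" concrete, here is a minimal sizing sketch (all figures assumed): the reactive power a capacitor bank must supply to raise a load of real power P from power factor pf₁ to pf₂ is Q = P(tan(arccos pf₁) − tan(arccos pf₂)).

```python
import math

# Sizing a capacitor bank for power-factor correction (illustrative numbers).
P = 100e6         # real power delivered, W
PF_BEFORE = 0.85  # measured power factor (assumed)
PF_AFTER = 0.98   # target power factor (assumed)

q_before = P * math.tan(math.acos(PF_BEFORE))  # reactive power now, VAr
q_after = P * math.tan(math.acos(PF_AFTER))    # reactive power at target, VAr
print(f"capacitor bank rating ≈ {(q_before - q_after)/1e6:.1f} MVAr")
```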

Transposition

Current flowing through transmission lines induces a magnetic field that surrounds the lines of each phase and affects the inductance of the surrounding conductors of other phases. The conductors' mutual inductance is partially dependent on the physical orientation of the lines with respect to each other. Three-phase lines are conventionally strung with phases separated vertically. The mutual inductance seen by a conductor of the phase in the middle of the other two phases is different from the inductance seen on the top/bottom.

Unbalanced inductance among the three conductors is problematic because it may force the middle line to carry a disproportionate amount of the total power transmitted. Similarly, an unbalanced load may occur if one line is consistently closest to the ground and operates at a lower impedance. Because of this phenomenon, conductors must be periodically transposed along the line so that each phase sees equal time in each relative position to balance out the mutual inductance seen by all three phases. To accomplish this, line position is swapped at specially designed transposition towers at regular intervals along the line using various transposition schemes.

Subtransmission

A 115 kV subtransmission line in the Philippines, along with 20 kV distribution lines and a street light, all mounted on a wood subtransmission pole
115 kV H-frame transmission tower

Subtransmission runs at relatively lower voltages. It is uneconomical to connect all distribution substations to the high main transmission voltage, because that equipment is larger and more expensive. Typically, only larger substations connect with this high voltage. Voltage is stepped down before the current is sent to smaller substations. Subtransmission circuits are usually arranged in loops so that a single line failure does not stop service to many customers for more than a short time.

Loops can be "normally closed", where loss of one circuit should result in no interruption, or "normally open" where substations can switch to a backup supply. While subtransmission circuits are usually carried on overhead lines, in urban areas buried cable may be used. The lower-voltage subtransmission lines use less right-of-way and simpler structures; undergrounding is less difficult.

No fixed cutoff separates subtransmission and transmission, or subtransmission and distribution. Their voltage ranges overlap. Voltages of 69 kV, 115 kV, and 138 kV are often used for subtransmission in North America. As power systems evolved, voltages formerly used for transmission were used for subtransmission, and subtransmission voltages became distribution voltages. Like transmission, subtransmission moves relatively large amounts of power, and like distribution, subtransmission covers an area instead of just point-to-point.

Transmission grid exit

Substation transformers reduce the voltage to a lower level for distribution to loads. This distribution is accomplished with a combination of sub-transmission (33 to 132 kV) and distribution (3.3 to 25 kV). Finally, at the point of use, the energy is transformed to low voltage.

Advantage of high-voltage transmission

High-voltage power transmission allows for lower resistive losses over long distances. This efficiency delivers a larger proportion of the generated power to the loads.

Electrical grid without a transformer
Electrical grid with a transformer

In a simplified model, the grid delivers electricity from an ideal voltage source with voltage $V$, delivering a power $P_V$, to a single point of consumption, modelled by a pure resistance $R$, when the wires are long enough to have a significant resistance $R_C$.

If the resistances are in series with no intervening transformer, the circuit acts as a voltage divider, because the same current $I = V/(R + R_C)$ runs through the wire resistance and the powered device. As a consequence, the useful power (at the point of consumption) is:

$$P_R = \frac{R}{R + R_C} \, P_V$$

Should an ideal transformer convert high-voltage, low-current electricity into low-voltage, high-current electricity with a voltage ratio of $a$ (i.e., the voltage is divided by $a$ and the current is multiplied by $a$ in the secondary branch, compared to the primary branch), then the circuit is again equivalent to a voltage divider, but the wires now have an apparent resistance of only $R_C/a^2$. The useful power is then:

$$P_R = \frac{R}{R + R_C/a^2} \, P_V$$

For $a > 1$ (i.e. conversion of high voltage to low voltage near the consumption point), a larger fraction of the generator's power is transmitted to the consumption point and a smaller fraction is lost to Joule heating.
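The effect of the transformer in this model is easy to check numerically; a tiny sketch with arbitrary resistances:

```python
# Useful-power fraction in the voltage-divider model above.
def useful_fraction(r_load, r_wire, a=1.0):
    """Fraction of generator power reaching the load (ideal transformer, ratio a)."""
    return r_load / (r_load + r_wire / a**2)

R, RC = 10.0, 5.0                      # ohms, arbitrary example values
print(useful_fraction(R, RC))          # no transformer: 10/15 ≈ 0.67
print(useful_fraction(R, RC, a=10.0))  # ratio 10: 10/10.05 ≈ 0.995
```

Stepping the voltage up by a factor of 10 makes the wire's apparent resistance 100 times smaller, so nearly all of the power reaches the load.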

Modeling

"Black box" model for transmission line

The terminal characteristics of the transmission line are the voltage and current at the sending (S) and receiving (R) ends. The transmission line can be modeled as a "black box" and a 2-by-2 transmission matrix is used to model its behavior, as follows:

$$\begin{pmatrix} V_S \\ I_S \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} V_R \\ I_R \end{pmatrix}$$

The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T then has the properties:

$$\det(T) = AD - BC = 1, \qquad A = D$$

The parameters A, B, C, and D differ depending on how the desired model handles the line's resistance (R), inductance (L), capacitance (C), and shunt (parallel, leak) conductance G.

The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In such models, a capital letter such as R refers to the total quantity summed over the line and a lowercase letter such as c refers to the per-unit-length quantity.

Lossless line

The lossless line approximation is the least accurate; it is typically used on short lines where the inductance is much greater than the resistance. For this approximation, the voltage and current are identical at the sending and receiving ends.

Voltage on sending and receiving ends for lossless line

The characteristic impedance of a lossless line is purely real, which means resistive for that impedance, and it is often called surge impedance. When a lossless line is terminated by its surge impedance, the voltage does not drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the line. For a load greater than the surge impedance loading (SIL, the power the line delivers when terminated in its surge impedance), the voltage drops from the sending end and the line "consumes" VARs; for a load below SIL, the voltage increases from the sending end, and the line "generates" VARs.

Short line

The short line approximation is normally used for lines shorter than 80 km (50 mi). There, only a series impedance Z is considered, while C and G are ignored. The final result is that A = D = 1 per unit, B = Z ohms, and C = 0. The associated transmission matrix for this approximation is therefore:

$$T = \begin{pmatrix} 1 & Z \\ 0 & 1 \end{pmatrix}$$

Medium line

The medium line approximation is used for lines running between 80 and 250 km (50 and 155 mi). The series impedance and the shunt (current leak) conductance are considered, placing half of the shunt conductance at each end of the line. This circuit is often referred to as a "nominal π (pi)" circuit because of the shape (π) that is taken on when leak conductance is placed on both sides of the circuit diagram. The analysis of the medium line produces:

$$A = D = 1 + \frac{GZ}{2} \ \text{per unit}, \qquad B = Z \ \Omega, \qquad C = G\left(1 + \frac{GZ}{4}\right) \ \mathrm{S}$$

Counterintuitive behaviors of medium-length transmission lines:

  • voltage rise at no load or small current (Ferranti effect)
  • receiving-end current can exceed sending-end current
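Using the nominal-π constants above, a short sketch (line parameters invented for illustration) that also verifies the reciprocity property AD − BC = 1 from the Modeling section:

```python
# ABCD constants of a nominal-pi (medium-length) line; values are assumed.
LENGTH_KM = 150.0
Z = complex(0.05, 0.45) * LENGTH_KM   # total series impedance, ohms
G = complex(0.0, 3.4e-6) * LENGTH_KM  # total shunt admittance, siemens

A = D = 1 + G * Z / 2
B = Z
C = G * (1 + G * Z / 4)

print("A = D =", A)
print("AD - BC =", A * D - B * C)  # equals 1 for a reciprocal network
```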

Long line

The long line model is used when a higher degree of accuracy is needed or when the line under consideration is more than 250 km (160 mi) long. Series resistance and shunt conductance are considered to be distributed parameters, such that each differential length of the line has a corresponding differential series impedance and shunt admittance. The following result can be applied at any point x along the transmission line, where $\gamma = \sqrt{zy}$ is the propagation constant (with z and y the series impedance and shunt admittance per unit length, and $Z_c = \sqrt{z/y}$ the characteristic impedance):

$$V(x) = V_R \cosh(\gamma x) + I_R Z_c \sinh(\gamma x)$$
$$I(x) = I_R \cosh(\gamma x) + \frac{V_R}{Z_c} \sinh(\gamma x)$$

To find the voltage and current at the end of the long line, $x$ should be replaced with $L$ (the line length) in all parameters of the transmission matrix. This model applies the telegrapher's equations.
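A sketch of these hyperbolic relations with invented per-kilometre parameters, propagating the receiving-end voltage and current back to the sending end:

```python
import cmath

# Long-line (distributed-parameter) model; per-km values are illustrative.
z = complex(0.03, 0.40)    # series impedance per km, ohm/km
y = complex(0.0, 3.5e-6)   # shunt admittance per km, S/km

gamma = cmath.sqrt(z * y)  # propagation constant, 1/km
Zc = cmath.sqrt(z / y)     # characteristic impedance, ohms

L = 400.0            # line length, km
VR = 345e3 / 3**0.5  # receiving-end phase voltage, V
IR = 600.0           # receiving-end current, A

VS = VR * cmath.cosh(gamma * L) + IR * Zc * cmath.sinh(gamma * L)
IS = IR * cmath.cosh(gamma * L) + (VR / Zc) * cmath.sinh(gamma * L)
print(f"|VS| = {abs(VS)/1e3:.1f} kV per phase, |IS| = {abs(IS):.1f} A")
```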

High-voltage direct current

High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy is transmitted over very long distances, the power lost in AC transmission becomes appreciable and it is less expensive to use direct current instead. For a long transmission line, these lower losses (and reduced construction cost of a DC line) can offset the cost of the required converter stations at each end.

HVDC is used for long submarine cables where AC cannot be used because of cable capacitance. In these cases special high-voltage cables are used. Submarine HVDC systems are often used to interconnect the electricity grids of islands, for example, between Great Britain and continental Europe, between Great Britain and Ireland, between Tasmania and the Australian mainland, between the North and South Islands of New Zealand, between New Jersey and New York City, and between New Jersey and Long Island. Submarine connections up to 600 kilometres (370 mi) in length have been deployed.

HVDC links can be used to control grid problems. The power transmitted by an AC line increases as the phase angle between source end voltage and destination ends increases, but too large a phase angle allows the systems at either end to fall out of step. Since the power flow in a DC link is controlled independently of the phases of the AC networks that it connects, this phase angle limit does not exist, and a DC link is always able to transfer its full rated power. A DC link therefore stabilizes the AC grid at either end, since power flow and phase angle can then be controlled independently.

As an example, to adjust the flow of AC power on a hypothetical line between Seattle and Boston would require adjustment of the relative phase of the two regional electrical grids. This is an everyday occurrence in AC systems, but one that can become disrupted when AC system components fail and place unexpected loads on the grid. With an HVDC line instead, such an interconnection would:

  • Convert AC in Seattle into HVDC;
  • Use HVDC for the 3,000 miles (4,800 km) of cross-country transmission; and
  • Convert the HVDC to locally synchronized AC in Boston,

(and possibly in other cooperating cities along the transmission route). Such a system could be less prone to failure if parts of it were suddenly shut down. One example of a long DC transmission line is the Pacific DC Intertie located in the Western United States.

Capacity

The amount of power that can be sent over a transmission line varies with the length of the line. The heating of short line conductors due to line losses sets a thermal limit. If too much current is drawn, conductors may sag too close to the ground, or conductors and equipment may overheat. For intermediate-length lines on the order of 100 kilometres (62 miles), the limit is set by the voltage drop in the line. For longer AC lines, system stability becomes the limiting factor. Approximately, the power flowing over an AC line is proportional to the cosine of the phase angle of the voltage and current at the ends.

This angle varies depending on system loading. It is undesirable for the angle to approach 90 degrees, as the power flowing decreases while resistive losses remain. The product of line length and maximum load is approximately proportional to the square of the system voltage. Series capacitors or phase-shifting transformers are used on long lines to improve stability. HVDC lines are restricted only by thermal and voltage drop limits, since the phase angle is not material.
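One standard way to quantify this stability limit, complementary to the description above, is the power-angle relation of a lossless line, P = V_S V_R sin(δ)/X, where δ is the angle between the end voltages and X the series reactance; a small sketch with assumed values:

```python
import math

# Power transfer across a lossless line versus phase angle (assumed values).
VS = VR = 345e3  # end voltages, V
X = 100.0        # series reactance of the line, ohms

for delta_deg in (10, 30, 60, 90):
    p = VS * VR * math.sin(math.radians(delta_deg)) / X
    print(f"delta = {delta_deg:2d} deg -> P = {p/1e6:6.0f} MW")
```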

Understanding the temperature distribution along the cable route became possible with the introduction of distributed temperature sensing (DTS) systems that measure temperatures all along the cable. Without them maximum current was typically set as a compromise between understanding of operation conditions and risk minimization. This monitoring solution uses passive optical fibers as temperature sensors, either inside a high-voltage cable or externally mounted on the cable insulation.

For overhead cables the fiber is integrated into the core of a phase wire. The integrated Dynamic Cable Rating (DCR)/Real Time Thermal Rating (RTTR) solution makes it possible to run the network to its maximum. It allows the operator to predict the behavior of the transmission system to reflect major changes to its initial operating conditions.

Control

To ensure safe and predictable operation, system components are controlled with generators, switches, circuit breakers and loads. The voltage, power, frequency, load factor, and reliability capabilities of the transmission system are designed to provide cost effective performance.

Load balancing

The transmission system provides for base load and peak load capability, with margins for safety and fault tolerance. Peak load times vary by region largely due to the industry mix. In hot and cold climates home air conditioning and heating loads affect the overall load. They are typically highest in the late afternoon in the hottest part of the year and in mid-mornings and mid-evenings in the coldest part of the year. Power requirements vary by season and time of day. Distribution system designs always take the base load and the peak load into consideration.

The transmission system usually does not have a large buffering capability to match loads with generation. Thus generation has to be kept matched to the load, to prevent overloading generation equipment.

Multiple sources and loads can be connected to the transmission system and they must be controlled to provide orderly transfer of power. In centralized power generation, only local control of generation is necessary. This involves synchronization of the generation units.

In distributed power generation the generators are geographically distributed and the process to bring them online and offline must be carefully controlled. The load control signals can either be sent on separate lines or on the power lines themselves. Voltage and frequency can be used as signaling mechanisms to balance the loads.

In voltage signaling, voltage is varied to increase generation. The power added by any system increases as the line voltage decreases. This arrangement is stable in principle. Voltage-based regulation is complex to use in mesh networks, since the individual components and setpoints would need to be reconfigured every time a new generator is added to the mesh.

In frequency signaling, the generating units match the frequency of the power transmission system. In droop speed control, if the frequency decreases, the power is increased. (The drop in line frequency is an indication that the increased load is causing the generators to slow down.)
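A minimal sketch of droop control as just described (the 5% droop figure is a common convention assumed here, as are the ratings):

```python
# Droop speed control: a generator raises its output as grid frequency falls.
F_NOM = 60.0     # nominal system frequency, Hz
P_RATED = 500e6  # generator rating, W (assumed)
DROOP = 0.05     # 5% droop: full-rating swing over a 5% frequency change

def droop_output(f_measured, p_setpoint):
    """Power command in response to the measured grid frequency."""
    delta = (F_NOM - f_measured) / F_NOM  # per-unit frequency error
    return p_setpoint + (delta / DROOP) * P_RATED

# A 0.06 Hz sag commands an extra 10 MW from a 300 MW setpoint:
print(droop_output(59.94, 300e6) / 1e6, "MW")
```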

Wind turbines, vehicle-to-grid, virtual power plants, and other locally distributed storage and generation systems can interact with the grid to improve system operation. Internationally, a slow move from a centralized to decentralized power system has taken place. The main draw of locally distributed generation systems is that they reduce transmission losses by leading to consumption of electricity closer to where it was produced.

Failure protection

Under excess load conditions, the system can be designed to fail incrementally rather than all at once. Brownouts occur when power supplied drops below the demand. Blackouts occur when the grid fails completely.

Rolling blackouts (also called load shedding) are intentionally engineered electrical power outages, used to distribute insufficient power to various loads in turn.

Communications

Grid operators require reliable communications to manage the grid and associated generation and distribution facilities. Fault-sensing protective relays at each end of the line must communicate to monitor the flow of power so that faulted conductors or equipment can be quickly de-energized and the balance of the system restored. Protection of the transmission line from short circuits and other faults is usually so critical that common carrier telecommunications are insufficiently reliable, while in some remote areas no common carrier is available. Communication systems associated with a transmission project may use the channels described below.

Rarely, and for short distances, pilot-wires are strung along the transmission line path. Leased circuits from common carriers are not preferred since availability is not under control of the operator.

Transmission lines can be used to carry data: this is called power-line carrier, or power-line communication (PLC). PLC signals can be easily received with a radio in the long wave range.

High-voltage pylons carrying additional optical fibre cable in Kenya

Optical fibers can be included in the stranded conductors of a transmission line, in the overhead shield wires. These cables are known as optical ground wire (OPGW). Sometimes a standalone cable is used, all-dielectric self-supporting (ADSS) cable, attached to the transmission line cross arms.

Some jurisdictions, such as Minnesota, prohibit energy transmission companies from selling surplus communication bandwidth or acting as a telecommunications common carrier. Where the regulatory structure permits, the utility can sell capacity in extra dark fibers to a common carrier.

Market structure

Electricity transmission is generally considered to be a natural monopoly, but one that is not inherently linked to generation. Many countries regulate transmission separately from generation.

Spain was the first country to establish a regional transmission organization. In that country, transmission operations and electricity markets are separate. The transmission system operator is Red Eléctrica de España (REE) and the wholesale electricity market operator is Operador del Mercado Ibérico de Energía – Polo Español, S.A. (OMEL). Spain's transmission system is interconnected with those of France, Portugal, and Morocco.

The establishment of RTOs in the United States was spurred by the FERC's Order 888, Promoting Wholesale Competition Through Open Access Non-discriminatory Transmission Services by Public Utilities; Recovery of Stranded Costs by Public Utilities and Transmitting Utilities, issued in 1996. In the United States and parts of Canada, electric transmission companies operate independently of generation companies, but in the Southern United States vertical integration is intact. In regions of separation, transmission owners and generation owners continue to interact with each other as market participants with voting rights within their RTO. RTOs in the United States are regulated by the Federal Energy Regulatory Commission.

Merchant transmission projects in the United States include the Cross Sound Cable from Shoreham, New York to New Haven, Connecticut, Neptune RTS Transmission Line from Sayreville, New Jersey, to New Bridge, New York, and Path 15 in California. Additional projects are in development or have been proposed throughout the United States, including the Lake Erie Connector, an underwater transmission line proposed by ITC Holdings Corp., connecting Ontario to load serving entities in the PJM Interconnection region.

Australia has one unregulated or market interconnector, Basslink, between Tasmania and Victoria. Two DC links originally implemented as market interconnectors, Directlink and Murraylink, were converted to regulated interconnectors.

A major barrier to wider adoption of merchant transmission is the difficulty in identifying who benefits from the facility so that the beneficiaries pay the toll. Also, it is difficult for a merchant transmission line to compete when the alternative transmission lines are subsidized by utilities with a monopolized and regulated rate base. In the United States, the FERC's Order 1000, issued in 2011, attempted to reduce barriers to third party investment and creation of merchant transmission lines where a public policy need is found.

Transmission costs

The cost of high-voltage transmission is low compared with the other costs that make up consumer electricity bills. In the UK, transmission costs are about 0.2 p per kWh, compared with a delivered domestic price of around 10 p per kWh.
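As a back-of-the-envelope check, the short Python sketch below computes transmission's share of the delivered price from the figures just quoted (a minimal illustration; the prices are approximate, not exact tariffs):

    # Share of transmission cost in a UK domestic electricity bill,
    # using the approximate figures quoted above (pence per kWh).
    transmission_cost = 0.2  # p/kWh, high-voltage transmission
    delivered_price = 10.0   # p/kWh, delivered domestic price

    share = transmission_cost / delivered_price
    print(f"Transmission is roughly {share:.0%} of the delivered price")
    # Output: Transmission is roughly 2% of the delivered price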

The level of capital expenditure in the electric power T&D equipment market was estimated to be $128.9 bn in 2011.

Health concerns

Mainstream scientific evidence suggests that low-power, low-frequency electromagnetic radiation associated with household currents and high-voltage transmission lines does not constitute a short- or long-term health hazard.

Some studies have failed to find any link between living near power lines and developing illness or disease, such as cancer. A 1997 study reported no increased risk of cancer or illness from living near a transmission line. Other studies, however, have reported statistical correlations between various diseases and living or working near power lines. No adverse health effects have been substantiated for people not living close to power lines.

The New York State Public Service Commission conducted a study to evaluate potential health effects of electric fields. The study measured the electric field strength at the edge of an existing right-of-way on a 765 kV transmission line. The field strength was 1.6 kV/m, and became the interim maximum strength standard for new transmission lines in New York State. The opinion also limited the voltage of new transmission lines built in New York to 345 kV. On September 11, 1990, after a similar study of magnetic field strengths, the NYSPSC issued their Interim Policy Statement on Magnetic Fields. This policy established a magnetic field standard of 200 mG at the edge of the right-of-way using the winter-normal conductor rating. As a comparison with everyday items, a hair dryer or electric blanket produces a 100 mG – 500 mG magnetic field.

Applications for a new transmission line typically include an analysis of electric and magnetic field levels at the edge of rights-of-way. Public utility commissions typically do not comment on health impacts.

Biological effects have been established for acute high-level exposure to magnetic fields above 100 µT (1 G, or 1,000 mG). In a residential setting, one study reported "limited evidence of carcinogenicity in humans and less than sufficient evidence for carcinogenicity in experimental animals", in particular for childhood leukemia, associated with average exposure to residential power-frequency magnetic fields above 0.3 µT (3 mG) to 0.4 µT (4 mG). These levels exceed average residential power-frequency magnetic fields in homes, which are about 0.07 µT (0.7 mG) in Europe and 0.11 µT (1.1 mG) in North America.

The Earth's natural geomagnetic field strength varies over the surface of the planet between 0.035 mT and 0.07 mT (35 µT – 70 µT or 350 mG – 700 mG) while the international standard for continuous exposure is set at 40 mT (400,000 mG or 400 G) for the general public.
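Since the preceding paragraphs mix teslas and gauss, the Python sketch below converts the quoted figures to milligauss for a side-by-side comparison (a minimal illustration; the only relation used is the exact conversion 1 T = 10,000 G, hence 1 µT = 10 mG):

    # Express the magnetic-field figures quoted above in milligauss.
    # Exact conversion: 1 tesla = 10,000 gauss, so 1 microtesla = 10 mG.
    def microtesla_to_milligauss(microtesla: float) -> float:
        return microtesla * 10.0

    figures_uT = {
        "average North American home": 0.11,    # ~1.1 mG
        "threshold in the cited study": 0.3,    # 3 mG
        "Earth's field (minimum)": 35.0,        # 350 mG
        "continuous-exposure limit": 40_000.0,  # 40 mT = 400,000 mG
    }
    for label, value_uT in figures_uT.items():
        print(f"{label}: {microtesla_to_milligauss(value_uT):,.1f} mG")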

Tree growth regulators and herbicides may be used in transmission line rights-of-way, which may have health effects.

Policy by country

United States

The Federal Energy Regulatory Commission (FERC) is the primary regulatory agency of electric power transmission and wholesale electricity sales within the United States. FERC was originally established by Congress in 1920 as the Federal Power Commission and has since undergone multiple name and responsibility modifications. Electric power distribution and the retail sale of power are under state jurisdiction.

Order No. 888

Order No. 888 was adopted by FERC on April 24, 1996. It was "designed to remove impediments to competition in the wholesale bulk power marketplace and to bring more efficient, lower cost power to the Nation's electricity consumers. The legal and policy cornerstone of these rules is to remedy undue discrimination in access to the monopoly owned transmission wires that control whether and to whom electricity can be transported in interstate commerce." The Order required all public utilities that own, control, or operate facilities used for transmitting electric energy in interstate commerce, to have open access, non-discriminatory transmission tariffs. These tariffs allow any electricity generator to utilize existing power lines to transmit the power that they generate. The Order also permits public utilities to recover the costs associated with providing their power lines as an open access service.

Energy Policy Act of 2005

The Energy Policy Act of 2005 (EPAct) expanded federal authority to regulate power transmission. EPAct gave FERC significant new responsibilities, including enforcement of electric transmission reliability standards and the establishment of rate incentives to encourage investment in electricity transmission.

Historically, local governments exercised authority over the grid and maintained significant disincentives to actions that would benefit states other than their own. Localities with cheap electricity have a disincentive to encourage making interstate commerce in electricity trading easier, since other regions would be able to compete for that energy and drive up rates. For example, some regulators in Maine refused to address congestion problems because the congestion protects Maine rates.

Local constituencies can block or slow permitting by pointing to visual, environmental, and health concerns. In the US, generation is growing four times faster than transmission, but transmission upgrades require the coordination of multiple jurisdictions, complex permitting, and cooperation between a significant portion of the many companies that collectively own the grid. The US national security interest in improving transmission was reflected in the EPAct, which gave the Department of Energy the authority to approve transmission if states refused to act.

Specialized transmission

Grids for railways

In some countries where electric locomotives or electric multiple units run on low-frequency AC power, separate single-phase traction power networks are operated by the railways. Prime examples are Austria, Germany and Switzerland, which use AC at 16 2/3 Hz (one third of the 50 Hz public supply frequency). Norway and Sweden also use this frequency but derive it by conversion from the 50 Hz public supply; Sweden additionally has a dedicated 16 2/3 Hz traction grid, but only for part of the system.

Superconducting cables

High-temperature superconductors (HTS) promise to revolutionize power distribution by providing lossless transmission. The development of superconductors with transition temperatures higher than the boiling point of liquid nitrogen has made the concept of superconducting power lines commercially feasible, at least for high-load applications. It has been estimated that waste would be halved using this method, since the necessary refrigeration equipment would consume about half the power saved by the elimination of resistive losses. Companies such as Consolidated Edison and American Superconductor began commercial production of such systems in 2007.
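The "waste would be halved" estimate is simple bookkeeping: the superconductor eliminates the resistive loss entirely, and the refrigeration plant then consumes about half of the power saved, so the net waste is roughly half the original. A minimal Python sketch, using a purely illustrative loss figure rather than a measured one:

    # Net-loss bookkeeping for a superconducting line, per the estimate above.
    resistive_loss_mw = 100.0  # hypothetical loss of a conventional line, in MW

    power_saved = resistive_loss_mw       # all resistive loss is eliminated
    refrigeration_mw = 0.5 * power_saved  # cooling consumes ~half the power saved
    net_waste_mw = refrigeration_mw       # the only remaining waste

    print(f"Conventional line waste: {resistive_loss_mw:.0f} MW")
    print(f"Superconducting waste:   {net_waste_mw:.0f} MW (about half)")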

Superconducting cables are particularly suited to high load density areas such as the business district of large cities, where purchase of an easement for cables is costly.

HTS transmission lines

Location                   Length (km)   Voltage (kV)   Capacity (GW)   Date
Carrollton, Georgia        –             –              –               2000
Albany, New York           0.35          34.5           0.048           2006
Holbrook, Long Island      0.6           138            0.574           2008
Tres Amigas                –             –              5               Proposed 2013
Manhattan: Project Hydra   –             –              –               Proposed 2014
Essen, Germany             1             10             0.04            2014

Single-wire earth return

Single-wire earth return (SWER) or single-wire ground return is a single-wire transmission line for supplying single-phase electrical power to remote areas at low cost. It is principally used for rural electrification, but also finds use for larger isolated loads such as water pumps. Single-wire earth return is also used for HVDC over submarine power cables.

Wireless power transmission

Both Nikola Tesla and Hidetsugu Yagi attempted to devise systems for large scale wireless power transmission in the late 1800s and early 1900s, without commercial success.

In November 2009, LaserMotive won the NASA 2009 Power Beaming Challenge by powering a cable climber 1 km vertically using a ground-based laser transmitter. The system produced up to 1 kW of power at the receiver end. In August 2010, NASA contracted with private companies to pursue the design of laser power beaming systems to power low earth orbit satellites and to launch rockets using laser power beams.

Wireless power transmission has been studied for transmission of power from solar power satellites to the earth. A high power array of microwave or laser transmitters would beam power to a rectenna. Major engineering and economic challenges face any solar power satellite project.

Security

The Federal government of the United States stated that the power grid is susceptible to cyber-warfare. The United States Department of Homeland Security works with industry to identify vulnerabilities and to help industry enhance the security of control system networks.

In June 2019, Russia conceded that it was "possible" its electrical grid is under cyber-attack by the United States. The New York Times reported that American hackers from the United States Cyber Command planted malware potentially capable of disrupting the Russian electrical grid.

Records

  • Highest capacity system: 12 GW Zhundong–Wannan (准东–皖南) ±1100 kV HVDC.
  • Highest transmission voltage (AC):
    • planned: 1.20 MV (Ultra-High Voltage) on the Wardha–Aurangabad line (India), under construction; it will initially operate at 400 kV.
    • worldwide: 1.15 MV (Ultra-High Voltage) on Ekibastuz-Kokshetau line (Kazakhstan)
  • Largest double-circuit transmission line: Kita-Iwaki Powerline (Japan).
  • Highest towers: Yangtze River Crossing (China) (height: 345 m or 1,132 ft)
  • Longest power line: Inga-Shaba (Democratic Republic of Congo) (length: 1,700 kilometres or 1,056 miles)
  • Longest span of power line: 5,376 m (17,638 ft) at Ameralik Span (Greenland, Denmark)
  • Longest submarine cables:
    • North Sea Link, (Norway/United Kingdom) – (length of submarine cable: 720 kilometres or 447 miles)
    • NorNed, North Sea (Norway/Netherlands) – (length of submarine cable: 580 kilometres or 360 miles)
    • Basslink, Bass Strait, (Australia) – (length of submarine cable: 290 kilometres or 180 miles, total length: 370.1 kilometres or 230 miles)
    • Baltic Cable, Baltic Sea (Germany/Sweden) – (length of submarine cable: 238 kilometres or 148 miles, HVDC length: 250 kilometres or 155 miles, total length: 262 kilometres or 163 miles)
  • Longest underground cables:
    • Murraylink, Riverland–Sunraysia (Australia) – (length of underground cable: 180 kilometres or 112 miles)
