
Tuesday, September 8, 2015

Electrical grid


From Wikipedia, the free encyclopedia


General layout of electricity networks. Voltages and depictions of electrical lines are typical for Germany and other European systems.
An electrical grid is an interconnected network for delivering electricity from suppliers to consumers. It consists of generating stations that produce electrical power, high-voltage transmission lines that carry power from distant sources to demand centers, and distribution lines that connect individual customers.[1]

Power stations may be located near a fuel source, at a dam site, or to take advantage of renewable energy sources, and are often located away from heavily populated areas. They are usually quite large to take advantage of the economies of scale. The electric power which is generated is stepped up to a higher voltage at which it connects to the electric power transmission network.

The bulk power transmission network will move the power long distances, sometimes across international boundaries, until it reaches its wholesale customer (usually the company that owns the local electric power distribution network).

On arrival at a substation, the power will be stepped down from a transmission level voltage to a distribution level voltage. As it exits the substation, it enters the distribution wiring. Finally, upon arrival at the service location, the power is stepped down again from the distribution voltage to the required service voltage(s).

Term

The term grid usually refers to a network, and should not be taken to imply a particular physical layout or breadth. Grid may be used to refer to an entire continent's electrical network, a regional transmission network, or a subnetwork such as a local utility's transmission grid or distribution grid.

History

Since its inception in the Industrial Age, the electrical grid has evolved from an insular system that serviced a particular geographic area to a wider, expansive network that incorporated multiple areas. At one point, all energy was produced near the device or service requiring it. In the early 19th century, electricity was a novel invention that competed with steam, hydraulics, direct heating and cooling, light, and most notably gas. During this period, gas production and delivery became the first centralized element in the modern energy industry. Gas was at first produced on customers' premises, but production later evolved into large gasifiers that enjoyed economies of scale. Virtually every city in the U.S. and Europe had town gas piped through its municipality, as it was a dominant form of household energy use. By the mid-19th century, electric arc lighting became advantageous compared with gas lamps, which produced poor light, tremendous wasted heat that made rooms hot and smoky, and noxious byproducts in the form of hydrogen and carbon monoxide. Modeled after the gas lighting industry, the first electric utility systems supplied energy through virtual mains to light filaments as opposed to gas burners. With this, electric utilities also took advantage of economies of scale and moved to centralized power generation, distribution, and system management.[2]

With the realization of long distance power transmission it was possible to interconnect different central stations to balance loads and improve load factors. Interconnection became increasingly desirable as electrification grew rapidly in the early years of the 20th century. Like telegraphy before it, wired electricity was often carried on and through the circuits of colonial rule.[3]

Charles Merz, of the Merz & McLellan consulting partnership, built the Neptune Bank Power Station near Newcastle upon Tyne in 1901,[4] which by 1912 had developed into the largest integrated power system in Europe.[5] In 1905 he tried to influence Parliament to unify the variety of voltages and frequencies in the country's electricity supply industry, but it was not until World War I that Parliament began to take the idea seriously, appointing him head of a Parliamentary Committee to address the problem. In 1916 Merz pointed out that the UK could use its small size to its advantage, by creating a dense distribution grid to feed its industries efficiently. His findings led to the Williamson Report of 1918, which in turn created the Electricity Supply Bill of 1919. The bill was the first step towards an integrated electricity system.

The more significant Electricity (Supply) Act of 1926 led to the setting up of the National Grid.[6] The Central Electricity Board standardised the nation's electricity supply and established the first synchronised AC grid, running at 132 kilovolts and 50 Hertz. This started operating as a national system, the National Grid, in 1938.

In the United States in the 1920s, utilities joined together to establish a wider utility grid, as joint operations saw the benefits of sharing peak load coverage and backup power. Also, electric utilities were easily financed by Wall Street private investors who backed many of their ventures. In 1934, with the passage of the Public Utility Holding Company Act (USA), electric utilities were recognized as public goods of importance along with gas, water, and telephone companies, and thereby were given defined restrictions and regulatory oversight of their operations. This ushered in the Golden Age of Regulation for more than 60 years. However, following the successful deregulation of the airline and telecommunication industries in the late 1970s, the Energy Policy Act (EPAct) of 1992 advocated deregulation of electric utilities by creating wholesale electric markets. It required transmission line owners to allow electric generation companies open access to their networks.[2][7] The act led to a major restructuring of how the electric industry operated, in an effort to create competition in power generation. No longer were electric utilities built as vertical monopolies, where generation, transmission and distribution were handled by a single company. Now, the three stages could be split among various companies, in an effort to provide fair accessibility to high voltage transmission.[8] In 2005, the Energy Policy Act of 2005 was passed to allow incentives and loan guarantees for alternative energy production and to advance innovative technologies that avoided greenhouse emissions.

Features

The wide area synchronous grids of Europe. Most are members of the European Transmission System Operators association.
The Continental U.S. power transmission grid consists of about 300,000 km of lines operated by approximately 500 companies. The North American Electric Reliability Corporation (NERC) oversees all of them.
High-voltage direct current interconnections in western Europe - red are existing links, green are under construction, and blue are proposed.

Structure of distribution grids

The structure, or "topology" of a grid can vary considerably. The physical layout is often forced by what land is available and its geology. The logical topology can vary depending on the constraints of budget, requirements for system reliability, and the load and generation characteristics.
The cheapest and simplest topology for a distribution or transmission grid is a radial structure. This is a tree shape where power from a large supply radiates out into progressively lower voltage lines until the destination homes and businesses are reached.

Most transmission grids require the reliability that more complex mesh networks provide. If one were to imagine running redundant lines between limbs of a tree that could be switched on in case any particular limb were severed, this image approximates how a mesh system operates. The expense of mesh topologies restricts their application to transmission and medium voltage distribution grids. Redundancy allows line failures to occur: power is simply rerouted while crews repair the damaged and deactivated line.
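To make the redundancy point concrete, here is a minimal sketch (plain Python, with invented node and line names) of a radial feeder versus the same feeder with one redundant tie: removing the same line leaves the radial customer without a path back to the substation, while the meshed grid reroutes around the fault.

from collections import deque

def energized(lines, source="substation"):
    # Breadth-first search over intact lines; returns every node still
    # connected to the source.
    graph = {}
    for a, b in lines:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical single-feeder radial grid, and the same grid with one redundant tie.
radial = [("substation", "feeder1"), ("feeder1", "lateral1"), ("lateral1", "home")]
mesh = radial + [("substation", "feeder2"), ("feeder2", "home")]

broken = ("feeder1", "lateral1")
print("radial keeps 'home' energized:", "home" in energized([l for l in radial if l != broken]))  # False
print("mesh keeps 'home' energized:  ", "home" in energized([l for l in mesh if l != broken]))    # True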

Other topologies used are looped systems found in Europe and tied ring networks.

In cities and towns of North America, the grid tends to follow the classic radially fed design. A substation receives its power from the transmission network, the power is stepped down with a transformer and sent to a bus from which feeders fan out in all directions across the countryside. These feeders carry three-phase power, and tend to follow the major streets near the substation. As the distance from the substation grows, the fan out continues as smaller laterals spread out to cover areas missed by the feeders. This tree-like structure grows outward from the substation, but for reliability reasons, usually contains at least one unused backup connection to a nearby substation. This connection can be enabled in case of an emergency, so that a portion of a substation's service territory can be alternatively fed by another substation.

Geography of transmission networks

Transmission networks are more complex with redundant pathways. For example, see the map of the United States' (right) high-voltage transmission network.

A wide area synchronous grid or "interconnection" is a group of distribution areas all operating with alternating current (AC) frequencies synchronized (so that peaks occur at the same time). This allows transmission of AC power throughout the area, connecting a large number of electricity generators and consumers and potentially enabling more efficient electricity markets and redundant generation. Interconnection maps are shown of North America (right) and Europe (below left).

In a synchronous grid all the generators run not only at the same frequency but also at the same phase, each generator maintained by a local governor that regulates the driving torque by controlling the steam supply to the turbine driving it. Generation and consumption must be balanced across the entire grid, because energy is consumed almost instantaneously as it is produced. Energy is stored in the immediate short term by the rotational kinetic energy of the generators.

A large failure in one part of the grid - unless quickly compensated for - can cause current to re-route itself to flow from the remaining generators to consumers over transmission lines of insufficient capacity, causing further failures. One downside to a widely connected grid is thus the possibility of cascading failure and widespread power outage. A central authority is usually designated to facilitate communication and develop protocols to maintain a stable grid. For example, the North American Electric Reliability Corporation gained binding powers in the United States in 2006, and has advisory powers in the applicable parts of Canada and Mexico. The U.S. government has also designated National Interest Electric Transmission Corridors, where it believes transmission bottlenecks have developed.

Some areas, for example rural communities in Alaska, do not operate on a large grid, relying instead on local diesel generators.[9]

High-voltage direct current lines or variable frequency transformers can be used to connect two alternating current interconnection networks which are not synchronized with each other. This provides the benefit of interconnection without the need to synchronize an even wider area. For example, compare the wide area synchronous grid map of Europe (above left) with the map of HVDC lines (below right).

Redundancy and defining "grid"

A town is only said to have achieved grid connection when it is connected to several redundant sources, generally involving long-distance transmission.

This redundancy is limited. Existing national or regional grids simply provide the interconnection of facilities to utilize whatever redundancy is available. The exact stage of development at which the supply structure becomes a grid is arbitrary. Similarly, the term national grid is something of an anachronism in many parts of the world, as transmission cables now frequently cross national boundaries. The terms distribution grid for local connections and transmission grid for long-distance transmissions are therefore preferred, but national grid is often still used for the overall structure.

Interconnected Grid

Electric utilities across regions are often interconnected for improved economy and reliability. Interconnections allow for economies of scale, allowing energy to be purchased from large, efficient sources. Utilities can draw power from generator reserves from a different region in order to ensure continuing, reliable power and diversify their loads. Interconnection also allows regions to have access to cheap bulk energy by receiving power from different sources. For example, one region may be producing cheap hydro power during high water seasons, but in low water seasons, another area may be producing cheaper power through wind, allowing both regions to access cheaper energy sources from one another during different times of the year. Neighboring utilities also help others to maintain the overall system frequency and also help manage tie transfers between utility regions.[8]

Aging Infrastructure

Despite the novel institutional arrangements and network designs of the electrical grid, its power delivery infrastructures suffer aging across the developed world. Four contributing factors to the current state of the electric grid and its consequences are:
  1. Aging power equipment – older equipment has higher failure rates, leading to higher customer interruption rates that affect the economy and society; older assets and facilities also lead to higher inspection and maintenance costs and further repair and restoration costs.
  2. Obsolete system layout – older service areas require additional substation sites and rights-of-way that cannot be obtained in the built-up area, forcing reliance on existing, insufficient facilities.
  3. Outdated engineering – traditional tools for power delivery planning and engineering are ineffective in addressing current problems of aged equipment, obsolete system layouts, and modern deregulated loading levels.
  4. Old cultural values – planning, engineering, and operating the system using concepts and procedures that worked in a vertically integrated industry exacerbate the problem under a deregulated industry.[10]

Modern trends

As the 21st century progresses, the electric utility industry seeks to take advantage of novel approaches to meet growing energy demand. Utilities are under pressure to evolve their classic topologies to accommodate distributed generation. As generation becomes more common from rooftop solar and wind generators, the differences between distribution and transmission grids will continue to blur. Also, demand response is a grid management technique where retail or wholesale customers are requested either electronically or manually to reduce their load. Currently, transmission grid operators use demand response to request load reduction from major energy users such as industrial plants.[11]

With everything interconnected, and open competition occurring in a free market economy, it starts to make sense to allow and even encourage distributed generation (DG). Smaller generators, usually not owned by the utility, can be brought on-line to help supply the need for power. The smaller generation facility might be a home-owner with excess power from their solar panel or wind turbine. It might be a small office with a diesel generator. These resources can be brought on-line either at the utility's behest, or by the owner of the generation in an effort to sell electricity. Many small generators are allowed to sell electricity back to the grid for the same price they would pay to buy it. Furthermore, numerous efforts are underway to develop a "smart grid". In the U.S., the Energy Policy Act of 2005 and Title XIII of the Energy Independence and Security Act of 2007 are providing funding to encourage smart grid development. The hope is to enable utilities to better predict their needs, and in some cases involve consumers in some form of time-of-use based tariff. Funds have also been allocated to develop more robust energy control technologies.[12][13]

Various planned and proposed systems to dramatically increase transmission capacity are known as super grids or mega grids. The promised benefits include enabling the renewable energy industry to sell electricity to distant markets, the ability to increase usage of intermittent energy sources by balancing them across vast geographical regions, and the removal of congestion that prevents electricity markets from flourishing. Local opposition to siting new lines and the significant cost of these projects are major obstacles to super grids. One study for a European super grid estimates that as much as 750 GW of extra transmission capacity would be required; this capacity would be accommodated in increments of 5 GW HVDC lines. A recent proposal by TransCanada priced a 1,600-km, 3-GW HVDC line at $3 billion USD, and it would also require a wide corridor. In India, a recent 6 GW, 1,850-km proposal was priced at $790 million and would likewise require a wide right of way. With 750 GW of new HVDC transmission capacity required for a European super grid, the land and money needed for new transmission lines would be considerable.

Future trends

As deregulation continues, utilities are driven to sell their assets as the energy market follows the gas market in its use of futures and spot markets and other financial arrangements. Globalization, in the form of foreign purchases, is also taking place. One such purchase was when the UK's National Grid, the largest private electric utility in the world, bought New England's electric system for $3.2 billion.[14] Also, Scottish Power purchased Pacific Energy for $12.8 billion.[citation needed] Domestically, local electric and gas firms have merged operations as they saw the advantages of joint affiliation, especially the reduced cost of joint metering. Technological advances will take place in the competitive wholesale electric markets; examples already being utilized include fuel cells used in space flight, aeroderivative gas turbines used in jet aircraft, solar engineering and photovoltaic systems, off-shore wind farms, and the communication advances spawned by the digital world, particularly microprocessing, which aids in monitoring and dispatching.[2]

Electricity is expected to see growing demand in the future. The Information Revolution is highly reliant on electric power. Other growth areas include emerging new electricity-exclusive technologies, developments in space conditioning, industrial processes, and transportation (for example hybrid vehicles, locomotives).[2]

Emerging smart grid

As mentioned above, the electrical grid is expected to evolve to a new grid paradigm—smart grid, an enhancement of the 20th century electrical grid. The traditional electrical grids are generally used to carry power from a few central generators to a large number of users or customers. In contrast, the new emerging smart grid uses two-way flows of electricity and information to create an automated and distributed advanced energy delivery network.

Many research projects have been conducted to explore the concept of the smart grid. According to a recent survey on the smart grid,[15] research is mainly focused on three systems: the infrastructure system, the management system, and the protection system.

The infrastructure system is the energy, information, and communication infrastructure underlying the smart grid. It supports 1) advanced electricity generation, delivery, and consumption; 2) advanced information metering, monitoring, and management; and 3) advanced communication technologies. In the transition from the conventional power grid to the smart grid, a physical infrastructure will be replaced with a digital one. The needs and changes present the power industry with one of the biggest challenges it has ever faced.

A smart grid would allow the power industry to observe and control parts of the system at higher resolution in time and space.[16] It would allow for customers to obtain cheaper, greener, less intrusive, more reliable and higher quality power from the grid. The legacy grid did not allow for real time information to be relayed from the grid, so one of the main purposes of the smart grid would be to allow real time information to be received and sent from and to various parts of the grid to make operation as efficient and seamless as possible. It would allow us to manage logistics of the grid and view consequences that arise from its operation on a time scale with high resolution; from high-frequency switching devices on a microsecond scale, to wind and solar output variations on a minute scale, to the future effects of the carbon emissions generated by power production on a decade scale.

The management system is the subsystem in smart grid that provides advanced management and control services. Most of the existing works aim to improve energy efficiency, demand profile, utility, cost, and emission, based on the infrastructure by using optimization, machine learning, and game theory. Within the advanced infrastructure framework of smart grid, more and more new management services and applications are expected to emerge and eventually revolutionize consumers' daily lives.

The protection system is the subsystem in the smart grid that provides advanced grid reliability analysis, failure protection, and security and privacy protection services. The advanced infrastructure used in the smart grid on one hand empowers us to realize more powerful mechanisms to defend against attacks and handle failures, but on the other hand opens up many new vulnerabilities. For example, the National Institute of Standards and Technology pointed out that the major benefit provided by the smart grid, the ability to get richer data to and from customer smart meters and other electric devices, is also its Achilles' heel from a privacy viewpoint. The obvious privacy concern is that the energy use information stored at the meter acts as an information-rich side channel. This information could be mined and retrieved by interested parties to reveal personal information such as an individual's habits, behaviors, activities, and even beliefs.

Saturday, September 5, 2015

Newton's law of universal gravitation


From Wikipedia, the free encyclopedia
Video: Professor Walter Lewin explains Newton's law of gravitation during the 1999 MIT Physics course 8.01
Newton's law of universal gravitation states that any two bodies in the universe attract each other with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.[note 1] This is a general physical law derived from empirical observations by what Isaac Newton called induction.[2] It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first published on 5 July 1687. (When Newton's book was presented in 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him; see the History section below.)

In modern language, the law states: Every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them.[3] The first test of Newton's theory of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798.[4] It took place 111 years after the publication of Newton's Principia and 71 years after his death.

Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has the product of two charges in place of the product of the masses, and the electrostatic constant in place of the gravitational constant.

Newton's law has since been superseded by Einstein's theory of general relativity, but it continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme precision, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at very close distances (such as Mercury's orbit around the sun).

History

Early history

A recent assessment (by Ofer Gal) of the early history of the inverse square law is that "by the late 1660s" the assumption of an "inverse proportion between gravity and the square of distance was rather common and had been advanced by a number of different people for different reasons". The same author does credit Hooke with a significant and even seminal contribution, but he treats Hooke's claim of priority on the inverse square point as uninteresting, since several individuals besides Newton and Hooke had at least suggested it; he points instead to the idea of "compounding the celestial motions" and the conversion of Newton's thinking away from "centrifugal" and towards "centripetal" force as Hooke's significant contributions.

Plagiarism dispute

In 1686, when the first book of Newton's Principia was presented to the Royal Society, Robert Hooke accused Newton of plagiarism by claiming that he had taken from him the "notion" of "the rule of the decrease of Gravity, being reciprocally as the squares of the distances from the Center". At the same time (according to Edmond Halley's contemporary report) Hooke agreed that "the Demonstration of the Curves generated thereby" was wholly Newton's.[5]

In this way the question arose as to what, if anything, Newton owed to Hooke. This is a subject extensively discussed since that time and on which some points continue to excite some controversy.

Hooke's work and claims

Robert Hooke published his ideas about the "System of the World" in the 1660s, when he read to the Royal Society on 21 March 1666 a paper "On gravity", "concerning the inflection of a direct motion into a curve by a supervening attractive principle", and he published them again in somewhat developed form in 1674, as an addition to "An Attempt to Prove the Motion of the Earth from Observations".[6] Hooke announced in 1674 that he planned to "explain a System of the World differing in many particulars from any yet known", based on three "Suppositions": that "all Celestial Bodies whatsoever, have an attraction or gravitating power towards their own Centers" [and] "they do also attract all the other Celestial Bodies that are within the sphere of their activity";[7] that "all bodies whatsoever that are put into a direct and simple motion, will so continue to move forward in a straight line, till they are by some other effectual powers deflected and bent..."; and that "these attractive powers are so much the more powerful in operating, by how much the nearer the body wrought upon is to their own Centers". Thus Hooke clearly postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body, together with a principle of linear inertia.

Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions. Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses.[8] He also did not provide accompanying evidence or mathematical demonstration. On the latter two aspects, Hooke himself stated in 1674: "Now what these several degrees [of attraction] are I have not yet experimentally verified"; and as to his whole proposal: "This I only hint at present", "having my self many other things in hand which I would first compleat, and therefore cannot so well attend it" (i.e. "prosecuting this Inquiry").[6] It was later on, in writing on 6 January 1679/80[9] to Newton, that Hooke communicated his "supposition ... that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocall, and Consequently that the Velocity will be in a subduplicate proportion to the Attraction and Consequently as Kepler Supposes Reciprocall to the Distance."[10] (The inference about the velocity was incorrect.[11])

Hooke's correspondence of 1679-1680 with Newton mentioned not only this inverse square supposition for the decline of attraction with increasing distance, but also, in Hooke's opening letter to Newton, of 24 November 1679, an approach of "compounding the celestial motions of the planets of a direct motion by the tangent & an attractive motion towards the central body".[12]

Newton's work and claims

Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed with Sir Christopher Wren previous to Hooke's 1679 letter.[13] Newton also pointed out and acknowledged prior work of others,[14] including Bullialdus,[15] (who suggested, but without demonstration, that there was an attractive force from the Sun in the inverse square proportion to the distance), and Borelli[16] (who suggested, also without demonstration, that there was a centrifugal tendency in counterbalance with a gravitational attraction towards the Sun so as to make the planets move in ellipses). D T Whiteside has described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in Newton's library at his death.[17]

Newton further defended his work by saying that had he first heard of the inverse square proportion from Hooke, he would still have some rights to it in view of his demonstrations of its accuracy. Hooke, without evidence in favor of the supposition, could only guess that the inverse square law was approximately valid at great distances from the center. According to Newton, while the 'Principia' was still at pre-publication stage, there were so many a-priori reasons to doubt the accuracy of the inverse-square law (especially close to an attracting sphere) that "without my (Newton's) Demonstrations, to which Mr Hooke is yet a stranger, it cannot be believed by a judicious Philosopher to be any where accurate."[18]

This remark refers among other things to Newton's finding, supported by mathematical demonstration, that if the inverse square law applies to tiny particles, then even a large spherically symmetrical mass also attracts masses external to its surface, even close up, exactly as if all its own mass were concentrated at its center. Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to large spherical planetary masses as if they were tiny particles.[19] In addition, Newton had formulated in Propositions 43-45 of Book 1,[20] and associated sections of Book 3, a sensitive test of the accuracy of the inverse square law, in which he showed that only where the law of force is accurately as the inverse square of the distance will the directions of orientation of the planets' orbital ellipses stay constant as they are observed to do apart from small effects attributable to inter-planetary perturbations.

In regard to evidence that still survives of the earlier history, manuscripts written by Newton in the 1660s show that Newton himself had arrived by 1669 at proofs that in a circular case of planetary motion, "endeavour to recede" (what was later called centrifugal force) had an inverse-square relation with distance from the center.[21] After his 1679-1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis.[22] This background shows there was basis for Newton to deny deriving the inverse square law from Hooke.

Newton's acknowledgment

On the other hand, Newton did accept and acknowledge, in all editions of the 'Principia', that Hooke (but not exclusively Hooke) had separately appreciated the inverse square law in the solar system. Newton acknowledged Wren, Hooke and Halley in this connection in the Scholium to Proposition 4 in Book 1.[23] Newton also acknowledged to Halley that his correspondence with Hooke in 1679-80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: "yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."[14]

Modern controversy

Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. As described above, Newton's manuscripts of the 1660s do show him actually combining tangential motion with the effects of radially directed force or endeavour, for example in his derivation of the inverse square relation for the circular case. They also show Newton clearly expressing the concept of linear inertia—for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was).[24] These matters do not appear to have been learned by Newton from Hooke.

Nevertheless, a number of authors have had more to say about what Newton gained from Hooke and some aspects remain controversial.[25] The fact that most of Hooke's private papers had been destroyed or have disappeared does not help to establish the truth.

Newton's role in relation to the inverse square law was not as it has sometimes been represented. He did not claim to think it up as a bare idea. What Newton did was to show how the inverse-square law of attraction had many necessary mathematical connections with observable features of the motions of bodies in the solar system; and that they were related in such a way that the observational evidence and the mathematical demonstrations, taken together, gave reason to believe that the inverse square law was not just approximately true but exactly true (to the accuracy achievable in Newton's time and for about two centuries afterwards – and with some loose ends of points that could not yet be certainly examined, where the implications of the theory had not yet been adequately identified or calculated).[26][27]

About thirty years after Newton's death in 1727, Alexis Clairaut, a mathematical astronomer eminent in his own right in the field of gravitational studies, wrote after reviewing what Hooke published, that "One must not think that this idea ... of Hooke diminishes Newton's glory"; and that "the example of Hooke" serves "to show what a distance there is between a truth that is glimpsed and a truth that is demonstrated".[28][29]

Modern form

In modern language, the law states the following:

Every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them:[3]
Diagram of two masses attracting one another
F = G \frac{m_1 m_2}{r^2}
where:
  • F is the force between the masses;
  • G is the gravitational constant (approximately 6.674×10⁻¹¹ N m² kg⁻²);
  • m1 is the first mass;
  • m2 is the second mass;
  • r is the distance between the centers of the masses.
Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is approximately equal to 6.674×10⁻¹¹ N m² kg⁻².[30] The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G.[4] This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G; instead he could only calculate a force relative to another force.
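As a quick numerical illustration of the formula above, the following sketch evaluates F = G m1 m2 / r^2 for rounded Earth-Moon figures; the masses and distance are standard reference values used only for this example.

G = 6.674e-11              # gravitational constant, N m^2 kg^-2

def gravitational_force(m1, m2, r):
    # Magnitude of the Newtonian attraction between two point masses, in newtons.
    return G * m1 * m2 / r**2

m_earth = 5.972e24         # kg
m_moon = 7.342e22          # kg
r_earth_moon = 3.844e8     # mean centre-to-centre distance, m

print(gravitational_force(m_earth, m_moon, r_earth_moon))   # roughly 2e20 N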

Bodies with spatial extent


Gravitational field strength within the Earth.

Gravity field near earth at 1,2 and A.

If the bodies in question have spatial extent (rather than being theoretical point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses which constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies.

In this way it can be shown that an object with a spherically-symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its centre.[3] (This is not generally true for non-spherically-symmetrical bodies.)

For points inside a spherically-symmetric distribution of matter, Newton's Shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r0 from the center of the mass distribution:[31]
  • The portion of the mass that is located at radii r < r0 causes the same force at r0 as if all of the mass enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted above).
  • The portion of the mass that is located at radii r > r0 exerts no net gravitational force at the distance r0 from the center. That is, the individual gravitational forces exerted by the elements of the sphere out there, on the point at r0, cancel each other out.
As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere.

Furthermore, inside a uniform sphere the gravity increases linearly with the distance from the center; the increase due to the additional mass is 1.5 times the decrease due to the larger distance from the center. Thus, if a spherically symmetric body has a uniform core and a uniform mantle with a density that is less than 2/3 of that of the core, then the gravity initially decreases outwardly beyond the boundary, and if the sphere is large enough, further outward the gravity increases again, and eventually it exceeds the gravity at the core/mantle boundary. The gravity of the Earth may be highest at the core/mantle boundary.

Vector form


Field lines drawn for a point mass using 24 field lines

Gravity field surrounding Earth from a macroscopic perspective.

Gravity field line representations are arbitrary: as illustrated here, the same field can be drawn with anywhere from a dense 30x30 grid of lines down to none, the lines near the surface being almost parallel and pointing straight down to the center of the Earth

Gravity in a room: the curvature of the Earth is negligible at this scale, and the force lines can be approximated as being parallel and pointing straight down to the center of the Earth

Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors.


\mathbf{F}_{12} = -G \frac{m_1 m_2}{\vert \mathbf{r}_{12} \vert^2} \, \hat{\mathbf{r}}_{12}
where
F12 is the force applied on object 2 due to object 1,
G is the gravitational constant,
m1 and m2 are respectively the masses of objects 1 and 2,
|r12| = |r2 − r1| is the distance between objects 1 and 2, and
 \mathbf{\hat{r}}_{12} \ \stackrel{\mathrm{def}}{=}\ \frac{\mathbf{r}_2 - \mathbf{r}_1}{\vert\mathbf{r}_2 - \mathbf{r}_1\vert} is the unit vector from object 1 to 2.
It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F12 = −F21.
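A short sketch of the vector form follows, assuming numpy is available; the positions are arbitrary example coordinates, and the function simply evaluates the displayed expression above, so it also demonstrates F12 = −F21.

import numpy as np

G = 6.674e-11

def force_on_2_from_1(m1, m2, r1, r2):
    # F_12 = -G m1 m2 / |r_12|^2 times the unit vector from object 1 to object 2,
    # i.e. the force on object 2 points back toward object 1 (attraction).
    r12 = r2 - r1
    dist = np.linalg.norm(r12)
    return -G * m1 * m2 / dist**2 * (r12 / dist)

r1 = np.array([0.0, 0.0, 0.0])          # example position of object 1, m
r2 = np.array([3.844e8, 0.0, 0.0])      # example position of object 2, m
F12 = force_on_2_from_1(5.972e24, 7.342e22, r1, r2)
F21 = force_on_2_from_1(7.342e22, 5.972e24, r2, r1)
print(F12, F21)                         # equal magnitudes, opposite directions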

Gravitational field

The gravitational field is a vector field that describes the gravitational force which would be applied on an object in any given point in space, per unit mass. It is actually equal to the gravitational acceleration at that point.
It is a generalization of the vector form, which becomes particularly useful if more than 2 objects are involved (such as a rocket between the Earth and the Moon). For 2 objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field g(r) as:
\mathbf{g}(\mathbf{r}) = -G \frac{m_1}{\vert \mathbf{r} \vert^2} \, \hat{\mathbf{r}}
so that we can write:
\mathbf{F}( \mathbf r) = m \mathbf g(\mathbf r).
This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI, this is m/s2.

Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. This has the consequence that there exists a gravitational potential field V(r) such that
 \mathbf{g}(\mathbf{r}) = - \nabla V( \mathbf r).
If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case
 V(r) = -G\frac{m_1}{r}.
The gravitational field on, inside, and outside of symmetric masses can be determined as follows.

As per Gauss's law, the field in a symmetric body can be found from the equation:
\oiint_{\partial V} \mathbf{g}(\mathbf{r}) \cdot d\mathbf{A} = -4\pi G M_{\mathrm{enc}}
where \partial V is a closed surface and  M_{enc} is the mass enclosed by the surface.

Hence, for a hollow sphere of radius R and total mass M,
|\mathbf{g}(r)| = \begin{cases} 0, & \text{if } r < R \\ \dfrac{GM}{r^2}, & \text{if } r \ge R \end{cases}
For a uniform solid sphere of radius R and total mass M,
|\mathbf{g}(r)| = \begin{cases} \dfrac{GMr}{R^3}, & \text{if } r < R \\ \dfrac{GM}{r^2}, & \text{if } r \ge R \end{cases}

Problematic aspects

Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used. Deviations from it are small when the dimensionless quantities φ/c2 and (v/c)2 are both much less than one, where φ is the gravitational potential, v is the velocity of the objects being studied, and c is the speed of light.[32] For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since
\frac{\Phi}{c^2} = \frac{G M_\mathrm{sun}}{r_\mathrm{orbit} c^2} \sim 10^{-8}, \qquad \left(\frac{v_\mathrm{Earth}}{c}\right)^2 = \left(\frac{2\pi r_\mathrm{orbit}}{(1~\mathrm{yr})\, c}\right)^2 \sim 10^{-8}
where rorbit is the radius of the Earth's orbit around the Sun.
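These order-of-magnitude figures are easy to reproduce; the constants in the sketch below are rounded reference values.

import math

G = 6.674e-11        # N m^2 kg^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # kg
r_orbit = 1.496e11   # Earth-Sun distance, m
year = 3.156e7       # s

phi_over_c2 = G * M_sun / (r_orbit * c**2)
v_over_c_squared = (2 * math.pi * r_orbit / (year * c))**2
print(phi_over_c2, v_over_c_squared)   # both come out near 1e-8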

In situations where either dimensionless parameter is large, then general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity.

Theoretical concerns with Newton's expression

  • There is no immediate prospect of identifying the mediator of gravity. Attempts by physicists to identify the relationship between the gravitational force and other known fundamental forces are not yet resolved, although considerable headway has been made over the last 50 years (See: Theory of everything and Standard Model). Newton himself felt that the concept of an inexplicable action at a distance was unsatisfactory (see "Newton's reservations" below), but that there was nothing more that he could do at the time.
  • Newton's theory of gravitation requires that the gravitational force be transmitted instantaneously. Given the classical assumptions of the nature of space and time before the development of General Relativity, a significant propagation delay in gravity leads to unstable planetary and stellar orbits.

Observations conflicting with Newton's formula

  • Newton's Theory does not fully explain the precession of the perihelion of the orbits of the planets, especially of planet Mercury, which was detected long after the life of Newton.[33] There is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only from the gravitational attractions from the other planets, and the observed precession, made with advanced telescopes during the 19th Century.
  • The predicted angular deflection of light rays by gravity that is calculated by using Newton's Theory is only one-half of the deflection that is actually observed by astronomers. Calculations using General Relativity are in much closer agreement with the astronomical observations.
  • In spiral galaxies, the orbiting of stars around their centers seems to strongly disobey Newton's law of universal gravitation. Astrophysicists, however, explain this spectacular phenomenon within the framework of Newton's laws by positing the presence of large amounts of dark matter.
The observed fact that the gravitational mass and the inertial mass are the same for all objects is unexplained within Newton's theory. General relativity takes this as a basic principle; see the equivalence principle. In point of fact, the experiments of Galileo Galilei, decades before Newton, established that objects that have the same air or fluid resistance are accelerated by the force of the Earth's gravity equally, regardless of their different inertial masses. Yet the forces and energies that are required to accelerate various masses are completely dependent upon their different inertial masses, as can be seen from Newton's second law of motion, F = ma.

Newton's reservations

While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" which his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it."

He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable to experimentally identify the motion that produces the force of gravity (although he invented two mechanical hypotheses in 1675 and 1717). Moreover, he refused to even offer a hypothesis as to the cause of this force on grounds that to do so was contrary to sound science. He lamented that "philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were fundamental to all the "phenomena of nature". These fundamental phenomena are still under investigation and, though hypotheses abound, the definitive answer has yet to be found. And in Newton's 1713 General Scholium in the second edition of Principia: "I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses... It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies."[34]

Einstein's solution

These objections were explained by Einstein's theory of general relativity, in which gravitation is an attribute of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force due to the curvature of spacetime, because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime.

Extensions

Newton was the first to consider in his Principia an extended expression of his law of gravity including an inverse-cube term of the form
F = G \frac{m_1 m_2}{r^2} + B \frac{m_1 m_2}{r^3} \quad (B \text{ a constant}),
attempting to explain the Moon's apsidal motion. Other extensions were proposed by Laplace (around 1790) and Decombes (1913):[35]
F(r) = k \frac{m_1 m_2}{r^2} \exp(-\alpha r) \quad \text{(Laplace)}
F(r) = k \frac{m_1 m_2}{r^2} \left(1 + \frac{\alpha}{r^3}\right) \quad \text{(Decombes)}
In recent years quests for non-inverse square terms in the law of gravity have been carried out by neutron interferometry.[36]

Solutions of Newton's law of universal gravitation

The n-body problem is an ancient, classical problem[37] of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem — from the time of the Greeks and on — has been motivated by the desire to understand the motions of the Sun, planets and the visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem too.[38] The n-body problem in general relativity is considerably more difficult to solve.
The classical physical problem can be informally stated as: given the quasi-steady orbital properties (instantaneous position, velocity and time)[39] of a group of celestial bodies, predict their interactive forces; and consequently, predict their true orbital motions for all future times.[40]

The two-body problem has been completely solved, as has the Restricted 3-Body Problem.[41]
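For readers who want to experiment, here is a toy numerical sketch of the simplest case, a small body orbiting a large mass held fixed at the origin, integrated with a leapfrog scheme; the initial conditions roughly mimic the Earth-Sun system and are illustrative only, not part of the article.

import math

G, M = 6.674e-11, 1.989e30          # central mass roughly the Sun, kg
dt = 3600.0                         # one-hour time step, s
r = [1.496e11, 0.0]                 # initial position, m
v = [0.0, 2.978e4]                  # initial velocity, m/s (near-circular orbit)

def accel(pos):
    # Newtonian acceleration toward the fixed central mass.
    d = math.hypot(pos[0], pos[1])
    return [-G * M * pos[0] / d**3, -G * M * pos[1] / d**3]

a = accel(r)
for _ in range(int(365.25 * 24)):   # step forward about one year
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]
    r = [r[i] + dt * v[i] for i in range(2)]
    a = accel(r)
    v = [v[i] + 0.5 * dt * a[i] for i in range(2)]

print(math.hypot(r[0], r[1]))       # radius stays close to the initial 1.496e11 m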

Friday, September 4, 2015

Statistical Extrapolations of Climate Trends

CO2

For some years, climate activists have claimed that we can't wait until we have all the data on AGW/CC, because it will be too late by then.  But it is 2015 now, and we do have substantial data on many of the trends climate models have been used to predict up until now.  I have been working with some of this data from the NOAA, EPA, and 2015 satellite datasets, on global temperatures, methane, and CO2 levels. Not surprisingly, there are some excellent fits to trend lines, which can be extrapolated to 2100.

A word of caution here, though:  trend lines cannot predict events that may change them, and some lines simply don't make scientific sense.  For example,
[Chart: PPM CO2 increases from 1959 - 2014, with 6th-order polynomial trend line]

Clearly CO2 increases are not going to follow this 6th-order polynomial trend, although it did give the best fit of all the trends I tried. This is understandable: CO2 increases vary considerably from year to year. It is better to follow atmospheric CO2 levels, which I have done below:

[Chart: PPM CO2 levels from 1980, with quadratic trend line extrapolated to 2100]

Notice I've used a quadratic trend line here, which fits with > 99.99% correlation.  According to CO2 forcing calculations, this leads to an approximate 1.7 degree C increase in temperature above 2014, or 2.7 degrees overall since 1900. This fits well with official IPCC numbers, albeit on the low end.  It also agrees with temperatures plotted from NOAA data:

[Chart: global temperatures from NOAA data, extrapolated to 2100]

The increase from 2014 to 2100 is about 1.6 degrees, or 2.6 degrees by the IPCC's reckoning. Very encouraging!
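For what it's worth, the forcing arithmetic behind estimates like the ~1.7 degree figure above can be sketched with the common simplified formula dF = 5.35 ln(C/C0); the 2100 concentration and the sensitivity parameter below are assumptions chosen for illustration, not outputs of the actual trend fit.

import math

C_2014 = 398.0       # ppm, approximate 2014 level
C_2100 = 640.0       # ppm, assumed from roughly 240 ppm of further increase
sensitivity = 0.7    # K per W/m^2, an assumed mid-range value

delta_F = 5.35 * math.log(C_2100 / C_2014)   # radiative forcing, W/m^2
delta_T = sensitivity * delta_F              # warming relative to 2014
print(delta_F, delta_T)                      # roughly 2.5 W/m^2 and ~1.8 K

Different sensitivity values move the answer up or down by several tenths of a degree, which is one reason figures in this range should be read loosely.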

But not very hopeful, if many climate scientists are right. It is generally predicted that an additional rise of one degree or more above current levels may lead to unacceptable consequences, most of which I'm sure you've read about. Can we reduce them to acceptable levels? And in a way that doesn't crash the global economy -- no, civilization itself -- leading to billions dead and the survivors back in the Dark Ages? I think we can, and I have tried to apply my thoughts to the above chart. The result is the new chart below:

I have used an exponential-type decay on the CO2 increase model to project how emissions trends might be reduced to virtually zero without (I hope) harming the global economy. This new trend would depend not only on political and economic conditions, but mainly on developments in science and technology. At present the trend is about 2.3 PPM/year, and it will increase to 3.2 PPM/year by the end of this century if nothing is done. I calculate that the decay scenario would reduce the projected CO2 increase from 240 PPM to a mere additional 72 PPM. The calculated temperature increase from 2015 is then only about +0.5 degrees above 2014 conditions.
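The arithmetic behind those two cumulative numbers can be sketched as follows; the linear business-as-usual ramp and the 34-year e-folding time are assumptions chosen here to reproduce the ~240 and ~72 PPM figures, not parameters taken from the original spreadsheet.

import math

n = 86   # years from 2015 through 2100 inclusive

# Business as usual: the annual increase ramps linearly from 2.3 to 3.2 PPM/year.
bau = [2.3 + (3.2 - 2.3) * i / (n - 1) for i in range(n)]

# Decay scenario: the annual increase decays exponentially toward zero.
tau = 34.0   # assumed e-folding time, years
decay = [2.3 * math.exp(-i / tau) for i in range(n)]

print(sum(bau))     # about 236 PPM of cumulative increase (the ~240 PPM above)
print(sum(decay))   # about 73 PPM of cumulative increase (close to the ~72 PPM above)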

I must add, however, that even reducing carbon emissions to zero -- even removing the gas from the atmosphere -- will not mean atmospheric levels will drop quickly. About ten times as much CO2 is dissolved in the oceans as is in the air (the great majority of Earth's carbon is in living matter and carbonate rocks); so even if we can reduce atmospheric levels, an equilibrium process will redistribute some of the dissolved gas back into the air. For example, removing a billion tons of CO2 quickly and permanently from the atmosphere might require removing almost 10 billion tons from the ocean in order to maintain equilibrium.

On the other hand, it is also likely that anthropogenic CO2 has not fully equilibrated with the oceans (it is supposed to have a half-life of ~100 years in the air), so that even if we added no more, the oceans would probably absorb more of it, and atmospheric levels would slowly decline. I do not know how this might affect the 2100 CO2 levels should the scheme above be applied.  http://www.nature.com/nature/journal/v488/n7409/full/nature11299.html



Methane

Of course, CO2 is not the only greenhouse gas climate scientists have been worrying about. The other main culprit has been methane. Therefore, I have plotted atmospheric methane levels from 1980-2014. Instead of plotting the gas directly, I have converted the values to CO2 PPM equivalents.

[Chart: atmospheric methane levels, 1980-2014, converted to CO2 PPM equivalents, with quadratic and linear trend lines]

Strange, to say the least, especially if the quadratic fit is correct!  At first I didn't believe what I was seeing.  Recently, however, I encountered several articles,
http://wattsupwiththat.com/2015/08/19/the-arctic-methane-emergency-appears-canceled-due-to-methane-eating-bacteria/
http://fabiusmaximus.com/2015/08/20/ipcc-defeats-the-methane-monster-apocalypse-88620/, www.epa.gov/climatechange/indicators, and
http://www.climatechange2013.org/images/report/WG1AR5_Chapter02_FINAL.pdf, which show rises in methane levels diminishing over time.  This is possibly due to recently discovered methane-consuming bacteria, found mainly in the Arctic but probably widely distributed.  These bacteria naturally convert methane to CO2, and become even more active as temperature rises, as it is doing.  If the quadratic trend line (best fit) is correct, then methane levels in 2100 will be about the same as in 1980, peaking around 2040.  A linear trend line, on the other hand, shows methane increasing to almost twice that level.  Either way, it appears that methane can no longer be counted with any certainty as a serious greenhouse gas threat.
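For anyone who wants to reproduce this kind of comparison, here is a minimal sketch of fitting linear and quadratic trends and extrapolating them to 2100, assuming numpy is available; the data array is a placeholder, not the actual methane series behind the chart.

import numpy as np

years = np.arange(1980, 2015)
t = years - 1980
values = 10.0 + 0.06 * t - 0.001 * t**2   # placeholder CO2-equivalent PPM series

lin = np.polyfit(years, values, 1)        # linear trend coefficients
quad = np.polyfit(years, values, 2)       # quadratic trend coefficients

print(np.polyval(lin, 2100))              # linear extrapolation to 2100
print(np.polyval(quad, 2100))             # quadratic extrapolation to 2100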




Satellite Data

I've also plotted NOAA (blue) versus satellite temperature change data (orange):

[Chart: NOAA (blue) versus satellite (orange) temperature change data, extrapolated to 2100]

The differences are striking.  The satellite data indicate a full degree less warming by 2100 than the NOAA data, or only +0.7 degrees above current temperatures; however, the variation is also much larger, so this might not be accurate.  Incidentally, for those who follow this issue, the so-called "hiatus" (of "climate-denier" fame) is based on the satellite data (plotted from 1996-2014 below):

[Chart: satellite temperature data, 1996-2014]

So the pause is real, at least for the satellite data.  The fallacy, I think, is to assume that because temperatures have been falling for some eighteen years, we can declare global warming to be over.  The entire data set does not support that, and it is common for global temperatures to decrease (and increase) over short periods.  So beware.

I have puzzled over why the NOAA and satellite datasets differ as much as they do.  Since NASA has both geostationary and polar satellites, it shouldn't be a lack of global coverage, though there could be systematic errors.  There is also the oft-repeated accusation that the NOAA data have been "adjusted" (meaning fudged, in this case) to make global warming appear worse than it is (and I have read about some peculiar adjustments), but I don't have any data to back this claim up.  It could also be, of course, that the satellites measure a higher region in the troposphere, while the NOAA data are strictly ground level.

Naturally, it could be any combination of these reasons.


Sea Level Rise and Global Ice

[Chart: projected sea level rise to 2100 from three datasets, including CSIRO and NASA]

Now we have three conflicting data sets!  CSIRO (30 cm by 2100) and NASA (40 cm) agree best, and both are within the range predicted for the middle of the next century (~one meter rise from 1900).  Of course, if CO2 emissions are reduced in the fashion described above, these numbers should not be as high; I have no data to demonstrate this, however.

[Chart: Arctic sea ice volume over the last 35 years, with linear trend line]

These data show an approximately 10,000 cubic kilometer loss in Arctic ice over the last 35 years, leading to a three cm sea level rise during that period.  Unfortunately, I could not obtain the raw data for this chart, so I cannot draw a trend line through the data (the straight line came with the chart).  We can, however, calculate a geometric increase from this data, by assuming rising temperatures will double the melt in each succeeding 35-year period.  This gives 2015-2050 = 20,000 km^3, 2050-2085 = 40,000 km^3, and 2085-2120 = 80,000 km^3.  As we are only extrapolating to 2100, this adds approximately another 20,000 + 40,000 + 40,000 = 100,000 km^3 of melted ice, enough to raise sea levels another 30 cm over 2015.  This is in excellent agreement with the CSIRO and NASA sea level trend lines above.
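Written out, the doubling estimate looks like this; the starting loss rate and the 3 cm per 10,000 km^3 conversion are the figures quoted above, and the last period is halved because the extrapolation stops near 2100.

loss_last_35_years = 10_000        # km^3 of Arctic ice melted, from the chart
cm_per_10k_km3 = 3.0               # sea level rise per 10,000 km^3, from the text

# Assume the melt per 35-year period doubles: 2015-2050, 2050-2085, 2085-~2100 (half period).
periods = [2 * loss_last_35_years, 4 * loss_last_35_years, 8 * loss_last_35_years / 2]
total_loss = sum(periods)                          # 100,000 km^3
rise_cm = total_loss / 10_000 * cm_per_10k_km3     # about 30 cm by 2100
print(total_loss, rise_cm)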

I haven't included Antarctic sea ice because, up to the present, it doesn't show enough shrinkage or expansion (estimates differ) to extrapolate a significant trend.

What about ice coverage?  This is important because a significant shrinkage here could reduce the Earth's albedo (solar reflectance) and strongly enhance global warming.  Again, I can only include a chart as presented from its source without raw data   http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jp:

[Chart: global daily sea ice area with trend]

Very little, if any, change in global sea ice area is demonstrated; however, if melting follows the path I have outlined above, it is quite possible that these levels will gradually shrink, enhancing global warming by reducing the planet's overall albedo.


Population Issues

[Chart: world population growth]

It is, I hope, evident that the planet's increasing human population only adds more greenhouse gases, thereby increasing global warming.  Of equal interest, however, are the per capita emission rates.  To calculate these, I took the data from chart 2 and divided the points from that chart by population to produce the blue line:

[Chart: per capita CO2 emissions (blue) with modified reduction trend (orange)]

The orange line is similar to that used to show how CO2 can be reduced to near zero before the century is out.  See my comments about this modified trend above.


Conclusions

Global warming, along with the dire consequences this study supports, is very real and no hoax or scheme by some secret cabal of government and the scientific community.  Yet neither is it an unstoppable catastrophe, as some have alleged -- in fact, the trends revealed here mostly conform to the more conservative estimates (model outputs) of how serious the problem is.  That is no reason for complacency, however, for even these models inexorably lead to serious problems.

I suggest, however, that these problems can at least be strongly ameliorated if we invest in the basic science and technologies that alone can address the issue.  For myself, I am strongly convinced that we will do so; more than that, that we have already begun.

Saturday, August 22, 2015

End Permian Connection with Current AGW Fails on Numbers.

Peter Ward, Robert Scribbler, and others have de facto been collaborating on a hypothesis that current climatic conditions strongly resemble those of the end-Permian "Great Dying", and that we are headed toward the same conditions.  Although easily refuted, this idea has recently been spreading among climate fanatics and receiving more publicity than it deserves.

Some of their work can be found at http://robertscribbler.com/2014/01/21/awakening-the-horrors-of-the-ancient-hothouse-hydrogen-sulfide-in-the-worlds-warming-oceans/, http://robertscribbler.com/2013/12/18/through-the-looking-glass-of-the-great-dying-new-study-finds-ocean-stratification-proceeded-rapidly-over-past-150-years/, and other links contained within.

Let's proceed to the important facts, which punch holes in this speculation.

At present, our atmosphere contains ~300 billion tons of CO2. During the end-Permian extinction (the largest mass extinction in geohistory), a combination of massive volcanic activity (the greatest we know of) and CO2-producing bacteria may have injected more than 40 times that much CO2 in a short period (http://www.bitsofscience.org/permian-triassic-mass-extinct…/), along with massive amounts of methane and sulfur dioxide, the latter a deadly gas. This resulted in "a doubling of carbon dioxide levels from 2,000 parts per million to 4,400 ppm [11 times today's levels]." (http://thinkprogress.org/…/doubling-of-co2-levels-in-end-t…/) This would have raised the global temperature then from 6-7 degrees above the present to 8.5-9.5 (my calculations; the article says three degrees), although whether this is pertinent is uncertain.

According to current theories, the combination of very high CO2, along with SO2, would have caused the ocean depths to become anoxic (lacking oxygen, like the Black Sea today). This in turn would have led to enormous blooms of hydrogen sulfide-producing bacteria in those depths (hydrogen sulfide is also highly toxic), which, along with the SO2 and the higher temperatures, exterminated almost all life in the sea and on the land.

AGW/CC enthusiasts have been unable to resist drawing parallels with current conditions; and indeed, there is a partial, superficial resemblance. But just as clearly the numbers aren't remotely close (nor does any trend extrapolation lead to such a situation); also, the Earth's continents were all joined at the time, changing oceanic and atmospheric currents in many ways which could have made the extinctions worse.

Please, give it up already.

Distance education

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Distance_...