
Saturday, September 5, 2015

Newton's law of universal gravitation


From Wikipedia, the free encyclopedia
Professor Walter Lewin explains Newton's law of gravitation during the 1999 MIT Physics course 8.01 (video)
Newton's law of universal gravitation states that any two bodies in the universe attract each other with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.[note 1] This is a general physical law derived from empirical observations by what Isaac Newton called induction.[2] It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first published on 5 July 1687. (When Newton's book was presented in 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him; see the History section below.)

In modern language, the law states: Every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them.[3] The first test of Newton's theory of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798.[4] It took place 111 years after the publication of Newton's Principia and 71 years after his death.

Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has the product of two charges in place of the product of the masses, and the electrostatic constant in place of the gravitational constant.

Newton's law has since been superseded by Einstein's theory of general relativity, but it continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme precision, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at very close distances (such as Mercury's orbit around the sun).

History

Early history

A recent assessment by Ofer Gal of the early history of the inverse square law is that "by the late 1660s" the assumption of an "inverse proportion between gravity and the square of distance was rather common and had been advanced by a number of different people for different reasons". The same author credits Hooke with a significant and even seminal contribution, but treats Hooke's claim of priority on the inverse square point as uninteresting, since several individuals besides Newton and Hooke had at least suggested it. He points instead to the idea of "compounding the celestial motions" and to the conversion of Newton's thinking away from "centrifugal" and towards "centripetal" force as Hooke's significant contributions.

Plagiarism dispute

In 1686, when the first book of Newton's Principia was presented to the Royal Society, Robert Hooke accused Newton of plagiarism by claiming that he had taken from him the "notion" of "the rule of the decrease of Gravity, being reciprocally as the squares of the distances from the Center". At the same time (according to Edmond Halley's contemporary report) Hooke agreed that "the Demonstration of the Curves generated thereby" was wholly Newton's.[5]

In this way the question arose as to what, if anything, Newton owed to Hooke. This is a subject extensively discussed since that time and on which some points continue to excite some controversy.

Hooke's work and claims

Robert Hooke published his ideas about the "System of the World" in the 1660s, when he read to the Royal Society on 21 March 1666 a paper "On gravity", "concerning the inflection of a direct motion into a curve by a supervening attractive principle", and he published them again in somewhat developed form in 1674, as an addition to "An Attempt to Prove the Motion of the Earth from Observations".[6] Hooke announced in 1674 that he planned to "explain a System of the World differing in many particulars from any yet known", based on three "Suppositions": that "all Celestial Bodies whatsoever, have an attraction or gravitating power towards their own Centers" [and] "they do also attract all the other Celestial Bodies that are within the sphere of their activity";[7] that "all bodies whatsoever that are put into a direct and simple motion, will so continue to move forward in a straight line, till they are by some other effectual powers deflected and bent..."; and that "these attractive powers are so much the more powerful in operating, by how much the nearer the body wrought upon is to their own Centers". Thus Hooke clearly postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body, together with a principle of linear inertia.

Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions. Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses.[8] He also did not provide accompanying evidence or mathematical demonstration. On the latter two aspects, Hooke himself stated in 1674: "Now what these several degrees [of attraction] are I have not yet experimentally verified"; and as to his whole proposal: "This I only hint at present", "having my self many other things in hand which I would first compleat, and therefore cannot so well attend it" (i.e. "prosecuting this Inquiry").[6] It was later on, in writing on 6 January 1679/80[9] to Newton, that Hooke communicated his "supposition ... that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocall, and Consequently that the Velocity will be in a subduplicate proportion to the Attraction and Consequently as Kepler Supposes Reciprocall to the Distance."[10] (The inference about the velocity was incorrect.[11])

Hooke's correspondence of 1679-1680 with Newton mentioned not only this inverse square supposition for the decline of attraction with increasing distance, but also, in Hooke's opening letter to Newton, of 24 November 1679, an approach of "compounding the celestial motions of the planets of a direct motion by the tangent & an attractive motion towards the central body".[12]

Newton's work and claims

Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed with Sir Christopher Wren previous to Hooke's 1679 letter.[13] Newton also pointed out and acknowledged prior work of others,[14] including Bullialdus,[15] (who suggested, but without demonstration, that there was an attractive force from the Sun in the inverse square proportion to the distance), and Borelli[16] (who suggested, also without demonstration, that there was a centrifugal tendency in counterbalance with a gravitational attraction towards the Sun so as to make the planets move in ellipses). D T Whiteside has described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in Newton's library at his death.[17]

Newton further defended his work by saying that had he first heard of the inverse square proportion from Hooke, he would still have some rights to it in view of his demonstrations of its accuracy. Hooke, without evidence in favor of the supposition, could only guess that the inverse square law was approximately valid at great distances from the center. According to Newton, while the 'Principia' was still at the pre-publication stage, there were so many a priori reasons to doubt the accuracy of the inverse-square law (especially close to an attracting sphere) that "without my (Newton's) Demonstrations, to which Mr Hooke is yet a stranger, it cannot be believed by a judicious Philosopher to be any where accurate."[18]

This remark refers among other things to Newton's finding, supported by mathematical demonstration, that if the inverse square law applies to tiny particles, then even a large spherically symmetrical mass also attracts masses external to its surface, even close up, exactly as if all its own mass were concentrated at its center. Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to large spherical planetary masses as if they were tiny particles.[19] In addition, Newton had formulated in Propositions 43-45 of Book 1,[20] and associated sections of Book 3, a sensitive test of the accuracy of the inverse square law, in which he showed that only where the law of force is accurately as the inverse square of the distance will the directions of orientation of the planets' orbital ellipses stay constant as they are observed to do apart from small effects attributable to inter-planetary perturbations.

In regard to evidence that still survives of the earlier history, manuscripts written by Newton in the 1660s show that Newton himself had arrived by 1669 at proofs that in a circular case of planetary motion, "endeavour to recede" (what was later called centrifugal force) had an inverse-square relation with distance from the center.[21] After his 1679-1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis.[22] This background shows there was basis for Newton to deny deriving the inverse square law from Hooke.

Newton's acknowledgment

On the other hand, Newton did accept and acknowledge, in all editions of the 'Principia', that Hooke (but not exclusively Hooke) had separately appreciated the inverse square law in the solar system. Newton acknowledged Wren, Hooke and Halley in this connection in the Scholium to Proposition 4 in Book 1.[23] Newton also acknowledged to Halley that his correspondence with Hooke in 1679-80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: "yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."[14]

Modern controversy

Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. As described above, Newton's manuscripts of the 1660s do show him actually combining tangential motion with the effects of radially directed force or endeavour, for example in his derivation of the inverse square relation for the circular case. They also show Newton clearly expressing the concept of linear inertia—for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was).[24] These matters do not appear to have been learned by Newton from Hooke.

Nevertheless, a number of authors have had more to say about what Newton gained from Hooke and some aspects remain controversial.[25] The fact that most of Hooke's private papers had been destroyed or have disappeared does not help to establish the truth.

Newton's role in relation to the inverse square law was not as it has sometimes been represented. He did not claim to think it up as a bare idea. What Newton did was to show how the inverse-square law of attraction had many necessary mathematical connections with observable features of the motions of bodies in the solar system; and that they were related in such a way that the observational evidence and the mathematical demonstrations, taken together, gave reason to believe that the inverse square law was not just approximately true but exactly true (to the accuracy achievable in Newton's time and for about two centuries afterwards – and with some loose ends of points that could not yet be certainly examined, where the implications of the theory had not yet been adequately identified or calculated).[26][27]

About thirty years after Newton's death in 1727, Alexis Clairaut, a mathematical astronomer eminent in his own right in the field of gravitational studies, wrote after reviewing what Hooke published, that "One must not think that this idea ... of Hooke diminishes Newton's glory"; and that "the example of Hooke" serves "to show what a distance there is between a truth that is glimpsed and a truth that is demonstrated".[28][29]

Modern form

In modern language, the law states the following:

Every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them:[3]
Diagram of two masses attracting one another
F = G \frac{m_1 m_2}{r^2}
where:
  • F is the force between the masses;
  • G is the gravitational constant (approximately 6.674×10−11 N m2 kg−2);
  • m1 is the first mass;
  • m2 is the second mass;
  • r is the distance between the centers of the masses.
Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is approximately equal to 6.674×10−11 N m2 kg−2.[30] The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G.[4] This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G; instead he could only calculate a force relative to another force.
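As a quick numerical illustration of the scalar law, the sketch below computes the mutual attraction directly from F = G m1 m2 / r^2. The Earth and Moon values are rounded textbook figures used only for this example, not numbers taken from the text:

```python
# A minimal sketch of the scalar law, F = G * m1 * m2 / r**2.

G = 6.674e-11  # gravitational constant, N m^2 kg^-2

def gravitational_force(m1, m2, r):
    """Magnitude of the gravitational attraction (N) between two
    point masses m1, m2 (kg) separated by a distance r (m)."""
    return G * m1 * m2 / r**2

m_earth = 5.972e24   # kg (rounded)
m_moon = 7.348e22    # kg (rounded)
r = 3.844e8          # mean Earth-Moon distance, m (rounded)

F = gravitational_force(m_earth, m_moon, r)
print(f"Earth-Moon attraction: {F:.3e} N")  # ~2e20 N
```

Note that the law gives only the mutual force; the resulting accelerations of the two bodies differ by the ratio of their masses (a = F/m).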

Bodies with spatial extent


Gravitational field strength within the Earth.

Gravity field near the Earth.

If the bodies in question have spatial extent (rather than being theoretical point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses which constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies.

In this way it can be shown that an object with a spherically-symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its centre.[3] (This is not generally true for non-spherically-symmetrical bodies.)
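The integration idea can be sketched numerically: approximate a uniform sphere by a grid of equal point masses and sum their attractions at an external point. The grid resolution and the test values below are arbitrary choices for illustration; the summed force should closely match G M / d^2, as if the whole mass sat at the centre:

```python
# Numerical sketch: a uniform sphere, discretized into small point
# masses on a grid, attracts an external point almost exactly as a
# single point mass at its centre would.

G = 6.674e-11
R = 1.0        # sphere radius, m (arbitrary)
M = 1000.0     # total mass, kg (arbitrary)
d = 3.0        # external test point distance from centre, m
n = 40         # grid cells per axis (resolution of the approximation)

h = 2 * R / n
cells = []
for i in range(n):
    for j in range(n):
        for k in range(n):
            x = -R + (i + 0.5) * h
            y = -R + (j + 0.5) * h
            z = -R + (k + 0.5) * h
            if x * x + y * y + z * z <= R * R:   # keep interior cells
                cells.append((x, y, z))

dm = M / len(cells)          # equal mass per interior cell
# Axial component of the force on a unit test mass at (d, 0, 0);
# the transverse components cancel by symmetry.
F = sum(dm * G * (d - x) / ((d - x)**2 + y * y + z * z)**1.5
        for x, y, z in cells)

F_point = G * M / d**2       # all mass concentrated at the centre
print(F, F_point)            # the two agree to well under 1%
```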

For points inside a spherically-symmetric distribution of matter, Newton's Shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r0 from the center of the mass distribution:[31]
  • The portion of the mass that is located at radii r < r0 causes the same force at r0 as if all of the mass enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted above).
  • The portion of the mass that is located at radii r > r0 exerts no net gravitational force at the distance r0 from the center. That is, the individual gravitational forces exerted by the elements of the sphere out there, on the point at r0, cancel each other out.
As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere.

Furthermore, inside a uniform sphere the gravity increases linearly with the distance from the center; the increase due to the additional mass is 1.5 times the decrease due to the larger distance from the center. Thus, if a spherically symmetric body has a uniform core and a uniform mantle with a density that is less than 2/3 of that of the core, then the gravity initially decreases outwardly beyond the boundary, and if the sphere is large enough, further outward the gravity increases again, and eventually it exceeds the gravity at the core/mantle boundary. The gravity of the Earth may be highest at the core/mantle boundary.

Vector form


Field lines drawn for a point mass using 24 field lines

Gravity field surrounding Earth from a macroscopic perspective.

The representation of gravity field lines is arbitrary, as illustrated here by grids of different densities.

Gravity in a room: the curvature of the Earth is negligible at this scale, and the force lines can be approximated as being parallel and pointing straight down to the center of the Earth

Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors.


\mathbf{F}_{12} = - G \frac{m_1 m_2}{\vert \mathbf{r}_{12} \vert^2} \, \mathbf{\hat{r}}_{12}
where
F12 is the force applied on object 2 due to object 1,
G is the gravitational constant,
m1 and m2 are respectively the masses of objects 1 and 2,
|r12| = |r2 − r1| is the distance between objects 1 and 2, and
 \mathbf{\hat{r}}_{12} \ \stackrel{\mathrm{def}}{=}\ \frac{\mathbf{r}_2 - \mathbf{r}_1}{\vert\mathbf{r}_2 - \mathbf{r}_1\vert} is the unit vector from object 1 to 2.
It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F12 = −F21.
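A minimal sketch of the vector form, kept dependency-free by using plain lists for vectors; the function name and test values are illustrative assumptions:

```python
# Vector form of the law: F_12 is the force on object 2 due to
# object 1, directed from 2 back toward 1 (attraction).
import math

G = 6.674e-11

def force_on_2_from_1(m1, m2, r1, r2):
    """Gravitational force vector (N) on object 2; positions in m."""
    r12 = [b - a for a, b in zip(r1, r2)]        # r2 - r1
    dist = math.sqrt(sum(c * c for c in r12))
    rhat = [c / dist for c in r12]               # unit vector, 1 -> 2
    mag = G * m1 * m2 / dist**2
    return [-mag * c for c in rhat]              # minus sign: attractive

r1, r2 = (0.0, 0.0, 0.0), (3.0, 4.0, 0.0)        # 5 m apart
F12 = force_on_2_from_1(10.0, 20.0, r1, r2)
F21 = force_on_2_from_1(20.0, 10.0, r2, r1)
print(F12, F21)
```

Computing the force both ways makes the antisymmetry F12 = -F21 explicit, component by component.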

Gravitational field

The gravitational field is a vector field that describes the gravitational force that would be exerted, per unit mass, on an object at any given point in space. It is equal to the gravitational acceleration at that point.
It is a generalization of the vector form, which becomes particularly useful if more than 2 objects are involved (such as a rocket between the Earth and the Moon). For 2 objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field g(r) as:
\mathbf{g}(\mathbf{r}) = - G \frac{m_1}{\vert \mathbf{r} \vert^2} \, \mathbf{\hat{r}}
so that we can write:
\mathbf{F}( \mathbf r) = m \mathbf g(\mathbf r).
This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI, this is m/s2.

Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. This has the consequence that there exists a gravitational potential field V(r) such that
 \mathbf{g}(\mathbf{r}) = - \nabla V( \mathbf r).
If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case
 V(r) = -G\frac{m_1}{r}.
Gauss's law for gravity gives the field on, inside, and outside of symmetric masses:
 \oiint_{\partial V} \mathbf{g}(\mathbf{r}) \cdot d\mathbf{A} = -4\pi G M_{\mathrm{enc}}
where \partial V is a closed surface and  M_{enc} is the mass enclosed by the surface.

Hence, for a hollow sphere of radius R and total mass M,
|\mathbf{g}(\mathbf{r})| = \begin{cases}
  0, & \mbox{if } r < R \\
  \dfrac{GM}{r^2}, & \mbox{if } r \ge R
\end{cases}
For a uniform solid sphere of radius R and total mass M,
|\mathbf{g}(\mathbf{r})| = \begin{cases}
  \dfrac{GMr}{R^3}, & \mbox{if } r < R \\
  \dfrac{GM}{r^2}, & \mbox{if } r \ge R
\end{cases}
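The two piecewise results can be written directly as code. This is a small sketch (the Earth-like radius and mass below are rounded illustrative values); both profiles are continuous at r = R, where each equals G M / R^2:

```python
# Piecewise field magnitudes for a hollow and a uniform solid sphere.

G = 6.674e-11

def g_hollow(r, R, M):
    """|g| for a thin hollow sphere of radius R and mass M."""
    return 0.0 if r < R else G * M / r**2

def g_solid(r, R, M):
    """|g| for a uniform solid sphere: linear inside, inverse-square outside."""
    return G * M * r / R**3 if r < R else G * M / r**2

R, M = 6.371e6, 5.972e24      # rough Earth radius (m) and mass (kg)
g_surface = g_solid(R, R, M)
print(g_surface)               # surface gravity, roughly 9.8 m/s^2
print(g_solid(R / 2, R, M))    # half of g_surface, by linearity inside
print(g_hollow(R / 2, R, M))   # zero inside a hollow shell
```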

Problematic aspects

Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used. Deviations from it are small when the dimensionless quantities Φ/c2 and (v/c)2 are both much less than one, where Φ is the gravitational potential, v is the velocity of the objects being studied, and c is the speed of light.[32] For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since
\frac{\Phi}{c^2}=\frac{GM_\mathrm{sun}}{r_\mathrm{orbit}c^2} \sim 10^{-8},

\quad \left(\frac{v_\mathrm{Earth}}{c}\right)^2=\left(\frac{2\pi r_\mathrm{orbit}}{(1\ \mathrm{yr})c}\right)^2 \sim 10^{-8}
where rorbit is the radius of the Earth's orbit around the Sun.
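These two estimates are easy to reproduce. The sketch below uses rounded standard values for the solar mass, the astronomical unit, and the year; these are assumptions of the example, not figures from the text:

```python
# Back-of-envelope check of the two dimensionless parameters quoted
# above for the Earth-Sun system; both come out near 1e-8.
import math

G = 6.674e-11
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg (rounded)
r_orbit = 1.496e11   # 1 AU, m (rounded)
year = 3.156e7       # one year, s (rounded)

phi_over_c2 = G * M_sun / (r_orbit * c**2)
v = 2 * math.pi * r_orbit / year           # Earth's orbital speed
v_over_c_sq = (v / c)**2

print(phi_over_c2)   # ~1e-8
print(v_over_c_sq)   # ~1e-8
```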

In situations where either dimensionless parameter is large, then general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity.

Theoretical concerns with Newton's expression

  • There is no immediate prospect of identifying the mediator of gravity. Attempts by physicists to identify the relationship between the gravitational force and other known fundamental forces are not yet resolved, although considerable headway has been made over the last 50 years (See: Theory of everything and Standard Model). Newton himself felt that the concept of an inexplicable action at a distance was unsatisfactory (see "Newton's reservations" below), but that there was nothing more that he could do at the time.
  • Newton's theory of gravitation requires that the gravitational force be transmitted instantaneously. Given the classical assumptions of the nature of space and time before the development of General Relativity, a significant propagation delay in gravity leads to unstable planetary and stellar orbits.

Observations conflicting with Newton's formula

  • Newton's theory does not fully explain the precession of the perihelion of the planets' orbits, especially that of Mercury, which was detected long after Newton's lifetime.[33] There is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only from the gravitational attractions of the other planets, and the observed precession, measured with advanced telescopes during the 19th century.
  • The predicted angular deflection of light rays by gravity that is calculated by using Newton's Theory is only one-half of the deflection that is actually observed by astronomers. Calculations using General Relativity are in much closer agreement with the astronomical observations.
  • In spiral galaxies, the orbiting of stars around their centers seems to strongly disobey Newton's law of universal gravitation. Astrophysicists, however, explain this spectacular phenomenon within the framework of Newton's laws, by positing the presence of large amounts of dark matter.
  • The observed fact that the gravitational mass and the inertial mass are the same for all objects is unexplained within Newton's theory. General relativity takes this as a basic principle; see the equivalence principle. In point of fact, the experiments of Galileo Galilei, decades before Newton, established that objects that have the same air or fluid resistance are accelerated by the force of the Earth's gravity equally, regardless of their different inertial masses. Yet the forces and energies required to accelerate various masses are completely dependent upon their different inertial masses, as can be seen from Newton's second law of motion, F = ma.

Newton's reservations

While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" which his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it."

He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable to experimentally identify the motion that produces the force of gravity (although he invented two mechanical hypotheses in 1675 and 1717). Moreover, he refused to even offer a hypothesis as to the cause of this force on grounds that to do so was contrary to sound science. He lamented that "philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were fundamental to all the "phenomena of nature". These fundamental phenomena are still under investigation and, though hypotheses abound, the definitive answer has yet to be found. And in Newton's 1713 General Scholium in the second edition of Principia: "I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses... It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies."[34]

Einstein's solution

These objections were explained by Einstein's theory of general relativity, in which gravitation is an attribute of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force due to the curvature of spacetime, because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime.

Extensions

Newton was the first to consider in his Principia an extended expression of his law of gravity including an inverse-cube term of the form
F = G \frac{m_1 m_2}{r^2} + B \frac{m_1 m_2}{r^3}, where B is a constant,
attempting to explain the Moon's apsidal motion. Other extensions were proposed by Laplace (around 1790) and Decombes (1913):[35]
F(r) =k \frac {m_1 m_2}{r^2} \exp(-\alpha r) (Laplace)
F(r) = k \frac {m_1 m_2}{r^2} \left(1+ {\alpha \over {r^3}}\right) (Decombes)
In recent years quests for non-inverse square terms in the law of gravity have been carried out by neutron interferometry.[36]

Solutions of Newton's law of universal gravitation

The n-body problem is an ancient, classical problem[37] of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem — from the time of the Greeks and on — has been motivated by the desire to understand the motions of the Sun, planets and the visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem too.[38] The n-body problem in general relativity is considerably more difficult to solve.
The classical physical problem can be informally stated as: given the quasi-steady orbital properties (instantaneous position, velocity and time)[39] of a group of celestial bodies, predict their interactive forces; and consequently, predict their true orbital motions for all future times.[40]

The two-body problem has been completely solved, as has the Restricted 3-Body Problem.[41]

Friday, September 4, 2015

Statistical Extrapolations of Climate Trends


CO2

For some years, climate activists have claimed that we can't wait until we have all the data on AGW/CC, because it will be too late by then.  But it is 2015 now, and we do have substantial data on many of the trends climate models have been used to predict up until now.  I have been working with some of this data from the NOAA, EPA, and 2015 satellite datasets, on global temperatures, methane, and CO2 levels. Not surprisingly, there are some excellent fits to trend lines, which can be extrapolated to 2100.

A word of caution here, though:  trend lines cannot predict events that may change them, and some lines simply don't make scientific sense.  For example,
[Chart: PPM CO2 Increases from 1959 - 2014]
Clearly CO2 increases are not going to follow this 6th-order polynomial trend, although it did give the best fit of all the trends I tried.  This is understandable: CO2 increases vary considerably from year to year.  It is better to follow atmospheric CO2 levels, which I have done below:

[Chart: PPM CO2 Levels 1980, with quadratic trend line]
Notice I've used a quadratic trend line here, which fits with > 99.99% correlation.  According to CO2 forcing calculations, this leads to an approximate 1.7 degree C increase in temperature above 2014, or 2.7 degrees overall since 1900. This fits well with official IPCC numbers, albeit on the low end.  It also agrees with temperatures plotted from NOAA data:

[Chart: global temperatures plotted from NOAA data, extrapolated to 2100]
The increase from 2014 to 2100 is about 1.6 degrees, or 2.6 degrees by the IPCC. Very encouraging!
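For readers who want to see the kind of forcing arithmetic behind figures like these, here is a hedged sketch. It uses the common simplified CO2 forcing formula ΔF = 5.35 ln(C/C0) W/m2 with an assumed climate-sensitivity parameter, and the ~600 PPM endpoint is an illustrative stand-in for the quadratic extrapolation, not a value read off the charts above:

```python
# A hedged sketch of a simple CO2 forcing calculation. All three
# inputs below are assumptions of this example: C0 (approximate 2014
# level), C (an assumed 2100 level), and lam (an assumed sensitivity
# in K per W/m^2).
import math

C0 = 400.0    # ppm, approximate 2014 level
C = 600.0     # ppm, assumed 2100 level
lam = 0.8     # K per (W/m^2), assumed sensitivity

dF = 5.35 * math.log(C / C0)   # radiative forcing, W/m^2
dT = lam * dF                  # equilibrium temperature change, K
print(dT)                      # ~1.7 K, in line with the figure quoted
```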

But not very hopeful, if many climate scientists are right.  It's generally predicted that an additional rise of one degree or more above current temperatures may lead to unacceptable consequences, most of which I'm sure you've read about.  Can we reduce them to acceptable levels?  And in a way that doesn't crash the global economy -- no, civilization itself -- leading to billions dead and the survivors back in the Dark Ages?  I think we can, and I have tried to apply my thoughts to the above chart.  The result is the new chart below:

I have used an exponential-type decay on the CO2 increase model to project how emissions trends might be reduced to virtually zero without (I hope) harming the global economy.  This new trend would depend not only on political and economic conditions, but mainly on developments in science and technology.  At present the trend is about 2.3 PPM/year, and it will increase to 3.2 PPM/year by the end of this century if nothing is done.  I calculate that this will reduce a projected CO2 increase of 240 PPM down to a mere additional 72 PPM.  The calculated temperature increase from 2015 is then only about +0.5 degrees above 2014 conditions.

I must add, however, that even reducing carbon emissions to zero -- even removing the gas from the atmosphere -- will not mean atmospheric levels will drop quickly. About ten times as much CO2 is dissolved in the oceans as is in the air (the greatest majority is in living matter and carbonate rocks); so even if we can reduce atmospheric levels, an equilibrium process will redistribute some of the dissolved gas back into the air.  For example, to remove a billion tons of CO2 quickly and permanently from the atmosphere might require removing almost 10 billion tons from the ocean in order to maintain equilibrium.
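That equilibrium point can be illustrated with a toy calculation. The 10:1 ocean-to-air ratio below is the rough figure from the text, treated here as a fixed partition ratio for simplicity rather than a measured constant:

```python
# Toy illustration: with roughly ten times as much dissolved CO2 in
# the ocean as in the air, a permanent 1-unit drop in the atmosphere
# requires removing about 11 units in total once the ocean
# re-equilibrates, most of it effectively from the ocean.

k = 10.0   # assumed equilibrium ratio, ocean : atmosphere

def total_removal_needed(delta_air):
    """CO2 that must be removed overall so the atmosphere drops by
    delta_air after re-equilibration (same units as the input)."""
    return delta_air * (1 + k)

print(total_removal_needed(1.0))   # 11.0 -> ~10 of it from the ocean
```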

On the other hand, it is also likely that anthropogenic CO2 has not fully equilibrated with the oceans (it is supposed to have a half-life of ~100 years in the air), so that even if we added no more, the oceans would probably absorb further CO2, and atmospheric levels would slowly decline.  I do not know how this might affect the 2100 CO2 levels should the scheme above be applied.  http://www.nature.com/nature/journal/v488/n7409/full/nature11299.html



Methane

Of course, CO2 is not the only greenhouse gas climate scientists have been worrying about.  The other main culprit has been methane.  Therefore, I have plotted atmospheric methane levels from 1980-2014.  Instead of plotting the gas directly, I have converted the levels to CO2 PPM equivalents.













 
Strange, to say the least, especially if the quadratic fit is correct!  At first I didn't believe what I was seeing.  Recently, however, I encountered several articles,
http://wattsupwiththat.com/2015/08/19/the-arctic-methane-emergency-appears-canceled-due-to-methane-eating-bacteria/
http://fabiusmaximus.com/2015/08/20/ipcc-defeats-the-methane-monster-apocalypse-88620/, www.epa.gov/climatechange/indicators, and
http://www.climatechange2013.org/images/report/WG1AR5_Chapter02_FINAL.pdf, which show rises in methane levels diminishing over time.  This is possibly due to recently discovered methane-consuming bacteria, found mainly in the Arctic but probably widely distributed.  These bacteria naturally convert methane to CO2, and they become even more active as temperature rises, as it is doing.  If the quadratic trend line (best fit) is correct, then methane levels in 2100 will be about the same as in 1980, peaking around 2040.  A linear trend line, on the other hand, shows methane increasing to almost twice that level.  Either way, it appears that methane can no longer be counted as a serious greenhouse gas with any certainty.
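The difference between the two trend shapes is easy to demonstrate in a few lines of Python.  The data points below are placeholders for illustration, not the values behind the chart; the point is only that a decelerating series extrapolates very differently under a quadratic versus a linear fit.

```python
def quad_through(p0, p1, p2):
    """Exact quadratic through three (x, y) points (Lagrange form)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# Placeholder methane readings in CO2-equivalent PPM: growth decelerates.
pts = [(1980, 25.0), (2000, 31.0), (2014, 33.0)]
quad = quad_through(*pts)

# Linear trend through the end points, for comparison:
slope = (33.0 - 25.0) / (2014 - 1980)
linear = lambda year: 25.0 + slope * (year - 1980)

print(f"quadratic 2100: {quad(2100):.1f}, linear 2100: {linear(2100):.1f}")
```

With decelerating input the quadratic extrapolation peaks and then falls back toward (or below) the 1980 level by 2100, while the linear one keeps climbing.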




Satellite Data

I've also plotted NOAA (blue) versus satellite temperature change data (orange):














The differences are striking.  The satellite data indicate a full degree less warming by 2100 than the NOAA data, or only +0.7 degrees above current temperatures; however, the variation is also much larger, so this projection might not be accurate.  Incidentally, for those who follow this issue, the so-called "hiatus" (of "climate-denier" fame) is based on the satellite data (plotted from 1996-2014 below):














So the pause is real, at least in the satellite data.  The fallacy, I think, is to assume that because temperatures have not risen for some eighteen years, we can declare global warming to be over.  The entire data set does not support that conclusion, and it is common for global temperatures to decrease (and increase) over short periods.  So beware.

I have puzzled over why the NOAA and satellite datasets differ as much as they do.  Since NASA has both geostationary and polar satellites, it shouldn't be a lack of global coverage, though there could be systematic errors.  There is also the oft-repeated accusation that the NOAA data have been "adjusted" (meaning fudged, in this case) to make global warming appear worse than it is (and I have read about some peculiar adjustments), but I have no data to back this claim up.  It could also be, of course, that the satellites measure a higher region of the troposphere, while the NOAA data are strictly ground level.

Naturally, it could be any combination of these reasons.


Sea Level Rise and Global Ice















Now we have three conflicting data sets!  CSIRO (30 cm by 2100) and NASA (40 cm) agree the best, and both fall within the range predicted for the middle of the next century (~ one meter rise from 1900).  Of course, if CO2 emissions are reduced in the fashion described above, these numbers should not be as high; I have no data to demonstrate this, however.


















These data show an approximately 10,000 cubic kilometer loss in Arctic ice over the last 35 years, leading to a three cm sea level rise during that period.  Unfortunately, I could not obtain the raw data for this chart, so I cannot draw a trend line through the data (the straight line came with the chart).  We can, however, calculate a geometric increase from these data by assuming that rising temperatures will double the melt each succeeding 35-year period.  This gives 2015-2050 = 20,000 km^3, 2050-2085 = 40,000 km^3, and 2085-2120 = 80,000 km^3.  As we are only extrapolating to 2100, this adds approximately another 20,000 + 40,000 + 40,000 = 100,000 km^3 of melted ice, enough to raise sea levels another 30 cm over 2015.  That is in excellent agreement with the CSIRO and NASA sea level trend lines above.
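The doubling extrapolation can be checked with a short script.  Pro-rating the final 35-year period exactly (15/35 of 80,000 km^3, rather than a rounded 40,000) gives about 94,000 km^3, consistent with the rough 100,000 km^3 figure:

```python
def ice_melt_to(end_year, base_loss=10_000.0, period=35, start=2015):
    """Geometric extrapolation: ice melt (km^3) doubles each
    successive 35-year period, pro-rated for a partial last period."""
    total, loss, year = 0.0, base_loss, start
    while year < end_year:
        loss *= 2.0                              # doubling assumption
        years_counted = min(period, end_year - year)
        total += loss * years_counted / period   # pro-rate last period
        year += period
    return total

print(f"additional melt by 2100: {ice_melt_to(2100):,.0f} km^3")
```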

I haven't included Antarctic sea ice because, up to the present, it doesn't show enough shrinkage or expansion (estimates differ) to extrapolate a significant trend.

What about ice coverage?  This is important because a significant shrinkage here could reduce the Earth's albedo (solar reflectance) and strongly enhance global warming.  Again, I can only include the chart as presented at its source, without raw data   http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jp:










Very little, if any, change in global sea ice area is demonstrated; however, if melting follows the path I have outlined above, it is quite possible that these levels will gradually shrink, enhancing global warming by reducing the planet's overall albedo.


Population Issues














It is, I hope, evident that the planet's increasing human population only adds more greenhouse gases, thereby increasing global warming.  Of equal interest, however, are the per capita emission rates.  To calculate these, I took the data from chart 2 and divided the points by population to produce the blue line:












The orange line is similar to the one used to show how CO2 emissions can be reduced to near zero before the century is out.  See my comments about this modified trend above.
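The per-capita calculation itself is a simple division; a sketch with placeholder figures (not the chart's actual data):

```python
# Annual CO2 increment (PPM/year) and world population (billions) --
# placeholder values for illustration only.
emissions_ppm = {1980: 1.6, 2000: 1.9, 2014: 2.3}
population_bn = {1980: 4.4, 2000: 6.1, 2014: 7.3}

per_capita = {year: emissions_ppm[year] / population_bn[year]
              for year in emissions_ppm}

for year in sorted(per_capita):
    print(f"{year}: {per_capita[year]:.3f} PPM/year per billion people")
```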


Conclusions

Global warming, along with the dire consequences this study supports, is very real, and no hoax or scheme by some secret cabal of government and the scientific community.  Yet neither is it an unstoppable catastrophe, as some have alleged -- in fact, the trends examined here mostly conform to the more conservative estimates (model outputs) of how serious the problem is.  That is no reason for complacency, however, for even these models inexorably lead to serious problems.

I suggest, however, that these problems can at least be strongly ameliorated if we invest in the basic science and technologies that alone can address the issue.  For myself, I am strongly convinced that we will do so; more than that, I believe we have already begun. 

Saturday, August 22, 2015

End Permian Connection with Current AGW Fails on Numbers.
























Peter Ward, Robert Scribbler, and others have been de facto collaborating on a hypothesis that current climatic conditions strongly resemble those of the end-Permian "Great Dying", and that we are headed toward the same conditions.  Although easily refuted, this idea has recently been spreading among climate fanatics, and receiving more publicity than it deserves.

Some of their work can be found at http://robertscribbler.com/2014/01/21/awakening-the-horrors-of-the-ancient-hothouse-hydrogen-sulfide-in-the-worlds-warming-oceans/, http://robertscribbler.com/2013/12/18/through-the-looking-glass-of-the-great-dying-new-study-finds-ocean-stratification-proceeded-rapidly-over-past-150-years/, and other links contained within.

Let's proceed to the important facts, which punch holes in this speculation.

At present, our atmosphere contains ~ 300 billion tons of CO2.  During the end-Permian extinction (the largest mass extinction in geohistory), a combination of massive volcanic activity (the greatest we know of) and CO2-producing bacteria may have injected more than 40 times that much CO2 in a short period (http://www.bitsofscience.org/permian-triassic-mass-extinct…/), along with massive amounts of methane and sulfur dioxide, the latter a deadly gas.  This resulted in "a doubling of carbon dioxide levels from 2,000 parts per million to 4,400 ppm [11 times today's levels]" (http://thinkprogress.org/…/doubling-of-co2-levels-in-end-t…/).  This would have raised the global temperature, then 6-7 degrees above the present, to 8.5-9.5 degrees above (my calculations; the article says three degrees), although whether this is pertinent is uncertain.
According to current theories, the combination of very high CO2 and SO2 would have caused the ocean depths to become anoxic (lacking oxygen, like the Black Sea today).  This in turn would have led to enormous blooms of hydrogen sulfide-producing bacteria in those depths (that gas is also highly toxic), which, along with SO2 and the higher temperatures, exterminated almost all life in the sea and on the land.

AGW/CC enthusiasts have been unable to resist drawing parallels with current conditions; and indeed, there is a partial, superficial resemblance.  But just as clearly, the numbers aren't remotely close (nor does any trend extrapolation lead to such a situation); also, the Earth's continents were all joined at the time, changing oceanic and atmospheric currents in many ways that could have made those extinctions worse.

Please, give it up already.

Thursday, August 20, 2015

Wireless power transfer

From Wikipedia, the free encyclopedia

Inductive charging pad for LG smartphone, using the Qi (pronounced 'Chi') system, an example of near-field wireless transfer. When the phone is set on the pad, a coil in the pad creates a magnetic field which induces a current in another coil, in the phone, charging its battery.

Wireless power transfer (WPT)[1] or wireless energy transmission is the transmission of electrical power from a power source to a consuming device without using solid wires or conductors.[2][3][4][5] It is a generic term that refers to a number of different power transmission technologies that use time-varying electromagnetic fields.[1][5][6][7] Wireless transmission is useful to power electrical devices in cases where interconnecting wires are inconvenient, hazardous, or are not possible. In wireless power transfer, a transmitter device connected to a power source, such as the mains power line, transmits power by electromagnetic fields across an intervening space to one or more receiver devices, where it is converted back to electric power and utilized.[1]

Wireless power techniques fall into two categories, non-radiative and radiative.[1][6][8][9][10] In near-field or non-radiative techniques, power is transferred over short distances by magnetic fields using inductive coupling between coils of wire or in a few devices by electric fields using capacitive coupling between electrodes.[5][8] Applications of this type are electric toothbrush chargers, RFID tags, smartcards, and chargers for implantable medical devices like artificial cardiac pacemakers, and inductive powering or charging of electric vehicles like trains or buses.[9][11] A current focus is to develop wireless systems to charge mobile and handheld computing devices such as cellphones, digital music players and portable computers without being tethered to a wall plug.

In radiative or far-field techniques, also called power beaming, power is transmitted by beams of electromagnetic radiation, like microwaves or laser beams. These techniques can transport energy longer distances but must be aimed at the receiver. Proposed applications for this type are solar power satellites, and wireless powered drone aircraft.[9] An important issue associated with all wireless power systems is limiting the exposure of people and other living things to potentially injurious electromagnetic fields (see Electromagnetic radiation and health).[9]

Overview


Generic block diagram of a wireless power system

"Wireless power transmission" is a collective term that refers to a number of different technologies for transmitting power by means of time-varying electromagnetic fields.[1][5][8] The technologies, listed in the table below, differ in the distance over which they can transmit power efficiently, whether the transmitter must be aimed (directed) at the receiver, and in the type of electromagnetic energy they use: time varying electric fields, magnetic fields, radio waves, microwaves, or infrared or visible light waves.[8]

In general a wireless power system consists of a "transmitter" device connected to a source of power such as mains power lines, which converts the power to a time-varying electromagnetic field, and one or more "receiver" devices which receive the power and convert it back to DC or AC electric power which is consumed by an electrical load.[1][8] In the transmitter the input power is converted to an oscillating electromagnetic field by some type of "antenna" device. The word "antenna" is used loosely here; it may be a coil of wire which generates a magnetic field, a metal plate which generates an electric field, an antenna which radiates radio waves, or a laser which generates light. A similar antenna or coupling device in the receiver converts the oscillating fields to an electric current. An important parameter which determines the type of waves is the frequency f in hertz of the oscillations. The frequency determines the wavelength λ = c/f of the waves which carry the energy across the gap, where c is the velocity of light.
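The wavelength relation λ = c/f is easy to evaluate.  For instance, a 10 MHz near-field system works with waves roughly 30 m long (far larger than a small coil), while a 2.45 GHz microwave beam has roughly 12 cm waves that radiate well from a dish:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength(frequency_hz):
    """lambda = c / f: wavelength of the oscillating field."""
    return C / frequency_hz

print(wavelength(10e6))    # 10 MHz   -> ~30 m (deep near field for a small coil)
print(wavelength(2.45e9))  # 2.45 GHz -> ~0.12 m (radiates well from a dish)
```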

Wireless power uses the same fields and waves as wireless communication devices like radio,[6][12] another familiar technology which involves power transmitted without wires by electromagnetic fields, used in cellphones, radio and television broadcasting, and WiFi. In radio communication the goal is the transmission of information, so the amount of power reaching the receiver is unimportant as long as it is enough that the signal to noise ratio is high enough that the information can be received intelligibly.[5][6][12] In wireless communication technologies generally only tiny amounts of power reach the receiver. By contrast, in wireless power, the amount of power received is the important thing, so the efficiency (fraction of transmitted power that is received) is the more significant parameter.[5] For this reason wireless power technologies are more limited by distance than wireless communication technologies.

These are the different wireless power technologies:[1][8][9][13][14]

Technology Range[15] Directivity[8] Frequency Antenna devices Current and or possible future applications
Inductive coupling Short Low Hz – MHz Wire coils Electric toothbrush and razor battery charging, induction stovetops and industrial heaters.
Resonant inductive coupling Mid Low MHz – GHz Tuned wire coils, lumped element resonators Charging portable devices (Qi), biomedical implants, electric vehicles, powering buses, trains, MAGLEV, RFID, smartcards.
Capacitive coupling Short Low kHz – MHz Electrodes Charging portable devices, power routing in large scale integrated circuits, Smartcards.
Magnetodynamic[13] Short N.A. Hz Rotating magnets Charging electric vehicles.
Microwaves Long High GHz Parabolic dishes, phased arrays, rectennas Solar power satellite, powering drone aircraft.
Light waves Long High ≥THz Lasers, photocells, lenses Powering drone aircraft, powering space elevator climbers.

Field regions

Electric and magnetic fields are created by charged particles in matter such as electrons. A stationary charge creates an electrostatic field in the space around it. A steady current of charges (direct current, DC) creates a static magnetic field around it. The above fields contain energy, but cannot carry power because they are static. However time-varying fields can carry power.[16] Accelerating electric charges, such as are found in an alternating current (AC) of electrons in a wire, create time-varying electric and magnetic fields in the space around them. These fields can exert oscillating forces on the electrons in a receiving "antenna", causing them to move back and forth. These represent alternating current which can be used to power a load.

The oscillating electric and magnetic fields surrounding moving electric charges in an antenna device can be divided into two regions, depending on distance Drange from the antenna.[1][4][6][8][9][10][17] The boundary between the regions is somewhat vaguely defined.[8] The fields have different characteristics in these regions, and different technologies are used for transmitting power:
  • Near-field or nonradiative region – This means the area within about 1 wavelength (λ) of the antenna.[1][4][10] In this region the oscillating electric and magnetic fields are separate[6] and power can be transferred via electric fields by capacitive coupling (electrostatic induction) between metal electrodes, or via magnetic fields by inductive coupling (electromagnetic induction) between coils of wire.[5][6][8][9] These fields are not radiative,[10] meaning the energy stays within a short distance of the transmitter.[18] If there is no receiving device or absorbing material within their limited range to "couple" to, no power leaves the transmitter.[18] The range of these fields is short, and depends on the size and shape of the "antenna" devices, which are usually coils of wire. The fields, and thus the power transmitted, decrease exponentially with distance,[4][17][19] so if the distance between the two "antennas" Drange is much larger than the diameter of the "antennas" Dant very little power will be received. Therefore, these techniques cannot be used for long distance power transmission.
Resonance, such as resonant inductive coupling, can increase the coupling between the antennas greatly, allowing efficient transmission at somewhat greater distances,[1][4][6][9][20][21] although the fields still decrease exponentially. Therefore the range of near-field devices is conventionally divided into two categories:
  • Short range – up to about one antenna diameter: Drange ≤ Dant.[18][20][22] This is the range over which ordinary nonresonant capacitive or inductive coupling can transfer practical amounts of power.
  • Mid-range – up to 10 times the antenna diameter: Drange ≤ 10 Dant.[20][21][22][23] This is the range over which resonant capacitive or inductive coupling can transfer practical amounts of power.
  • Far-field or radiative region – Beyond about 1 wavelength (λ) of the antenna, the electric and magnetic fields are perpendicular to each other and propagate as an electromagnetic wave; examples are radio waves, microwaves, or light waves.[1][4][9] This part of the energy is radiative,[10] meaning it leaves the antenna whether or not there is a receiver to absorb it. The portion of energy which does not strike the receiving antenna is dissipated and lost to the system. The amount of power emitted as electromagnetic waves by an antenna depends on the ratio of the antenna's size Dant to the wavelength of the waves λ,[24] which is determined by the frequency: λ = c/f. At low frequencies f where the antenna is much smaller than the size of the waves, Dant << λ, very little power is radiated. Therefore the near-field devices above, which use lower frequencies, radiate almost none of their energy as electromagnetic radiation. Antennas about the same size as the wavelength Dant ≈ λ such as monopole or dipole antennas, radiate power efficiently, but the electromagnetic waves are radiated in all directions (omnidirectionally), so if the receiving antenna is far away, only a small amount of the radiation will hit it.[10][20] Therefore, these can be used for short range, inefficient power transmission but not for long range transmission.[25]
However, unlike fields, electromagnetic radiation can be focused by reflection or refraction into beams. By using a high-gain antenna or optical system which concentrates the radiation into a narrow beam aimed at the receiver, it can be used for long range power transmission.[20][25] From the Rayleigh criterion, to produce the narrow beams necessary to focus a significant amount of the energy on a distant receiver, an antenna must be much larger than the wavelength of the waves used: Dant >> λ = c/f.[26][27] Practical beam power devices require wavelengths in the centimeter region or below, corresponding to frequencies above 1 GHz, in the microwave range or above.[1]
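The diffraction constraint can be turned into a rough spot-size estimate using the Airy criterion (d_spot ≈ 2.44 λ R / D_ant).  This is a sketch with illustrative numbers, not a full antenna design:

```python
def beam_spot_diameter(wavelength_m, aperture_m, distance_m):
    """Approximate diffraction-limited beam spot diameter:
    d_spot ~ 2.44 * wavelength * distance / aperture (Airy)."""
    return 2.44 * wavelength_m * distance_m / aperture_m

# A 2.45 GHz beam (wavelength ~0.122 m) from a 10 m dish spreads to a
# spot roughly 30 m across at 1 km -- the receiver must be that large
# (or closer) to capture most of the power:
print(beam_spot_diameter(0.122, 10.0, 1000.0))
```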

Near-field or non-radiative techniques

The near-field components of electric and magnetic fields die out quickly beyond a distance of about one diameter of the antenna (Dant).  Outside very close ranges the field strength and coupling is roughly proportional to (Drange/Dant)−3.[17][28]  Since power is proportional to the square of the field strength, the power transferred decreases with the sixth power of the distance, (Drange/Dant)−6,[6][19][29][30] or 60 dB per decade.  In other words, doubling the distance between transmitter and receiver causes the power received to decrease by a factor of 26 = 64.
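The sixth-power falloff is worth seeing numerically:

```python
def relative_power(distance_ratio):
    """Near-field received power scales as (D_range / D_ant)**-6,
    i.e. it drops 60 dB for every tenfold increase in distance."""
    return distance_ratio ** -6

print(relative_power(2))   # doubling the distance: 1/64 of the power
print(relative_power(10))  # ten times the distance: about one millionth
```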

Inductive coupling


Generic block diagram of an inductive wireless power system.
A light bulb powered wirelessly by induction, in 1910.
Modern inductive power transfer: an electric toothbrush charger. A coil in the stand produces a magnetic field, inducing an AC current in a coil in the toothbrush, which is rectified to charge the batteries.

In inductive coupling (electromagnetic induction[9][31] or inductive power transfer, IPT), power is transferred between coils of wire by a magnetic field.[6] The transmitter and receiver coils together form a transformer[6][9] (see diagram). An alternating current (AC) through the transmitter coil (L1) creates an oscillating magnetic field (B) by Ampere's law. The magnetic field passes through the receiving coil (L2), where it induces an alternating EMF (voltage) by Faraday's law of induction, which creates an AC current in the receiver.[5][31] The induced alternating current may either drive the load directly, or be rectified to direct current (DC) by a rectifier in the receiver, which drives the load. A few systems, such as electric toothbrush charging stands, work at 50/60 Hz so AC mains current is applied directly to the transmitter coil, but in most systems an electronic oscillator generates a higher frequency AC current which drives the coil, because transmission efficiency improves with frequency.[31]

Inductive coupling is the oldest and most widely used wireless power technology, and virtually the only one so far which is used in commercial products. It is used in inductive charging stands for cordless appliances used in wet environments such as electric toothbrushes[9] and shavers, to reduce the risk of electric shock.[7] Another application area is "transcutaneous" recharging of biomedical prosthetic devices implanted in the human body, such as cardiac pacemakers and insulin pumps, to avoid having wires passing through the skin.[32][33] It is also used to charge electric vehicles such as cars and to either charge or power transit vehicles like buses and trains.[9][14]

However the fastest growing use is wireless charging pads to recharge mobile and handheld wireless devices such as laptop and tablet computers, cellphones, digital media players, and video game controllers.[14]

The power transferred increases with frequency[31] and the mutual inductance M between the coils,[5] which depends on their geometry and the distance Drange between them. A widely used figure of merit is the coupling coefficient k = M/√(L1L2).[31][34] This dimensionless parameter is equal to the fraction of magnetic flux through L1 that passes through L2. If the two coils are on the same axis and close together so all the magnetic flux from L1 passes through L2, k = 1 and the link efficiency approaches 100%. The greater the separation between the coils, the more of the magnetic field from the first coil misses the second, and the lower k and the link efficiency are, approaching zero at large separations.[31] The link efficiency and power transferred is roughly proportional to k2.[31] In order to achieve high efficiency, the coils must be very close together, a fraction of the coil diameter Dant,[31] usually within centimeters,[25] with the coils' axes aligned. Wide, flat coil shapes are usually used, to increase coupling.[31] Ferrite "flux confinement" cores can confine the magnetic fields, improving coupling and reducing interference to nearby electronics,[31][32] but they are heavy and bulky so small wireless devices often use air-core coils.
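The coupling coefficient and its effect on efficiency can be sketched directly; the inductance values below are illustrative, not taken from any particular device:

```python
import math

def coupling_coefficient(mutual_h, l1_h, l2_h):
    """k = M / sqrt(L1 * L2): the fraction of flux from the
    transmitter coil that links the receiver coil (0 <= k <= 1)."""
    return mutual_h / math.sqrt(l1_h * l2_h)

# Two 100 uH coils; as separation grows, mutual inductance M falls,
# and link efficiency falls roughly as k**2:
for m in (90e-6, 50e-6, 10e-6):
    k = coupling_coefficient(m, 100e-6, 100e-6)
    print(f"M = {m * 1e6:.0f} uH -> k = {k:.2f}, efficiency ~ k^2 = {k * k:.4f}")
```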

Ordinary inductive coupling can only achieve high efficiency when the coils are very close together, usually adjacent. In most modern inductive systems resonant inductive coupling (described below) is used, in which the efficiency is increased by using resonant circuits.[10][21][31][35] This can achieve high efficiencies at greater distances than nonresonant inductive coupling.

Prototype inductive electric car charging system at 2011 Tokyo Auto Show
Powermat inductive charging spots in a coffee shop. Customers can set their phones and computers on them to recharge.
Wireless powered access card.

Resonant inductive coupling


Diagram of the resonant inductive wireless power system demonstrated by Marin Soljačić's MIT team in 2007. The resonant circuits were coils of copper wire which resonated with their internal capacitance (dotted capacitors) at 10 MHz. Power was coupled into the transmitter resonator, and out of the receiver resonator into the rectifier, by small coils which also served for impedance matching.

Resonant inductive coupling (electrodynamic coupling,[9] evanescent wave coupling or strongly coupled magnetic resonance[20]) is a form of inductive coupling in which power is transferred by magnetic fields (B, green) between two resonant circuits (tuned circuits), one in the transmitter and one in the receiver (see diagram, right).[6][7][9][10][35] Each resonant circuit consists of a coil of wire connected to a capacitor, or a self-resonant coil or other resonator with internal capacitance. The two are tuned to resonate at the same resonant frequency. The resonance between the coils can greatly increase coupling and power transfer, analogously to the way a vibrating tuning fork can induce sympathetic vibration in a distant fork tuned to the same pitch. Nikola Tesla first discovered resonant coupling during his pioneering experiments in wireless power transfer around the turn of the 20th century,[36][37][38] but the possibilities of using resonant coupling to increase transmission range have only recently been explored.[39] In 2007 a team led by Marin Soljačić at MIT used two coupled tuned circuits each made of a 25 cm self-resonant coil of wire at 10 MHz to achieve the transmission of 60 W of power over a distance of 2 meters (6.6 ft) (8 times the coil diameter) at around 40% efficiency.[7][9][20][37][40]

The concept behind resonant inductive coupling is that high Q factor resonators exchange energy at a much higher rate than they lose energy due to internal damping.[20] Therefore, by using resonance, the same amount of power can be transferred at greater distances, using the much weaker magnetic fields out in the peripheral regions ("tails") of the near fields (these are sometimes called evanescent fields[20]). Resonant inductive coupling can achieve high efficiency at ranges of 4 to 10 times the coil diameter (Dant).[21][22][23] This is called "mid-range" transfer,[22] in contrast to the "short range" of nonresonant inductive transfer, which can achieve similar efficiencies only when the coils are adjacent. Another advantage is that resonant circuits interact with each other so much more strongly than they do with nonresonant objects that power losses due to absorption in stray nearby objects are negligible.[10][20] A drawback of resonant coupling is that at close ranges when the two resonant circuits are tightly coupled, the resonant frequency of the system is no longer constant but "splits" into two resonant peaks, so the maximum power transfer no longer occurs at the original resonant frequency and the oscillator frequency must be tuned to the new resonance peak.[21]

Resonant technology is currently being widely incorporated in modern inductive wireless power systems.[31] One of the possibilities envisioned for this technology is area wireless power coverage. A coil in the wall or ceiling of a room might be able to wirelessly power lights and mobile devices anywhere in the room, with reasonable efficiency.[7] An environmental and economic benefit of wirelessly powering small devices such as clocks, radios, music players and remote controls is that it could drastically reduce the 6 billion batteries disposed of each year, a large source of toxic waste and groundwater contamination.[25]

Capacitive coupling

In capacitive coupling (electrostatic induction), the dual of inductive coupling, power is transmitted by electric fields[5] between electrodes such as metal plates. The transmitter and receiver electrodes form a capacitor, with the intervening space as the dielectric.[5][6][9][32][41] An alternating voltage generated by the transmitter is applied to the transmitting plate, and the oscillating electric field induces an alternating potential on the receiver plate by electrostatic induction,[5][41] which causes an alternating current to flow in the load circuit. The amount of power transferred increases with the frequency[41] and the capacitance between the plates, which is proportional to the area of the smaller plate and (for short distances) inversely proportional to the separation.[5]

Capacitive coupling has only been used practically in a few low power applications, because the very high voltages on the electrodes required to transmit significant power can be hazardous,[6][9] and can cause unpleasant side effects such as noxious ozone production. In addition, in contrast to magnetic fields,[20] electric fields interact strongly with most materials, including the human body, due to dielectric polarization.[32] Intervening materials between or near the electrodes can absorb the energy, in the case of humans possibly causing excessive electromagnetic field exposure.[6] However capacitive coupling has a few advantages over inductive. The field is largely confined between the capacitor plates, reducing interference, which in inductive coupling requires heavy ferrite "flux confinement" cores.[5][32] Also, alignment requirements between the transmitter and receiver are less critical.[5][6][41] Capacitive coupling has recently been applied to charging battery powered portable devices[42] and is being considered as a means of transferring power between substrate layers in integrated circuits.[43]
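The parallel-plate relation makes it clear why capacitive links need high frequencies and high voltages to move useful power; a quick sketch with illustrative plate dimensions:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, separation_m):
    """Parallel-plate approximation with an air dielectric:
    C = eps0 * A / d."""
    return EPS0 * area_m2 / separation_m

# Two 10 cm x 10 cm plates, 1 mm apart, form only ~89 pF -- a tiny
# capacitance through which to drive significant power.
c = plate_capacitance(0.01, 0.001)
print(f"{c * 1e12:.1f} pF")
```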

Capacitive wireless power systems
Bipolar
Unipolar

Two types of circuit have been used:
  • Bipolar design:[44] In this type of circuit, there are two transmitter plates and two receiver plates. Each transmitter plate is coupled to a receiver plate. The transmitter oscillator drives the transmitter plates in opposite phase (180° phase difference) by a high alternating voltage, and the load is connected between the two receiver plates. The alternating electric fields induce opposite phase alternating potentials in the receiver plates, and this "push-pull" action causes current to flow back and forth between the plates through the load. A disadvantage of this configuration for wireless charging is that the two plates in the receiving device must be aligned face to face with the charger plates for the device to work.
  • Unipolar design:[5][41] In this type of circuit, the transmitter and receiver have only one active electrode, and either the ground or a large inactive capacitive electrode serves as the return path for the current. The transmitter oscillator and the load is connected between the electrodes and a ground connection, inducing an alternating potential on the nearby receiving electrode with respect to ground, causing alternating current to flow through the load connected between it and ground.
Resonance can also be used with capacitive coupling to extend the range. At the turn of the century, Nikola Tesla did the first experiments with both resonant electrostatic and magnetic coupling.

Magnetodynamic coupling 

In this method, power is transmitted between two rotating armatures, one in the transmitter and one in the receiver, which rotate synchronously, coupled together by a magnetic field generated by permanent magnets on the armatures.[13] The transmitter armature is turned either by or as the rotor of an electric motor, and its magnetic field exerts torque on the receiver armature, turning it. The magnetic field acts like a mechanical coupling between the armatures.[13] The receiver armature produces power to drive the load, either by turning a separate electric generator or by using the receiver armature itself as the rotor in a generator.

This device has been proposed as an alternative to inductive power transfer for noncontact charging of electric vehicles.[13] A rotating armature embedded in a garage floor or curb would turn a receiver armature in the underside of the vehicle to charge its batteries.[13] It is claimed that this technique can transfer power over distances of 10 to 15 cm (4 to 6 inches) with high efficiency, over 90%.[13][45] Also, the low frequency stray magnetic fields produced by the rotating magnets produce less electromagnetic interference to nearby electronic devices than the high frequency magnetic fields produced by inductive coupling systems. A prototype system charging electric vehicles has been in operation at University of British Columbia since 2012. Other researchers, however, claim that the two energy conversions (electrical to mechanical to electrical again) make the system less efficient than electrical systems like inductive coupling.[13]

Far-field or radiative techniques

Far-field methods achieve longer ranges, often of multiple kilometers, where the distance is much greater than the diameter of the device(s). The main reason for longer ranges with radio wave and optical devices is the fact that electromagnetic radiation in the far-field can be made to match the shape of the receiving area (using high directivity antennas or well-collimated laser beams). The maximum directivity for antennas is physically limited by diffraction.

In general, visible light (from lasers) and microwaves (from purpose-designed antennas) are the forms of electromagnetic radiation best suited to energy transfer.
The dimensions of the components may be dictated by the distance from transmitter to receiver, the wavelength and the Rayleigh criterion or diffraction limit, used in standard radio frequency antenna design, which also applies to lasers. Airy's diffraction limit is also frequently used to determine an approximate spot size at an arbitrary distance from the aperture. Electromagnetic radiation experiences less diffraction at shorter wavelengths (higher frequencies); so, for example, a blue laser is diffracted less than a red one.

The Rayleigh criterion dictates that any radio wave, microwave or laser beam will spread and become weaker and diffuse over distance; the larger the transmitter antenna or laser aperture compared to the wavelength of radiation, the tighter the beam and the less it will spread as a function of distance (and vice versa). Smaller antennae also suffer from excessive losses due to side lobes. However, the concept of aperture for a laser differs considerably from that of an antenna: a laser aperture much larger than the wavelength typically produces multi-moded radiation, so collimators are usually used before the emitted radiation is coupled into a fiber or into free space.

Ultimately, beamwidth is physically determined by diffraction due to the dish size in relation to the wavelength of the electromagnetic radiation used to make the beam.
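The diffraction-limited spread described above can be sketched numerically. A common approximation for the spot diameter at range L from an aperture of diameter D is 2 × 1.22 × λ × L / D (from the Airy criterion); the wavelengths, apertures, and range below are illustrative assumptions only:

```python
def airy_spot_diameter(wavelength_m, range_m, aperture_m):
    """Approximate diffraction-limited spot diameter at a given range.

    Uses the Airy criterion: half-angle ~ 1.22 * lambda / D,
    so spot diameter ~ 2 * 1.22 * lambda * range / D.
    """
    return 2 * 1.22 * wavelength_m * range_m / aperture_m

# Illustrative comparison over a 1 km path:
# a 2.45 GHz microwave beam (lambda ~ 12.2 cm) from a 10 m dish
# versus a 650 nm red laser from a 5 cm aperture.
microwave = airy_spot_diameter(0.122, 1000.0, 10.0)   # ~30 m
laser = airy_spot_diameter(650e-9, 1000.0, 0.05)      # ~3 cm
print(f"microwave spot ~ {microwave:.1f} m, laser spot ~ {laser*100:.1f} cm")
```

The roughly thousand-fold difference in spot size at the same range shows why shorter wavelengths allow much smaller receiving apertures.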

Microwave power beaming can be more efficient than laser beaming, and is less prone to atmospheric attenuation caused by dust or water vapor.

The power levels are then calculated by combining the above parameters and adding in the gains and losses due to the antenna characteristics and the transparency and dispersion of the medium through which the radiation passes. This process is known as calculating a link budget.

Microwaves


An artist's depiction of a solar satellite that could send electric energy by microwaves to a space vessel or planetary surface.

Power transmission via radio waves can be made more directional, allowing longer distance power beaming, with shorter wavelengths of electromagnetic radiation, typically in the microwave range.[46] A rectenna may be used to convert the microwave energy back into electricity. Rectenna conversion efficiencies exceeding 95% have been realized. Power beaming using microwaves has been proposed for the transmission of energy from orbiting solar power satellites to Earth and the beaming of power to spacecraft leaving orbit has been considered.[47][48]

Power beaming by microwaves has the difficulty that, for most space applications, the required aperture sizes are very large due to diffraction, which limits antenna directionality. For example, the 1978 NASA study of solar power satellites required a 1 km diameter transmitting antenna and a 10 km diameter receiving rectenna for a microwave beam at 2.45 GHz.[49] These sizes can be somewhat decreased by using shorter wavelengths, although short wavelengths may have difficulties with atmospheric absorption and beam blockage by rain or water droplets. Because of the "thinned array curse," it is not possible to make a narrower beam by combining the beams of several smaller satellites.

For earthbound applications, a large-area 10 km diameter receiving array allows large total power levels to be used while operating at the low power density suggested for human electromagnetic exposure safety. A human safe power density of 1 mW/cm2 distributed across a 10 km diameter area corresponds to 750 megawatts total power level. This is the power level found in many modern electric power plants.
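The total power figure quoted above follows from multiplying the safe power density by the receiving area; a uniform 1 mW/cm² beam over a 10 km diameter disc comes to roughly 785 MW, consistent with the approximately 750 MW cited:

```python
import math

def total_power_w(power_density_mw_cm2, diameter_km):
    """Total beam power for a uniform power density over a circular area."""
    radius_cm = diameter_km * 1e5 / 2           # km -> cm, diameter -> radius
    area_cm2 = math.pi * radius_cm ** 2
    return power_density_mw_cm2 * 1e-3 * area_cm2  # mW -> W

# 1 mW/cm^2 over a 10 km diameter receiving array
p = total_power_w(1.0, 10.0)
print(f"total power ~ {p/1e6:.0f} MW")
```

Real beams taper toward the edges, which is one reason the cited figure is somewhat below the uniform-density result.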

Following World War II, which saw the development of high-power microwave emitters known as cavity magnetrons, the idea of using microwaves to transmit power was researched. By 1964, a miniature helicopter propelled by microwave power had been demonstrated.[50]

Japanese researcher Hidetsugu Yagi also investigated wireless energy transmission using a directional array antenna that he designed. In February 1926, Yagi and his colleague Shintaro Uda published their first paper on the tuned high-gain directional array now known as the Yagi antenna. While it did not prove to be particularly useful for power transmission, this beam antenna has been widely adopted throughout the broadcasting and wireless telecommunications industries due to its excellent performance characteristics.[51]

Wireless high power transmission using microwaves is well proven. Experiments in the tens of kilowatts have been performed at Goldstone in California in 1975[52][53][54] and more recently (1997) at Grand Bassin on Reunion Island.[55] These methods achieve distances on the order of a kilometer.
Under experimental conditions, microwave conversion efficiency was measured to be around 54%.[56]

A change to 24 GHz has been suggested, since microwave emitters similar to LEDs have been made with very high quantum efficiencies using negative resistance devices such as Gunn or IMPATT diodes; this would be viable for short-range links.

Recently, researchers at the University of Washington introduced power over Wi-Fi, which trickle-charges batteries and powered battery-free cameras and temperature sensors using transmissions from Wi-Fi routers.[57]

Lasers


With a laser beam centered on its panel of photovoltaic cells, a lightweight model plane makes the first flight of an aircraft powered by a laser beam inside a building at NASA Marshall Space Flight Center.

In the case of electromagnetic radiation closer to the visible region of the spectrum (tens of micrometers to tens of nanometers), power can be transmitted by converting electricity into a laser beam that is then pointed at a photovoltaic cell.[58] This mechanism is generally known as "power beaming" because the power is beamed at a receiver that can convert it to electrical energy.

Compared to other wireless methods:[59]
  • Collimated monochromatic wavefront propagation allows narrow beam cross-section area for transmission over large distances.
  • Compact size: solid state lasers fit into small products.
  • No radio-frequency interference to existing radio communication such as Wi-Fi and cell phones.
  • Access control: only receivers hit by the laser receive power.
Drawbacks include:
  • Laser radiation is hazardous. Low power levels can blind humans and other animals. High power levels can kill through localized spot heating.
  • Conversion between electricity and light is inefficient. Photovoltaic cells achieve only 40%–50% efficiency.[60] (Conversion efficiency is higher with monochromatic laser light than with sunlight on conventional solar panels.)
  • Atmospheric absorption, and absorption and scattering by clouds, fog, rain, etc., causes up to 100% losses.
  • Requires a direct line of sight with the target.
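The conversion losses listed above compound multiplicatively: the end-to-end efficiency of a laser link is the product of each stage's efficiency. In the sketch below, the ~45% photovoltaic figure reflects the 40%–50% range quoted above, while the electricity-to-laser and atmospheric figures are purely illustrative assumptions:

```python
def end_to_end_efficiency(stage_efficiencies):
    """Overall efficiency of a chain of conversions (each a fraction 0..1)."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# Assumed stages: electricity -> laser 0.50, atmospheric transmission 0.90,
# photovoltaic conversion 0.45 (per the 40-50% figure above)
eta = end_to_end_efficiency([0.50, 0.90, 0.45])
print(f"end-to-end efficiency ~ {eta:.1%}")  # about 20%
```

Even with optimistic per-stage numbers, the chained product stays well below any single stage, which is why overall laser-link efficiency is a central drawback.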
Laser "powerbeaming" technology has been mostly explored in military weapons[61][62][63] and aerospace[64][65] applications and is now being developed for commercial and consumer electronics. Wireless energy transfer systems using lasers for the consumer space have to satisfy laser safety requirements standardized under IEC 60825.[citation needed]

Other considerations include beam propagation,[66] coherence, and the range limitation problem.[67]

Geoffrey Landis[68][69][70] is one of the pioneers of solar power satellites[71] and laser-based transfer of energy especially for space and lunar missions. The demand for safe and frequent space missions has resulted in proposals for a laser-powered space elevator.[72][73]

NASA's Dryden Flight Research Center demonstrated a lightweight unmanned model plane powered by a laser beam.[74] This proof-of-concept demonstrates the feasibility of periodic recharging using the laser beam system.

Energy harvesting

In the context of wireless power, energy harvesting, also called power harvesting or energy scavenging, is the conversion of ambient energy from the environment to electric power, mainly to power small autonomous wireless electronic devices.[75] The ambient energy may come from stray electric or magnetic fields or radio waves from nearby electrical equipment, light, thermal energy (heat), or kinetic energy such as vibration or motion of the device.[75] Although the efficiency of conversion is usually low and the power gathered often minuscule (milliwatts or microwatts),[75] it can be adequate to run or recharge small micropower wireless devices such as remote sensors, which are proliferating in many fields.[75] This new technology is being developed to eliminate the need for battery replacement or charging of such wireless devices, allowing them to operate completely autonomously.

History

In 1826 André-Marie Ampère developed Ampère's circuital law showing that electric current produces a magnetic field.[76] Michael Faraday developed Faraday's law of induction in 1831, describing the electromagnetic force induced in a conductor by a time-varying magnetic flux. In 1862 James Clerk Maxwell synthesized these and other observations, experiments and equations of electricity, magnetism and optics into a consistent theory, deriving Maxwell's equations. This set of partial differential equations forms the basis for modern electromagnetics, including the wireless transmission of electrical energy.[14][35] Maxwell predicted the existence of electromagnetic waves in his 1873 A Treatise on Electricity and Magnetism.[77] In 1884 John Henry Poynting developed equations for the flow of power in an electromagnetic field, Poynting's theorem and the Poynting vector, which are used in the analysis of wireless energy transfer systems.[14][35] In 1888 Heinrich Rudolf Hertz discovered radio waves, confirming the prediction of electromagnetic waves by Maxwell.[77]

Tesla's experiments


Tesla demonstrating wireless power transmission in a lecture at Columbia College, New York, in 1891. The two metal sheets are connected to his Tesla coil oscillator, which applies a high radio frequency oscillating voltage. The oscillating electric field between the sheets ionizes the low pressure gas in the two long Geissler tubes he is holding, causing them to glow by fluorescence, similar to neon lights.
(left) Experiment in resonant inductive transfer by Tesla at Colorado Springs 1899. The coil is in resonance with Tesla's magnifying transmitter nearby, powering the light bulb at bottom. (right) Tesla's unsuccessful Wardenclyffe power station.

Inventor Nikola Tesla performed the first experiments in wireless power transmission at the turn of the 20th century,[35][37] and may have done more to popularize the idea than any other individual. In the period 1891 to 1904 he experimented with transmitting power by inductive and capacitive coupling using spark-excited radio frequency resonant transformers, now called Tesla coils, which generated high AC voltages.[35][37][78] With these he was able to transmit power for short distances without wires. In demonstrations before the American Institute of Electrical Engineers[78] and at the 1893 Columbian Exposition in Chicago he lit light bulbs from across a stage.[37] He found he could increase the distance by using a receiving LC circuit tuned to resonance with the transmitter's LC circuit,[36] using resonant inductive coupling.[37][38] At his Colorado Springs laboratory during 1899–1900, by using voltages of the order of 10 megavolts generated by an enormous coil, he was able to light three incandescent lamps at a distance of about one hundred feet.[79][80] The resonant inductive coupling which Tesla pioneered is now a familiar technology used throughout electronics; its use in wireless power has been recently rediscovered and it is currently being widely applied to short-range wireless power systems.[37][81]

The inductive and capacitive coupling used in Tesla's experiments is a "near-field" effect,[37] so it is not able to transmit power long distances. However, Tesla was obsessed with developing a wireless power distribution system that could transmit power directly into homes and factories, as proposed in a visionary 1900 article in Century magazine,[82][83][84][85] and believed that resonance was the key. He claimed to be able to transmit power on a worldwide scale, using a method that involved conduction through the Earth and atmosphere.[83][84][85][86] Tesla was vague about his methods. One of his ideas was to use balloons to suspend transmitting and receiving terminals in the air above 30,000 feet (9,100 m) in altitude, where the pressure is lower.[86] At this altitude, Tesla claimed, an ionized layer would allow electricity to be sent at high voltages (millions of volts) over long distances.

Resonant wireless power demonstration at the Franklin Institute, Philadelphia, 1937. Visitors could adjust the receiver's tuned circuit (right) with the two knobs. When the resonant frequency of the receiver was out of tune with the transmitter, the light would go out.

In 1901, Tesla began construction of a large high-voltage coil facility, the Wardenclyffe Tower at Shoreham, New York, intended as a prototype transmitter for a "World Wireless System" that was to transmit power worldwide, but by 1904 his investors had pulled out, and the facility was never completed.[84][87] Although Tesla claimed his ideas were proven, he had a history of failing to confirm his ideas by experiment,[88][89] and there seems to be no evidence that he ever transmitted significant power beyond the short-range demonstrations above.[14][35][36][79][89][90][91][92][93] The only report of long-distance transmission by Tesla is a claim, not found in reliable sources, that in 1899 he wirelessly lit 200 light bulbs at a distance of 26 miles (42 km).[79][90] There is no independent confirmation of this putative demonstration;[79][90][94] Tesla did not mention it,[90] and it does not appear in his meticulous laboratory notes.[94][95] It originated in 1944 from Tesla's first biographer, John J. O'Neill,[79] who said he pieced it together from "fragmentary material... in a number of publications".[96] In the 110 years since Tesla's experiments, efforts using similar equipment have failed to achieve long distance power transmission,[37][79][90][92] and the scientific consensus is his World Wireless system would not have worked.[14][35][36][84][90][97][98][99][100] Tesla's world power transmission scheme remains today what it was in Tesla's time, a fascinating dream.[14][84]

Microwaves

Before World War II, little progress was made in wireless power transmission.[91] Radio was developed for communication uses, but could not be used for power transmission because the relatively low-frequency radio waves spread out in all directions and little energy reached the receiver.[14][35][91] In radio communication, at the receiver, an amplifier intensifies a weak signal using energy from another source. Efficient power transmission required transmitters that could generate higher-frequency microwaves, which can be focused in narrow beams towards a receiver.[14][35][91][98]

The development of microwave technology during World War II, such as the klystron and magnetron tubes and parabolic antennas,[91] made radiative (far-field) methods practical for the first time, and the first long-distance wireless power transmission was achieved in the 1960s by William C. Brown.[14][35] In 1964 Brown invented the rectenna, which could efficiently convert microwaves to DC power, and demonstrated it with the first wireless-powered aircraft, a model helicopter powered by microwaves beamed from the ground.[14][91] A major motivation for microwave research in the 1970s and 80s was to develop a solar power satellite.[35][91] Conceived in 1968 by Peter Glaser, this would harvest energy from sunlight using solar cells and beam it down to Earth as microwaves to huge rectennas, which would convert it to electrical energy on the electric power grid.[14][101] In landmark 1975 high-power experiments, Brown demonstrated short-range transmission of 475 W of microwaves at 54% DC-to-DC efficiency, and he and Robert Dickinson at NASA's Jet Propulsion Laboratory transmitted 30 kW DC output power across 1.5 km with 2.38 GHz microwaves from a 26 m dish to a 7.3 × 3.5 m rectenna array.[14][102] The incident-RF to DC conversion efficiency of the rectenna was 80%.[14][102] In 1983 Japan launched MINIX (Microwave Ionosphere Nonlinear Interaction Experiment), a rocket experiment to test transmission of high power microwaves through the ionosphere.[14]

In recent years a focus of research has been the development of wireless-powered drone aircraft, which began in 1959 with the Dept. of Defense's RAMP (Raytheon Airborne Microwave Platform) project[91] which sponsored Brown's research. In 1987 Canada's Communications Research Center developed a small prototype airplane called Stationary High Altitude Relay Platform (SHARP) to relay telecommunication data between points on earth similar to a communication satellite. Powered by a rectenna, it could fly at 13 miles (21 km) altitude and stay aloft for months. In 1992 a team at Kyoto University built a more advanced craft called MILAX (MIcrowave Lifted Airplane eXperiment). In 2003 NASA flew the first laser powered aircraft. The small model plane's motor was powered by electricity generated by photocells from a beam of infrared light from a ground based laser, while a control system kept the laser pointed at the plane.

Near-field technologies

Inductive power transfer between nearby coils of wire is an old technology, existing since the transformer was developed in the 1800s. Induction heating has been used for 100 years. With the advent of cordless appliances, inductive charging stands were developed for appliances used in wet environments like electric toothbrushes and electric razors to reduce the hazard of electric shock.

One field in which inductive transfer has been applied is powering electric vehicles. In 1892 Maurice Hutin and Maurice Leblanc patented a wireless method of powering railroad trains using resonant coils inductively coupled to a track wire at 3 kHz.[103] The first passive RFID (Radio Frequency Identification) technologies were invented by Mario Cardullo[104] (1973) and Koelle et al.[105] (1975) and by the 1990s were being used in proximity cards and contactless smartcards.

The proliferation of portable wireless communication devices such as cellphones, tablets, and laptop computers in recent decades is currently driving the development of wireless powering and charging technology to eliminate the need for these devices to be tethered to wall plugs during charging.[106] The Wireless Power Consortium was established in 2008 to develop interoperable standards across manufacturers.[106] Its Qi inductive power standard published in August 2009 enables charging and powering of portable devices of up to 5 watts over distances of 4 cm (1.6 inches).[107] The wireless device is placed on a flat charger plate (which could be embedded in table tops at cafes, for example) and power is transferred from a flat coil in the charger to a similar one in the device.

In 2007, a team led by Marin Soljačić at MIT used coupled tuned circuits made of a 25 cm resonant coil at 10 MHz to transfer 60 W of power over a distance of 2 meters (6.6 ft) (8 times the coil diameter) at around 40% efficiency.[37][40]
