
Thursday, August 21, 2014

Wave–particle duality

From Wikipedia, the free encyclopedia
Wave–particle duality is a theory that proposes that every elementary particle exhibits the properties of not only particles, but also waves. A central concept of quantum mechanics, this duality addresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects. As Einstein described: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do".[1]

From the 1930s on, however, Einstein and the mainstream of 20th-century physics disliked the alternative picture, that of the pilot wave, in which no duality paradox is found. Developed later as the de Broglie–Bohm theory, this "particle and wave" model today accounts for all the current experimental evidence, to the same extent as the other interpretations.

The standard interpretations of quantum mechanics, nevertheless, explain the "duality paradox" as a fundamental property of the universe; and some other alternative interpretations explain the duality as an emergent, second-order consequence of various limitations of the observer.
The standard treatment focuses on explaining the behaviour from the perspective of the widely used Copenhagen interpretation, in which wave-particle duality serves as one aspect of the concept of complementarity.[2]:242, 375–376

Origin of theory

The idea of duality originated in a debate over the nature of light and matter that dates back to the 17th century, when Christiaan Huygens and Isaac Newton proposed competing theories of light: light was thought either to consist of waves (Huygens) or of particles (Newton). Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).[3] This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.[4]

Brief history of wave and particle viewpoints

Aristotle was one of the first to publicly hypothesize about the nature of light, proposing that light is a disturbance in the element air (that is, it is a wave-like phenomenon). On the other hand, Democritus—the original atomist—argued that all things in the universe, including light, are composed of indivisible sub-components (light being some form of solar atom).[5] At the beginning of the 11th Century, the Arabic scientist Alhazen wrote the first comprehensive treatise on optics; describing refraction, reflection, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light. In 1630, René Descartes popularized and accredited the opposing wave description in his treatise on light, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium ("plenum"). Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular hypothesis, arguing that the perfectly straight lines of reflection demonstrated light's particle nature; only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media (such as water and air), refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and, subsequently supported by Thomas Young's 1803 discovery of double-slit interference, was the beginning of the end for the particle light camp.[6][7]
Thomas Young's sketch of two-slit diffraction of waves, 1803

The final blow against corpuscular theory came when James Clerk Maxwell discovered that he could combine four simple equations, which had been previously discovered, along with a slight modification to describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. The wave theory had prevailed—or at least it seemed to.

While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter. In 1789, Antoine Lavoisier securely differentiated chemistry from alchemy by introducing rigor and precision into his laboratory techniques, allowing him to deduce the conservation of mass and categorize many new chemical elements and compounds. However, the nature of these essential chemical elements remained unknown. In 1799, Joseph Louis Proust advanced chemistry towards the atom by showing that elements combined in definite proportions. This led John Dalton to resurrect Democritus' atom in 1803, when he proposed that elements were made of indivisible sub-components, which explained why the varying oxides of metals (e.g. stannous oxide and cassiterite, SnO and SnO2 respectively) possess a 1:2 ratio of oxygen to one another. But Dalton and other chemists of the time had not considered that some elements occur in monatomic form (like helium) and others in diatomic form (like hydrogen), or that water was H2O, not the simpler and more intuitive HO—thus the atomic weights presented at the time were varied and often incorrect. Additionally, the formation of H2O by two parts of hydrogen gas and one part of oxygen gas would require an atom of oxygen to split in half (or two half-atoms of hydrogen to come together). This problem was solved by Amedeo Avogadro, who studied the reacting volumes of gases as they formed liquids and solids. By postulating that equal volumes of elemental gas contain an equal number of atoms, he was able to show that H2O was formed from two parts H2 and one part O2. By discovering diatomic gases, Avogadro completed the basic atomic theory, allowing the correct molecular formulae of most known compounds—as well as the correct weights of atoms—to be deduced and categorized in a consistent manner. The final stroke in classical atomic theory came when Dmitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry. But there were holes in Mendeleev's table, with no known element to fill them. His critics initially cited this as a fatal flaw, but were silenced when new elements were discovered that perfectly fit into these holes. The success of the periodic table effectively converted any remaining opposition to atomic theory; even though no single atom had ever been observed in the laboratory, chemistry was now an atomic science.
Animation showing wave–particle duality in a double-slit experiment, including the effect of an observer.
Particle impacts make visible the interference pattern of waves.
A quantum particle is represented by a wave packet.
Interference of a quantum particle with itself.

Turn of the 20th century and the paradigm shift

Particles of electricity

At the close of the 19th century, the reductionism of atomic theory began to advance into the atom itself; determining, through physics, the nature of the atom and the operation of chemical reactions. Electricity, first thought to be a fluid, was now understood to consist of particles called electrons.
This was first demonstrated by J. J. Thomson in 1897 when, using a cathode ray tube, he found that an electrical charge would travel across a vacuum (which would possess infinite resistance in classical theory). Since the vacuum offered no medium for an electric fluid to travel, this discovery could only be explained via a particle carrying a negative charge and moving through the vacuum. This electron flew in the face of classical electrodynamics, which had successfully treated electricity as a fluid for many years (leading to the invention of batteries, electric motors, dynamos, and arc lamps). More importantly, the intimate relation between electric charge and electromagnetism had been well documented following the discoveries of Michael Faraday and James Clerk Maxwell. Since electromagnetism was known to be a wave generated by a changing electric or magnetic field (a continuous, wave-like entity itself) an atomic/particle description of electricity and charge was a non sequitur. Furthermore, classical electrodynamics was not the only classical theory rendered incomplete.

Radiation quantization

Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. This worked well when describing thermal objects, whose vibrational modes were defined as the speeds of their constituent atoms, and the speed distribution derived from egalitarian partitioning of these vibrational modes closely matched experimental results. Speeds much higher than the average speed were suppressed by the fact that kinetic energy is quadratic—doubling the speed requires four times the energy—thus the number of atoms occupying high energy modes (high speeds) quickly drops off because the constant, equal partition can excite successively fewer atoms. Low speed modes would ostensibly dominate the distribution, since low speed modes would require ever less energy, and prima facie a zero-speed mode would require zero energy and its energy partition would contain an infinite number of atoms. But this would only occur in the absence of atomic interaction; when collisions are allowed, the low speed modes are immediately suppressed by jostling from the higher energy atoms, exciting them to higher energy modes. An equilibrium is swiftly reached where most atoms occupy a speed proportional to the temperature of the object (thus defining temperature as the average kinetic energy of the object).
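
The reasoning above can be made concrete with a short numerical sketch (my own illustration, not from the article; the helium-like mass and room temperature are assumed values). Because kinetic energy grows as the square of the speed, the Boltzmann factor leaves almost no atoms in modes far above the thermal average speed.

```python
# Maxwell-Boltzmann speed distribution for a monatomic gas (illustrative values).
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
m   = 6.6464731e-27  # mass of a helium atom, kg (assumed example gas)
T   = 300.0          # temperature, K (assumed)

v  = np.linspace(0.0, 10000.0, 100001)                    # speeds, m/s
dv = v[1] - v[0]
a  = m / (2.0 * k_B * T)
pdf = 4.0 * np.pi * (a / np.pi) ** 1.5 * v**2 * np.exp(-a * v**2)

v_mean = np.sum(v * pdf) * dv                             # average speed
for factor in (2, 3, 4):
    frac = np.sum(pdf[v > factor * v_mean]) * dv
    print(f"fraction of atoms faster than {factor}x the mean speed: {frac:.2e}")
```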

But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. It had been long known that thermal objects emit light. Hot metal glows red, and upon further heating, white (this is the underlying principle of the incandescent bulb). Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was trivial to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose when determining the vibrational modes of light. To simplify the problem (by limiting the vibrational modes) a longest allowable wavelength was defined by placing the thermal object in a cavity. Any electromagnetic mode at equilibrium (i.e. any standing wave) could only exist if it used the walls of the cavities as nodes. Thus there were no waves/modes with a wavelength larger than twice the length (L) of the cavity.
Standing waves in a cavity

The first few allowable modes would therefore have wavelengths of 2L, L, 2L/3, L/2, etc. (each successive wavelength adding one node to the wave). However, while the wavelength could never exceed 2L, there was no such limit on decreasing the wavelength, and adding nodes to reduce the wavelength could proceed ad infinitum. Suddenly it became apparent that the short wavelength modes completely dominated the distribution, since ever shorter wavelength modes could be crammed into the cavity. If each mode received an equal partition of energy, the short wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law, which, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe.
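
The mode counting above can be sketched numerically (an illustration of mine with an arbitrary cavity length, not part of the article): the number of modes longer than any wavelength cutoff is finite, but the number shorter than it grows without bound, so giving each mode an equal share of energy diverges.

```python
# Standing-wave modes of a 1-D cavity of length L have wavelengths 2L/n, n = 1, 2, 3, ...
L = 1.0  # cavity length in metres (arbitrary choice)

print("first few mode wavelengths:", [2.0 * L / n for n in range(1, 6)])  # 2L, L, 2L/3, L/2, 2L/5

def modes_no_shorter_than(lam_min):
    """Count the cavity modes with wavelength >= lam_min, i.e. n <= 2L/lam_min."""
    return int(2.0 * L / lam_min + 1e-9)   # small epsilon guards against float rounding

for lam in (1.0, 0.1, 0.01, 0.001):
    print(f"modes with wavelength >= {lam} m : {modes_no_shorter_than(lam)}")
# No analogous bound exists from below: every shorter wavelength admits yet more modes.
```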

The solution arrived in 1900 when Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). This was not an unsound proposal considering that macroscopic oscillators operate similarly: when studying five simple harmonic oscillators of equal amplitude but different frequency, the oscillator with the highest frequency possesses the highest energy (though this relationship is not linear like Planck's). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe; giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher energy oscillators, which necessarily increased their energy and frequency.
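
A brief numerical comparison (my own sketch; the 5000 K temperature and the sample frequencies are assumptions, not values from the article) shows how Planck's exponential factor tames the short-wavelength divergence of the Rayleigh–Jeans law while agreeing with it at low frequency.

```python
# Spectral radiance per unit frequency: Rayleigh-Jeans vs Planck.
import numpy as np

h   = 6.62607015e-34   # Planck constant, J s
c   = 2.99792458e8     # speed of light, m/s
k_B = 1.380649e-23     # Boltzmann constant, J/K

def rayleigh_jeans(nu, T):
    return 2.0 * nu**2 * k_B * T / c**2

def planck(nu, T):
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

T = 5000.0  # temperature of the radiating body, K (assumed)
for nu in (1e11, 1e13, 1e15):   # microwave, infrared, ultraviolet
    rj, pl = rayleigh_jeans(nu, T), planck(nu, T)
    print(f"nu = {nu:.0e} Hz   Rayleigh-Jeans = {rj:.3e}   Planck = {pl:.3e}   ratio = {rj / pl:.3e}")
```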

The most revolutionary aspect of Planck's treatment of the black body is that it inherently relies on an integer number of oscillators in thermal equilibrium with the electromagnetic field. These oscillators give their entire energy to the electromagnetic field, creating a quantum of light, as often as they are excited by the electromagnetic field, absorbing a quantum of light and beginning to oscillate at the corresponding frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with an energy less than hν. However, once he realized that he had quantized the electromagnetic field, he denounced particles of light as a limitation of his approximation, not a property of reality.

Photoelectric effect illuminated

Yet while Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most physicists immediately agreed that Planck's "light quanta" were unavoidable flaws in his model. A more complete derivation of black body radiation, they expected, would produce a fully continuous, fully wave-like electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model and saw in it a solution to another outstanding problem of the day: the photoelectric effect, the phenomenon whereby electrons are emitted from atoms when they absorb energy from light. Ever since the discovery of electrons eight years previously, electrons had been the thing to study in physics laboratories worldwide.

In 1902 Philipp Lenard discovered that (within the range of the experimental parameters he was using) the energy of these ejected electrons did not depend on the intensity of the incoming light, but on its frequency. So if one shines a little low-frequency light upon a metal, a few low-energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however, they possess the same low energy; there are merely more of them. In order to get high-energy electrons, one must illuminate the metal with high-frequency light. The more light there is, the more electrons are ejected. Like blackbody radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature.[8]

If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum hν, then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency; the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan, who had previously determined the charge of the electron, produced experimental results in perfect accord with Einstein's predictions. While the energy of ejected electrons reflected Planck's constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect, a modern version of which can be performed in undergraduate-level labs.[9] This phenomenon could only be explained via photons, and not through any semi-classical theory (which could alternatively explain the photoelectric effect). When Einstein received his Nobel Prize in 1921, it was not for his more difficult and mathematically laborious special and general relativity, but for the simple, yet totally revolutionary, suggestion of quantized light. Einstein's "light quanta" would not be called photons until 1925, but even in 1905 they represented the quintessential example of wave-particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.

Developmental milestones

Huygens and Newton

The earliest comprehensive theory of light was advanced by Christiaan Huygens, who proposed a wave theory of light, and in particular demonstrated how waves might interfere to form a wavefront, propagating in a straight line. However, the theory had difficulties in other matters, and was soon overshadowed by Isaac Newton's corpuscular theory of light. That is, Newton proposed that light consisted of small particles, with which he could easily explain the phenomenon of reflection. With considerably more difficulty, he could also explain refraction through a lens, and the splitting of sunlight into a rainbow by a prism. Newton's particle viewpoint went essentially unchallenged for over a century.[10]

Young, Fresnel, and Maxwell

In the early 19th century, the double-slit experiments by Young and Fresnel provided evidence for Huygens' wave theories. The double-slit experiments showed that when light is sent through a grid, a characteristic interference pattern is observed, very similar to the pattern resulting from the interference of water waves; the wavelength of light can be computed from such patterns. The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not.[11]

In the late 19th century, James Clerk Maxwell explained light as the propagation of electromagnetic waves according to the Maxwell equations. These equations were verified by experiment by Heinrich Hertz in 1887, and the wave theory became widely accepted.

Planck's formula for black-body radiation

In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. It was Einstein who later proposed that it is the electromagnetic radiation itself that is quantized, and not the energy of radiating atoms.

Einstein's explanation of the photoelectric effect

The photoelectric effect. Incoming photons on the left strike a metal plate (bottom), and eject electrons, depicted as flying off to the right.

In 1905, Albert Einstein provided an explanation of the photoelectric effect, a hitherto troubling experiment that the wave theory of light seemed incapable of explaining. He did so by postulating the existence of photons, quanta of light energy with particulate qualities.

In the photoelectric effect, it was observed that shining a light on certain metals would lead to an electric current in a circuit. Presumably, the light was knocking electrons out of the metal, causing current to flow. However, using the case of potassium as an example, it was also observed that while a dim blue light was enough to cause a current, even the strongest, brightest red light available with the technology of the time caused no current at all. According to the classical theory of light and matter, the strength or amplitude of a light wave was in proportion to its brightness: a bright light should have been easily strong enough to create a large current. Yet, oddly, this was not so.

Einstein explained this conundrum by postulating that the electrons can receive energy from the electromagnetic field only in discrete portions (quanta that were called photons): an amount of energy E that was related to the frequency f of the light by
E = hf
where h is Planck's constant (6.626 × 10⁻³⁴ J·s). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. More intense light above the threshold frequency could release more electrons, but no amount of light (using technology available at the time) below the threshold frequency could release an electron. To "violate" this law would require extremely high-intensity lasers, which had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers.[12]
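
As a small worked example (my own sketch; the roughly 2.3 eV work function assumed for potassium and the chosen wavelengths are illustrative, not taken from the article), the photon energy E = hf can be compared with the work function to decide whether any electrons are ejected at all.

```python
# Photoelectric threshold: blue photons clear an assumed ~2.3 eV work function, red photons do not.
h  = 6.62607015e-34    # Planck constant, J s
c  = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19   # joules per electron-volt

phi_potassium = 2.3 * eV   # approximate work function of potassium (assumed value)

def max_kinetic_energy(wavelength_m, work_function_J):
    """Maximum kinetic energy of an ejected electron, or None if below threshold."""
    photon_energy = h * c / wavelength_m
    excess = photon_energy - work_function_J
    return excess if excess > 0 else None

for name, lam in [("blue, 450 nm", 450e-9), ("red, 700 nm", 700e-9)]:
    e_k = max_kinetic_energy(lam, phi_potassium)
    if e_k is None:
        print(f"{name}: below threshold, no electrons ejected, however bright the light")
    else:
        print(f"{name}: electrons ejected with up to {e_k / eV:.2f} eV")
```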

Einstein was awarded the Nobel Prize in Physics in 1921 for his discovery of the law of the photoelectric effect.

De Broglie's wavelength

Propagation of de Broglie waves in 1d—real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature decreases, so the amplitude decreases again, and vice versa—the result is an alternating amplitude: a wave. Top: Plane wave. Bottom: Wave packet.

In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter,[13][14] not just light, has a wave-like nature; he related wavelength (denoted as λ), and momentum (denoted as p):
λ = h/p
This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = E/c and the wavelength (in a vacuum) by λ = c/f, where c is the speed of light in vacuum.
De Broglie's formula was confirmed three years later for electrons (which differ from photons in having a rest mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystal lattice.
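
A brief sketch of the numbers involved (my own illustration; the 54 eV electron energy is roughly the Davisson–Germer regime, and the molecular-beam case anticipates the C60 experiments described below with an assumed speed of about 220 m/s):

```python
# de Broglie wavelength lambda = h / p for an electron and for a C60 fullerene.
import math

h   = 6.62607015e-34    # Planck constant, J s
m_e = 9.1093837e-31     # electron mass, kg
eV  = 1.602176634e-19   # joules per electron-volt
u   = 1.66053907e-27    # atomic mass unit, kg

def wavelength_from_kinetic(mass_kg, kinetic_J):
    """Non-relativistic de Broglie wavelength from kinetic energy."""
    return h / math.sqrt(2.0 * mass_kg * kinetic_J)

def wavelength_from_speed(mass_kg, speed_ms):
    return h / (mass_kg * speed_ms)

print(f"54 eV electron        : {wavelength_from_kinetic(m_e, 54 * eV) * 1e9:.3f} nm")
print(f"C60 (720 u) at 220 m/s: {wavelength_from_speed(720 * u, 220.0) * 1e12:.1f} pm")
```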

De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.

Heisenberg's uncertainty principle

In his work on formulating quantum mechanics, Werner Heisenberg postulated his uncertainty principle, which states:
Δx Δp ≥ ħ/2
where
Δ indicates the standard deviation, a measure of spread or uncertainty;
x and p are a particle's position and linear momentum respectively;
ħ is the reduced Planck constant (Planck's constant divided by 2π).
Heisenberg originally explained this as a consequence of the process of measuring: Measuring position accurately would disturb momentum and vice-versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. It is now thought, however, that this only partly explains the phenomenon, but that the uncertainty also exists in the particle itself, even before the measurement is made.

In fact, the modern explanation of the uncertainty principle, extending the Copenhagen interpretation first put forward by Bohr and Heisenberg, depends even more centrally on the wave nature of a particle: Just as it is nonsensical to discuss the precise location of a wave on a string, particles do not have perfectly precise positions; likewise, just as it is nonsensical to discuss the wavelength of a "pulse" wave traveling down a string, particles do not have perfectly precise momenta (which corresponds to the inverse of wavelength). Moreover, when position is relatively well defined, the wave is pulse-like and has a very ill-defined wavelength (and thus momentum). And conversely, when momentum (and thus wavelength) is relatively well defined, the wave looks long and sinusoidal, and therefore it has a very ill-defined position.
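
The saturation of this bound by a Gaussian wave packet can be checked numerically (a minimal sketch of mine; the 0.1 nm width is an arbitrary choice): the product of the position spread and the momentum spread comes out to ħ/2, and any non-Gaussian shape gives a strictly larger product.

```python
# Delta_x * Delta_p for a Gaussian wave packet, evaluated on a grid.
import numpy as np

hbar  = 1.054571817e-34   # reduced Planck constant, J s
sigma = 1e-10             # chosen position spread, m (roughly an atomic radius)

x  = np.linspace(-20 * sigma, 20 * sigma, 40001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4.0 * sigma**2))          # real Gaussian wavefunction
psi /= np.sqrt(np.sum(psi**2) * dx)             # normalize total probability to 1

delta_x = np.sqrt(np.sum(x**2 * psi**2) * dx)   # sqrt(<x^2>), since <x> = 0 here
dpsi_dx = np.gradient(psi, x)
delta_p = hbar * np.sqrt(np.sum(dpsi_dx**2) * dx)   # <p^2> = hbar^2 * integral of |dpsi/dx|^2

print(f"delta_x * delta_p = {delta_x * delta_p:.4e}")
print(f"hbar / 2          = {hbar / 2:.4e}")
```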

de Broglie–Bohm theory

Couder experiments,[15] "materializing" the pilot wave model.

De Broglie himself had proposed a pilot wave construct to explain the observed wave-particle duality. In this view, each particle has a well-defined position and momentum, but is guided by a wave function derived from Schrödinger's equation. The pilot wave theory was initially rejected because it generated non-local effects when applied to systems involving more than one particle. Non-locality, however, soon became established as an integral feature of quantum theory (see EPR paradox), and David Bohm extended de Broglie's model to explicitly include it.

In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics,[16] the wave-particle duality vanishes: the wave behaviour is explained as scattering with wave-like appearance, because the particle's motion is subject to a guiding equation or quantum potential. "This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored," wrote J. S. Bell.[17]

The best illustration of the pilot-wave model was given by Couder's 2010 "walking droplets" experiments,[18] demonstrating pilot-wave behaviour in a macroscopic mechanical analog.[15]

Wave behavior of large objects

Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929.[19] Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves.

A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality were conducted in the 1970s using the neutron interferometer.[20] Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before.

In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported.[21] Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength is 2.5 pm, whereas the diameter of the molecule is about 1 nm, about 400 times larger. In 2012, these far-field diffraction experiments could be extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively. In these experiments the build-up of such interference patterns could be recorded in real time and with single molecule sensitivity.[22][23]

In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin[24]—a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot Lau interferometer.[25][26] In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms.[24] Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms.[27][28] In 2011, the interference of molecules as heavy as 6910 u could be demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer. These are the largest objects that have so far shown de Broglie matter-wave interference.[29] In 2013, interference of molecules beyond 10,000 u was demonstrated.[30]

Whether objects heavier than the Planck mass (about the weight of a large bacterium) have a de Broglie wavelength is theoretically unclear and experimentally unreachable; above the Planck mass a particle's Compton wavelength would be smaller than the Planck length and its own Schwarzschild radius, a scale at which current theories of physics may break down or need to be replaced by more general ones.[31]
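
A quick order-of-magnitude check of that statement (my own sketch, using standard constants; nothing here is specific to the cited reference): at the Planck mass the reduced Compton wavelength ħ/(mc) and the Schwarzschild radius 2Gm/c² both shrink to roughly the Planck length.

```python
# Planck mass: reduced Compton wavelength vs Schwarzschild radius.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)       # about 2.2e-8 kg
compton  = hbar / (m_planck * c)         # reduced Compton wavelength
schwarz  = 2.0 * G * m_planck / c**2     # Schwarzschild radius

print(f"Planck mass            ~ {m_planck:.2e} kg")
print(f"reduced Compton length ~ {compton:.2e} m")
print(f"Schwarzschild radius   ~ {schwarz:.2e} m   (both near the Planck length, ~1.6e-35 m)")
```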

More recently, Couder, Fort, et al. showed[32] that macroscopic oil droplets on a vibrating surface can be used as a model of wave–particle duality: a localized droplet creates a periodic wave field around itself, and its interaction with that field leads to quantum-like phenomena, including interference in a double-slit experiment,[33] unpredictable tunneling[34] (depending in a complicated way on the practically hidden state of the field), orbit quantization[35] (the particle has to 'find a resonance' with the field perturbations it creates: after one orbit, its internal phase has to return to the initial state) and the Zeeman effect.[36]

Treatment in modern quantum mechanics

Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation.
Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Massless particles, such as photons, are not described by the Schrödinger equation; their wave behaviour instead follows from other wave equations, such as Maxwell's equations for the electromagnetic field.

The particle-like behavior is most evident in phenomena associated with measurement in quantum mechanics. Upon measuring the location of the particle, the particle will be forced into a more localized state as given by the uncertainty principle. When viewed through this formalism, the measurement of the wave function will randomly "collapse", or rather "decohere", to a sharply peaked function at some location. For particles with mass, the likelihood of detecting the particle at any particular location is equal to the squared amplitude of the wave function there. The measurement will return a well-defined position (subject to uncertainty), a property traditionally associated with particles. It is important to note that a measurement is only a particular type of interaction where some data is recorded and the measured quantity is forced into a particular eigenstate. The act of measurement is therefore not fundamentally different from any other interaction.
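
The Born rule used here can be illustrated with a small numerical sketch (my own, with an arbitrary wave packet in dimensionless units, not the article's formalism): the probability of detecting the particle in a region is the integral of the squared amplitude of the normalized wave function over that region.

```python
# Probability of finding a particle in a region, from the squared amplitude of psi.
import numpy as np

x  = np.linspace(-10.0, 10.0, 20001)   # position grid, arbitrary units
dx = x[1] - x[0]

# A complex wave packet centred at x = 1 with some mean momentum (illustrative choice).
psi = np.exp(-(x - 1.0)**2 / 2.0) * np.exp(1j * 3.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize total probability to 1

def probability_in(a, b):
    """Probability of detecting the particle between positions a and b."""
    mask = (x >= a) & (x <= b)
    return np.sum(np.abs(psi[mask])**2) * dx

print(f"P(0 < x < 2)    = {probability_in(0.0, 2.0):.3f}")
print(f"P(-10 < x < 10) = {probability_in(-10.0, 10.0):.3f}   # ~1, as it must be")
```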

Following the development of quantum field theory the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as the wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation in which the external legs are simplifications of the full propagation and the internal lines represent some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena which previously were thought of as paradoxes are explained. Within the domain where wave-particle duality applies, quantum field theory gives the same results.

Visualization

There are two ways to visualize the wave-particle behaviour: by the "standard model", described below; and by the de Broglie–Bohm model, where no duality is perceived.

Below is an illustration of wave–particle duality as it relates to de Broglie's hypothesis and Heisenberg's uncertainty principle (above), in terms of the position and momentum space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other.

The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread.

Conversely the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread.
Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity (%) of the particles corresponds to the probability density of finding the particle with position x or momentum component p.
Top: If wavelength λ is unknown, so are momentum p, wave vector k and energy E (de Broglie relations). The particle is more localized in position space: Δx is small and Δpx is correspondingly large.
Bottom: If λ is known, so are p, k, and E. The particle is more localized in momentum space: Δpx is small and Δx is correspondingly large.
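
This Fourier-transform reciprocity is easy to demonstrate numerically (a sketch of mine in dimensionless units with ħ = 1, so momentum and wave number coincide; the grid and packet widths are arbitrary choices): squeezing the position-space packet broadens its momentum-space counterpart, and the product of the two spreads stays at the Heisenberg bound of 1/2 for Gaussian packets.

```python
# Position spread vs momentum spread of a Gaussian wave packet, via the FFT.
import numpy as np

N  = 4096
x  = np.linspace(-50.0, 50.0, N, endpoint=False)
dx = x[1] - x[0]
k  = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)      # wave-number (momentum, with hbar = 1) grid

def spreads(sigma_x):
    """Standard deviations of |psi(x)|^2 and |phi(k)|^2 for a Gaussian packet of width sigma_x."""
    psi = np.exp(-x**2 / (4.0 * sigma_x**2))
    phi = np.fft.fft(psi)
    px  = np.abs(psi)**2 / np.sum(np.abs(psi)**2)    # position probability weights
    pk  = np.abs(phi)**2 / np.sum(np.abs(phi)**2)    # momentum probability weights
    return np.sqrt(np.sum(px * x**2)), np.sqrt(np.sum(pk * k**2))

for sigma in (2.0, 1.0, 0.5):
    s_x, s_p = spreads(sigma)
    print(f"delta_x = {s_x:5.3f}   delta_p = {s_p:5.3f}   product = {s_x * s_p:5.3f}")
```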

Alternative views

Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here. These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community.

Both-particle-and-wave view

The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality; rather, a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential"), which directs them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community.[37]

At least one physicist considers the "wave-duality" a misnomer, as L. Ballentine, Quantum Mechanics, A Modern Development, p. 4, explains:
When first discovered, particle diffraction was a source of great puzzlement. Are "particles" really "waves?" In the early experiments, the diffraction patterns were detected holistically by means of a photographic plate, which could not detect individual particles. As a result, the notion grew that particle and wave properties were mutually incompatible, or complementary, in the sense that different measurement apparatuses would be required to observe them. That idea, however, was only an unfortunate generalization from a technological limitation. Today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots (Tonomura et al., 1989). Evidently, quantum particles are indeed particles, but whose behaviour is very different from what classical physics would have us expect.
The Afshar experiment[38] (2007) has been claimed to demonstrate that it is possible to simultaneously observe both wave and particle properties of photons.

Wave-only view

At least one scientist proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Carver Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon:[39]
Mead has cut the Gordian knot of quantum complementarity. He claims that atoms, with their neutrons, protons, and electrons, are not particles at all but pure waves of matter. Mead cites as the gross evidence of the exclusively wave nature of both light and matter the discovery between 1933 and 1996 of ten examples of pure wave phenomena, including the ubiquitous laser of CD players, the self-propagating electrical currents of superconductors, and the Bose–Einstein condensate of atoms.
Albert Einstein, who, in his search for a Unified Field Theory, did not accept wave-particle duality, wrote:[40]
This double nature of radiation (and of material corpuscles)...has been interpreted by quantum-mechanics in an ingenious and amazingly successful fashion. This interpretation...appears to me as only a temporary way out...
The many-worlds interpretation (MWI) is sometimes presented as a waves-only theory, including by its originator, Hugh Everett, who referred to MWI as "the wave interpretation".[41]

The Three Wave Hypothesis of R. Horodecki relates the particle to the wave.[42][43] The hypothesis implies that a massive particle is an intrinsically spatially, as well as temporally, extended wave phenomenon governed by a nonlinear law.

Neither-wave-nor-particle view

It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical, because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states:[44]
"Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...]."

Relational approach to wave–particle duality

Relational quantum mechanics has been developed as an approach that regards the detection event as establishing a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg's uncertainty principle, and thus wave–particle duality, is subsequently avoided.[45]

Applications

Although it is difficult to draw a line separating wave–particle duality from the rest of quantum mechanics, it is nevertheless possible to list some applications of this basic idea.
  • Wave–particle duality is exploited in electron microscopy, where the small wavelengths associated with the electron can be used to view objects much smaller than what is visible using visible light.
  • Similarly, neutron diffraction uses neutrons with a wavelength of about 0.1 nm, the typical spacing of atoms in a solid, to determine the structure of solids.
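
A short numerical sketch ties both applications back to the de Broglie relation (the 10 kV accelerating voltage and the 0.1 nm target wavelength below are illustrative values I have assumed, and relativistic corrections to the electron case are ignored):

```python
# Wavelengths and energies behind electron microscopy and neutron diffraction.
import math

h   = 6.62607015e-34    # Planck constant, J s
e   = 1.602176634e-19   # elementary charge, C
m_e = 9.1093837e-31     # electron mass, kg
m_n = 1.67492750e-27    # neutron mass, kg

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength of an electron accelerated through `volts`."""
    p = math.sqrt(2.0 * m_e * e * volts)
    return h / p

def neutron_energy_for_wavelength(lam_m):
    """Neutron kinetic energy, in eV, whose de Broglie wavelength equals lam_m."""
    p = h / lam_m
    return p**2 / (2.0 * m_n) / e

print(f"electron at 10 kV   : lambda ~ {electron_wavelength(1e4) * 1e12:.1f} pm")
print(f"neutron with 0.1 nm : E ~ {neutron_energy_for_wavelength(0.1e-9) * 1000:.0f} meV")
```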

Light: Particle or Wave?

July 22, 2014 Astrophysics, Quantum Mechanics
Original link:  http://www.fromquarkstoquasars.com/light-particle-or-wave/ 

Classically, light can be thought of in two ways: either as a particle or a wave. But what is it really? Well, the ‘observer effect’ makes that question kind of difficult to answer. So before we get too far into it, what is the observer effect?

Simply put, the observer effect is a principle that states that simply observing (or measuring) something can change its value. This effect is vastly more important in quantum mechanics than in everyday life, though it appears in a great many places. This means that – like most things in the quantum world – the phrase “what you see is what you get” doesn’t really apply. Therefore measuring what light is, in a way, can defeat the purpose. However, the observer effect does very nicely explain why we have made tests that conclusively prove that light is a particle, and we have made tests that conclusively prove that light is a wave. Logic dictates that it can’t be both... or does it?

First, let me explain why this is confusing. If you aren’t familiar with particle physics – or wave dynamics, in particular – you might simply be wondering what the big deal is. Why can’t it be both? Well, the fact of the matter is that particles act in a very specific, ordered manner, as do waves. Yet, for the most part, each acts completely differently from the other. Therefore, if something were to be both wave and particle, it wouldn’t make any sense from a certain standpoint. I mean, if you had to go somewhere, but you had to go east AND west to get there (not eastwestern or westeastern), you’d probably be left scratching your head as to which direction you need to take.

As we mentioned earlier, we have conclusively proven that light is a particle by giving it tests that only a particle will react to. We have also proven that light is a wave by giving it tests that only a wave will react to. Unfortunately, it has been proven that there is no test that can simultaneously test for both wave nature and particle nature, so in a way, light is whatever you want it to be. This goes back to the observer effect. By testing light, we make it whatever we want it to be: either particle or wave. Which raises an interesting question: what is light before we test it? This is where stuff gets interesting.


There are many interpretations of wave-particle duality, but the most commonly accepted interpretation is the Copenhagen Interpretation. Erwin Schrödinger is credited with the thought experiment that makes this easiest to explain. To simplify the environment of Schrödinger’s cat, let’s say that you are observing a box. You know exactly one thing about this box – that there is a cat inside. Now the cat can exist in two states: either alive or dead. Like a wave and a particle, being alive and dead are largely contradictory, so the analogy works well. According to the Copenhagen Interpretation of quantum mechanics, until you observe the cat, it is both alive and dead simultaneously.

This is referred to as a state of quantum superposition. Even things that are direct opposites can both be true simultaneously. That is, until the object in question is observed. When this occurs, it results in decoherence, which forces an object to “snap” into one state of being.

This happens, in part, due to the uncertainty principle of quantum mechanics (sometimes confused with the observer effect, this is a different but related concept). This is pretty simple to explain. The core of the principle is that the more you know about one thing, the less you are capable of knowing about another. This is also why it is impossible to know both the exact location and momentum of an electron, but that is a topic for another time.

The annoying thing about light is that we can conclusively prove it is a particle, or we can conclusively prove it is a wave. If we test light to see if it is a wave, we prove with 100% certainty that it is a wave, and due to the uncertainty principle, we can know 0% about the particle aspect of light. To test one aspect is to make it impossible to demonstrate the other. So to answer the question “Is light particles or waves?”, you have to observe light. But to observe light is to change it. So from a philosophical point of view, the question has no meaning. Who knew science could be so Zen?

I’d like to sum this up with a quote often attributed to Lewis Carroll’s “Alice’s Adventures in Wonderland”:

“Ever since her last science class, Alice had been deeply puzzled by something, and she hoped one of her new acquaintances [the Mad Hatter and March Hare] might straighten out the confusion. Putting down her cup of tea, she asked in a timid voice, ‘Is light made of waves, or is it made of particles?’ ‘Yes, exactly so,’ replied the Mad Hatter. Somewhat irritated, Alice asked in a more forceful voice, ‘What kind of answer is that? I will repeat my question: Is light particles or is it waves?’ ‘That’s right,’ said the Mad Hatter.”

Climate change

From Wikipedia, the free encyclopedia

Climate change is a significant time variation in weather patterns occurring over periods ranging from decades to millions of years. Climate change may refer to a change in average weather conditions, or in the time variation of weather around longer-term average conditions (i.e., more or fewer extreme weather events). Climate change is caused by factors such as biotic processes, variations in solar radiation received by Earth, plate tectonics, and volcanic eruptions. Certain human activities have also been identified as significant causes of recent climate change, often referred to as "global warming".[1]

Scientists actively work to understand past and future climate by using observations and theoretical models. A climate record — extending deep into the Earth's past — has been assembled, and continues to be built up, based on geological evidence from borehole temperature profiles, cores removed from deep accumulations of ice, floral and faunal records, glacial and periglacial processes, stable-isotope and other analyses of sediment layers, and records of past sea levels. More recent data are provided by the instrumental record. General circulation models, based on the physical sciences, are often used in theoretical approaches to match past climate data, make future projections, and link causes and effects in climate change.

Terminology

The most general definition of climate change is a change in the statistical properties of the climate system when considered over long periods of time, regardless of cause.[2] Accordingly, fluctuations over periods shorter than a few decades, such as El Niño, do not represent climate change.
The term sometimes is used to refer specifically to climate change caused by human activity, as opposed to changes in climate that may have resulted as part of Earth's natural processes.[3] In this sense, especially in the context of environmental policy, the term climate change has become synonymous with anthropogenic global warming. Within scientific journals, global warming refers to surface temperature increases while climate change includes global warming and everything else that increasing greenhouse gas levels will affect.[4]

Causes

On the broadest scale, the rate at which energy is received from the sun and the rate at which it is lost to space determine the equilibrium temperature and climate of Earth. This energy is distributed around the globe by winds, ocean currents, and other mechanisms to affect the climates of different regions.

Factors that can shape climate are called climate forcings or "forcing mechanisms".[5] These include processes such as variations in solar radiation, variations in the Earth's orbit, mountain-building and continental drift and changes in greenhouse gas concentrations. There are a variety of climate change feedbacks that can either amplify or diminish the initial forcing. Some parts of the climate system, such as the oceans and ice caps, respond slowly in reaction to climate forcings, while others respond more quickly.

Forcing mechanisms can be either "internal" or "external". Internal forcing mechanisms are natural processes within the climate system itself (e.g., the thermohaline circulation). External forcing mechanisms can be either natural (e.g., changes in solar output) or anthropogenic (e.g., increased emissions of greenhouse gases).

Whether the initial forcing mechanism is internal or external, the response of the climate system might be fast (e.g., a sudden cooling due to airborne volcanic ash reflecting sunlight), slow (e.g. thermal expansion of warming ocean water), or a combination (e.g., sudden loss of albedo in the arctic ocean as sea ice melts, followed by more gradual thermal expansion of the water). Therefore, the climate system can respond abruptly, but the full response to forcing mechanisms might not be fully developed for centuries or even longer.

Internal forcing mechanisms

Scientists generally define the five components of earth's climate system to include atmosphere, hydrosphere, cryosphere, lithosphere (restricted to the surface soils, rocks, and sediments), and biosphere.[6] Natural changes in the climate system ("internal forcings") result in internal "climate variability".[7] Examples include the type and distribution of species, and changes in ocean currents.

Ocean variability


The ocean is a fundamental part of the climate system: it has hundreds of times the mass of the atmosphere and very high thermal inertia, so some changes in it occur on longer timescales than in the atmosphere (the ocean depths, for example, are still adjusting in temperature from the Little Ice Age).[8]

Short-term fluctuations (years to a few decades) such as the El Niño-Southern Oscillation, the Pacific decadal oscillation, the North Atlantic oscillation, and the Arctic oscillation, represent climate variability rather than climate change. On longer time scales, alterations to ocean processes such as thermohaline circulation play a key role in redistributing heat by carrying out a very slow and extremely deep movement of water and the long-term redistribution of heat in the world's oceans.
A schematic of modern thermohaline circulation. Tens of millions of years ago, continental plate movement formed a land-free gap around Antarctica, allowing formation of the Antarctic Circumpolar Current (ACC), which keeps warm waters away from Antarctica.

Life

Life affects climate through its role in the carbon and water cycles and such mechanisms as albedo, evapotranspiration, cloud formation, and weathering.[9][10][11] Examples of how life may have affected past climate include: glaciation 2.3 billion years ago triggered by the evolution of oxygenic photosynthesis,[12][13] glaciation 300 million years ago ushered in by long-term burial of decomposition-resistant detritus of vascular land plants (forming coal),[14][15] termination of the Paleocene-Eocene Thermal Maximum 55 million years ago by flourishing marine phytoplankton,[16][17] reversal of global warming 49 million years ago by 800,000 years of arctic azolla blooms,[18][19] and global cooling over the past 40 million years driven by the expansion of grass-grazer ecosystems.[20][21]

External forcing mechanisms

Increase in atmospheric CO2 levels
Milankovitch cycles from 800,000 years ago in the past to 800,000 years in the future.
Variations in CO2, temperature and dust from the Vostok ice core over the last 450,000 years

Orbital variations

Slight variations in Earth's orbit lead to changes in the seasonal distribution of sunlight reaching the Earth's surface and how it is distributed across the globe. There is very little change to the area-averaged annually averaged sunshine; but there can be strong changes in the geographical and seasonal distribution. The three types of orbital variations are variations in Earth's eccentricity, changes in the tilt angle of Earth's axis of rotation, and precession of Earth's axis. Combined together, these produce Milankovitch cycles which have a large impact on climate and are notable for their correlation to glacial and interglacial periods,[22] their correlation with the advance and retreat of the Sahara,[22] and for their appearance in the stratigraphic record.[23]
The IPCC notes that Milankovitch cycles drove the ice age cycles; that CO2 followed temperature change "with a lag of some hundreds of years"; and that, as a feedback, it amplified temperature change.[24] The depths of the ocean lag in changing temperature (thermal inertia on that scale). As seawater temperature changed, the solubility of CO2 in the oceans changed, along with other factors affecting air-sea CO2 exchange.[25]

Solar output

Variations in solar activity during the last several centuries based on observations of sunspots and beryllium isotopes. The period of extraordinarily few sunspots in the late 17th century was the Maunder minimum.

The Sun is the predominant source of energy input to the Earth. Both long- and short-term variations in solar intensity are known to affect global climate.

Three to four billion years ago the sun emitted only 70% as much power as it does today. If the atmospheric composition had been the same as today, liquid water should not have existed on Earth. However, there is evidence for the presence of water on the early Earth, in the Hadean[26][27] and Archean[28][26] eons, leading to what is known as the faint young Sun paradox.[29] Hypothesized solutions to this paradox include a vastly different atmosphere, with much higher concentrations of greenhouse gases than currently exist.[30] Over the following approximately 4 billion years, the energy output of the sun increased and atmospheric composition changed. The Great Oxygenation Event – oxygenation of the atmosphere around 2.4 billion years ago – was the most notable alteration. Over the next five billion years the sun's ultimate death as it becomes a red giant and then a white dwarf will have large effects on climate, with the red giant phase possibly ending any life on Earth that survives until that time.

Solar output also varies on shorter time scales, including the 11-year solar cycle[31] and longer-term modulations.[32] Solar intensity variations are considered to have been influential in triggering the Little Ice Age,[33] and some of the warming observed from 1900 to 1950. The cyclical nature of the sun's energy output is not yet fully understood; it differs from the very slow change that is happening within the sun as it ages and evolves. Research indicates that solar variability has had effects including the Maunder minimum from 1645 to 1715 A.D., part of the Little Ice Age from 1550 to 1850 A.D. that was marked by relative cooling and greater glacier extent than the centuries before and afterward.[34][35] Some studies point toward solar radiation increases from cyclical sunspot activity affecting global warming, and climate may be influenced by the sum of all effects (solar variation, anthropogenic radiative forcings, etc.).[36][37]

Interestingly, a 2010 study[38] suggests “that the effects of solar variability on temperature throughout the atmosphere may be contrary to current expectations.”

In an August 2011 press release,[39] CERN announced the publication in the journal Nature of initial results from its CLOUD experiment. The results indicate that ionisation from cosmic rays significantly enhances aerosol formation in the presence of sulfuric acid and water, but that in the lower atmosphere, where ammonia is also required, this is insufficient to account for aerosol formation, and additional trace vapours must be involved. The next step is to find out more about these trace vapours, including whether they are of natural or human origin.

Volcanism

Atmospheric temperatures from 1979 to 2010, determined by MSU NASA satellites, show the effects of aerosols released by major volcanic eruptions (El Chichón and Pinatubo). El Niño is a separate event, arising from ocean variability.

The eruptions considered to be large enough to affect the Earth's climate on a scale of more than 1 year are the ones that inject over 0.1 Mt of SO2 into the stratosphere.[40] This is due to the optical properties of SO2 and sulfate aerosols, which strongly absorb or scatter solar radiation, creating a global layer of sulfuric acid haze.[41] On average, such eruptions occur several times per century, and cause cooling (by partially blocking the transmission of solar radiation to the Earth's surface) for a period of a few years.

The eruption of Mount Pinatubo in 1991, the second-largest terrestrial eruption of the 20th century, affected the climate substantially: global temperatures decreased by about 0.5 °C (0.9 °F) for up to three years.[42][43] The cooling over large parts of the Earth reduced surface temperatures in 1991-93, equivalent to a reduction in net radiation of 4 watts per square meter.[44] The Mount Tambora eruption in 1815 caused the Year Without a Summer.[45] Much larger eruptions, known as large igneous provinces, occur only a few times every fifty to one hundred million years through flood basalt, and in Earth's past have caused global warming and mass extinctions.[46]
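As a back-of-the-envelope consistency check (not a climate-model result), dividing the quoted short-term cooling by the quoted reduction in net radiation gives a transient response of roughly 0.1 K per W/m²; this is smaller than the equilibrium climate sensitivity because the ocean's thermal inertia damps the response to such a short-lived forcing.

```python
# Transient response implied by the Pinatubo figures quoted above:
# ~0.5 K of cooling for a ~4 W/m^2 reduction in net radiation.
cooling_K = 0.5
forcing_W_m2 = 4.0

transient_response = cooling_K / forcing_W_m2  # K per (W/m^2)
print(round(transient_response, 3))            # ~0.125 K per W/m^2
```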

Small eruptions, with injections of less than 0.1 Mt of sulfur dioxide into the stratosphere, impact the atmosphere only subtly, as temperature changes are comparable with natural variability. However, because smaller eruptions occur at a much higher frequency, they too have a significant impact on Earth's atmosphere.[40][47]

Seismic monitoring maps current and future trends in volcanic activity and aims to develop early warning systems. In climate modelling, the aim is to study the physical mechanisms and feedbacks of volcanic forcing.[48]

Volcanoes are also part of the extended carbon cycle. Over very long (geological) time periods, they release carbon dioxide from the Earth's crust and mantle, counteracting the uptake by sedimentary rocks and other geological carbon dioxide sinks. The US Geological Survey estimates that volcanic emissions are at a much lower level than the effects of current human activities, which generate 100–300 times the amount of carbon dioxide emitted by volcanoes.[49] A review of published studies indicates that annual volcanic emissions of carbon dioxide, including amounts released from mid-ocean ridges, volcanic arcs, and hot spot volcanoes, are only the equivalent of 3 to 5 days of human-caused output. The annual amount put out by human activities may be greater than the amount released by supereruptions, the most recent of which was the Toba eruption in Indonesia 74,000 years ago.[50]
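The two comparisons above can be cross-checked with simple arithmetic. The figure of roughly 35 Gt of CO2 per year for recent human emissions used below is an assumption for illustration only, not a value taken from the sources cited here.

```python
# Rough arithmetic linking "3 to 5 days of human output" to the
# "100-300 times" comparison (assumed human emissions, for illustration).
human_emissions_Gt_per_year = 35.0  # assumed recent anthropogenic CO2, Gt/yr

for days in (3, 5):
    volcanic_Gt_per_year = human_emissions_Gt_per_year * days / 365.0
    ratio = human_emissions_Gt_per_year / volcanic_Gt_per_year  # equals 365 / days
    print(days, round(volcanic_Gt_per_year, 2), round(ratio))   # ~0.29-0.48 Gt/yr, ~73-122x
```

Three to five days of human output corresponds to a factor of roughly 70 to 120, broadly consistent with the lower end of the 100–300 range; estimates of both quantities vary between studies.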

Although volcanoes are technically part of the lithosphere, which itself is part of the climate system, the IPCC explicitly defines volcanism as an external forcing agent.[51]

Plate tectonics

Over the course of millions of years, the motion of tectonic plates reconfigures global land and ocean areas and generates topography. This can affect both global and local patterns of climate and atmosphere-ocean circulation.[52]

The position of the continents determines the geometry of the oceans and therefore influences patterns of ocean circulation. The locations of the seas are important in controlling the transfer of heat and moisture across the globe, and therefore, in determining global climate. A recent example of tectonic control on ocean circulation is the formation of the Isthmus of Panama about 5 million years ago, which shut off direct mixing between the Atlantic and Pacific Oceans. This strongly affected the ocean dynamics of what is now the Gulf Stream and may have led to Northern Hemisphere ice cover.[53][54] During the Carboniferous period, about 300 to 360 million years ago, plate tectonics may have triggered large-scale storage of carbon and increased glaciation.[55] Geologic evidence points to a "megamonsoonal" circulation pattern during the time of the supercontinent Pangaea, and climate modeling suggests that the existence of the supercontinent was conducive to the establishment of monsoons.[56]

The size of continents is also important. Because of the stabilizing effect of the oceans on temperature, yearly temperature variations are generally lower in coastal areas than they are inland. A larger supercontinent will therefore have more area in which climate is strongly seasonal than will several smaller continents or islands.

Human influences

In the context of climate variation, anthropogenic factors are human activities which affect the climate. The scientific consensus on climate change is "that climate is changing and that these changes are in large part caused by human activities,"[57] and it "is largely irreversible."[58]
“Science has made enormous inroads in understanding climate change and its causes, and is beginning to help develop a strong understanding of current and potential impacts that will affect people today and in coming decades. This understanding is crucial because it allows decision makers to place climate change in the context of other large challenges facing the nation and the world. There are still some uncertainties, and there always will be in understanding a complex system like Earth’s climate. Nevertheless, there is a strong, credible body of evidence, based on multiple lines of research, documenting that climate is changing and that these changes are in large part caused by human activities. While much remains to be learned, the core phenomenon, scientific questions, and hypotheses have been examined thoroughly and have stood firm in the face of serious scientific debate and careful evaluation of alternative explanations.”
United States National Research Council, Advancing the Science of Climate Change
Of most concern among these anthropogenic factors is the increase in CO2 levels due to emissions from fossil fuel combustion, followed by aerosols (particulate matter in the atmosphere) and the CO2 released by cement manufacture. Other factors, including land use, ozone depletion, animal agriculture[59] and deforestation, are also of concern in the roles they play – both separately and in conjunction with other factors – in affecting climate, microclimate, and measures of climate variables.

Physical evidence

Comparison of Asian monsoon records from 200 A.D. to 2000 A.D. (shown in the background of the other plots), Northern Hemisphere temperature, Alpine glacier extent (vertically inverted as marked), and human history, as noted by the U.S. NSF.
Arctic temperature anomalies over a 100 year period as estimated by NASA. Typical high monthly variance can be seen, while longer-term averages highlight trends.

Evidence for climatic change is taken from a variety of sources that can be used to reconstruct past climates. Reasonably complete global records of surface temperature are available beginning from the mid-late 19th century. For earlier periods, most of the evidence is indirect—climatic changes are inferred from changes in proxies, indicators that reflect climate, such as vegetation, ice cores,[60] dendrochronology, sea level change, and glacial geology.

Temperature measurements and proxies

The instrumental temperature record from surface stations was supplemented by radiosonde balloons and extensive atmospheric monitoring by the mid-20th century and, from the 1970s on, by global satellite data. The 18O/16O ratio in calcite and ice-core samples, used to deduce ocean temperature in the distant past, is an example of a temperature proxy method, as are other climate metrics noted in subsequent categories.
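For reference, the oxygen-isotope proxy mentioned above is conventionally reported as a δ18O value: the per-mil deviation of a sample's 18O/16O ratio from that of a reference standard. This is the standard definition, not a result of any particular study cited here.

```latex
% Conventional definition of the oxygen-isotope anomaly, in per mil:
\delta^{18}\mathrm{O}
  = \left(
      \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
           {(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}}
      - 1
    \right) \times 1000
```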

Historical and archaeological evidence

Climate change in the recent past may be detected by corresponding changes in settlement and agricultural patterns.[61] Archaeological evidence, oral history and historical documents can offer insights into past changes in the climate. Climate change effects have been linked to the collapse of various civilizations.[61]
Decline in thickness of glaciers worldwide over the past half-century

Glaciers

Glaciers are considered among the most sensitive indicators of climate change.[62] Their size is determined by a mass balance between snow input and melt output. As temperatures warm, glaciers retreat unless snow precipitation increases to make up for the additional melt; the converse is also true.

Glaciers grow and shrink due both to natural variability and external forcings. Variability in temperature, precipitation, and englacial and subglacial hydrology can strongly determine the evolution of a glacier in a particular season. Therefore, one must average over a decadal or longer time-scale and/or over many individual glaciers to smooth out the local short-term variability and obtain a glacier history that is related to climate.
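A minimal sketch of that averaging step is given below, using invented annual mass-balance numbers purely for illustration; real analyses use standardized observations such as those compiled by the World Glacier Monitoring Service described next.

```python
# Smoothing noisy annual glacier mass balances by averaging across many
# glaciers and over decades (synthetic data, for illustration only).
import numpy as np

rng = np.random.default_rng(0)
n_glaciers, n_years = 50, 40

# Simulated annual mass balance (m water equivalent): a slow shared drift
# toward mass loss plus large glacier-to-glacier and year-to-year variability.
climate_signal = -0.01 * np.arange(n_years)
annual_balance = climate_signal + rng.normal(0.0, 0.5, size=(n_glaciers, n_years))

regional_mean = annual_balance.mean(axis=0)               # average across glaciers
decadal_mean = regional_mean.reshape(4, 10).mean(axis=1)  # then average by decade
print(decadal_mean)  # the underlying climate-related trend emerges from the noise
```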

A world glacier inventory has been compiled since the 1970s, initially based mainly on aerial photographs and maps but now relying more on satellites. This compilation tracks more than 100,000 glaciers covering a total area of approximately 240,000 km2, and preliminary estimates indicate that the remaining ice cover is around 445,000 km2. The World Glacier Monitoring Service collects data annually on glacier retreat and glacier mass balance. From these data, glaciers worldwide have been found to be shrinking significantly, with strong glacier retreat in the 1940s, stable or growing conditions during the 1920s and 1970s, and renewed retreat from the mid-1980s to the present.[63]

The most significant climate processes since the middle to late Pliocene (approximately 3 million years ago) are the glacial and interglacial cycles. The present interglacial period (the Holocene) has lasted about 11,700 years.[64] Responses to orbital variations, such as the rise and fall of continental ice sheets and significant sea-level changes, helped shape the climate. Other changes, including Heinrich events, Dansgaard–Oeschger events and the Younger Dryas, however, illustrate how glacial variations may also influence climate without orbital forcing.

Glaciers leave behind moraines that contain a wealth of material—including organic matter, quartz, and potassium that may be dated—recording the periods in which a glacier advanced and retreated. Similarly, by tephrochronological techniques, the lack of glacier cover can be identified by the presence of soil or volcanic tephra horizons whose date of deposit may also be ascertained.
This time series, based on satellite data, shows the annual Arctic sea ice minimum since 1979. The September 2010 extent was the third lowest in the satellite record.

Arctic sea ice loss

The decline in Arctic sea ice, both in extent and thickness, over the last several decades is further evidence for rapid climate change.[65] Sea ice is frozen seawater that floats on the ocean surface. It covers millions of square miles in the polar regions, varying with the seasons. In the Arctic, some sea ice remains year after year, whereas almost all Southern Ocean or Antarctic sea ice melts away and reforms annually. Satellite observations show that Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average.[66]
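A sketch of how such a percent-per-decade figure can be derived from a series of annual minima is shown below; the numbers are synthetic and merely stand in for the actual satellite record cited above.

```python
# Expressing a linear trend in annual Arctic sea-ice minima as percent per
# decade relative to a 1979-2000 baseline (synthetic data, for illustration).
import numpy as np

years = np.arange(1979, 2011)
# Hypothetical September minimum extent in million km^2: decline plus noise.
extent = 7.5 - 0.08 * (years - 1979) + np.random.default_rng(1).normal(0, 0.3, years.size)

baseline = extent[(years >= 1979) & (years <= 2000)].mean()
slope_per_year = np.polyfit(years, extent, 1)[0]   # million km^2 per year

percent_per_decade = 100.0 * slope_per_year * 10.0 / baseline
print(round(percent_per_decade, 1))                # roughly -12% per decade here
```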
This video summarizes how climate change, associated with increased carbon dioxide levels, has affected plant growth.

Vegetation

A change in the type, distribution and coverage of vegetation may occur given a change in the climate. Some changes in climate may result in increased precipitation and warmth, resulting in improved plant growth and the subsequent sequestration of airborne CO2. A gradual increase in warmth in a region will lead to earlier flowering and fruiting times, driving a change in the timing of life cycles of dependent organisms. Conversely, cold will cause plant bio-cycles to lag.[67] Larger, faster or more radical changes, however, may result in vegetation stress, rapid plant loss and desertification in certain circumstances.[68][69] An example of this occurred during the Carboniferous Rainforest Collapse (CRC), an extinction event 300 million years ago. At this time vast rainforests covered the equatorial region of Europe and America. Climate change devastated these tropical rainforests, abruptly fragmenting the habitat into isolated 'islands' and causing the extinction of many plant and animal species.[68]

Satellite data available in recent decades indicates that global terrestrial net primary production increased by 6% from 1982 to 1999, with the largest portion of that increase in tropical ecosystems, then decreased by 1% from 2000 to 2009.[70][71]

Pollen analysis

Palynology is the study of contemporary and fossil palynomorphs, including pollen. Palynology is used to infer the geographical distribution of plant species, which vary under different climate conditions. Different groups of plants have pollen with distinctive shapes and surface textures, and since the outer surface of pollen is composed of a very resilient material, pollen grains resist decay. Changes in the type of pollen found in different layers of sediment in lakes, bogs, or river deltas indicate changes in plant communities. These changes are often a sign of a changing climate.[72][73] As an example, palynological studies have been used to track changing vegetation patterns throughout the Quaternary glaciations[74] and especially since the last glacial maximum.[75]
Top: Arid ice age climate
Middle: Atlantic Period, warm and wet
Bottom: Potential vegetation in climate now if not for human effects like agriculture.[76]

Precipitation

Past precipitation can be estimated in the modern era with the global network of precipitation gauges. Surface coverage over oceans and remote areas is relatively sparse, but satellite data, available since the 1970s, reduce reliance on interpolation.[77] Quantification of climatological variation of precipitation in prior centuries and epochs is less complete but approximated using proxies such as marine sediments, ice cores, cave stalagmites, and tree rings.[78]

Climatological temperatures substantially affect precipitation. For instance, during the Last Glacial Maximum of 18,000 years ago, thermal-driven evaporation from the oceans onto continental landmasses was low, causing large areas of extreme desert, including polar deserts (cold but with low rates of precipitation).[76] In contrast, the world's climate was wetter than today near the start of the warm Atlantic Period of 8000 years ago.[76]

Estimated global land precipitation increased by approximately 2% over the course of the 20th century, though the calculated trend varies with the choice of time endpoints and is complicated by ENSO and other oscillations; for example, global land precipitation was greater in the 1950s and 1970s than in the later 1980s and 1990s, despite the positive trend over the century overall.[77][79][80] A similar slight overall increase in global river runoff and in average soil moisture has been observed.[79]

Dendroclimatology

Dendroclimatology is the analysis of tree ring growth patterns to determine past climate variations.[81] Wide and thick rings indicate a fertile, well-watered growing period, whilst thin, narrow rings indicate a time of lower rainfall and less-than-ideal growing conditions.

Ice cores

Analysis of ice in a core drilled from an ice sheet, such as the Antarctic ice sheet, can be used to show a link between temperature and global sea level variations. The air trapped in bubbles in the ice can also reveal the CO2 variations of the atmosphere from the distant past, well before modern environmental influences. The study of these ice cores has been a significant indicator of the changes in CO2 over many millennia, and continues to provide valuable information about the differences between ancient and modern atmospheric conditions.

Animals

Remains of beetles are common in freshwater and land sediments. Different species of beetles tend to be found under different climatic conditions. Because the genetic makeup of beetle lineages has not altered significantly over the millennia, past climatic conditions may be inferred from knowledge of the present climatic range of the different species and the age of the sediments in which remains are found.[82]

Similarly, the historical abundance of various fish species has been found to have a substantial relationship with observed climatic conditions.[83] Changes in the primary productivity of autotrophs in the oceans can affect marine food webs.[84]

Sea level change

Global sea level change for much of the last century has generally been estimated using tide gauge measurements collated over long periods of time to give a long-term average. More recently, altimeter measurements, in combination with accurately determined satellite orbits, have provided an improved measurement of global sea level change.[85] To measure sea levels prior to instrumental measurements, scientists have dated coral reefs that grow near the surface of the ocean, coastal sediments, marine terraces, ooids in limestones, and nearshore archaeological remains. The predominant dating methods used are uranium series and radiocarbon, with cosmogenic radionuclides being sometimes used to date terraces that have experienced relative sea level fall. In the early Pliocene, global temperatures were 1–2 °C warmer than the present temperature, yet sea level was 15–25 meters higher than today.[86]

Global Warming’s 15 Year Pause Explained By Scientists

Original link:  http://www.valuewalk.com/2014/07/global-warmings-15-year-pause/
 
The research appears to explain one of the most important pieces of evidence cited in support of climate-change denial

A fifteen-year pause in global warming has been puzzling scientists in recent years, but researchers are now claiming to have cracked the puzzle. Shaun Lovejoy, who authored the piece on the so-called hiatus in global warming, says that the lack of significant warming in recent years, despite the increase in greenhouse gases in the atmosphere, was caused by a natural cooling event that interrupted the underlying warming.

Global warming
 
The paper, titled “Return periods of global climate fluctuations and the pause,” appears in the scientific journal Geophysical Research Letters, published by Wiley. It has not yet appeared in a print issue and is available in the Wiley online library as an early view of the research.

Global cooling is a separate, and equal force

There was, according to the research, a wave of global cooling over the last fifteen years which masked the effects of global warming on the planet. The cooling event, which Mr. Lovejoy refers to as “a natural cooling fluctuation,” is of a type that appears to affect the planet once every 20 to 50 years. The study involved the use of a statistical methodology developed by Mr. Lovejoy in two recently published papers.

According to the scientist, “The pause thus has a convincing statistical explanation.” Whether or not that is considered to be true will be decided by a lengthy peer-review process that the paper is sure to be subjected to. There has been a global cooling effect amounting to between 0.28 and 0.37 degrees in the last fifteen years according to the research.
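The general flavour of a return-period argument can be illustrated with a short sketch. This is a generic empirical calculation on a synthetic temperature series, not Lovejoy's actual methodology.

```python
# Crude empirical estimate of how often 15-year temperature fluctuations of a
# given size occur in a synthetic, red-noise-like series (illustration only).
import numpy as np

rng = np.random.default_rng(42)
n_years = 2000
temps = np.cumsum(rng.normal(0.0, 0.05, n_years))  # synthetic temperature anomalies

window = 15
changes = temps[window:] - temps[:-window]          # all overlapping 15-year changes

threshold = 0.3                                     # fluctuation size of interest, degrees
exceed_fraction = np.mean(np.abs(changes) >= threshold)
if exceed_fraction > 0:
    # A crude return period: one exceedance every 1/fraction years, on average.
    print("approximate return period:", round(1.0 / exceed_fraction), "years")
```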

Lovejoy’s research involved the use of his own statistical model to study the fluctuations in global temperature over the industrial period.
The increase in temperature that was recorded in the four years leading up to the hiatus, according to the research, shows that global warming was still taking place; it was just masked by a period of global cooling that has been greater in magnitude in the intervening years.

Global warming controversy continues

The lack of significant global warming in the last fifteen years has fueled the rhetoric of so-called climate-change deniers, those who refuse to accept that global warming has been caused by man-made releases of carbon dioxide. In the last fifteen years a huge amount of greenhouse gas has ended up in the atmosphere, yet little warming has been observed as a result. This study claims to explain that effect, but the research will have to be backed by more than a single paper before it is widely accepted by the scientific community, let alone by lay opponents of the idea of climate change.

This research concludes that global warming is still a problem, but the efficacy of the statistical model will have to be tested in a more complete way before its conclusions are accepted. Mr. Lovejoy says that his model is already being used to analyze precipitation trends and regional variation in climate.

Cryogenics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cryogenics...