
Saturday, October 10, 2020

Scattering theory

From Wikipedia, the free encyclopedia
 
Top: the real part of a plane wave travelling upwards. Bottom: The real part of the field after inserting in the path of the plane wave a small transparent disk of index of refraction higher than the index of the surrounding medium. This object scatters part of the wave field, although at any individual point, the wave's frequency and wavelength remain intact.

In mathematics and physics, scattering theory is a framework for studying and understanding the scattering of waves and particles. Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance sunlight scattered by rain drops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely "in the distant past", come together and interact with one another or with a boundary condition, and then propagate away "to the distant future". The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object.

Since its early statement for radiolocation, the problem has found a vast number of applications, such as echolocation, geophysical surveys, nondestructive testing, medical imaging and quantum field theory, to name just a few.

Conceptual underpinnings

The concepts used in scattering theory go by different names in different fields. The object of this section is to point the reader to common threads.

Composite targets and range equations

Equivalent quantities used in the theory of scattering from composite specimens, but with a variety of units.

When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case, consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident flux I of particles per unit area per unit time, i.e. that

dI/dx = −Q I

where Q is an interaction coefficient and x is the distance traveled in the target.

The above ordinary first-order differential equation has solutions of the form:

I = I0 e^(−Q Δx) = I0 e^(−Δx/λ) = I0 e^(−σ η Δx) = I0 e^(−ρ Δx/τ),

where I0 is the initial flux, path length Δx ≡ x − x0, the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/λ = η σ = ρ/τ, as shown in the figure at left.

In electromagnetic absorption spectroscopy, for example, interaction coefficient (e.g. Q in cm−1) is variously called opacity, absorption coefficient, and attenuation coefficient. In nuclear physics, area cross-sections (e.g. σ in barns or units of 10−24 cm2), density mean free path (e.g. τ in grams/cm2), and its reciprocal the mass attenuation coefficient (e.g. in cm2/gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path[1] (e.g. λ in nanometers) is often discussed[2] instead.
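These conversions can be sketched numerically. The following is a minimal illustration (the material values are assumed for the example, not taken from the text) of deriving Q, λ, and τ from an area cross-section σ:

```python
# Sketch (values assumed): converting between the equivalent attenuation
# quantities Q, lambda, sigma, and tau via Q = 1/lambda = eta*sigma = rho/tau.

N_A = 6.022e23          # Avogadro's number, targets per mole

def interaction_quantities(sigma_cm2, rho_g_cm3, molar_mass_g):
    """From an area cross-section sigma, derive Q, lambda, and tau."""
    eta = N_A * rho_g_cm3 / molar_mass_g   # targets per cm^3
    Q = eta * sigma_cm2                    # interaction coefficient, 1/cm
    lam = 1.0 / Q                          # interaction mean free path, cm
    tau = rho_g_cm3 / Q                    # density mean free path, g/cm^2
    return Q, lam, tau

# Example: a 1-barn (1e-24 cm^2) cross-section in an aluminium-like material
Q, lam, tau = interaction_quantities(1e-24, 2.7, 27.0)
print(f"Q   = {Q:.3g} 1/cm")       # ~0.06 1/cm
print(f"lam = {lam:.3g} cm")
print(f"tau = {tau:.3g} g/cm^2")
```

The identities of the range equation then hold by construction: Q·λ = 1 and Q·τ = ρ.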

In theoretical physics

In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves in sea water coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of quantum electrodynamics, quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles.

In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann–Schwinger equation and the Faddeev equations, are also widely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, being destroyed, or creating new particles. The products and unused reagents then fly away to infinity again.

(The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation.
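The Born approximation can be illustrated with a small numerical sketch (not from the article; units with ħ = m = 1 are assumed here). For an attractive Yukawa potential V(r) = −V0·e^(−a·r)/r, the first Born scattering amplitude has the known closed form f(q) = 2V0/(a² + q²), which lets us check the quadrature:

```python
# Sketch of the first Born approximation, f(q) = -(2/q) * int_0^inf r V(r) sin(qr) dr,
# in units where hbar = m = 1, for the Yukawa potential V(r) = -V0 exp(-a r)/r.
import math

def born_amplitude_yukawa(q, V0=1.0, a=1.0, rmax=50.0, n=20000):
    """Numerical Born amplitude for an attractive Yukawa potential."""
    dr = rmax / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        # r * V(r) * sin(q r); the 1/r of the potential cancels the factor r
        total += r * (-V0 * math.exp(-a * r) / r) * math.sin(q * r) * dr
    return -(2.0 / q) * total

q = 0.7                                  # momentum transfer
numeric = born_amplitude_yukawa(q)
analytic = 2.0 / (1.0 + q * q)           # closed form 2 V0 / (a^2 + q^2)
print(numeric, analytic)                 # should agree to several digits
```

The agreement with the closed form confirms the quadrature; for a general potential only the numerical integral is available.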

Elastic and inelastic scattering

The term "elastic scattering" implies that the internal states of the scattered particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles.

The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential. The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized, representing an inelastic scattering process.

The term "deep inelastic scattering" refers to a special kind of scattering experiment in particle physics.

The mathematical framework

In mathematics, scattering theory deals with a more abstract formulation of the same set of concepts. For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time. One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future".

Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Spaces with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together.

An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models.

 

Wave–particle duality

From Wikipedia, the free encyclopedia

Wave–particle duality is the concept in quantum mechanics that every particle or quantum entity may be described as either a particle or a wave. It expresses the inability of the classical concepts "particle" or "wave" to fully describe the behaviour of quantum-scale objects. As Albert Einstein wrote:

It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do.

Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles exhibit a wave nature and vice versa. This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.

Although the use of the wave-particle duality has worked well in physics, the meaning or interpretation has not been satisfactorily resolved; see Interpretations of quantum mechanics.

Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character, in respectively different physical settings. He saw such duality as one aspect of the concept of complementarity. Bohr regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture, as essential to the quantum mechanical account.

Werner Heisenberg considered the question further. He saw the duality as present for all quantic entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates an entirely new concept of fields that exist in ordinary space-time, causality still being visualizable. Classical field values (e.g. the electric and magnetic field strengths of Maxwell) are replaced by an entirely new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory.

History

Classical particle and wave theories of light

Thomas Young's sketch of two-slit diffraction of waves, 1803

Democritus (5th century BC) argued that all things in the universe, including light, are composed of indivisible sub-components. Euclid (4th–3rd century BC) wrote treatises on light propagation and stated the principle of the shortest trajectory of light, including multiple reflections on mirrors (including spherical ones), while Plutarch (1st–2nd century AD) described multiple reflections on spherical mirrors, discussing the creation of larger or smaller images, real or imaginary, including the chirality of the images. At the beginning of the 11th century, the Arabic scientist Ibn al-Haytham wrote the first comprehensive Book of Optics, describing reflection, refraction, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light.

In 1630, René Descartes popularized and accredited the opposing wave description in his treatise on light, The World, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium, i.e. the luminiferous aether. Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular theory, arguing that the perfectly straight lines of reflection demonstrated light's particle nature: only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium.

Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens, and later Augustin-Jean Fresnel, mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media, refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's discovery of wave interference of light in his double-slit experiment of 1801. The wave view did not immediately displace the ray and particle view, but it began to dominate scientific thinking about light in the mid-19th century, since it could explain polarization phenomena that the alternatives could not.

James Clerk Maxwell discovered that he could apply his previously discovered Maxwell's equations, along with a slight modification, to describe self-propagating waves of oscillating electric and magnetic fields. It quickly became apparent that visible light, ultraviolet light, and infrared light were all electromagnetic waves of differing frequency.

Black-body radiation and Planck's law

In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make a mathematical assumption of quantized energy of the oscillators i.e. atoms of the black body that emit radiation. Einstein later proposed that electromagnetic radiation itself is quantized, not the energy of radiating atoms.

Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. That thermal objects emit light had been long known. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose: if each mode received an equal partition of energy, the short-wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law, which, while correctly predicting the intensity of long-wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe.
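The divergence can be made concrete with a short numerical sketch (temperature and wavelengths chosen for illustration): the Rayleigh–Jeans law tracks Planck's law at long wavelengths but overshoots it without bound at short ones.

```python
# Sketch: spectral radiance of a 5000 K black body by the Rayleigh-Jeans law
# versus Planck's law, showing the short-wavelength divergence (the
# "ultraviolet catastrophe").
import math

h  = 6.626e-34   # Planck constant, J*s
c  = 2.998e8     # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

def rayleigh_jeans(lam, T):
    return 2.0 * c * kB * T / lam**4

def planck(lam, T):
    # expm1 keeps the denominator accurate when h c / (lam kB T) is small
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T = 5000.0
for lam in (10e-6, 1e-6, 100e-9):   # 10 um, 1 um, 100 nm
    ratio = rayleigh_jeans(lam, T) / planck(lam, T)
    print(f"{lam * 1e9:8.0f} nm  RJ/Planck = {ratio:.3g}")
```

At 10 μm the two laws nearly agree; at 100 nm the classical prediction exceeds Planck's by many orders of magnitude.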

In 1900, Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and that the energy of these oscillators increased linearly with frequency (according to E = hf, where h is Planck's constant and f is the frequency). This was not an unsound proposal, considering that macroscopic oscillators operate similarly: when studying five simple harmonic oscillators of equal amplitude but different frequency, the oscillator with the highest frequency possesses the highest energy (though this relationship is not linear like Planck's). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe: giving an equal partition to high-frequency oscillators produced successively fewer excited oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher-energy oscillators, which necessarily increased their energy and frequency.

The most revolutionary aspect of Planck's treatment of the black body is that it inherently relies on an integer number of oscillators in thermal equilibrium with the electromagnetic field. These oscillators give their entire energy to the electromagnetic field, creating a quantum of light, as often as they are excited by the electromagnetic field, absorbing a quantum of light and beginning to oscillate at the corresponding frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with an energy less than hf. However, once realizing that he had quantized the electromagnetic field, he denounced particles of light as a limitation of his approximation, not a property of reality.

Photoelectric effect

The photoelectric effect. Incoming photons on the left strike a metal plate (bottom), and eject electrons, depicted as flying off to the right.

While Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most contemporary physicists agreed that Planck's "light quanta" represented only flaws in his model. A more-complete derivation of black-body radiation would yield a fully continuous and "wave-like" electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model to produce his solution to another outstanding problem of the day: the photoelectric effect, wherein electrons are emitted from atoms when they absorb energy from light. Since the electron's existence had been theorized eight years earlier, phenomena had been studied with the electron model in mind in physics laboratories worldwide.

In 1902, Philipp Lenard discovered that the energy of these ejected electrons did not depend on the intensity of the incoming light, but instead on its frequency. So if one shines a little low-frequency light upon a metal, a few low-energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however, they possess the same low energy; there are merely more of them. The more light there is, the more electrons are ejected. Whereas in order to get high-energy electrons, one must illuminate the metal with high-frequency light. Like blackbody radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature.

If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum hf, then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency, the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan produced experimental results in perfect accord with Einstein's predictions.
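The linear relation can be sketched directly (the work function value below is an assumption for the example, roughly that of potassium; it is not a figure from the text): the slope of ejected-electron energy against frequency recovers Planck's constant, as in Millikan's measurement.

```python
# Sketch of Einstein's photoelectric relation KE_max = h*f - phi.
h  = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19    # joules per electron-volt

phi = 2.29 * eV   # assumed work function, roughly that of potassium

def ke_max(f):
    """Maximum kinetic energy of an ejected electron; 0 below threshold."""
    return max(0.0, h * f - phi)

f1, f2 = 7.0e14, 9.0e14                     # two frequencies above threshold
slope = (ke_max(f2) - ke_max(f1)) / (f2 - f1)
print(slope)                                 # recovers Planck's constant h
print(ke_max(4.0e14))                        # red light, below threshold: 0.0
```

Intensity never enters the energy formula; it only scales the number of photons and hence the number of ejected electrons.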

While the energy of ejected electrons reflected Planck's constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect, a modern version of which can be performed in undergraduate-level labs. This phenomenon could only be explained via photons.

Einstein's "light quanta" would not be called photons until 1925, but even in 1905 they represented the quintessential example of wave-particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.

Einstein's explanation of photoelectric effect

In 1905, Albert Einstein provided an explanation of the photoelectric effect, an experiment that the wave theory of light failed to explain. He did so by postulating the existence of photons, quanta of light energy with particulate qualities.

In the photoelectric effect, it was observed that shining a light on certain metals would lead to an electric current in a circuit. Presumably, the light was knocking electrons out of the metal, causing current to flow. However, using the case of potassium as an example, it was also observed that while a dim blue light was enough to cause a current, even the strongest, brightest red light available with the technology of the time caused no current at all. According to the classical theory of light and matter, the strength or amplitude of a light wave was in proportion to its brightness: a bright light should have been easily strong enough to create a large current. Yet, oddly, this was not so.

Einstein explained this enigma by postulating that the electrons can receive energy from the electromagnetic field only in discrete units (quanta or photons): an amount of energy E that was related to the frequency f of the light by

E = hf

where h is Planck's constant (6.626 × 10−34 J·s). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light below the threshold frequency could release an electron. To violate this law would require extremely high-intensity lasers that had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers.

Einstein was awarded the Nobel Prize in Physics in 1921 for his discovery of the law of the photoelectric effect.

de Broglie's hypothesis

Propagation of de Broglie waves in 1d—real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature decreases, so the amplitude decreases again, and vice versa—the result is an alternating amplitude: a wave. Top: Plane wave. Bottom: Wave packet.

In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter has a wave-like nature; he related wavelength and momentum:

λ = h/p

This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = E/c and the wavelength (in a vacuum) by λ = c/f, where c is the speed of light in vacuum.
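The consistency of this generalization for photons can be checked in a few lines (the frequency below is an arbitrary optical value chosen for illustration):

```python
# Sketch: de Broglie's lambda = h/p reduces to the photon relations
# p = E/c and lambda = c/f.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s

f = 5.0e14                    # an optical frequency, Hz
E = h * f                     # photon energy (Einstein)
p = E / c                     # photon momentum
lam_de_broglie = h / p        # wavelength from de Broglie's relation
lam_photon = c / f            # wavelength from wave optics
print(lam_de_broglie, lam_photon)   # the two agree (~600 nm)
```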

De Broglie's formula was confirmed three years later for electrons with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided the electron beam through a crystalline grid in their experiment popularly known as Davisson–Germer experiment.

De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.

Heisenberg's uncertainty principle

In his work on formulating quantum mechanics, Werner Heisenberg postulated his uncertainty principle, which states:

σx σp ≥ ħ/2

where

σ indicates standard deviation, a measure of spread or uncertainty;
x and p are a particle's position and linear momentum respectively;
ħ is the reduced Planck constant (Planck's constant divided by 2π).

Heisenberg originally explained this as a consequence of the process of measuring: Measuring position accurately would disturb momentum and vice versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. The thought is now, however, that this only partly explains the phenomenon, but that the uncertainty also exists in the particle itself, even before the measurement is made.

In fact, the modern explanation of the uncertainty principle, extending the Copenhagen interpretation first put forward by Bohr and Heisenberg, depends even more centrally on the wave nature of a particle. Just as it is nonsensical to discuss the precise location of a wave on a string, particles do not have perfectly precise positions; likewise, just as it is nonsensical to discuss the wavelength of a "pulse" wave traveling down a string, particles do not have perfectly precise momenta that correspond to the inverse of the wavelength. Moreover, when position is relatively well defined, the wave is pulse-like and has a very ill-defined wavelength, and thus momentum. Conversely, when momentum, and thus wavelength, is relatively well defined, the wave looks long and sinusoidal, and therefore has a very ill-defined position.
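This wave-based picture can be verified numerically. The sketch below (units where ħ = 1; the Gaussian packet and grid parameters are assumptions for the example) computes σx and σp for a Gaussian wave packet, for which the product sits exactly at the lower bound ħ/2:

```python
# Numerical check of sigma_x * sigma_p >= hbar/2 for a Gaussian packet
# psi(x) ~ exp(-x^2 / (4 sigma^2)), in units where hbar = 1.
import math

def uncertainty_product(sigma, L=10.0, n=4000):
    """Return sigma_x * sigma_p (in units of hbar) on a grid [-L, L]."""
    dx = 2 * L / n
    xs = [-L + (i + 0.5) * dx for i in range(n)]
    psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]
    norm = sum(p * p for p in psi) * dx
    x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * dx / norm
    # For a real psi, <p> = 0 and <p^2> = hbar^2 * integral |psi'|^2 dx
    dpsi = [-x / (2 * sigma * sigma) * p for x, p in zip(xs, psi)]
    p2 = sum(d * d for d in dpsi) * dx / norm
    return math.sqrt(x2) * math.sqrt(p2)

print(uncertainty_product(1.0))   # ~0.5, i.e. hbar/2
print(uncertainty_product(0.3))   # ~0.5 again: narrower in x, wider in p
```

Squeezing the packet in position (smaller sigma) widens it in momentum by exactly the compensating factor, so the product never drops below ħ/2.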

de Broglie–Bohm theory

Couder experiments, "materializing" the pilot wave model.

De Broglie himself had proposed a pilot wave construct to explain the observed wave-particle duality. In this view, each particle has a well-defined position and momentum, but is guided by a wave function derived from Schrödinger's equation. The pilot wave theory was initially rejected because it generated non-local effects when applied to systems involving more than one particle. Non-locality, however, soon became established as an integral feature of quantum theory and David Bohm extended de Broglie's model to explicitly include it.

In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics, the wave–particle duality vanishes: the wave behaviour is explained as scattering with wave appearance, because the particle's motion is subject to a guiding equation or quantum potential.

This idea seems to me so natural and simple, to resolve the wave–particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored. – J. S. Bell

The best illustration of the pilot-wave model was given by Couder's 2010 "walking droplets" experiments, demonstrating the pilot-wave behaviour in a macroscopic mechanical analog.

Wave nature of large objects

Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929. Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves.

A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality was conducted in the 1970s using the neutron interferometer. Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before.

In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported. Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength of the incident beam was about 2.5 pm, whereas the diameter of the molecule is about 1 nm, about 400 times larger. In 2012, these far-field diffraction experiments were extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively. In these experiments the build-up of such interference patterns could be recorded in real time and with single-molecule sensitivity.
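The reported numbers can be sanity-checked from λ = h/(mv). The implied beam velocity (an inference here, not a figure from the text) comes out at a few hundred m/s, typical of such molecular-beam sources:

```python
# Sketch: back-of-envelope check of the ~2.5 pm de Broglie wavelength
# reported for the C60 beam.
h = 6.626e-34     # Planck constant, J*s
u = 1.661e-27     # atomic mass unit, kg

m_c60 = 720 * u               # mass of a C60 fullerene
lam = 2.5e-12                 # reported de Broglie wavelength, m
v = h / (m_c60 * lam)         # velocity implied by lambda = h / (m v)
print(v)                      # a few hundred m/s
print(1e-9 / lam)             # molecule diameter / wavelength: ~400
```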

In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin, a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot–Lau interferometer. In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms. Large molecules are already so complex that they give experimental access to some aspects of the quantum–classical interface, i.e., to certain decoherence mechanisms. In 2011, the interference of molecules as heavy as 6910 u could be demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer. In 2013, interference of molecules beyond 10,000 u was demonstrated.

Whether objects heavier than the Planck mass (about the weight of a large bacterium) have a de Broglie wavelength is theoretically unclear and experimentally unreachable; above the Planck mass a particle's Compton wavelength would be smaller than the Planck length and its own Schwarzschild radius, a scale at which current theories of physics may break down or need to be replaced by more general ones.
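The crossover scale can be computed directly: the mass at which the Compton wavelength ħ/(mc) equals the Schwarzschild radius 2Gm/c² is of the order of the Planck mass. The sketch below uses standard constant values to check the "about 2 × 10⁻⁸ kg" order of magnitude:

```python
# Sketch: the Planck mass sqrt(hbar c / G), and the mass at which a
# particle's Compton wavelength equals its Schwarzschild radius.
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c    = 2.998e8     # speed of light, m/s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)
m_equal  = math.sqrt(hbar * c / (2 * G))   # hbar/(m c) = 2 G m / c^2
print(m_planck)    # ~2.2e-8 kg, roughly the mass of a large bacterium
print(m_equal)     # same order of magnitude
```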

Recently, Couder, Fort, et al. showed that macroscopic oil droplets on a vibrating surface can be used as a model of wave–particle duality: a localized droplet creates periodic waves around itself, and interaction with them leads to quantum-like phenomena, including interference in a double-slit experiment, unpredictable tunneling (depending in a complicated way on the practically hidden state of the field), orbit quantization[36] (the particle has to 'find a resonance' with the field perturbations it creates: after one orbit, its internal phase has to return to the initial state) and the Zeeman effect.

Importance

Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to the Schrödinger equation. For particles with mass, this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, have no solutions of the Schrödinger equation; they are instead described by other wave equations.

The particle-like behaviour is most evident in phenomena associated with measurement in quantum mechanics. Upon measuring the location of the particle, the particle will be forced into a more localized state as given by the uncertainty principle. When viewed through this formalism, the measurement of the wave function will randomly lead to wave function collapse to a sharply peaked function at some location. For particles with mass, the likelihood of detecting the particle at any particular location is equal to the squared amplitude of the wave function there. The measurement will return a well-defined position, and is subject to Heisenberg's uncertainty principle.

Following the development of quantum field theory the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation where the outgoing legs are known to be simplifications of the propagation and the internal lines appear at some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena that previously were thought of as paradoxes are explained. Within the limits of the wave–particle duality, quantum field theory gives the same results.

Visualization

There are two ways to visualize the wave-particle behaviour: by the standard model and by the de Broglie–Bohm theory.

Below is an illustration of wave–particle duality as it relates to de Broglie's hypothesis and Heisenberg's Uncertainty principle, in terms of the position and momentum space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other.

The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread.

Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread.

Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity of the particles corresponds to the probability density of finding the particle with position x or momentum component p.
Top: If wavelength λ is unknown, so are momentum p, wave-vector k and energy E (de Broglie relations). As the particle is more localized in position space, Δx is smaller and Δpx is larger.
Bottom: If λ is known, so are p, k, and E. As the particle is more localized in momentum space, Δpx is smaller and Δx is larger.
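The Fourier-transform relationship between the two wavefunctions can be checked numerically. The sketch below (pure Python, natural units with ħ = 1; the grid sizes are arbitrary choices) samples a Gaussian position-space wavefunction, computes its momentum-space counterpart by a direct discrete Fourier transform, and confirms that the width product Δx·Δpx comes out at the Gaussian minimum of ħ/2:

```python
import math
import cmath

hbar = 1.0   # natural units: work with h-bar = 1
sigma = 1.0  # position-space width of the Gaussian

# Sample psi(x) ~ exp(-x^2 / (4 sigma^2)) on a symmetric grid
N, L = 512, 40.0
dx = L / N
xs = [(i - N // 2) * dx for i in range(N)]
psi = [math.exp(-x * x / (4.0 * sigma ** 2)) for x in xs]
norm = math.sqrt(sum(p * p for p in psi) * dx)
psi = [p / norm for p in psi]

# Position uncertainty: Delta x = sqrt(<x^2>) (here <x> = 0 by symmetry)
dx_width = math.sqrt(sum(x * x * p * p for x, p in zip(xs, psi)) * dx)

# Momentum-space wavefunction phi(k) by direct discrete Fourier transform;
# the momentum is p = hbar * k
dk = 2.0 * math.pi / L
ks = [(i - N // 2) * dk for i in range(N)]
phi = [sum(p * cmath.exp(-1j * k * x) for x, p in zip(xs, psi)) * dx
       / math.sqrt(2.0 * math.pi) for k in ks]

pnorm = sum(abs(f) ** 2 for f in phi) * dk
k2 = sum(k * k * abs(f) ** 2 for k, f in zip(ks, phi)) * dk / pnorm
dp_width = hbar * math.sqrt(k2)

product = dx_width * dp_width  # a Gaussian saturates the bound h-bar / 2
```

A narrower Gaussian in x (smaller sigma) makes dx_width shrink and dp_width grow by the same factor, which is the tradeoff the figure caption describes.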

Alternative views

Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here. These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community.

Both-particle-and-wave view

The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality, but rather a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential"), which directs them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community.

At least one physicist considers the "wave–particle duality" as not being an incomprehensible mystery. L. E. Ballentine, Quantum Mechanics, A Modern Development (1989), p. 4, explains:

When first discovered, particle diffraction was a source of great puzzlement. Are "particles" really "waves?" In the early experiments, the diffraction patterns were detected holistically by means of a photographic plate, which could not detect individual particles. As a result, the notion grew that particle and wave properties were mutually incompatible, or complementary, in the sense that different measurement apparatuses would be required to observe them. That idea, however, was only an unfortunate generalization from a technological limitation. Today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots (Tonomura et al., 1989). Evidently, quantum particles are indeed particles, but whose behaviour is very different from what classical physics would have us expect.

The Afshar experiment (2007) may suggest that it is possible to simultaneously observe both wave and particle properties of photons. This claim is, however, disputed by other scientists.

Wave-only view

Carver Mead, an American scientist and professor at Caltech, proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon:

Mead has cut the Gordian knot of quantum complementarity. He claims that atoms, with their neutrons, protons, and electrons, are not particles at all but pure waves of matter. Mead cites as the gross evidence of the exclusively wave nature of both light and matter the discovery between 1933 and 1996 of ten examples of pure wave phenomena, including the ubiquitous laser of CD players, the self-propagating electrical currents of superconductors, and the Bose–Einstein condensate of atoms.

Albert Einstein, who, in his search for a Unified Field Theory, did not accept wave-particle duality, wrote:

This double nature of radiation (and of material corpuscles) ... has been interpreted by quantum-mechanics in an ingenious and amazingly successful fashion. This interpretation ... appears to me as only a temporary way out...

The many-worlds interpretation (MWI) is sometimes presented as a waves-only theory, including by its originator, Hugh Everett who referred to MWI as "the wave interpretation".

The three wave hypothesis of R. Horodecki relates the particle to its wave. The hypothesis implies that a massive particle is an intrinsically spatially, as well as temporally, extended wave phenomenon governed by a nonlinear law.

The deterministic collapse theory considers collapse and measurement as two independent physical processes. Collapse occurs when two wavepackets spatially overlap and satisfy a mathematical criterion, which depends on the parameters of both wavepackets. It is a contraction to the overlap volume. In a measurement apparatus one of the two wavepackets is one of the atomic clusters, which constitute the apparatus, and the wavepackets collapse to at most the volume of such a cluster. This mimics the action of a point particle.

Particle-only view

Still in the days of the old quantum theory, a pre-quantum-mechanical version of wave–particle duality was pioneered by William Duane, and developed by others including Alfred Landé. Duane explained diffraction of x-rays by a crystal in terms solely of their particle aspect. The deflection of the trajectory of each diffracted photon was explained as due to quantized momentum transfer from the spatially regular structure of the diffracting crystal.

Neither-wave-nor-particle view

It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. For this reason, in 1928 Arthur Eddington coined the name "wavicle" to describe the objects, although it is not regularly used today. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical, because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states:

Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...].

Relational approach to wave–particle duality

Relational quantum mechanics has been developed as a point of view that regards the event of particle detection as having established a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg's uncertainty principle is consequently avoided; hence there is no wave-particle duality.

Uses

Although it is difficult to draw a line separating wave–particle duality from the rest of quantum mechanics, it is nevertheless possible to list some applications of this basic idea.

  • Wave–particle duality is exploited in electron microscopy, where the small wavelengths associated with the electron can be used to view objects much smaller than what is visible using visible light.
  • Similarly, neutron diffraction uses neutrons with a wavelength of about 0.1 nm, the typical spacing of atoms in a solid, to determine the structure of solids.
  • Photos are now able to show this dual nature, which may lead to new ways of examining and recording this behaviour.

Planck constant

From Wikipedia, the free encyclopedia
 
Plaque at the Humboldt University of Berlin: "In this house taught Max Planck, the discoverer of the elementary quantum of action h, from 1889 to 1928."

The Planck constant, or Planck's constant, is the quantum of electromagnetic action that relates a photon's energy to its frequency. The Planck constant multiplied by a photon's frequency is equal to a photon's energy. The Planck constant is a fundamental physical constant denoted h, and is of fundamental importance in quantum mechanics. In metrology it is used to define the kilogram in SI units.

The Planck constant is defined to have the exact value 6.62607015×10−34 J⋅s in SI units.

At the end of the 19th century, accurate measurements of the spectrum of black body radiation existed, but predictions of the frequency distribution of the radiation by then-existing theories diverged significantly at higher frequencies. In 1900, Max Planck empirically derived a formula for the observed spectrum. He assumed a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E = hν, that was proportional to the frequency ν of its associated electromagnetic wave. He was able to calculate the proportionality constant, h, from the experimental measurements, and that constant is named in his honor. In 1905, the value was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle. It was eventually called a photon. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".

Confusion can arise when dealing with frequency or the Planck constant because the units of angular measure (cycle or radian) are omitted in SI. In the language of quantity calculus, the expression for the value of the Planck constant, or a frequency, is the product of a numerical value and a unit of measurement. The symbol f (or ν), when used for the value of a frequency, implies cycles per second or hertz as the unit. When the symbol ω is used for the frequency's value it implies radians per second as the unit. The numerical values of these two ways of expressing the frequency have a ratio of 2π. Omitting the units of angular measure "cycle" and "radian" can lead to an error of 2π. A similar state of affairs occurs for the Planck constant. The symbol h is used to express the value of the Planck constant in J⋅s/cycle, and the symbol ħ ("h-bar") is used to express its value in J⋅s/radian. Both represent the value of the Planck constant but, as discussed below, their numerical values have a ratio of 2π. In this article the word "value" means "numerical value", and the equations involving the Planck constant and/or frequency actually involve their numerical values using the appropriate implied units.
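A one-line numerical illustration of this 2π bookkeeping (exact SI value for h; the chosen frequency is arbitrary): the same photon energy results whether one multiplies h by the ordinary frequency f or ħ by the angular frequency ω = 2πf.

```python
import math

h = 6.62607015e-34          # Planck constant, J*s (per cycle)
hbar = h / (2.0 * math.pi)  # reduced Planck constant, J*s (per radian)

f = 5.0e14                  # ordinary frequency in Hz (cycles per second)
omega = 2.0 * math.pi * f   # the same frequency in rad/s

E_from_h = h * f            # photon energy via h and f
E_from_hbar = hbar * omega  # the same energy via h-bar and omega
```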

Since energy and mass are equivalent, the Planck constant also relates mass to frequency.

Origin of the constant

Intensity of light emitted from a black body. Each curve represents behavior at different body temperatures. Max Planck was the first to explain the shape of these curves.

Planck's constant was formulated as part of Max Planck's successful effort to produce a mathematical expression that accurately predicted the observed spectral distribution of thermal radiation from a closed furnace (black-body radiation). This mathematical expression is now known as Planck's law.

In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. Every physical body spontaneously and continuously emits electromagnetic radiation. There was no expression or explanation for the overall shape of the observed emission spectrum. At the time, Wien's law fit the data for short wavelengths and high temperatures, but failed for long wavelengths. Also around this time, but unknown to Planck, Lord Rayleigh had derived theoretically a formula, now known as the Rayleigh–Jeans law, that could reasonably predict long wavelengths but failed dramatically at short wavelengths.

Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum, which gave a simple empirical formula for long wavelengths.

Planck tried to find a mathematical expression that could reproduce Wien's law (for short wavelengths) and the empirical formula (for long wavelengths). This expression included a constant, h, which subsequently became known as the Planck constant. The expression formulated by Planck showed that the spectral radiance of a body for frequency ν at absolute temperature T is given by

Bν(ν, T) = (2hν³/c²) · 1/(exp(hν/kBT) − 1),

where kB is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum.

The spectral radiance of a body, Bν, describes the amount of energy it emits at different radiation frequencies. It is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. The spectral radiance can also be expressed per unit wavelength instead of per unit frequency. In this case, it is given by

Bλ(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/λkBT) − 1),

showing how radiated energy emitted at shorter wavelengths increases more rapidly with temperature than energy emitted at longer wavelengths.

Planck's law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. The SI units of Bν are W·sr−1·m−2·Hz−1, while those of Bλ are W·sr−1·m−3.
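Both forms of Planck's law are easy to evaluate numerically. The sketch below (exact SI constants; the temperature and wavelength are illustrative choices) checks the standard consistency relation Bλ = Bν·c/λ² between the two forms:

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light in vacuum, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def B_nu(nu, T):
    """Spectral radiance per unit frequency, W*sr^-1*m^-2*Hz^-1."""
    return (2.0 * h * nu ** 3 / c ** 2) / math.expm1(h * nu / (kB * T))

def B_lam(lam, T):
    """Spectral radiance per unit wavelength, W*sr^-1*m^-3."""
    return (2.0 * h * c ** 2 / lam ** 5) / math.expm1(h * c / (lam * kB * T))

T = 5800.0    # roughly the Sun's effective surface temperature, K
lam = 500e-9  # green light, m
nu = c / lam

# The two forms describe the same physics: B_lam = B_nu * c / lam^2
check = B_nu(nu, T) * c / lam ** 2
```

Using `math.expm1` rather than `exp(...) - 1` keeps the low-frequency (Rayleigh–Jeans) limit numerically accurate.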

Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics." One of his new boundary conditions was

to interpret UN [the vibrational energy of N oscillators] not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts. Let us call each such part the energy element ε;

— Planck, On the Law of Distribution of Energy in the Normal Spectrum

With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption … actually I did not think much about it…" in his own words, but one that would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation":

ε = hν

Planck was able to calculate the value of h from experimental data on black-body radiation: his result, 6.55×10−34 J⋅s, is within 1.2% of the currently accepted value. He also made the first determination of the Boltzmann constant from the same data and theory.
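The quality of Planck's 1900 determination can be checked directly against the modern exact value:

```python
h_1900 = 6.55e-34         # Planck's 1900 determination, J*s
h_exact = 6.62607015e-34  # exact value fixed by the 2019 SI redefinition, J*s

relative_error = abs(h_exact - h_1900) / h_exact  # about 0.0115, i.e. 1.15%
```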

The divergence of the theoretical Rayleigh-Jeans (black) curve from the observed Planck curves at different temperatures.

Development and application

The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta".

Photoelectric effect

The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real.

Before Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterize different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the color of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their intensity. However, the energy account of the photoelectric effect didn't seem to agree with the wave description of light.

The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.

Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation:

E = hf

Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light and the kinetic energy of photoelectrons was shown to be equal to the Planck constant .
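Einstein's relation for the photoelectron kinetic energy, Ek = hf − φ, can be sketched numerically as follows. The caesium work function of roughly 2.1 eV and the two frequencies are illustrative figures, not from the text:

```python
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C (used to convert J to eV)

def max_kinetic_energy_eV(f_hz, work_function_eV):
    """Einstein's photoelectric equation: E_k = h*f - phi, zero below threshold."""
    photon_energy_eV = h * f_hz / e
    return max(0.0, photon_energy_eV - work_function_eV)

phi = 2.1  # approximate work function of caesium, eV (illustrative)

# Green light (~540 THz, photon energy ~2.23 eV) ejects electrons;
# infrared light (~400 THz, ~1.65 eV) ejects none, however intense.
ek_green = max_kinetic_energy_eV(540e12, phi)
ek_ir = max_kinetic_energy_eV(400e12, phi)
```

Raising the intensity scales the number of photons, and hence the number of photoelectrons, but leaves `ek_green` unchanged, exactly as the preceding paragraph describes.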

Atomic structure

A schematization of the Bohr model of the hydrogen atom. The transition shown from the n = 3 level to the n = 2 level gives rise to visible light of wavelength 656 nm (red), as the model predicts.

Niels Bohr introduced the first quantized model of the atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model. In classical electrodynamics, a charge moving in a circle should radiate electromagnetic radiation. If that charge were to be an electron orbiting a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies

En = −hcR∞/n²,

where c is the speed of light in vacuum, R∞ is an experimentally determined constant (the Rydberg constant) and n is a positive integer (n = 1, 2, 3, …). Once the electron reached the lowest energy level (n = 1), it could not get any closer to the nucleus (lower energy). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant in terms of other fundamental constants.
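The Rydberg formula that Bohr's model explains can be evaluated directly; the sketch below reproduces the 656 nm red line of the figure caption (using the CODATA value of R∞ and neglecting the small reduced-mass correction):

```python
R_inf = 1.0973731568160e7  # Rydberg constant, m^-1 (CODATA)

def hydrogen_wavelength_m(n_upper, n_lower):
    """Rydberg formula: 1/lambda = R * (1/n_lower^2 - 1/n_upper^2)."""
    inv_lam = R_inf * (1.0 / n_lower ** 2 - 1.0 / n_upper ** 2)
    return 1.0 / inv_lam

lam_3_to_2 = hydrogen_wavelength_m(3, 2)  # ~6.561e-7 m, the red H-alpha line
```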

Bohr also introduced the quantity h/2π, now known as the reduced Planck constant ħ, as the quantum of angular momentum. At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model. The correct quantization rules for electrons – in which the energy reduces to the Bohr model equation in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction, these quantities can only take on the values

J² = j(j + 1)ħ², j = 0, 1⁄2, 1, 3⁄2, …,
Jz = mħ, m = −j, −j + 1, …, j.

Uncertainty principle

The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given numerous particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum, Δpx, obey

Δx Δpx ≥ ħ/2,

where the uncertainty is given as the standard deviation of the measured value from its expected value. There are several other such pairs of physically measurable conjugate variables which obey a similar rule. One example is time vs. energy. The inverse relationship between the uncertainty of the two conjugate variables forces a tradeoff in quantum experiments, as measuring one quantity more precisely results in the other quantity becoming imprecise.

In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones of the entire theory lies in the commutator relationship between the position operator x̂ and the momentum operator p̂:

[x̂i, p̂j] = iħδij,

where δij is the Kronecker delta.

Photon energy

The Planck–Einstein relation connects the particular photon energy E with its associated wave frequency f:

E = hf

This energy is extremely small in terms of ordinarily perceived everyday objects.

Since the frequency f, wavelength λ, and speed of light c are related by c = fλ, the relation can also be expressed as

E = hc/λ.

The de Broglie wavelength λ of the particle is given by

λ = h/p,

where p denotes the linear momentum of a particle, such as a photon, or any other elementary particle.
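A quick sketch of the de Broglie relation for a slow (non-relativistic) electron; the chosen speed of 10⁶ m/s is an illustrative figure:

```python
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg

def de_broglie_wavelength_m(momentum):
    """lambda = h / p, valid for any particle."""
    return h / momentum

lam = de_broglie_wavelength_m(m_e * 1.0e6)  # electron moving at 1e6 m/s
# ~7.3e-10 m, comparable to atomic spacings: this is why electron
# microscopes resolve far finer detail than visible light can.
```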

In applications where it is natural to use the angular frequency (i.e. where the frequency is expressed in terms of radians per second instead of cycles per second or hertz) it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant. It is equal to the Planck constant divided by 2π, and is denoted ħ (pronounced "h-bar"):

ħ = h/2π.

The energy of a photon with angular frequency ω = 2πf is given by

E = ħω,

while its linear momentum relates to

p = ħk,

where k is an angular wavenumber. In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but of any particle. This was confirmed by experiments soon afterward. This holds throughout the quantum theory, including electrodynamics.


These two relations are the temporal and spatial parts of the special relativistic expression using 4-vectors: P = (E/c, p) = ħK = ħ(ω/c, k).

Classical statistical mechanics requires the existence of h (but does not define its value). Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value. Instead, it must be some integer multiple of a very small quantity, the "quantum of action", now called the reduced Planck constant or the natural unit of action. This is the so-called "old quantum theory" developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been largely replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist, rather, the particle is represented by a wavefunction spread out in space and in time. Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain either quantization of energy or the lack of classical particle motion.

In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed, and values in between are forbidden.

Value

The Planck constant has dimensions of physical action; i.e., energy multiplied by time, or momentum multiplied by distance, or angular momentum. In SI units, the Planck constant is expressed in joule-seconds (J⋅s, or N⋅m⋅s, or kg⋅m2⋅s−1). Implicit in the dimensions of the Planck constant is the fact that the SI unit of frequency, the hertz, represents one complete cycle, 360 degrees or 2π radians, per second. An angular frequency in radians per second is often more natural in mathematics and physics, and many formulas use the reduced Planck constant ħ (pronounced "h-bar"):

h = 6.62607015×10−34 J⋅s (exact)
ħ = h/2π = 1.054571817...×10−34 J⋅s

The above values are recommended by 2018 CODATA.

In Hartree atomic units, the reduced Planck constant serves as the unit of action: ħ = 1 (and hence h = 2π).

Understanding the 'fixing' of the value of h

Since 2019, the numerical value of the Planck constant has been fixed, with finite significant figures. Under the present definition of the kilogram, which states that "The kilogram [...] is defined by taking the fixed numerical value of h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of speed of light c and duration of hyperfine transition of the ground state of an unperturbed cesium-133 atom ΔνCs." This implies that mass metrology is now aimed at finding the value of one kilogram; the kilogram is the quantity that adjusts. Every experiment aiming to measure the kilogram (such as the Kibble balance and the X-ray crystal density method) essentially refines the value of the kilogram.

As an illustration of this, suppose the decision to make h exact had been taken in 2010, when its measured value was 6.62606957×10−34 J⋅s, so that the present definition of the kilogram had been enforced then. In the future, the value of one kilogram would have been refined to 6.62607015/6.62606957 ≈ 1.0000001 times the mass of the International Prototype of the Kilogram (IPK), neglecting the metre and second units' share, for the sake of simplicity.
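The size of this hypothetical rescaling can be computed from the two h values quoted above:

```python
h_2010 = 6.62606957e-34   # measured value in this thought experiment, J*s
h_fixed = 6.62607015e-34  # value fixed exactly in 2019, J*s

# Had h been fixed at the 2010 number, later adoption of the 2019 value
# would rescale the kilogram by this ratio (metre and second held fixed):
ratio = h_fixed / h_2010  # ~1.0000001
```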

Significance of the value

The Planck constant is related to the quantization of light and matter. It can be seen as a subatomic-scale constant. In a unit system adapted to subatomic scales, the electronvolt is the appropriate unit of energy and the petahertz the appropriate unit of frequency. Atomic unit systems are based (in part) on the Planck constant. The physical meaning of the Planck constant could suggest some basic features of our physical world. These basic features include the properties of the vacuum constants ε0 and μ0. The Planck constant can then be identified in terms of the quality factor Q and the integrated area of the vector potential at the center of the wave packet representing a particle.

The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant (the quantum of action) is very small. One can regard the Planck constant as relevant only to the microscopic scale, not the macroscopic scale of our everyday experience.

Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, green light with a wavelength of 555 nanometres (a wavelength that can be perceived by the human eye to be green) has a frequency of 540 THz (540×1012 Hz). Each photon has an energy E = hf = 3.58×10−19 J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, NA = 6.02214076×1023 mol−1, with the result of 216 kJ/mol, about the food energy in three apples.
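The arithmetic in this paragraph is easy to reproduce:

```python
h = 6.62607015e-34   # Planck constant, J*s (exact)
N_A = 6.02214076e23  # Avogadro constant, mol^-1 (exact)

f_green = 540e12                  # Hz, green light near 555 nm
E_photon = h * f_green            # ~3.58e-19 J per photon
E_mole_kJ = E_photon * N_A / 1e3  # ~215-216 kJ per mole of photons
```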

Determination

In principle, the Planck constant can be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods.

Since the value of the Planck constant is now fixed, it is no longer determined or calculated in laboratories. Some of the practices given below for determining the Planck constant are now used to determine the mass of the kilogram. The methods given below, except the X-ray crystal density method, rely on the theoretical basis of the Josephson effect and the quantum Hall effect.

Josephson constant

The Josephson constant KJ relates the potential difference U generated by the Josephson effect at a "Josephson junction" with the frequency ν of the microwave radiation. The theoretical treatment of Josephson effect suggests very strongly that KJ = 2e/h.

The Josephson constant may be measured by comparing the potential difference generated by an array of Josephson junctions with a potential difference which is known in SI volts. The measurement of the potential difference in SI units is done by allowing an electrostatic force to cancel out a measurable gravitational force, in a Kibble balance. Assuming the validity of the theoretical treatment of the Josephson effect, KJ is related to the Planck constant by

h = 2e/KJ.

Kibble balance

A Kibble balance (formerly known as a watt balance) is an instrument for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional electrical units.

From the definition of the conventional watt W90, this gives a measure of the product KJ²RK in SI units, where RK is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that RK = h/e², the measurement of KJ²RK is a direct determination of the Planck constant, since h = 4/(KJ²RK).
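Under the two theoretical assumptions just stated (KJ = 2e/h and RK = h/e²), the algebra behind the Kibble balance determination can be verified numerically:

```python
e = 1.602176634e-19  # elementary charge, C (exact)
h = 6.62607015e-34   # Planck constant, J*s (exact)

K_J = 2.0 * e / h    # Josephson constant, Hz/V
R_K = h / e ** 2     # von Klitzing constant, ohm

# K_J^2 * R_K = (4 e^2 / h^2) * (h / e^2) = 4 / h, so measuring that
# product determines h directly:
h_recovered = 4.0 / (K_J ** 2 * R_K)
```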

Magnetic resonance

The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B: ν = γB. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′p, the spin number I (I = 1⁄2 for protons) and the reduced Planck constant:

γ′p = μ′p/(Iħ) = 2μ′p/ħ.

The ratio of the shielded proton magnetic moment μ′p to the electron magnetic moment μe can be measured separately and to high precision, as the imprecisely known value of the applied magnetic field cancels out in taking the ratio. The value of μe in Bohr magnetons is also known: it is half the electron g-factor ge. Hence γ′p = (ge/2)(μ′p/μe)(e/me).
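Putting the pieces together, using γ′p = 2μ′p/ħ from the spin-1/2 relation, μe = (ge/2)μB, and the Bohr magneton μB = eħ/(2me), the chain of substitutions can be sketched as:

```latex
\gamma'_p \;=\; \frac{2\mu'_p}{\hbar}
          \;=\; \frac{\mu'_p}{\mu_e}\cdot\frac{2\mu_e}{\hbar}
          \;=\; \frac{\mu'_p}{\mu_e}\cdot\frac{g_e}{2}\cdot\frac{e}{m_e},
\qquad\text{with }\ \mu_e = \frac{g_e}{2}\,\mu_B,\quad \mu_B = \frac{e\hbar}{2m_e}.
```

The factor ħ cancels, leaving γ′p expressed entirely in terms of the well-measured ratios μ′p/μe and ge and the charge-to-mass ratio of the electron.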

A further complication is that the measurement of γ′p involves the measurement of an electric current: this is invariably measured in conventional amperes rather than in SI amperes, so a conversion factor is required. The symbol Γ′p-90 is used for the gyromagnetic ratio measured in conventional electrical units. In addition, there are two methods of measuring the value, a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the high-field value Γ′p-90(hi) is of interest in determining the Planck constant.

Substitution gives the expression for the Planck constant in terms of Γ′p-90(hi): h = cα²ge(μ′p/μe) / (2KJ-90RK-90R∞Γ′p-90(hi)), where R∞ is the Rydberg constant.
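The substitution can be sketched as follows; the conversion between γ′p and Γ′p-90(hi) via the product e·KJ-90·RK-90 is my reading of the high-field conventional-unit definitions, not stated explicitly in the text, and the electron-mass relation me = 2R∞h/(cα²) comes from the definition of the Rydberg constant:

```latex
\Gamma'_{p\text{-}90}(\mathrm{hi}) \;=\; \frac{2\,\gamma'_p}{e\,K_{J\text{-}90}R_{K\text{-}90}},
\qquad
\gamma'_p = \frac{g_e}{2}\,\frac{\mu'_p}{\mu_e}\,\frac{e}{m_e},
\qquad
m_e = \frac{2R_\infty h}{c\,\alpha^2}
```
```latex
\Longrightarrow\quad
\Gamma'_{p\text{-}90}(\mathrm{hi})
= \frac{g_e\,(\mu'_p/\mu_e)\,c\,\alpha^2}{2\,R_\infty\,h\,K_{J\text{-}90}R_{K\text{-}90}}
\quad\Longrightarrow\quad
h = \frac{c\,\alpha^2\,g_e}{2\,K_{J\text{-}90}\,R_{K\text{-}90}\,R_\infty\,\Gamma'_{p\text{-}90}(\mathrm{hi})}\cdot\frac{\mu'_p}{\mu_e}.
```

Every quantity on the right-hand side other than Γ′p-90(hi) is either exactly defined (KJ-90, RK-90) or known to much higher precision than the gyromagnetic measurement itself.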

Faraday constant

The Faraday constant F is the charge of one mole of electrons, equal to the Avogadro constant NA multiplied by the elementary charge e. It can be determined by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol F90. Substituting the definitions of NA and e, and converting from conventional electrical units to SI units, gives the relation to the Planck constant: h = cα²Ar(e)Mu / (R∞KJ-90RK-90F90), where Ar(e) is the relative atomic mass of the electron and Mu is the molar mass constant.
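This relation can be sketched step by step; the conventional-unit conversion F90 = 2NA/(KJ-90RK-90) is my reading of how a current measured against Josephson and quantum-Hall standards rescales (using eKJRK = 2), and NA = Ar(e)Mu/me simply counts electron masses per mole:

```latex
F = N_A e,
\qquad
F_{90} = \frac{2N_A}{K_{J\text{-}90}R_{K\text{-}90}},
\qquad
N_A = \frac{A_r(e)\,M_u}{m_e} = \frac{A_r(e)\,M_u\,c\,\alpha^2}{2R_\infty h}
```
```latex
\Longrightarrow\quad
h = \frac{c\,\alpha^2\,A_r(e)\,M_u}{2\,R_\infty\,N_A}
  = \frac{c\,\alpha^2\,A_r(e)\,M_u}{R_\infty\,K_{J\text{-}90}\,R_{K\text{-}90}\,F_{90}}.
```

As a rough check, F90 ≈ 96485 C/mol together with the exact KJ-90 and RK-90 reproduces NA ≈ 6.022 × 10²³ mol⁻¹.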

X-ray crystal density

The X-ray crystal density method is primarily a method for determining the Avogadro constant NA, but as the Avogadro constant is related to the Planck constant it also determines a value for h. The principle behind the method is to determine NA as the ratio between the volume of the unit cell of a crystal, measured by X-ray crystallography, and the molar volume of the substance. Crystals of silicon are used, as they are available in high quality and purity through the technology developed for the semiconductor industry. The unit cell volume is calculated from the spacing between two crystal planes referred to as d220. The molar volume Vm(Si) requires knowledge of the density of the crystal and the atomic weight of the silicon used. The Planck constant is given by h = √2cα²Ar(e)Mu d220³ / (R∞Vm(Si)).
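A sketch of where this formula comes from, assuming the diamond-cubic silicon cell (8 atoms per cell, lattice parameter a = √8·d220) and the Rydberg relation me = 2R∞h/(cα²); the symbols Ar(e) (relative atomic mass of the electron) and Mu (molar mass constant) enter through NA = Ar(e)Mu/me:

```latex
a = \sqrt{8}\,d_{220},
\qquad
N_A = \frac{8\,V_m(\mathrm{Si})}{a^3} = \frac{V_m(\mathrm{Si})}{2\sqrt{2}\,d_{220}^3}
```
```latex
\Longrightarrow\quad
h = \frac{c\,\alpha^2\,A_r(e)\,M_u}{2\,R_\infty\,N_A}
  = \frac{\sqrt{2}\,c\,\alpha^2\,A_r(e)\,M_u\,d_{220}^3}{R_\infty\,V_m(\mathrm{Si})}.
```

With d220 ≈ 192.016 pm and Vm(Si) ≈ 12.06 cm³/mol, this reproduces h ≈ 6.626 × 10⁻³⁴ J s.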

Particle accelerator

An experimental measurement of the Planck constant at the Large Hadron Collider laboratory was carried out in 2011. The study, called PCC, used the giant particle accelerator to better understand the relationship between the Planck constant and the measurement of distances in space.
