
Monday, November 10, 2025

Photon

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Photon

Composition: Elementary particle
Statistics: Bose–Einstein statistics
Family: Gauge boson
Interactions: Electromagnetic, gravity
Symbol: γ
Theorized: Albert Einstein (1905); the name "photon" is generally attributed to Gilbert N. Lewis (1926)
Mass: 0 (theoretical value); < 1×10⁻¹⁸ eV/c² (experimental limit)
Mean lifetime: Stable
Electric charge: 0; < 1×10⁻³⁵ e (experimental limit)
Color charge: No
Spin: ħ
Spin states: +1 ħ, −1 ħ
Parity: −1
C parity: −1
Condensed: I(J^PC) = 0, 1(1⁻⁻)

A photon (from Ancient Greek φῶς, φωτός (phôs, phōtós) 'light') is an elementary particle that is a quantum of the electromagnetic field, including electromagnetic radiation such as light and radio waves, and the force carrier for the electromagnetic force. Photons are massless particles that can only move at one speed, the speed of light measured in vacuum. The photon belongs to the class of boson particles.

As with other elementary particles, photons are best explained by quantum mechanics and exhibit wave–particle duality, their behavior featuring properties of both waves and particles. The modern photon concept originated during the first two decades of the 20th century with the work of Albert Einstein, who built upon the research of Max Planck. While Planck was trying to explain how matter and electromagnetic radiation could be in thermal equilibrium with one another, he proposed that the energy stored within a material object should be regarded as composed of an integer number of discrete, equal-sized parts. To explain the photoelectric effect, Einstein introduced the idea that light itself is made of discrete units of energy. In 1926, Gilbert N. Lewis popularized the term photon for these energy units. Subsequently, many other experiments validated Einstein's approach.

In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Moreover, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography.

Physical properties

The photon has no electric charge, is generally considered to have zero rest mass, and is a stable particle. The experimental upper limit on the photon mass is very small, on the order of 10⁻⁵³ g; its lifetime would be more than 10¹⁸ years. For comparison, the age of the universe is about 1.38×10¹⁰ years.

In a vacuum, a photon has two possible polarization states. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero. Also, photons obey Bose–Einstein statistics, and not Fermi–Dirac statistics. That is, they do not obey the Pauli exclusion principle, and more than one photon can occupy the same bound quantum state.

Photons are emitted in many natural processes; for example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic, or nuclear transition to a lower energy level, the photons emitted have characteristic energies ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation).

Energy and momentum

The cone shows possible values of the wave 4-vector of a photon. The "time" axis gives the angular frequency (rad⋅s⁻¹) and the "space" axis represents the angular wavenumber (rad⋅m⁻¹). Green and indigo represent left and right polarization.

In a quantum mechanical model, electromagnetic waves transfer energy in photons with energy proportional to frequency (ν):

E = hν

where h is the Planck constant, a fundamental physical constant. The energy can be written with angular frequency (ω = 2πν) or wavelength (λ):

E = ħω = hc/λ

where ħ = h/2π is called the reduced Planck constant and c is the speed of light.

The momentum of a photon is

p = ħk

where k is the wave vector, where

  • k ≡ |k| = 2π/λ is the wave number.

Since k points in the direction of the photon's propagation, the magnitude of its momentum is

p ≡ |p| = ħk = h/λ = hν/c.

The photon energy can be written as E = pc, where p is the magnitude of the momentum vector p. This is consistent with the energy–momentum relation of special relativity,

E² = (pc)² + (mc²)²,

when m = 0.
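
These relations can be checked numerically. The sketch below (the 532 nm wavelength is an arbitrary green-laser example, not a value from the text) computes a photon's energy and momentum and verifies E = pc:

```python
# Numerical sketch of the photon energy-momentum relations above.
# Constants are the exact SI-definition values; 532 nm is illustrative.
h = 6.62607015e-34          # Planck constant, J*s
c = 299792458.0             # speed of light, m/s
e_charge = 1.602176634e-19  # J per eV

def photon_energy_eV(wavelength_m):
    """E = h*c / lambda, converted to electronvolts."""
    return h * c / wavelength_m / e_charge

def photon_momentum(wavelength_m):
    """p = h / lambda, in kg*m/s."""
    return h / wavelength_m

lam = 532e-9
E = photon_energy_eV(lam)
p = photon_momentum(lam)
# Check E = p*c (the m = 0 limit of E^2 = (pc)^2 + (m c^2)^2):
assert abs(E * e_charge - p * c) < 1e-30
print(f"E = {E:.3f} eV, p = {p:.3e} kg m/s")
```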

Polarization and spin angular momentum

The photon also carries spin angular momentum, which is related to photon polarization. (Beams of light also exhibit properties described as orbital angular momentum of light).

The angular momentum of the photon has two possible values, either +ħ or −ħ. These two possible values correspond to the two possible pure states of circular polarization. Collections of photons in a light beam may have mixtures of these two values; a linearly polarized light beam will act as if it were composed of equal numbers of the two possible angular momenta.

The spin angular momentum of light does not depend on its frequency, and was experimentally verified by C. V. Raman and Suri Bhagavantam in 1931.

Antiparticle annihilation

The collision of a particle with its antiparticle can create photons. In free space at least two photons must be created since, in the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (determined by the photon's frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum. The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum.
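
The bookkeeping in this argument can be made concrete for the simplest case, electron–positron annihilation at rest. In the sketch below (an illustration, not a value from the text), two back-to-back photons of equal energy satisfy both conservation laws:

```python
# Two-photon kinematics for e+ e- annihilation at rest (CM frame).
m_e_keV = 510.998950  # electron rest energy, keV (CODATA value)

# Four-momentum conservation: total energy 2*m_e*c^2, total momentum 0.
E_total = 2 * m_e_keV
E_gamma = E_total / 2                       # each photon carries half
p_gamma_1, p_gamma_2 = +E_gamma, -E_gamma   # momenta in keV/c, back to back

assert p_gamma_1 + p_gamma_2 == 0           # net momentum is zero
assert E_gamma + E_gamma == E_total         # energy is conserved
print(f"each photon: {E_gamma:.3f} keV")    # the familiar 511 keV line
```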

Seen another way, the photon can be considered as its own antiparticle (thus an "antiphoton" is simply a normal photon with opposite momentum, equal polarization, and 180° out of phase). The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus.

The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time.
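
The pressure argument above reduces to a one-line formula: an intensity I (energy per time per area) delivers momentum I/c per time per area. A minimal sketch, using the approximate solar constant as an illustrative input:

```python
# Radiation pressure as momentum flux: each photon of energy E carries
# momentum p = E/c, so pressure = intensity / c for an absorbing surface.
c = 299792458.0   # m/s

def radiation_pressure(intensity_W_per_m2, reflectivity=0.0):
    """Pressure on a surface; a perfect mirror (reflectivity=1)
    reverses each photon's momentum and doubles the pressure."""
    return (1.0 + reflectivity) * intensity_W_per_m2 / c

I_sun = 1361.0  # W/m^2, approximate solar constant at Earth
print(f"absorbing surface: {radiation_pressure(I_sun):.2e} Pa")
print(f"perfect mirror:    {radiation_pressure(I_sun, 1.0):.2e} Pa")
```

Sunlight at Earth thus exerts only a few micropascals, which is why radiation pressure matters mostly for solar sails and dust grains rather than everyday objects.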

Experimental checks on photon mass

Current commonly accepted physical theories imply or assume the photon to be strictly massless. If photons were not purely massless, their speeds would vary with frequency, with lower-energy (redder) photons moving slightly slower than higher-energy photons. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in spacetime. Thus, it would still be the speed of spacetime ripples (gravitational waves and gravitons), but it would not be the speed of photons.
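
The size of the hypothetical effect can be estimated from relativity, which gives v/c = √(1 − (mc²/E)²) for a particle of rest energy mc². A sketch under an assumed mass near the experimental limit (the effect is far below float precision, so the leading-order deficit is used):

```python
# Frequency-dependent speed a massive photon would have.
# 1 - v/c ~ (m c^2)^2 / (2 E^2) to leading order; the mass value is
# hypothetical, chosen near the experimental upper limit quoted below.
m_c2_eV = 1e-18   # hypothetical photon rest energy, eV

def speed_deficit(E_eV):
    """Fractional amount by which the photon's speed falls short of c."""
    return (m_c2_eV / E_eV) ** 2 / 2.0

E_red, E_blue = 1.8, 3.1   # typical visible-photon energies, eV
assert speed_deficit(E_red) > speed_deficit(E_blue)  # redder = slower
print(f"1 - v/c for red light: {speed_deficit(E_red):.2e}")
```

Even at the current mass limit the speed deficit is of order 10⁻³⁷, which is why dispersion-based tests are far less sensitive than the field-based tests described next.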

If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law were not exactly valid, an external electric field applied to a hollow conductor would produce a residual electric field inside it. This provides a means for precision tests of Coulomb's law. A null result of such an experiment has set a limit of m ≲ 10⁻¹⁴ eV/c².

Sharper upper limits on the mass of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is large because the galactic magnetic field exists on great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term (1/2)m²A_μA^μ would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of m < 3×10⁻²⁷ eV/c². The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of 1.07×10⁻²⁷ eV/c² (10⁻³⁶ Da) given by the Particle Data Group.

These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent. If the photon mass is generated via the Higgs mechanism, then the upper limit of m ≲ 10⁻¹⁴ eV/c² from the test of Coulomb's law is valid.

Historical development

Thomas Young's sketch of interference based on observations of water waves. Young reasoned that the similar effects observed with light supported a wave model and not Newton's particle theory of light.

In most theories up to the eighteenth century, light was pictured as being made of particles. Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early 19th century, Thomas Young and Augustin-Jean Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. James Clerk Maxwell's 1865 prediction that light was an electromagnetic wave – which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves – seemed to be the final blow to particle models of light.

In 1900, Maxwell's theoretical model of light as oscillating electric and magnetic fields seemed complete. However, several observations could not be explained by any wave model of electromagnetic radiation, leading to the idea that light-energy was packaged into quanta described by E = hν. Later experiments showed that these light-quanta also carry momentum and, thus, can be considered particles: The photon concept was born, leading to a deeper understanding of the electric and magnetic fields themselves.

The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity.

At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν. As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics.

Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum p =  h / λ  , making them full-fledged particles.

Up to 1923, most physicists were reluctant to accept that light itself was quantized. Instead, they tried to explain photon behaviour by quantizing only matter, as in the Bohr model of the hydrogen atom. Even though these semiclassical models were only a first approximation, they were accurate for simple systems and they led to quantum mechanics.

As recounted in Robert Millikan's 1923 Nobel lecture, Einstein's 1905 energy relationship had been verified experimentally by 1916, but the localized nature of the quanta remained unsettled. Most physicists were reluctant to believe that electromagnetic radiation itself might be particulate and thus an example of wave–particle duality. Then, in 1922, Arthur Compton's experiment showed that photons carry momentum proportional to their wave number, in what is now called Compton scattering, which appeared to clearly support a localized quantum model. At least for Millikan, this settled the matter. Compton received the Nobel Prize in 1927 for his scattering work.

Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory. An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics.

By the late 1920s, the pivotal question was how to unify Maxwell's wave theory of light with its experimentally observed particle nature. The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model. (See § Quantum field theory and § As a gauge boson, below.)

A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered absolutely definitive, since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence.

In the 1970s and 1980s photon-correlation experiments definitively demonstrated quantum photon effects. These experiments produce results that cannot be explained by any classical theory of light, since they involve anticorrelations that result from the quantum measurement process. In 1974, the first such experiment was carried out by Clauser, who reported a violation of a classical Cauchy–Schwarz inequality. In 1977, Kimble et al. demonstrated an analogous anti-bunching effect of photons interacting with a beam splitter; this approach was simplified and sources of error eliminated in the photon-anticorrelation experiment of Grangier, Roger, & Aspect (1986). This work was reviewed and simplified further in Thorn, Neel, et al. (2004).

Nomenclature

Photoelectric effect: the emission of electrons from a metal plate caused by light quanta – photons

The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation, and he suggested that the experimental observations, specifically at shorter wavelengths, would be explained if the energy was "made up of a completely determinate number of finite equal parts", which he called "energy elements". In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localized, discrete energy quanta. He called these a light quantum (German: ein Lichtquant).

The name photon derives from the Greek word for light, φῶς (transliterated phôs). The name was used in 1916 by the American physicist and psychologist Leonard T. Troland for a unit of illumination of the retina, and in several other contexts, before being adopted for physics. The use of the term photon for the light quantum was popularized by Gilbert N. Lewis, who used the term in a letter to Nature on 18 December 1926. Arthur Compton, who had performed a key experiment demonstrating light quanta, cited Lewis in the 1927 Solvay conference proceedings for suggesting the name photon. Einstein never did use the term.

In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard, named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade. In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is the Planck constant and the Greek letter ν (nu) is the photon's frequency.

Wave–particle duality and uncertainty principles

Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double slit has its energy received at a point on the screen with a probability distribution given by its interference pattern determined by Maxwell's wave equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; a photon's Maxwell waves will diffract, but photon energy does not spread out as it propagates, nor does this energy divide when it encounters a beam splitter. Rather, the received photon acts like a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10−15 m across) or even the point-like electron.
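
The two-slit probability distribution described above follows directly from the wave calculation. A minimal sketch (the wavelength, slit separation, and screen distance below are illustrative, and the single-slit envelope is ignored):

```python
# Two-slit interference pattern: the wave calculation fixes *where*
# single photons are likely to land, even though each arrives whole.
import math

lam = 532e-9   # wavelength, m
d = 50e-6      # slit separation, m
L = 1.0        # slit-to-screen distance, m

def relative_intensity(x):
    """Relative detection probability at screen position x:
    path difference d*x/L gives phase difference 2*pi*d*x/(lam*L)."""
    phase = math.pi * d * x / (lam * L)
    return math.cos(phase) ** 2   # normalized to 1 at the center

fringe_spacing = lam * L / d      # distance between bright fringes
print(f"fringe spacing: {fringe_spacing * 1e3:.2f} mm")
assert abs(relative_intensity(0.0) - 1.0) < 1e-12      # central maximum
assert relative_intensity(fringe_spacing / 2) < 1e-12  # dark fringe midway
```

Recording single-photon detections long enough fills in exactly this cos² pattern, which is the operational meaning of "the probability of detecting a photon is calculated by equations that describe waves".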

While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics.[a] In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes.

Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl. The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa. A coherent state minimizes the overall uncertainty as far as quantum mechanics allows. Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase. This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, Δn, and the uncertainty in the phase of the wave, Δφ. However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase cannot be represented by a Hermitian operator.

Bose–Einstein model of a photon gas

In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001.

The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics).

Stimulated and spontaneous emission

Stimulated emission (in which photons "clone" themselves) was predicted by Einstein in his kinetic analysis, and led to the development of the laser. Einstein's derivation inspired further developments in the quantum treatment of light, which led to the statistical interpretation of quantum mechanics.

In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that the emission and absorption of radiation by the atoms are independent of each other, and that thermal equilibrium is reached by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself, filled with electromagnetic radiation, and suppose that the atoms can emit and absorb that radiation. Thermal equilibrium requires that the energy density ρ(ν) of photons with frequency ν (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed.

Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate R_ji for a system to absorb a photon of frequency ν and transition from a lower energy E_j to a higher energy E_i is proportional to the number N_j of atoms with energy E_j and to the energy density ρ(ν) of ambient photons of that frequency,

R_ji = N_j B_ji ρ(ν)

where B_ji is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate R_ij for the emission of photons of frequency ν and transition from a higher energy E_i to a lower energy E_j is

R_ij = N_i A_ij + N_i B_ij ρ(ν)

where A_ij is the rate constant for emitting a photon spontaneously, and B_ij is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state i and the number in state j must, on average, be constant; hence, the rates R_ji and R_ij must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of N_i and N_j is (g_i/g_j) exp(−(E_i − E_j)/(k_B T)), where g_i and g_j are the degeneracies of the state i and that of j, respectively, E_i and E_j their energies, k_B the Boltzmann constant and T the system's temperature. From this, it is readily derived that

g_i B_ij = g_j B_ji

and

A_ij = (8πhν³/c³) B_ij.

The A_ij and B_ij are collectively known as the Einstein coefficients.[85]

Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients A_ij, B_ij and B_ji once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". Not long thereafter, in 1926, Paul Dirac derived the B rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field.
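
The detailed-balance argument can be verified numerically: imposing the relations between the coefficients and the Boltzmann population ratio, then solving the rate balance for ρ(ν), reproduces Planck's law. A sketch (the frequency, temperature, degeneracies, and B value below are arbitrary illustrative choices):

```python
# Detailed-balance check: with A = (8 pi h nu^3 / c^3) B_ij and
# g_i B_ij = g_j B_ji, solving N_j B_ji rho = N_i (A + B_ij rho)
# for rho yields Planck's black-body spectral energy density.
import math

h  = 6.62607015e-34   # J*s
c  = 299792458.0      # m/s
kB = 1.380649e-23     # J/K

nu, T = 5e14, 5000.0          # visible frequency, lamp-like temperature
g_i, g_j = 2.0, 1.0           # arbitrary degeneracies
B_ij = 1.0                    # arbitrary rate constant (cancels out)
B_ji = (g_i / g_j) * B_ij     # from g_i B_ij = g_j B_ji
A_ij = (8 * math.pi * h * nu**3 / c**3) * B_ij

ratio = (g_i / g_j) * math.exp(-h * nu / (kB * T))   # N_i / N_j (Boltzmann)

# Solve N_j B_ji rho = N_i (A_ij + B_ij rho) for rho:
rho = ratio * A_ij / (B_ji - ratio * B_ij)

planck = (8 * math.pi * h * nu**3 / c**3) / (math.exp(h * nu / (kB * T)) - 1)
assert abs(rho - planck) / planck < 1e-9
print(f"rho(nu) = {rho:.6e} J s / m^3 (matches Planck's law)")
```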

Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory.

Quantum field theory

Quantization of the electromagnetic field

Different electromagnetic modes (such as those depicted here) can be treated as independent simple harmonic oscillators. A photon corresponds to a unit of energy E = hν in its electromagnetic mode.

In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption. He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of hν, where ν is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909.
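
The geometric sum at the heart of Debye's derivation is easy to check: if a mode of frequency ν can only hold energies nhν, a Boltzmann-weighted average gives a mean energy of hν/(e^(hν/kT) − 1). A sketch (the function names and the value of x are illustrative):

```python
# Mean energy of a quantized mode, in units of h*nu, as a function of
# x = h*nu / (k_B T): direct Boltzmann-weighted sum vs. closed form.
import math

def mean_energy_direct(x, n_terms=2000):
    """<E>/(h nu) by summing n e^{-n x} over the Boltzmann weights."""
    num = sum(n * math.exp(-n * x) for n in range(n_terms))
    den = sum(math.exp(-n * x) for n in range(n_terms))
    return num / den

def mean_energy_closed(x):
    """Closed form of the same geometric sum: 1 / (e^x - 1)."""
    return 1.0 / math.expm1(x)

x = 0.5  # arbitrary illustrative value of h*nu / (k_B T)
assert abs(mean_energy_direct(x) - mean_energy_closed(x)) < 1e-9
print(f"<E>/(h nu) at x = {x}: {mean_energy_closed(x):.6f}")
```

Multiplying this mean energy by the density of modes per frequency interval is what turns the sum into Planck's law.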

In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be E = (n + 1/2)hν, where ν is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy E = nhν as a state with n photons, each of energy hν. This approach gives the correct energy fluctuation formula.

Feynman diagram of two electrons interacting by exchange of a virtual photon

Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's A_ij and B_ij coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics.

Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy E = pc, and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events.

Second-order and higher-order perturbation calculations can give infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization.

Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. Such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is intended to be one of the modes of operation of the planned particle accelerator, the International Linear Collider.

In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode

|n_k0⟩ ⊗ |n_k1⟩ ⊗ … ⊗ |n_kn⟩ ⊗ …

where |n_ki⟩ represents the state in which n_ki photons are in the mode ki. In this notation, the creation of a new photon in mode ki (e.g., emitted from an atomic transition) is written as |n_ki⟩ → |n_ki + 1⟩. This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics.

As a gauge boson

The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime. For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real valued functions made from it, such as the energy or the Lagrangian.

The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be ±ħ. These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states.

In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in Physics. Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally.

Hadronic properties

Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected from the interaction of photons with the hadron's electric charge alone. Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electrical charge structures of protons and neutrons are substantially different. A theory called vector meson dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon, which interacts only with electric charges, and vector mesons, which mediate the residual nuclear force. However, if experimentally probed at very short distances, the intrinsic structure of the photon appears to have as components a charge-neutral flux of quarks and gluons, quasi-free according to asymptotic freedom in QCD. That flux is described by the photon structure function. A review by Nisius (2000) presented a comprehensive comparison of data with theoretical predictions.

Contributions to the mass of a system

The energy of a system that emits a photon is decreased by the energy E of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount E/c². Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form E/c² for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei).
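As a numerical illustration, the mass change Δm = E/c² can be computed directly. The 1.33 MeV gamma energy below is a typical Co-60 decay line, chosen only as an example:

```python
# Sketch: mass change of a system that emits a photon, Δm = E/c².
# The 1.33 MeV value (a Co-60 gamma line) is illustrative only.

E_PHOTON_MEV = 1.33                 # photon energy in MeV
MEV_TO_JOULE = 1.602176634e-13      # exact, via the SI definition of the eV
C = 299_792_458.0                   # speed of light in m/s (exact)

def mass_equivalent_kg(energy_joules):
    """Mass change Δm = E / c² for a system emitting a photon of energy E."""
    return energy_joules / C**2

delta_m = mass_equivalent_kg(E_PHOTON_MEV * MEV_TO_JOULE)
print(f"Δm = {delta_m:.3e} kg")     # ≈ 2.37e-30 kg, about 2.6 electron masses
```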

This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium.

Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves.

In matter

Light that travels through transparent matter does so at a lower speed than c, the speed of light in vacuum. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons. Polaritons have a nonzero effective mass, which means that they cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering.
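The slowing can be quantified as a phase velocity v = c/n. A small sketch follows, using typical tabulated index values; the exact index depends on frequency, which is precisely the dispersion described above:

```python
# Sketch: light's phase velocity in a medium, v = c/n, where n is the
# refractive index. Index values below are typical tabulated figures.

C = 299_792_458.0  # speed of light in vacuum, m/s

def phase_velocity(n):
    """Phase velocity of light in a medium with refractive index n."""
    return C / n

for material, n in [("vacuum", 1.0), ("water", 1.333), ("crown glass", 1.52)]:
    print(f"{material:12s} n = {n:<6} v = {phase_velocity(n)/1e8:.3f}e8 m/s")
```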

Photons can be scattered by matter. For example, photons scatter so many times in the solar radiative zone after leaving the core of the Sun that radiant energy takes about a million years to reach the convection zone. However, photons emitted from the sun's photosphere take only 8.3 minutes to reach Earth.

Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis–trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry.

Technological applications

Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an important application and is discussed above under stimulated emission.

Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas.

Planck's energy formula is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations.
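A short sketch of how Planck's relation E = hν = hc/λ is used in such calculations; the 589 nm sodium D line of a sodium-vapour discharge lamp is assumed as the example:

```python
# Sketch of the Planck relation E = hν = hc/λ, relating the energy of an
# electronic transition to the wavelength of the emitted light.

H = 6.62607015e-34      # Planck constant, J·s (exact)
C = 299_792_458.0       # speed of light, m/s (exact)
EV = 1.602176634e-19    # joules per electronvolt (exact)

def photon_energy_ev(wavelength_m):
    """Photon energy in eV for light of a given vacuum wavelength."""
    return H * C / wavelength_m / EV

# The 589 nm sodium D line, familiar from sodium-vapour discharge lamps:
print(f"{photon_energy_ev(589e-9):.2f} eV")   # ≈ 2.10 eV
```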

Under some conditions, an energy transition can be excited by two photons that individually would be insufficient. This allows for higher-resolution microscopy, because the sample absorbs energy only in the volume where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy.

In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins.

Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is 0 or 1.
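The beam-splitter scheme can be illustrated with a toy classical simulation. Python's pseudorandom `random` module stands in for genuine quantum outcomes, so this sketch only reproduces the 50/50 statistics, not true randomness:

```python
# Toy classical simulation of the beam-splitter scheme described above:
# each photon exits one of two ports with equal probability, and the
# port determines the bit. Real devices use detected photons; random()
# here is only a stand-in that mimics the statistics.
import random

def beamsplitter_bits(n, seed=None):
    """Produce n bits, each 0 or 1 with probability 1/2 (simulated)."""
    rng = random.Random(seed)
    # "transmitted" -> 1, "reflected" -> 0, each with p = 1/2.
    return [rng.randint(0, 1) for _ in range(n)]

bits = beamsplitter_bits(10_000, seed=42)
print(sum(bits) / len(bits))   # close to 0.5
```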

Quantum optics and computation

Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states. Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography.

Two-photon physics studies interactions between photons, which are rare. In 2018, Massachusetts Institute of Technology researchers announced the discovery of bound photon triplets, which may involve polaritons.

Pressure

From Wikipedia, the free encyclopedia
Pressure exerted by particle collisions inside a closed container. The collisions that exert the pressure are highlighted in red.
Common symbols: p, P
SI unit: pascal (Pa)
In SI base units: kg·m−1·s−2
Derivations from other quantities: p = F / A
Dimension: M L−1 T−2

Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure (also spelled gage pressure) is the pressure relative to the ambient pressure.

Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre (N/m2); similarly, the pound-force per square inch (psi, symbol lbf/in2) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the unit atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of a column of a particular fluid in a manometer.

Definition

Pressure is the amount of force applied perpendicular to the surface of an object per unit area. The symbol for it is "p" or P. The IUPAC recommendation for pressure is a lower-case p. However, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style.

Formula

Mathematically, p = F/A, where:

  • p is the pressure,
  • F is the magnitude of the normal force,
  • A is the area of the surface on contact.
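A minimal sketch of the defining formula p = F/A; the fingertip and thumbtack areas below are rough illustrative figures, not measured values:

```python
# Sketch of p = F/A: the same force over different contact areas.

def pressure(force_n, area_m2):
    """Pressure in pascals from a normal force (N) over an area (m²)."""
    return force_n / area_m2

# Illustrative numbers: a 10 N push spread over a fingertip (~1 cm²)
# versus concentrated at a thumbtack point (~0.1 mm²).
finger = pressure(10.0, 1e-4)     # about 1e5 Pa (roughly 1 atm)
tack = pressure(10.0, 1e-7)       # about 1e8 Pa, a thousand times greater
print(f"{finger:.0e} Pa vs {tack:.0e} Pa")
```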

Pressure is a scalar quantity. It relates the vector area element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates these two normal vectors:

dF_n = −p dA

The minus sign comes from the convention that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation.

It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same.

Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. It is defined as a derivative of the internal energy of a system:

p = −(∂U/∂V)_{S,N}

where:

  • U is the internal energy,
  • V is the volume of the system,
  • the subscripts mean that the derivative is taken at fixed entropy (S) and particle number (N).

Units

Mercury column

The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m2, or kg·m−1·s−2). This name for the unit was added in 1971; before that, pressure in SI was expressed in newtons per square metre.

Other units of pressure, such as pounds per square inch (lbf/in2) and bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm−2, or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre ("g/cm2" or "kg/cm2") and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is deprecated in SI. The technical atmosphere (symbol: at) is 1 kgf/cm2 (98.0665 kPa, or 14.223 psi).

Pressure is related to energy density and may be expressed in units such as joules per cubic metre (J/m3, which is equal to Pa). Mathematically:

p = F/A = (F · d)/(A · d) = E/V

Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, except aviation where the hecto- prefix is commonly used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth.

The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at Earth mean sea level and is defined as 101325 Pa (IUPAC recommends the value 100000 Pa, but prior to 1982 the value 101325 Pa (= 1 atm) was usually used).

Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely.

When millimetres of mercury (or inches of mercury) are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimetres (or centimetres) of mercury in most of the world, and lung pressures in centimetres of water are still common.

Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the units for pressure gauges used to measure pressure exposure in diving chambers and personal decompression computers. An msw is defined as 0.1 bar (= 10,000 Pa) and is not the same as a linear metre of depth; an fsw is defined by 33.066 fsw = 1 atm, so 1 fsw = 101,325 Pa / 33.066 ≈ 3,064.3 Pa. The pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft.
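These definitions can be checked directly; a sketch of the msw-to-fsw conversion, using the defined values above:

```python
# Sketch of the diving pressure units described above: 1 msw = 0.1 bar
# (10,000 Pa by definition) and 33.066 fsw = 1 atm (101,325 Pa).

PA_PER_MSW = 10_000.0
PA_PER_FSW = 101_325.0 / 33.066   # ≈ 3,064.3 Pa

def msw_to_fsw(msw):
    """Convert a pressure in metres sea water to feet sea water."""
    return msw * PA_PER_MSW / PA_PER_FSW

print(f"10 msw = {msw_to_fsw(10):.4f} fsw")   # 32.6336 fsw, not 32.8083
```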

Gauge pressure is often given in units with "g" appended, e.g. "kPag", "barg" or "psig", and units for measurements of absolute pressure are sometimes given a suffix of "a", to avoid confusion, for example "kPaa", "psia". However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure. For example, "pg = 100 psi" rather than "p = 100 psig".

Differential pressure is expressed in units with "d" appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close.

Presently or formerly popular pressure units include the following:

  • atmosphere (atm)
  • manometric units:
    • centimetre, inch, millimetre (torr) and micrometre (mTorr, micron) of mercury,
    • height of equivalent column of water, including millimetre (mm H2O), centimetre (cm H2O), metre, inch, and foot of water;
  • imperial and customary units:
    • pound-force per square inch (psi);
  • non-SI metric units:
    • bar, decibar, millibar,
    • msw (metres sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression,
    • kilogram-force, or kilopond, per square centimetre (technical atmosphere),
    • gram-force and tonne-force (metric ton-force) per square centimetre,
    • barye (dyne per square centimetre),
    • kilogram-force and tonne-force per square metre,
    • sthene per square metre (pieze).

Examples

The effects of an external pressure of 700 bar on an aluminum cylinder with 5 mm (0.197 in) wall thickness

As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density.

Another example is a knife. If the flat edge is used, the force is distributed over a larger surface area, resulting in less pressure, and it will not cut. Using the sharp edge instead, which has less surface area, results in greater pressure, and so the knife cuts smoothly.

For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (46 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)".

Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of 32 psi (220 kPa) is sometimes written as "32 psig", and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred.

Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa (15 psi), a gas (such as helium) at 200 kPa (29 psi) (gauge) (300 kPa or 44 psi [absolute]) is 50% denser than the same gas at 100 kPa (15 psi) (gauge) (200 kPa or 29 psi [absolute]). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one.[citation needed]
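The pitfall described above can be made concrete; a sketch assuming an ideal gas at fixed temperature, where density is proportional to absolute pressure:

```python
# Sketch of the gauge-vs-absolute pitfall above: ideal-gas density scales
# with ABSOLUTE pressure, so gauge values alone mislead.

ATM = 100_000.0  # assumed atmospheric pressure, Pa (matches the text's 100 kPa)

def density_ratio(gauge_1_pa, gauge_2_pa, atm_pa=ATM):
    """Ratio of ideal-gas densities at equal temperature, from gauge pressures."""
    return (gauge_1_pa + atm_pa) / (gauge_2_pa + atm_pa)

# 200 kPa gauge vs 100 kPa gauge -> 300 kPa vs 200 kPa absolute:
print(density_ratio(200_000, 100_000))  # 1.5: 50% denser, not twice as dense
```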

Scalar nature

In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion. Because there are an extremely large number of molecules and because the motion of the individual molecules is random in every direction, no motion is detected. When the gas is at least partially confined (that is, not free to expand rapidly), the gas will exhibit a hydrostatic pressure. This confinement can be achieved with either a physical container, or in the gravitational well of a large mass, such as a planet, otherwise known as atmospheric pressure.

In the case of planetary atmospheres, the pressure-gradient force of the gas pushing outwards from higher pressure, lower altitudes to lower pressure, higher altitudes is balanced by the gravitational force, preventing the gas from diffusing into outer space and maintaining hydrostatic equilibrium.

In a physical container, the pressure of the gas originates from the molecules colliding with the walls of the container. The walls of the container can be anywhere inside the gas, and the force per unit area (the pressure) is the same. If the "container" is shrunk down to a very small point (becoming less true as the atomic scale is approached), the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure force acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angle) to the surface.

A closely related quantity is the stress tensor σ, which relates the vector force dF to the vector area element dA via the linear relation dF = σ · dA.

This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure.

According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested.

Types

Fluid pressure

Fluid pressure is most often the compressive stress at some point within a fluid. (The term fluid refers to both liquids and gases – for more information specifically about liquid pressure, see section below.)

Water escapes at high speed from a damaged hydrant that contains water at high pressure.

Fluid pressure occurs in one of two situations:

  • An open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere.
  • A closed condition, called "closed conduit", e.g. a water line or gas line.

Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure.

Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics.

The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal and incompressible. An ideal fluid is a fluid in which there is no friction; it is inviscid (zero viscosity). The equation for all points of a system filled with a constant-density fluid is

p/γ + v²/(2g) + z = const

where:

  • p, pressure of the fluid,
  • γ = ρg, density × acceleration of gravity is the (volume-) specific weight of the fluid,
  • v, velocity of the fluid,
  • g, acceleration of gravity,
  • z, elevation,
  • p/γ, pressure head,
  • v²/(2g), velocity head.
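A sketch of the equation in use: solving for the static pressure at a second point in a level pipe, assuming water and illustrative inlet conditions:

```python
# Sketch of Bernoulli's equation for an ideal, constant-density fluid:
# p/γ + v²/(2g) + z is the same at every point, with γ = ρg.

RHO = 1000.0   # water density, kg/m³
G = 9.81       # gravitational acceleration, m/s²

def bernoulli_head(p, v, z):
    """Total head (m): pressure head + velocity head + elevation."""
    return p / (RHO * G) + v**2 / (2 * G) + z

def pressure_at(p1, v1, z1, v2, z2):
    """Solve the equation for p2, given conditions at point 1."""
    return RHO * G * (bernoulli_head(p1, v1, z1) - v2**2 / (2 * G) - z2)

# Water enters a level pipe at 200 kPa and 2 m/s, then narrows so v = 6 m/s:
p2 = pressure_at(200_000, 2.0, 0.0, 6.0, 0.0)
print(f"{p2/1000:.0f} kPa")   # 184 kPa: faster flow, lower static pressure
```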

Applications

Explosion or deflagration pressures

Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, dust/air suspensions, in unconfined and confined spaces.

Negative pressures

Low-pressure chamber in Bundesleistungszentrum Kienbaum, Germany

While pressures are, in general, positive, there are several situations in which negative pressures may be encountered:

  • When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa). For example, abdominal decompression is an obstetric procedure during which negative gauge pressure is applied intermittently to a pregnant woman's abdomen.
  • Negative absolute pressures are possible. They are effectively tension, and both bulk solids and bulk liquids can be put under negative absolute pressure by pulling on them. Microscopically, the molecules in solids and liquids have attractive interactions that overpower the thermal kinetic energy, so some tension can be sustained. Thermodynamically, however, a bulk material under negative pressure is in a metastable state, and it is especially fragile in the case of liquids where the negative pressure state is similar to superheating and is easily susceptible to cavitation. In certain situations, the cavitation can be avoided and negative pressures sustained indefinitely, for example, liquid mercury has been observed to sustain up to −425 atm in clean glass containers. Negative liquid pressures are thought to be involved in the ascent of sap in plants taller than 10 m (the atmospheric pressure head of water).
  • The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed "vacuum pressure" (not to be confused with the negative gauge pressure of a vacuum).
  • For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive stress along one surface normal, with a component of negative stress acting along another surface normal. The pressure is then defined as the average of the three principal stresses.
    • The stresses in an electromagnetic field are generally non-isotropic, with the stress normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this.
  • In cosmology, dark energy creates a very small yet cosmically significant amount of negative pressure, which accelerates the expansion of the universe.

Stagnation pressure

Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by

p0 = ps + ½ρv²

where:

  • p0 is the stagnation pressure,
  • ρ is the density,
  • v is the flow velocity,
  • ps is the static pressure.

The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures.
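A sketch of the stagnation-pressure relation p0 = ps + ½ρv² and its inversion, as one would apply it to a Pitot-static reading; the air density and speed below are illustrative:

```python
# Sketch of the stagnation-pressure relation p0 = ps + ½ρv² for
# incompressible flow, as read from a Pitot-static probe.

def stagnation_pressure(ps, rho, v):
    """Stagnation pressure (Pa) from static pressure, density, and speed."""
    return ps + 0.5 * rho * v**2

def flow_speed(p0, ps, rho):
    """Invert the relation: the speed implied by a Pitot tube reading."""
    return ((2 * (p0 - ps)) / rho) ** 0.5

# Air (ρ ≈ 1.225 kg/m³) at 50 m/s past a probe reading 101,325 Pa static:
p0 = stagnation_pressure(101_325, 1.225, 50.0)
print(f"p0 = {p0:.0f} Pa, v = {flow_speed(p0, 101_325, 1.225):.1f} m/s")
```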

Surface pressure and surface tension

There is a two-dimensional analog of pressure – the lateral force per unit length applied on a line perpendicular to the force.

Surface pressure is denoted by π and is given by π = F/l, the lateral force F per unit length l; it shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature.

Surface tension is another example of surface pressure, but with a reversed sign, because "tension" is the opposite of "pressure".

Pressure of an ideal gas

In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume:

p = nRT/V

where:

  • p is the pressure,
  • n is the amount of substance (in moles),
  • R is the molar gas constant,
  • T is the absolute temperature,
  • V is the volume.

Real gases exhibit a more complex dependence on the variables of state.
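A sketch of the ideal gas law in use, recovering the familiar result that one mole at 0 °C in 22.414 L exerts about one standard atmosphere:

```python
# Sketch of the ideal gas law p = nRT/V for the pressure of a confined gas.

R = 8.31446261815324   # molar gas constant, J/(mol·K)

def ideal_gas_pressure(n_mol, t_kelvin, v_m3):
    """Pressure (Pa) of n moles of ideal gas at temperature T in volume V."""
    return n_mol * R * t_kelvin / v_m3

# One mole at 273.15 K in 22.414 L — close to one standard atmosphere:
print(f"{ideal_gas_pressure(1.0, 273.15, 22.414e-3):.0f} Pa")   # ≈ 101,325 Pa
```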

Vapour pressure

Vapour pressure is the pressure of a vapour in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form.

The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapour bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases.

The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapor pressure.

Liquid pressure

When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth.

Liquid pressure also depends on the density of the liquid. If someone was submerged in a liquid more dense than water, the pressure would be correspondingly greater. Thus, we can say that liquid pressure is directly proportional to depth and to density. The pressure due to a liquid in liquid columns of constant density and gravity at a depth within a substance is represented by the following formula:

p = ρgh

where:

  • p is liquid pressure,
  • g is gravity at the surface of the overlying material,
  • ρ is density of liquid,
  • h is height of liquid column or depth within a substance.

Another way of saying the same formula is the following:

pressure = weight density × depth
The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmospheric increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths.
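A sketch of the hydrostatic formula p = ρgh, confirming the proportionality to depth stated above (this is the gauge pressure; atmospheric pressure must be added for total pressure):

```python
# Sketch of the hydrostatic relation p = ρgh (gauge pressure).

RHO_WATER = 1000.0   # density of water, kg/m³
G = 9.81             # gravitational acceleration, m/s²

def liquid_pressure(depth_m, rho=RHO_WATER):
    """Gauge pressure (Pa) at a given depth in a liquid of density ρ."""
    return rho * G * depth_m

# Doubling the depth doubles the pressure, exactly as the text states:
print(f"{liquid_pressure(1.0):.0f} Pa, {liquid_pressure(2.0):.0f} Pa")  # 9810 Pa, 19620 Pa
```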

Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure.
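The relationship between gauge pressure ρgh and total pressure can be sketched in a few lines of code. This is an illustrative sketch, not part of the original article; the water density and standard-atmosphere values are assumed nominal constants.

```python
# Sketch: gauge pressure p = rho*g*h, and total pressure including the atmosphere.
# RHO_WATER, G, and P_ATM are assumed nominal values for illustration.

RHO_WATER = 1000.0   # kg/m^3, fresh water (assumed)
G = 9.81             # m/s^2, acceleration due to gravity at the surface
P_ATM = 101325.0     # Pa, standard atmosphere

def gauge_pressure(depth_m, rho=RHO_WATER, g=G):
    """Liquid (gauge) pressure p = rho * g * h, in pascals."""
    return rho * g * depth_m

def total_pressure(depth_m, rho=RHO_WATER, g=G, p_atm=P_ATM):
    """Total pressure: atmospheric pressure plus rho*g*h."""
    return p_atm + gauge_pressure(depth_m, rho, g)

# Twice the depth gives twice the gauge pressure:
assert gauge_pressure(2.0) == 2 * gauge_pressure(1.0)
```

Note that `total_pressure` at zero depth reduces to the atmospheric pressure alone, matching the statement that the total pressure is ρgh plus the pressure of the atmosphere.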

The pressure does not depend on the amount of liquid present. Volume is not the important factor – depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a wide but shallow lake with a depth of 3 m (10 ft) exerts only half the average pressure that a small 6 m (20 ft) deep pond does. (The total force applied to the longer dam will be greater, due to the greater total surface area for the pressure to act upon. But for a given 5-foot (1.5 m)-wide section of each dam, the 10 ft (3.0 m) deep water will apply one quarter the force of 20 ft (6.1 m) deep water). A person will feel the same pressure whether their head is dunked a metre beneath the surface of the water in a small pool or to the same depth in the middle of a large lake.
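The dam comparison above can be checked numerically. Since gauge pressure grows linearly with depth, the average pressure on a vertical face of depth h is ρgh/2, and the force on a section of width w is ½ρgh²w; doubling the depth doubles the average pressure but quadruples the force. A hedged sketch, with the density and gravity values assumed as before:

```python
# Sketch: hydrostatic force on a vertical dam section.
# Average gauge pressure over the face is rho*g*h/2, so the force on a
# section of width w is F = (rho*g*h/2) * (h*w) = 0.5*rho*g*h^2*w.

RHO = 1000.0   # kg/m^3, fresh water (assumed)
G = 9.81       # m/s^2

def dam_section_force(depth_m, width_m, rho=RHO, g=G):
    """Total horizontal force on a vertical dam section, in newtons."""
    return 0.5 * rho * g * depth_m ** 2 * width_m

shallow = dam_section_force(3.0, 1.5)   # ~10 ft deep, ~5 ft wide section
deep = dam_section_force(6.0, 1.5)      # ~20 ft deep, same width

# Twice the depth: twice the average pressure, four times the force.
assert abs(shallow / deep - 0.25) < 1e-12
```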

If four interconnected vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimetres deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference which vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the neighboring vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, which is why water seeks its own level.

Restating this as an energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface of a stationary liquid in a vessel, gravitational potential energy is large but liquid pressure is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure. The two energy components change linearly with the depth, so the sum of pressure and gravitational potential energy per unit volume is constant throughout the volume of the fluid. The units of pressure are equivalent to energy per unit volume. (In the SI system of units, the pascal is equivalent to the joule per cubic metre.) Mathematically, it is described by Bernoulli's equation, where the velocity head is zero and comparisons per unit volume in the vessel are

p + ρgz = const.

Terms have the same meaning as in section Fluid pressure.
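This constancy is easy to verify numerically. In the sketch below (an illustration, not part of the article; density, gravity, and the surface elevation are assumed values), z is the elevation above the vessel bottom, so the gauge pressure at elevation z is ρg times the depth below the free surface:

```python
# Sketch: in a stationary, incompressible liquid, pressure plus gravitational
# potential energy per unit volume is the same everywhere in the vessel.

RHO = 1000.0   # kg/m^3 (assumed)
G = 9.81       # m/s^2
SURFACE = 5.0  # m, elevation of the free surface above the vessel bottom (assumed)

def energy_per_volume(z):
    """p + rho*g*z in J/m^3 (= Pa), at elevation z above the bottom."""
    depth = SURFACE - z            # depth below the free surface
    p = RHO * G * depth            # gauge pressure at that depth
    return p + RHO * G * z         # pressure + potential energy per unit volume

# The sum is constant at every elevation in the vessel:
values = [energy_per_volume(z) for z in (0.0, 1.0, 2.5, 5.0)]
assert all(abs(v - values[0]) < 1e-6 for v in values)
```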

Direction of liquid pressure

An experimentally determined fact about liquid pressure is that it is exerted equally in all directions. If someone is submerged in water, no matter which way that person tilts their head, the person will feel the same amount of water pressure on their ears. Because a liquid can flow, this pressure is not only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a ball is pushed upward by water pressure (buoyancy).

When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure does not have a specific direction, force does. A submerged triangular block has water forced against each face from many directions, but the components of force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular force. This is why the velocity of a liquid particle changes only in its normal component when it collides with the container's wall. Likewise, if the collision site is a hole, water spurting from the hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located, then curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), the force vectors perpendicular to the inner container surface increase with increasing depth – that is, the greater pressure at the bottom makes the bottom hole shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is √(2gh), where h is the depth below the free surface.[25] As predicted by Torricelli's law, this is the same speed the water (or anything else) would have if freely falling the same vertical distance h.
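The equivalence between the efflux speed and free-fall speed can be sketched directly. This is an illustrative check, not part of the article; the gravity value is assumed:

```python
import math

# Sketch of Torricelli's law: liquid leaves a small hole at depth h with speed
# v = sqrt(2*g*h), the same speed a body reaches after free-falling through h.

G = 9.81  # m/s^2 (assumed)

def efflux_speed(depth_m, g=G):
    """Speed of liquid out of a hole at the given depth below the free surface."""
    return math.sqrt(2 * g * depth_m)

def free_fall_speed(height_m, g=G):
    """Speed after falling freely from rest through height_m: h = g*t^2/2, v = g*t."""
    t = math.sqrt(2 * height_m / g)
    return g * t

# The two speeds agree, and a deeper hole shoots water out faster:
assert abs(efflux_speed(2.0) - free_fall_speed(2.0)) < 1e-9
assert efflux_speed(4.0) > efflux_speed(1.0)
```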

Kinematic pressure

P = p/ρ0 is the kinematic pressure, where p is the pressure and ρ0 the constant mass density. The SI unit of P is m²/s². Kinematic pressure is used in the same manner as kinematic viscosity ν in order to compute the Navier–Stokes equation without explicitly showing the density ρ0.
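A minimal sketch of the analogy, assuming an illustrative constant density (the function names are hypothetical, introduced only for this example):

```python
# Sketch: kinematic pressure P = p / rho0, analogous to kinematic viscosity
# nu = mu / rho0. Both divide out the constant mass density, which is how the
# incompressible Navier-Stokes equation can be written without rho explicitly.

RHO0 = 1000.0  # kg/m^3, constant mass density (assumed)

def kinematic_pressure(p_pa, rho0=RHO0):
    """P = p / rho0, in m^2/s^2 (Pa divided by kg/m^3)."""
    return p_pa / rho0

def kinematic_viscosity(mu_pa_s, rho0=RHO0):
    """nu = mu / rho0, in m^2/s (dynamic viscosity divided by density)."""
    return mu_pa_s / rho0
```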

Nanobiotechnology

From Wikipedia, the free encyclopedia ...