Friday, August 4, 2023

Solid-propellant rocket

From Wikipedia, the free encyclopedia
The Space Shuttle was launched with the help of two solid-fuel boosters known as SRBs

A solid-propellant rocket or solid rocket is a rocket with a rocket engine that uses solid propellants (fuel/oxidizer). The earliest rockets were solid-fuel rockets powered by gunpowder; they were used in warfare by the Chinese, Indians, Mongols and Persians as early as the 13th century.

All rockets used some form of solid or powdered propellant until the 20th century, when liquid-propellant rockets offered more efficient and controllable alternatives. Solid rockets are still used today in military armaments worldwide, in model rockets, and as solid rocket boosters on larger launch vehicles, owing to their simplicity and reliability.

Since solid-fuel rockets can remain in storage for an extended period without much propellant degradation and because they almost always launch reliably, they have been frequently used in military applications such as missiles. The lower performance of solid propellants (as compared to liquids) does not favor their use as primary propulsion in modern medium-to-large launch vehicles customarily used to orbit commercial satellites and launch major space probes. Solids are, however, frequently used as strap-on boosters to increase payload capacity or as spin-stabilized add-on upper stages when higher-than-normal velocities are required. Solid rockets are used as light launch vehicles for low Earth orbit (LEO) payloads under 2 tons or escape payloads up to 500 kilograms (1,100 lb).

Basic concepts

A simplified diagram of a solid-fuel rocket.
  1. A solid fuel-oxidizer mixture (propellant) is packed into the rocket, with a cylindrical hole in the middle.
  2. An igniter combusts the surface of the propellant.
  3. The cylindrical hole in the propellant acts as a combustion chamber.
  4. The hot exhaust is choked at the throat, which, among other things, dictates the amount of thrust produced.
  5. Exhaust exits the rocket.

A simple solid rocket motor consists of a casing, nozzle, grain (propellant charge), and igniter.

The solid grain mass burns in a predictable fashion to produce exhaust gases, the flow of which is described by Taylor–Culick flow. The nozzle dimensions are calculated to maintain a design chamber pressure, while producing thrust from the exhaust gases.
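
To make the relationship between exhaust flow and thrust concrete, here is a minimal sketch evaluating the standard rocket thrust equation F = ṁ·ve + (pe − pa)·Ae; all numerical values are assumed, order-of-magnitude figures rather than data for any particular motor.

```python
# Rough thrust estimate from the standard rocket thrust equation:
#   F = mdot * v_e + (p_e - p_a) * A_e
# All values below are assumed, order-of-magnitude numbers for illustration only.
mdot = 100.0      # propellant mass flow rate, kg/s (assumed)
v_e  = 2500.0     # effective exhaust velocity, m/s (assumed)
p_e  = 60_000.0   # nozzle exit pressure, Pa (assumed)
p_a  = 101_325.0  # ambient pressure at sea level, Pa
A_e  = 0.5        # nozzle exit area, m^2 (assumed)

thrust = mdot * v_e + (p_e - p_a) * A_e
print(f"Thrust ≈ {thrust / 1000:.0f} kN")  # ≈ 229 kN with these numbers
```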

Once ignited, a simple solid rocket motor cannot be shut off, because it contains all the ingredients necessary for combustion within the chamber in which they are burned. More advanced solid rocket motors can be throttled, extinguished, and then re-ignited by controlling the nozzle geometry or by using vent ports. Pulsed rocket motors, which burn in segments that can be ignited on command, are also available.

Modern designs may also include a steerable nozzle for guidance, avionics, recovery hardware (parachutes), self-destruct mechanisms, APUs, controllable tactical motors, controllable divert and attitude control motors, and thermal management materials.

History

A battery of Katyusha rocket launchers fires at German forces during the Battle of Stalingrad, 6 October 1942
Aerojet 260 motor test, 25 September 1965

The medieval Song dynasty Chinese invented a very primitive form of solid-propellant rocket. Illustrations and descriptions in the 14th-century Chinese military treatise Huolongjing by the Ming dynasty military writer and philosopher Jiao Yu confirm that the Chinese in 1232 used proto solid-propellant rockets, then known as "fire arrows", to drive back the Mongols during the Mongol siege of Kaifeng. Each arrow took the form of a simple solid-propellant rocket tube filled with gunpowder. One end was left open to allow the gas to escape, and the tube was attached to a long stick that acted as a guidance system for flight direction control.

The first rockets with tubes of cast iron were used by the Kingdom of Mysore under Hyder Ali and Tipu Sultan in the 1750s. These rockets could reach targets up to a mile and a half (about 2.4 km) away. They were extremely effective in the Second Anglo-Mysore War, which ended in a humiliating defeat for the British East India Company. Word of the success of the Mysore rockets against the British triggered research in England, France, Ireland and elsewhere. When the British finally conquered the fort of Srirangapatana in 1799, hundreds of rockets were shipped off to the Royal Arsenal near London to be reverse-engineered. This led to the first industrial manufacture of military rockets with the Congreve rocket in 1804.

In 1921 the Soviet research and development laboratory Gas Dynamics Laboratory began developing solid-propellant rockets, which resulted in a first launch in 1928 that flew approximately 1,300 metres. These rockets were used in 1931 for the world's first successful use of rockets to assist the take-off of aircraft. The research continued from 1933 at the Reactive Scientific Research Institute (RNII) with the development of the RS-82 and RS-132 rockets, including several variants for ground-to-air, ground-to-ground, air-to-ground and air-to-air combat. The earliest known use by the Soviet Air Force of aircraft-launched unguided anti-aircraft rockets in combat against heavier-than-air aircraft took place in August 1939, during the Battle of Khalkhin Gol. In June 1938, the RNII began developing a multiple rocket launcher based on the RS-132 rocket; the completed product, the BM-13 / Katyusha rocket launcher, followed in August 1939. Towards the end of 1938 the first significant large-scale testing of the rocket launchers took place, in which 233 rockets of various types were used. A salvo of rockets could completely straddle a target at a range of 5,500 metres (3.4 mi). By the end of World War II, total production of rocket launchers reached about 10,000, with 12 million rockets of the RS type produced for the Soviet armed forces.

In the United States modern castable composite solid rocket motors were invented by the American aerospace engineer Jack Parsons at Caltech in 1942 when he replaced double base propellant with roofing asphalt and potassium perchlorate. This made possible slow-burning rocket motors of adequate size and with sufficient shelf-life for jet-assisted take off applications. Charles Bartley, employed at JPL (Caltech), substituted curable synthetic rubber for the gooey asphalt, creating a flexible but geometrically stable load-bearing propellant grain that bonded securely to the motor casing. This made possible much larger solid rocket motors. Atlantic Research Corporation significantly boosted composite propellant Isp in 1954 by increasing the amount of powdered aluminium in the propellant to as much as 20%.

Solid-propellant rocket technology got its largest boost in technical innovation, size and capability with the various mid-20th century government initiatives to develop increasingly capable military missiles. After initial designs of ballistic missile military technology designed with liquid-propellant rockets in the 1940s and 1950s, both the Soviet Union and the United States embarked on major initiatives to develop solid-propellant local, regional, and intercontinental ballistic missiles, including solid-propellant missiles that could be launched from air or sea. Many other governments also developed these military technologies over the next 50 years.

By the later 1980s and continuing to 2020, these government-developed highly-capable solid rocket technologies have been applied to orbital spaceflight by many government-directed programs, most often as booster rockets to add extra thrust during the early ascent of their primarily liquid rocket launch vehicles. Some designs have had solid rocket upper stages as well. Examples flying in the 2010s include the European Ariane 5, US Atlas V and Space Shuttle, and Japan's H-II.

The largest solid rocket motors ever built were Aerojet's three 6.60-meter (260 in) monolithic solid motors cast in Florida. Motors 260 SL-1 and SL-2 were 6.63 meters (261 in) in diameter, 24.59 meters (80 ft 8 in) long, weighed 842,900 kilograms (1,858,300 lb), and had a maximum thrust of 16 MN (3,500,000 lbf). Burn duration was two minutes. The nozzle throat was large enough to walk through standing up. The motor was capable of serving as a 1-to-1 replacement for the 8-engine Saturn I liquid-propellant first stage but was never used as such. Motor 260 SL-3 was of similar length and weight but had a maximum thrust of 24 MN (5,400,000 lbf) and a shorter duration.

Design

Design begins with the total impulse required, which determines the fuel and oxidizer mass. Grain geometry and chemistry are then chosen to satisfy the required motor characteristics.

The following are chosen or solved simultaneously. The results are exact dimensions for grain, nozzle, and case geometries (a simplified chamber-pressure balance is sketched after the list):

  • The grain burns at a predictable rate, given its surface area and chamber pressure.
  • The chamber pressure is determined by the nozzle throat diameter and grain burn rate.
  • Allowable chamber pressure is a function of casing design.
  • The length of burn time is determined by the grain "web thickness".
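
As a rough illustration of how these quantities are solved together, the sketch below balances gas generation against nozzle outflow using the empirical burn-rate law r = a·Pc^n (Saint-Robert's law); the propellant properties and Kn value are assumed, representative numbers, not data for any specific motor.

```python
# Steady-state chamber pressure for a solid motor: gas generated by the burning
# surface equals gas expelled through the nozzle throat.
#   rho_p * A_b * (a * Pc**n) = Pc * A_t / c_star
#   =>  Pc = (rho_p * a * c_star * Kn) ** (1 / (1 - n)),  with  Kn = A_b / A_t
# All values are assumed, representative numbers for illustration only.
rho_p  = 1750.0   # propellant density, kg/m^3 (assumed)
a      = 3.5e-5   # burn-rate coefficient, m/s per Pa**n (assumed)
n      = 0.36     # burn-rate pressure exponent (assumed)
c_star = 1550.0   # characteristic velocity, m/s (assumed)
Kn     = 250.0    # burning surface area / throat area (assumed)

Pc = (rho_p * a * c_star * Kn) ** (1.0 / (1.0 - n))
r  = a * Pc ** n  # linear burn rate at that pressure
print(f"Chamber pressure ≈ {Pc / 1e6:.1f} MPa, burn rate ≈ {r * 1000:.1f} mm/s")
```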

The grain may or may not be bonded to the casing. Case-bonded motors are more difficult to design, since the deformation of the case and the grain under flight must be compatible.

Common modes of failure in solid rocket motors include fracture of the grain, failure of case bonding, and air pockets in the grain. All of these produce an instantaneous increase in burn surface area and a corresponding increase in exhaust gas production rate and pressure, which may rupture the casing.

Another failure mode is casing seal failure. Seals are required in casings that have to be opened to load the grain. Once a seal fails, hot gas will erode the escape path and result in failure. This was the cause of the Space Shuttle Challenger disaster.

Grain geometry

Solid rocket fuel deflagrates from the surface of exposed propellant in the combustion chamber. In this fashion, the geometry of the propellant inside the rocket motor plays an important role in the overall motor performance. As the surface of the propellant burns, the shape evolves (a subject of study in internal ballistics), most often changing the propellant surface area exposed to the combustion gases. Since the propellant volume is equal to the cross-sectional area times the fuel length, the volumetric propellant consumption rate is the burning surface area As times the linear burn rate r, and the instantaneous mass flow rate of combustion gases generated is equal to the volumetric rate times the fuel density ρ: ṁ = ρ · As · r.

Several geometric configurations are often used depending on the application and desired thrust curve:

  • Circular bore: if in a BATES configuration, produces a progressive-regressive thrust curve.
  • End burner: propellant burns from one axial end to the other, producing a steady, long burn, though it has thermal difficulties and a shifting center of gravity (CG).
  • C-slot: propellant with a large wedge cut out of one side (along the axial direction), producing a fairly long regressive thrust, though with thermal difficulties and asymmetric CG characteristics.
  • Moon burner: an off-center circular bore produces a progressive-regressive long burn, though with slightly asymmetric CG characteristics.
  • Finocyl: usually a 5- or 6-legged star-like shape that can produce very level thrust, with a somewhat quicker burn than a circular bore due to increased surface area.

Casing

The casing may be constructed from a range of materials. Cardboard is used for small black powder model motors, whereas aluminium is used for larger composite-fuel hobby motors. Steel was used for the space shuttle boosters. Filament-wound graphite epoxy casings are used for high-performance motors.

The casing must be designed to withstand the pressure and resulting stresses of the rocket motor, possibly at elevated temperature. For design, the casing is considered a pressure vessel.

To protect the casing from corrosive hot gases, a sacrificial thermal liner on the inside of the casing is often implemented, which ablates to prolong the life of the motor casing.

Nozzle

A convergent-divergent design accelerates the exhaust gas out of the nozzle to produce thrust. The nozzle must be constructed from a material that can withstand the heat of the combustion gas flow. Often, heat-resistant carbon-based materials are used, such as amorphous graphite or carbon-carbon.

Some designs include directional control of the exhaust. This can be accomplished by gimballing the nozzle, as in the Space Shuttle SRBs, by the use of jet vanes in the exhaust as in the V-2 rocket, or by liquid injection thrust vectoring (LITV).

LITV consists of injecting a liquid into the exhaust stream after the nozzle throat. The liquid then vaporizes, and in most cases chemically reacts, adding mass flow to one side of the exhaust stream and thus providing a control moment. For example, the Titan IIIC solid boosters injected nitrogen tetroxide for LITV; the tanks can be seen on the sides of the rocket between the main center stage and the boosters.

An early Minuteman first stage used a single motor with four gimballed nozzles to provide pitch, yaw, and roll control.

Performance

An exhaust cloud engulfs Launch Pad 39A at NASA's Kennedy Space Center as the Space Shuttle Endeavour lifts off.

A typical, well-designed ammonium perchlorate composite propellant (APCP) first-stage motor may have a vacuum specific impulse (Isp) as high as 285.6 seconds (2.801 km/s) (Titan IVB SRMU). This compares to 339.3 s (3.327 km/s) for RP1/LOX (RD-180) and 452.3 s (4.436 km/s) for LH2/LOX (Block II RS-25) bipropellant engines. Upper stage specific impulses are somewhat greater: as much as 303.8 s (2.979 km/s) for APCP (Orbus 6E), 359 s (3.52 km/s) for RP1/LOX (RD-0124) and 465.5 s (4.565 km/s) for LH2/LOX (RL10B-2).
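
The km/s figures quoted above are simply the specific impulse in seconds multiplied by standard gravity (ve = Isp·g0); the short check below reproduces them.

```python
# Effective exhaust velocity from specific impulse: v_e = Isp * g0.
g0 = 9.80665  # standard gravity, m/s^2

for name, isp in [("APCP (Titan IVB SRMU)", 285.6),
                  ("RP-1/LOX (RD-180)", 339.3),
                  ("LH2/LOX (RS-25)", 452.3)]:
    print(f"{name}: {isp} s -> {isp * g0 / 1000:.3f} km/s")
# 285.6 s -> 2.801 km/s, 339.3 s -> 3.327 km/s, 452.3 s -> 4.436 km/s
```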

Propellant fractions are usually somewhat higher for (non-segmented) solid propellant first stages than for upper stages. The 53,000-kilogram (117,000 lb) Castor 120 first stage has a propellant mass fraction of 92.23%, while the 14,000-kilogram (31,000 lb) Castor 30 upper stage developed for Orbital Sciences' Taurus II COTS (Commercial Orbital Transportation Services, International Space Station resupply) launch vehicle has a 91.3% propellant fraction with 2.9% graphite epoxy motor casing, 2.4% nozzle, igniter and thrust vector actuator, and 3.4% non-motor hardware including such things as the payload mount, interstage adapter, cable raceway, instrumentation, etc. Castor 120 and Castor 30 are 2.36 and 2.34 meters (93 and 92 in) in diameter, respectively, and serve as stages on the Athena IC and IIC commercial launch vehicles. A four-stage Athena II using Castor 120s as both first and second stages became the first commercially developed launch vehicle to launch a lunar probe (Lunar Prospector) in 1998.
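
To see why the propellant fraction matters, the sketch below feeds the quoted Castor 120 numbers into the Tsiolkovsky rocket equation; the specific impulse used is an assumed, representative value for a solid first stage (not a published Castor 120 figure), and the result ignores payload, upper stages, gravity and drag losses.

```python
import math

# Ideal stage delta-v from the Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf).
g0  = 9.80665
isp = 280.0              # s, assumed representative solid first-stage Isp (not a Castor 120 figure)
m0  = 53_000.0           # kg, Castor 120 stage mass quoted in the text
mf  = m0 * (1 - 0.9223)  # burnout mass implied by the 92.23% propellant fraction

dv = isp * g0 * math.log(m0 / mf)
print(f"Stage-only ideal delta-v ≈ {dv:.0f} m/s")  # ≈ 7,000 m/s, with no payload or losses
```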

Solid rockets can provide high thrust for relatively low cost. For this reason, solids have been used as initial stages in rockets (for example the Space Shuttle), while reserving high specific impulse engines, especially less massive hydrogen-fueled engines, for higher stages. In addition, solid rockets have a long history as the final boost stage for satellites due to their simplicity, reliability, compactness and reasonably high mass fraction. A spin-stabilized solid rocket motor is sometimes added when extra velocity is required, such as for a mission to a comet or the outer solar system, because a spinner does not require a guidance system (on the newly added stage). Thiokol's extensive family of mostly titanium-cased Star space motors has been widely used, especially on Delta launch vehicles and as spin-stabilized upper stages to launch satellites from the cargo bay of the Space Shuttle. Star motors have propellant fractions as high as 94.6% but add-on structures and equipment reduce the operating mass fraction by 2% or more.

Higher performing solid rocket propellants are used in large strategic missiles (as opposed to commercial launch vehicles). HMX, C4H8N4(NO2)4, a nitramine with greater energy than ammonium perchlorate, was used in the propellant of the Peacekeeper ICBM and is the main ingredient in NEPE-75 propellant used in the Trident II D-5 Fleet Ballistic Missile. Because of the explosive hazard, the higher-energy military solid propellants containing HMX are not used in commercial launch vehicles except when the launch vehicle is an adapted ballistic missile that already contains HMX propellant (the Minotaur IV and V, based on the retired Peacekeeper ICBMs). The Naval Air Weapons Station at China Lake, California, developed a new compound, C6H6N6(NO2)6, called simply CL-20 (China Lake compound #20). Compared to HMX, CL-20 has 14% more energy per mass, 20% more energy per volume, and a higher oxygen-to-fuel ratio. One of the motivations for development of these very high energy density military solid propellants is to achieve mid-course exo-atmospheric ABM capability from missiles small enough to fit in existing ship-based below-deck vertical launch tubes and air-mobile truck-mounted launch tubes. CL-20 propellant compliant with Congress' 2004 insensitive munitions (IM) law has been demonstrated and may, as its cost comes down, be suitable for use in commercial launch vehicles, with a very significant increase in performance compared with the currently favored APCP solid propellants. With a specific impulse of 309 s already demonstrated by Peacekeeper's second stage using HMX propellant, the higher energy of CL-20 propellant can be expected to increase specific impulse to around 320 s in similar ICBM or launch vehicle upper stage applications, without the explosive hazard of HMX.

An attractive attribute for military use is the ability for solid rocket propellant to remain loaded in the rocket for long durations and then be reliably launched at a moment's notice.

Propellant families

Black powder (gunpowder) propellant

Black powder (gunpowder) is composed of charcoal (fuel), potassium nitrate (oxidizer), and sulfur (fuel and catalyst). It is one of the oldest pyrotechnic compositions with application to rocketry. In modern times, black powder finds use in low-power model rockets (such as Estes and Quest rockets), as it is cheap and fairly easy to produce. The fuel grain is typically a mixture of pressed fine powder (into a solid, hard slug), with a burn rate that is highly dependent upon exact composition and operating conditions. The specific impulse of black powder is low, around 80 s (0.78 km/s). The grain is sensitive to fracture and, therefore, catastrophic failure. Black powder does not typically find use in motors above 40 newtons (9.0 pounds-force) thrust.

Zinc–sulfur (ZS) propellants

Composed of powdered zinc metal and powdered sulfur (oxidizer), ZS or "micrograin" is another pressed propellant that does not find any practical application outside specialized amateur rocketry circles, due to its poor performance (as most ZS burns outside the combustion chamber) and fast linear burn rates on the order of 2 m/s. ZS is most often employed as a novelty propellant: the rocket accelerates extremely quickly, leaving a spectacular large orange fireball behind it.

"Candy" propellants

In general, rocket candy propellants are an oxidizer (typically potassium nitrate) and a sugar fuel (typically dextrose, sorbitol, or sucrose) that are cast into shape by gently melting the propellant constituents together and pouring or packing the amorphous colloid into a mold. Candy propellants generate a low-medium specific impulse of roughly 130 s (1.3 km/s) and, thus, are used primarily by amateur and experimental rocketeers.

Double-base (DB) propellants

DB propellants are composed of two monopropellant fuel components where one typically acts as a high-energy (yet unstable) monopropellant and the other acts as a lower-energy stabilizing (and gelling) monopropellant. In typical circumstances, nitroglycerin is dissolved in a nitrocellulose gel and solidified with additives. DB propellants are implemented in applications where minimal smoke is needed yet a medium-high Isp of roughly 235 s (2.30 km/s) is required. The addition of metal fuels (such as aluminium) can increase performance to around 250 s (2.5 km/s), though metal oxide nucleation in the exhaust can turn the smoke opaque.

Composite propellants

A powdered oxidizer and powdered metal fuel are intimately mixed and immobilized with a rubbery binder (that also acts as a fuel). Composite propellants are often either ammonium-nitrate-based (ANCP) or ammonium-perchlorate-based (APCP). Ammonium nitrate composite propellant often uses magnesium and/or aluminium as fuel and delivers medium performance (Isp of about 210 s (2.1 km/s)) whereas ammonium perchlorate composite propellant often uses aluminium fuel and delivers high performance: vacuum Isp up to 296 s (2.90 km/s) with a single-piece nozzle or 304 s (2.98 km/s) with a high-area-ratio telescoping nozzle. Aluminium is used as fuel because it has a reasonable specific energy density, a high volumetric energy density, and is difficult to ignite accidentally. Composite propellants are cast, and retain their shape after the rubber binder, such as Hydroxyl-terminated polybutadiene (HTPB), cross-links (solidifies) with the aid of a curative additive. Because of its high performance, moderate ease of manufacturing, and moderate cost, APCP finds widespread use in space, military, and amateur rockets, whereas cheaper and less efficient ANCP finds use in amateur rocketry and gas generators. Ammonium dinitramide, NH4N(NO2)2, is being considered as a 1-to-1 chlorine-free substitute for ammonium perchlorate in composite propellants. Unlike ammonium nitrate, ADN can be substituted for AP without a loss in motor performance.

Polyurethane-bound aluminium-APCP solid fuel was used in the submarine-launched Polaris missiles. APCP used in the space shuttle Solid Rocket Boosters consisted of ammonium perchlorate (oxidizer, 69.6% by weight), aluminium (fuel, 16%), iron oxide (a catalyst, 0.4%), polybutadiene acrylonitrile (PBAN) polymer (a non-urethane rubber binder that held the mixture together and acted as secondary fuel, 12.04%), and an epoxy curing agent (1.96%). It developed a specific impulse of 242 seconds (2.37 km/s) at sea level or 268 seconds (2.63 km/s) in a vacuum. The 2005-2009 Constellation Program was to use a similar PBAN-bound APCP.

In 2009, a group succeeded in creating a propellant of water and nanoaluminium (ALICE).

High-energy composite (HEC) propellants

Typical HEC propellants start with a standard composite propellant mixture (such as APCP) and add a high-energy explosive to the mix. This extra component usually is in the form of small crystals of RDX or HMX, both of which have higher energy than ammonium perchlorate. Despite a modest increase in specific impulse, implementation is limited due to the increased hazards of the high-explosive additives.

Composite modified double base propellants

Composite modified double base propellants start with a nitrocellulose/nitroglycerin double base propellant as a binder and add solids (typically ammonium perchlorate (AP) and powdered aluminium) normally used in composite propellants. The ammonium perchlorate makes up the oxygen deficit introduced by using nitrocellulose, improving the overall specific impulse. The aluminium improves specific impulse as well as combustion stability. High performing propellants such as NEPE-75 used to fuel the Trident II D-5, SLBM replace most of the AP with polyethylene glycol-bound HMX, further increasing specific impulse. The mixing of composite and double base propellant ingredients has become so common as to blur the functional definition of double base propellants.

Minimum-signature (smokeless) propellants

One of the most active areas of solid propellant research is the development of high-energy, minimum-signature propellant using CL-20 nitramine (China Lake compound #20, C6H6N6(NO2)6), which has 14% higher energy per mass and 20% higher energy density than HMX. The new propellant has been successfully developed and tested in tactical rocket motors. The propellant is non-polluting: acid-free, free of solid particulates, and lead-free. It is also smokeless and has only a faint shock diamond pattern visible in the otherwise transparent exhaust. Without the bright flame and dense smoke trail produced by the burning of aluminized propellants, these smokeless propellants all but eliminate the risk of giving away the positions from which the missiles are fired. The new CL-20 propellant is shock-insensitive (hazard class 1.3), as opposed to current HMX smokeless propellants, which are highly detonable (hazard class 1.1). CL-20 is considered a major breakthrough in solid rocket propellant technology but has yet to see widespread use because costs remain high.

Electric solid propellants

Electric solid propellants (ESPs) are a family of high-performance plastisol solid propellants that can be ignited and throttled by the application of electric current. Unlike conventional rocket motor propellants that are difficult to control and extinguish, ESPs can be ignited reliably at precise intervals and durations. The system requires no moving parts, and the propellant is insensitive to flames or electrical sparks.

Hobby and amateur rocketry

Solid propellant rocket motors can be bought for use in model rocketry; they are normally small cylinders of black powder fuel with an integral nozzle and optionally a small charge that is set off when the propellant is exhausted after a time delay. This charge can be used to trigger a camera, or deploy a parachute. Without this charge and delay, the motor may ignite a second stage (black powder only).

In mid- and high-power rocketry, commercially made APCP motors are widely used. They can be designed as either single-use or reloadable. These motors are available in impulse ranges from "A" (1.26–2.50 N·s) to "O" (20,480–40,960 N·s), from several manufacturers. They are manufactured in standardized diameters and varying lengths depending on required impulse. Standard motor diameters are 13, 18, 24, 29, 38, 54, 75, 98, and 150 millimeters. Different propellant formulations are available to produce different thrust profiles, as well as special effects such as colored flames, smoke trails, or large quantities of sparks (produced by adding titanium sponge to the mix).
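
Each letter class covers twice the total impulse of the class before it, so the quoted ranges follow a simple doubling rule; the helper below is a hypothetical illustration of that rule, with the lower bounds shown as nominal values.

```python
# Hobby motor letter classes double in total impulse: "A" spans 1.26-2.50 N*s,
# "B" 2.51-5.00 N*s, and so on up to "O" at 20,480-40,960 N*s.
def impulse_range(letter: str) -> tuple[float, float]:
    idx = ord(letter.upper()) - ord("A")  # A -> 0, B -> 1, ...
    upper = 2.5 * 2 ** idx                # upper bound, newton-seconds
    return upper / 2, upper               # nominal lower bound, upper bound

for cls in ("A", "H", "O"):
    lo, hi = impulse_range(cls)
    print(f"Class {cls}: {lo:,.2f} - {hi:,.2f} N*s")
```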

Use

Sounding rockets

Almost all sounding rockets use solid motors.

Missiles

Due to their reliability and ease of storage and handling, solid rockets are used in missiles and ICBMs.

Orbital rockets

Solid rockets are suitable for launching small payloads to orbital velocities, especially if three or more stages are used. Many of these are based on repurposed ICBMs.

Larger liquid-fueled orbital rockets often use solid rocket boosters to gain enough initial thrust to launch the fully fueled rocket.

Solid fuel is also used for some upper stages, particularly the Star 37 (sometimes referred to as the "Burner" upper stage) and the Star 48 (sometimes referred to as the "Payload Assist Module", or PAM), both manufactured originally by Thiokol, and today by Northrop Grumman. They are used to lift large payloads to intended orbits (such as the Global Positioning System satellites), or smaller payloads to interplanetary—or even interstellar—trajectories. Another solid-fuel upper stage, used by the Space Shuttle and the Titan IV, was the Boeing-manufactured Inertial Upper Stage (IUS).

Some rockets, like the Antares (manufactured by Northrop Grumman), have mandatory solid-fuel upper stages. The Antares rocket uses the Northrop Grumman-manufactured Castor 30 as an upper stage.

Advanced research

  • Environmentally sensitive fuel formulations such as ALICE propellant
  • Ramjets with solid fuel
  • Variable thrust designs based on variable nozzle geometry
  • Hybrid rockets that use solid fuel and throttleable liquid or gaseous oxidizer

Quantum mechanics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quantum_mechanics
Wave functions of the electron in a hydrogen atom at different energy levels. Quantum mechanics cannot predict the exact location of a particle in space, only the probability of finding it at different locations. The brighter areas represent a higher probability of finding the electron.

Quantum mechanics is a fundamental theory in physics that provides a description of the physical properties of nature at the scale of atoms and subatomic particles. It is the foundation of all quantum physics including quantum chemistry, quantum field theory, quantum technology, and quantum information science.

Classical physics, the collection of theories that existed before the advent of quantum mechanics, describes many aspects of nature at an ordinary (macroscopic) scale, but is not sufficient for describing them at small (atomic and subatomic) scales. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.

Quantum mechanics differs from classical physics in that energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values (quantization); objects have characteristics of both particles and waves (wave–particle duality); and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).

Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield.

Overview and fundamental concepts

Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.

A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.

One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between different measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum.

Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. Other atomic-scale entities, such as electrons, are found to exhibit the same behavior when fired towards a double slit. This behavior is known as wave–particle duality.

Another counter-intuitive phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy and the tunnel diode.

When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables the counter-intuitive properties of quantum pseudo-telepathy, and can be a valuable resource in communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem.

Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed, using entangled particles, and they have shown results incompatible with the constraints imposed by local hidden variables.

It is not possible to present these concepts in more than a superficial way without introducing the actual mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.

Mathematical formulation

In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ belonging to a (separable) complex Hilbert space H. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys ⟨ψ, ψ⟩ = 1, and it is well-defined up to a complex number of modulus 1 (the global phase): ψ and e^{iα}ψ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L²(ℝ), while the Hilbert space for the spin of a single proton is simply the space ℂ² of two-dimensional complex vectors with the usual inner product.

Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues λ with probability given by the Born rule: in the simplest case the eigenvalue is non-degenerate and the probability is given by |⟨v_λ, ψ⟩|², where v_λ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ψ, P_λ ψ⟩, where P_λ is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density.

After the measurement, if result λ was obtained, the quantum state is postulated to collapse to v_λ in the non-degenerate case, or to P_λψ / √⟨ψ, P_λψ⟩ in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.
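
As a concrete, finite-dimensional illustration of the Born rule and measurement just described, the sketch below measures an assumed 2×2 Hermitian observable in an arbitrary normalized state; it is a toy example, not a model of any particular physical system.

```python
import numpy as np

# Toy Born-rule calculation for a 2x2 Hermitian observable (assumed for illustration).
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])      # assumed Hermitian observable
psi = np.array([0.6, 0.8j])      # normalized state: |0.6|^2 + |0.8|^2 = 1

eigvals, eigvecs = np.linalg.eigh(A)   # spectral decomposition of the observable
for a_k, v_k in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(v_k, psi)) ** 2       # Born rule: P(a_k) = |<v_k|psi>|^2
    print(f"outcome {a_k:+.3f}: probability {prob:.3f}")
# After observing outcome a_k, the state collapses to the eigenvector v_k (up to phase).
```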

The time evolution of a quantum state is described by the Schrödinger equation:

iħ (d/dt) ψ(t) = H ψ(t).

Here H denotes the Hamiltonian, the observable corresponding to the total energy of the system, and ħ is the reduced Planck constant. The constant iħ is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle.

The solution of this differential equation is given by

ψ(t) = e^{−iHt/ħ} ψ(0).

The operator U(t) = e^{−iHt/ħ} is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state ψ(0) – it makes a definite prediction of what the quantum state ψ(t) will be at any later time.
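
A minimal numerical check of this picture, using an assumed 2×2 Hamiltonian and units in which ħ = 1: the matrix exponential U(t) = exp(−iHt/ħ) is unitary and preserves the norm of the state.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                      # work in units where hbar = 1
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # assumed 2x2 Hermitian Hamiltonian
t = 0.7

U = expm(-1j * H * t / hbar)                   # U(t) = exp(-i H t / hbar)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary

psi0  = np.array([1.0, 0.0])    # initial state
psi_t = U @ psi0                # deterministic evolution to time t
print(np.vdot(psi_t, psi_t).real)              # 1.0: the norm is preserved
```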

Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Denser areas correspond to higher probability density in a position measurement. Such wave functions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics and are modes of oscillation as well, possessing a sharp energy and thus, a definite frequency. The angular momentum and energy are quantized and take only discrete values like those shown. (As is the case for resonant frequencies in acoustics.)

Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1).

Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment.

However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another method is called "semi-classical equation of motion", which applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos.

Uncertainty principle

One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator X̂ and momentum operator P̂ do not commute, but rather satisfy the canonical commutation relation:

[X̂, P̂] = iħ.

Given a quantum state, the Born rule lets us compute expectation values for both X̂ and P̂, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have

σ_X = √(⟨X̂²⟩ − ⟨X̂⟩²),

and likewise for the momentum:

σ_P = √(⟨P̂²⟩ − ⟨P̂⟩²).

The uncertainty principle states that

σ_X σ_P ≥ ħ/2.

Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators A and B. The commutator of these two operators is

[A, B] = AB − BA,

and this provides the lower bound on the product of standard deviations:

σ_A σ_B ≥ ½ |⟨[A, B]⟩|.
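
The generalized bound is easy to verify numerically; the sketch below checks it for the Pauli operators σx and σy in an arbitrary, assumed spin-1/2 state.

```python
import numpy as np

# Check sigma_A * sigma_B >= 0.5 * |<[A, B]>| for A = Pauli-x, B = Pauli-y.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

psi = np.array([1.0, 0.6 + 0.3j])
psi = psi / np.linalg.norm(psi)           # assumed, arbitrary normalized spin-1/2 state

def std_dev(op, state):
    mean    = np.vdot(state, op @ state).real
    mean_sq = np.vdot(state, op @ op @ state).real
    return np.sqrt(mean_sq - mean ** 2)

lhs = std_dev(sx, psi) * std_dev(sy, psi)
rhs = 0.5 * abs(np.vdot(psi, (sx @ sy - sy @ sx) @ psi))
print(lhs >= rhs, round(lhs, 4), round(rhs, 4))   # the inequality holds
```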

Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to a factor of −iħ) to taking the derivative with respect to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum p is replaced by −iħ ∂/∂x, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times −ħ².

Composite systems and entanglement

When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let A and B be two quantum systems, with Hilbert spaces H_A and H_B, respectively. The Hilbert space of the composite system is then

H_AB = H_A ⊗ H_B.

If the state for the first system is the vector ψ_A and the state for the second system is ψ_B, then the state of the composite system is

ψ_A ⊗ ψ_B.

Not all states in the joint Hilbert space can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if ψ_A and φ_A are both possible states for system A, and likewise ψ_B and φ_B are both possible states for system B, then

(1/√2)(ψ_A ⊗ ψ_B + φ_A ⊗ φ_B)

is a valid joint state that is not separable. States that are not separable are called entangled.

If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory.
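
A small worked example of the tensor-product construction and of reduced density matrices: the sketch below builds a Bell state with NumPy and traces out subsystem B, showing that the reduced state of A is maximally mixed, which is exactly the loss of information described above.

```python
import numpy as np

# Product state vs. entangled Bell state, and the reduced density matrix of subsystem A.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

product = np.kron(ket0, ket1)                                    # separable state |0>|1>
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # entangled Bell state

def reduced_rho_A(state):
    """Reduced density matrix of subsystem A: trace out B from a two-qubit pure state."""
    m = state.reshape(2, 2)      # amplitudes indexed as [a, b]
    return m @ m.conj().T

print(reduced_rho_A(product))    # pure: the projector onto |0>
print(reduced_rho_A(bell))       # maximally mixed: 0.5 * identity, information is lost
```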

As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.

Equivalence between formulations

There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.

Symmetries and conservation laws

The Hamiltonian H is known as the generator of time evolution, since it defines a unitary time-evolution operator U(t) = e^{−iHt/ħ} for each value of t. From this relation between U(t) and H, it follows that any observable A that commutes with H will be conserved: its expectation value will not change over time. This statement generalizes: mathematically, any Hermitian operator A can generate a family of unitary operators parameterized by a variable t. Under the evolution generated by A, any observable B that commutes with A will be conserved. Moreover, if B is conserved by evolution under A, then A is conserved under the evolution generated by B. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.

Examples

Free particle

Position space probability density of a Gaussian wave packet moving in one dimension in free space

The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:

H = P²/(2m) = −(ħ²/2m) d²/dx².

The general solution of the Schrödinger equation is given by

ψ(x, t) = (1/√(2π)) ∫ ψ̂(k, 0) e^{i(kx − ħk²t/2m)} dk,

which is a superposition of all possible plane waves e^{i(kx − ħk²t/2m)}, which are eigenstates of the momentum operator with momentum p = ħk. The coefficients of the superposition are ψ̂(k, 0), which is the Fourier transform of the initial quantum state ψ(x, 0).

It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet:

ψ(x, 0) = (1/(πa))^{1/4} e^{−x²/(2a)},

which has Fourier transform, and therefore momentum distribution,

ψ̂(k, 0) = (a/π)^{1/4} e^{−ak²/2}.

We see that as we make the width parameter a smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making a larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.

As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.

Particle in a box

1-dimensional potential energy box (or infinite potential well)

The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the x direction, the time-independent Schrödinger equation may be written

−(ħ²/2m) d²ψ/dx² = Eψ.

With the differential operator defined by

p̂ = −iħ d/dx,

the previous equation is evocative of the classic kinetic energy analogue,

(1/2m) p̂² ψ = Eψ,

with state ψ in this case having energy E coincident with the kinetic energy of the particle.

The general solutions of the Schrödinger equation for the particle in a box are

ψ(x) = A e^{ikx} + B e^{−ikx},   with   E = ħ²k²/(2m),

or, from Euler's formula,

ψ(x) = C sin(kx) + D cos(kx).

The infinite potential walls of the box determine the values of C, D, and k at x = 0 and x = L, where ψ must be zero. Thus, at x = 0,

ψ(0) = 0 = C sin(0) + D cos(0) = D,

and D = 0. At x = L,

ψ(L) = 0 = C sin(kL),

in which C cannot be zero as this would conflict with the postulate that ψ has norm 1. Therefore, since sin(kL) = 0, kL must be an integer multiple of π,

k = nπ/L,   n = 1, 2, 3, ….

This constraint on k implies a constraint on the energy levels, yielding

E_n = ħ²π²n²/(2mL²) = n²h²/(8mL²).
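
Putting numbers into the result, the sketch below evaluates En for an electron confined to an assumed 1 nm box; the box width is chosen purely for illustration.

```python
import math

# Particle-in-a-box energy levels: E_n = n^2 * pi^2 * hbar^2 / (2 * m * L^2).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
L    = 1.0e-9            # box width, m (assumed: 1 nm, purely illustrative)
eV   = 1.602176634e-19   # joules per electron-volt

for n in (1, 2, 3):
    E_n = n ** 2 * math.pi ** 2 * hbar ** 2 / (2 * m_e * L ** 2)
    print(f"n = {n}: E ≈ {E_n / eV:.3f} eV")
# roughly 0.376, 1.504, 3.384 eV for this box width
```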

A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.

Harmonic oscillator

Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wave function), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C, D, E, and F) are standing waves (or "stationary states"). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This "energy quantization" does not occur in classical physics, where the oscillator can have any energy.

As in the classical case, the potential for the quantum harmonic oscillator is given by

V(x) = ½ mω²x².

This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

ψ_n(x) = (1/√(2ⁿ n!)) (mω/(πħ))^{1/4} e^{−mωx²/(2ħ)} H_n(√(mω/ħ) x),   n = 0, 1, 2, …,

where Hn are the Hermite polynomials

H_n(x) = (−1)ⁿ e^{x²} (dⁿ/dxⁿ) e^{−x²},

and the corresponding energy levels are

E_n = ħω (n + ½).

This is another example illustrating the discretization of energy for bound states.

Mach–Zehnder interferometer

Schematic of a Mach–Zehnder interferometer

The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement.

We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector ψ ∈ ℂ² that is a superposition of the "lower" path ψ_l = (1, 0) and the "upper" path ψ_u = (0, 1), that is, ψ = α ψ_l + β ψ_u for complex α, β. In order to respect the postulate that ⟨ψ, ψ⟩ = 1 we require that |α|² + |β|² = 1.

Both beam splitters are modelled as the unitary matrix B = (1/√2) [[1, i], [i, 1]], which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of 1/√2, or be reflected to the other path with a probability amplitude of i/√2. The phase shifter on the upper arm is modelled as the unitary matrix P = [[1, 0], [0, e^{iΔΦ}]], which means that if the photon is on the "upper" path it will gain a relative phase of ΔΦ, and it will stay unchanged if it is in the lower path.

A photon that enters the interferometer from the left will then be acted upon with a beam splitter B, a phase shifter P, and another beam splitter B, and so end up in the state

B P B ψ_l = ½ (1 − e^{iΔΦ}, i(1 + e^{iΔΦ})),

and the probabilities that it will be detected at the right or at the top are given respectively by

p(right) = cos²(ΔΦ/2),   p(top) = sin²(ΔΦ/2).

One can therefore use the Mach–Zehnder interferometer to estimate the phase shift ΔΦ by estimating these probabilities.

It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases there will be no interference between the paths anymore, and the probabilities are given by p = 1/2, independently of the phase ΔΦ. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.
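
Because the description above is just 2×2 linear algebra, it can be simulated directly; the sketch below applies the beam splitter and phase-shifter matrices to a photon entering on the "lower" path, reproducing the cos²/sin² detection probabilities and the phase-independent 1/2–1/2 result when the first beam splitter is removed.

```python
import numpy as np

# Mach-Zehnder interferometer as 2x2 matrices acting on (lower, upper) path amplitudes.
def detection_probabilities(delta_phi, with_first_splitter=True):
    B = np.array([[1, 1j],
                  [1j, 1]]) / np.sqrt(2)         # 50/50 beam splitter
    P = np.array([[1, 0],
                  [0, np.exp(1j * delta_phi)]])  # phase shifter on the upper arm
    psi = np.array([1.0, 0.0])                   # photon enters on the "lower" path
    if with_first_splitter:
        psi = B @ psi
    psi = B @ (P @ psi)                          # phase shift, then the second beam splitter
    return np.abs(psi) ** 2                      # (detected at top, detected at right)

print(detection_probabilities(np.pi / 3))         # [sin^2(phi/2), cos^2(phi/2)] = [0.25, 0.75]
print(detection_probabilities(np.pi / 3, False))  # [0.5, 0.5], independent of the phase
```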

Applications

Quantum mechanics has had enormous success in explaining many of the features of our universe, with regards to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.

In many aspects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.

Relation to other scientific theories

Classical mechanics

The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.
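For a concrete feel for the quantum harmonic oscillator mentioned above, here is a short numerical sketch (my illustration, not from the article, assuming units with $\hbar = m = \omega = 1$) that discretizes the Hamiltonian on a grid and checks that the lowest eigenvalues approach the textbook levels $E_n = n + 1/2$:

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 + 1/2 x^2 on a uniform grid (hbar = m = omega = 1).
n_points, x_max = 1000, 10.0
x = np.linspace(-x_max, x_max, n_points)
dx = x[1] - x[0]

# Second-derivative operator via the standard three-point finite difference.
laplacian = (np.diag(np.full(n_points - 1, 1.0), -1)
             - 2.0 * np.eye(n_points)
             + np.diag(np.full(n_points - 1, 1.0), 1)) / dx**2

H = -0.5 * laplacian + np.diag(0.5 * x**2)
energies = np.linalg.eigvalsh(H)

# The lowest eigenvalues should be close to n + 1/2.
print(np.round(energies[:5], 4))   # approximately [0.5, 1.5, 2.5, 3.5, 4.5]
```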

Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.

Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations. Quantum coherence is not typically evident at macroscopic scales, except maybe at temperatures approaching absolute zero at which quantum behavior may manifest macroscopically.
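As a toy illustration of that statement (my sketch, not from the article, using a simple exponential dephasing model), the density matrix of an equal superposition loses its off-diagonal coherence terms over time and ends up indistinguishable from a classical 50/50 mixture:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)          # equal superposition of two basis states
rho0 = np.outer(plus, plus.conj())            # pure-state density matrix

def decohere(rho, t, tau=1.0):
    """Toy dephasing model: off-diagonal terms decay as exp(-t/tau)."""
    damping = np.array([[1.0, np.exp(-t / tau)],
                        [np.exp(-t / tau), 1.0]])
    return rho * damping

print(np.round(decohere(rho0, 0.0), 3))   # [[0.5, 0.5], [0.5, 0.5]]  coherent superposition
print(np.round(decohere(rho0, 10.0), 3))  # [[0.5, 0.0], [0.0, 0.5]]  probabilistic mixture
```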

Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.

Special relativity and electrodynamics

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised.

The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
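The energy levels predicted by this semi-classical hydrogen model follow the familiar formula $E_n = -13.6\,\text{eV}/n^2$. A quick check (my sketch, not from the article, using the standard Rydberg energy) of the first few levels and of the Lyman-alpha transition energy:

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy in electronvolts

def hydrogen_level(n):
    """Bohr/Schrodinger energy of the n-th hydrogen level, in eV."""
    return -RYDBERG_EV / n**2

for n in (1, 2, 3):
    print(f"E_{n} = {hydrogen_level(n):.3f} eV")

# Lyman-alpha transition (n=2 -> n=1): photon energy of roughly 10.2 eV
print(f"Lyman-alpha: {hydrogen_level(2) - hydrogen_level(1):.2f} eV")
```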

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.

Relation to general relativity

Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.

One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force.

Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG.
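The quoted Planck length follows from the definition $\ell_P = \sqrt{\hbar G / c^3}$; a quick check (my sketch, with standard SI values of the constants):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(f"{planck_length:.3e} m")  # ~1.616e-35 m
```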

Philosophical implications

Unsolved problem in physics:

Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wave function collapse", give rise to the reality we perceive?

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."

The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations remain popular in the 21st century.

Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to test these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.
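A minimal numerical sketch (mine, not from the article) of the quantity such experiments measure: for a two-qubit singlet state the quantum correlation at analyzer angles a and b is $E(a,b) = -\cos(a-b)$, and with the standard CHSH angle choices the combination $S$ reaches $2\sqrt{2}$, above the bound of 2 obeyed by any local deterministic model:

```python
import math

def correlation(a, b):
    """Quantum prediction for the singlet-state correlation at analyzer angles a, b."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians).
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = abs(correlation(a, b) - correlation(a, b_prime)
        + correlation(a_prime, b) + correlation(a_prime, b_prime))
print(S)  # ~2.828 = 2*sqrt(2), above the classical bound of 2
```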

Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.

Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful.

Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later.

History

Max Planck is considered the father of the quantum theory.

Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light.

During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word "atom" deriving from the Greek for "uncuttable" – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons.

The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν):

$E = h\nu$,

where h is Planck's constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation," Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser.
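As a small worked example of $E = h\nu$ (my illustration, with standard constant values and an approximate work function): a 550 nm green photon carries about 2.25 eV, enough to eject electrons from a low-work-function metal such as cesium (roughly 2.1 eV):

```python
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

wavelength = 550e-9                       # green light, metres
photon_energy_eV = h * c / wavelength / eV
print(f"E = h*nu = {photon_energy_eV:.2f} eV")   # ~2.25 eV

cesium_work_function = 2.1                # approximate value, eV
print(photon_energy_eV > cesium_work_function)   # True: photoemission expected
```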

The 1927 Solvay Conference in Brussels was the fifth world physics conference.

This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects.

In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.

By 1930 quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids.

Politics of Europe

From Wikipedia, the free encyclopedia ...