
Sunday, April 30, 2017

Tetrahydrocannabinol

From Wikipedia, the free encyclopedia

Clinical data
  • Trade names: Marinol
  • Pregnancy category: US: C (risk not ruled out)
  • Dependence liability: 8–10% (relatively low risk of tolerance)[1]
  • Addiction liability: low
  • Routes of administration: oral, local/topical, transdermal, sublingual, inhaled
Pharmacokinetic data
  • Bioavailability: 10–35% (inhalation), 6–20% (oral)[3]
  • Protein binding: 97–99%[3][4][5]
  • Metabolism: mostly hepatic, by CYP2C[3]
  • Biological half-life: 1.6–59 h,[3] 25–36 h (orally administered dronabinol)
  • Excretion: 65–80% (feces), 20–35% (urine), as acid metabolites[3]
Identifiers
  • Synonyms: (6aR,10aR)-delta-9-tetrahydrocannabinol; (−)-trans-Δ⁹-tetrahydrocannabinol
  • ECHA InfoCard: 100.153.676
Chemical and physical data
  • Formula: C21H30O2
  • Molar mass: 314.469 g/mol
  • Specific rotation: −152° (ethanol)
  • Boiling point: 157 °C (315 °F)[7]
  • Solubility in water: 0.0028 mg/mL (23 °C)[6]
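
The molar mass quoted above follows directly from the molecular formula. The short Python sketch below is a minimal check using standard atomic weights (the check itself is mine, not part of the original article); it reproduces the 314.469 g/mol figure.

    # Approximate molar mass of THC (C21H30O2) from standard atomic weights.
    # The result should be close to the 314.469 g/mol quoted above.
    atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
    formula = {"C": 21, "H": 30, "O": 2}

    molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
    print(f"Molar mass of C21H30O2 = {molar_mass:.3f} g/mol")  # 314.469 g/mol
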
Tetrahydrocannabinol (THC, dronabinol, trade name Marinol) is the principal psychoactive constituent (or cannabinoid) of cannabis. Its pharmaceutical formulation, dronabinol, is available by prescription in the U.S. and Canada. THC can be a clear, amber- or gold-colored glassy solid when cold, which becomes viscous and sticky if warmed.

Like most pharmacologically active secondary metabolites of plants, THC in Cannabis is assumed to be involved in self-defense, perhaps against herbivores.[8] THC also absorbs strongly in the UV-B range (280–315 nm), which, it has been speculated, could protect the plant from harmful UV radiation exposure.[9][10][11]

THC, along with its double-bond isomers and their stereoisomers, is one of only three cannabinoids scheduled by the UN Convention on Psychotropic Substances (the other two are dimethylheptylpyran and parahexyl). It was listed under Schedule I in 1971, but reclassified to Schedule II in 1991 following a recommendation from the WHO; based on subsequent studies, the WHO has recommended reclassification to the less stringent Schedule III.[12] Cannabis as a plant is scheduled by the Single Convention on Narcotic Drugs (Schedules I and IV). Under US federal law, it remains listed in Schedule I of the Controlled Substances Act, passed by Congress in 1970.[13]

Medical uses

Not to be confused with Droperidol.

Dronabinol is the INN for a pure isomer of THC, (–)-trans-Δ⁹-tetrahydrocannabinol,[14] which is the main isomer found in cannabis. It is used to treat anorexia in people with HIV/AIDS as well as for refractory nausea and vomiting in people undergoing chemotherapy. It is safe and effective for these uses.[15][16]

THC is also an active ingredient in nabiximols, a specific extract of Cannabis that was approved as a botanical drug in the United Kingdom in 2010 as a mouth spray for people with multiple sclerosis to alleviate neuropathic pain, spasticity, overactive bladder, and other symptoms.[17][18]

Adverse effects

A flower of the hybrid Cannabis strain White Widow (which contains one of the highest amounts of cannabidiol), coated with trichomes, which contain more THC than any other part of the plant
Closeup of THC-filled trichomes on a Cannabis sativa leaf

An overdose of dronabinol usually presents with lethargy, decreased motor coordination, slurred speech, and postural hypotension.[19] Non-fatal overdoses have occurred.[20]

A meta-analysis of clinical trials using standardized cannabis extracts or THC, conducted by the American Academy of Neurology, found that of 1,619 persons treated with cannabis products (including some treated with smoked cannabis and nabiximols), 6.9% discontinued due to side effects, compared to 2.2% of 1,118 treated with placebo. Detailed information regarding side effects was not available from all trials, but nausea, increased weakness, behavioral or mood changes, suicidal ideation, hallucinations, dizziness, vasovagal symptoms, fatigue, and feelings of intoxication were each described as side effects in at least two trials. There was a single death rated by the investigator as "possibly related" to treatment: this person had a seizure followed by aspiration pneumonia. The paper does not describe whether this was one of the subjects from the epilepsy trials.[21]

Pharmacology

Mechanism of action

The actions of THC result from its partial agonist activity at the cannabinoid receptor CB1 (Ki = 10 nM[22]), located mainly in the central nervous system, and the CB2 receptor (Ki = 24 nM[22]), mainly expressed in cells of the immune system.[23] The psychoactive effects of THC are primarily mediated by activation of cannabinoid receptors, which results in a decrease in the concentration of the second-messenger molecule cAMP through inhibition of adenylate cyclase.[24]
The presence of these specialized cannabinoid receptors in the brain led researchers to the discovery of endocannabinoids, such as anandamide and 2-arachidonoylglycerol (2-AG). THC targets receptors in a manner far less selective than endocannabinoid molecules released during retrograde signaling, as the drug has relatively low cannabinoid receptor efficacy and affinity. In populations of low cannabinoid receptor density, THC may act to antagonize endogenous agonists that possess greater receptor efficacy.[25] THC is a lipophilic molecule[26] and may bind non-specifically to a variety of entities in the brain and body, such as adipose tissue (fat).[27][28]
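
As a rough illustration of what the Ki values above imply, the sketch below estimates fractional receptor occupancy with the Hill–Langmuir equation. It assumes, for simplicity, that Ki approximates the dissociation constant and ignores partial-agonist efficacy; the THC concentrations chosen are arbitrary illustrative values, not figures from the article.

    # Minimal sketch: fractional occupancy = [L] / ([L] + Kd), using the quoted
    # Ki values (CB1 ~ 10 nM, CB2 ~ 24 nM) as stand-ins for Kd. This ignores
    # partial agonism and signaling efficacy; it only illustrates binding.
    KI_NM = {"CB1": 10.0, "CB2": 24.0}

    def fractional_occupancy(ligand_nm: float, kd_nm: float) -> float:
        """Hill-Langmuir occupancy for a single non-cooperative binding site."""
        return ligand_nm / (ligand_nm + kd_nm)

    for conc in (1.0, 10.0, 100.0):  # hypothetical free THC concentrations, nM
        occ = {r: fractional_occupancy(conc, kd) for r, kd in KI_NM.items()}
        print(f"{conc:6.1f} nM THC -> CB1 {occ['CB1']:.0%}, CB2 {occ['CB2']:.0%}")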

THC, similarly to cannabidiol, albeit less potently, is a positive allosteric modulator of the μ- and δ-opioid receptors.[29]

Due to its partial agonist activity, THC appears to cause greater downregulation of cannabinoid receptors than endocannabinoids do, further limiting its efficacy relative to other cannabinoids. While tolerance may limit the maximal effects of certain drugs, evidence suggests that tolerance develops irregularly for different effects, with greater resistance developing to the primary effects than to side effects, and may actually serve to enhance the drug's therapeutic window.[25] However, this form of tolerance appears to be irregular throughout mouse brain areas. THC, as well as other cannabinoids that contain a phenol group, possesses mild antioxidant activity sufficient to protect neurons against oxidative stress, such as that produced by glutamate-induced excitotoxicity.[23]

Pharmacokinetics

THC is metabolized in the body mainly to 11-OH-THC. This metabolite is still psychoactive and is further oxidized to 11-nor-9-carboxy-THC (THC-COOH). In humans and animals, more than 100 metabolites have been identified, but 11-OH-THC and THC-COOH are the dominant ones.[30] Metabolism occurs mainly in the liver by the cytochrome P450 enzymes CYP2C9, CYP2C19, and CYP3A4.[31] More than 55% of THC is excreted in the feces and approximately 20% in the urine. The main metabolites in urine are the glucuronic acid ester of THC-COOH and free THC-COOH; in the feces, mainly 11-OH-THC is detected.[32]
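
For a sense of what the half-life figures in the infobox mean in practice, the sketch below applies simple first-order elimination to the 25–36 h range quoted for orally administered dronabinol; the single-dose, one-compartment assumption is mine, not the article's.

    # Minimal sketch: fraction of an oral dronabinol dose remaining over time,
    # assuming single-dose, one-compartment, first-order elimination.
    # Half-life range (25-36 h) is taken from the infobox above.

    def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
        """Fraction of drug remaining after t hours under first-order decay."""
        return 0.5 ** (t_hours / half_life_hours)

    for half_life in (25.0, 36.0):
        after_24h = fraction_remaining(24.0, half_life)
        after_72h = fraction_remaining(72.0, half_life)
        print(f"t1/2 = {half_life:.0f} h: {after_24h:.0%} left after 24 h, "
              f"{after_72h:.0%} after 72 h")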

Physical and chemical properties

Discovery and structure identification

The discovery of THC, by a team of researchers from Hebrew University Pharmacy School, was first reported in 1964,[33] with substantial later work reported by Raphael Mechoulam in June 1970.[34]

Solubility

An aromatic terpenoid, THC has a very low solubility in water, but good solubility in most organic solvents, specifically lipids and alcohols.[6] THC, CBD, CBN, CBC, CBG and over 113 other molecules make up the phytocannabinoid family.[35][36]

Total synthesis

A total synthesis of the compound was reported in 1965; that procedure called for the intramolecular alkyl lithium attack on a starting carbonyl to form the fused rings, and a tosyl chloride mediated formation of the ether.[37][third-party source needed]

Biosynthesis

Biosynthesis of THCA

In the Cannabis plant, THC occurs mainly as tetrahydrocannabinolic acid (THCA, 2-COOH-THC, THC-COOH). Geranyl pyrophosphate and olivetolic acid react, catalysed by an enzyme to produce cannabigerolic acid,[38] which is cyclized by the enzyme THC acid synthase to give THCA. Over time, or when heated, THCA is decarboxylated, producing THC. The pathway for THCA biosynthesis is similar to that which produces the bitter acid humulone in hops.[39][40]

Detection in body fluids

THC, 11-OH-THC and THC-COOH can be detected and quantified in blood, urine, hair, oral fluid or sweat using a combination of immunoassay and chromatographic techniques as part of a drug use testing program or in a forensic investigation.[41][42][43]

History

THC was first isolated in 1964 by Raphael Mechoulam and Yechiel Gaoni at the Weizmann Institute of Science.[33][44][45]
Since at least 1986, the trend has been for THC in general, and the Marinol preparation in particular, to be downgraded to less and less stringently controlled schedules of controlled substances, in the U.S. and throughout the rest of the world.[citation needed]

On May 13, 1986, the Drug Enforcement Administration (DEA) issued a Final Rule and Statement of Policy authorizing the "Rescheduling of Synthetic Dronabinol in Sesame Oil and Encapsulated in Soft Gelatin Capsules From Schedule I to Schedule II" (DEA 51 FR 17476-78). This permitted medical use of Marinol, albeit with the severe restrictions associated with Schedule II status.[46] For instance, refills of Marinol prescriptions were not permitted. At its 10th meeting, on April 29, 1991, the Commission on Narcotic Drugs, in accordance with article 2, paragraphs 5 and 6, of the Convention on Psychotropic Substances, decided that Δ⁹-tetrahydrocannabinol (also referred to as Δ⁹-THC) and its stereochemical variants should be transferred from Schedule I to Schedule II of that Convention. This released Marinol from the restrictions imposed by Article 7 of the Convention (See also United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances).[citation needed]

An article published in the April–June 1998 issue of the Journal of Psychoactive Drugs found that "Healthcare professionals have detected no indication of scrip-chasing or doctor-shopping among the patients for whom they have prescribed dronabinol". The authors state that Marinol has a low potential for abuse.[47]

In 1999, Marinol was rescheduled from Schedule II to III of the Controlled Substances Act, reflecting a finding that THC had a potential for abuse less than that of cocaine and heroin. This rescheduling constituted part of the argument for a 2002 petition for removal of cannabis from Schedule I of the Controlled Substances Act, in which petitioner Jon Gettman noted, "Cannabis is a natural source of dronabinol (THC), the ingredient of Marinol, a Schedule III drug. There are no grounds to schedule cannabis in a more restrictive schedule than Marinol".[48]

At its 33rd meeting, in 2003, the World Health Organization Expert Committee on Drug Dependence recommended transferring THC to Schedule IV of the Convention, citing its medical uses and low abuse potential.[49]

Society and culture

Brand names


Dronabinol is marketed as Marinol,[50] a registered trademark of Solvay Pharmaceuticals. Dronabinol is also marketed, sold, and distributed by PAR Pharmaceutical Companies under the terms of a license and distribution agreement with SVC pharma LP, an affiliate of Rhodes Technologies.[citation needed] Dronabinol is available as a prescription drug (under Marinol[51]) in several countries including the United States, Germany, South Africa, and Australia.[52] In the United States, Marinol is a Schedule III drug, available by prescription, considered to be non-narcotic and to have a low risk of physical or mental dependence. Efforts to get cannabis rescheduled as analogous to Marinol have not succeeded thus far, though a 2002 petition was accepted by the DEA. As a result of the rescheduling of Marinol from Schedule II to Schedule III, refills are now permitted for this substance. Marinol's approval by the U.S. Food and Drug Administration (FDA) for medical use has raised much controversy[53] as to why natural THC is still considered a Schedule I drug.[54]

Comparisons with medical cannabis

Female cannabis plants contain at least 113 cannabinoids,[55] including cannabidiol (CBD), thought to be the major anticonvulsant that helps people with multiple sclerosis;[56] and cannabichromene (CBC), an anti-inflammatory which may contribute to the pain-killing effect of cannabis.[57]

It takes over one hour for Marinol to reach full systemic effect,[58] compared to seconds or minutes for smoked or vaporized cannabis.[59] Some people accustomed to inhaling just enough cannabis smoke to manage symptoms have complained of too-intense intoxication from Marinol's predetermined dosages.[citation needed] Many people using Marinol have said that it produces a more acute psychedelic effect than cannabis, and it has been speculated that this disparity can be explained by the moderating effect of the many non-THC cannabinoids present in cannabis.[citation needed] For that reason, alternative THC-containing medications based on botanical extracts of the cannabis plant, such as nabiximols, are being developed. Mark Kleiman, director of the Drug Policy Analysis Program at UCLA's School of Public Affairs, said of Marinol, "It wasn't any fun and made the user feel bad, so it could be approved without any fear that it would penetrate the recreational market, and then used as a club with which to beat back the advocates of whole cannabis as a medicine."[60] Kleiman's opinion notwithstanding, clinical trials comparing the use of cannabis extracts with Marinol in the treatment of cancer cachexia have demonstrated equal efficacy and well-being among subjects in the two treatment arms.[61] United States federal law currently registers dronabinol as a Schedule III controlled substance, but all other cannabinoids remain Schedule I, except synthetics like nabilone.[62]

Research

Its status as an illegal drug in most countries can make research difficult; in the United States, for instance, the National Institute on Drug Abuse was the only legal source of cannabis for researchers until cannabis was recently legalized in Colorado, Washington state, Oregon, Alaska, California, Massachusetts, and Washington, D.C.[63]

In April 2014 the American Academy of Neurology published a systematic review of the efficacy and safety of medical marijuana and marijuana-derived products in certain neurological disorders.[21] The review identified 34 studies meeting inclusion criteria, of which 8 were rated as Class I quality.[21] The study found evidence supporting the effectiveness of the cannabis extracts that were tested and THC in treating certain symptoms of multiple sclerosis, but found insufficient evidence to determine the effectiveness of the tested cannabis products in treating several other neurological diseases.[21]

Several of the clinical trials exploring the safety and efficacy of "oral cannabis extract" that were reviewed by the AAN were conducted using "Cannador", made by the Institute for Clinical Research (IKF) in Berlin,[64] a capsule with a standardized Cannabis sativa extract; the cannabis is grown in Switzerland and processed in Germany.[65]:88 Each capsule of Cannador contains 2.5 mg Δ⁹-tetrahydrocannabinol, with cannabidiol content standardized to a range of 0.8–1.8 mg.[66]

Multiple sclerosis symptoms

  • Spasticity. Based on the results of 3 high quality trials and 5 of lower quality, oral cannabis extract was rated as effective, and THC as probably effective, for improving people's subjective experience of spasticity. Oral cannabis extract and THC both were rated as possibly effective for improving objective measures of spasticity.[21]
  • Centrally mediated pain and painful spasms. Based on the results of 4 high quality trials and 4 low quality trials, oral cannabis extract was rated as effective, and THC as probably effective in treating central pain and painful spasms.[21]
  • Bladder dysfunction. Based on a single high quality study, oral cannabis extract and THC were rated as probably ineffective for controlling bladder complaints in multiple sclerosis.[21]

Neurodegenerative disorders

  • Huntington disease. No reliable conclusions could be drawn regarding the effectiveness of THC or oral cannabis extract in treating the symptoms of Huntington disease, as the available trials were too small to reliably detect any difference.[21]
  • Parkinson disease. Based on a single study, oral cannabis extract was rated probably ineffective in treating levodopa-induced dyskinesia in Parkinson disease.[21]
  • Alzheimer's disease. A 2011 Cochrane Review found insufficient evidence to conclude whether cannabis products have any utility in the treatment of Alzheimer's disease.[67]

Other neurological disorders

  • Tourette syndrome. The available data was determined to be insufficient to allow reliable conclusions to be drawn regarding the effectiveness of oral cannabis extract or THC in controlling tics.[21]
  • Cervical dystonia. Insufficient data was available to assess the effectiveness of oral cannabis extract or THC in treating cervical dystonia.[21]
  • Epilepsy. Data was considered insufficient to judge the utility of cannabis products in reducing seizure frequency or severity.[21]

Saturday, April 29, 2017

Paradigm

From Wikipedia, the free encyclopedia

In science and philosophy, a paradigm (/ˈpærədaɪm/) is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes legitimate contributions to a field.

Etymology

Paradigm comes from Greek παράδειγμα (paradeigma), "pattern, example, sample"[1] from the verb παραδείκνυμι (paradeiknumi), "exhibit, represent, expose"[2] and that from παρά (para), "beside, beyond"[3] and δείκνυμι (deiknumi), "to show, to point out".[4]
In rhetoric, paradeigma is known as a type of proof. The purpose of paradeigma is to provide an audience with an illustration of similar occurrences. This illustration is not meant to take the audience to a conclusion; rather, it is used to help guide them there. A personal accountant offers a good analogy for how paradeigma is meant to guide an audience: it is not the accountant's job to tell their client exactly what (and what not) to spend their money on, but to guide the client in how money should be spent in light of their financial goals. Anaximenes defined paradeigma as "actions that have occurred previously and are similar to, or the opposite of, those which we are now discussing."[5]

The original Greek term παράδειγμα (paradeigma) was used in Greek texts such as Plato's Timaeus (28A) as the model or the pattern that the Demiurge (god) used to create the cosmos. The term had a technical meaning in the field of grammar: the 1900 Merriam-Webster dictionary defines its technical use only in the context of grammar or, in rhetoric, as a term for an illustrative parable or fable. In linguistics, Ferdinand de Saussure used paradigm to refer to a class of elements with similarities.

The Merriam-Webster Online dictionary defines this usage as "a philosophical and theoretical framework of a scientific school or discipline within which theories, laws, and generalizations and the experiments performed in support of them are formulated; broadly: a philosophical or theoretical framework of any kind."[6]

The Oxford Dictionary of Philosophy attributes the following description of the term to Thomas Kuhn's The Structure of Scientific Revolutions:
Kuhn suggests that certain scientific works, such as Newton's Principia or John Dalton's New System of Chemical Philosophy (1808), provide an open-ended resource: a framework of concepts, results, and procedures within which subsequent work is structured. Normal science proceeds within such a framework or paradigm. A paradigm does not impose a rigid or mechanical approach, but can be taken more or less creatively and flexibly.[7]

Scientific paradigm

The Oxford English Dictionary defines the basic meaning of the term paradigm as "a typical example or pattern of something; a pattern or model".[8] The historian of science Thomas Kuhn gave it its contemporary meaning when he adopted the word to refer to the set of concepts and practices that define a scientific discipline at any particular period of time. In his book The Structure of Scientific Revolutions (first published in 1962), Kuhn defines a scientific paradigm as "universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of practitioners",[9] i.e.,
  • what is to be observed and scrutinized
  • the kind of questions that are supposed to be asked and probed for answers in relation to this subject
  • how these questions are to be structured
  • what predictions are made by the primary theory within the discipline
  • how the results of scientific investigations should be interpreted
  • how an experiment is to be conducted, and what equipment is available to conduct the experiment.
In The Structure of Scientific Revolutions, Kuhn saw the sciences as going through alternating periods of normal science, when an existing model of reality dominates a protracted period of puzzle-solving, and revolution, when the model of reality itself undergoes sudden drastic change. Paradigms have two aspects. Firstly, within normal science, the term refers to the set of exemplary experiments that are likely to be copied or emulated. Secondly, underpinning this set of exemplars are shared preconceptions, made prior to – and conditioning – the collection of evidence.[10] These preconceptions embody both hidden assumptions and elements that he describes as quasi-metaphysical;[11] the interpretations of the paradigm may vary among individual scientists.[12]

Kuhn was at pains to point out that the rationale for the choice of exemplars is a specific way of viewing reality: that view and the status of "exemplar" are mutually reinforcing. For well-integrated members of a particular discipline, its paradigm is so convincing that it normally renders even the possibility of alternatives unconvincing and counter-intuitive. Such a paradigm is opaque, appearing to be a direct view of the bedrock of reality itself, and obscuring the possibility that there might be other, alternative imageries hidden behind it. The conviction that the current paradigm is reality tends to disqualify evidence that might undermine the paradigm itself; this in turn leads to a build-up of unreconciled anomalies. It is the latter that is responsible for the eventual revolutionary overthrow of the incumbent paradigm, and its replacement by a new one. Kuhn used the expression paradigm shift (see below) for this process, and likened it to the perceptual change that occurs when our interpretation of an ambiguous image "flips over" from one state to another.[13] (The rabbit-duck illusion is an example: it is not possible to see both the rabbit and the duck simultaneously.) This is significant in relation to the issue of incommensurability (see below).

An example of a currently accepted paradigm would be the standard model of physics. The scientific method allows for orthodox scientific investigations into phenomena that might contradict or disprove the standard model; however, grant funding would be proportionately more difficult to obtain for such experiments, depending on the degree of deviation from the accepted standard model theory the experiment would test for. To illustrate the point, an experiment to test for the mass of neutrinos or the decay of protons (small departures from the model) is more likely to receive money than experiments that look for the violation of the conservation of momentum, or ways to engineer reverse time travel.

Mechanisms similar to the original Kuhnian paradigm have been invoked in various disciplines other than the philosophy of science. These include: the idea of major cultural themes,[14][15] worldviews (and see below), ideologies, and mindsets. They have somewhat similar meanings that apply to smaller and larger scale examples of disciplined thought. In addition, Michel Foucault used the terms episteme and discourse, mathesis and taxinomia, for aspects of a "paradigm" in Kuhn's original sense.

Paradigm shifts

In The Structure of Scientific Revolutions, Kuhn wrote that "the successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science." (p. 12)
Paradigm shifts tend to appear in response to the accumulation of critical anomalies as well as the proposal of a new theory with the power to encompass both older relevant data and explain relevant anomalies. New paradigms tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, a statement generally attributed to physicist Lord Kelvin famously claimed, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement."[16] Five years later, Albert Einstein published his paper on special relativity, which challenged the set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years. In this case, the new paradigm reduces the old to a special case in the sense that Newtonian mechanics is still a good model for approximation for speeds that are slow compared to the speed of light. Many philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. Kuhn's original model is now generally seen as too limited[citation needed].

Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it may be that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognize such a paradigm shift: since the history and sociology of science are social sciences, people can still use earlier ideas to discuss the history of science.

Paradigm paralysis

Perhaps the greatest barrier to a paradigm shift, in some cases, is the reality of paradigm paralysis: the inability or refusal to see beyond the current models of thinking.[17] This is similar to what psychologists term confirmation bias. Examples include the rejection of the heliocentric theories of Aristarchus of Samos, Copernicus, and Galileo, and the initial dismissal of the discoveries of electrostatic photography, xerography, and the quartz clock.[citation needed]

Incommensurability

Kuhn pointed out that it could be difficult to assess whether a particular paradigm shift had actually led to progress, in the sense of explaining more facts, explaining more important facts, or providing better explanations, because the understanding of "more important", "better", etc. changed with the paradigm. The two versions of reality are thus incommensurable. Kuhn's version of incommensurability has an important psychological dimension; this is apparent from his analogy between a paradigm shift and the flip-over involved in some optical illusions.[18] However, he subsequently diluted his commitment to incommensurability considerably, partly in the light of other studies of scientific development that did not involve revolutionary change.[19] One of the examples of incommensurability that Kuhn used was the change in the style of chemical investigations that followed the work of Lavoisier on atomic theory in the late 18th Century.[13] In this change, the focus had shifted from the bulk properties of matter (such as hardness, colour, reactivity, etc.) to studies of atomic weights and quantitative studies of reactions. He suggested that it was impossible to make the comparison needed to judge which body of knowledge was better or more advanced. However, this change in research style (and paradigm) eventually (after more than a century) led to a theory of atomic structure that accounts well for the bulk properties of matter; see, for example, Brady's General Chemistry.[20] According to P J Smith, this ability of science to back off, move sideways, and then advance is characteristic of the natural sciences,[21] but contrasts with the position in some social sciences, notably economics.[22]

This apparent ability does not guarantee that the account is veridical at any one time, of course, and most modern philosophers of science are fallibilists. However, members of other disciplines do see the issue of incommensurability as a much greater obstacle to evaluations of "progress"; see, for example, Martin Slattery's Key Ideas in Sociology.[23][24]

Subsequent developments

Opaque Kuhnian paradigms and paradigm shifts do exist. A few years after the discovery of the mirror-neurons that provide a hard-wired basis for the human capacity for empathy, the scientists involved were unable to identify the incidents that had directed their attention to the issue. Over the course of the investigation, their language and metaphors had changed so that they themselves could no longer interpret all of their own earlier laboratory notes and records.[25]

Imre Lakatos and research programmes

However, many instances exist in which change in a discipline's core model of reality has happened in a more evolutionary manner, with individual scientists exploring the usefulness of alternatives in a way that would not be possible if they were constrained by a paradigm. Imre Lakatos suggested (as an alternative to Kuhn's formulation) that scientists actually work within research programmes.[26] In Lakatos' sense, a research programme is a sequence of problems, placed in order of priority. This set of priorities, and the associated set of preferred techniques, is the positive heuristic of a programme. Each programme also has a negative heuristic; this consists of a set of fundamental assumptions that – temporarily, at least – takes priority over observational evidence when the two appear to conflict.

This latter aspect of research programmes is inherited from Kuhn's work on paradigms,[citation needed] and represents an important departure from the elementary account of how science works. According to the elementary account, science proceeds through repeated cycles of observation, induction, hypothesis-testing, etc., with the test of consistency with empirical evidence being imposed at each stage. Paradigms and research programmes allow anomalies to be set aside where there is reason to believe that they arise from incomplete knowledge (about either the substantive topic, or some aspect of the theories implicitly used in making observations).

Larry Laudan: Dormant anomalies, fading credibility, and research traditions

Larry Laudan[27] has also made two important contributions to the debate. Laudan believed that something akin to paradigms exists in the social sciences (Kuhn had contested this; see below); he referred to these as research traditions. Laudan noted that some anomalies become "dormant" if they survive a long period during which no competing alternative has shown itself capable of resolving the anomaly. He also presented cases in which a dominant paradigm had withered away because it had lost credibility when viewed against changes in the wider intellectual milieu.

In social sciences

Kuhn himself did not consider the concept of paradigm as appropriate for the social sciences. He explains in his preface to The Structure of Scientific Revolutions that he developed the concept of paradigm precisely to distinguish the social from the natural sciences. While visiting the Center for Advanced Study in the Behavioral Sciences in 1958 and 1959, surrounded by social scientists, he observed that they were never in agreement about the nature of legitimate scientific problems and methods. He explains that he wrote this book precisely to show that there can never be any paradigms in the social sciences. Mattei Dogan, a French sociologist, in his article "Paradigms in the Social Sciences," develops Kuhn's original thesis that there are no paradigms at all in the social sciences since the concepts are polysemic, involving the deliberate mutual ignorance between scholars and the proliferation of schools in these disciplines. Dogan provides many examples of the non-existence of paradigms in the social sciences in his essay, particularly in sociology, political science and political anthropology.

However, both Kuhn's original work and Dogan's commentary are directed at disciplines that are defined by conventional labels (such as "sociology"). While it is true that such broad groupings in the social sciences are usually not based on a Kuhnian paradigm, each of the competing sub-disciplines may still be underpinned by a paradigm, research programme, research tradition, and/or professional imagery. These structures will be motivating research, providing it with an agenda, defining what is and is not anomalous evidence, and inhibiting debate with other groups that fall under the same broad disciplinary label. (A good example is provided by the contrast between Skinnerian behaviourism and Personal Construct Theory (PCT) within psychology. The most significant of the many ways these two sub-disciplines of psychology differ concerns meanings and intentions. In PCT, they are seen as the central concern of psychology; in behaviourism, they are not scientific evidence at all, as they cannot be directly observed.)

Such considerations explain the conflict between the Kuhn/Dogan view and the views of others (including Larry Laudan, see above), who do apply these concepts to the social sciences.

M. L. Handa (1986)[28] introduced the idea of a "social paradigm" in the context of the social sciences. He identified the basic components of a social paradigm. Like Kuhn, Handa addressed the issue of changing paradigms, the process popularly known as a "paradigm shift". In this respect, he focused on the social circumstances that precipitate such a shift and its effects on social institutions, including the institution of education. This broad shift in the social arena, in turn, changes the way the individual perceives reality.

Another use of the word paradigm is in the sense of "worldview". For example, in social science, the term is used to describe the set of experiences, beliefs and values that affect the way an individual perceives reality and responds to that perception. Social scientists have adopted the Kuhnian phrase "paradigm shift" to denote a change in how a given society goes about organizing and understanding reality. A "dominant paradigm" refers to the values, or system of thought, in a society that are most standard and widely held at a given time. Dominant paradigms are shaped both by the community's cultural background and by the context of the historical moment. Hutchin[29] outlines some conditions that help a system of thought become an accepted dominant paradigm:
  • Professional organizations that give legitimacy to the paradigm
  • Dynamic leaders who introduce and purport the paradigm
  • Journals and editors who write about the system of thought. They both disseminate the information essential to the paradigm and give the paradigm legitimacy
  • Government agencies who give credence to the paradigm
  • Educators who propagate the paradigm's ideas by teaching it to students
  • Conferences conducted that are devoted to discussing ideas central to the paradigm
  • Media coverage
  • Lay groups, or groups based around the concerns of lay persons, that embrace the beliefs central to the paradigm
  • Sources of funding to further research on the paradigm

Other uses

The word paradigm is also still used to indicate a pattern or model or an outstandingly clear or typical example or archetype. The term is frequently used in this sense in the design professions. Design Paradigms or archetypes comprise functional precedents for design solutions. The best known references on design paradigms are Design Paradigms: A Sourcebook for Creative Visualization, by Wake, and Design Paradigms by Petroski.

This term is also used in cybernetics. Here it means (in a very wide sense) a (conceptual) protoprogram for reducing the chaotic mass to some form of order. Note the similarities to the concept of entropy in chemistry and physics. A paradigm there would be a sort of prohibition to proceed with any action that would increase the total entropy of the system. To create a paradigm requires a closed system that accepts changes. Thus a paradigm can only apply to a system that is not in its final stage.

Beyond its use in the physical and social sciences, Kuhn's paradigm concept has been analysed in relation to its applicability in identifying 'paradigms' with respect to worldviews at specific points in history. One example is Matthew Edward Harris' book The Notion of Papal Monarchy in the Thirteenth Century: The Idea of Paradigm in Church History.[30] Harris stresses the primarily sociological importance of paradigms, pointing towards Kuhn's second edition of The Structure of Scientific Revolutions. Although obedience to popes such as Innocent III and Boniface VIII was widespread, even written testimony from the time showing loyalty to the pope does not demonstrate that the writer had the same worldview as the Church, and therefore pope, at the centre. The difference between paradigms in the physical sciences and in historical organisations such as the Church is that the former, unlike the latter, requires technical expertise rather than repeating statements. In other words, after scientific training through what Kuhn calls 'exemplars', one could not genuinely believe that, to take a trivial example, the earth is flat, whereas a thinker such as Giles of Rome, who in the thirteenth century wrote in favour of the pope, could easily then write similarly glowing things about the king. A writer such as Giles would have wanted a good job from the pope; he was a papal publicist. However, Harris writes that 'scientific group membership is not concerned with desire, emotions, gain, loss and any idealistic notions concerning the nature and destiny of humankind...but simply to do with aptitude, explanation, [and] cold description of the facts of the world and the universe from within a paradigm'.[31]

Thursday, April 27, 2017

Wave–particle duality

From Wikipedia, the free encyclopedia

Wave–particle duality is the concept that every elementary particle or quantic entity may be partly described in terms not only of particles, but also of waves. It expresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects. As Albert Einstein wrote: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do."[1]

Through the work of Max Planck, Einstein, Louis de Broglie, Arthur Compton, Niels Bohr and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).[2] This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.[3]

Although the use of the wave-particle duality has worked well in physics, the meaning or interpretation has not been satisfactorily resolved; see Interpretations of quantum mechanics.
Niels Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character, in respectively different physical settings. He saw such duality as one aspect of the concept of complementarity.[4] Bohr regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture, as essential to the quantum mechanical account.[5]

Werner Heisenberg considered the question further. He saw the duality as present for all quantic entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates an entirely new concept of fields which exist in ordinary space-time, causality still being visualizable. Classical field values (e.g. the electric and magnetic field strengths of Maxwell) are replaced by an entirely new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory.[6][7]

Brief history of wave and particle viewpoints

Democritus—the original atomist—argued that all things in the universe, including light, are composed of indivisible sub-components (light being some form of solar atom).[8] At the beginning of the 11th Century, the Arabic scientist Alhazen wrote the first comprehensive treatise on optics; describing refraction, reflection, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light. In 1630, René Descartes popularized and accredited the opposing wave description in his treatise on light, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium ("plenum"). Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular hypothesis, arguing that the perfectly straight lines of reflection demonstrated light's particle nature; only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media (such as water and air), refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's 1803 discovery of double-slit interference.[9][10] The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not.[11]
Thomas Young's sketch of two-slit diffraction of waves, 1803

James Clerk Maxwell discovered that he could apply the previously discovered equations of electromagnetism, along with a slight modification of his own, to describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. The wave theory had prevailed—or at least it seemed to.

While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter. Antoine Lavoisier deduced the law of conservation of mass and categorized many new chemical elements and compounds; and Joseph Louis Proust advanced chemistry towards the atom by showing that elements combined in definite proportions. This led John Dalton to propose that elements were made of invisible sub-components; Amedeo Avogadro discovered diatomic gases and completed the basic atomic theory, allowing the correct molecular formulae of most known compounds—as well as the correct weights of atoms—to be deduced and categorized in a consistent manner. Dmitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry.
Animation showing the wave–particle duality in a double-slit experiment, and the effect of an observer.
Particle impacts make visible the interference pattern of waves.
A quantum particle is represented by a wave packet.
Interference of a quantum particle with itself.

Turn of the 20th century and the paradigm shift

Particles of electricity

At the close of the 19th century, the reductionism of atomic theory began to advance into the atom itself; determining, through physics, the nature of the atom and the operation of chemical reactions. Electricity, first thought to be a fluid, was now understood to consist of particles called electrons. This was first demonstrated by J. J. Thomson in 1897 when, using a cathode ray tube, he found that an electrical charge would travel across a vacuum (which would possess infinite resistance in classical theory). Since the vacuum offered no medium for an electric fluid to travel, this discovery could only be explained via a particle carrying a negative charge and moving through the vacuum. This electron flew in the face of classical electrodynamics, which had successfully treated electricity as a fluid for many years (leading to the invention of batteries, electric motors, dynamos, and arc lamps). More importantly, the intimate relation between electric charge and electromagnetism had been well documented following the discoveries of Michael Faraday and James Clerk Maxwell. Since electromagnetism was known to be a wave generated by a changing electric or magnetic field (a continuous, wave-like entity itself) an atomic/particle description of electricity and charge was a non sequitur. Furthermore, classical electrodynamics was not the only classical theory rendered incomplete.

Radiation quantization

In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. Einstein later proposed that electromagnetic radiation itself is quantized, not the energy of radiating atoms.

Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. That thermal objects emit light had been long known. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose: if each mode received an equal partition of energy, the short wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law which, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe.

In 1900, Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). This was not an unsound proposal considering that macroscopic oscillators operate similarly: when studying five simple harmonic oscillators of equal amplitude but different frequency, the oscillator with the highest frequency possesses the highest energy (though this relationship is not linear like Planck's). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe; giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher energy oscillators, which necessarily increased their energy and frequency.

The most revolutionary aspect of Planck's treatment of the black body is that it inherently relies on an integer number of oscillators in thermal equilibrium with the electromagnetic field. These oscillators give their entire energy to the electromagnetic field, creating a quantum of light, as often as they are excited by the electromagnetic field, absorbing a quantum of light and beginning to oscillate at the corresponding frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with an energy less than hν. However, once he realized that he had quantized the electromagnetic field, he denounced particles of light as a limitation of his approximation, not a property of reality.
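
To make the "ultraviolet catastrophe" concrete, the sketch below compares the classical Rayleigh–Jeans spectral radiance with Planck's law at a few wavelengths. Both formulas are standard; the 5000 K temperature and the wavelengths are illustrative choices of mine, not values from the article.

    import math

    # Spectral radiance (per unit wavelength) of a black body at temperature T.
    # Rayleigh-Jeans diverges as wavelength -> 0; Planck's law stays finite.
    H = 6.626e-34   # Planck's constant, J*s
    C = 2.998e8     # speed of light, m/s
    KB = 1.381e-23  # Boltzmann's constant, J/K

    def rayleigh_jeans(wavelength_m: float, temp_k: float) -> float:
        return 2.0 * C * KB * temp_k / wavelength_m**4

    def planck(wavelength_m: float, temp_k: float) -> float:
        x = H * C / (wavelength_m * KB * temp_k)
        return (2.0 * H * C**2 / wavelength_m**5) / (math.exp(x) - 1.0)

    T = 5000.0  # illustrative temperature, K
    for wl_nm in (2000.0, 500.0, 100.0):  # long to short wavelengths
        wl = wl_nm * 1e-9
        print(f"{wl_nm:6.0f} nm: Rayleigh-Jeans {rayleigh_jeans(wl, T):.3e}, "
              f"Planck {planck(wl, T):.3e}  (W per sr per m^3)")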

Photoelectric effect illuminated

While Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most contemporary physicists agreed that Planck's "light quanta" represented only flaws in his model. A more-complete derivation of black body radiation would yield a fully continuous and 'wave-like' electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model to produce his solution to another outstanding problem of the day: the photoelectric effect, wherein electrons are emitted from atoms when they absorb energy from light. Since their discovery eight years previously, electrons had been studied in physics laboratories worldwide.

In 1902 Philipp Lenard discovered that the energy of these ejected electrons did not depend on the intensity of the incoming light, but instead on its frequency. So if one shines a little low-frequency light upon a metal, a few low energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however they possess the same low energy, there are merely more of them. The more light there is, the more electrons are ejected. Whereas in order to get high energy electrons, one must illuminate the metal with high-frequency light. Like blackbody radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature.[12]

If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum, hf, then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency, with the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan, who had previously determined the charge of the electron, produced experimental results in perfect accord with Einstein's predictions. While the energy of ejected electrons reflected Planck's constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect, a modern version of which can be performed in undergraduate-level labs.[13] This phenomenon could only be explained via photons, and not through any semi-classical theory (which could alternatively explain the photoelectric effect). When Einstein received his Nobel Prize in 1921, it was not for his more difficult and mathematically laborious special and general relativity, but for the simple, yet totally revolutionary, suggestion of quantized light. Einstein's "light quanta" would not be called photons until 1925, but even in 1905 they represented the quintessential example of wave-particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.

Einstein's explanation of the photoelectric effect

The photoelectric effect. Incoming photons on the left strike a metal plate (bottom), and eject electrons, depicted as flying off to the right.

In 1905, Albert Einstein provided an explanation of the photoelectric effect, a hitherto troubling experiment that the wave theory of light seemed incapable of explaining. He did so by postulating the existence of photons, quanta of light energy with particulate qualities.

In the photoelectric effect, it was observed that shining a light on certain metals would lead to an electric current in a circuit. Presumably, the light was knocking electrons out of the metal, causing current to flow. However, using the case of potassium as an example, it was also observed that while a dim blue light was enough to cause a current, even the strongest, brightest red light available with the technology of the time caused no current at all. According to the classical theory of light and matter, the strength or amplitude of a light wave was in proportion to its brightness: a bright light should have been easily strong enough to create a large current. Yet, oddly, this was not so.

Einstein explained this conundrum by postulating that the electrons can receive energy from the electromagnetic field only in discrete portions (quanta that were called photons): an amount of energy E that was related to the frequency f of the light by

E = hf

where h is Planck's constant (6.626 × 10⁻³⁴ J·s). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light (using technology available at the time) below the threshold frequency could release an electron. To "violate" this law would require extremely high-intensity lasers which had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers.[14]
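
As a quick illustration of E = hf, the sketch below computes the photon energies of blue and red light and compares them with a metal work function. The 2.3 eV value is an illustrative work function (roughly that of potassium, the metal used as the example above); it is an assumption of mine rather than a figure from the article.

    # Minimal sketch of the photoelectric threshold: a photon ejects an electron
    # only if its energy hf exceeds the metal's work function.
    H = 6.626e-34           # Planck's constant, J*s
    C = 2.998e8             # speed of light, m/s
    EV = 1.602e-19          # joules per electronvolt
    WORK_FUNCTION_EV = 2.3  # illustrative value, roughly that of potassium

    for color, wavelength_nm in (("blue", 450.0), ("red", 700.0)):
        freq = C / (wavelength_nm * 1e-9)
        energy_ev = H * freq / EV
        ejected = energy_ev > WORK_FUNCTION_EV
        print(f"{color}: E = hf = {energy_ev:.2f} eV -> "
              f"{'ejects' if ejected else 'cannot eject'} an electron")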

Einstein was awarded the Nobel Prize in Physics in 1921 for his discovery of the law of the photoelectric effect.

De Broglie's wavelength

Propagation of de Broglie waves in 1d—real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature decreases, so the amplitude decreases again, and vice versa—the result is an alternating amplitude: a wave. Top: Plane wave. Bottom: Wave packet.

In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter,[15][16] not just light, has a wave-like nature; he related wavelength (denoted λ) and momentum (denoted p) by
λ = h/p
This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = E/c and the wavelength (in a vacuum) by λ = c/f, where c is the speed of light in vacuum.
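
As a rough numerical illustration (a sketch, not part of the article; the accelerating voltage and the ball's mass and speed are assumed values), the following Python lines evaluate λ = h/p for an electron and for an everyday object, showing why wave effects matter for the former and are unobservable for the latter.

import math

h = 6.626e-34     # Planck's constant, J*s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge, C

# Electron accelerated through an assumed 54 V (non-relativistic): p = sqrt(2 m e V)
V = 54.0
p_electron = math.sqrt(2 * m_e * e * V)
print(f"electron at {V} V: lambda = {h / p_electron:.2e} m")   # about 1.7e-10 m, atomic scale

# A 58 g ball moving at 30 m/s, for comparison
p_ball = 0.058 * 30
print(f"ball:              lambda = {h / p_ball:.2e} m")        # immeasurably small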

De Broglie's formula was confirmed three years later for electrons (which differ from photons in having a rest mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid.

De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.

Heisenberg's uncertainty principle

In his work on formulating quantum mechanics, Werner Heisenberg postulated his uncertainty principle, which states:
Δx Δp ≥ ħ/2
where
Δ here indicates standard deviation, a measure of spread or uncertainty;
x and p are a particle's position and linear momentum respectively;
ħ is the reduced Planck constant (Planck's constant divided by 2π).
Heisenberg originally explained this as a consequence of the process of measuring: measuring position accurately would disturb momentum and vice versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. It is now thought, however, that this only partly explains the phenomenon: the uncertainty also exists in the particle itself, even before the measurement is made.

In fact, the modern explanation of the uncertainty principle, extending the Copenhagen interpretation first put forward by Bohr and Heisenberg, depends even more centrally on the wave nature of a particle: just as it is nonsensical to discuss the precise location of a wave on a string, particles do not have perfectly precise positions; likewise, just as it is nonsensical to discuss the wavelength of a "pulse" wave traveling down a string, particles do not have perfectly precise momenta (momentum being proportional to the inverse of the wavelength). Moreover, when position is relatively well defined, the wave is pulse-like and has a very ill-defined wavelength (and thus momentum). And conversely, when momentum (and thus wavelength) is relatively well defined, the wave looks long and sinusoidal, and therefore it has a very ill-defined position.
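
This position–momentum trade-off can be checked numerically. The Python sketch below (an illustration, not from the article; the grid and packet widths are assumed) builds Gaussian wave packets of two different widths, obtains the momentum-space wavefunction with a discrete Fourier transform, and verifies that the product of the spreads stays near the lower bound ħ/2: the packet that is narrower in position is the broader one in momentum.

import numpy as np

hbar = 1.0545718e-34                      # reduced Planck constant, J*s
x = np.linspace(-2e-9, 2e-9, 4096)        # position grid, metres
dx = x[1] - x[0]

for sigma in (5e-11, 2e-10):              # two assumed packet widths, metres
    psi = np.exp(-x**2 / (4 * sigma**2))  # Gaussian wave packet
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

    # Momentum-space wavefunction via FFT; p = hbar * k
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    p = hbar * k
    dp = p[1] - p[0]
    phi = np.fft.fft(psi)
    prob_p = np.abs(phi)**2 / np.sum(np.abs(phi)**2 * dp)   # normalized |phi|^2

    delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)   # spread in position
    delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)           # spread in momentum
    print(f"sigma = {sigma:.0e} m: dx*dp = {delta_x * delta_p:.3e}  (hbar/2 = {hbar / 2:.3e})")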

de Broglie–Bohm theory

Couder experiments,[17] "materializing" the pilot wave model.

De Broglie himself had proposed a pilot wave construct to explain the observed wave-particle duality. In this view, each particle has a well-defined position and momentum, but is guided by a wave function derived from Schrödinger's equation. The pilot wave theory was initially rejected because it generated non-local effects when applied to systems involving more than one particle. Non-locality, however, soon became established as an integral feature of quantum theory (see EPR paradox), and David Bohm extended de Broglie's model to explicitly include it.

In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics,[18] the wave–particle duality vanishes, and the wave behaviour is explained as scattering with a wave-like appearance, because the particle's motion is subject to a guiding equation or quantum potential. "This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored", wrote J. S. Bell.[19]

The best illustration of the pilot-wave model was given by Couder's 2010 "walking droplets" experiments,[20] demonstrating the pilot-wave behaviour in a macroscopic mechanical analog.[17]

Wave behavior of large objects

Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929.[21] Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves.

A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality was conducted in the 1970s using the neutron interferometer.[22] Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before.

In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported.[23] Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength of the incident beam was about 2.5 pm, whereas the diameter of the molecule is about 1 nm, roughly 400 times larger. In 2012, these far-field diffraction experiments were extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively. In these experiments the build-up of such interference patterns could be recorded in real time and with single-molecule sensitivity.[24][25]
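
For orientation, the quoted wavelength can be reproduced with a one-line estimate. The Python snippet below is an illustrative calculation, not the experiment's actual data analysis; the beam velocity is an assumed value of the order used in such molecular-beam experiments.

h = 6.626e-34        # Planck's constant, J*s
amu = 1.6605e-27     # atomic mass unit, kg
m_c60 = 720 * amu    # mass of a C60 fullerene
v = 220.0            # assumed beam velocity, m/s

wavelength = h / (m_c60 * v)                                    # de Broglie relation
print(f"C60 at {v} m/s: lambda = {wavelength * 1e12:.1f} pm")   # a few picometres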

In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin,[26] a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot–Lau interferometer.[27][28] In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms.[26] Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms.[29][30] In 2011, the interference of molecules as heavy as 6910 u was demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer.[31] In 2013, the interference of molecules beyond 10,000 u was demonstrated.[32]

Whether objects heavier than the Planck mass (about 22 micrograms, roughly the mass of a very large bacterium) have a de Broglie wavelength is theoretically unclear and experimentally unreachable; above the Planck mass a particle's Compton wavelength would be smaller than the Planck length and its own Schwarzschild radius, a scale at which current theories of physics may break down or need to be replaced by more general ones.[33]
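
The scale in question can be estimated directly. The short Python calculation below is a rough sketch that ignores factors of order one: it finds the mass at which the Compton wavelength h/(mc) equals the Schwarzschild radius 2Gm/c², and compares it with the conventional Planck mass √(ħc/G).

import math

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

m_crossover = math.sqrt(h * c / (2 * G))          # from h/(m c) = 2 G m / c^2
m_planck = math.sqrt(h * c / (2 * math.pi * G))   # sqrt(hbar c / G)
print(f"crossover mass ~ {m_crossover:.2e} kg, Planck mass ~ {m_planck:.2e} kg")

Both come out at a few times 10⁻⁸ kg, i.e. tens of micrograms, consistent with the figure quoted above.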

Recently Couder, Fort, et al. showed[34] that macroscopic oil droplets bouncing on a vibrating surface can serve as a model of wave–particle duality: the localized droplet creates a periodic wave field around itself, and its interaction with that field leads to quantum-like phenomena, including interference in a double-slit experiment,[35] unpredictable tunneling[36] (which depends in a complicated way on the practically hidden state of the field), orbit quantization[37] (the particle has to 'find a resonance' with the field perturbations it creates; after one orbit, its internal phase must return to its initial state), and the Zeeman effect.[38]

Treatment in modern quantum mechanics

Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, have no solutions of the Schrödinger equation; their wave-like propagation is instead described by other wave equations, such as Maxwell's equations for light.
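
The interference mentioned here follows directly from adding complex wave amplitudes. The Python sketch below is illustrative only; the wavelength, slit separation and screen distance are assumed values. It superposes the amplitudes from two slits in the small-angle approximation and prints the resulting relative intensity at a few screen positions.

import numpy as np

wavelength = 500e-9       # m, assumed
slit_separation = 20e-6   # m, assumed
screen_distance = 1.0     # m, assumed

x = np.linspace(-0.05, 0.05, 9)                       # positions on the screen, m
path_diff = slit_separation * x / screen_distance     # small-angle path difference
phase = 2 * np.pi * path_diff / wavelength

# Superpose two unit amplitudes and take the squared magnitude
intensity = np.abs(1 + np.exp(1j * phase))**2
for xi, Ii in zip(x, intensity):
    print(f"x = {xi:+.3f} m   relative intensity = {Ii:.2f}")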

The particle-like behavior is most evident in phenomena associated with measurement in quantum mechanics. Upon measuring the location of the particle, the particle will be forced into a more localized state as given by the uncertainty principle. When viewed through this formalism, the measurement of the wave function will randomly "collapse", or rather "decohere", to a sharply peaked function at some location. For particles with mass, the likelihood of detecting the particle at any particular location is equal to the squared amplitude of the wave function there. The measurement will return a well-defined position (subject to uncertainty), a property traditionally associated with particles. It is important to note that a measurement is only a particular type of interaction where some data is recorded and the measured quantity is forced into a particular eigenstate. The act of measurement is therefore not fundamentally different from any other interaction.

Following the development of quantum field theory, the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as the wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation where the outgoing legs are known to be simplifications of the propagation and the internal lines are terms at some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena which previously were thought of as paradoxes are explained. Within the limits of wave–particle duality, quantum field theory gives the same results.

Visualization

There are two ways to visualize the wave-particle behaviour: by the "standard model", described below; and by the de Broglie–Bohm model, where no duality is perceived.

Below is an illustration of wave–particle duality as it relates to de Broglie's hypothesis and Heisenberg's uncertainty principle (above), in terms of the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other.

The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread.

Conversely the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread.
Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity (%) of the particles corresponds to the probability density of finding the particle with position x or momentum component p.
Top: If the wavelength λ is unknown, so are the momentum p, wave-vector k and energy E (de Broglie relations). Here the particle is more localized in position space: Δx is relatively small, at the cost of a larger spread Δpx.
Bottom: If λ is known, so are p, k, and E. Here the particle is more localized in momentum space: Δpx is relatively small, at the cost of a larger spread Δx.

Alternative views

Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here. These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community.

Both-particle-and-wave view

The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality; rather, a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential"), which directs them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community.[39]

At least one physicist considers the "wave duality" not to be an incomprehensible mystery. L. E. Ballentine, Quantum Mechanics, A Modern Development, p. 4, explains:
When first discovered, particle diffraction was a source of great puzzlement. Are "particles" really "waves?" In the early experiments, the diffraction patterns were detected holistically by means of a photographic plate, which could not detect individual particles. As a result, the notion grew that particle and wave properties were mutually incompatible, or complementary, in the sense that different measurement apparatuses would be required to observe them. That idea, however, was only an unfortunate generalization from a technological limitation. Today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots (Tonomura et al., 1989). Evidently, quantum particles are indeed particles, but whose behaviour is very different from what classical physics would have us expect.
The Afshar experiment[40] (2007) may suggest that it is possible to simultaneously observe both wave and particle properties of photons. This claim is, however, disputed by other scientists.[41][42][43][44]

Wave-only view

At least one scientist proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Carver Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon:[45]
Mead has cut the Gordian knot of quantum complementarity. He claims that atoms, with their neutrons, protons, and electrons, are not particles at all but pure waves of matter. Mead cites as the gross evidence of the exclusively wave nature of both light and matter the discovery between 1933 and 1996 of ten examples of pure wave phenomena, including the ubiquitous laser of CD players, the self-propagating electrical currents of superconductors, and the Bose–Einstein condensate of atoms.
Albert Einstein, who, in his search for a Unified Field Theory, did not accept wave-particle duality, wrote:[46]
This double nature of radiation (and of material corpuscles)...has been interpreted by quantum-mechanics in an ingenious and amazingly successful fashion. This interpretation...appears to me as only a temporary way out...
The many-worlds interpretation (MWI) is sometimes presented as a waves-only theory, including by its originator, Hugh Everett, who referred to MWI as "the wave interpretation".[47]

The Three Wave Hypothesis of R. Horodecki relates the particle to the wave.[48][49] The hypothesis implies that a massive particle is an intrinsically spatially as well as temporally extended wave phenomenon governed by a nonlinear law.

Particle-only view

Still in the days of the old quantum theory, a pre-quantum-mechanical version of wave–particle duality was pioneered by William Duane,[50] and developed by others including Alfred Landé.[51] Duane explained diffraction of x-rays by a crystal in terms solely of their particle aspect. The deflection of the trajectory of each diffracted photon was explained as due to quantized momentum transfer from the spatially regular structure of the diffracting crystal.[52]

Neither-wave-nor-particle view

It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. For this reason, in 1928 Arthur Eddington[53] coined the name "wavicle" to describe the objects, although it is not regularly used today. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical, because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states:[54]
"Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...]."

Relational approach to wave–particle duality

Relational quantum mechanics has been developed as an approach that regards the detection event as establishing a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg's uncertainty principle, and thus wave–particle duality, is thereby avoided.[55]

Applications

Although it is difficult to draw a line separating wave–particle duality from the rest of quantum mechanics, it is nevertheless possible to list some applications of this basic idea.
  • Wave–particle duality is exploited in electron microscopy, where the small wavelengths associated with the electron can be used to view objects much smaller than those that can be resolved using visible light.
  • Similarly, neutron diffraction uses neutrons with a wavelength of about 0.1 nm, the typical spacing of atoms in a solid, to determine the structure of solids (a rough speed estimate for such neutrons is sketched after this list).
  • Photos are now able to show this dual nature, which may lead to new ways of examining and recording this behaviour.[56]
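
As a quick check of the neutron figure above, the speed that corresponds to a 0.1 nm de Broglie wavelength can be computed in a couple of lines (an illustrative sketch, not from the article):

h = 6.626e-34        # Planck's constant, J*s
m_n = 1.675e-27      # neutron mass, kg
wavelength = 0.1e-9  # m, the atomic-spacing-scale wavelength quoted above

v = h / (m_n * wavelength)   # from lambda = h / (m v)
print(f"neutron speed for lambda = 0.1 nm: {v:.0f} m/s")   # roughly 4 km/s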
