
Wednesday, February 4, 2015

Organic chemistry


From Wikipedia, the free encyclopedia


Methane, CH4, in line-angle representation, showing four carbon-hydrogen single (σ) bonds in black, and the 3D shape of such tetrahedral molecules, with ~109° interior bond angles, in green. Methane is the simplest organic chemical and the simplest hydrocarbon; more complex molecules can be built up conceptually from it by exchanging up to all four hydrogens for carbon or other atoms.

Organic chemistry is a chemistry subdiscipline involving the scientific study of the structure, properties, and reactions of organic compounds and organic materials, i.e., matter in its various forms that contain carbon atoms.[1][2] Study of structure includes using spectroscopy (e.g., NMR), mass spectrometry, and other physical and chemical methods to determine the chemical composition and constitution of organic compounds and materials. Study of properties includes both physical properties and chemical properties, and uses similar methods as well as methods to evaluate chemical reactivity, with the aim to understand the behavior of the organic matter in its pure form (when possible), but also in solutions, mixtures, and fabricated forms. The study of organic reactions includes probing their scope through use in preparation of target compounds (e.g., natural products, drugs, polymers, etc.) by chemical synthesis, as well as the focused study of the reactivities of individual organic molecules, both in the laboratory and via theoretical (in silico) study.

The range of chemicals studied in organic chemistry includes hydrocarbons (compounds containing only carbon and hydrogen) as well as myriad compositions based always on carbon but also containing other elements,[1][3][4] especially oxygen, nitrogen, sulfur, phosphorus, and the halogens. In the modern era, the range extends further into the periodic table, with main group elements such as boron and silicon. In addition, much modern research focuses on organic chemistry involving further organometallics, including the lanthanides, but especially the transition metals (e.g., zinc, copper, palladium, nickel, cobalt, titanium, and chromium).
Line-angle representation
Ball-and-stick representation
Space-filling representation
Three representations of an organic compound, 5α-Dihydroprogesterone (5α-DHP), a steroid hormone. For molecules showing color, the carbon atoms are in black, hydrogens in gray, and oxygens in red. In the line angle representation, carbon atoms are implied at every terminus of a line and vertex of multiple lines, and hydrogen atoms are implied to fill the remaining needed valences (up to 4).

Finally, organic compounds form the basis of all earthly life and constitute a significant part of human endeavors in chemistry. The bonding patterns open to carbon, with its valence of four—formal single, double, and triple bonds, as well as various structures with delocalized electrons—make the array of organic compounds structurally diverse, and their range of applications enormous. They either form the basis of, or are important constituents of, many commercial products including pharmaceuticals; petrochemicals and products made from them (including lubricants, solvents, etc.); plastics; fuels and explosives; etc. As indicated, the study of organic chemistry overlaps with organometallic chemistry and biochemistry, but also with medicinal chemistry, polymer chemistry, as well as many aspects of materials science.[1]

Periodic table of elements of interest in organic chemistry. The table illustrates all elements of current interest in modern organic and organometallic chemistry, indicating main group elements in orange, and transition metals and lanthanides (Lan) in grey.

History

Main article: History of chemistry

Before the nineteenth century, chemists generally believed that compounds obtained from living organisms were endowed with a vital force that distinguished them from inorganic compounds. According to the concept of vitalism (vital force theory), organic matter was endowed with a "vital force".[5] During the first half of the nineteenth century, some of the first systematic studies of organic compounds were reported. Around 1816 Michel Chevreul started a study of soaps made from various fats and alkalis. He separated the different acids that, in combination with the alkali, produced the soap. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats (which traditionally come from organic sources), producing new compounds, without "vital force". In 1828 Friedrich Wöhler produced the organic chemical urea (carbamide), a constituent of urine, from the inorganic ammonium cyanate NH4CNO, in what is now called the Wöhler synthesis. Although Wöhler was always cautious about claiming that he had disproved the theory of vital force, this event has often been thought of as a turning point.[5]

In 1856 William Henry Perkin, while trying to manufacture quinine, accidentally manufactured the organic dye now known as Perkin's mauve. Through its great financial success, this discovery greatly increased interest in organic chemistry.[6]

The crucial breakthrough for organic chemistry was the concept of chemical structure, developed independently and simultaneously by Friedrich August Kekulé and Archibald Scott Couper in 1858.[7] Both men suggested that tetravalent carbon atoms could link to each other to form a carbon lattice, and that the detailed patterns of atomic bonding could be discerned by skillful interpretations of appropriate chemical reactions.

The pharmaceutical industry began in the last decade of the 19th century when the manufacturing of acetylsalicylic acid (more commonly referred to as aspirin) in Germany was started by Bayer.[8] The first time a drug was systematically improved was with arsphenamine (Salvarsan). Though numerous derivatives of the dangerously toxic atoxyl were examined by Paul Ehrlich and his group, the compound with the best balance of effectiveness and toxicity was selected for production.[citation needed]

An example of an organometallic molecule, a catalyst called Grubbs' catalyst, as a ball-and-stick model based on an X-ray crystal structure.[9] The formula of the catalyst is often given as RuCl2(PCy3)2(=CHPh), where the ruthenium metal atom, Ru, is at the very center in turquoise, carbons are in black, hydrogens in gray-white, chlorine in green, and phosphorus in orange. The metal ligand at the bottom is a tricyclohexylphosphine, abbreviated PCy3, and another of these appears at the top of the image (where its rings are obscuring one another). The group projecting out to the right has a metal-carbon double bond and is known as an alkylidene. Robert Grubbs shared the 2005 Nobel Prize in Chemistry with Richard R. Schrock and Yves Chauvin for their work on the reactions such catalysts mediate, called olefin metathesis.

Early examples of organic reactions and applications were often serendipitous. The latter half of the 19th century, however, witnessed systematic studies of organic compounds. Illustrative is the development of synthetic indigo. The production of indigo from plant sources dropped from 19,000 tons in 1897 to 1,000 tons by 1914 thanks to the synthetic methods developed by Adolf von Baeyer. In 2002, 17,000 tons of synthetic indigo were produced from petrochemicals.[10]

In the early part of the 20th century, polymers and enzymes were shown to be large organic molecules, and petroleum was shown to be of biological origin.

The multistep synthesis of complex organic compounds is called total synthesis. Early total syntheses targeted natural compounds of increasing complexity, such as glucose and terpineol, and cholesterol-related compounds opened ways to synthesize complex human hormones and their modified derivatives. Since the start of the 20th century, the complexity of total syntheses has grown to include molecules of high complexity such as lysergic acid and vitamin B12.[11]

The total synthesis of vitamin B12 marked a major achievement in organic chemistry.

The development of organic chemistry benefited from the discovery of petroleum and the development of the petrochemical industry. The conversion of individual compounds obtained from petroleum into different compound types by various chemical processes led to the birth of the petrochemical industry, which successfully manufactured artificial rubbers, various organic adhesives, property-modifying petroleum additives, and plastics.

The majority of chemical compounds occurring in biological organisms are carbon compounds, so the association between organic chemistry and biochemistry is so close that biochemistry might be regarded as in essence a branch of organic chemistry. Although the history of biochemistry might be taken to span some four centuries, fundamental understanding of the field only began to develop in the late 19th century, and the actual term biochemistry was coined around the start of the 20th century. Research in the field increased throughout the twentieth century, without any indication of slackening in the rate of increase, as may be verified by inspection of abstraction and indexing services such as BIOSIS Previews and Biological Abstracts, which began in the 1920s as a single annual volume but grew so drastically that by the end of the 20th century it was only available to the everyday user as an online electronic database.[12]

Characterization

Since organic compounds often exist as mixtures, a variety of techniques have been developed to assess purity; especially important are chromatography techniques such as HPLC and gas chromatography. Traditional methods of separation include distillation, crystallization, and solvent extraction.

Organic compounds were traditionally characterized by a variety of chemical tests, called "wet methods", but such tests have been largely displaced by spectroscopic or other computer-intensive methods of analysis.[13] Listed in approximate order of utility, the chief analytical methods are:
  • Nuclear magnetic resonance (NMR) spectroscopy is the most commonly used technique, often permitting complete assignment of atom connectivity and even stereochemistry using correlation spectroscopy. The principal constituent atoms of organic chemistry, hydrogen and carbon, exist naturally with NMR-responsive isotopes, respectively 1H and 13C.
  • Elemental analysis: A destructive method used to determine the elemental composition of a molecule. See also mass spectrometry, below.
  • Mass spectrometry indicates the molecular weight of a compound and, from the fragmentation patterns, its structure. High resolution mass spectrometry can usually identify the exact formula of a compound and is used in lieu of elemental analysis. In former times, mass spectrometry was restricted to neutral molecules exhibiting some volatility, but advanced ionization techniques allow one to obtain the "mass spec" of virtually any organic compound.
  • Crystallography is an unambiguous method for determining molecular geometry, the proviso being that single crystals of the material must be available and the crystal must be representative of the sample. Highly automated software allows a structure to be determined within hours of obtaining a suitable crystal.
Traditional spectroscopic methods such as infrared spectroscopy, optical rotation, and UV/VIS spectroscopy provide relatively nonspecific structural information but remain in use for specific classes of compounds.

Properties

Physical properties of organic compounds typically of interest include both quantitative and qualitative features. Quantitative information includes melting point, boiling point, and index of refraction. Qualitative properties include odor, consistency, solubility, and color.

Melting and boiling properties

Organic compounds typically melt and many boil. In contrast, while inorganic materials generally can be melted, many do not boil, tending instead to degrade. In earlier times, the melting point (m.p.) and boiling point (b.p.) provided crucial information on the purity and identity of organic compounds. The melting and boiling points correlate with the polarity of the molecules and their molecular weight. Some organic compounds, especially symmetrical ones, sublime; that is, they pass directly from the solid to the gas phase without melting. A well-known example of a sublimable organic compound is para-dichlorobenzene, the odoriferous constituent of modern mothballs. Organic compounds are usually not very stable at temperatures above 300 °C, although some exceptions exist.

Solubility

Neutral organic compounds tend to be hydrophobic; that is, they are less soluble in water than in organic solvents. Exceptions include organic compounds that contain ionizable groups (groups that can be converted into ions), as well as low molecular weight alcohols, amines, and carboxylic acids, where hydrogen bonding occurs. Organic compounds tend to dissolve in organic solvents. Solvents can be either pure substances, like ether or ethyl alcohol, or mixtures, such as the paraffinic solvents (the various petroleum ethers and white spirits) or the range of pure or mixed aromatic solvents obtained from petroleum or tar fractions by physical separation or by chemical conversion. Solubility in the different solvents depends upon the solvent type and on any functional groups present.

Solid state properties

Various specialized properties of molecular crystals and organic polymers with conjugated systems are of interest depending on applications, e.g., thermo-mechanical properties, electro-mechanical properties such as piezoelectricity, electrical conductivity (see conductive polymers and organic semiconductors), and electro-optical properties (e.g., non-linear optics). For historical reasons, such properties are mainly the subjects of the areas of polymer science and materials science.

Nomenclature

Various names and depictions for one organic compound.

The names of organic compounds are either systematic, following logically from a set of rules, or nonsystematic, following various traditions. Systematic nomenclature is stipulated by specifications from IUPAC. Systematic nomenclature starts with the name for a parent structure within the molecule of interest. This parent name is then modified by prefixes, suffixes, and numbers to unambiguously convey the structure. Given that millions of organic compounds are known, rigorous use of systematic names can be cumbersome. Thus, IUPAC recommendations are more closely followed for simple compounds, but not complex molecules. To use the systematic naming, one must know the structures and names of the parent structures. Parent structures include unsubstituted hydrocarbons, heterocycles, and monofunctionalized derivatives thereof.

Nonsystematic nomenclature is simpler and unambiguous, at least to organic chemists. Nonsystematic names do not indicate the structure of the compound. They are common for complex molecules, which include most natural products. Thus, the informally named lysergic acid diethylamide is systematically named (6aR,9R)-N,N-diethyl-7-methyl-4,6,6a,7,8,9-hexahydroindolo[4,3-fg]quinoline-9-carboxamide.

With the increased use of computing, other naming methods have evolved that are intended to be interpreted by machines. Two popular formats are SMILES and InChI.
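As a hedged illustration of these two machine-readable formats (a minimal sketch assuming the open-source RDKit cheminformatics library is installed; the SMILES string used is a commonly published one for caffeine):

# Interconverting machine-readable names with RDKit (assumed installed).
from rdkit import Chem

smiles = "Cn1cnc2c1c(=O)n(C)c(=O)n2C"   # caffeine as a SMILES string
mol = Chem.MolFromSmiles(smiles)         # parse into a molecule object

print(Chem.MolToInchi(mol))   # the same molecule as an InChI identifier
print(Chem.MolToSmiles(mol))  # and back out as a canonical SMILES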

Structural drawings

Organic molecules are more commonly described by drawings or structural formulas, combinations of drawings and chemical symbols. The line-angle formula is simple and unambiguous. In this system, the endpoints and intersections of the lines each represent one carbon, and hydrogen atoms can either be notated explicitly or assumed to be present as implied by tetravalent carbon. The depiction of organic compounds with drawings is greatly simplified by the fact that carbon in almost all organic compounds has four bonds, nitrogen three, oxygen two, and hydrogen one.
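As a toy sketch of the implied-hydrogen rule just described (the valence table and function below are illustrative only, not a real structure parser):

# Hydrogens implied at an atom in a line-angle drawing: enough to
# complete the element's usual valence.
USUAL_VALENCE = {"C": 4, "N": 3, "O": 2}

def implicit_hydrogens(element, explicit_bonds):
    """Hydrogens implied at an atom drawn with `explicit_bonds` bond lines
    (counting a double bond as 2)."""
    return max(USUAL_VALENCE[element] - explicit_bonds, 0)

print(implicit_hydrogens("C", 1))  # chain terminus: 3 implied hydrogens (CH3)
print(implicit_hydrogens("C", 2))  # vertex of two lines: 2 implied hydrogens (CH2)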

Classification of organic compounds

Functional groups

The family of carboxylic acids contains a carboxyl (-COOH) functional group. Acetic acid, shown here, is an example.

The concept of functional groups is central in organic chemistry, both as a means to classify structures and for predicting properties. A functional group is a molecular module, and the reactivity of that functional group is assumed, within limits, to be the same in a variety of molecules. Functional groups can have a decisive influence on the chemical and physical properties of organic compounds. Molecules are classified on the basis of their functional groups. Alcohols, for example, all have the subunit C-O-H. All alcohols tend to be somewhat hydrophilic, usually form esters, and usually can be converted to the corresponding halides. Most functional groups feature heteroatoms (atoms other than C and H). Organic compounds are classified according to functional groups: alcohols, carboxylic acids, amines, and so on.
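A hedged sketch of this group-based classification in code (again assuming RDKit; the SMARTS substructure patterns below are illustrative, not exhaustive):

# Classifying molecules by functional group via substructure matching.
from rdkit import Chem

PATTERNS = {
    "alcohol":         Chem.MolFromSmarts("[CX4][OX2H]"),
    "carboxylic acid": Chem.MolFromSmarts("C(=O)[OX2H]"),
    "amine":           Chem.MolFromSmarts("[NX3;H2,H1;!$(NC=O)]"),
}

def functional_groups(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return [name for name, patt in PATTERNS.items() if mol.HasSubstructMatch(patt)]

print(functional_groups("CCO"))      # ethanol -> ['alcohol']
print(functional_groups("CC(=O)O"))  # acetic acid -> ['carboxylic acid']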

Aliphatic compounds

The aliphatic hydrocarbons are subdivided into three groups of homologous series according to their state of saturation:
  • paraffins, which are alkanes without any double or triple bonds;
  • olefins or alkenes, which contain one or more double bonds, including di-olefins (dienes) and poly-olefins;
  • alkynes, which have one or more triple bonds.
The rest of the group is classed according to the functional groups present. Such compounds can be "straight-chain", branched-chain or cyclic. The degree of branching affects characteristics, such as the octane number or cetane number in petroleum chemistry.
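The saturation classes above can be read off a molecular formula using the standard degree-of-unsaturation formula, DoU = (2C + 2 + N - H - X)/2, which counts rings plus pi bonds. A minimal sketch:

# Degree of unsaturation from a molecular formula.
def degree_of_unsaturation(c, h, n=0, halogens=0):
    return (2 * c + 2 + n - h - halogens) / 2

print(degree_of_unsaturation(6, 14))  # hexane C6H14 -> 0.0 (a paraffin)
print(degree_of_unsaturation(6, 12))  # C6H12 -> 1.0 (one double bond or one ring)
print(degree_of_unsaturation(6, 6))   # benzene C6H6 -> 4.0 (ring + three double bonds)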

Both saturated (alicyclic) compounds and unsaturated compounds exist as cyclic derivatives. The most stable rings contain five or six carbon atoms, but large rings (macrocycles) and smaller rings are common. The smallest cycloalkane family is the three-membered cyclopropane ((CH2)3). Saturated cyclic compounds contain single bonds only, whereas aromatic rings have alternating (conjugated) double bonds. Cycloalkanes do not contain multiple bonds, whereas the cycloalkenes and the cycloalkynes do.

Aromatic compounds


Benzene is one of the best-known aromatic compounds as it is one of the simplest and most stable aromatics.

Aromatic hydrocarbons contain conjugated double bonds. This means that every carbon atom in the ring is sp2 hybridized, allowing for added stability. The most important example is benzene, the structure of which was formulated by Kekulé who first proposed the delocalization or resonance principle for explaining its structure. For "conventional" cyclic compounds, aromaticity is conferred by the presence of 4n + 2 delocalized pi electrons, where n is an integer. Particular instability (antiaromaticity) is conferred by the presence of 4n conjugated pi electrons.
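A minimal sketch of the electron-count test just stated (which applies only to planar, fully conjugated rings):

# Hückel's rule: 4n + 2 pi electrons -> aromatic; 4n -> antiaromatic.
def huckel_class(pi_electrons):
    if pi_electrons % 4 == 2:
        return "aromatic (4n + 2)"
    if pi_electrons % 4 == 0:
        return "antiaromatic (4n)"
    return "neither"

print(huckel_class(6))  # benzene, 6 pi electrons -> aromatic
print(huckel_class(4))  # cyclobutadiene, 4 pi electrons -> antiaromatic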

Heterocyclic compounds

The characteristics of the cyclic hydrocarbons are again altered if heteroatoms are present, which can exist as either substituents attached externally to the ring (exocyclic) or as a member of the ring itself (endocyclic). In the case of the latter, the ring is termed a heterocycle. Pyridine and furan are examples of aromatic heterocycles while piperidine and tetrahydrofuran are the corresponding alicyclic heterocycles. The heteroatom of heterocyclic molecules is generally oxygen, sulfur, or nitrogen, with the latter being particularly common in biochemical systems.
Examples of groups among the heterocyclics are the aniline dyes, the great majority of the compounds discussed in biochemistry such as alkaloids, many compounds related to vitamins, steroids, nucleic acids (e.g., DNA, RNA), and also numerous medicines. Heterocyclics with relatively simple structures are pyrrole (a 5-membered ring) and indole (a pyrrole ring fused to a 6-membered carbon ring).

Rings can fuse with other rings on an edge to give polycyclic compounds. The purine nucleoside bases are notable polycyclic aromatic heterocycles. Rings can also fuse on a "corner" such that one atom (almost always carbon) has two bonds going to one ring and two to another. Such compounds are termed spiro compounds and are important in a number of natural products.

Polymers

This swimming board is made of polystyrene, an example of a polymer.

One important property of carbon is that it readily forms chains, or networks, that are linked by carbon-carbon bonds. The linking process is called polymerization, while the chains, or networks, are called polymers. The source compound is called a monomer.

Two main groups of polymers exist: synthetic polymers and biopolymers. Synthetic polymers are artificially manufactured and are commonly referred to as industrial polymers.[14] Biopolymers occur naturally, without human intervention.

Since the invention of the first synthetic polymer product, Bakelite, synthetic polymer products have frequently been invented.[citation needed]

Common synthetic organic polymers are polyethylene (polythene), polypropylene, nylon, Teflon (PTFE), polystyrene, polyesters, polymethylmethacrylate (called Perspex and Plexiglas), and polyvinylchloride (PVC).[citation needed]

Both synthetic and natural rubber are polymers.[citation needed]

Varieties of each synthetic polymer product may exist, for purposes of a specific use. Changing the conditions of polymerization alters the chemical composition of the product and its properties. These variations include chain length, branching, and tacticity.[citation needed]

With a single monomer as a start, the product is a homopolymer.[citation needed]

Secondary component(s) may be added to create a heteropolymer (co-polymer) and the degree of clustering of the different components can also be controlled.[citation needed]

Physical characteristics, such as hardness, density, mechanical or tensile strength, abrasion resistance, heat resistance, transparency, colour, etc. will depend on the final composition.[citation needed]

Biomolecules


Maitotoxin, a complex organic biological toxin.

Biomolecular chemistry is a major category within organic chemistry which is frequently studied by biochemists. Many complex multi-functional-group molecules are important in living organisms. Some are long-chain biopolymers; these include peptides, DNA, RNA, and the polysaccharides, such as starches in animals and celluloses in plants. The other main classes are amino acids (monomer building blocks of peptides and proteins), carbohydrates (which include the polysaccharides), the nucleic acids (which include DNA and RNA as polymers), and the lipids. In addition, animal biochemistry contains many small-molecule intermediates that assist in energy production through the Krebs cycle, and it produces isoprene, the most common hydrocarbon in animals. Isoprenes in animals form the important steroid structural compound cholesterol and the steroid hormones; in plants they form terpenes, terpenoids, some alkaloids, and the polyisoprenoid biopolymers present in the latex of various species of plants, which are the basis for making rubber.

Small molecules


Molecular models of caffeine.

In pharmacology, an important group of organic compounds is small molecules, also referred to as 'small organic compounds'. In this context, a small molecule is a small organic compound that is biologically active, but is not a polymer. In practice, small molecules have a molar mass less than approximately 1000 g/mol.
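A minimal sketch of that molar-mass cutoff, assuming standard atomic weights; caffeine, pictured above, comes out near 194 g/mol and so comfortably qualifies as a small molecule:

# Molar mass from an element -> count mapping, in g/mol.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition):
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

caffeine = {"C": 8, "H": 10, "N": 4, "O": 2}
print(f"{molar_mass(caffeine):.2f} g/mol")  # ~194.19, well below ~1000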

Fullerenes

Fullerenes and carbon nanotubes, carbon compounds with spheroidal and tubular structures, have stimulated much research into the related field of materials science. The first fullerene was discovered in 1985 by Sir Harold W. Kroto of the United Kingdom and by Richard E. Smalley and Robert F. Curl, Jr., of the United States. Using a laser to vaporize graphite rods in an atmosphere of helium gas, these chemists and their assistants obtained cagelike molecules composed of 60 carbon atoms (C60) joined together by single and double bonds to form a hollow sphere with 12 pentagonal and 20 hexagonal faces, a design that resembles a football, or soccer ball. In 1996 the trio was awarded the Nobel Prize in Chemistry for their pioneering efforts. The C60 molecule was named buckminsterfullerene (or, more simply, the buckyball) after the American architect R. Buckminster Fuller, whose geodesic dome is constructed on the same structural principles. The elongated cousins of buckyballs, carbon nanotubes, were identified in 1991 by Iijima Sumio of Japan.
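A short arithmetic check, using only Euler's polyhedron formula V - E + F = 2, that 12 pentagons and 20 hexagons close up exactly around 60 three-coordinate carbon atoms:

V = 60                # carbon atoms (vertices)
E = 3 * V // 2        # each atom forms 3 bonds; each bond joins 2 atoms -> 90
F = 12 + 20           # pentagonal + hexagonal faces -> 32
print(V - E + F == 2)               # True: Euler's formula holds
print((12 * 5 + 20 * 6) // 2 == E)  # True: face edges also sum to the 90 bonds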

Others

Organic compounds containing bonds of carbon to nitrogen, oxygen and the halogens are not normally grouped separately. Others are sometimes put into major groups within organic chemistry and discussed under titles such as organosulfur chemistry, organometallic chemistry, organophosphorus chemistry and organosilicon chemistry.

Organic synthesis

A synthesis designed by E.J. Corey for oseltamivir (Tamiflu). This synthesis has 11 distinct reactions.

Synthetic organic chemistry is an applied science, as it borders engineering, the "design, analysis, and/or construction of works for practical purposes". Organic synthesis of a novel compound is a problem-solving task, where a synthesis is designed for a target molecule by selecting optimal reactions from optimal starting materials. Complex compounds can have tens of reaction steps that sequentially build the desired molecule. The synthesis proceeds by utilizing the reactivity of the functional groups in the molecule. For example, a carbonyl compound can be used as a nucleophile by converting it into an enolate, or as an electrophile; the combination of the two is called the aldol reaction. Designing practically useful syntheses always requires conducting the actual synthesis in the laboratory. The scientific practice of creating novel synthetic routes for complex molecules is called total synthesis.

Strategies to design a synthesis include retrosynthesis, popularized by E.J. Corey, which starts with the target molecule and splits it into pieces according to known reactions. The pieces, or the proposed precursors, receive the same treatment, until available and ideally inexpensive starting materials are reached. Then the retrosynthesis is written in the opposite direction to give the synthesis. A "synthetic tree" can be constructed, because each compound and also each precursor has multiple syntheses.

Organic reactions

Organic reactions are chemical reactions involving organic compounds. Many of these reactions are associated with functional groups. The general theory of these reactions involves careful analysis of such properties as the electron affinity of key atoms, bond strengths and steric hindrance. These factors can determine the relative stability of short-lived reactive intermediates, which usually directly determine the path of the reaction.

The basic reaction types are: addition reactions, elimination reactions, substitution reactions, pericyclic reactions, rearrangement reactions and redox reactions. An example of a common reaction is a substitution reaction written as:
Nu⁻ + C-X → C-Nu + X⁻
where X is a functional group acting as the leaving group and Nu is a nucleophile.
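For a concrete instance of this pattern (with hydroxide as the nucleophile and bromide as the leaving group, a pairing chosen here purely for illustration):
HO⁻ + CH3-Br → CH3-OH + Br⁻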

The number of possible organic reactions is basically infinite. However, certain general patterns are observed that can be used to describe many common or useful reactions. Each reaction has a stepwise reaction mechanism that explains how it happens in sequence—although the detailed description of steps is not always clear from a list of reactants alone.

The stepwise course of any given reaction mechanism can be represented using arrow pushing techniques in which curved arrows are used to track the movement of electrons as starting materials transition through intermediates to final products.

AAAS Scientists: Consensus on GMO Safety Firmer Than For Human-Induced Climate Change


In sharp contrast to public views about GMOs, 89% of scientists believe genetically modified foods are safe.

That's the most eye-opening finding in a Pew Research Center study on science literacy, undertaken in cooperation with the American Association for the Advancement of Science (AAAS), and released on January 29.

The overwhelming scientific consensus exceeds the percentage of scientists, 88%, who think humans are mostly responsible for climate change. However, the public appears far more suspicious of scientific claims about GMO safety than it is about the consensus on global warming.

Some 57% of Americans say GM foods are unsafe, and a startling 67% do not trust scientists, believing that scientists do not fully understand the science behind GMOs. AAAS researchers blame poor reporting by the mainstream media for the trust and literacy gaps.

The survey also contrasts sharply with a statement published earlier this week in a pay-for-play European journal by a group of anti-GMO scientists and activists, including Michael Hansen of the Center for Food Safety, and philosopher Vandana Shiva, claiming there is "no scientific consensus on GMO safety."

A huge literacy gap between scientists and the public on biotechnology is one of the many disturbing nuggets that emerged from the Pew Research Center survey, which was conducted in cooperation with the AAAS, the world's largest independent general scientific society. The full study, released on January 29, is available here.

The first of several reports to be released in coming months, this study compares the views of scientists and the general public on the role of science in the United States and globally.

The eye-opening take-away: The American population in general borders on scientific illiteracy. What scientists believe, grounded in empirical evidence, often differs sharply from what the general public thinks is true. The differences are sharpest over biomedical research, including GMOs.
  • 88% of AAAS scientists think eating GM food is safe, while only 37% of the public believes that’s true--a 51-percentage point gap
  • 68% of scientists say it is safe to eat food grown with pesticides, compared with 28% of citizens--a 40-percentage point gap.
  • A 42-percentage point gap over the issue of using animals in research--89% of scientists favor it, while only 47% of the public backs the idea.
The scientist/public perception gap is less pronounced over climate, energy and space issues.
  • 37-percentage point gap over whether humans are the primary cause of climate change--87% of AAAS scientists say it is, while 50% of the public does.
  • 33-percentage point gap on the question about whether humans have evolved over time--98% of scientists say we have, compared with 65% of the public.
  • By a 20-percentage point margin, citizens are more likely than scientists to favor offshore oil drilling.
  • By a 12-point margin, the public is more likely to say that astronauts are essential for the future of the U.S. space program.
The survey represents a sample of 2,002 adult citizens and 3,748 scientists, all of whom are members of the AAAS.

"As scientists size up the culture and their place in it," Pew said in a statement. "Scientists are notably less upbeat than they were five years ago and express serious concerns about public knowledge of science and the way scientific findings are covered by journalists."

The scientists believe that media hype is one possible reason for large gaps in opinion between their views and that of the public, particularly in the debate over GMOs. Seventy-nine percent of scientists said that the media doesn't distinguish between "well-founded" and "not well-founded" scientific research. Additionally, 52 percent agreed that the media oversimplifies the science.

Three years ago, the AAAS released an unequivocal statement on the safety of GM foods and why a consensus of its members opposes mandatory labeling:
There are several current efforts to require labeling of foods containing products derived from genetically modified crop plants, commonly known as GM crops or GMOs. These efforts are not driven by evidence that GM foods are actually dangerous. Indeed, the science is quite clear: crop improvement by the modern molecular techniques of biotechnology is safe. Rather, these initiatives are driven by a variety of factors, ranging from the persistent perception that such foods are somehow "unnatural" and potentially dangerous to the desire to gain competitive advantage by legislating attachment of a label meant to alarm. Another misconception used as a rationale for labeling is that GM crops are untested.
The AAAS also has addressed claims by anti-GMO advocacy groups, frequently echoed in the media and on activist websites, that GM foods are less tested or nutritionally deficient when compared to organic or other conventional foods.
... contrary to popular misconceptions, GM crops are the most extensively tested crops ever added to our food supply. There are occasional claims that feeding GM foods to animals causes aberrations ranging from digestive disorders, to sterility, tumors and premature death. Although such claims are often sensationalized and receive a great deal of media attention, none have stood up to rigorous scientific scrutiny. Indeed, a recent review of a dozen well-designed long-term animal feeding studies comparing GM and non-GM potatoes, soy, rice, corn and triticale found that the GM and their non-GM counterparts are nutritionally equivalent.
Looking further at the demographics of respondents, the survey finds that those with a college degree are split on GMO safety: 49% say it's generally safe while 47% say it's generally unsafe. Women are more wary than men: only 28% of women think eating GM foods is safe, compared to 47% of men.
Race also divides the issue, with blacks (24% say it's safe) and Hispanics (32%) being more cautious than whites (41%).

The demographics of respondents on pesticides are quite similar to the responses on GMOs. More men than women say foods with pesticides are safe. Those with more education are more likely to say food grown with pesticides is safe.

When it comes to GM labeling, exactly half of respondents said they "always" or "sometimes" check for a non-GMO label when they are shopping. Of course, those who check labels are disproportionately those who think genetically modified foods are unsafe to eat.

So why are citizens so out of step with scientists on GMO safety?

"One possible reason for the gap: when it comes to GM crops, two-thirds of the public say scientists do not have a clear understanding about the health effects," surmised the researchers.

Yet, oddly enough for a society that doesn't trust scientists on the GMO debate, science itself still holds an esteemed position in the minds of adults. Seventy-nine percent of respondents believe that science has contributed positively to society, with 62% saying it has been beneficial for the quality of food. However, the percentage of people who believe that science has contributed negatively to food is up 10 points from 2009: 34 percent of respondents say that science has had a negative effect on food.

The public also highly values government investment in science research: 71% support government-funded basic science research and 61% said government funding is essential for scientific progress.

Pew also asked scientists another question: how good is the general state of science today? Scientists were more negative this year than they were in 2009. Only 52% say that it is a good time for science today, while 74% said it was good in 2009.
Judging by the public perception of GMOs, at least, scientists' more sober assessment might make sense.

Who's to blame? Scientists (75%) say lack of STEM education in grades K-12 is the biggest culprit. The release of the next report is expected in mid-February.

How can scientists and the government bridge the disturbing literacy gap between the mainstream scientific community and a skeptical public? asks Alan Leshner, CEO of the AAAS, in an editorial accompanying the survey release.
Speaking up for the importance of science to society is our only hope, and scientists must not shy away from engaging with the public, even on the most polarizing science-based topics. Scientists need to speak clearly with journalists who provide a great vehicle for translating the nature and implications of their work. Scientists should also meet with members of the public and discuss what makes each side uncomfortable. In these situations, scientists must respond forthrightly to public concerns. In other words, there needs to be a conversation, not a lecture. The public's perceptions of scientists' expertise and trustworthiness are very important but they are not enough. Acceptance of scientific facts is not based solely on comprehension levels. It can be compromised anytime information confronts people's personal, religious, or political views, and whenever scientific facts provoke fear or make people feel that they have no control over a situation. The only recourse is to have genuine, respectful dialogues with people.
Jon Entine, executive director of the Genetic Literacy Project, is a Senior Fellow at the World Food Center Institute for Food and Agricultural Literacy, University of California-Davis. Follow @JonEntine on Twitter.

Electromagnetic field


From Wikipedia, the free encyclopedia

An electromagnetic field (also EMF or EM field) is a physical field produced by electrically charged objects. It affects the behavior of charged objects in the vicinity of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. It is one of the four fundamental forces of nature (the others are gravitation, weak interaction and strong interaction).

The field can be viewed as the combination of an electric field and a magnetic field. The electric field is produced by stationary charges, and the magnetic field by moving charges (currents); these two are often described as the sources of the field. The way in which charges and currents interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law.
From a classical perspective in the history of electromagnetism, the electromagnetic field can be regarded as a smooth, continuous field, propagated in a wavelike manner; whereas from the perspective of quantum field theory, the field is seen as quantized, being composed of individual particles.[citation needed]

Structure of the electromagnetic field

The electromagnetic field may be viewed in two distinct ways: a continuous structure or a discrete structure.

Continuous structure

Classically, electric and magnetic fields are thought of as being produced by smooth motions of charged objects. For example, oscillating charges produce electric and magnetic fields that may be viewed in a 'smooth', continuous, wavelike fashion. In this case, energy is viewed as being transferred continuously through the electromagnetic field between any two locations. For instance, the metal atoms in a radio transmitter appear to transfer energy continuously. This view is useful to a certain extent (radiation of low frequency), but problems are found at high frequencies (see ultraviolet catastrophe).

Discrete structure

The electromagnetic field may be thought of in a more 'coarse' way. Experiments reveal that in some circumstances electromagnetic energy transfer is better described as being carried in the form of packets called quanta (in this case, photons) with a fixed frequency. Planck's relation links the energy E of a photon to its frequency ν through the equation:[1]
E = h\nu
where h is Planck's constant, named in honor of Max Planck, and ν is the frequency of the photon. Although modern quantum optics tells us that there is also a semi-classical explanation of the photoelectric effect (the emission of electrons from metallic surfaces subjected to electromagnetic radiation), the photon was historically (although not strictly necessarily) used to explain certain observations. It is found that increasing the intensity of the incident radiation (so long as one remains in the linear regime) increases only the number of electrons ejected, and has almost no effect on the energy distribution of their ejection. Only the frequency of the radiation is relevant to the energy of the ejected electrons.
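A worked instance of the Planck relation, assuming standard values for the constants; the numbers below are for green light at 500 nm:

# Photon energy E = h * nu for 500 nm light.
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 500e-9  # metres

nu = c / wavelength  # frequency, ~6.0e14 Hz
E = h * nu           # energy per photon, J
print(E)             # ~3.97e-19 J
print(E / 1.602e-19) # ~2.48 eV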

This quantum picture of the electromagnetic field (which treats it as analogous to harmonic oscillators) has proved very successful, giving rise to quantum electrodynamics, a quantum field theory describing the interaction of electromagnetic radiation with charged matter. It also gives rise to quantum optics, which is different from quantum electrodynamics in that the matter itself is modelled using quantum mechanics rather than quantum field theory.

Dynamics of the electromagnetic field

In the past, electrically charged objects were thought to produce two different, unrelated types of field associated with their charge property. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field (as well as an electric field) is produced when the charge moves (creating an electric current) with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole: the electromagnetic field. Until 1831, electricity and magnetism had been viewed as unrelated phenomena. In 1831, Michael Faraday, one of the great thinkers of his time, made the seminal observation that time-varying magnetic fields could induce electric currents, and then, in 1864, James Clerk Maxwell published his famous paper on a dynamical theory of the electromagnetic field (see Maxwell 1864, page 499; also David J. Griffiths (1999), Introduction to Electrodynamics, 3rd ed., Prentice Hall, pp. 559-562).

Once this electromagnetic field has been produced from a given charge distribution, other charged objects in this field will experience a force (in a similar way that planets experience a force in the gravitational field of the Sun). If these other charges and currents are comparable in size to the sources producing the above electromagnetic field, then a new net electromagnetic field will be produced. Thus, the electromagnetic field may be viewed as a dynamic entity that causes other charges and currents to move, and which is also affected by them. These interactions are described by Maxwell's equations and the Lorentz force law. (This discussion ignores the radiation reaction force.)

Electromagnetic field as a feedback loop

The behavior of the electromagnetic field can be resolved into four different parts of a loop:
  • the electric and magnetic fields are generated by electric charges,
  • the electric and magnetic fields interact with each other,
  • the electric and magnetic fields produce forces on electric charges,
  • the electric charges move in space.
A common misunderstanding is that (a) the quanta of the fields act in the same manner as (b) the charged particles that generate the fields. In our everyday world, charged particles such as electrons move slowly through matter with a drift velocity of a fraction of a centimeter (or inch) per second, but fields propagate at the speed of light, approximately 300 thousand kilometers (or 186 thousand miles) a second. The speed difference between charged particles and field quanta is thus enormous, of order one to 10^11 or more. Maxwell's equations relate (a) the presence and movement of charged particles to (b) the generation of fields. Those fields can then exert forces on, and thereby move, other slowly moving charged particles. Charged particles can move at relativistic speeds nearing field propagation speeds, but, as Einstein showed[citation needed], this requires enormous field energies, which are not present in our everyday experiences with electricity, magnetism, matter, and time and space.

The feedback loop can be summarized in a list, including phenomena belonging to each part of the loop:
  • charged particles generate electric and magnetic fields
  • the fields interact with each other
    • a changing electric field acts like a current, generating a 'vortex' of magnetic field
    • Faraday induction: a changing magnetic field induces a (negative) 'vortex' of electric field
    • Lenz's law: negative feedback loop between electric and magnetic fields
  • fields act upon particles
    • Lorentz force: force due to electromagnetic field
      • electric force: same direction as electric field
      • magnetic force: perpendicular both to magnetic field and to velocity of charge
  • particles move
    • current is movement of particles
  • particles generate more electric and magnetic fields; cycle repeats

Mathematical description

There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).
If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.[2]
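As a minimal sketch of this field-as-function view (assuming SI units and the numpy library; the charge and position values below are arbitrary illustrative choices), here is the electrostatic field of a point charge at the origin:

import numpy as np

EPS0 = 8.854e-12  # permittivity of free space, F/m

def E_point_charge(q, r):
    """Electrostatic field (V/m) of charge q (C) at position vector r (m)."""
    r_mag = np.linalg.norm(r)
    return q * r / (4 * np.pi * EPS0 * r_mag**3)

# Field of a 1 nC charge, 10 cm away along x: about 899 V/m, pointing in +x.
print(E_point_charge(1e-9, np.array([0.1, 0.0, 0.0])))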

With the advent of special relativity, physical laws became susceptible to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.

The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} (Gauss's law)
\nabla \cdot \mathbf{B} = 0 (Gauss's law for magnetism)
\nabla \times \mathbf{E} = -\frac {\partial \mathbf{B}}{\partial t} (Faraday's law)
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0  \frac{\partial \mathbf{E}}{\partial t} (Ampère-Maxwell law)
where \rho is the charge density, which can (and often does) depend on time and position, \varepsilon_0 is the permittivity of free space, \mu_0 is the permeability of free space, and J is the current density vector, also a function of time and position. The units used above are the standard SI units. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.

The Lorentz force law governs the interaction of the electromagnetic field with charged matter.
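A small sketch of that law, F = q(E + v × B), assuming numpy; the field and velocity values are illustrative only:

import numpy as np

def lorentz_force(q, E, v, B):
    """Force (N) on charge q (C) moving at v (m/s) in fields E (V/m), B (T)."""
    return q * (E + np.cross(v, B))

# An electron moving along +x through a magnetic field along +z is
# deflected along +y (v x B points along -y; the negative charge flips it).
e = -1.602e-19
F = lorentz_force(e, E=np.zeros(3), v=np.array([1e6, 0, 0]), B=np.array([0, 0, 1e-3]))
print(F)  # ~[0, 1.6e-16, 0] N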
When a field crosses the boundary between two media, the properties of the field change according to the boundary conditions there, which are derived from Maxwell's equations. For the components of the fields at the boundary of two media:[3]
\mathbf{E}_{t1} = \mathbf{E}_{t2} (the tangential component of E is continuous)
\mathbf{H}_{t1} = \mathbf{H}_{t2} (tangential H is continuous across a current-free boundary)
\mathbf{D}_{n1} = \mathbf{D}_{n2} (normal D is continuous across a charge-free boundary)
\mathbf{B}_{n1} = \mathbf{B}_{n2} (the normal component of B is continuous)
The angle of refraction of an electric field between media is related to the permittivity (\varepsilon) of each medium:
\frac{\tan\theta_1}{\tan\theta_2} = \frac{\varepsilon_{r2}}{\varepsilon_{r1}}
The angle of refraction of a magnetic field between media is related to the permeability (\mu) of each medium:
\frac{\tan\theta_1}{\tan\theta_2} = \frac{\mu_{r2}}{\mu_{r1}}
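A numeric sketch of the magnetic-field relation, taking the formula and angle convention exactly as quoted above (the permeability values below are illustrative):

import math

def refracted_angle(theta1_deg, mu_r1, mu_r2):
    """Solve tan(theta1)/tan(theta2) = mu_r2/mu_r1 for theta2, in degrees."""
    tan_theta2 = math.tan(math.radians(theta1_deg)) * mu_r1 / mu_r2
    return math.degrees(math.atan(tan_theta2))

# Field lines crossing from air (mu_r ~ 1) into a material with mu_r = 1000
# bend sharply: 45 degrees becomes about 0.06 degrees.
print(refracted_angle(45.0, 1.0, 1000.0))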

Properties of the field

Reciprocal behavior of electric and magnetic fields

The two Maxwell equations, Faraday's Law and the Ampère-Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as 'a changing magnetic field creates an electric field'. This is the principle behind the electric generator.

Ampère's Law roughly states that 'a changing electric field creates a magnetic field'. Thus, this law can be applied to generate a magnetic field and run an electric motor.

Light as an electromagnetic disturbance

Maxwell's equations take the form of an electromagnetic wave in a volume of space not containing charges or currents (free space) – that is, where \rho and J are zero. Under these conditions, the electric and magnetic fields satisfy the electromagnetic wave equation:[4]
  \left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) \mathbf{E} = 0
  \left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) \mathbf{B} = 0
James Clerk Maxwell was the first to obtain this relationship, by his completion of Maxwell's equations with the addition of a displacement current term to Ampère's circuital law.
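A quick symbolic check, assuming the sympy library, that a one-dimensional plane wave satisfies the wave equation above when its frequency and wavenumber are related by ω = ck:

import sympy as sp

z, t, E0, k, c = sp.symbols("z t E0 k c", positive=True)
E = E0 * sp.cos(k * z - c * k * t)  # plane wave with omega = c*k

# Residual of the 1-D wave equation: d^2E/dz^2 - (1/c^2) d^2E/dt^2.
residual = sp.diff(E, z, 2) - sp.diff(E, t, 2) / c**2
print(sp.simplify(residual))  # 0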

Relation to and comparison with other physical fields

Since electromagnetism is one of the four fundamental forces of nature, it is useful to compare the electromagnetic field with the gravitational, strong and weak fields. The word 'force' is sometimes replaced by 'interaction' because modern particle physics models electromagnetism as an exchange of particles known as gauge bosons.

Electromagnetic and gravitational fields

Sources of electromagnetic fields consist of two types of charge: positive and negative. This contrasts with the sources of the gravitational field, which are masses. Masses are sometimes described as gravitational charges, the important feature of them being that there are only positive masses and no negative masses. Further, gravity differs from electromagnetism in that positive masses attract other positive masses, whereas like charges in electromagnetism repel each other.

The relative strengths and ranges of the four interactions and other information are tabulated below:
Theory             Interaction                    Mediator         Relative magnitude   Behavior         Range
Chromodynamics     Strong interaction             gluon            10^38                1                10^-15 m
Electrodynamics    Electromagnetic interaction    photon           10^36                1/r^2            infinite
Flavordynamics     Weak interaction               W and Z bosons   10^25                1/r^5 to 1/r^7   10^-16 m
Geometrodynamics   Gravitation                    graviton         10^0                 1/r^2            infinite
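As a rough numeric check of the "Relative magnitude" column, assuming standard values for the constants: the ratio of the electric to the gravitational attraction between two protons comes out near the 10^36 quoted for electrodynamics relative to gravitation:

k_e = 8.988e9      # Coulomb constant, N*m^2/C^2
G   = 6.674e-11    # gravitational constant, N*m^2/kg^2
e   = 1.602e-19    # elementary charge, C
m_p = 1.673e-27    # proton mass, kg

# Distance cancels, since both forces fall off as 1/r^2.
ratio = (k_e * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")  # ~1.24e36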

Applications

Static E and M fields and static EM fields

When an EM field (see electromagnetic tensor) is not varying in time, it may be seen as a purely electric field, a purely magnetic field, or a mixture of both. However, the general case of a static EM field with both electric and magnetic components present is the case that appears to most observers. Observers who see only an electric or magnetic field component of a static EM field have the other (electric or magnetic) component suppressed, due to the special case of the immobile state of the charges that produce the EM field in that case. In such cases the other component becomes manifest in other observer frames.
A consequence of this is that any case that seems to consist of a "pure" static electric or magnetic field can be converted to an EM field, with both E and M components present, by simply moving the observer into a frame of reference which is moving with regard to the frame in which only the "pure" electric or magnetic field appears. That is, a pure static electric field will show the familiar magnetic field associated with a current in any frame of reference where the charge moves. Likewise, any new motion of a charge in a region that seemed previously to contain only a magnetic field will show that the space now contains an electric field as well, which will be found to produce an additional Lorentz force upon the moving charge.

Thus, electrostatics, as well as magnetism and magnetostatics, are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely the observer's. The "applications" of all such non-time-varying (static) fields are discussed in the main articles linked in this section.

Time-varying EM fields in Maxwell’s equations

An EM field that varies in time has two “causes” in Maxwell’s equations. One is charges and currents (so-called “sources”), and the other cause for an E or M field is a change in the other type of field (this last cause also appears in “free space” very far from currents and charges).
An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR), since it radiates from the charges and currents in the source, has no "feedback" effect on them, and is also not affected directly by them in the present time (rather, it is indirectly produced by a sequence of changes in fields radiating out from them in the past). EMR consists of the radiations in the electromagnetic spectrum, including radio waves, microwaves, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.

A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.

A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of “close”) will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field.

Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.

Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as metal detectors and MRI scanner coils at higher frequencies. Sometimes these high-frequency magnetic fields change at radio frequencies without being far-field waves and thus radio waves; see RFID tags. See also near-field communication. Further uses of near-field EM effects commercially, may be found in the article on virtual photons, since at the quantum level, these fields are represented by these particles. Far-field effects (EMR) in the quantum picture of radiation, are represented by ordinary photons.

Health and safety

The potential health effects of the very low frequency EMFs surrounding power lines and electrical devices are the subject of on-going research and a significant amount of public debate. The US National Institute for Occupational Safety and Health (NIOSH) has issued some cautionary advisories but stresses that the data is currently too limited to draw good conclusions.[5]

The potential effects of electromagnetic fields on human health vary widely depending on the frequency and intensity of the fields. For more information on the health effects due to specific parts of the electromagnetic spectrum, see the articles on those parts of the spectrum.

Introduction to gauge theory


From Wikipedia, the free encyclopedia

A gauge theory is a type of theory in physics. Modern physical theories, such as the theory of electromagnetism, describe the nature of reality in terms of fields, e.g., the electromagnetic field, the gravitational field, and fields for the electron and all other elementary particles. A general feature of these field theories is that the fundamental fields cannot be directly measured; however, there are observable quantities that can be measured experimentally, such as charges, energies, and velocities. In field theories, different configurations of the unobservable fields can result in identical observable quantities. A transformation from one such field configuration to another is called a gauge transformation;[1][2] the lack of change in the measurable quantities, despite the field being transformed, is a property called gauge invariance. Since any kind of invariance under a field transformation is considered a symmetry, gauge invariance is sometimes called gauge symmetry.
Generally, any theory that has the property of gauge invariance is considered a gauge theory.

For example, in electromagnetism the electric and magnetic fields, E and B, are observable, while the potentials V ("voltage") and A (the vector potential) are not.[3] Under a gauge transformation in which a constant is added to V, no observable change occurs in E or B.
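
A minimal check of this statement (standard electrostatics, spelled out here for concreteness): in the static case the electric field is the negative gradient of the potential, so a constant offset drops out upon differentiation:

E = -\nabla V, \qquad V \rightarrow V + C \implies E \rightarrow -\nabla(V + C) = -\nabla V,

since the gradient of the constant C vanishes. B is built from the vector potential A alone, so it is likewise untouched.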

With the advent of quantum mechanics in the 1920s, and with successive advances in quantum field theory, the importance of gauge transformations has steadily grown. Gauge theories constrain the laws of physics, because all the changes induced by a gauge transformation have to cancel each other out when written in terms of observable quantities. Over the course of the 20th century, physicists gradually realized that all forces (fundamental interactions) arise from the constraints imposed by local gauge symmetries, in which case the transformations vary from point to point in space and time. Perturbative quantum field theory (usually employed for scattering theory) describes forces in terms of force-mediating particles called gauge bosons. The nature of these particles is determined by the nature of the gauge transformations. The culmination of these efforts is the Standard Model, a quantum field theory explaining all of the fundamental interactions except gravity.

History and importance

The earliest field theory having a gauge symmetry was Maxwell's formulation of electrodynamics in 1864. The importance of this symmetry remained unnoticed in the earliest formulations. Similarly unnoticed, Hilbert had derived Einstein's equations of general relativity by postulating a symmetry under any change of coordinates. Later Hermann Weyl, in an attempt to unify general relativity and electromagnetism, conjectured (incorrectly, as it turned out) that invariance under the change of scale or "gauge" (a term inspired by the various track gauges of railroads) might also be a local symmetry of general relativity. Although Weyl's choice of gauge was incorrect, the name "gauge" stuck to the approach. After the development of quantum mechanics, Weyl, Fock and London modified the gauge choice by replacing the scale factor with a change of wave phase, and applied it successfully to electromagnetism. Gauge symmetry was generalized mathematically in 1954 by Chen Ning Yang and Robert Mills in an attempt to describe the strong nuclear force. This idea, dubbed Yang–Mills theory, later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory.

The importance of gauge theories for physics stems from their tremendous success in providing a unified framework to describe the quantum-mechanical behavior of electromagnetism, the weak force and the strong force. This gauge theory, known as the Standard Model, accurately predicts the experimental results for three of the four fundamental forces of nature.

In classical physics

Electromagnetism

Historically, the first example of gauge symmetry to be discovered was classical electromagnetism. A static electric field can be described in terms of an electric potential (voltage) that is defined at every point in space, and in practical work it is conventional to take the Earth as a physical reference that defines the zero level of the potential, or ground. But only differences in potential are physically measurable, which is the reason that a voltmeter must have two probes, and can only report the voltage difference between them. Thus one could choose to define all voltage differences relative to some other standard, rather than the Earth, resulting in the addition of a constant offset.[4] If the potential V is a solution to Maxwell's equations then, after the gauge transformation V \rightarrow V + C, the new potential V + C is also a solution, and no experiment can distinguish between these two solutions. In other words, the laws of physics governing electricity and magnetism (that is, Maxwell's equations) are invariant under gauge transformation.[5] That is, Maxwell's equations have a gauge symmetry.
Generalizing from static electricity to electromagnetism, we have a second potential, the magnetic vector potential A, which can also undergo gauge transformations. These transformations may be local. That is, rather than adding a constant onto V, one can add a function that takes on different values at different points in space and time. If A is also changed in certain corresponding ways, then the same E and B fields result. The detailed mathematical relationship between the fields E and B and the potentials V and A is given in the article Gauge fixing, along with the precise statement of the nature of the gauge transformation. The relevant point here is that the fields remain the same under the gauge transformation, and therefore Maxwell's equations are still satisfied.
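
For reference, the standard relations (the article defers the details to Gauge fixing; this is the usual textbook form) are

E = -\nabla V - \partial A/\partial t, \qquad B = \nabla \times A,

and the local gauge transformation, for any sufficiently smooth function \chi(x, t), is

V \rightarrow V - \partial\chi/\partial t, \qquad A \rightarrow A + \nabla\chi.

B is unchanged because the curl of a gradient vanishes, and in E the two new terms cancel each other, so the same fields, and hence the same Maxwell's equations, result.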

Gauge symmetry is closely related to charge conservation. Suppose that there existed some process by which one could violate conservation of charge, at least temporarily, by creating a charge q at a certain point in space, 1, moving it to some other point 2, and then destroying it. We might imagine that this process was consistent with conservation of energy. We could posit a rule stating that creating the charge required an input of energy E1=qV1 and destroying it released E2=qV2, which would seem natural since qV measures the extra energy stored in the electric field because of the existence of a charge at a certain point. (There may also be energy associated, e.g., with the rest mass of the particle, but that is not relevant to the present argument.) Conservation of energy would be satisfied, because the net energy released by creation and destruction of the particle, qV2-qV1, would be equal to the work done in moving the particle from 1 to 2, qV2-qV1. But although this scenario salvages conservation of energy, it violates gauge symmetry. Gauge symmetry requires that the laws of physics be invariant under the transformation V \rightarrow V+C, which implies that no experiment should be able to measure the absolute potential, without reference to some external standard such as an electrical ground. But the proposed rules E1=qV1 and E2=qV2 for the energies of creation and destruction would allow an experimenter to determine the absolute potential, simply by checking how much energy input was required in order to create the charge q at a particular point in space. The conclusion is that if gauge symmetry holds, and energy is conserved, then charge must be conserved.[6]
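
The bookkeeping in this argument can be made explicit. Under the transformation V \rightarrow V + C, the proposed creation and destruction energies become

E_1 = qV_1 \rightarrow q(V_1 + C), \qquad E_2 = qV_2 \rightarrow q(V_2 + C),

so the difference q(V_2 - V_1), which equals the work of transport, is unaffected, but each energy individually shifts by qC. Measuring E_1 alone would therefore determine V_1 absolutely, which is exactly what gauge symmetry forbids.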

The Cartesian coordinate grid on this square has been distorted by a coordinate transformation, so that there is a nonlinear relationship between the old (x,y) coordinates and the new ones. Einstein's equations of general relativity are still valid in the new coordinate system. Such changes of coordinate system are the gauge transformations of general relativity.

General relativity

As discussed above, the gauge transformations for classical (i.e., non-quantum mechanical) general relativity are arbitrary coordinate transformations.[7] (Technically, the transformations must be invertible, and both the transformation and its inverse must be smooth, in the sense of being differentiable an arbitrary number of times.)

An example of a symmetry in a physical theory: translation invariance

Some global symmetries under changes of coordinate predate both general relativity and the concept of a gauge. For example, translation invariance was introduced in the era of Galileo, who eliminated the Aristotelian concept that various places in space, such as the earth and the heavens, obeyed different physical rules.

Suppose, for example, that one observer examines the properties of a hydrogen atom on Earth while another observer does so on the Moon (or at any other place in the universe); both will find that their hydrogen atoms exhibit completely identical properties. Again, if one observer examined a hydrogen atom today and another examined one 100 years ago (or at any other time in the past or in the future), the two experiments would again produce completely identical results. The invariance of the properties of a hydrogen atom with respect to the time and place where those properties are investigated is called translation invariance.

Recalling our two observers from different ages: the time in their experiments is shifted by 100 years. If the time when the older observer did the experiment was t, the time of the modern experiment is t+100 years. Both observers discover the same laws of physics. Because light from hydrogen atoms in distant galaxies may reach the earth after having traveled across space for billions of years, in effect one can do such observations covering periods of time almost all the way back to the Big Bang, and they show that the laws of physics have always been the same.

In other words, if in the theory we change the time t to t+100 years (or indeed any other time shift) the theoretical predictions do not change.[8]

Another example of a symmetry: the invariance of Einstein's field equation under arbitrary coordinate transformations

In Einstein's general relativity, coordinates like x, y, z, and t are not only "relative" in the global sense of translations like t \rightarrow t+C, rotations, etc., but become completely arbitrary, so that for example one can define an entirely new timelike coordinate according to some arbitrary rule such as t \rightarrow t+t^3/t_0^2, where t_0 has units of time, and yet Einstein's equations will have the same form.[7][9]

Invariance of the form of an equation under an arbitrary coordinate transformation is customarily referred to as general covariance, and equations with this property are said to be written in covariant form. General covariance is a special case of gauge invariance.

Maxwell's equations can also be expressed in a generally covariant form, which is as invariant under general coordinate transformation as Einstein's field equation.

In quantum mechanics

Quantum electrodynamics

Until the advent of quantum mechanics, the only well known example of gauge symmetry was in electromagnetism, and the general significance of the concept was not fully understood. For example, it was not clear whether it was the fields E and B or the potentials V and A that were the fundamental quantities; if the former, then the gauge transformations could be considered as nothing more than a mathematical trick.

Aharonov–Bohm experiment


Double-slit diffraction and interference pattern

In quantum mechanics a particle, such as an electron, is also described as a wave. For example, if the double-slit experiment is performed with electrons, then a wave-like interference pattern is observed. The electron has the highest probability of being detected at locations where the parts of the wave passing through the two slits are in phase with one another, resulting in constructive interference. The frequency of the electron wave is related to the kinetic energy of an individual electron particle via the quantum-mechanical relation E = hf. If there are no electric or magnetic fields present in this experiment, then the electron's energy is constant, and, for example, there will be a high probability of detecting the electron along the central axis of the experiment, where by symmetry the two parts of the wave are in phase.

But now suppose that the electrons in the experiment are subject to electric or magnetic fields. For example, if an electric field was imposed on one side of the axis but not on the other, the results of the experiment would be affected. The part of the electron wave passing through that side oscillates at a different rate, since its energy has had −eV added to it, where −e is the charge of the electron and V the electrical potential. The results of the experiment will be different, because phase relationships between the two parts of the electron wave have changed, and therefore the locations of constructive and destructive interference will be shifted to one side or the other. It is the electric potential that occurs here, not the electric field, and this is a manifestation of the fact that it is the potentials and not the fields that are of fundamental significance in quantum mechanics.
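
In quantum-mechanical terms, the change in oscillation rate can be quantified (a standard relation, stated here for concreteness): shifting the electron's energy by −eV shifts its frequency by \Delta f = -eV/h, so over a time t the accumulated phase difference between the two paths is

\Delta\phi = 2\pi\,\Delta f\,t = -eVt/\hbar,

and it is this phase difference that displaces the interference fringes.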

Schematic of double-slit experiment in which Aharonov–Bohm effect can be observed: electrons pass through two slits, interfering at an observation screen, with the interference pattern shifted when a magnetic field B is turned on in the cylindrical solenoid, marked in blue on the diagram.

Explanation with potentials

It is even possible to have cases in which an experiment's results differ when the potentials are changed, even if no charged particle is ever exposed to a different field. One such example is the Aharonov–Bohm effect, shown in the figure.[10] In this example, turning on the solenoid only causes a magnetic field B to exist within the solenoid. But the solenoid has been positioned so that the electron cannot possibly pass through its interior. If one believed that the fields were the fundamental quantities, then one would expect that the results of the experiment would be unchanged. In reality, the results are different, because turning on the solenoid changed the vector potential A in the region that the electrons do pass through. Now that it has been established that it is the potentials V and A that are fundamental, and not the fields E and B, we can see that the gauge transformations, which change V and A, have real physical significance, rather than being merely mathematical artifacts.
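
The size of the shift is given by the standard Aharonov–Bohm formula (quoted here from the usual treatment) and depends only on the magnetic flux \Phi_B enclosed between the two paths:

\Delta\phi = \frac{q}{\hbar}\oint A \cdot dl = \frac{q\,\Phi_B}{\hbar},

with q = −e for the electron. Notably, although A itself is gauge-dependent, the enclosed flux, and hence \Delta\phi, is not, so the observable fringe shift is the same in every gauge.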

Gauge invariance: the results of the experiments are independent of the choice of the gauge for the potentials

Note that in these experiments, the only quantity that affects the result is the difference in phase between the two parts of the electron wave. Suppose we imagine the two parts of the electron wave as tiny clocks, each with a single hand that sweeps around in a circle, keeping track of its own phase. Although this cartoon ignores some technical details, it retains the physical phenomena that are important here.[11] If both clocks are sped up by the same amount, the phase relationship between them is unchanged, and the results of experiments are the same. Not only that, but it is not even necessary to change the speed of each clock by a fixed amount. We could change the angle of the hand on each clock by a varying amount θ, where θ could depend on both the position in space and on time. This would have no effect on the result of the experiment, since the final observation of the location of the electron occurs at a single place and time, so that the phase shift in each electron's "clock" would be the same, and the two effects would cancel out. This is another example of a gauge transformation: it is local, and it does not change the results of experiments.
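
In symbols, the "clock" picture corresponds to multiplying the electron wave by a position- and time-dependent phase factor:

\psi(x, t) \rightarrow e^{i\theta(x, t)}\,\psi(x, t).

The detection probability |\psi|^2 at any single point is untouched, and because both interfering parts of the wave pick up the same factor e^{i\theta} at the point and time of detection, the fringe pattern is unchanged as well.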

Summary

In summary, gauge symmetry attains its full importance in the context of quantum mechanics. In the application of quantum mechanics to electromagnetism, i.e., quantum electrodynamics, gauge symmetry applies to both electromagnetic waves and electron waves. These two gauge symmetries are in fact intimately related. If a gauge transformation θ is applied to the electron waves, for example, then one must also apply a corresponding transformation to the potentials that describe the electromagnetic waves.[12] Gauge symmetry is required in order to make quantum electrodynamics a renormalizable theory, i.e., one in which the calculated predictions of all physically measurable quantities are finite.

Types of gauge symmetries

The description of the electrons in the subsection above as little clocks is in effect a statement of the mathematical rules according to which the phases of electrons are to be added and subtracted: they are to be treated as ordinary numbers, except that in the case where the result of the calculation falls outside the range of 0≤θ<360°, we force it to "wrap around" into the allowed range, which covers a circle. Another way of putting this is that a phase angle of, say, 5° is considered to be completely equivalent to an angle of 365°. Experiments have verified this testable statement about the interference patterns formed by electron waves. Except for the "wrap-around" property, the algebraic properties of this mathematical structure are exactly the same as those of the ordinary real numbers.
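
As an illustration of this "wrap-around" arithmetic (a toy sketch; the function name is ours, not taken from any library):

def add_phase(theta, phi):
    """Add two phase angles in degrees, wrapping the result into [0, 360)."""
    return (theta + phi) % 360.0

assert add_phase(5.0, 360.0) == 5.0                     # 365 degrees is equivalent to 5 degrees
assert add_phase(350.0, 20.0) == 10.0                   # wraps around past 360
assert add_phase(30.0, 40.0) == add_phase(40.0, 30.0)   # addition commutes (Abelian)

Apart from the wrap-around, this is ordinary addition, which is the point of the analogy with the real numbers.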

In mathematical terminology, electron phases form an Abelian group under addition, called the circle group or U(1). "Abelian" means that addition commutes, so that θ + φ = φ + θ. "Group" means that addition is associative and has an identity element, namely 0. Also, for every phase there exists an inverse such that the sum of a phase and its inverse is 0. Other examples of Abelian groups are the integers under addition (with identity 0 and inverses given by negation) and the nonzero rational numbers under multiplication (with identity 1 and inverses given by the reciprocal).

Gauge fixing of a twisted cylinder.

As a way of visualizing the choice of a gauge, consider whether it is possible to tell if a cylinder has been twisted. If the cylinder has no bumps, marks, or scratches on it, we cannot tell. We could, however, draw an arbitrary curve along the cylinder, defined by some function θ(x), where x measures distance along the axis of the cylinder. Once this arbitrary choice (the choice of gauge) has been made, it becomes possible to detect it if someone later twists the cylinder.

In 1954, Chen Ning Yang and Robert Mills proposed to generalize these ideas to noncommutative groups. A noncommutative gauge group can describe a field that, unlike the electromagnetic field, interacts with itself. For example, general relativity states that gravitational fields have energy, and special relativity concludes that energy is equivalent to mass. Hence a gravitational field induces a further gravitational field. The nuclear forces also have this self-interacting property.

Gauge bosons

Surprisingly, gauge symmetry can give a deeper explanation for the existence of interactions, such as the electrical and nuclear interactions. This arises from a type of gauge symmetry relating to the fact that all particles of a given type are experimentally indistinguishable from one another. Imagine that Alice and Betty are identical twins, labeled at birth by bracelets reading A and B. Because the girls are identical, nobody would be able to tell if they had been switched at birth; the labels A and B are arbitrary, and can be interchanged. Such a permanent interchanging of their identities is like a global gauge symmetry. There is also a corresponding local gauge symmetry, which describes the fact that from one moment to the next, Alice and Betty could swap roles while nobody was looking, and nobody would be able to tell. If we observe that Mom's favorite vase is broken, we can only infer that the blame belongs to one twin or the other, but we cannot tell whether the blame is 100% Alice's and 0% Betty's, or vice versa. If Alice and Betty are in fact quantum-mechanical particles rather than people, then they also have wave properties, including the property of superposition, which allows waves to be added, subtracted, and mixed arbitrarily. It follows that we are not even restricted to complete swaps of identity. For example, if we observe that a certain amount of energy exists in a certain location in space, there is no experiment that can tell us whether that energy is 100% A's and 0% B's, 0% A's and 100% B's, or 20% A's and 80% B's, or some other mixture. The fact that the symmetry is local means that we cannot even count on these proportions to remain fixed as the particles propagate through space. The details of how this is represented mathematically depend on technical issues relating to the spins of the particles, but for our present purposes we consider a spinless particle, for which it turns out that the mixing can be specified by some arbitrary choice of gauge θ(x), where an angle θ = 0° represents 100% A and 0% B, θ = 90° means 0% A and 100% B, and intermediate angles represent mixtures, as sketched below.
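
For a spinless particle, this mixing can be written explicitly (a standard parametrization; the notation is ours) as a superposition

\psi = \cos\theta\,\psi_A + \sin\theta\,\psi_B,

so that θ = 0° gives 100% A, θ = 90° gives 100% B, and intermediate angles give proportions \cos^2\theta : \sin^2\theta. A local gauge symmetry means that θ may vary from point to point, θ = θ(x).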

According to the principles of quantum mechanics, particles do not actually have trajectories through space. Motion can only be described in terms of waves, and the momentum p of an individual particle is related to its wavelength λ by p = h/λ. In terms of empirical measurements, the wavelength can only be determined by observing a change in the wave between one point in space and another nearby point (mathematically, by differentiation). A wave with a shorter wavelength oscillates more rapidly, and therefore changes more rapidly between nearby points. Now suppose that we arbitrarily fix a gauge at one point in space, by saying that the energy at that location is 20% A's and 80% B's. We then measure the two waves at some other, nearby point, in order to determine their wavelengths. But there are two entirely different reasons that the waves could have changed. They could have changed because they were oscillating with a certain wavelength, or they could have changed because the gauge function changed from a 20-80 mixture to, say, 21-79. If we ignore the second possibility, the resulting theory doesn't work; strange discrepancies in momentum will show up, violating the principle of conservation of momentum. Something in the theory must be changed.

Again there are technical issues relating to spin, but in several important cases, including electrically charged particles and particles interacting via nuclear forces, the solution to the problem is to impute physical reality to the gauge function θ(x). We say that if the function θ oscillates, it represents a new type of quantum-mechanical wave, and this new wave has its own momentum p = h/λ, which turns out to patch up the discrepancies that otherwise would have broken conservation of momentum. In the context of electromagnetism, the particles A and B would be charged particles such as electrons, and the quantum mechanical wave represented by θ would be the electromagnetic field. (Here we ignore the technical issues raised by the fact that electrons actually have spin 1/2, not spin zero. This oversimplification is the reason that the gauge field θ comes out to be a scalar, whereas the electromagnetic field is actually represented by a vector consisting of V and A.) The result is that we have an explanation for the presence of electromagnetic interactions: if we try to construct a gauge-symmetric theory of identical, non-interacting particles, the result is not self-consistent, and can only be repaired by adding electrical and magnetic fields that cause the particles to interact.
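
In the usual U(1) language (standard textbook form, more explicit than the prose above), the repair works as follows. Differentiating a locally rephased wave produces an unwanted extra term,

\psi \rightarrow e^{i\theta(x)}\psi \implies \partial_x\psi \rightarrow e^{i\theta}\left(\partial_x\psi + i(\partial_x\theta)\,\psi\right),

so one introduces a gauge field A_x that transforms as A_x \rightarrow A_x + \partial_x\theta and replaces \partial_x by the covariant derivative D_x = \partial_x - iA_x. Then D_x\psi transforms exactly like \psi itself, the extra term cancels, and the momentum read off from derivatives of the wave (p = h/\lambda) is conserved again.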

Although the function θ(x) describes a wave, the laws of quantum mechanics require that it also have particle properties. In the case of electromagnetism, the particle corresponding to electromagnetic waves is the photon. In general, such particles are called gauge bosons, where the term "boson" refers to a particle with integer spin. In the simplest versions of the theory, gauge bosons are massless, but it is also possible to construct versions in which they have mass, as is the case for the gauge bosons that transmit the weak force responsible for nuclear decay.
