
Wednesday, February 4, 2015

Organic chemistry


From Wikipedia, the free encyclopedia


Methane, CH4, in line-angle representation, showing four carbon-hydrogen single (σ) bonds in black, and the 3D shape of such tetrahedral molecules, with ~109° interior bond angles, in green. Methane is the simplest organic chemical and simplest hydrocarbon, and molecules can be built up conceptually from it by exchanging up to all 4 hydrogens with carbon or other atoms.

Organic chemistry is a chemistry subdiscipline involving the scientific study of the structure, properties, and reactions of organic compounds and organic materials, i.e., matter in its various forms that contain carbon atoms.[1][2] Study of structure includes using spectroscopy (e.g., NMR), mass spectrometry, and other physical and chemical methods to determine the chemical composition and constitution of organic compounds and materials. Study of properties includes both physical properties and chemical properties, and uses similar methods as well as methods to evaluate chemical reactivity, with the aim to understand the behavior of the organic matter in its pure form (when possible), but also in solutions, mixtures, and fabricated forms. The study of organic reactions includes probing their scope through use in preparation of target compounds (e.g., natural products, drugs, polymers, etc.) by chemical synthesis, as well as the focused study of the reactivities of individual organic molecules, both in the laboratory and via theoretical (in silico) study.

The range of chemicals studied in organic chemistry includes hydrocarbons (compounds containing only carbon and hydrogen) as well as myriad compositions based always on carbon but also containing other elements,[1][3][4] especially oxygen, nitrogen, sulfur, phosphorus, and the halogens. In the modern era, the range extends further into the periodic table, with main group elements such as boron and silicon. In addition, much modern research focuses on organic chemistry involving further organometallics, including the lanthanides, but especially the transition metals (e.g., zinc, copper, palladium, nickel, cobalt, titanium, and chromium).
Three representations of an organic compound, 5α-dihydroprogesterone (5α-DHP), a steroid hormone: a line-angle representation, a ball-and-stick representation, and a space-filling representation. For models showing color, the carbon atoms are in black, hydrogens in gray, and oxygens in red. In the line-angle representation, carbon atoms are implied at every terminus of a line and vertex of multiple lines, and hydrogen atoms are implied to fill the remaining needed valences (up to four).

Finally, organic compounds form the basis of all earthly life and constitute a significant part of human endeavors in chemistry. The bonding patterns open to carbon, with its valence of four—formal single, double, and triple bonds, as well as various structures with delocalized electrons—make the array of organic compounds structurally diverse, and their range of applications enormous. They either form the basis of, or are important constituents of, many commercial products including pharmaceuticals; petrochemicals and products made from them (including lubricants, solvents, etc.); plastics; fuels and explosives; etc. As indicated, the study of organic chemistry overlaps with organometallic chemistry and biochemistry, but also with medicinal chemistry, polymer chemistry, as well as many aspects of materials science.[1]

Periodic table of elements of interest in organic chemistry. The table illustrates all elements of current interest in modern organic and organometallic chemistry, indicating main group elements in orange, and transition metals and lanthanides (Lan) in grey.

History

Main article: History of chemistry

Before the nineteenth century, chemists generally believed that compounds obtained from living organisms were endowed with a vital force that distinguished them from inorganic compounds. According to the concept of vitalism (vital force theory), organic matter was endowed with a "vital force".[5] During the first half of the nineteenth century, some of the first systematic studies of organic compounds were reported. Around 1816 Michel Chevreul started a study of soaps made from various fats and alkalis. He separated the different acids that, in combination with the alkali, produced the soap. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats (which traditionally come from organic sources), producing new compounds, without "vital force". In 1828 Friedrich Wöhler produced the organic chemical urea (carbamide), a constituent of urine, from the inorganic ammonium cyanate NH4CNO, in what is now called the Wöhler synthesis. Although Wöhler was always cautious about claiming that he had disproved the theory of vital force, this event has often been thought of as a turning point.[5]

In 1856 William Henry Perkin, while trying to manufacture quinine, accidentally manufactured the organic dye now known as Perkin's mauve. Through its great financial success, this discovery greatly increased interest in organic chemistry.[6]

The crucial breakthrough for organic chemistry was the concept of chemical structure, developed independently and simultaneously by Friedrich August Kekulé and Archibald Scott Couper in 1858.[7] Both men suggested that tetravalent carbon atoms could link to each other to form a carbon lattice, and that the detailed patterns of atomic bonding could be discerned by skillful interpretations of appropriate chemical reactions.

The pharmaceutical industry began in the last decade of the 19th century when the manufacturing of acetylsalicylic acid (more commonly referred to as aspirin) in Germany was started by Bayer.[8] The first time a drug was systematically improved was with arsphenamine (Salvarsan): numerous derivatives of the dangerously toxic atoxyl were examined by Paul Ehrlich and his group, and the compound with the best balance of effectiveness and toxicity was selected for production.[citation needed]

An example of an organometallic molecule, a catalyst called Grubbs' catalyst, as a ball-and-stick model based on an X-ray crystal structure.[9] The formula of the catalyst is often given as RuCl2(PCy3)2(=CHPh), where the ruthenium metal atom, Ru, is at the very center in turquoise, carbons are in black, hydrogens in gray-white, chlorine in green, and phosphorus in orange. The metal ligand at the bottom is a tricyclohexylphosphine, abbreviated PCy3, and another of these appears at the top of the image (where its rings are obscuring one another). The group projecting out to the right has a metal-carbon double bond and is known as an alkylidene. Robert Grubbs shared the 2005 Nobel Prize in Chemistry with Richard R. Schrock and Yves Chauvin for their work on the reactions such catalysts mediate, called olefin metathesis.

Early examples of organic reactions and applications were often serendipitous. The latter half of the 19th century, however, witnessed systematic studies of organic compounds. Illustrative is the development of synthetic indigo: the production of indigo from plant sources dropped from 19,000 tons in 1897 to 1,000 tons by 1914 thanks to the synthetic methods developed by Adolf von Baeyer. In 2002, 17,000 tons of synthetic indigo were produced from petrochemicals.[10]

In the early part of the 20th century, polymers and enzymes were shown to be large organic molecules, and petroleum was shown to be of biological origin.

The multistep synthesis of complex organic compounds is called total synthesis. Early total syntheses of natural compounds progressed to targets such as glucose and terpineol, and cholesterol-related compounds opened the way to synthesizing complex human hormones and their modified derivatives. Since the start of the 20th century, the complexity of total syntheses has increased to include molecules as intricate as lysergic acid and vitamin B12.[11]

The total synthesis of vitamin B12 marked a major achievement in organic chemistry.

The development of organic chemistry benefited from the discovery of petroleum and the development of the petrochemical industry. The conversion of individual compounds obtained from petroleum into different compound types by various chemical processes led to the birth of the petrochemical industry, which successfully manufactured artificial rubbers, various organic adhesives, property-modifying petroleum additives, and plastics.

The majority of chemical compounds occurring in biological organisms are in fact carbon compounds, so the association between organic chemistry and biochemistry is so close that biochemistry might be regarded as, in essence, a branch of organic chemistry. Although the history of biochemistry might be taken to span some four centuries, fundamental understanding of the field only began to develop in the late 19th century, and the term biochemistry itself was coined around the start of the 20th century. Research in the field increased throughout the twentieth century, without any indication of slackening in the rate of increase, as may be verified by inspection of abstraction and indexing services such as BIOSIS Previews and Biological Abstracts, which began in the 1920s as a single annual volume but grew so drastically that by the end of the 20th century it was available to the everyday user only as an online electronic database.[12]

Characterization

Since organic compounds often exist as mixtures, a variety of techniques have also been developed to assess purity, especially important being chromatography techniques such as HPLC and gas chromatography. Traditional methods of separation include distillation, crystallization, and solvent extraction.

Organic compounds were traditionally characterized by a variety of chemical tests, called "wet methods", but such tests have been largely displaced by spectroscopic or other computer-intensive methods of analysis.[13] Listed in approximate order of utility, the chief analytical methods are:
  • Nuclear magnetic resonance (NMR) spectroscopy is the most commonly used technique, often permitting complete assignment of atom connectivity and even stereochemistry using correlation spectroscopy. The principal constituent atoms of organic chemistry - hydrogen and carbon - exist naturally with NMR-responsive isotopes, 1H and 13C respectively.
  • Elemental analysis: A destructive method used to determine the elemental composition of a molecule. See also mass spectrometry, below.
  • Mass spectrometry indicates the molecular weight of a compound and, from the fragmentation patterns, its structure. High resolution mass spectrometry can usually identify the exact formula of a compound and is used in lieu of elemental analysis. In former times, mass spectrometry was restricted to neutral molecules exhibiting some volatility, but advanced ionization techniques allow one to obtain the "mass spec" of virtually any organic compound.
  • Crystallography is an unambiguous method for determining molecular geometry, the proviso being that single crystals of the material must be available and the crystal must be representative of the sample. Highly automated software allows a structure to be determined within hours of obtaining a suitable crystal.
Traditional spectroscopic methods such as infrared spectroscopy, optical rotation, and UV/VIS spectroscopy provide relatively nonspecific structural information but remain in use for specific classes of compounds.

Properties

Physical properties of organic compounds typically of interest include both quantitative and qualitative features. Quantitative information includes melting point, boiling point, and index of refraction. Qualitative properties include odor, consistency, solubility, and color.

Melting and boiling properties

Organic compounds typically melt and many boil. In contrast, while inorganic materials generally can be melted, many do not boil, tending instead to degrade. In earlier times, the melting point (m.p.) and boiling point (b.p.) provided crucial information on the purity and identity of organic compounds. The melting and boiling points correlate with the polarity of the molecules and their molecular weight. Some organic compounds, especially symmetrical ones, sublime; that is, they evaporate without melting. A well-known example of a sublimable organic compound is para-dichlorobenzene, the odoriferous constituent of modern mothballs. Organic compounds are usually not very stable at temperatures above 300 °C, although some exceptions exist.

Solubility

Neutral organic compounds tend to be hydrophobic; that is, they are less soluble in water than in organic solvents. Exceptions include organic compounds that contain ionizable groups (which can be converted into ions), as well as low-molecular-weight alcohols, amines, and carboxylic acids, where hydrogen bonding occurs. Organic compounds tend to dissolve in organic solvents. Solvents can be either pure substances, like ether or ethyl alcohol, or mixtures, such as the paraffinic solvents (the various petroleum ethers and white spirits) or the range of pure or mixed aromatic solvents obtained from petroleum or tar fractions by physical separation or by chemical conversion. Solubility in the different solvents depends upon the solvent type and on the functional groups present.

Solid state properties

Various specialized properties of molecular crystals and organic polymers with conjugated systems are of interest depending on applications, e.g. thermo-mechanical and electro-mechanical such as piezoelectricity, electrical conductivity (see conductive polymers and organic semiconductors), and electro-optical (e.g. non-linear optics) properties. For historical reasons, such properties are mainly the subjects of the areas of polymer science and materials science.

Nomenclature

Various names and depictions for one organic compound.

The names of organic compounds are either systematic, following logically from a set of rules, or nonsystematic, following various traditions. Systematic nomenclature is stipulated by specifications from IUPAC. Systematic nomenclature starts with the name for a parent structure within the molecule of interest. This parent name is then modified by prefixes, suffixes, and numbers to unambiguously convey the structure. Given that millions of organic compounds are known, rigorous use of systematic names can be cumbersome. Thus, IUPAC recommendations are more closely followed for simple compounds, but not complex molecules. To use the systematic naming, one must know the structures and names of the parent structures. Parent structures include unsubstituted hydrocarbons, heterocycles, and monofunctionalized derivatives thereof.

Nonsystematic nomenclature is simpler and unambiguous, at least to organic chemists. Nonsystematic names do not indicate the structure of the compound. They are common for complex molecules, which includes most natural products. Thus, the informally named lysergic acid diethylamide is systematically named (6aR,9R)-N,N-diethyl-7-methyl-4,6,6a,7,8,9-hexahydroindolo[4,3-fg]quinoline-9-carboxamide.

With the increased use of computing, other naming methods have evolved that are intended to be interpreted by machines. Two popular formats are SMILES and InChI.
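To make the machine-readable idea concrete, here is a minimal sketch, deliberately not tied to any real cheminformatics toolkit: it tallies heavy (non-hydrogen) atoms in simple, organic-subset SMILES strings. The element list and the example strings are assumptions for illustration; real SMILES parsing (rings, aromatic lowercase atoms, bracket atoms) is considerably more involved.

```python
import re

# Minimal sketch: tally heavy atoms in simple, aliphatic organic-subset SMILES.
# Two-letter symbols (Cl, Br) must be tried before the one-letter ones.
ELEMENT = re.compile(r"Cl|Br|B|C|N|O|P|S|F|I")

def heavy_atoms(smiles: str) -> dict:
    """Count non-hydrogen atoms; bond symbols and branch parentheses are ignored."""
    counts = {}
    for symbol in ELEMENT.findall(smiles):
        counts[symbol] = counts.get(symbol, 0) + 1
    return counts

print(heavy_atoms("CCO"))      # ethanol → {'C': 2, 'O': 1}
print(heavy_atoms("CC(=O)O"))  # acetic acid → {'C': 2, 'O': 2}
```

Hydrogens are implicit in such strings, which is exactly what makes SMILES compact enough for databases and machine exchange.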

Structural drawings

Organic molecules are described more commonly by drawings or structural formulas, combinations of drawings and chemical symbols. The line-angle formula is simple and unambiguous. In this system, the endpoints and intersections of each line represent one carbon, and hydrogen atoms can either be notated explicitly or assumed to be present as implied by tetravalent carbon. The depiction of organic compounds with drawings is greatly simplified by the fact that carbon in almost all organic compounds has four bonds, nitrogen three, oxygen two, and hydrogen one.
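The implied-hydrogen convention can be made concrete: given each atom's typical valence (carbon four, nitrogen three, oxygen two) and the number of bonds actually drawn, the hydrogens fill the remainder. A minimal sketch, in which the valence table and the ethanol example (drawn as C-C-O) are the assumptions:

```python
# Typical valences assumed for neutral atoms joined by single bonds.
VALENCE = {"C": 4, "N": 3, "O": 2}

def implicit_hydrogens(element: str, bonds_drawn: int) -> int:
    """Hydrogens implied by a line-angle drawing: valence minus explicit bonds."""
    return VALENCE[element] - bonds_drawn

# Ethanol drawn as C-C-O: the terminal C has 1 drawn bond,
# the middle C has 2, and the O has 1.
atoms = [("C", 1), ("C", 2), ("O", 1)]
h_counts = [implicit_hydrogens(e, b) for e, b in atoms]
print(h_counts)  # [3, 2, 1], i.e. CH3-CH2-OH, molecular formula C2H6O
```

Double and triple bonds simply count as two or three drawn bonds, which is why a line-angle drawing needs no explicit hydrogens at all.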

Classification of organic compounds

Functional groups

The family of carboxylic acids contains a carboxyl (-COOH) functional group. Acetic acid, shown here, is an example.

The concept of functional groups is central in organic chemistry, both as a means to classify structures and for predicting properties. A functional group is a molecular module, and the reactivity of that functional group is assumed, within limits, to be the same in a variety of molecules. Functional groups can have decisive influence on the chemical and physical properties of organic compounds. Molecules are classified on the basis of their functional groups. Alcohols, for example, all have the subunit C-O-H. All alcohols tend to be somewhat hydrophilic, usually form esters, and usually can be converted to the corresponding halides. Most functional groups feature heteroatoms (atoms other than C and H). Organic compounds are classified according to their functional groups: alcohols, carboxylic acids, amines, and so on.

Aliphatic compounds

The aliphatic hydrocarbons are subdivided into three groups of homologous series according to their state of saturation:
  • paraffins, which are alkanes without any double or triple bonds,
  • olefins or alkenes, which contain one or more double bonds (called di-olefins or dienes, and poly-olefins, when more than one is present), and
  • alkynes, which have one or more triple bonds.
The rest of the group is classed according to the functional groups present. Such compounds can be "straight-chain", branched-chain or cyclic. The degree of branching affects characteristics, such as the octane number or cetane number in petroleum chemistry.

Both saturated (alicyclic) compounds and unsaturated compounds exist as cyclic derivatives. The most stable rings contain five or six carbon atoms, but large rings (macrocycles) and smaller rings are common. The smallest cycloalkane family is the three-membered cyclopropane ((CH2)3). Saturated cyclic compounds contain single bonds only, whereas aromatic rings have alternating (conjugated) double bonds. Cycloalkanes do not contain multiple bonds, whereas the cycloalkenes and the cycloalkynes do.

Aromatic compounds


Benzene is one of the best-known aromatic compounds as it is one of the simplest and most stable aromatics.

Aromatic hydrocarbons contain conjugated double bonds. This means that every carbon atom in the ring is sp2 hybridized, allowing for added stability. The most important example is benzene, the structure of which was formulated by Kekulé, who first proposed the delocalization or resonance principle to explain its structure. For "conventional" cyclic compounds, aromaticity is conferred by the presence of 4n + 2 delocalized pi electrons, where n is an integer. Particular instability (antiaromaticity) is conferred by the presence of 4n conjugated pi electrons.
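The 4n + 2 counting rule reduces to trivial arithmetic, which a short sketch can express directly (the function name is an assumption; the rule as coded applies only to planar, fully conjugated monocycles):

```python
def pi_electron_character(pi_electrons: int) -> str:
    """Hückel count for a planar, fully conjugated monocycle:
    4n + 2 pi electrons → aromatic; 4n pi electrons → antiaromatic."""
    if pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0:
        return "aromatic"
    if pi_electrons >= 4 and pi_electrons % 4 == 0:
        return "antiaromatic"
    return "nonaromatic"

print(pi_electron_character(6))   # benzene (n = 1) → aromatic
print(pi_electron_character(4))   # cyclobutadiene (n = 1) → antiaromatic
print(pi_electron_character(10))  # naphthalene-like count (n = 2) → aromatic
```

Odd electron counts fall through to "nonaromatic", which matches the fact that the rule is stated only for even numbers of paired pi electrons.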

Heterocyclic compounds

The characteristics of the cyclic hydrocarbons are again altered if heteroatoms are present, which can exist as either substituents attached externally to the ring (exocyclic) or as a member of the ring itself (endocyclic). In the case of the latter, the ring is termed a heterocycle. Pyridine and furan are examples of aromatic heterocycles while piperidine and tetrahydrofuran are the corresponding alicyclic heterocycles. The heteroatom of heterocyclic molecules is generally oxygen, sulfur, or nitrogen, with the latter being particularly common in biochemical systems.
Examples of groups among the heterocyclics are the aniline dyes, the great majority of the compounds discussed in biochemistry, such as alkaloids, many compounds related to vitamins, steroids, nucleic acids (e.g. DNA, RNA), and also numerous medicines. Heterocyclics with relatively simple structures are pyrrole (a 5-membered ring) and indole (a pyrrole fused to a 6-membered carbon ring).

Rings can fuse with other rings on an edge to give polycyclic compounds. The purine nucleoside bases are notable polycyclic aromatic heterocycles. Rings can also fuse on a "corner" such that one atom (almost always carbon) has two bonds going to one ring and two to another. Such compounds are termed spiro and are important in a number of natural products.

Polymers

This swimming board is made of polystyrene, an example of a polymer.

One important property of carbon is that it readily forms chains, or networks, that are linked by carbon-carbon bonds. The linking process is called polymerization, while the chains, or networks, are called polymers. The source compound is called a monomer.

Two main groups of polymers exist: synthetic polymers and biopolymers. Synthetic polymers are artificially manufactured and are commonly referred to as industrial polymers.[14] Biopolymers occur within a natural environment, or without human intervention.

Since the invention of the first synthetic polymer product, Bakelite, synthetic polymer products have frequently been invented.[citation needed]

Common synthetic organic polymers are polyethylene (polythene), polypropylene, nylon, Teflon (PTFE), polystyrene, polyesters, polymethyl methacrylate (sold as Perspex and Plexiglas), and polyvinyl chloride (PVC).[citation needed]

Both synthetic and natural rubber are polymers.[citation needed]

Varieties of each synthetic polymer product may exist for specific uses. Changing the conditions of polymerization alters the chemical composition of the product and its properties. These alterations include the chain length, the branching, and the tacticity.[citation needed]

With a single monomer as a start, the product is a homopolymer.[citation needed]

Secondary component(s) may be added to create a heteropolymer (co-polymer) and the degree of clustering of the different components can also be controlled.[citation needed]

Physical characteristics, such as hardness, density, mechanical or tensile strength, abrasion resistance, heat resistance, transparency, colour, etc. will depend on the final composition.[citation needed]

Biomolecules


Maitotoxin, a complex organic biological toxin.

Biomolecular chemistry is a major category within organic chemistry which is frequently studied by biochemists. Many complex multi-functional group molecules are important in living organisms. Some are long-chain biopolymers, and these include peptides, DNA, RNA and the polysaccharides such as starches in animals and celluloses in plants. The other main classes are amino acids (monomer building blocks of peptides and proteins), carbohydrates (which includes the polysaccharides), the nucleic acids (which include DNA and RNA as polymers), and the lipids. In addition, animal biochemistry contains many small molecule intermediates which assist in energy production through the Krebs cycle, and produces isoprene, the most common hydrocarbon in animals. Isoprenes in animals form the important steroid structural (cholesterol) and steroid hormone compounds; and in plants form terpenes, terpenoids, some alkaloids, and a class of hydrocarbons called biopolymer polyisoprenoids present in the latex of various species of plants, which is the basis for making rubber.

Small molecules


Molecular models of caffeine.

In pharmacology, an important group of organic compounds is small molecules, also referred to as 'small organic compounds'. In this context, a small molecule is a small organic compound that is biologically active, but is not a polymer. In practice, small molecules have a molar mass less than approximately 1000 g/mol.
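The rough 1000 g/mol cutoff is easy to check for any molecular formula. A minimal sketch, using caffeine (C8H10N4O2, the compound in the figure above) as the worked example; the atomic-weight table is an approximation and the parser assumes simple Hill-style formulas without parentheses:

```python
import re

# Approximate standard atomic weights (g/mol) - an assumption for this sketch.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Molar mass of a simple formula like 'C8H10N4O2' (no parentheses)."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:  # skip the empty trailing match
            total += ATOMIC_MASS[element] * int(count or 1)
    return total

mass = molar_mass("C8H10N4O2")  # caffeine
print(round(mass, 2), mass < 1000)  # ~194.19 g/mol, comfortably a "small molecule"
```

By this yardstick caffeine sits well inside the small-molecule range, while even a short peptide of ten residues already exceeds it.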

Fullerenes

Fullerenes and carbon nanotubes, carbon compounds with spheroidal and tubular structures, have stimulated much research into the related field of materials science. The first fullerene was discovered in 1985 by Sir Harold W. Kroto of the United Kingdom and by Richard E. Smalley and Robert F. Curl, Jr., of the United States. Using a laser to vaporize graphite rods in an atmosphere of helium gas, these chemists and their assistants obtained cagelike molecules composed of 60 carbon atoms (C60) joined together by single and double bonds to form a hollow sphere with 12 pentagonal and 20 hexagonal faces—a design that resembles a football, or soccer ball. In 1996 the trio was awarded the Nobel Prize for their pioneering efforts. The C60 molecule was named buckminsterfullerene (or, more simply, the buckyball) after the American architect R. Buckminster Fuller, whose geodesic dome is constructed on the same structural principles. The elongated cousins of buckyballs, carbon nanotubes, were identified in 1991 by Iijima Sumio of Japan.
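The face counts quoted above are internally consistent with a closed cage of 60 three-coordinate carbons, which a quick check against Euler's polyhedron formula (V - E + F = 2) confirms:

```python
# Consistency check on buckminsterfullerene's geometry.
# Counts from the text: 12 pentagonal and 20 hexagonal faces,
# with every carbon atom forming three sigma bonds.
pentagons, hexagons = 12, 20
faces = pentagons + hexagons                 # 32 faces
edges = (5 * pentagons + 6 * hexagons) // 2  # each edge borders 2 faces → 90
vertices = 2 * edges // 3                    # each C atom joins 3 edges → 60

print(vertices, edges, faces)                # 60 90 32
print(vertices - edges + faces == 2)         # True: a valid closed polyhedron
```

The same arithmetic forces exactly 12 pentagons in any closed cage built from pentagons and hexagons alone, which is why larger fullerenes differ only in their hexagon count.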

Others

Organic compounds containing bonds of carbon to nitrogen, oxygen and the halogens are not normally grouped separately. Others are sometimes put into major groups within organic chemistry and discussed under titles such as organosulfur chemistry, organometallic chemistry, organophosphorus chemistry and organosilicon chemistry.

Organic synthesis

A synthesis designed by E.J. Corey for oseltamivir (Tamiflu). This synthesis has 11 distinct reactions.

Synthetic organic chemistry is an applied science, as it borders engineering: the "design, analysis, and/or construction of works for practical purposes". Organic synthesis of a novel compound is a problem-solving task in which a synthesis is designed for a target molecule by selecting optimal reactions from optimal starting materials. Complex compounds can require tens of reaction steps that sequentially build the desired molecule. The synthesis proceeds by utilizing the reactivity of the functional groups in the molecule. For example, a carbonyl compound can be used as a nucleophile by converting it into an enolate, or as an electrophile; the combination of the two is called the aldol reaction. Designing practically useful syntheses always requires conducting the actual synthesis in the laboratory. The scientific practice of creating novel synthetic routes for complex molecules is called total synthesis.

Strategies to design a synthesis include retrosynthesis, popularized by E.J. Corey, which starts with the target molecule and splices it into pieces according to known reactions. The pieces, or the proposed precursors, receive the same treatment, until available and ideally inexpensive starting materials are reached. Then the retrosynthesis is written in the opposite direction to give the synthesis. A "synthetic tree" can be constructed, because each compound and also each precursor has multiple syntheses.
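The splice-until-starting-materials procedure is essentially a recursive tree expansion. A toy sketch, in which the reaction table, the compound names, and the single route per compound are all hypothetical stand-ins for a real reaction database:

```python
# Hypothetical retro-reaction table: each compound maps to known precursor sets.
RETRO_STEPS = {
    "target": [("intermediate_A", "reagent_1")],
    "intermediate_A": [("start_1", "start_2")],
}
STARTING_MATERIALS = {"reagent_1", "start_1", "start_2"}

def retrosynthesize(compound, route=None):
    """Splice the compound into precursors until only starting materials remain."""
    if route is None:
        route = []
    if compound in STARTING_MATERIALS:
        return route  # leaf of the synthetic tree: nothing left to splice
    for precursors in RETRO_STEPS.get(compound, []):
        route.append((compound, precursors))
        for p in precursors:
            retrosynthesize(p, route)
    return route

plan = retrosynthesize("target")
print(plan)  # reading the list in reverse order gives the forward synthesis
```

With several precursor sets per compound the loop would branch, producing the "synthetic tree" described above rather than a single route.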

Organic reactions

Organic reactions are chemical reactions involving organic compounds. Many of these reactions are associated with functional groups. The general theory of these reactions involves careful analysis of such properties as the electron affinity of key atoms, bond strengths and steric hindrance. These factors can determine the relative stability of short-lived reactive intermediates, which usually directly determine the path of the reaction.

The basic reaction types are: addition reactions, elimination reactions, substitution reactions, pericyclic reactions, rearrangement reactions and redox reactions. An example of a common reaction is a substitution reaction written as:
Nu− + C-X → C-Nu + X−
where X is some functional group (the leaving group) and Nu− is a nucleophile.

The number of possible organic reactions is basically infinite. However, certain general patterns are observed that can be used to describe many common or useful reactions. Each reaction has a stepwise reaction mechanism that explains how it happens in sequence—although the detailed description of steps is not always clear from a list of reactants alone.

The stepwise course of any given reaction mechanism can be represented using arrow pushing techniques in which curved arrows are used to track the movement of electrons as starting materials transition through intermediates to final products.

AAAS Scientists: Consensus on GMO Safety Firmer Than For Human-Induced Climate Change


In sharp contrast to public views about GMOs, 89% of scientists believe genetically modified foods are safe.

That's the most eye-opening finding in a Pew Research Center study on science literacy, undertaken in cooperation with the American Association for the Advancement of Science (AAAS), and released on January 29.

The overwhelming scientific consensus exceeds the percentage of scientists, 88%, who think humans are mostly responsible for climate change. However, the public appears far more suspicious of scientific claims about GMO safety than of the consensus on global warming.

Some 57% of Americans say GM foods are unsafe, and a startling 67% do not trust scientists, believing that scientists themselves do not fully understand the science behind GMOs. AAAS researchers blame poor reporting in the mainstream media for the trust and literacy gaps.

The survey also contrasts sharply with a statement published earlier this week in a pay-for-play European journal by a group of anti-GMO scientists and activists, including Michael Hansen of the Center for Food Safety and philosopher Vandana Shiva, claiming that there is "no scientific consensus on GMO safety."

A huge literacy gap between scientists and the public on biotechnology is one of the many disturbing nuggets that emerged from the Pew Research Center survey, which was conducted in cooperation with the AAAS, the world's largest independent general scientific society. The full study, released on January 29, is available here.

The first of several reports to be released in coming months, this study compares the views of scientists and the general public on the role of science in the United States and globally.

The eye-opening take-away: the American population in general borders on scientific illiteracy. What scientists believe, grounded in empirical evidence, often sharply differs from what the general public thinks is true. The differences are sharpest over biomedical research, including GMOs.
  • 88% of AAAS scientists think eating GM food is safe, while only 37% of the public believes that's true--a 51-percentage-point gap.
  • 68% of scientists say it is safe to eat food grown with pesticides, compared with 28% of citizens--a 40-percentage-point gap.
  • A 42-percentage-point gap over the issue of using animals in research--89% of scientists favor it, while only 47% of the public backs the idea.
The scientist/public perception gap is less pronounced over climate, energy and space issues.
  • A 37-percentage-point gap over whether humans are the primary cause of climate change--87% of AAAS scientists say it is, while 50% of the public does.
  • A 33-percentage-point gap on the question of whether humans have evolved over time--98% of scientists say we have, compared with 65% of the public.
  • By a 20-percentage point margin, citizens are more likely than scientists to favor offshore oil drilling.
  • By a 12-point margin, the public is more likely to say that astronauts are essential for the future of the U.S. space program.
The survey represents a sample of 2,002 adult citizens and 3,748 scientists, all of whom are members of the AAAS.
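The percentage-point gaps quoted in the bullet lists above are plain subtraction of the public's figure from the scientists'; a short check (topic labels are paraphrases of the survey items):

```python
# (scientists' %, public's %) for each survey item reported above.
survey = {
    "GM food safe to eat":         (88, 37),
    "pesticide-grown food safe":   (68, 28),
    "animal research favored":     (89, 47),
    "humans cause climate change": (87, 50),
    "humans evolved over time":    (98, 65),
}
gaps = {topic: scientists - public for topic, (scientists, public) in survey.items()}
print(gaps)
# {'GM food safe to eat': 51, 'pesticide-grown food safe': 40,
#  'animal research favored': 42, 'humans cause climate change': 37,
#  'humans evolved over time': 33}
```

The GMO item tops the list, which is the article's central point: the widest scientist/public divide is over food biotechnology, not climate.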

"As scientists size up the culture and their place in it," Pew said in a statement, "scientists are notably less upbeat than they were five years ago and express serious concerns about public knowledge of science and the way scientific findings are covered by journalists."

The scientists believe that media hype is one possible reason for large gaps in opinion between their views and that of the public, particularly in the debate over GMOs. Seventy-nine percent of scientists said that the media doesn't distinguish between "well-founded" and "not well-founded" scientific research. Additionally, 52 percent agreed that the media oversimplifies the science.

Three years ago, the AAAS released an unequivocal statement on the safety of GM foods and why a consensus of its members opposes mandatory labeling:
There are several current efforts to require labeling of foods containing products derived from genetically modified crop plants, commonly known as GM crops or GMOs. These efforts are not driven by evidence that GM foods are actually dangerous. Indeed, the science is quite clear: crop improvement by the modern molecular techniques of biotechnology is safe. Rather, these initiatives are driven by a variety of factors, ranging from the persistent perception that such foods are somehow "unnatural" and potentially dangerous to the desire to gain competitive advantage by legislating attachment of a label meant to alarm. Another misconception used as a rationale for labeling is that GM crops are untested.
The AAAS also has addressed claims by anti-GMO advocacy groups, frequently echoed in the media and on activist websites, that GM foods are less tested or nutritionally deficient when compared to organic or other conventional foods.
... contrary to popular misconceptions, GM crops are the most extensively tested crops ever added to our food supply. There are occasional claims that feeding GM foods to animals causes aberrations ranging from digestive disorders, to sterility, tumors and premature death. Although such claims are often sensationalized and receive a great deal of media attention, none have stood up to rigorous scientific scrutiny. Indeed, a recent review of a dozen well-designed long-term animal feeding studies comparing GM and non-GM potatoes, soy, rice, corn and triticale found that the GM and their non-GM counterparts are nutritionally equivalent.
Looking further at the demographics of respondents, the survey finds that those with a college degree are split on GMO safety: 49% say it's generally safe while 47% say it's generally unsafe. Women are more wary than men: only 28% of women think eating GM foods is safe, compared to 47% of men.
Race also divides the issue, with blacks (24% say it's safe) and Hispanics (32%) being more cautious than whites (41%).

The demographics of responses on pesticides are quite similar to those on GMOs. More men than women say foods with pesticides are safe, and those with more education are more likely to say food grown with pesticides is safe.

When it comes to GM labeling, exactly half of respondents said they "always" or "sometimes" check for a non-GMO label when they are shopping. Of course, checking labels correlates strongly with believing that genetically modified foods are unsafe to eat.

So why are citizens so out of step with scientists on GMO safety?

"One possible reason for the gap: when it comes to GM crops, two-thirds of the public say scientists do not have a clear understanding about the health effects," surmised the researchers.

Yet, oddly enough for a society that doesn't trust scientists on the GMO debate, science itself still holds an esteemed position in the minds of adults. Seventy-nine percent of respondents believe that science has contributed positively to society, with 62% saying it has been beneficial for the quality of food. However, the share of people who believe that science has had a negative effect on food is up 10 points from 2009, to 34 percent.

The public also highly values government investment in science research: 71% support government-funded basic science research and 61% said government funding is essential for scientific progress.

Pew also asked scientists another question: how good is the general state of science today? Scientists were more negative this year than they were in 2009. Only 52% say that it is a good time for science today, while 74% said so in 2009.
Given the public perception of GMOs, at least, scientists' more sober assessment might make sense.

Who's to blame? Scientists (75%) say lack of STEM education in grades K-12 is the biggest culprit. The release of the next report is expected in mid-February.

How can scientists and the government bridge the disturbing literacy gap between the mainstream scientific community and a skeptical public? Alan Leshner, CEO of the AAAS, takes up that question in an editorial accompanying the survey release:
Speaking up for the importance of science to society is our only hope, and scientists must not shy away from engaging with the public, even on the most polarizing science-based topics. Scientists need to speak clearly with journalists who provide a great vehicle for translating the nature and implications of their work. Scientists should also meet with members of the public and discuss what makes each side uncomfortable. In these situations, scientists must respond forthrightly to public concerns. In other words, there needs to be a conversation, not a lecture. The public's perceptions of scientists' expertise and trustworthiness are very important but they are not enough. Acceptance of scientific facts is not based solely on comprehension levels. It can be compromised anytime information confronts people's personal, religious, or political views, and whenever scientific facts provoke fear or make people feel that they have no control over a situation. The only recourse is to have genuine, respectful dialogues with people.
Jon Entine, executive director of the Genetic Literacy Project, is a Senior Fellow at the World Food Center Institute for Food and Agricultural Literacy, University of California-Davis. Follow @JonEntine on Twitter.

Electromagnetic field


From Wikipedia, the free encyclopedia

An electromagnetic field (also EMF or EM field) is a physical field produced by electrically charged objects. It affects the behavior of charged objects in the vicinity of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. It is one of the four fundamental forces of nature (the others are gravitation, weak interaction and strong interaction).

The field can be viewed as the combination of an electric field and a magnetic field. The electric field is produced by stationary charges, and the magnetic field by moving charges (currents); these two are often described as the sources of the field. The way in which charges and currents interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law.
From a classical perspective in the history of electromagnetism, the electromagnetic field can be regarded as a smooth, continuous field, propagated in a wavelike manner; whereas from the perspective of quantum field theory, the field is seen as quantized, being composed of individual particles.[citation needed]

Structure of the electromagnetic field

The electromagnetic field may be viewed in two distinct ways: a continuous structure or a discrete structure.

Continuous structure

Classically, electric and magnetic fields are thought of as being produced by smooth motions of charged objects. For example, oscillating charges produce electric and magnetic fields that may be viewed in a 'smooth', continuous, wavelike fashion. In this case, energy is viewed as being transferred continuously through the electromagnetic field between any two locations. For instance, the metal atoms in a radio transmitter appear to transfer energy continuously. This view is useful to a certain extent (radiation of low frequency), but problems are found at high frequencies (see ultraviolet catastrophe).

Discrete structure

The electromagnetic field may be thought of in a more 'coarse' way. Experiments reveal that in some circumstances electromagnetic energy transfer is better described as being carried in the form of packets called quanta (in this case, photons) with a fixed frequency. Planck's relation links the energy E of a photon to its frequency ν through the equation:[1]
E = h \nu
where h is Planck's constant, named in honor of Max Planck, and ν is the frequency of the photon. Although modern quantum optics tells us that there is also a semi-classical explanation of the photoelectric effect (the emission of electrons from metallic surfaces subjected to electromagnetic radiation), the photon was historically (although not strictly necessarily) used to explain certain observations. It is found that increasing the intensity of the incident radiation (so long as one remains in the linear regime) increases only the number of electrons ejected, and has almost no effect on the energy distribution of their ejection. Only the frequency of the radiation is relevant to the energy of the ejected electrons.
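As a small illustration (not part of the original article), the Planck relation can be evaluated directly; the function name and the example frequency below are assumptions made for this sketch:

```python
# Illustrative sketch: evaluating the Planck relation E = h * nu.

h = 6.62607015e-34  # Planck constant, J*s (exact since the 2019 SI redefinition)

def photon_energy(frequency_hz):
    """Energy in joules of a single photon of the given frequency."""
    return h * frequency_hz

# Green light at roughly 5.45e14 Hz:
E = photon_energy(5.45e14)
print(3.5e-19 < E < 3.7e-19)  # True: about 3.6e-19 J (~2.3 eV)
```

Doubling the frequency doubles the photon energy, which is exactly the behavior the photoelectric-effect observations above demand: brighter light of the same color ejects more electrons, not faster ones.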

This quantum picture of the electromagnetic field (which treats it as analogous to harmonic oscillators) has proved very successful, giving rise to quantum electrodynamics, a quantum field theory describing the interaction of electromagnetic radiation with charged matter. It also gives rise to quantum optics, which is different from quantum electrodynamics in that the matter itself is modelled using quantum mechanics rather than quantum field theory.

Dynamics of the electromagnetic field

In the past, electrically charged objects were thought to produce two different, unrelated types of field associated with their charge property. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field (as well as an electric field) is produced when the charge moves (creating an electric current) with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole: the electromagnetic field. Recall that until 1831 electricity and magnetism had been viewed as unrelated phenomena. In 1831, Michael Faraday, one of the great thinkers of his time, made the seminal observation that time-varying magnetic fields could induce electric currents, and then, in 1864, James Clerk Maxwell published his famous paper on a dynamical theory of the electromagnetic field. See Maxwell (1864), p. 499; also David J. Griffiths (1999), Introduction to Electrodynamics, third edition, Prentice Hall, pp. 559-562.

Once this electromagnetic field has been produced from a given charge distribution, other charged objects in this field will experience a force (in a similar way that planets experience a force in the gravitational field of the Sun). If these other charges and currents are comparable in size to the sources producing the above electromagnetic field, then a new net electromagnetic field will be produced. Thus, the electromagnetic field may be viewed as a dynamic entity that causes other charges and currents to move, and which is also affected by them. These interactions are described by Maxwell's equations and the Lorentz force law. (This discussion ignores the radiation reaction force.)

Electromagnetic field as a feedback loop

The behavior of the electromagnetic field can be resolved into four different parts of a loop:
  • the electric and magnetic fields are generated by electric charges,
  • the electric and magnetic fields interact with each other,
  • the electric and magnetic fields produce forces on electric charges,
  • the electric charges move in space.
A common misunderstanding is that (a) the quanta of the fields act in the same manner as (b) the charged particles that generate the fields. In our everyday world, charged particles such as electrons move slowly through matter, with a drift velocity of a fraction of a centimeter (or inch) per second, but fields propagate at the speed of light - approximately 300 thousand kilometers (or 186 thousand miles) a second - so the speed difference between charged particles and field quanta spans many orders of magnitude. Maxwell's equations relate (a) the presence and movement of charged particles to (b) the generation of fields. Those fields can then exert forces on, and move, other slowly moving charged particles. Charged particles can move at relativistic speeds approaching the field propagation speed, but, as Einstein showed[citation needed], this requires enormous field energies, which are not present in our everyday experience of electricity, magnetism, matter, and time and space.

The feedback loop can be summarized in a list, including phenomena belonging to each part of the loop:
  • charged particles generate electric and magnetic fields
  • the fields interact with each other
    • changing electric field acts like a current, generating 'vortex' of magnetic field
    • Faraday induction: changing magnetic field induces (negative) vortex of electric field
    • Lenz's law: negative feedback loop between electric and magnetic fields
  • fields act upon particles
    • Lorentz force: force due to electromagnetic field
      • electric force: same direction as electric field
      • magnetic force: perpendicular both to magnetic field and to velocity of charge
  • particles move
    • current is movement of particles
  • particles generate more electric and magnetic fields; cycle repeats
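The "fields act upon particles" step of the loop above can be sketched numerically. This fragment (illustrative Python, not from the article; vector layout and example values are assumptions) evaluates the Lorentz force F = q(E + v × B):

```python
# Illustrative sketch of the Lorentz force law F = q (E + v x B),
# using plain Python 3-tuples as vectors; SI units throughout.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, v, B):
    """Force (N) on a charge q (C) in fields E (V/m) and B (T), moving at velocity v (m/s)."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

# An electron moving along +x through a 1 T field along +z feels a magnetic
# force along +y (the electric contribution vanishes with the chosen E = 0):
F = lorentz_force(-1.602176634e-19, (0.0, 0.0, 0.0), (1e5, 0.0, 0.0), (0.0, 0.0, 1.0))
print(F[1] > 0)  # True
```

Note how the magnetic term comes out perpendicular to both v and B, exactly as the list entry above states.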

Mathematical description

There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).
If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.[2]
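The view of E as a function of the space and time coordinates can be made concrete with a short sketch (illustrative Python, not from the article; the function name and example charge are assumptions). Here the electrostatic field of a point charge at the origin is written as E(x, y, z, t), with the time argument unused because the field is static:

```python
# Illustrative sketch: the field of a point charge at the origin as a
# function E(x, y, z, t) of the space and time coordinates.
import math

EPS0 = 8.8541878128e-12        # permittivity of free space, F/m
K = 1 / (4 * math.pi * EPS0)   # Coulomb constant, ~8.99e9 N m^2 / C^2

def E_point_charge(q, x, y, z, t):
    """Electric field (V/m) of a charge q (C) at the origin, evaluated at (x, y, z) and time t."""
    r2 = x * x + y * y + z * z
    r = math.sqrt(r2)
    mag = K * q / r2           # Coulomb's law: field falls off as 1/r^2
    return (mag * x / r, mag * y / r, mag * z / r)

# Electrostatic: the field takes the same value at any two times.
print(E_point_charge(1e-9, 1.0, 0.0, 0.0, 0.0) == E_point_charge(1e-9, 1.0, 0.0, 0.0, 9.9))  # True
```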

With the advent of special relativity, physical laws became expressible in the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.

The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} (Gauss's law)
\nabla \cdot \mathbf{B} = 0 (Gauss's law for magnetism)
\nabla \times \mathbf{E} = -\frac {\partial \mathbf{B}}{\partial t} (Faraday's law)
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0  \frac{\partial \mathbf{E}}{\partial t} (Ampère-Maxwell law)
where \rho is the charge density, which can (and often does) depend on time and position, \varepsilon_0 is the permittivity of free space, \mu_0 is the permeability of free space, and \mathbf{J} is the current density vector, also a function of time and position. The units used above are the standard SI units. Inside a linear material, Maxwell's equations change by replacing the permittivity and permeability of free space with those of the linear material in question. Inside other materials, which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.
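As a quick consistency check of Gauss's law above (an illustrative sketch, assuming SI units and a point charge; the example values of q and r are arbitrary): integrating the Coulomb field of a point charge over any sphere centered on it gives a total flux of q/\varepsilon_0, which is the integral form of the first equation.

```python
# Illustrative check of Gauss's law for a point charge q at the origin:
# the flux of E through a sphere of radius r should equal q / eps0.
import math

EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

q = 1e-9   # 1 nC (example value)
r = 0.5    # sphere radius in meters (example value)

E = q / (4 * math.pi * EPS0 * r**2)  # Coulomb field magnitude at radius r
flux = E * 4 * math.pi * r**2        # E is radial and uniform over the sphere

print(abs(flux - q / EPS0) < 1e-6)   # True: flux matches q / eps0
```

The radius cancels exactly, which is why the flux is independent of the size of the enclosing sphere.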

The Lorentz force law governs the interaction of the electromagnetic field with charged matter.
When a field crosses the boundary between two media, its behavior changes according to boundary conditions derived from Maxwell's equations. At the boundary of two media, the tangential components of E and H and the normal components of D and B are related as follows:[3]
\mathbf{E}_{1t} = \mathbf{E}_{2t}
\mathbf{H}_{1t} = \mathbf{H}_{2t} (current-free)
\mathbf{D}_{1n} = \mathbf{D}_{2n} (charge-free)
\mathbf{B}_{1n} = \mathbf{B}_{2n}
The angle of refraction of an electric field between media is related to the permittivity (\varepsilon) of each medium:
\frac{\tan\theta_1}{\tan\theta_2} = \frac{\varepsilon_{r2}}{\varepsilon_{r1}}
The angle of refraction of a magnetic field between media is related to the permeability (\mu) of each medium:
\frac{\tan\theta_1}{\tan\theta_2} = \frac{\mu_{r2}}{\mu_{r1}}
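The electric-field refraction relation can be applied numerically (an illustrative sketch, not from the article; the function name and example angles are assumptions, and the relation is implemented exactly as quoted above):

```python
# Illustrative sketch of the field-line refraction relation
# tan(theta1) / tan(theta2) = eps_r2 / eps_r1.
import math

def refracted_angle(theta1_deg, eps_r1, eps_r2):
    """Angle (degrees) of the E-field line in medium 2, per the relation above."""
    tan_theta2 = math.tan(math.radians(theta1_deg)) * eps_r1 / eps_r2
    return math.degrees(math.atan(tan_theta2))

# Equal permittivities leave the angle unchanged:
print(round(refracted_angle(45.0, 3.0, 3.0), 6))  # 45.0
```

Entering a medium with higher relative permittivity bends the field line toward the interface normal under this relation, e.g. refracted_angle(45.0, 1.0, 2.0) is about 26.6 degrees.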

Properties of the field

Reciprocal behavior of electric and magnetic fields

The two Maxwell equations, Faraday's Law and the Ampère-Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as 'a changing magnetic field creates an electric field'. This is the principle behind the electric generator.

The Ampère-Maxwell Law roughly states that 'a changing electric field creates a magnetic field'. Thus, this law can be applied to generate a magnetic field and run an electric motor.

Light as an electromagnetic disturbance

Maxwell's equations take the form of an electromagnetic wave in a volume of space not containing charges or currents (free space) – that is, where \rho and J are zero. Under these conditions, the electric and magnetic fields satisfy the electromagnetic wave equation:[4]
  \left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) \mathbf{E} = 0
  \left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) \mathbf{B} = 0
James Clerk Maxwell was the first to obtain this relationship, by completing Maxwell's equations with the addition of a displacement current term to Ampère's circuital law.
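These wave equations imply a propagation speed c = 1/\sqrt{\mu_0 \varepsilon_0}. The sketch below (illustrative Python, not from the article; step sizes and the sample point are arbitrary choices) computes c from the constants used earlier and then checks by finite differences that a plane wave E(z, t) = cos(kz - ωt) with ω = ck satisfies the wave equation:

```python
# Illustrative sketch: c = 1/sqrt(mu0 * eps0), and a finite-difference
# check that cos(k z - w t) with w = c k solves the wave equation.
import math

MU0 = 4e-7 * math.pi       # permeability of free space, H/m
EPS0 = 8.8541878128e-12    # permittivity of free space, F/m

c = 1 / math.sqrt(MU0 * EPS0)
print(round(c))  # 299792458 -- the speed of light in m/s

k = 1.0                    # wavenumber, rad/m (example value)
w = c * k                  # angular frequency of a light-speed plane wave
E = lambda z, t: math.cos(k * z - w * t)

# Central finite differences for the two second derivatives:
hz, ht = 1e-4, 1e-12
z0, t0 = 0.3, 0.0
d2z = (E(z0 + hz, t0) - 2 * E(z0, t0) + E(z0 - hz, t0)) / hz**2
d2t = (E(z0, t0 + ht) - 2 * E(z0, t0) + E(z0, t0 - ht)) / ht**2

residual = d2z - d2t / c**2   # should vanish for a wave-equation solution
print(abs(residual) < 1e-4)   # True
```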

Relation to and comparison with other physical fields

As the electromagnetic field describes one of the four fundamental forces of nature, it is useful to compare it with the gravitational, strong and weak fields. The word 'force' is sometimes replaced by 'interaction' because modern particle physics models electromagnetism as an exchange of particles known as gauge bosons.

Electromagnetic and gravitational fields

Sources of electromagnetic fields consist of two types of charge – positive and negative. This contrasts with the sources of the gravitational field, which are masses. Masses are sometimes described as gravitational charges, the important feature of them being that there are only positive masses and no negative masses. Further, gravity differs from electromagnetism in that positive masses attract other positive masses, whereas in electromagnetism like charges repel each other.

The relative strengths and ranges of the four interactions and other information are tabulated below:
Theory             Interaction                   Mediator         Relative magnitude   Behavior          Range
Chromodynamics     Strong interaction            gluon            10^38                1                 10^-15 m
Electrodynamics    Electromagnetic interaction   photon           10^36                1/r^2             infinite
Flavordynamics     Weak interaction              W and Z bosons   10^25                1/r^5 to 1/r^7    10^-16 m
Geometrodynamics   Gravitation                   graviton         10^0                 1/r^2             infinite

Applications

Static E and M fields and static EM fields

When an EM field (see electromagnetic tensor) is not varying in time, it may be seen as a purely electric field, a purely magnetic field, or a mixture of both. However, the general case of a static EM field with both electric and magnetic components present is the case that appears to most observers. Observers who see only an electric or a magnetic component of a static EM field have the other component suppressed, due to the special case of the charges producing the field being immobile in their frame of reference. In other observer frames the other component becomes manifest.
A consequence of this is that any case that seems to consist of a "pure" static electric or magnetic field can be converted to an EM field with both E and M components present, simply by moving the observer into a frame of reference that is in motion relative to the frame in which only the "pure" field appears. That is, a pure static electric field will show the familiar magnetic field associated with a current in any frame of reference in which the charge moves. Likewise, any new motion of a charge in a region that previously seemed to contain only a magnetic field will show that the space now contains an electric field as well, which produces an additional Lorentz force upon the moving charge.

Thus, electrostatics, as well as magnetism and magnetostatics, are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field; since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely a matter of the observer's frame of reference. The "applications" of all such non-time-varying (static) fields are discussed in the main articles linked in this section.

Time-varying EM fields in Maxwell’s equations

An EM field that varies in time has two “causes” in Maxwell’s equations. One is charges and currents (so-called “sources”), and the other cause for an E or M field is a change in the other type of field (this last cause also appears in “free space” very far from currents and charges).
An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR), since it radiates away from the charges and currents in the source, has no "feedback" effect on them, and is not affected directly by them in the present time (rather, it is indirectly produced by a sequence of changes in fields radiating out from them in the past). EMR consists of the radiations in the electromagnetic spectrum, including radio waves, microwaves, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.

A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.

A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of “close”) will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field.

Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.

Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as metal detectors and MRI scanner coils at higher frequencies. Sometimes these high-frequency magnetic fields change at radio frequencies without being far-field waves, and thus radio waves; see RFID tags. See also near-field communication. Further commercial uses of near-field EM effects may be found in the article on virtual photons, since at the quantum level these fields are represented by those particles. Far-field effects (EMR) in the quantum picture of radiation are represented by ordinary photons.

Health and safety

The potential health effects of the very low frequency EMFs surrounding power lines and electrical devices are the subject of ongoing research and a significant amount of public debate. The US National Institute for Occupational Safety and Health (NIOSH) has issued some cautionary advisories but stresses that the data are currently too limited to draw good conclusions.[5]

The potential effects of electromagnetic fields on human health vary widely depending on the frequency and intensity of the fields. For more information on the health effects due to specific parts of the electromagnetic spectrum, see the following articles:
