Search This Blog

Sunday, August 3, 2014

Inter-Related Meanings of Organic

The word "organic" has a number of meanings, both in science and among the general public. These meanings are often inter-related. Here, I will condense several Wiki articles to cover a few of them.
_________________________________________

Organic matter

Organic matter (or organic material, natural organic matter, NOM) is matter composed of organic compounds that has come from the remains of dead organisms such as plants and animals and their waste products in the environment.[1] Basic structures are created from cellulose, tannin, cutin, and lignin, along with various proteins, lipids, and carbohydrates. It is very important in the movement of nutrients in the environment and plays a role in water retention on the surface of the planet.

Formation

Living organisms are composed of organic compounds. In life, they secrete or excrete organic materials into their environment and shed body parts such as leaves and roots; after an organism dies, its body is broken down by bacterial and fungal action. Larger molecules of organic matter can be formed from the polymerization of different parts of already broken-down matter. Natural organic matter can vary greatly, depending on its origin, transformation mode, age, and existing environment, thus its bio-physico-chemical functions vary with different environments.[2]

Natural ecosystem functions

Organic matter is present throughout the ecosystem. After degrading and reacting, it can move into soil and mainstream water via waterflow. Organic matter provides nutrition to living organisms. In aqueous solution, it acts as a buffer to maintain a less acidic pH in the environment. This buffering component has been proposed to be relevant for neutralizing acid rain.[3]

Source cycle

A majority of organic matter not already in the soil comes from groundwater. When groundwater saturates the soil or sediment around it, organic matter can freely move between the phases. Groundwater has its own sources of natural organic matter as well:
  • "organic matter deposits, such as kerogen and coal
  • soil and sediment organic matter
  • organic matter infiltrating into the subsurface from rivers, lakes, and marine systems"[4]
Note that one source of groundwater organic matter is soil and sedimentary organic matter. The major route of movement into soil is from groundwater, but organic matter from soil moves into groundwater as well. Most of the organic matter in lakes, rivers, and other surface waters comes from deteriorated material in the water and along the surrounding shores. Organic matter can likewise pass between water and the underlying soil and sediment.

Importance of the cycle

Organic matter can migrate through soil, sediment, and water. This movement enables a cycle: organisms decompose into organic matter, which can then be transported and recycled. Not all biomass migrates; some is stationary, turning over only over the course of millions of years.[5]

Soil organic matter

The organic matter in soil derives from plants and animals. In a forest, for example, leaf litter and woody material falls to the forest floor. This is sometimes referred to as organic material.[6] When it decays to the point at which it is no longer recognizable, it is called soil organic matter. When the organic matter has broken down into a stable substance that resists further decomposition, it is called humus. Thus soil organic matter comprises all of the organic matter in the soil, exclusive of the material that has not yet decayed.[7]

One of the advantages of humus is that it is able to hold water and nutrients, giving plants the capacity for growth. Another advantage of humus is that it helps the soil stick together, which allows soil organisms such as nematodes and bacteria to break down the nutrients in the soil more easily.[8]

There are several ways to quickly increase the amount of humus. Combining compost, plant or animal materials/waste, or green manure with soil will increase the amount of humus in the soil.
  1. Compost: decomposed organic material.
  2. Plant and animal material and waste: dead plants or plant waste such as leaves or bush and tree trimmings, or animal manure.
  3. Green manure: plants or plant material that is grown for the sole purpose of being incorporated with soil.
These three materials supply nematodes and bacteria with nutrients for them to thrive and produce more humus, which will give plants enough nutrients to survive and grow.[8]

Factors controlling rates of decomposition

  • Environmental factors:
    1. Aeration
    2. Temperature
    3. Soil moisture
    4. Soil pH
  • Quality of added residues:
    1. Size of organic residues
    2. C/N ratio of organic residues
  • Rate of decomposition of plant residues, from fastest to slowest:
    1. Sugars, starches, simple proteins
    2. Hemicellulose
    3. Cellulose
    4. Fats, waxes, oils, resins
    5. Lignin, phenolic compounds
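As a rough illustration of why the C/N ratio matters, soil scientists often cite a threshold around 25:1: residues richer in nitrogen than that tend to release nitrogen as they decompose, while carbon-rich residues tie it up. A minimal sketch (the 25:1 threshold is an assumed, commonly cited rule of thumb, not a law):

```python
def decomposition_outcome(c_to_n):
    """Rule-of-thumb sketch: residues with a low C/N ratio release
    nitrogen as they decompose (net mineralization); high-C/N residues
    tie nitrogen up in microbial biomass (net immobilization).
    The 25:1 threshold is an assumed, commonly cited approximation."""
    return "net mineralization" if c_to_n < 25 else "net immobilization"

print(decomposition_outcome(12))  # N-rich legume residue -> net mineralization
print(decomposition_outcome(80))  # C-rich straw or sawdust -> net immobilization
```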

Priming effect

The priming effect is characterized by intense changes in the natural process of soil organic matter (SOM) turnover, resulting from relatively moderate intervention with the soil.[9] The phenomenon is generally caused by either pulsed or continuous changes to inputs of fresh organic matter (FOM).[10] Priming effects usually result in an acceleration of mineralization due to a trigger such as the FOM inputs. The cause of this increase in decomposition has often been attributed to an increase in microbial activity resulting from higher energy and nutrient availability released from the FOM.
After the input of FOM, specialized microorganisms are believed to grow quickly and only decompose this newly added organic matter.[11] The turnover rate of SOM in these areas is at least one order of magnitude higher than the bulk soil.[10]

Other soil treatments, besides organic matter inputs, which lead to this short-term change in turnover rates, include "input of mineral fertilizer, exudation of organic substances by roots, mere mechanical treatment of soil or its drying and rewetting."[9]

Priming effects can be either positive or negative depending on the reaction of the soil with the added substance. A positive priming effect results in the acceleration of mineralization while a negative priming effect results in immobilization, leading to N unavailability. Although most changes have been documented in C and N pools, the priming effect can also be found in phosphorus and sulfur, as well as other nutrients.[9]

Löhnis was the first to discover the priming effect phenomenon, in 1926, through his studies of green manure decomposition and its effects on legume plants in soil. He noticed that adding fresh organic residues to the soil intensified the mineralization of humus N. It was not until 1953, though, that the term priming effect was coined by Bingemann in his paper titled The effect of the addition of organic materials on the decomposition of an organic soil. Several other terms had been used before priming effect was coined, including priming action, added nitrogen interaction (ANI), extra N, and additional N.[9] Despite these early contributions, the concept of the priming effect was widely disregarded until about the 1980s-1990s.[10]

The priming effect has been found in many different studies and is regarded as a common occurrence, appearing in most plant soil systems.[12] However, the mechanisms which lead to the priming effect are more complex than originally thought, and remain poorly understood.[11]

Although there is a lot of uncertainty surrounding the reason for the priming effect, a few undisputed facts have emerged from the collection of recent research:
  1. The priming effect can arise either instantaneously or very shortly (potentially days or weeks)[10] after a substance is added to the soil.
  2. The priming effect is larger in soils that are rich in C and N as compared to those poor in these nutrients.
  3. Real priming effects have not been observed in sterile environments.
  4. The size of the priming effect increases as the amount of added treatment to the soil increases.[9]
Recent findings suggest that the same priming effect mechanisms acting in soil systems may also be present in aquatic environments, which suggests a need for broader considerations of this phenomenon in the future.[10][13]

Decomposition

One suitable definition of organic matter is biological material in the process of decaying or decomposing, such as humus. A closer look at biological material in the process of decaying reveals so-called organic compounds (biological molecules) in the process of breaking up (disintegrating).

The main processes by which soil molecules disintegrate are bacterial and fungal enzymatic catalysis. If bacteria and fungi were not present on Earth, decomposition would proceed much more slowly.
_________________________________________

Organic chemistry

 
Organic chemistry is a chemistry subdiscipline involving the scientific study of the structure, properties, and reactions of organic compounds and organic materials, i.e., matter in its various forms that contain carbon atoms.[1] Study of structure includes using spectroscopy (e.g., NMR), mass spectrometry, and other physical and chemical methods to determine the chemical composition and constitution of organic compounds and materials. Study of properties includes both physical properties and chemical properties, and uses similar methods as well as methods to evaluate chemical reactivity, with the aim to understand the behavior of the organic matter in its pure form (when possible), but also in solutions, mixtures, and fabricated forms. The study of organic reactions includes probing their scope through use in preparation of target compounds (e.g., natural products, drugs, polymers, etc.) by chemical synthesis, as well as the focused study of the reactivities of individual organic molecules, both in the laboratory and via theoretical (in silico) study.
 
The range of chemicals studied in organic chemistry includes hydrocarbons, compounds containing only carbon and hydrogen, as well as myriad compositions based always on carbon but also containing other elements,[1][2][3] especially oxygen, nitrogen, sulfur, phosphorus, and the halogens. In the modern era, the range extends further into the periodic table, with main group elements such as boron and silicon. In addition, much modern research focuses on organic chemistry involving further organometallics, including the lanthanides, but especially the:
  • transition metals (e.g., zinc, copper, palladium, nickel, cobalt, titanium, chromium, etc.).
Three representations of an organic compound, 5α-Dihydroprogesterone (5α-DHP), a steroid hormone. For molecules showing color, the carbon atoms are in black, hydrogens in gray, and oxygens in red. In the line angle representation, carbon atoms are implied at every terminus of a line and vertex of multiple lines, and hydrogen atoms are implied to fill the remaining needed valences (up to 4).

Finally, organic compounds form the basis of all earthly life and constitute a significant part of human endeavors in chemistry. The bonding patterns open to carbon, with its valence of four—formal single, double, and triple bonds, as well as various structures with delocalized electrons—make the array of organic compounds structurally diverse, and their range of applications enormous. They either form the basis of, or are important constituents of, many commercial products including pharmaceuticals; petrochemicals and products made from them (including lubricants, solvents, etc.); plastics; fuels and explosives; etc. As indicated, the study of organic chemistry overlaps with organometallic chemistry and biochemistry, but also with medicinal chemistry, polymer chemistry, as well as many aspects of materials science.[1]

Characterization

Since organic compounds often exist as mixtures, a variety of techniques have also been developed to assess purity, especially important being chromatography techniques such as HPLC and gas chromatography. Traditional methods of separation include distillation, crystallization, and solvent extraction.

Organic compounds were traditionally characterized by a variety of chemical tests, called "wet methods", but such tests have been largely displaced by spectroscopic or other computer-intensive methods of analysis.[12] Listed in approximate order of utility, the chief analytical methods are:
  • Nuclear magnetic resonance (NMR) spectroscopy is the most commonly used technique, often permitting complete assignment of atom connectivity and even stereochemistry using correlation spectroscopy. The principal constituent atoms of organic chemistry - hydrogen and carbon - exist naturally with NMR-responsive isotopes, ¹H and ¹³C respectively.
  • Elemental analysis: A destructive method used to determine the elemental composition of a molecule. See also mass spectrometry, below.
  • Mass spectrometry indicates the molecular weight of a compound and, from the fragmentation patterns, its structure. High resolution mass spectrometry can usually identify the exact formula of a compound and is used in lieu of elemental analysis. In former times, mass spectrometry was restricted to neutral molecules exhibiting some volatility, but advanced ionization techniques allow one to obtain the "mass spec" of virtually any organic compound.
  • Crystallography is an unambiguous method for determining molecular geometry, the proviso being that single crystals of the material must be available and the crystal must be representative of the sample. Highly automated software allows a structure to be determined within hours of obtaining a suitable crystal.
Traditional spectroscopic methods such as infrared spectroscopy, optical rotation, and UV/VIS spectroscopy provide relatively nonspecific structural information but remain in use for specific classes of compounds.
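To make the molecular-weight idea concrete, here is a toy calculation of the average molecular weight of acetic acid from its formula, using standard atomic weights. Real mass spectrometry works with exact isotopic masses, so this is only an illustrative sketch:

```python
# Standard average atomic weights of the most common organic elements.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molecular_weight(formula):
    """Sum atomic weights over a formula given as element -> count."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

# Acetic acid, CH3COOH, i.e. C2H4O2:
print(round(molecular_weight({"C": 2, "H": 4, "O": 2}), 2))  # 60.05
```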

Properties

Physical properties of organic compounds typically of interest include both quantitative and qualitative features. Quantitative information includes melting point, boiling point, and index of refraction. Qualitative properties include odor, consistency, solubility, and color.

Melting and boiling properties

Organic compounds typically melt and many boil. In contrast, while inorganic materials generally can be melted, many do not boil, tending instead to degrade. In earlier times, the melting point (m.p.) and boiling point (b.p.) provided crucial information on the purity and identity of organic compounds.
The melting and boiling points correlate with the polarity of the molecules and their molecular weight. Some organic compounds, especially symmetrical ones, sublime, that is they evaporate without melting. A well-known example of a sublimable organic compound is para-dichlorobenzene, the odiferous constituent of modern mothballs. Organic compounds are usually not very stable at temperatures above 300 °C, although some exceptions exist.

Solubility

Neutral organic compounds tend to be hydrophobic; that is, they are less soluble in water than in organic solvents. Exceptions include organic compounds that contain ionizable groups, as well as low molecular weight alcohols, amines, and carboxylic acids, where hydrogen bonding occurs. Organic compounds tend to dissolve in organic solvents. Solvents can be either pure substances like ether or ethyl alcohol, or mixtures, such as the paraffinic solvents (the various petroleum ethers and white spirits) or the range of pure or mixed aromatic solvents obtained from petroleum or tar fractions by physical separation or by chemical conversion. Solubility in the different solvents depends upon the solvent type and on the functional groups present.

Solid state properties

Various specialized properties of molecular crystals and organic polymers with conjugated systems are of interest depending on applications, e.g. thermo-mechanical and electro-mechanical such as piezoelectricity, electrical conductivity (see conductive polymers and organic semiconductors), and electro-optical (e.g. non-linear optics) properties. For historical reasons, such properties are mainly the subjects of the areas of polymer science and materials science.

Nomenclature

 
The names of organic compounds are either systematic, following logically from a set of rules, or nonsystematic, following various traditions. Systematic nomenclature is stipulated by specifications from IUPAC. Systematic nomenclature starts with the name for a parent structure within the molecule of interest. This parent name is then modified by prefixes, suffixes, and numbers to unambiguously convey the structure. Given that millions of organic compounds are known, rigorous use of systematic names can be cumbersome. Thus, IUPAC recommendations are more closely followed for simple compounds, but not complex molecules. To use the systematic naming, one must know the structures and names of the parent structures. Parent structures include unsubstituted hydrocarbons, heterocycles, and monofunctionalized derivatives thereof.

Nonsystematic nomenclature is simpler and unambiguous, at least to organic chemists.
Nonsystematic names do not indicate the structure of the compound. They are common for complex molecules, which includes most natural products. Thus, the informally named lysergic acid diethylamide is systematically named (6aR,9R)-N,N-diethyl-7-methyl-4,6,6a,7,8,9-hexahydroindolo[4,3-fg]quinoline-9-carboxamide.

With the increased use of computing, other naming methods have evolved that are intended to be interpreted by machines. Two popular formats are SMILES and InChI.
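To give a feel for the SMILES format, here is a deliberately minimal sketch that tallies the heavy atoms in a SMILES string. It handles only single-letter, uppercase organic-subset element symbols and simply skips bond symbols, branch parentheses, and ring-closure digits; a real parser (as found in cheminformatics toolkits) is far more involved:

```python
def atom_counts(smiles):
    """Toy SMILES heavy-atom counter. Only single-letter uppercase
    organic-subset atoms (B, C, N, O, P, S, F) are recognized;
    hydrogens are implicit in SMILES and are not counted."""
    counts = {}
    for ch in smiles:
        if ch in "BCNOPSF":
            counts[ch] = counts.get(ch, 0) + 1
    return counts

print(atom_counts("CCO"))      # ethanol -> {'C': 2, 'O': 1}
print(atom_counts("CC(=O)O"))  # acetic acid -> {'C': 2, 'O': 2}
```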

Structural drawings

Organic molecules are described more commonly by drawings or structural formulas, combinations of drawings and chemical symbols. The line-angle formula is simple and unambiguous. In this system, the endpoints and intersections of each line represent one carbon, and hydrogen atoms can either be notated explicitly or assumed to be present as implied by tetravalent carbon. The depiction of organic compounds with drawings is greatly simplified by the fact that carbon in almost all organic compounds has four bonds, nitrogen three, oxygen two, and hydrogen one.
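The implied-hydrogen rule lends itself to a one-line calculation: each drawn atom carries enough hydrogens to fill its usual valence. A minimal sketch using the valences stated above (4 for carbon, 3 for nitrogen, 2 for oxygen, 1 for hydrogen):

```python
def implied_hydrogens(element, drawn_bonds):
    """Hydrogens implied at an atom in a line-angle drawing:
    the element's usual valence minus the number of bonds drawn."""
    valence = {"C": 4, "N": 3, "O": 2, "H": 1}
    return max(valence[element] - drawn_bonds, 0)

print(implied_hydrogens("C", 1))  # chain terminus: CH3 -> 3
print(implied_hydrogens("C", 2))  # interior vertex, two single bonds: CH2 -> 2
print(implied_hydrogens("O", 2))  # ether oxygen: no implied hydrogens -> 0
```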

Classification of organic compounds

Functional groups

 
The family of carboxylic acids contains a carboxyl (-COOH) functional group. Acetic acid, shown here, is an example.

The concept of functional groups is central in organic chemistry, both as a means to classify structures and for predicting properties. A functional group is a molecular module, and the reactivity of that functional group is assumed, within limits, to be the same in a variety of molecules.
Functional groups can have a decisive influence on the chemical and physical properties of organic compounds. Molecules are classified on the basis of their functional groups. Alcohols, for example, all have the subunit C-O-H. All alcohols tend to be somewhat hydrophilic, usually form esters, and usually can be converted to the corresponding halides. Most functional groups feature heteroatoms (atoms other than C and H). Organic compounds are classified according to functional groups: alcohols, carboxylic acids, amines, etc.
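The classification idea can be sketched as a simple lookup from functional group to compound class, using the groups named in this section (a toy table, not an exhaustive one):

```python
# Toy mapping from functional group to compound class.
FUNCTIONAL_GROUPS = {
    "-OH":   "alcohol",
    "-COOH": "carboxylic acid",
    "-NH2":  "amine",
}

def classify(group):
    """Return the compound class for a functional group, if listed."""
    return FUNCTIONAL_GROUPS.get(group, "unclassified")

print(classify("-COOH"))  # carboxylic acid, as in acetic acid
```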

Aliphatic compounds

 
The aliphatic hydrocarbons are subdivided into three groups of homologous series according to their state of saturation:
  • paraffins (alkanes), which contain no double or triple bonds;
  • olefins (alkenes), which contain one or more double bonds (di-olefins (dienes) or poly-olefins when several are present);
  • alkynes, which contain one or more triple bonds.
The rest of the group is classed according to the functional groups present. Such compounds can be "straight-chain", branched-chain or cyclic. The degree of branching affects characteristics, such as the octane number or cetane number in petroleum chemistry.

Both saturated (alicyclic) compounds and unsaturated compounds exist as cyclic derivatives. The most stable rings contain five or six carbon atoms, but large rings (macrocycles) and smaller rings are common. The smallest cycloalkane family is the three-membered cyclopropane ((CH2)3). Saturated cyclic compounds contain single bonds only, whereas aromatic rings have alternating (or conjugated) double bonds. Cycloalkanes do not contain multiple bonds, whereas cycloalkenes and cycloalkynes do.

Aromatic compounds

Benzene is one of the best-known aromatic compounds as it is one of the simplest and most stable aromatics.

Aromatic hydrocarbons contain conjugated double bonds. This means that every carbon atom in the ring is sp2 hybridized, allowing for added stability. The most important example is benzene, the structure of which was formulated by Kekulé who first proposed the delocalization or resonance principle for explaining its structure. For "conventional" cyclic compounds, aromaticity is conferred by the presence of 4n + 2 delocalized pi electrons, where n is an integer. Particular instability (antiaromaticity) is conferred by the presence of 4n conjugated pi electrons.
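The 4n + 2 and 4n electron counts translate directly into a small arithmetic check. A sketch of Hückel's rule for "conventional" monocyclic systems (it ignores the structural caveats, such as planarity and full conjugation):

```python
def huckel_aromatic(pi_electrons):
    """Aromatic by Hückel's rule: pi-electron count equals 4n + 2
    for some integer n >= 0 (2, 6, 10, ...)."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

def antiaromatic(pi_electrons):
    """Particular instability at 4n conjugated pi electrons (4, 8, ...)."""
    return pi_electrons >= 4 and pi_electrons % 4 == 0

print(huckel_aromatic(6))  # benzene, 6 pi electrons -> True
print(antiaromatic(4))     # cyclobutadiene, 4 pi electrons -> True
```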

Heterocyclic compounds

The characteristics of the cyclic hydrocarbons are again altered if heteroatoms are present, which can exist as either substituents attached externally to the ring (exocyclic) or as a member of the ring itself (endocyclic). In the case of the latter, the ring is termed a heterocycle. Pyridine and furan are examples of aromatic heterocycles while piperidine and tetrahydrofuran are the corresponding alicyclic heterocycles. The heteroatom of heterocyclic molecules is generally oxygen, sulfur, or nitrogen, with the latter being particularly common in biochemical systems.

Examples of groups among the heterocyclics are the aniline dyes, the great majority of the compounds discussed in biochemistry such as alkaloids, many compounds related to vitamins, steroids, nucleic acids (e.g. DNA, RNA) and also numerous medicines. Heterocyclics with relatively simple structures are pyrrole (a 5-membered ring) and indole (a pyrrole fused to a 6-membered carbon ring).

Rings can fuse with other rings on an edge to give polycyclic compounds. The purine nucleoside bases are notable polycyclic aromatic heterocycles. Rings can also fuse on a "corner" such that one atom (almost always carbon) has two bonds going to one ring and two to another. Such compounds are termed spiro and are important in a number of natural products.
_________________________________________

Organic farming

   
Organic farming is a form of agriculture that relies on techniques such as crop rotation, green manure, compost, and biological pest control. Depending on whose definition is used, organic farming uses fertilizers and pesticides (which include herbicides, insecticides, and fungicides) if they are considered natural (such as bone meal from animals or pyrethrin from flowers), but it excludes or strictly limits the use of various other methods (including synthetic petrochemical fertilizers and pesticides; plant growth regulators such as hormones; antibiotic use in livestock; genetically modified organisms;[1] human sewage sludge; and nanomaterials[2]) for reasons including sustainability, openness, independence, health, and safety.

Organic agricultural methods are internationally regulated and legally enforced by many nations, based in large part on the standards set by the International Federation of Organic Agriculture Movements (IFOAM), an international umbrella organization for organic farming organizations established in 1972.[3] The USDA National Organic Standards Board (NOSB) definition as of April 1995 is:
"Organic agriculture is an ecological production management system that promotes and enhances biodiversity, biological cycles and soil biological activity. It is based on minimal use of off-farm inputs and on management practices that restore, maintain and enhance ecological harmony."[4]
Since 1990 the market for organic food and other products has grown rapidly, reaching $63 billion worldwide in 2012.[5]:25 This demand has driven a similar increase in organically managed farmland which has grown over the years 2001-2011 at a compounding rate of 8.9% per annum.[6] As of 2011, approximately 37,000,000 hectares (91,000,000 acres) worldwide were farmed organically, representing approximately 0.9 percent of total world farmland (2009).[7]
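As a quick sanity check on the growth figure, compounding 8.9% per annum over the ten years 2001-2011 multiplies the organically managed area by roughly 2.3x:

```python
rate = 0.089            # compounding growth rate per annum
years = 2011 - 2001     # ten annual periods
growth = (1 + rate) ** years
print(f"{growth:.2f}x")  # roughly 2.35x over the decade
```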

Organic farming systems

There are several organic farming systems. Biodynamic farming is a comprehensive approach, with its own international governing body. The Do Nothing Farming method focuses on a minimum of mechanical cultivation and labor for grain crops. French intensive and biointensive methods are well-suited to organic principles. Other examples of techniques are holistic management, permaculture, SRI, and no-till farming (the last two of which may be implemented in conventional or organic systems[23][24]).

Methods

Organic cultivation of mixed vegetables in Capay, California. Note the hedgerow in the background.
"An organic farm, properly speaking, is not one that uses certain methods and substances and avoids others; it is a farm whose structure is formed in imitation of the structure of a natural system that has the integrity, the independence and the benign dependence of an organism"
Wendell Berry, "The Gift of Good Land"
Organic farming methods combine scientific knowledge of ecology and modern technology with traditional farming practices based on naturally occurring biological processes. Organic farming methods are studied in the field of agroecology. While conventional agriculture uses synthetic pesticides and water-soluble synthetically purified fertilizers, organic farmers are restricted by regulations to using natural pesticides and fertilizers. The principal methods of organic farming include crop rotation, green manures and compost, biological pest control, and mechanical cultivation. These measures use the natural environment to enhance agricultural productivity: legumes are planted to fix nitrogen into the soil, natural insect predators are encouraged, crops are rotated to confuse pests and renew soil, and natural materials such as potassium bicarbonate[25] and mulches are used to control disease and weeds. Hardier plants are generated through plant breeding rather than genetic engineering.

While organic farming is fundamentally different from conventional farming, relying on carbon-based fertilizers rather than highly soluble synthetic fertilizers and on biological pest control instead of synthetic pesticides, organic farming and large-scale conventional farming are not entirely mutually exclusive. Many of the methods developed for organic agriculture have been borrowed by more conventional agriculture. For example, Integrated Pest Management is a multifaceted strategy that uses various organic methods of pest control whenever possible, but in conventional farming could include synthetic pesticides, only as a last resort.[26]

Crop diversity

Crop diversity is a distinctive characteristic of organic farming. Conventional farming focuses on mass production of one crop in one location, a practice called monoculture. The science of agroecology has revealed the benefits of polyculture (multiple crops in the same space), which is often employed in organic farming.[27] Planting a variety of vegetable crops supports a wider range of beneficial insects, soil microorganisms, and other factors that add up to overall farm health. Crop diversity helps environments thrive and protects species from extinction.[28]

Soil management

Organic farming relies heavily on the natural breakdown of organic matter, using techniques like green manure and composting, to replace nutrients taken from the soil by previous crops. This biological process, driven by microorganisms such as mycorrhiza, allows the natural production of nutrients in the soil throughout the growing season, and has been referred to as feeding the soil to feed the plant. Organic farming uses a variety of methods to improve soil fertility, including crop rotation, cover cropping, reduced tillage, and application of compost. By reducing tillage, soil is not inverted and exposed to air, so less carbon is lost to the atmosphere, resulting in more soil organic carbon. This has the added benefit of carbon sequestration, which can reduce greenhouse gases and help mitigate climate change.

Plants need nitrogen, phosphorus, and potassium, as well as micronutrients and symbiotic relationships with fungi and other organisms to flourish, but getting enough nitrogen, and particularly synchronization so that plants get enough nitrogen at the right time (when plants need it most), is a challenge for organic farmers.[29] Crop rotation and green manure ("cover crops") help to provide nitrogen through legumes (more precisely, the Fabaceae family) which fix nitrogen from the atmosphere through symbiosis with rhizobial bacteria. Intercropping, which is sometimes used for insect and disease control, can also increase soil nutrients, but the competition between the legume and the crop can be problematic and wider spacing between crop rows is required. Crop residues can be ploughed back into the soil, and different plants leave different amounts of nitrogen, potentially aiding synchronization.[29] Organic farmers also use animal manure, certain processed fertilizers such as seed meal and various mineral powders such as rock phosphate and greensand, a naturally occurring form of potash which provides potassium. Together these methods help to control erosion. In some cases pH may need to be amended. Natural pH amendments include lime and sulfur, but in the U.S. some compounds such as iron sulfate, aluminum sulfate, magnesium sulfate, and soluble boron products are allowed in organic farming.[30]:43

Mixed farms with both livestock and crops can operate as ley farms, whereby the land gathers fertility through growing nitrogen-fixing forage grasses such as white clover or alfalfa and grows cash crops or cereals when fertility is established. Farms without livestock ("stockless") may find it more difficult to maintain soil fertility, and may rely more on external inputs such as imported manure as well as grain legumes and green manures, although grain legumes may fix limited nitrogen because they are harvested. Horticultural farms growing fruits and vegetables which operate in protected conditions are often even more reliant upon external inputs.[29]

Biological research into soil and soil organisms has proven beneficial to organic farming. Varieties of bacteria and fungi break down chemicals, plant matter and animal waste into productive soil nutrients. In turn, they produce the benefits of healthier yields and more productive soil for future crops.[31] Fields with little or no added manure display significantly lower yields, due to a decreased soil microbe community; added organic matter supports a healthier, more arable soil system.[32]

Weed management

Organic weed management promotes weed suppression, rather than weed elimination, by enhancing crop competition and phytotoxic effects on weeds.[33] Organic farmers integrate cultural, biological, mechanical, physical and chemical tactics to manage weeds without synthetic herbicides.

Organic standards require rotation of annual crops,[34] meaning that a single crop cannot be grown in the same location without a different, intervening crop. Organic crop rotations frequently include weed-suppressive cover crops and crops with dissimilar life cycles to discourage weeds associated with a particular crop.[33] Research is ongoing to develop organic methods to promote the growth of natural microorganisms that suppress the growth or germination of common weeds.[35]

Other cultural practices used to enhance crop competitiveness and reduce weed pressure include selection of competitive crop varieties, high-density planting, tight row spacing, and late planting into warm soil to encourage rapid crop germination.[33]

Mechanical and physical weed control practices used on organic farms can be broadly grouped as:[36]
  • Tillage - Turning the soil between crops to incorporate crop residues and soil amendments; remove existing weed growth and prepare a seedbed for planting; turning soil after seeding to kill weeds, including cultivation of row crops;
  • Mowing and cutting - Removing top growth of weeds;
  • Flame weeding and thermal weeding - Using heat to kill weeds; and
  • Mulching - Blocking weed emergence with organic materials, plastic films, or landscape fabric.[37]
Some critics, citing work published in 1997 by David Pimentel of Cornell University,[38] which described an epidemic of soil erosion worldwide, have raised concerns that tillage contributes to this epidemic.[39] The FAO and other organizations have advocated a "no-till" approach to both conventional and organic farming, and point out in particular that crop rotation techniques used in organic farming are excellent no-till approaches.[39][40] A study published in 2005 by Pimentel and colleagues[41] confirmed that "Crop rotations and cover cropping (green manure) typical of organic agriculture reduce soil erosion, pest problems, and pesticide use." Some naturally sourced chemicals are allowed for herbicidal use. These include certain formulations of acetic acid (concentrated vinegar), corn gluten meal, and essential oils. A few selective bioherbicides based on fungal pathogens have also been developed. At this time, however, organic herbicides and bioherbicides play a minor role in the organic weed control toolbox.[36]

Weeds can be controlled by grazing. For example, geese have been used successfully to weed a range of organic crops including cotton, strawberries, tobacco, and corn,[42] reviving the practice of keeping cotton patch geese, common in the southern U.S. before the 1950s. Similarly, some rice farmers introduce ducks and fish to wet paddy fields to eat both weeds and insects.[43]

Controlling other organisms

Chloroxylon is used for pest management in organic rice cultivation in Chhattisgarh, India
 
Organisms aside from weeds that cause problems on organic farms include arthropods (e.g., insects, mites), nematodes, fungi and bacteria. One key organic practice is the encouragement of beneficial predatory insects.
Examples of predatory beneficial insects include minute pirate bugs, big-eyed bugs, and to a lesser extent ladybugs (which tend to fly away), all of which eat a wide range of pests. Lacewings are also effective, but tend to fly away. Praying mantises tend to move more slowly and eat less heavily. Parasitoid wasps tend to be effective for their selected prey, but like all small insects can be less effective outdoors because the wind controls their movement. Predatory mites are effective for controlling other mites.[30]:66–90

Naturally derived insecticides allowed for use on organic farms include Bacillus thuringiensis (a bacterial toxin), pyrethrum (a chrysanthemum extract), spinosad (a bacterial metabolite), neem (a tree extract) and rotenone (a legume root extract). Fewer than 10% of organic farmers use these pesticides regularly; one survey found that only 5.3% of vegetable growers in California use rotenone while 1.7% use pyrethrum.[45]:26 These pesticides are not always safer or more environmentally friendly than synthetic pesticides, and can cause harm.[30]:92 The main criterion for organic pesticides is that they are naturally derived, and some naturally derived substances have been controversial. Controversial natural pesticides include rotenone, copper, nicotine sulfate, and pyrethrums.[46][47] Rotenone and pyrethrum are particularly controversial because they work by attacking the nervous system, like most conventional insecticides. Rotenone is extremely toxic to fish[48] and can induce symptoms resembling Parkinson's disease in mammals.[49][50] Although pyrethrum (natural pyrethrins) is more effective against insects when used with piperonyl butoxide (which retards degradation of the pyrethrins),[51] organic standards generally do not permit use of the latter substance.[52][53][54]

Naturally derived fungicides allowed for use on organic farms include the bacteria Bacillus subtilis and Bacillus pumilus; and the fungus Trichoderma harzianum. These are mainly effective for diseases affecting roots. Compost tea contains a mix of beneficial microbes, which may attack or out-compete certain plant pathogens,[55] but variability among formulations and preparation methods may contribute to inconsistent results or even dangerous growth of toxic microbes in compost teas.[56]
Some naturally derived pesticides are not allowed for use on organic farms. These include nicotine sulfate, arsenic, and strychnine.[57]

Synthetic pesticides allowed for use on organic farms include insecticidal soaps and horticultural oils for insect management; and Bordeaux mixture, copper hydroxide and sodium bicarbonate for managing fungi.[57] Copper sulfate and Bordeaux mixture (copper sulfate plus lime), approved for organic use in various jurisdictions,[52][53][57] can be more environmentally problematic than some synthetic fungicides disallowed in organic farming.[58][59] Similar concerns apply to copper hydroxide. Repeated application of copper sulfate or copper hydroxide as a fungicide may eventually result in copper accumulation to toxic levels in soil,[60] and admonitions to avoid excessive accumulations of copper in soil appear in various organic standards and elsewhere. Environmental concerns for several kinds of biota arise at average rates of use of such substances for some crops.[61] In the European Union, where replacement of copper-based fungicides in organic agriculture is a policy priority,[62] research is seeking alternatives for organic production.[63]

Livestock

For livestock such as these healthy cows, vaccines play an important part in animal health, since antibiotic therapy is prohibited in organic farming

Raising livestock and poultry, for meat, dairy and eggs, is another traditional farming activity that complements growing. Organic farms attempt to provide animals with natural living conditions and feed. While the USDA does not require that any animal welfare standards be met for a product to be marketed as organic, this is a departure from older organic farming practices.[64]

Horses and cattle also used to be a basic farm feature, providing labor (for hauling and plowing), fertility (through recycling of manure), and fuel (in the form of food for farmers and other animals). While small growing operations today often do not include livestock, domesticated animals are a desirable part of the organic farming equation, especially for true sustainability: the ability of a farm to function as a self-renewing unit.

Genetic modification

 
A key characteristic of organic farming is the rejection of genetically engineered plants and animals. On October 19, 1998, participants at IFOAM's 12th Scientific Conference issued the Mar del Plata Declaration, where more than 600 delegates from over 60 countries voted unanimously to exclude the use of genetically modified organisms in food production and agriculture.

Although opposition to the use of any transgenic technologies in organic farming is strong, agricultural researchers Luis Herrera-Estrella and Ariel Alvarez-Morales continue to advocate integration of transgenic technologies into organic farming as the optimal means to sustainable agriculture, particularly in the developing world,[65] as does author and scientist Pamela Ronald, who views this kind of biotechnology as being consistent with organic principles.[66]

Although GMOs are excluded from organic farming, there is concern that the pollen from genetically modified crops is increasingly penetrating organic and heirloom seed stocks, making it difficult, if not impossible, to keep these genomes from entering the organic food supply. Differing regulations among countries limit the availability of GMOs to certain countries, as described in the article on regulation of the release of genetically modified organisms.

Standards

Standards regulate production methods and in some cases final output for organic agriculture. Standards may be voluntary or legislated. As early as the 1970s private associations certified organic producers. In the 1980s, governments began to produce organic production guidelines. In the 1990s, a trend toward legislated standards began, most notably with the 1991 EU-Eco-regulation developed for the European Union,[67] which set standards for 12 countries, and a 1993 UK program. The EU's program was followed by a Japanese program in 2001, and in 2002 the U.S. created the National Organic Program (NOP).[68] As of 2007 over 60 countries regulate organic farming (IFOAM 2007:11). In 2005 IFOAM created the Principles of Organic Agriculture, an international guideline for certification criteria.[69] Typically the agencies accredit certification groups rather than individual farms.

Materials used in organic production, and organic foods, are tested independently by the Organic Materials Review Institute.[70]

Composting

Under USDA organic standards, manure must be subjected to proper thermophilic composting and allowed to reach a sterilizing temperature. If raw animal manure is used, 120 days must pass before the crop is harvested if the final product comes into direct contact with the soil. For products which do not come into direct contact with soil, 90 days must pass prior to harvest.[71]
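The harvest intervals above are simple date arithmetic. A minimal sketch in Python (the function name and crop example are illustrative, not part of the USDA standard):

```python
from datetime import date, timedelta

def earliest_harvest(manure_applied: date, crop_touches_soil: bool) -> date:
    """Earliest harvest date under the raw-manure rule described above:
    120 days if the final product contacts the soil, 90 days otherwise."""
    interval = 120 if crop_touches_soil else 90
    return manure_applied + timedelta(days=interval)

# Raw manure applied March 1 to a soil-contact crop such as carrots:
print(earliest_harvest(date(2014, 3, 1), True))  # 2014-06-29
```

The composting requirement (thermophilic treatment reaching a sterilizing temperature) is separate: only if that step is skipped and raw manure is used do these waiting periods apply.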
 

Inflation (cosmology)


From Wikipedia, the free encyclopedia:  http://en.wikipedia.org/wiki/Inflation_(cosmology)
 
Evidence of gravitational waves in the infant universe may have been uncovered by the BICEP2 radio telescope.[1][2][3][4]
 
In physical cosmology, cosmic inflation, cosmological inflation, or just inflation is the exponential expansion of space in the early universe. The inflationary epoch lasted from 10^-36 seconds after the Big Bang to sometime between 10^-33 and 10^-32 seconds. Following the inflationary period, the universe continued to expand, but at a slower rate.
 
The inflationary hypothesis was developed in the 1980s by physicists Alan Guth and Andrei Linde.[5]
Inflation explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the universe (see galaxy formation and evolution and structure formation).[6] Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.
 
While the detailed particle physics mechanism responsible for inflation is not known, the basic picture makes a number of predictions that have been confirmed by observation.[7][8] The hypothetical field thought to be responsible for inflation is called the inflaton.[9]
 
On 17 March 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation.[1][2][3][4][10][11] However, on 19 June 2014, lowered confidence in confirming the findings was reported.[10][12][13]

Overview

An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of the Earth's surface, marks the boundary of the part of the universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon never reaches the observer, because the space in between the observer and the object is expanding too rapidly.
 
History of the Universe - gravitational waves are hypothesized to arise from cosmic inflation, a faster-than-light expansion just after the Big Bang (17 March 2014).[1][2][3]
 
The observable universe is one causal patch of a much larger unobservable universe; there are parts of the universe that cannot communicate with us yet. These parts of the universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees these regions for the first time, they look no different from any other region of space the local observer has already seen: they have a background radiation that is at nearly exactly the same temperature as the background radiation of other regions, and their space-time curvature is evolving lock-step with ours. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not in communication with our past light cone before.[14][15]
 
Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous very quickly.
 
As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero, and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are necessarily at nearly the same temperature and curvature, because they come from the same little patch of space.
 
The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter, and residual vacuum energy in the universe have to add up to the critical density, and the evidence strongly supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed.[16][17]

Space expands

To say that space expands exponentially means that two inertial observers are moving farther apart with accelerating velocity. In stationary coordinates for one observer, a patch of an inflating universe has the following polar metric:[18][19]

ds^2 = −(1 − Λr^2) dt^2 + dr^2/(1 − Λr^2) + r^2 dΩ^2
This is just like an inside-out black hole metric—it has a zero in the dt component on a fixed radius sphere called the cosmological horizon. Objects are drawn away from the observer at r=0 towards the cosmological horizon, which they cross in a finite proper time. This means that any inhomogeneities are smoothed out, just as any bumps or matter on the surface of a black hole horizon are swallowed and disappear.

Since the space–time metric has no explicit time dependence, once an observer has crossed the cosmological horizon, observers closer in take its place. In this process of falling outward, points closer in steadily replace points farther out: an exponential expansion of space–time.

This steady-state exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy proportional to Λ everywhere. In this case, the equation of state is p = −ρ. The physical conditions from one moment to the next are stable: the rate of expansion, called the Hubble parameter, is nearly constant, and the scale factor of the universe is proportional to e^(Ht). Inflation is often called a period of accelerated expansion because the distance between two fixed observers is increasing exponentially (i.e. at an accelerating rate as they move apart), while Λ can stay approximately constant (see deceleration parameter).
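The exponential growth of separations with a constant Hubble parameter can be illustrated numerically. A toy sketch in arbitrary units (H = 1 and the initial separation are illustrative values, not physical ones):

```python
import math

H = 1.0  # Hubble parameter during inflation (arbitrary units); constant in de Sitter space

def separation(d0, t):
    """Distance between two comoving observers initially d0 apart, after time t,
    when the scale factor grows as e^(H*t)."""
    return d0 * math.exp(H * t)

# Each unit of time multiplies every distance by the same factor e^H,
# so the recession speed H*d(t) keeps accelerating:
d1 = separation(1.0, 1.0)
d2 = separation(1.0, 2.0)
print(d2 / d1)  # e ≈ 2.718..., independent of when the interval starts
```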

Few inhomogeneities remain

Cosmological inflation has the important effect of smoothing out inhomogeneities, anisotropies and the curvature of space. This pushes the universe into a very simple state, in which it is completely dominated by the inflaton field, the source of the cosmological constant, and the only significant inhomogeneities are the tiny quantum fluctuations in the inflaton. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem"[20] by analogy with the no hair theorem for black holes.

The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for philosophical disagreements about what is on the other side. The interpretation of the no-hair theorem is that the universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins.[21]
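The dilution factors in this paragraph (eight for matter and sixteen for radiation when linear dimensions double, no dilution for the inflaton) follow directly from how each energy density scales with linear size. A minimal sketch (the function and names are illustrative):

```python
def dilution(component, growth):
    """Factor by which the energy density of a component falls when linear
    dimensions grow by `growth`: matter dilutes with volume (a^3), radiation
    as a^4 (volume plus redshift), and vacuum energy not at all."""
    exponent = {"matter": 3, "radiation": 4, "vacuum": 0}[component]
    return growth ** exponent

# Doubling linear dimensions, as in the paragraph above:
print(dilution("matter", 2))     # 8  -> matter density falls by a factor of eight
print(dilution("radiation", 2))  # 16 -> radiation falls by a factor of sixteen
print(dilution("vacuum", 2))     # 1  -> the inflaton's energy density stays constant
```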

Key requirement

A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the universe expanded by a factor of at least 10^26 during inflation.[22]
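Cosmologists often express this requirement in "e-folds", the number N for which e^N equals the total expansion factor; a factor of 10^26 corresponds to roughly 60 e-folds:

```python
import math

expansion = 1e26          # minimum expansion factor quoted above
N = math.log(expansion)   # number of e-folds, since the scale factor grows as e^N
print(round(N, 1))        # 59.9, i.e. roughly 60 e-folds
```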

Motivations

Inflation resolves several problems in the Big Bang cosmology that were discovered in the 1970s.[26] Inflation was first proposed by Guth while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the universe would have to have started from very finely tuned, or "special", initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory.

Horizon problem

The horizon problem is the problem of determining why the universe appears statistically homogeneous and isotropic in accordance with the cosmological principle.[27][28][29] For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light—thus have never come into causal contact: in the history of the universe, back to the earliest times, it has not been possible to send a light signal between the two regions. Because they have no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). This is because the Hubble radius in a radiation or matter-dominated universe expands much more quickly than physical lengths and so points that are out of communication are coming into communication. Historically, two proposed solutions were the Phoenix universe of Georges Lemaître[30] and the related oscillatory universe of Richard Chase Tolman,[31] and the Mixmaster universe of Charles Misner.[28][32] Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the universe more chaotic, could lead to statistical homogeneity and isotropy.

Flatness problem

Another problem is the flatness problem (which is sometimes called one of the Dicke coincidences, with the other being the cosmological constant problem).[33][34] It had been known in the 1960s that the density of matter in the universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry).[35]:61

Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the universe could not be much greater than the contribution of matter. But as the universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at big bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the universe is flat to the accuracy of a few percent.[36]

Magnetic-monopole problem

The magnetic monopole problem (sometimes called the exotic-relics problem) says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would be produced. This is a problem with Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force and the strong and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory.[37] These theories predict a number of heavy, stable particles that have not yet been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "knot" in the magnetic field.[38][39] Monopoles are expected to be copiously produced in Grand Unified Theories at high temperature,[40][41] and they should have persisted to the present day, to such an extent that they would become the primary constituent of the universe.[42][43] Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the universe.[44] A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: monopoles would be separated from each other as the universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written, "Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!"[45]

Reheating

Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model dependent, but in the first models it was typically from 10^27 K down to 10^22 K.[23]) This relatively low temperature is maintained during the inflationary phase. When inflation ends the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the universe with Standard Model particles, including electromagnetic radiation, starting the radiation-dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance.[24][25]
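A quick consistency check of the temperatures in this paragraph (model-dependent values, as noted above):

```python
t_before = 1e27  # K, pre-inflationary temperature in the first models
t_during = 1e22  # K, during the supercooled inflationary phase

print(round(t_before / t_during))  # 100000, the "factor of 100,000 or so"
```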

Spin-based electronics: New material successfully tested


Jul 30, 2014
From:http://phys.org/news/2014-07-spin-based-electronics-material-successfully.html

Spintronics is an emerging field of electronics, where devices work by manipulating the spin of electrons rather than the current generated by their motion. This field can offer significant advantages to computer technology. Controlling electron spin can be achieved with materials called 'topological insulators', which conduct electrons only across their surface but not through their interior. One such material, samarium hexaboride (SmB6), has long been theorized to be an ideal and robust topological insulator, but this has never been shown practically. Publishing in Nature Communications, scientists from the Paul Scherrer Institute, the IOP (Chinese Academy of Science) and Hugo Dil's team at EPFL, have demonstrated experimentally, for the first time, that SmB6 is indeed a topological insulator.

Electronic technologies in the future could utilize an intrinsic property of electrons called spin, which is what gives them their magnetic moment. Spin can take either of two possible states: "up" or "down", which can be pictured respectively as clockwise or counter-clockwise rotation of the electron around its axis.

Spin control can be achieved with materials called topological insulators, which can conduct spin-polarized electrons across their surface with 100% efficiency while the interior acts as an insulator.
However, topological insulators are still in the experimental phase. One particular insulator, samarium hexaboride (SmB6), has been of great interest. Unlike other topological insulators, SmB6's insulating properties are based on a special phenomenon called the 'Kondo effect'. The Kondo effect prevents the flow of electrons from being destroyed by irregularities in the material's structure, making SmB6 a very robust and efficient topological 'Kondo' insulator.

Scientists from the Paul Scherrer Institute (PSI), the Institute of Physics (Chinese Academy of Science) and Hugo Dil's team at EPFL have now shown experimentally that samarium hexaboride (SmB6) is the first topological Kondo insulator. In experiments carried out at the PSI, the researchers illuminated samples of SmB6 with a special type of light called 'synchrotron radiation'. The energy of this light was transferred to electrons in SmB6, causing them to be ejected from it. The properties of the ejected electrons (including their spin) were measured with a detector, which gave clues about how the electrons behaved while they were still on the surface of SmB6. The data showed consistent agreement with the predictions for a topological insulator.

"The only real verification that SmB6 is a topological Kondo insulator comes from directly measuring the spin and how it's affected in a Kondo insulator", says Hugo Dil. Although SmB6 shows insulating behavior only at very low temperatures, the experiments provide a proof of principle and, more importantly, show that topological Kondo insulators actually exist, offering an exciting stepping-stone into a new era of technology.


More information: Nature Communications, 30 Jul 2014 DOI: 10.1038/ncomms5566
Journal reference: Nature Communications


Chemists demonstrate 'bricks-and-mortar' assembly of new molecular structures


Jul 31, 2014
From:  http://phys.org/news/2014-07-chemists-bricks-and-mortar-molecular.html
This artwork will appear on the cover of Chemical Communications. It depicts the cyanostar molecules moving in solution, ordering on the surface, and stacking by anion binding; imaging of the surface structure is performed by scanning tunneling microscopy.
Chemists at Indiana University Bloomington have described the self-assembly of large, symmetrical molecules in bricks-and-mortar fashion, a development with potential value for the field of organic electronic devices such as field-effect transistors and photovoltaic cells.

Their paper, "Anion-Induced Dimerization of 5-fold Symmetric Cyanostars in 3D Crystalline Solids and 2D Self-Assembled Crystals," has been published online by Chemical Communications, a journal of the Royal Society of Chemistry. It is the first collaboration by Amar Flood, the James F. Jackson Associate Professor of Chemistry, and Steven L. Tait, assistant professor of chemistry. Both are in the materials chemistry program in the IU Bloomington Department of Chemistry, part of the College of Arts and Sciences.

The article will appear as the cover article of an upcoming issue of the journal. The cover illustration was created by Albert William, a lecturer in the media arts and science program of the School of Informatics and Computing at Indiana University-Purdue University Indianapolis. William specializes in using advanced graphics and animation to convey complex scientific concepts.

Lead author of the paper is Brandon Hirsch, who earned the cover by winning a poster contest at the fall 2013 meeting of the International Symposium on Macrocyclic and Supramolecular Chemistry. Co-authors, along with Flood and Tait, include doctoral students Semin Lee, Bo Qiao and Kevin P. McDonald and research scientist Chun-Hsing Chen.

The researchers demonstrate the self-assembly and packing of a five-sided, symmetrical molecule, called cyanostar, that was developed by Flood's IU research team. While researchers have created many such large, cyclic molecules, or macrocycles, cyanostar is unusual in that it can be readily synthesized in a "one pot" process. It also has an unprecedented ability to bind with large, negatively charged anions such as perchlorate.

"This great piece of work, with state-of-the-art studies of the assembly of some beautiful compounds pioneered by the group in Indiana, shows how anions can help organize molecules that could have very interesting properties," said David Amabilino, nanomaterials group leader at the Institute of Materials Science of Barcelona. "Symmetry is all important when molecules pack together, and here the supramolecular aspects of these compounds with a very particular shape present tantalizing possibilities. This research is conceptually extremely novel and really interdisciplinary: It has really unveiled how anions could help pull molecules together to behave in completely new ways."
The paper describes how cyanostar molecules bind with anions in 2-to-1 sandwich-like complexes, with anions sandwiched between two saucer-shaped cyanostars. The study shows the packing of the molecules in repeating patterns reminiscent of the two-dimensional packing of pentagons shown by artist Albrecht Dürer in 1525. It further shows the packing to take place not only at the surface of materials but also away from it.

The future of organic electronics will rely upon packing molecules onto electrode surfaces, yet it has been challenging to get packing of the molecules away from the surface, Tait and Flood said. With this paper, they present a collaborative effort, combining their backgrounds in traditionally distinct fields of chemistry, as a new foray to achieve this goal using a bricks-and-mortar approach.

The paper relies on two complementary technologies that provide high-resolution images of molecules:
  • X-ray crystallography, which is being celebrated worldwide for its invention 100 years ago, can provide images of molecules from analysis of the three-dimensional crystalline solids.
  • Scanning tunneling microscopy, or STM, developed in 1981, shows two-dimensional packing of molecules immobilized on a surface.
The results are distinct, with submolecular views of the star-shaped molecules that are a few nanometers in diameter. (A human hair is about 100,000 nanometers thick.)



 Read more at: http://phys.org/news/2014-07-chemists-bricks-and-mortar-molecular.html#jCp

Nanostructured metal-oxide catalyst efficiently converts CO2 to methanol

Jul 31, 2014
Scanning tunneling microscope image of a cerium-oxide and copper catalyst (CeOx-Cu) used in the transformation of carbon dioxide (CO2) and hydrogen (H2) gases to methanol (CH3OH) and water (H2O). In the presence of hydrogen, the Ce4+ and Cu+1 …
Scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory have discovered a new catalytic system for converting carbon dioxide (CO2) to methanol, a key commodity used to create a wide range of industrial chemicals and fuels. With significantly higher activity than other catalysts now in use, the new system could make it easier to get normally unreactive CO2 to participate in these reactions.

"Developing an effective for synthesizing methanol from CO2 could greatly expand the use of this abundant gas as an economical feedstock," said Brookhaven chemist Jose Rodriguez, who led the research. It's even possible to imagine a future in which such catalysts help mitigate the accumulation of this greenhouse gas, by capturing CO2 emitted from methanol-powered combustion engines and fuel cells, and recycling it to synthesize new fuel.

That future, of course, will be determined by a variety of factors, including economics. "Our basic research studies are focused on the science: the discovery of how such catalysts work, and the use of this knowledge to improve their activity and selectivity," Rodriguez emphasized.
The research team, which included scientists from Brookhaven, the University of Seville in Spain, and Central University of Venezuela, describes their results in the August 1, 2014, issue of the journal Science.
New tools for discovery

Because CO2 is normally such a reluctant participant in chemical reactions, interacting weakly with most catalysts, it's also rather difficult to study. These studies required the use of newly developed in-situ (or on-site, meaning under reaction conditions) imaging and chemical "fingerprinting" techniques. These techniques allowed the scientists to peer into the dynamic evolution of a variety of catalysts as they operated in real time. The scientists also used computational modeling at the University of Seville and the Barcelona Supercomputing Center to provide a molecular description of the methanol synthesis mechanism.

The team was particularly interested in exploring a catalyst composed of copper and ceria (cerium-oxide) nanoparticles, sometimes also mixed with titania. The scientists' previous studies with such metal-oxide nanoparticle catalysts have demonstrated their exceptional reactivity in a variety of reactions. In those studies, the interfaces of the two types of nanoparticles turned out to be critical to the reactivity of the catalysts, with highly reactive sites forming at regions where the two phases meet.

To explore the reactivity of such dual-particle catalytic systems in converting CO2 to methanol, the scientists used spectroscopic techniques to investigate the interaction of CO2 with plain copper, plain cerium-oxide, and cerium-oxide/copper surfaces at a range of reaction temperatures and pressures. Chemical fingerprinting was combined with computational modeling to reveal the most probable progression of intermediates as the reaction from CO2 to methanol proceeded.

These studies revealed that the metal component of the catalysts alone could not carry out all the chemical steps necessary for the production of methanol. The most effective binding and activation of CO2 occurred at the interfaces between metal and oxide nanoparticles in the cerium-oxide/copper catalytic system.

"The key active sites for the chemical transformations involved atoms from the metal [copper] and oxide [ceria or ceria/titania] phases," said Jesus Graciani, a chemist from the University of Seville and first author on the paper. The resulting catalyst converts CO2 to methanol more than a thousand times faster than plain copper particles, and almost 90 times faster than a common copper/zinc-oxide catalyst currently in industrial use.

This study illustrates the substantial benefits that can be obtained by properly tuning the properties of a metal-oxide interface in catalysts for methanol synthesis.

"It is a very interesting step, and appears to create a new strategy for the design of highly active catalysts for the synthesis of alcohols and related molecules," said Brookhaven Lab Chemistry Department Chair Alex Harris.

More information: www.sciencemag.org/lookup/doi/… 1126/science.1253057

Journal reference: Science

Read more at: http://phys.org/news/2014-07-nanostructured-metal-oxide-catalyst-efficiently-co2.html#jCp

Scientists develop pioneering new spray-on solar cells


Aug 01, 2014 by Hannah Postles
Link:  http://phys.org/news/2014-08-scientists-spray-on-solar-cells.html
An artist's impression of spray-coating glass with the polymer to create a solar cell
(Phys.org) —A team of scientists at the University of Sheffield are the first to fabricate perovskite solar cells using a spray-painting process – a discovery that could help cut the cost of solar electricity.


Experts from the University's Department of Physics and Astronomy and Department of Chemical and Biological Engineering have previously used the spray-painting method to produce solar cells using organic semiconductors - but using perovskite is a major step forward.
Efficient organometal halide perovskite based photovoltaics were first demonstrated in 2012. They are now a very promising new material for solar cells as they combine high efficiency with low materials costs.
The spray-painting process wastes very little of the perovskite material and can be scaled to high volume manufacturing – similar to applying paint to cars and graphic printing.
Lead researcher Professor David Lidzey said: "There is a lot of excitement around perovskite based photovoltaics.
"Remarkably, this class of material offers the potential to combine the high performance of mature solar cell technologies with the low embedded energy costs of production of organic photovoltaics."
While most solar cells are manufactured using energy-intensive materials like silicon, perovskites, by comparison, require much less energy to make. By spray-painting the perovskite layer in air, the team hopes the overall energy used to make a solar cell can be reduced further.
 
Professor Lidzey said: "The best certified efficiencies from organic solar cells are around 10 per cent.
"Perovskite cells now have efficiencies of up to 19 per cent. This is not so far behind that of silicon at 25 per cent - the material that dominates the world-wide solar market."
He added: "The perovskite devices we have created still use similar structures to organic cells. What we have done is replace the key light absorbing layer - the organic layer - with a spray-painted perovskite.
"Using a perovskite absorber instead of an organic absorber gives a significant boost in terms of efficiency."
The Sheffield team found that by spray-painting the perovskite they could make prototype solar cells with efficiencies of up to 11 per cent.

Professor Lidzey said: "This study advances existing work where the perovskite layer has been deposited from solution using laboratory scale techniques. It's a significant step towards efficient, low-cost solar cell devices made using high volume roll-to-roll processing methods."
Solar power is becoming an increasingly important component of the worldwide renewable energy market and continues to grow at a remarkable rate despite the difficult economic environment.
Professor Lidzey said: "I believe that new thin-film photovoltaic technologies are going to have an important role to play in driving the uptake of solar power, and that perovskite-based cells are emerging as likely thin-film candidates."

Read more at: http://phys.org/news/2014-08-scientists-spray-on-solar-cells.html#jCp

Big data confirms climate extremes are here to stay


Jul 30, 2014
Original Link:  http://phys.org/news/2014-07-big-climate-extremes.html

In a paper published online today in the journal Scientific Reports, published by Nature, Northeastern researchers Evan Kodra and Auroop Ganguly found that while global temperature is indeed increasing, so too is the variability in temperature extremes. For instance, while each year's average hottest and coldest temperatures will likely rise, those averages will also tend to fall within a wider range of potential high and low temperature extremes than are currently being observed. This means that even as overall temperatures rise, we may still continue to experience extreme cold snaps, said Kodra.

"Just because you have a year that's colder than the usual over the last decade isn't a rejection of the hypothesis," Kodra explained.

With funding from a $10-million multi-university Expeditions in Computing grant, the duo used computational tools from big data science for the first time in order to extract nuanced insights about climate extremes.

The research also opens new areas of interest for future work, both in climate and data science. It suggests that the natural processes that drive weather anomalies today could continue to do so in a warming future. For instance, the team speculates that ice melt in hotter years may cause colder subsequent winters, but these hypotheses can only be confirmed in physics-based studies.

The study used simulations from the most recent climate models developed by groups around the world for the Intergovernmental Panel on Climate Change and "reanalysis data sets," which are generated by blending the best available weather observations with numerical weather models. The team combined a suite of methods in a relatively new way to characterize extremes and explain how their variability is influenced by things like the seasons, geographical region, and the land-sea interface. The analysis of multiple climate model runs and reanalysis data sets was necessary to account for uncertainties in the physics and model imperfections.
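The core idea in the methodology above — extracting each year's hottest and coldest days ("block extremes") and then examining how their spread changes over time — can be illustrated with a toy computation on synthetic data. Everything below, including the warming and variability trends, is an assumption for illustration only, not the study's actual data or methods:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, days_per_year = 50, 365

# Synthetic daily temperatures: a slow warming trend plus year-to-year
# variability that grows over time (both trends are illustrative assumptions).
years = np.arange(n_years)
daily = (15.0                                # baseline mean, deg C
         + 0.03 * years[:, None]             # gradual warming
         + rng.normal(0, 8 + 0.05 * years[:, None], (n_years, days_per_year)))

annual_max = daily.max(axis=1)   # each year's hottest day (block maximum)
annual_min = daily.min(axis=1)   # each year's coldest day (block minimum)

# Compare the extremes in the first vs. last 20 years: even with a warming
# mean, wider variability keeps severe cold snaps in play.
early, late = slice(0, 20), slice(-20, None)
print("mean annual max, early vs late:",
      annual_max[early].mean().round(1), annual_max[late].mean().round(1))
print("std of annual min, early vs late:",
      annual_min[early].std().round(2), annual_min[late].std().round(2))
```

The study itself works with many climate-model runs and reanalysis data sets rather than a single synthetic series, but the block-extraction step is the same shape: reduce each year (the block) to its extreme values, then study the distribution of those extremes.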

The new results provide important scientific as well as societal implications, Ganguly noted. For one thing, knowing that models project a wider range of extreme temperature behavior will allow sectors like agriculture, public health, and insurance planning to better prepare for the future. For example, Kodra said, "an agriculture insurance company wants to know next year what is the coldest snap we could see and hedge against that. So, if the range gets wider they have a broader array of policies to consider."


Read more at: http://phys.org/news/2014-07-big-climate-extremes.html#jCp
