
Thursday, January 23, 2014

A new wrinkle in the control of waves: Flexible materials could provide new ways to control sound and light

by David Chandler


Flexible, layered materials textured with nanoscale wrinkles could provide a new way of controlling the wavelengths and distribution of waves, whether of sound or light. The new method, developed by researchers at MIT, could eventually find applications from nondestructive testing of materials to sound suppression, and could also provide new insights into soft biological systems and possibly lead to new diagnostic tools.
The findings are described in a paper published this week in the journal Physical Review Letters, written by MIT postdoc Stephan Rudykh and Mary Boyce, a former professor of mechanical engineering at MIT who is now dean of the Fu Foundation School of Engineering and Applied Science at Columbia University.
While materials' properties are known to affect the propagation of light and sound, in most cases these properties are fixed when the material is made or grown, and are difficult to alter later. But in these layered materials, changing the properties—for example, to "tune" a material to filter out specific colors of light—can be as simple as stretching the flexible material.

"These effects are highly tunable, reversible, and controllable," Rudykh says. "For example, we could change the color of the material, or potentially make it optically or acoustically invisible."

The materials can be made through a layer-by-layer deposition process, refined by researchers at MIT and elsewhere, that can be controlled with high precision. The process allows the thickness of each layer to be determined to within a fraction of a wavelength. The material is then compressed, creating within it a series of precise wrinkles whose spacing can cause scattering of selected frequencies of waves (of either sound or light).
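A rough rule of thumb helps explain why the wrinkle spacing selects particular wavelengths. This is the textbook first-order Bragg condition for a periodic medium, not a formula quoted from the paper: a structure with spatial period d and effective refractive index n most strongly reflects, at normal incidence, wavelengths satisfying

    m \lambda = 2 n d, \qquad m = 1, 2, \ldots

Stretching or compressing the material changes d and thus shifts the selected wavelength; the acoustic case is analogous, with the sound speed of the medium playing the role of the optical index.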

Surprisingly, Rudykh says, these effects work even in materials where the alternating layers have almost identical densities. "We can use polymers with very similar densities and still get the effect," he says. "How waves propagate through a material, or not, depends on the microstructure, and we can control it," he says.

By designing that microstructure to produce a desired set of effects, then altering those properties by deforming the material, "we can actually control these effects through external stimuli," Rudykh says.
"You can design a material that will wrinkle to a different wavelength and amplitude. If you know you want to control a particular range of frequencies, you can design it that way."

The research, which is based on computer modeling, could also provide insights into the properties of natural biological materials, Rudykh says. "Understanding how the waves propagate through biological tissues could be useful for diagnostic purposes," he says.

For example, current diagnostic techniques for certain cancers involve painful and invasive procedures. In principle, ultrasound could provide the same information noninvasively, but today's ultrasound systems lack sufficient resolution. The new work with wrinkled materials could lead to more precise control of these ultrasound waves, and thus to systems with better resolution, Rudykh says.

The system could also be used for sound cloaking—an advanced form of noise cancellation in which outside sounds could be completely blocked from a certain volume of space rather than just a single spot, as in current noise-canceling headphones.

"The microstructure we start with is very simple," Rudykh says, and is based on well-established, layer-by-layer manufacturing. "From this layered material, we can extend to more complicated microstructures, and get effects you could never get" from conventional materials. Ultimately, such systems could be used to control a variety of effects in the propagation of light, sound, and even heat.

The technology is being patented, and the researchers are already in discussions with companies about possible commercialization, Rudykh says.


How Grazing Cows Can Save the Planet, and Other Surprising Ways of Healing the Earth

January 12, 2014
By Dr. Mercola
Judith Schwartz is a freelance writer and author of the book Cows Save the Planet: And Other Improbable Ways of Restoring Soil to Heal the Earth. I recently met Judy at a conference held by Allan Savory of the Savory Institute in Boulder, Colorado.
The Savory Institute helps farmers to holistically manage their livestock in order to improve soil quality and heal the environment. In fact, according to Savory, an African ecologist, dramatically increasing the number of grazing livestock is the only thing that can reverse desertification (when land turns to desert).
This was Savory's first conference, and it turned out to be quite a memorable event. Judy has summarized a big portion of what was presented at that conference in her book. But what made her home in on the issue of soil health to begin with?
Surprisingly, it all began with an investigation into the economy. Around 2008, just before the economic downturn, she'd started writing about the transition movement:
"One of the things that transition initiatives were dealing with was local currencies," she says"Looking into local currencies kind of helped me understand how local economies work and primed me to ask questions when the economic downturn hit, like 'What is money? What is wealth?'
I was on that trajectory, writing about environmental economics and new economics... Basically, it's the notion that our economy can and should serve the people the planet as opposed to the other way around.

This I fear is the scenario that we've kind of gotten stuck in – that people and the planet, meaning all of our natural systems, exist to serve the economy.
From that framework, I started looking at ecology and observed the disconnect between our financial system and the natural world, which just cannot be separate. That disconnect doesn't work."

The Environmental Impact of Conventional Farming

This led her to learn more about soil health, economical land use, and how modern agricultural practices affect our environment.
 
For example, did you know that our modern agricultural system is responsible for putting more carbon dioxide into the atmosphere than the actual burning of fossil fuels? Understanding this reveals an obvious answer to pressing global problems.
There are only three places for carbon to go: land, air and water. Our agricultural practices have removed massive amounts of valuable carbon from land, transferring it into air and water. By paying greater attention to carbon management, we have the opportunity to make a dramatic difference in this area, which is having major negative consequences to our agriculture, and the pollution of our water and air.
As explained by Judy, early this past summer, concentrations of atmospheric CO2 crossed the 400 parts-per-million threshold—the highest level in thousands of years. According to an organization called 350.org, scientists believe CO2 levels need to be around 350 parts per million in order to maintain favorable living conditions on Earth.
Carbon management is a critical aspect of environmental health and the growing of food.
That said, CO2 levels are not constantly or continuously rising in a straight line. The level rises and falls, and this is a clue to what's going on.
"Depending on the season, depending on how much photosynthesis is happening, it dips down, and then goes up again," Judy explains. "When we've got a lot of plants, as we get towards the warmer part of the year, more photosynthesis is happening, and the CO2 levels drop slightly.
That's so important to know, because photosynthesis is key to what we're talking about.
When I talk about bringing carbon back into the soil, I'm talking about supporting and stimulating the process of photosynthesis – in other words, growing more plants. Those plants then take in the CO2. They make carbon compounds. Those carbon compounds are drawn down, and they go into the soil."
Sequestering carbon in the Earth's soils is a good thing. There's actually more carbon in the world's soils than in all plants (including trees) and the atmosphere combined. However, due to modern agricultural methods, we've lost between 50 and 80 percent of the carbon that used to be in the soil... This means there's plenty of "room" to put it back in.
"It's useful to understand that the notion of bringing carbon back into the soil, one thing that it does is withdraw carbon down from the atmosphere. That's hugely important," Judy says.
"Carbon is the main component of soil organic matter. That's the good stuff that you want in soil anyway for fertility. It also absorbs water. When you have carbon-rich soil, you also have soil that is resilient to floods and drought. When you start looking at soil carbon, the news keeps getting better and better."

The Importance of Holistic Herd Management

Another major factor that needs to be considered is the management of livestock. Herds raised according to modern, conventional practices contribute to desertification—turning land into desert—which, of course, doesn't support plant life and photosynthesis, thereby shifting the equation in the wrong direction. When land turns to desert, it no longer holds water, and it loses the ability to sustain microbial life and nourish plant growth...
One of the reasons Allan Savory has become so popular is his promotion of holistic herd management, which causes desert areas to convert back to grasslands that support plant life. As explained by Judy:
"It occurred to him that the land needed the animals in the same way that the animals needed the land. He began to really observe how animals functioned on land, and came to understand the really intricate dynamics, the system, that had been naturally in operation.
Basically, when grazing animals graze, they're nibbling on the grasses in a way that exposes their growth points to sunlight and stimulates growth... Their trampling [of the land also] did several things: it breaks any capped earth so that the soil is aerated. It presses in seeds [giving them] a chance to germinate, so you have a greater diversity of plants. [Grazing herds] also press down dying and decaying grasses, so that they can be better acted upon by microorganisms in the soil. It keeps the decaying process going. Their waste also fertilizes the soil."
This natural symbiotic relationship between animals, soils and plants—where each benefits the other mutually—is a powerful insight. And it's one that can be replicated with great benefit. Besides the environmental benefits, grass-fed, pastured livestock is also an excellent source of high quality meat. In fact, it's the only type of meat I recommend eating, as raising cattle in confined animal feeding operations (CAFOs) alters the nutritional composition of the meat—not to mention such animals are fed antibiotics, growth promoters and other veterinary drugs.

You Can Make a Difference in More Ways Than One  

As for what we can do to get moving in the right direction, improving not only animal and human health but the health of the planet, Judy says:
"Most recommendations are very simple. The simplest thing is to avoid having bare soil. Because when you have bare uncovered soil, the land degradation process begins. When you have bare soil, that means that the carbon is binding with oxygen and becoming carbon dioxide."
We also need to shift our focus to emphasize the biological system as a whole. Soil is not a static "thing." It's a living symbiotic system, and soil microorganisms also play a very important role in this system. When I visited Elaine Ingham at the Rodale Institute, I learned the value of compost tea for promoting beneficial soil microbes, and I now use a vortex compost tea brewing system to revitalize my own garden. Interestingly, the better you farm or garden, the less land you need. According to Judy, a biological farmer using appropriate methods can grow on 1,000 acres the same amount of food another farmer might need 5,000 acres to produce...
 
Another factor is the importance of integrating animals on the land. Most biological farmers understand this, and will tell you that in order for soil to get to its highest potential of productivity and health, there needs to be animals on the land. (According to Savory, grazing large herds of livestock on half of the world's barren or semi-barren grasslands could also take enough carbon from the atmosphere to bring us back to preindustrial levels!) But what if you're not a gardener yet, or a farmer? How can you help achieve this much needed shift?
"I think people can make a difference in all sorts of ways that people make decisions every day, such as asking yourself how the food you're buying was grown," Judy says. "Because once you start asking where the food comes from, even posing that question, will lead you to make different choices.
Apart from food, what decisions are being made in your community about the use of land? Can your community save money by working with soil rather than, say, putting in an expensive waste or water treatment plant? That's another thing, getting involved on a local level. There are all kinds of organizations that are working on different environmental and different food aspects locally and nationally, etc."

Biological Farming Solves Many Pressing Problems

My first passion and career was being a physician, then an Internet educator, and now I'm moving into high-performance biological agriculture because I really believe it's the next step in our evolution. We must shift the way we produce food because the current system is unsustainable. And while this information really is ancient, it's not widely discussed. There's only a small segment of the population that even understands this natural system, and the potential it has for radically transforming the way we feed the masses AND protect the environment at the same time.
 
I thoroughly agree with the recommendation to get involved personally, because it's so exciting. For me, it's become a rather addictive hobby. Once you integrate biological farming principles, you can get plant performances that are 200-400 percent greater than what you would typically get from a plant! What's more, not only does it improve the quantity, it also improves the quality of the food you're growing. These facts should really be at the forefront of everybody's mind when they think about farming, as it's the solution to so many pressing problems. Judy agrees, noting:
"The challenge is that we've been led to believe that our agricultural model, which is an extractive model, is the way it needs to be. But we can shift to a regenerative model. That's where we need to go."

Final Thoughts

As Judy says, there's a lot to be optimistic about, because whether we're talking about the degradation of the environment or our food supply, there are answers!
"Many people just sort of give up and say, 'I can't do anything about this.' I was speaking to someone the other day who said that her son, who just finished college, said, 'You know, it's over. We're doomed.' To me, that is just so sad. How can we let the next generation feel that way? I think that betrays a huge lack of imagination. Because when we talk about our environmental challenges, one thing we don't talk about is nature's desire to heal itself. Once we ally with that natural process, it's amazing what we can do."
Ending the burning of fossil fuels is not the one and only way for us to turn the tide on rising carbon dioxide levels. Granted, solar energy and wind power would certainly be preferable to burning fossil fuels. But even if we didn't stop burning fossil fuels, we can still reverse rising CO2 levels by addressing the way we farm, using sound, time-honored agricultural principles.

And—something else to consider—even if we completely stop burning fossil fuels but do not change agriculture, we'll still be left with problems like lands turning to desert, flooding, and drought. In short, we really must address how we manage our lands and soils... You can learn more about biological farming by reviewing the related articles listed in the right-hand sidebar on this page. I also highly recommend Judy's book, Cows Save the Planet: And Other Improbable Ways of Restoring Soil to Heal the Earth. It's a great read for anyone wanting to learn more about this topic.

Habitability, Taphonomy, and the Search for Organic Carbon on Mars

John P. Grotzinger
Science
Vol. 343 no. 6169 pp. 386-387
DOI: 10.1126/science.1249944
Author affiliation: Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125, USA.

The quest for extraterrestrial life is an old and inevitable ambition of modern science. In its practical implementation, the core challenge is to reduce this grand vision into bite-sized pieces of research. In the search for organic remnants of past life, it is enormously helpful to have a paradigm to guide exploration. This begins with assessing habitability: Was the former environment supportive of life? If so, was it also conducive to preservation of organism remains, specifically large organic molecules?
 
The 2004 arrival of the Mars Exploration Rovers (MERs) Spirit and Opportunity provided a chance to investigate ancient aqueous environments in situ and deduce their likelihood of supporting life. Initial results confirmed what data from orbiters had long suggested—that diverse aqueous environments existed on the surface of Mars billions of years ago. The success of the MERs led to the development of the Mars Science Laboratory (MSL) mission. The Curiosity rover landed at Gale crater in 2012 and was designed to specifically test whether ancient aqueous environments had also been habitable. In addition to water, did these ancient environments also record evidence for the chemical building blocks of life (C, H, N, O, P, S), as well as chemical and/or mineralogic evidence for redox gradients that would have enabled microbial metabolism, such as chemoautotrophy?

Curiosity also has the capability to detect organic carbon, but it is not equipped for life detection.
Figure: Image taken by the Mars Hand Lens Imager of the John Klein drill hole (1.6 cm in diameter) at Yellowknife Bay reveals gray cuttings of mudstone (ancient lake sediments), rock powder, and interior wall.
An array of eight ChemCam laser shot points can be seen. The gray color suggests that reduced, rather than oxidized, chemical compounds and minerals dominated the pore fluid chemistry of the ancient sediment. A cross-cutting network of sulfate-filled fractures indicates later flow of groundwater through fractures after the sediment was lithified.
CREDIT: NASA/JPL-CALTECH/MALIN SPACE SCIENCE SYSTEMS
Five articles presented in full in the online edition of Science (www.sciencemag.org/extra/curiosity), with abstracts in print, describe the detection at Gale crater of a system of ancient environments (including streams, lakes, and groundwater networks) that would have been habitable by chemoautotrophic microorganisms. The Hesperian age (<∼3.7 billion years) rocks mentioned in these articles are at the young end of the spectrum of ancient martian aqueous environments. Yet, a sixth article details a more ancient and also potentially habitable environment detected in Noachian age (>∼3.7 billion years) rocks at Meridiani Planum. A seventh article describes the present radiation environment on the surface of Mars at Gale crater and its influence on the preservation of organic compounds in rocks.
 
Opportunity landed at Meridiani Planum on 25 January 2004. Coincident with the 10th anniversary of this landing, Arvidson et al. report the detection of an ancient clay-forming, subsurface aqueous environment at Endeavour crater, Meridiani Planum. Though Opportunity does not have the ability to directly detect C or N, it has been able to establish that several of the other key factors that allow for the identification of a formerly habitable environment were in place. This potentially habitable environment stratigraphically underlies and is considerably older than the rocks detected earlier in the mission that represent acidic, hypersaline environments that would have challenged even the hardiest extremophiles. The presence of both Fe3+- and Al-rich smectite clay minerals in rocks on the rim of the Noachian age Endeavour impact crater was inferred from the joint use of hyperspectral observations by the Mars Reconnaissance Orbiter and extensive surface observations by the Opportunity rover. The rover was guided tactically by orbiter-based mapping. Extensive leaching and formation of Al-rich smectites occurred in subsurface groundwater fracture systems.
Grotzinger et al. show that an ancient habitable environment existed at Yellowknife Bay, Gale crater, where stream waters flowed from the crater rim and pooled in a curvilinear depression at the base of Gale's central mountain to form a lake-stream-groundwater system that might have existed for millions of years. Vaniman et al. provide evidence for moderate to neutral pH, as shown by the presence of smectite clay minerals and the absence of acid-environment sulfate minerals, and show that the environment had variable redox states due to the presence of mixed-valence Fe (magnetite) and S (sulfide, sulfate) minerals formed within the sediment and cementing rock. McLennan et al. show that lake salinities were low because of the very low concentration of salt in the lake deposits.
Elemental data indicate that clays were formed in the lake environment and that minimal weathering of the crater rim occurred, suggesting that a colder and/or drier climate was prevalent.
Ming et al. show that the thermal decomposition of rock powder yielded NO and CO2, indicating the presence of nitrogen- and carbon-bearing materials. CO2 may have been generated by either carbonate or organic materials. Concurrent evolution of O2 and chlorinated hydrocarbons indicates the presence of oxychlorine species. Higher abundances of chlorinated hydrocarbons in the lake mudstones, as compared with modern windblown materials, suggest that indigenous martian or meteoritic organic C sources are preserved in the mudstone. However, the possibility of terrestrial background sources brought by the rover itself cannot be excluded.
 
These results demonstrate that early Mars was habitable, but this does not mean that Mars was inhabited. Even for Earth, it was a formidable challenge to prove that microbial life existed billions of years ago—a discovery that occurred almost 100 years after Darwin predicted it, through the recognition of microfossils preserved in silica (1). The trick was finding a material that could preserve cellular structures. A future mission could do the same for Mars if life had existed there.
Curiosity can help now by aiding our understanding of how organic compounds are preserved in rocks, which, in turn, could provide guidance to narrow down where and how to find materials that could preserve fossils as well. However, it is not obvious that much organic matter, of either abiologic or biologic origin, might survive degradational rock-forming and environmental processes. Our expectations are conditioned by our understanding of Earth's earliest record of life, which is very sparse.
 
Paleontology embraces this challenge of record failure with the subdiscipline of taphonomy, through which we seek to understand the preservation process of materials of potential biologic interest. On Mars, a first step would involve detection of complex organic molecules of either abiotic or biotic origin; the point is that organic molecules are reduced and the planet is generally regarded as oxidizing, and so their preservation requires special conditions. For success, three processes must be optimized: primary enrichment of organics must first occur; destruction of those organics must be minimized during the conversion of sediment to rock; and exposure of the sampled rocks to ionizing radiation must be limited. Of these conditions, the third is the least Earth-like (Earth's thick atmosphere and magnetic field greatly reduce incoming radiation). Curiosity can directly measure both the modern dose of ionizing cosmic radiation and the accumulated dose for the interval of time that ancient rocks have been exposed at the surface of Mars.
 
Hassler et al. quantify the present-day radiation environment on Mars that affects how any organic molecules that might be present in ancient rocks may degrade in the shallow surface (that is, the top few meters). This shallow zone is penetrated by radiation, creating a cascade of atomic and subatomic particles that ionize molecules and atoms in their path. Their measurements over the first year of Curiosity's operations provide an instantaneous sample of radiation dose rates affecting rocks, as well as future astronauts. Extrapolating these rates over geologically important periods of time and merging with modeled radiolysis data yields a predicted 1000-fold decrease in 100–atomic mass unit organic molecules in ∼650 million years.
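To restate that extrapolation in more familiar terms (a back-of-the-envelope rearrangement of the figures above, not an additional result from the paper), a 1000-fold decrease over roughly 650 million years corresponds to exponential decay with an effective half-life of about 65 million years:

    f(t) = 1000^{-t/650\,\mathrm{Myr}}, \qquad t_{1/2} = \frac{\ln 2}{\ln 1000} \times 650\,\mathrm{Myr} \approx 65\,\mathrm{Myr}.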
 
Sediments that were buried and lithified beneath the radiation penetration depth, possibly with organic molecules, would eventually be exhumed by erosion and exposed at the surface. During exhumation, organics would become subject to radiation damage as they entered the upper few meters below the rock-atmosphere interface. The time scale of erosion and exhumation, and thus the duration that any parcel of rock is subjected to ionizing radiation, can be determined by measuring cosmogenically produced noble gas isotopes that accumulate in the rock. 36Ar is produced by the capture of cosmogenic neutrons by Cl, whereas 3He and 21Ne are produced by spallation reactions on the major rock-forming elements. Farley et al. show that the sampled rocks were exposed on the order of ∼80 million years ago, suggesting that preservation of any organics that accumulated in the primary environment was possible, although the signal might have been substantially reduced.
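In outline, the exposure age follows from a simple inventory argument (a schematic relation assuming a constant production rate, ignoring the depth and erosion-rate corrections a full treatment requires): if a cosmogenic isotope is produced at rate P only while the rock sits within the irradiated near-surface zone, then a measured abundance N implies

    t_{\mathrm{exp}} \approx N / P,

so dividing the measured 3He, 21Ne, or 36Ar inventory by the corresponding production rate yields the exposure age of the sampled rocks.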
 
Wind-induced saltation abrasion of the rocks in Yellowknife Bay appears to have been the mechanism responsible for erosion and exhumation of the ancient lake bed sampled by Curiosity. The geomorphic expression of this process is a series of rocky scarps that retreated in the downwind direction. Understanding this process leads to the prediction that rocks closest to the scarps were most recently exhumed and are thus most likely to preserve organics, all other factors being equal. In this manner, the MSL mission has evolved from initially seeking to understand the habitability of ancient Mars to developing predictive models for the taphonomy of martian organic matter. This parallels the previous decade, in which the MER mission turned the corner from a mission dedicated to detecting ancient aqueous environments to one devoted to understanding how to search for that subset of aqueous environments that may also have been habitable.


Why do biofuels help prevent global warming?

Engines running on biofuels emit carbon dioxide (CO2), the primary greenhouse gas, just like those running on gasoline. However, because plants and trees are the raw material for biofuels, and because they need carbon dioxide to grow, the use of biofuels does not add CO2 to the atmosphere; it just recycles what was already there. The use of fossil fuels, on the other hand, releases carbon that has been stored underground for millions of years, and those emissions represent a net addition of CO2 to the atmosphere. Because it takes fossil fuels – such as natural gas and coal – to make biofuels, they are not quite "carbon neutral."

Argonne National Laboratory has carried out detailed analyses of the "well-to-wheels" greenhouse gas emissions of many different engine and fuel combinations. The chart at right shows a few selected examples. Argonne's latest analysis shows reductions in global warming emissions of 20% from corn ethanol and 85% from cellulosic ethanol. Thus, greenhouse gas emissions from an E85 blend using corn ethanol would be 17% lower than gasoline, and using cellulosic ethanol would be 64% lower. A separate analysis found that biodiesel reduces greenhouse gas emissions by 41%; thus, a B20 blend would achieve a reduction of about 8%.
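The blend figures follow, at least approximately, from pro-rata scaling of the neat-fuel reduction by the biofuel's share of the blend. Here is a minimal sketch of that arithmetic in Python; the helper function and the linear-scaling assumption are ours, not Argonne's (note that the cellulosic E85 figure of 64% is lower than the naive 0.85 x 85% = 72%, presumably because the full analysis accounts for energy content rather than volume alone):

    # Naive pro-rata estimate: blend reduction ~ biofuel fraction x neat-fuel reduction.
    # Illustrative assumption only; Argonne's well-to-wheels model is far more detailed.
    def blend_reduction(biofuel_fraction, neat_reduction):
        return biofuel_fraction * neat_reduction

    print(blend_reduction(0.85, 0.20))  # E85 with corn ethanol -> 0.17, the 17% figure
    print(blend_reduction(0.20, 0.41))  # B20 biodiesel         -> 0.082, about 8%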

Cellulosic ethanol achieves such high reductions for several reasons:
1. Virtually no fossil fuel is used in the conversion process, because waste biomass material, in the form of lignin, makes an excellent boiler fuel and can be substituted for coal or natural gas to provide the heat needed for the ethanol process.

2. Farming of cellulosic biomass is much less chemical- and energy-intensive than farming of corn.

3. Perennial crops store carbon in the soil through their roots, acting as a carbon “sink” and replenishing carbon in the soil. Switchgrass, for example, has a huge root system that penetrates over 10 feet into the soil and weighs as much as one year’s growth aboveground (6-8 tons per acre).                                    

“Cellulosic ethanol is at least as likely as hydrogen to be an energy carrier of choice for a sustainable transportation sector.”
– Natural Resources Defense Council, Union of Concerned Scientists



Compared to conventional diesel fuel, the use of biodiesel results in an overall reduction of smog-forming emissions from particulate matter, unburned hydrocarbons, and carbon monoxide, as shown at right. Biodiesel slightly increases nitrogen oxide emissions, by about 2% in a B20 blend. Sulfur oxides and sulfates, which are major components of acid rain, are not present in biodiesel.

As for ethanol, the oxygen atom in the ethanol molecule leads to more complete fuel combustion and generally fewer emissions. E10 blends have been credited with reducing emissions of carbon monoxide by as much as 30% and particulates by 50%. However, mixing low levels of ethanol (2% to 10%) with gasoline increases the blend's tendency to evaporate and contribute to low-level ozone unless the gasoline itself is adjusted. This problem diminishes with higher levels of ethanol. At blends of between 25% and 45% ethanol, the fuel's volatility is equivalent to that of gasoline, and at higher blends it is less evaporative. E85 has about half the volatility (tendency to evaporate) of gasoline.

The effect of E85 on air quality is almost uniformly positive, with the exception of increased emissions of aldehydes, such as acetaldehyde. Conventional catalytic converters control these emissions in ethanol blends of up to 23%, and it is expected that they could be readily adapted to E85 blends. A test of advanced emission control systems in three conventional gasoline vehicles found that advanced systems reduced formaldehyde emissions by an average of 85% and acetaldehyde by an average of 58%.

Even without advanced controls, the benefits of reducing other toxic emissions outweigh the effects of aldehydes. The National Renewable Energy Laboratory tested a 1998 Ford Taurus FFV running on E85 and reported: “Emissions of total potency weighted toxics (including benzene, 1,3-butadiene, formaldehyde, and acetaldehyde) for the FFV Taurus tested on E85 were 55% lower than that of the FFV tested on gasoline.”

Emissions characteristics of E85*

Actual emissions will vary with engine design; these numbers reflect the potential reductions offered by ethanol (E85), relative to conventional gasoline.
• Fewer total toxics are produced.
• Reductions in ozone-forming volatile organic compounds of 15%.
• Reductions in carbon monoxide of 40%.
• Reductions in particulate emissions of 20%.
• Reductions in nitrogen oxide emissions of 10%.
• Reductions in sulfate emissions of 80%.
• Lower reactivity of hydrocarbon emissions.
• Higher ethanol and acetaldehyde emissions.
* Estimates based on ethanol’s inherently “cleaner” chemical properties with an engine that takes full advantage of these fuel properties.
– U.S. Environmental Protection Agency


The principal contributor to toxic air pollution from gasoline is a class of chemical compounds called aromatics, which make up an average of 26% of every gallon of gasoline. Blended with gasoline to increase octane, aromatics have the potential to cause cancer, and they also result in emissions of fine particulates and smog-forming gases that harm lung function and worsen asthma.
 
The EPA was required by the Clean Air Act Amendments of 1990 to seek “the greatest degree of emission reduction achievable” of air toxics in automobiles. In response to recent litigation, the EPA issued a rule to reduce one of these hazardous air pollutants, benzene, but the agency did not address the two other aromatic compounds, toluene and xylene, which form benzene during combustion. Using biofuels instead of aromatics to improve octane would result in public health benefits worth tens of billions of dollars from the reduction in emissions of small particles alone.                                      
Native perennial grasses such as switchgrass had to be tough to survive on the prairie. They are deep-rooted and drought-resistant and require less water than food crops. They also need less fertilizer, herbicide, insecticide, and fungicide per ton of biomass than conventional crops.

Switchgrass is an approved cover crop under the Conservation Reserve Program because it prevents soil erosion and filters runoff from fields planted with traditional row crops. Buffer strips of switchgrass, planted along stream banks and around wetlands, can remove soil particles, pesticides, and fertilizer residues from surface water before they reach ground water or streams.

There are enough varieties of prairie grass and other sources of cellulosic biomass that farmers need not all rely on a single energy crop – so-called monocultures. Indeed, recent research suggests that mixed prairie grasses may be more productive than monocultures. One study found that a diverse mixture of grasses grown on degraded land would yield 51% more energy per acre than ethanol from corn grown on fertile land.
In general, perennial energy crops create more diverse habitats than annual row crops, attracting more species and supporting larger populations. Switchgrass fields are popular with hunters, as they provide habitat for many species of wildlife, including cover for deer and rabbits and a nesting place for wild turkey and quail – and pheasants, as shown at right. As long as farmers avoid work that would disturb the birds during nesting or breeding seasons, their fields will remain popular with wildlife.

Physicists Produce Quantum Version of the Cheshire Cat

2014-01-22 16:45
http://news.sciencemag.org/physics/2014/01/physicists-produce-quantum-version-cheshire-cat

In Lewis Carroll's famous children's novel Alice's Adventures in Wonderland, Alice meets the Cheshire Cat, which disappears and leaves only its grin behind. Now, physicists have created a quantum version of the feline by separating an object—a neutron—from its physical property—its magnetism. The experiment is the latest example of how quantum mechanics becomes even weirder using a technique called weak measurement and could provide researchers with an odd new experimental tool for performing precision measurements.

In quantum physics, tiny particles can be in opposite conditions or states at the same time, a property known as superposition. For instance, an electron can literally spin in opposite directions simultaneously. Try to measure the spin, however, and that state will "collapse" so that the electron is found spinning one way or the other. That's because quantum theory generally forbids you to measure a particle's state without altering it—at least ordinarily.

But in 1988, Yakir Aharonov, a theorist at Tel Aviv University in Israel, and colleagues dreamed up a way to measure delicate quantum states without disturbing them through so-called weak measurements. There's a price to pay, of course. A weak measurement can't reveal anything about an individual particle, but only the behavior of many particles all in the same state. And it requires not only putting the particles in just the right state to begin with, but also picking only those in a specific different state in the end, so the whole experiment has to be analyzed retrospectively. Nevertheless, weak measurements can probe phenomena that ordinary measurements can't, and last November Aharonov and colleagues described how they could be used to realize a quantum Cheshire Cat.

Here's the idea. A beam of neutrons, all magnetized in the same direction, say right, enters a device called a neutron interferometer (see diagram). The beam strikes a beam splitter, which splits not only the macroscopic beam but also the quantum wave describing each neutron. So after the beam splitter, each neutron is in a bizarre superposition state: in path 1, polarized right, and simultaneously in path 2, polarized right. This is the "preselected" state. After taking different paths, the waves recombine at the second beam splitter and interfere with each other so that the neutrons all exit the interferometer through one of its two "ports," the light port.

Now, here's where things get weird. Experimenters install a few gadgets before the second beam splitter that work like a filter: if a neutron is in the state in path 1, polarized right, and in path 2, polarized left—the "postselected" state—it will come out the dark port instead. That may sound superfluous, because no neutron is in exactly that state. However, the two states have a common part—in path 1, polarized right—and that overlap ensures that some neutrons emerge from the dark port, just by virtue of trying to filter out this postselected state.

If you look at only these postselected events, you can say for sure that the neutron went through path 1. That's because the only parts of the preselected and postselected states that overlap are the ones for path 1. On the other hand, if you try to measure the magnetism, you'll find that all the magnetism is in path 2. That's because to know the magnetism is there, you essentially have to apply a magnetic field that flips the neutron’s polarization. So after the measurement, the parts of the altered preselected state and postselected state that are identical are the ones for path 2.
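In the standard weak-value formalism (our restatement of the setup above, writing |1> and |2> for the paths, |R> and |L> for the polarizations, \Pi_i for the projector onto path i, and \sigma for the operator that flips |R> and |L>), the preselected and postselected states are

    |\psi\rangle = \tfrac{1}{\sqrt{2}} (|1\rangle + |2\rangle) |R\rangle, \qquad |\phi\rangle = \tfrac{1}{\sqrt{2}} (|1\rangle|R\rangle + |2\rangle|L\rangle),

and the weak values A_w = \langle\phi|A|\psi\rangle / \langle\phi|\psi\rangle, with \langle\phi|\psi\rangle = 1/2, work out to

    (\Pi_1)_w = 1, \quad (\Pi_2)_w = 0, \qquad (\sigma\Pi_1)_w = 0, \quad (\sigma\Pi_2)_w = 1.

That is, between pre- and postselection the neutron registers entirely in path 1 while its magnetism registers entirely in path 2: the Cheshire Cat separation described above.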
The traditional interpretation is that the whole argument is moot. If you reach into path 1 with a neutron detector, then that measurement alters the original quantum state, making it pointless to speculate about what you would have seen had you measured the magnetism in path 2 instead, and vice versa. According to Aharonov's theory, though, the measurements could be done weakly, so that they would not alter the neutrons' state. And that's exactly what Yuji Hasegawa of the Vienna University of Technology and colleagues have done, as they report in a paper posted to the arXiv preprint server.

Using a neutron interferometer at the Institut Laue-Langevin in Grenoble, France, the researchers inserted an absorber that soaked up only a few percent of the neutrons—not enough to ruin the interference of the waves. When they put it in path 2, the rate of neutrons leaving the dark port remained the same. When they put it in path 1, the number decreased, proving that the neutrons in the postselected state go through path 1. Then, they applied a small magnetic field to slightly rotate the neutrons' polarization and perturb the interference pattern. When the field was applied to path 1, it had no effect. But in path 2, the number of neutrons exiting the dark port changed, proving the neutrons' magnetism was all in path 2. Thus the cat—the neutron—was separated from its grin—its magnetism.

The experiment will "surely help us understand better the counter-intuitive nature of quantum phenomena," says Sandu Popescu, a theorist at the University of Bristol in the United Kingdom who was not involved in the experiment. The odd quantum phenomenon might even prove useful for making better precision measurements, he says. Some physicists have been testing whether Newton's law of gravity remains correct at distances shorter than a millimeter or so; the delicate experiments can be muddled by extraneous electromagnetic effects. But if researchers could split the mass of neutrons from their magnetism, then they might be able to study gravitational effects without being disturbed by electromagnetic ones, says Aephraim Steinberg, an experimenter at the University of Toronto in Canada.

Photo caption: Splitsville. The basic setup for the experiment in which a neutron follows one path and its magnetism another.

(Credit: Adapted from Ernecker/Creative Commons)

Water found in stardust suggests life is universal

22 January 2014 by Catherine Brahic
Magazine issue 2953

 
A SPRINKLING of stardust is as magical as it sounds. The dust grains that float through our solar system contain the ingredients to make water, which forms when the dust is zapped by a blast of charged wind from the sun.
 
The chemical reaction causing this to happen had previously been mimicked in laboratories, but now water has been found trapped inside real stardust.
Combined with previous findings of organic compounds in interplanetary dust, this suggests that these grains contain the basic ingredients needed for life. As similar dust grains are thought to be found in solar systems all over the universe, this bodes well for the existence of life across the cosmos.
 
"The implications are potentially huge," says Hope Ishii of the University of Hawaii in Honolulu, one of the researchers behind the study. "It is a thrilling possibility that this influx of dust on the surfaces of solar system bodies has acted as a continuous rainfall of little reaction vessels containing both the water and organics needed for the eventual origin of life."
 
Solar systems are full of dust – a result of many processes, including the break-up of comets. John Bradley of the Lawrence Livermore National Laboratory in California and his colleagues used high-resolution imaging and spectroscopy to look beneath the surface of interplanetary particles extracted from Earth's stratosphere. Inside these specks, which measured just 5 to 25 micrometres across, they found trapped pockets of water (PNAS, DOI: 10.1073/pnas.1320115111).
 
Laboratory experiments offer clues to how the water forms. The dust is mostly made of silicates, which contain oxygen. As it travels through space, it encounters the solar wind. This stream of charged particles, including high-energy hydrogen ions, is ejected from the sun's atmosphere. When the two collide, hydrogen and oxygen combine to make water.
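Schematically (a charge-balanced summary of the mechanism just described, not an equation taken from the paper), implanted solar-wind protons combine with oxygen held in the silicate lattice:

    2 H+ (solar wind) + O2- (silicate) -> H2O (trapped within the grain)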
 
As interplanetary dust is thought to have rained down on early Earth, it is likely that the stuff brought water to our planet, although it is difficult to conceive how it could account for the millions of cubic kilometres of water that cover Earth today. A more likely origin is wet asteroids that pummelled early Earth. Comets are also a candidate: the European Space Agency's Rosetta spacecraft, due to send a lander to a comet later this year, is tasked with probing their role.
 
However, the results are relevant to the quest for life on other planets. The water-producing reaction is likely to happen in any corner of the universe with a star, says Ishii.
 
What's more, interplanetary dust in our solar system – and in others – contains organic carbon. If stardust contains carbon and water, it means the essentials of life could be present in solar systems anywhere in the universe and raining down on their planets.
 
This article appeared in print under the headline "Fountain of life may be a shower of dust"
Issue 2953 of New Scientist magazine

China Is Dependent On Our Fiscal Health

 
Steve Forbes, Forbes Staff
1/22/2014 @ 6:06AM
Photo caption: Beijing CBD 2008-6-9 Jianwai SOHO, Yitai Center, CCTV (Photo credit: Wikipedia)

China’s holdings in U.S. Treasurys, which reached record levels in 2013, are setting off alarm bells. They shouldn’t. They underscore that Beijing is becoming more dependent on the U.S. and the rest of the world for its strength and prosperity. China’s military leaders may not recognize that truth any more than Germany’s military, its militaristic Prussian aristocracy (which had outsize influence prior to WWI) and some of its intellectuals did in 1914.
Some points to keep in mind.
–If we and Beijing ever engaged in a mortal confrontation, how much would China’s holdings in American government bonds be worth if we said the paper was no longer valid? Beijing is a hostage to our willingness to honor these obligations.

–China has not been a big buyer of our paper for several years because of its concerns about the dollar’s integrity.

–Chinese companies get dollars by selling us products that they either made or assembled from corporate global supply chains. Unless China wants to sit on paper money, it will continue to use those dollars to buy stuff from us, in this case government bonds.

–The fact that the Chinese government is amassing so much in foreign currencies–$3.8 trillion at last count–means that China’s capital markets are still primitive compared with ours and Britain’s. That money is centrally controlled instead of being in the hands of numerous parties–banks, insurance companies, venture capital funds, mutual funds, private companies that wish to invest overseas and so on–that would put it to work creating new products and services. Prior to WWI Britain ran mammoth trade surpluses, which were far greater proportionately than are those of China. But the money didn’t sit in the Bank of England collecting interest. It financed a mind-boggling array of railroads, companies and agricultural enterprises all over the world. Private entrepreneurs, not government officials, directed those investments. British capital, for instance, financed much of the industrialization of the U.S.

–What would happen if China, to damage us, decided to dump its trove of Treasurys? Prices might wobble, but not for long. There are trillions of dollars’ worth of financial securities scattered around the world, and smart asset buyers would gobble up Treasurys if they thought they were underpriced. Moreover, the Fed, which already has a bloated balance sheet of $4 trillion, could easily absorb what China owns in Treasurys–$1.3 trillion.

–If China did sell Treasurys, it would be paid in dollars. Then what? Would it dump the dollars for, say, yen or euros? The European Central Bank and the Bank of Japan, not to mention the Fed, could take countermeasures if they so desired, to make sure currency ratios didn’t get out of line.

The idea that a government gains strength by piling up dollars or other foreign currencies is a mercantilist holdover from the 16th to 18th centuries, when France, Spain and others thought amassing gold and silver was how a country became wealthy. Trade, not hoarding, makes for a powerful economy–an insight Adam Smith understood but one that too many people today don’t.

http://www.forbes.com/sites/steveforbes/2014/01/22/china-is-dependent-on-our-fiscal-health/?utm_campaign=forbestwittersf&utm_source=twitter&utm_medium=social

Commentary: A tale of two pipelines — It’s time for common sense to prevail

Posted by David Holt in Keystone XL, Pipelines, Politics/Policy

In March 2012, President Obama visited the Pipe Yard in Cushing, OK, where he announced his support for the Gulf Coast Project and directed his administration "to cut through the red tape, break through the bureaucratic hurdles, and make this project a priority, to go ahead and get it done."
Today – almost two years later – the Keystone Gulf Coast Project has begun shipping about 830,000 barrels of oil a day across the 485 miles from Cushing, OK south to state-of-the-art refineries on the Gulf Coast in Nederland, TX. The project took 4,000 tradesmen and construction workers 16 months to complete. Additional vendors and local businesses also benefited, including the Read Ice Company in Kountze, Texas, 25 miles northwest of Beaumont, which contracted with TransCanada to provide ice-cold water to construction workers.

America’s energy consumers have reason to celebrate these jobs and the economic benefits resulting from the start of service for the southern leg of the Keystone XL pipeline project. But even as it launches today, delay pervades the remainder of the Keystone XL pipeline. Why more than five years after the initial filing are we still waiting for a final decision from the Obama Administration?

The Keystone XL pipeline project could be so much more. The next phase of the Keystone XL Pipeline project includes building a new pipeline from Hardisty, AB to Steele City, NE, where it will connect with an existing pipeline that runs to Cushing. This final leg will require 9,000 more tradesmen and construction workers. It could provide a $5 billion investment boost to the U.S. economy – not to mention millions in tax revenue for cities and towns along the route.

Why is building energy pipelines to Cushing, OK so important?  It is the epicenter for storing crude oil in the United States.  From Cushing, crude oil is sent to Gulf Coast refiners where it is made into gasoline, jet fuel, diesel fuel and several other products which the U.S. economy uses to power businesses, manufacturing facilities, automobiles, truck fleets and airplanes.

Building the final leg of Keystone XL will connect landlocked crude oil being produced in the Bakken formation in North Dakota to Gulf Coast refiners. In turn, there will be downward pressure on prices and a more reliable stream of crude into and out of America's energy producing network. This newfound reliability will make the U.S. less dependent on crude oil imports from regimes like Venezuela or Persian Gulf countries.

With all of these tremendous benefits available to the United States through the Keystone XL pipeline, why are we still waiting for a decision?

Already subjected to five years of government study and delay, the Keystone XL pipeline has been reviewed by the Nebraska Department of Environmental Quality (NDEQ). The pipeline project has now undergone multiple environmental reviews by the State Department. This new review should be completed sometime this year. After that, it’s all up to President Obama.

However, if it had followed the same regulatory process as the Gulf Coast Project, the Keystone XL pipeline would now be up and running, supplying America’s largest refineries on the Gulf Coast with a secure supply of lower-cost oil from producers in Canada and the U.S. Bakken region. It’s clear that there is a fundamental disconnect between the processes for these two pipelines.

Back in March 2012, President Obama stated to a crowd in Cushing, OK: “Today, we’re making this new pipeline from Cushing to the Gulf a priority…we’re going to go ahead and get that done. The northern portion of it we’re going to have to review properly to make sure that the health and safety of the American people are protected.  That’s common sense.”

We can all agree that a thorough review is appropriate to ensure that the health and safety of the American people are protected. However, after five years of foot-dragging by the State Department, we are now past the point of common sense. Political debate over what is now the most thoroughly studied and publicly discussed pipeline in American history has replaced logical thought and common sense. Rather than letting misinformation continue to plague the progress of one of the nation's largest "shovel ready" construction projects, it's time to make the Keystone XL pipeline a priority.

As the most-studied and safest pipeline in history, the Keystone XL pipeline project is still a good idea and enjoys the support of more than 70 percent of Americans. Mr. President, it’s time for common sense to prevail. It’s time to build the Keystone XL pipeline.

http://fuelfix.com/blog/2014/01/22/a-tale-of-two-pipelines-its-time-for-common-sense-to-prevail/

Issues and Trends: Natural Gas

Production Lookback 2013
Released: January 16, 2014
U.S. natural gas production increases by 1% in 2013
Average dry natural gas production grew modestly in 2013, despite a 35% year-on-year rise in prices. Production grew from 65.7 billion cubic feet per day (Bcf/d) in 2012 to 66.5 Bcf/d in 2013, a 1% increase and the lowest annual growth since 2005. This modest growth compares with 5% growth in 2012 and 7% growth in 2011.
Average wholesale (spot) prices for natural gas in 2013 increased significantly throughout the United States compared to 2012. The average wholesale price for natural gas at Henry Hub in Erath, Louisiana, a key benchmark location for pricing throughout the United States, rose to $3.73 per million British thermal units (MMBtu) in 2013. However, 2013 prices were, with the exception of 2012, at their lowest level since 2002.
 
Slower demand growth, low imports limit room for gas production growth
Although prices remained relatively low, total disposition (consumption, gross exports, and net storage injections) of natural gas was flat in 2013 compared with 2012 levels, versus the 3% increase in 2012 and the 4% increase in 2011. Domestic consumption in 2013, which makes up 96% of total U.S. natural gas disposition, increased by 2%, despite the decrease in consumption of natural gas for electric generation (power burn) in 2013. Natural gas consumed for power burn was 2.6 Bcf/d below 2012 levels, as coal regained some of its market share in response to natural gas prices that were higher relative to coal, and as cooler summer temperatures in 2013 reduced total electric generation demand. Increased winter natural gas demand offset the decline in power burn, leading to a net increase in consumption for the year.
Since 2010, domestic production has satisfied 88% of U.S. natural gas disposition, as natural gas imports to the United States have continued to decline. As recently as 2007, the United States depended on imports for 16% of its natural gas needs. Although U.S. production has displaced some natural gas imports to the United States, imports continue, although as a marginal source of supply, largely during cold weather and pipeline maintenance outages.
Storage injections provided another outlet for U.S. natural gas production. The net withdrawal from working natural gas storage inventories in 2013 was significantly higher than in 2012 because of large withdrawals in January and December; refilling storage after the January drawdown absorbed production through the summer, leaving end-of-October working gas inventories at their third-highest level in the past 10 years. High demand at the end of the year then drew inventories back down, and by the end of December they had declined to their seventh-highest level in the past 10 years. The net withdrawal in 2013 was 537 billion cubic feet (Bcf), significantly greater than the net withdrawal of 49 Bcf in 2012. In 2011, there was a net injection into working inventories of 351 Bcf.
 
Growth in Marcellus Shale production offsets decreases in other basins
Greater levels of natural gas output in the Marcellus Shale contributed to the net increase in national production levels despite decreases in other basins. Dry natural gas production from Marcellus rose by 61%, from a 2012 average of 6.5 Bcf/d to a 2013 average of 10.4 Bcf/d (Figure 2), according to U.S. Energy Information Administration (EIA) calculations based on data from Drillinginfo.
Marcellus production alone accounted for 75% of all production growth over the past year in the six basins covered in EIA's recently released Drilling Productivity Report (DPR), which highlights the latest regional trends in drilling, completion, and production from gas- and oil-producing wells.
Despite a 21% decline from 2012 to 2013 in the total rig count in the Marcellus, natural gas output per rig rose by 47%, according to the DPR. Production gains have come largely from northeastern portions of the basin producing drier natural gas, where output has benefitted from gathering line and pipeline capacity expansions. However, infrastructure improvements have also bolstered production in the wetter southwestern portions of the basin, which saw increased drilling in 2013.
Outside of Marcellus, the shift toward liquids-rich production continued in 2013
Wide differences in natural gas and oil prices affect the decisions that upstream operators make about where and how to deploy capital. Although natural gas prices remained at relatively low levels through the end of 2013, crude oil prices remained mostly above $100/barrel (bbl), encouraging operators to target regions with wetter gas and higher returns on investment.
The Haynesville Shale in Texas and Louisiana and the Barnett Shale in Texas generally are considered dry natural gas plays because of the low level of liquids in their production streams. Production from the Haynesville and Barnett declined by 27% and 9%, respectively, between 2012 and 2013. The Barnett and Haynesville reductions exceeded the 3% combined increase in gas production at the Fayetteville Shale in Arkansas and the nearby Woodford Shale in Oklahoma. The Baker Hughes active rig count decreased significantly in all four of these basins between 2011 and 2013. Some of the production declines in these fields are also partially attributable to the normal decline or maturity of their existing wells.
At the same time, increased new production activity in wetter shale basins enabled these basins to pick up some of the drop-off in production from their drier counterparts. At the Eagle Ford in south Texas, where operators target a combination of crude oil, condensate, and natural gas liquids, average daily gas production reached 3.3 Bcf/d in 2013, 54% higher than in 2012. In 2012, the average active rig count in Eagle Ford rose by 29% before declining slightly in 2013. Production also grew by 33% in the Bakken Shale in North Dakota and Montana, where operators predominantly target crude oil, following a rig count increase in 2012.
The shift in new production activity from drier to wetter production fields is demonstrated by data on Lower 48 gross withdrawals from wells producing only natural gas, versus those producing a combination of gas and oil. While gross withdrawals from wells containing only natural gas rose by 4% from 2011 to 2013, from 40.4 Bcf/d to 42.1 Bcf/d, gross withdrawals from wells producing a combination of both gas and oil increased by 7%, from 25.0 Bcf/d to 26.8 Bcf/d (Figure 3), according to EIA calculations with data from Drillinginfo.
The increased gross withdrawals from wells producing both gas and oil coincided with changes in the oil-to-natural gas price ratio. When the oil-to-natural gas price ratio increased by 49% in 2012, from $28.33/bbl of Brent crude oil in 2011 to $42.13/bbl of Brent crude oil for every $1.00/MMBtu of natural gas, gross withdrawals from wells producing both gas and oil rose by 8%. When gas prices rose and the oil-to-gas price ratio decreased by 30% to $29.61/bbl of Brent crude oil for every $1.00/MMBtu of natural gas in 2013, gross withdrawals from wells producing both gas and oil decreased by 1%. The shift in the focus of new natural gas production activity was also evident in terms of the increase since 2010 in the percentage of new wells that produced both natural gas and oil (Figure 4). In 2010, 57% of all new natural gas-producing wells produced both gas and oil. By 2012, 73% of all new natural gas producing wells produced both gas and oil. This share fell to 68% in 2013.
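The ratio changes quoted above can be checked directly against the cited values. A minimal sketch in Python (the variable name is ours; the figures are the EIA numbers in the text):

    # Oil-to-gas price ratio: $/bbl of Brent crude per $1.00/MMBtu of natural gas.
    ratio = {2011: 28.33, 2012: 42.13, 2013: 29.61}
    print((ratio[2012] - ratio[2011]) / ratio[2011])  # ~ +0.49, the 49% rise in 2012
    print((ratio[2013] - ratio[2012]) / ratio[2012])  # ~ -0.30, the 30% drop in 2013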
Other production
The shift in natural gas production also involved movement away from geologically more permeable zones that have traditionally accounted for a greater share of North American gas supply. Marketed natural gas production was generally flat or down for inland production areas outside of shale and tight formations in 2012 and 2013, except for New Mexico, and also remained relatively flat in onshore Canada. Production from offshore areas in both Canada and the United States declined between 2012 and 2013.
 

Religion and mythology

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Rel...