by Nancy Owano
(Phys.org) — A Friday report in Nature News revisits a well-publicized topic, the air quality in Beijing. That may seem like old news, but the report carries new information on the city's troubling air quality. Scientists were looking for information about the potential for pathogens and allergens in Beijing's air, and they turned to "metagenomics" as their study tool. The research team described what they were seeking in their paper, "Inhalable Microorganisms in Beijing's PM2.5 and PM10 Pollutants during a Severe Smog Event," for Environmental Science & Technology. "While the physical and chemical properties of PM pollutants have been extensively studied, much less is known about the inhalable microorganisms. Most existing data on airborne microbial communities using 16S or 18S rRNA gene sequencing to categorize bacteria or fungi into the family or genus levels do not provide information on their allergenic and pathogenic potentials. Here we employed metagenomic methods to analyze the microbial composition of Beijing's PM pollutants during a severe January smog event."
They took 14 air samples over seven consecutive days. Using genome sequencing, they found about 1,300 different microbial species in the heavy smog period of early last year. The scientists compared their results with a large gene database. What about their findings? Most of the microbes they found were benign, but a few were responsible for allergies and respiratory disease. As Nature News reported, the most abundant species identified was Geodermatophilus obscurus, a common soil bacterium. The brew also included Streptococcus pneumoniae, which can cause pneumonia, along with Aspergillus fumigatus, a fungal allergen, and other bacteria typically found in faeces. "Our results," wrote the researchers, "suggested that the majority of the inhalable microorganisms were soil-associated and nonpathogenic to humans. Nevertheless, the sequences of several respiratory microbial allergens and pathogens were identified and their relative abundance appeared to have increased with increased concentrations of PM pollution."
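The core measurement reported here, which species dominate an air sample and by how much, comes down to tallying classified sequencing reads. The following sketch is a simplified stand-in for that step, assuming reads have already been matched to species against a reference database; the species labels and proportions in the example are illustrative only, not the study's data.

```python
from collections import Counter

def relative_abundance(read_classifications):
    """Compute each species' share of classified sequencing reads.

    `read_classifications` is a list of species labels, one per read --
    a simplified stand-in for the output of matching metagenomic reads
    against a reference gene database.
    """
    counts = Counter(read_classifications)
    total = sum(counts.values())
    return {species: n / total for species, n in counts.items()}

# Hypothetical toy sample: labels and proportions are illustrative only.
reads = (["Geodermatophilus obscurus"] * 6
         + ["Streptococcus pneumoniae"] * 1
         + ["Aspergillus fumigatus"] * 1)
abundance = relative_abundance(reads)
print(round(abundance["Geodermatophilus obscurus"], 2))  # 0.75
```

Comparing these relative-abundance profiles across days with different PM2.5 concentrations is what let the researchers say that pathogen abundance appeared to rise with pollution.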
The authors suggested that their findings may provide an important reference for environmental scientists, health workers and city planners. The researchers also suggested, according to Nature News, that clinical studies explore signs of the same microbes in the sputum of patients with respiratory tract infections, to assess whether smoggier days lead to more infections.
Metagenomics, which can analyze microbial communities regardless of the ability of member organisms to be cultured in the laboratory, is recognized as a powerful approach. Nature Reviews also describes metagenomics as "based on the genomic analysis of microbial DNA that is extracted directly from communities in environmental samples."
More information: Environmental Science & Technology paper: pubs.acs.org/doi/abs/10.1021/es4048472
“And when our children’s children look us in the eye,” said President Obama in his State of the Union address, “and ask if we did all we could to leave them a safer, more stable world, with new sources of energy, I want us to be able to say yes, we did.”
I do, too.
But the way to do this is the exact opposite of Obama’s prescribed policies of restricting fossil fuel use and giving energy welfare to producers of unreliable energy from solar and wind. Just consider the last three decades.
When I was born 33 years ago, politicians and leaders were telling my parents’ generation the same thing we’re being told today: that for the sake of future generations, we need to end our supposed addiction to fossil fuels.
Fossil fuels, they explained, were a fast-depleting resource that would inevitably run out. Fossil fuels, they explained, would inevitably make our environment dirtier and dirtier. And of course, fossil fuels would change the climate drastically and for the worse. It was the responsibility of my parents’ generation, they were told, to drastically restrict fossil fuels and live on solar and wind energy.
My generation should be eternally grateful they did the exact opposite.
Since 1980, when I was born, fossil fuel use here and around the world has only grown. In the U.S., between 1980 and 2012, the consumption of oil increased 3.9%, the consumption of natural gas increased 28.5%, and the consumption of coal increased 12.6%. (Had it not been for the economic downturn, these numbers would be higher.)
Globally, oil consumption increased 38.6%, natural gas increased 130.5%, and coal increased 106.8%. (Source: BP Statistical Review of World Energy, June 2013)
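The growth figures above are simple percentage changes between the 1980 and 2012 consumption totals. A minimal helper makes the arithmetic explicit; the index values in the example are illustrative, not BP's raw tonnage numbers.

```python
def percent_change(start, end):
    """Percentage change from a base-year value to a later value."""
    return (end - start) / start * 100.0

# Illustrative: a base-year index of 100 growing to 138.6 is a 38.6% rise,
# matching the global oil figure quoted above.
print(round(percent_change(100.0, 138.6), 1))  # 38.6
```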
My parents’ generation was told to expect disastrous consequences.
In 1980, the “Global 2000 Report to the President” warned: “If present trends continue, . . . the world in 2000 will be more crowded, more polluted, less stable ecologically, and more vulnerable to disruption than the world we live in now. Serious stresses involving population, resources, and environment are clearly visible ahead.”
In 1989, the New Yorker’s star journalist Bill McKibben, claiming to represent a scientific consensus, prophesied:
We stand at the end of an era—the hundred years’ binge on oil, gas, and coal…The choice of doing nothing—of continuing to burn ever more oil and coal—is not a choice, in other words. It will lead us, if not straight to hell, then straight to a place with a similar temperature.
Al Gore, just before becoming Vice President, said the use of fossil fuels put us on the verge of an “ecological holocaust.”
What actually happened? Thanks in large part to our increasing use of fossil fuel energy and the technologies they power, life has gotten a lot better across the board for billions of people around the globe.
Life expectancy is way up. World life expectancy at birth was just 63 years in 1980. That number has increased to over 70. The U.S., already far above average at 73 years in 1980, today enjoys an average life expectancy of 78. The infant mortality rate of mankind is less than half of what it used to be in 1980 (from 80 to 35 per 1,000 live births).
Malnutrition and undernourishment have plummeted. Access to electricity, a proxy for development and health, is constantly increasing. Access to improved water sources, a necessity for basic hygiene standards and human health, has been possible for ever increasing portions of mankind, especially in poor countries (1990 = 76%, 2012 = 89%).
GDP per capita has constantly increased. The percentage of people who live on $2 a day in South Asia, a region with massive fossil fuel use, has been steadily on the decline from 87% in the early 1980s to 67% in 2010. (Source: World Bank Data.)
And then there is the statistic that the climate doomsayers will never acknowledge. Thanks to energy and technology, climate-related deaths have been plummeting for decades: you are 50X less likely to die from a climate-related cause (heat wave, cold snap, drought, flood, storm) than you would have been 80 years ago.
We are the longest-living, best-fed, healthiest, richest generation of humans on this planet and there is unending potential for further progress.
If, that is, we don’t listen to the anti-fossil-fuel “experts.”
Here’s a scary story. In the 1970s, arguably the most revered intellectuals on energy and environment were men named Amory Lovins and Paul Ehrlich. (They are still revered, which, for reasons that will become apparent within a paragraph, is a moral crime.)
In 1977, Lovins, considered an energy wunderkind for his supposedly innovative criticisms of fossil fuels and his support of solar power and reduced energy use, explained that we already used too much energy. And in particular, the kind of energy we least needed was . . . electricity: “[W]e don’t need any more big electric generating stations. We already have about twice as much electricity as we can use to advantage.”
Environmentalist legend Paul Ehrlich had famously declared “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” Thanks in large part to a surge in energy use that led to a massive alleviation of global hunger, that prediction did not come to pass. But it might have, had we followed Ehrlich’s advice: “Except in special circumstances, all construction of power generating facilities should cease immediately, and power companies should be forbidden to encourage people to use more power. Power is much too cheap. It should certainly be made more expensive and perhaps rationed, in order to reduce its frivolous use.”
Had we listened to these two “experts,” billions of people—ultimately, every single person alive—would be worse off today. Or dead. You would certainly not be reading this on an “unnecessary” electronic device connecting to the electricity-hungry Internet.
This is what is at stake with energy. And while President Obama wasn’t extreme enough in his recent rhetoric for many environmentalist groups, he is on record as calling for outlawing 80% of fossil fuel use. Which would mean not only prohibiting future power plants, but shutting down existing ones.
Whether he knows it or not, every time he attacks fossil fuels, President Obama is working to make his grandchildren’s world much, much worse.
A scientific first: team engineers photograph radiation beams in the human body through the Cherenkov effect. Long-exposure image of Cherenkov emission and induced fluorescence from fluorescein dissolving in water during irradiation by a therapeutic LINAC beam. Credit: Adam Glaser.
Jan 23, 2014
A scientific breakthrough may give the field of radiation oncology new tools to increase the precision and safety of radiation treatment in cancer patients by helping doctors "see" the powerful beams of a linear accelerator as they enter or exit the body.
We don't have X-ray vision. When we have an X-ray or mammogram, we cannot detect the radiation beam that passes through our bone or soft tissue, and neither can our doctor. But what if we could see X-rays? When powerful X-rays are used for cancer treatment, we could watch how they hit the tumor. If they were off target, we could stop and make adjustments to improve accuracy. Pinpoint precision is important: the goal of radiation is to kill cancer cells without harming healthy tissue.
Safety in Radiation Oncology
As a way to make radiation safer and better, Dartmouth began investigating a scientific phenomenon called the Cherenkov effect in 2011. Our scientists and engineers theorized that, by using Cherenkov emissions, the beam of radiation could be made "visible" to the treatment team. The ability to capture an image of the beam would show:
how the radiation signals travel through the body
the dose of radiation to the skin
any errors in dosage.
TV viewers may have seen images of sunken fuel rods from a nuclear power plant emitting a blue-green glow. That is the Cherenkov effect. When an electrically charged particle travels through a non-conducting medium, such as water or the human body, faster than light itself travels in that medium, the medium glows: as the matter relaxes from the polarization induced by the passing particle, it emits light. (Yes, for a brief period people glow during radiation.)
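The "faster than light in the medium" condition can be made quantitative. Cherenkov emission requires a particle speed v > c/n, where n is the medium's refractive index; for an electron that translates into a minimum kinetic energy via special relativity. The sketch below works out that threshold for water (n ≈ 1.33, close to soft tissue); the numbers are textbook physics, not taken from the Dartmouth study.

```python
import math

def cherenkov_threshold_kinetic_energy_mev(n, rest_mass_mev=0.511):
    """Minimum kinetic energy (MeV) for a charged particle to emit
    Cherenkov light in a medium with refractive index n.

    Emission requires v > c/n, i.e. beta > 1/n. The default rest mass
    is the electron's (0.511 MeV/c^2).
    """
    beta = 1.0 / n                                # threshold speed / c
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)      # Lorentz factor at threshold
    return (gamma - 1.0) * rest_mass_mev          # relativistic kinetic energy

# Electron in water (n ~ 1.33): threshold ~0.26 MeV, far below the
# multi-MeV energies of a therapeutic LINAC beam, so treatment beams
# readily produce Cherenkov light in tissue.
print(round(cherenkov_threshold_kinetic_energy_mev(1.33), 2))  # 0.26
```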
The Cherenkov effect in the laboratory
As a first step, engineers at the Thayer School of Engineering at Dartmouth modified a regular camera with a night vision scope to take photos of radiation beams as they passed through water. What appeared on the photos is the Cherenkov effect, a luminescent blue glow. (An engineering student, Adam Glaser, explains how it works in this video.)
To refine the approach for use in radiation treatment, scientists used a mannequin of the human body. They measured and studied the results to refine their ability to capture the luminescence, experimenting with beam size, position, and wavelength.
Cherenkov imaging used for first time in treatment setting
With the clinical aspects refined, Geisel School of Medicine researchers photographed luminescence during the routine radiation treatment of a dog with an oral tumor.
This was the first time Cherenkov studies came out of the laboratory and into a treatment setting. The scientists coined the term Cherenkoscopy for the approach. As they anticipated, during the session they were able to see detailed information about the treatment field and the dose. The results were published in the November 2013 issue of the Journal of Biomedical Optics.
"This first observation in the dog proved that we could image a radiation beam during treatment in real time," said David Gladstone, ScD, chief of Clinical Physics at Norris Cotton Cancer Center. "The images verified the shape of the beam as well as intended motion of the treatment machine."
First image of Cherenkov emissions during treatment of human breast
Now ready to use the technology with a human patient, the team prepared to view radiation as it entered the body of a female breast cancer patient undergoing radiation in July 2013.
"Breast cancer is suited for this because the imaging visualizes the superficial dose of radiation to the skin," said Lesley A. Jarvis, MD, radiation oncologist, Norris Cotton Cancer Center. Skin reactions, similar to sunburn, are a common and bothersome side effect during breast radiation. "By imaging and quantitating the surface dose in a way that has never been done before," said Jarvis, "we hope to learn more about the physical factors contributing to this skin reaction."
By seeing the effect of radiation on the body, radiation oncologists and physicists can make adjustments to avoid side effects to the skin. Most radiation patients undergo somewhere between 8 and 20 sessions. The Cherenkov images of the breast cancer patient showed a hot spot in her underarm, which physicians and physicists could work to prevent in future sessions.
"The actual images show that we are treating the exact correct location, with the appropriate beam modifications and with the precise dose of radiation," said Jarvis.
Clinical use of Cherenkov emissions proves successful
This trial showed that the Cherenkov effect is feasible for real-time use during radiation. "We have learned the imaging is easy to incorporate into the patient's treatment, adding only minimal time to the treatments," said Jarvis.
"The time needed to acquire this information is negligible, even with our experimental, non-integrated system," said Gladstone. "Cherenkov images were found to contain much richer information than anticipated, specifically, we did not expect to visualize internal blood vessels."
Mapping blood vessels opens up opportunities for radiation oncology to use a person's internal anatomy to confirm precise locations. Skin tattoos on patients determine a preliminary alignment that is verified with X-rays, which show bony anatomy or implanted markers. Cherenkov imaging allows technicians to visualize soft tissue and internal vasculature.
A possible safety net for radiation treatment
By integrating Cherenkov imaging into routine clinical care, Gladstone says the technology could be used to verify that the proper dose is being delivered to patients, helping to avoid misadministration of radiation therapy, a rare, but dangerous occurrence.
Twelve patients are participating in a pilot study, which is almost complete. The research team plans to publish the results in a peer reviewed journal. The Cherenkov effect project team includes Lesley Jarvis, MD, assistant professor of Medicine, Geisel School of Medicine; Brian Pogue, PhD, professor of Engineering, Thayer School, professor of Physics & Astronomy, Dartmouth College, professor of Surgery, Geisel School of Medicine; David J. Gladstone, ScD, DABMP associate professor of Medicine, Geisel School of Medicine; Adam Glaser, engineering student; Rongxiao Zhang, physics student; Whitney Hitchcock, medical school student.
David Strumfels: Watch Jonathan Eisen wring his hands over this. What? Scientists are above cognitive biases, unlike ordinary mortals (who require them to function)? And they -- the NYT -- present no evidence! I have bad news for him. Cognitive bias is built into how the human brain functions. High intelligence and years of education and research do not make it just go away -- far from it: by creating the illusion of being above it, the Eisens and every other scientist in the world (including me) are that much more susceptible to it. The truth is, it should be Eisen presenting evidence that scientists really are that far superior. His outrage suggests that he probably (albeit unconsciously) knows he hasn't a leg to stand on.
It is the cross-checking, competition, and self-correctiveness of science that allows it to make progress, not the perfect lack of biases and other human frailties among the people who do it. Eisen, and every scientist, should know this, know it by heart, one might say, if not by mind.
End Strumfels
SCIENCE is in crisis, just when we need it most. Two years ago, C. Glenn Begley and Lee M. Ellis reported in Nature that they were able to replicate only six out of 53 “landmark” cancer studies. Scientists now worry that many published scientific results are simply not true. The natural sciences often offer themselves as a model to other disciplines. But this time science might look for help to the humanities, and to literary criticism in particular.
A major root of the crisis is selective use of data. Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions. Psychologists call this “confirmation bias”: We seek out information that confirms what we already believe. “We each begin probably with a little bias,” as Jane Austen writes in “Persuasion,” “and upon that bias build every circumstance in favor of it.”
Despite the popular belief that anything goes in literary criticism, the field has real standards of scholarly validity. In his 1967 book “Validity in Interpretation,” E. D. Hirsch writes that “an interpretive hypothesis” about a poem “is ultimately a probability judgment that is supported by evidence.” This is akin to the statistical approach used in the sciences; Mr. Hirsch was strongly influenced by John Maynard Keynes’s “A Treatise on Probability.”
However, Mr. Hirsch also finds that “every interpreter labors under the handicap of an inevitable circularity: All his internal evidence tends to support his hypothesis because much of it was constituted by his hypothesis.” This is essentially the problem faced by science today. According to Mr. Begley and Mr. Ellis’s report in Nature, some of the nonreproducible “landmark” studies inspired hundreds of new studies that tried to extend the original result without verifying if the original result was true. A claim is not likely to be disproved by an experiment that takes that claim as its starting point. Mr. Hirsch warns about falling “victim to the self-confirmability of interpretations.”
It’s a danger the humanities have long been aware of. In his 1960 book “Truth and Method,” the influential German philosopher Hans-Georg Gadamer argues that an interpreter of a text must first question “the validity — of the fore-meanings dwelling within him.” However, “this kind of sensitivity involves neither ‘neutrality’ with respect to content nor the extinction of one’s self.”
Rather, “the important thing is to be aware of one’s own bias.” To deal with the problem of selective use of data, the scientific community must become self-aware and realize that it has a problem. In literary criticism, the question of how one’s arguments are influenced by one’s prejudgments has been a central methodological issue for decades.
Sometimes prejudgments are hard to resist. In December 2010, for example, NASA-funded researchers, perhaps eager to generate public excitement for new forms of life, reported the existence of a bacterium that used arsenic instead of phosphorus in its DNA. Later, this study was found to have major errors. Even if such influences don’t affect one’s research results, we should at least be able to admit that they are possible.
Austen might say that researchers should emulate Mr. Darcy in “Pride and Prejudice,” who submits, “I will venture to say that my investigations and decisions are not usually influenced by my hopes and fears.” At least Mr. Darcy acknowledges the possibility that his personal feelings might influence his investigations.
But it would be wrong to say that the ideal scholar is somehow unbiased or dispassionate. In my freshman physics class at Caltech, David Goodstein, who later became vice provost of the university, showed us Robert Millikan’s lab notebooks for his famed 1909 oil drop experiment with Harvey Fletcher, which first established the electric charge of the electron.
The notebooks showed many fits and starts and many “results” that were obviously wrong, but as they progressed, the results got cleaner, and Millikan could not help but include comments such as “Best yet — Beauty — Publish.” In other words, Millikan excluded the data that seemed erroneous and included data that he liked, embracing his own confirmation bias.
Mr. Goodstein’s point was that the textbook “scientific method” of dispassionately testing a hypothesis is not how science really works. We often have a clear idea of what we want the results to be before we run an experiment. We freshman physics students found this a bit hard to take. What Mr. Goodstein was trying to teach us was that science as a lived, human process is different from our preconception of it. He was trying to give us a glimpse of self-understanding, a moment of self-doubt.
When I began to read the novels of Jane Austen, I became convinced that Austen, by placing sophisticated characters in challenging, complex situations, was trying to explicitly analyze how people acted strategically. There was no fancy name for this kind of analysis in Austen’s time, but today we call it game theory. I believe that Austen anticipated the main ideas of game theory by more than a century.
As a game theorist myself, how do I know I am not imposing my own way of thinking on Austen? I present lots of evidence to back up my claim, but I cannot deny my own preconceptions and training. As Mr. Gadamer writes, a researcher “cannot separate in advance the productive prejudices that enable understanding from the prejudices that hinder it.” We all bring different preconceptions to our inquiries, whether about Austen or the electron, and these preconceptions can spur as well as blind us.
Perhaps because of its self-awareness about what Austen would call the “whims and caprices” of human reasoning, the field of psychology has been most aggressive in dealing with doubts about the validity of its research. In an open email in September 2012 to fellow psychologists, the Nobel laureate Daniel Kahneman suggests that “to deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating.”
Everyone, including natural scientists, social scientists and humanists, could use a little more self-awareness. Understanding science as fundamentally a human process might be necessary to save science itself.
Michael Suk-Young Chwe, a political scientist at U.C.L.A., is the author of “Jane Austen, Game Theorist.”
Researchers at Princeton University have used nanotechnology to develop a metallic mesh that increases the efficiency of organic solar cells nearly threefold.
The scientists published their findings in the journal Optics Express. The team was able to reduce reflectivity and capture more of the light that isn't reflected. The resulting solar cell is thinner and less reflective, sandwiching plastic and metal with the nanomesh. The so-called "Plasmonic Cavity with Subwavelength Hole array," or "PlaCSH," reduces the potential for losing light and reflects only 4% of direct sunlight, leading to a 52% increase in efficiency compared to conventional organic solar cells.
Benefits of using the nanomesh
PlaCSH is capable of capturing large amounts of sunlight even when the sunlight is dispersed on cloudy days, which translates to an increase of 81% in efficiency in indirect lighting conditions. PlaCSH is up to 175% more efficient than traditional solar cells.
The mesh is only 30 nanometers thick and the holes in it are only 175 nm in diameter. This replaces the thicker, traditional top layer that is made out of indium-tin-oxide (ITO). Since this mesh is smaller than the wavelength of the light it’s trying to collect, it is able to exploit the bizarre way that light works in subwavelength structures, allowing the cell to capture light once it enters the holes in the mesh instead of letting much of it reflect away.
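The "subwavelength" claim is easy to check: the mesh's 175 nm holes are smaller than every wavelength in the visible spectrum (roughly 400 to 700 nm), which is why the cell can trap light rather than reflect it. A trivial sketch of that comparison:

```python
def is_subwavelength(feature_nm, wavelength_nm):
    """A structure behaves as 'subwavelength' when its features are
    smaller than the wavelength of the light it interacts with."""
    return feature_nm < wavelength_nm

# PlaCSH hole diameter (175 nm) against sampled visible wavelengths.
visible_nm = range(400, 701, 50)
print(all(is_subwavelength(175, wl) for wl in visible_nm))  # True
```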
The scientists believe that the cells can be made cost effectively and that this method should work for silicon and gallium arsenide solar cells as well.
By modifying the molecular structure of a polymer used in solar cells, an international team of researchers has increased solar efficiency by more than thirty percent.
Researchers from North Carolina State University and the Chinese Academy of Sciences have found an easy way to modify the molecular structure of a polymer commonly used in solar cells. Their modification can increase solar cell efficiency by more than 30 percent.
Polymer-based solar cells have two domains, consisting of an electron acceptor and an electron donor material. Excitons are the energy particles created by solar cells when light is absorbed. In order to be harnessed effectively as an energy source, excitons must be able to travel quickly to the interface of the donor and acceptor domains and retain as much of the light’s energy as possible.
One way to increase solar cell efficiency is to adjust the difference between the highest occupied molecular orbital (HOMO) of the polymer donor and the lowest unoccupied molecular orbital (LUMO) of the acceptor so that the exciton can be harvested with minimal loss. One of the most common ways to accomplish this is by adding a fluorine atom to the polymer's molecular backbone, a difficult, multi-step process that can increase the solar cell's performance but carries considerable material fabrication costs.
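The link between those orbital levels and the cell's open-circuit voltage is often estimated with an empirical rule of thumb (the Scharber relation, which is not from this article): Voc is roughly the donor HOMO / acceptor LUMO gap minus an empirical loss of about 0.3 V. The sketch below uses hypothetical energy levels to show why deepening (lowering) the donor HOMO raises the estimated Voc.

```python
def estimated_voc(donor_homo_ev, acceptor_lumo_ev, empirical_loss_v=0.3):
    """Scharber-style empirical estimate of open-circuit voltage (V)
    from frontier orbital energies (eV, negative below vacuum):

        Voc ~ (|E_HOMO,donor| - |E_LUMO,acceptor|)/e - 0.3 V

    The 0.3 V term is an empirical loss. All numbers used below are
    illustrative, not values from the PBT-OP paper.
    """
    return abs(donor_homo_ev) - abs(acceptor_lumo_ev) - empirical_loss_v

# Lowering the donor HOMO from -5.1 eV to -5.3 eV (hypothetical values)
# raises the estimated Voc:
print(round(estimated_voc(-5.1, -4.0), 2))  # 0.8
print(round(estimated_voc(-5.3, -4.0), 2))  # 1.0
```

This is the qualitative mechanism behind PBT-OP's improved open-circuit voltage described below: a lower HOMO level widens the usable energy gap.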
A team of chemists led by Jianhui Hou from the Chinese Academy of Sciences created a polymer known as PBT-OP from two commercially available monomers and one easily synthesized monomer. Wei Ma, a post-doctoral physics researcher from NC State and corresponding author on a paper describing the research, conducted the X-ray analysis of the polymer’s structure and the donor:acceptor morphology.
PBT-OP was not only easier to make than other commonly used polymers, but a simple manipulation of its chemical structure gave it a lower HOMO level than had been seen in other polymers with the same molecular backbone. PBT-OP showed an open circuit voltage (the voltage available from a solar cell) value of 0.78 volts, a 36 percent increase over the ~ 0.6 volt average from similar polymers.
According to NC State physicist and co-author Harald Ade, the team’s approach has several advantages. “The possible drawback in changing the molecular structure of these materials is that you may enhance one aspect of the solar cell but inadvertently create unintended consequences in devices that defeat the initial intent,” he says. “In this case, we have found a chemically easy way to change the electronic structure and enhance device efficiency by capturing a larger fraction of the light’s energy, without changing the material’s ability to absorb, create and transport energy.”
The researchers’ findings appear in Advanced Materials. The research was funded by the U.S. Department of Energy, Office of Science, Basic Energy Science and the Chinese Ministry of Science and Technology. Dr. Maojie Zhang synthesized the polymers; Xia Guo,Shaoqing Zhang and Lijun Huo from the Chinese Academy of Sciences also contributed to the work.
Publication: Maojie Zhang, et al., “An Easy and Effective Method to Modulate Molecular Energy Level of the Polymer Based on Benzodithiophene for the Application in Polymer Solar Cells,” Advanced Materials, 2013; doi: 10.1002/adma.201304631
Source: Tracey Peake, North Carolina State University
Image: Maojie Zhang, et al. doi: 10.1002/adma.201304631
Paying people to protect their natural environment is a popular conservation tool around the world – but figuring out the return on investment, for both people and nature, is a thorny problem, especially since such efforts typically stretch on for years.
"Short attention-span worlds with long attention-span problems" is how Xiaodong Chen, a former Michigan State University doctoral student now on faculty at the University of North Carolina-Chapel Hill, sums it up.
Chen, his adviser Jianguo "Jack" Liu, director of the MSU Center for Systems Integration and Sustainability (CSIS), and others have developed a new way to evaluate and model the long-term effectiveness of conservation investments. Their achievement factors in not only ecological gains – like more trees growing – but also the actions and reactions of people.
The paper, Assessing the Effectiveness of Payments for Ecosystem Services: an Agent-Based Modeling Approach, appears in this week's online edition of Ecology and Society.
The paper examines payments for ecosystem services – the practice of paying people to perform tasks or engage in practices that aid conservation. The authors examined one of China's most sweeping – the National Forest Conservation Program, in which residents in Wolong Nature Reserve are paid to stop chopping down trees for timber and fuel wood.
Chen explained they tapped into both social data and environmental information to be able to create a computer model to simulate how the policy would fare over many years in a variety of scenarios.
Studies documenting results on land cover change and panda habitat dynamics were merged with studies revealing how people were likely to behave if new households were formed or incentives for conservation activities were varied.
"Usually studies are developed in either the social sciences or the natural sciences, and the importance of the other perspectives are not built into scientific exploration," Chen said. "We were able to develop this kind of simulation because of collaborative interdisciplinary research - by putting people with different backgrounds together."
He also said the model's ability to run scenarios about how policy could work over decades is crucial because many goals of conservation, like restoring wildlife habitat, can take decades. In the meantime, the actions of individuals living in the area can change.
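The agent-based approach described above can be sketched in a few lines: individual households each decide whether the conservation payment beats their perceived payoff from logging, and forest cover responds to the aggregate choice year by year. Everything below is a hypothetical toy model with made-up parameters, not the Wolong simulation itself.

```python
import random

def simulate(years=30, households=100, payment=1.0, logging_payoff=0.8,
             initial_forest=0.5, regrowth=0.02, loss_per_logger=0.001,
             seed=42):
    """Toy agent-based sketch of a payments-for-ecosystem-services policy.

    Each year, every household independently picks the higher payoff:
    accept the conservation payment, or log for timber and fuel wood.
    Forest cover (a 0-1 fraction) regrows a little each year but shrinks
    with each logging household. All parameter values are illustrative.
    """
    rng = random.Random(seed)
    forest = initial_forest
    for _ in range(years):
        loggers = 0
        for _ in range(households):
            # Households perceive the logging payoff differently.
            perceived = logging_payoff * rng.uniform(0.5, 1.5)
            if perceived > payment:
                loggers += 1
        forest = forest * (1 + regrowth) - loggers * loss_per_logger
        forest = min(1.0, max(0.0, forest))
    return forest

# A more generous payment leaves more households conserving,
# and therefore more forest standing after 30 simulated years.
print(simulate(payment=1.2) > simulate(payment=0.4))  # True
```

Running such a model across many payment levels and household-formation scenarios is what lets researchers ask how a policy fares over decades before committing real money to it.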
When exploring solutions to income inequality, policy makers pay close attention to costs. The cost of healthcare. The cost of food. The cost of child care. The cost of housing.
What about the cost of energy?
According to the Bureau of Labor Statistics, in 2012 the average U.S. family spent over $4,600, or about 9 percent of its budget, to heat and power its home and fuel its vehicles. Families in the bottom fifth of income earners spent nearly 33 percent more of their budget on energy costs than average: $2,500 a year, or 12 percent of their annual budget.
Low-income families spend two and a half times more on energy than on health services. Unlike food and housing, consumers cannot shop around for the lowest-cost energy. Bargains can be found in the supermarket, but prices at the pump do not vary much from one station to the next. Conservation similarly is not an option when the choice is between driving to work and saving a gallon of gasoline.
One way to address income inequality is to tackle rising energy costs. The U.S. Energy Information Administration projects that the price of electricity will rise 13.6 percent and the price of gasoline 15.7 percent between now and 2040. Rising global demand, aging and insufficient energy infrastructure, and restrictive government policies all play a role in increasing costs. President Obama has the ability to reverse this trend and lessen the blow to all consumers.
Take the shale gas boom for example. Increasing access to private and state lands and sound state regulatory programs have boosted production of natural gas and led to a significant lowering of prices. IHS CERA predicted that the shale revolution lifted household income by more than $1,200 in 2012 through lower energy costs, more job opportunities and greater federal and state tax revenues.
Policy makers should promote responsible energy development with the knowledge that it will have a positive effect on even the most vulnerable. The president has the power to act. Permitting energy infrastructure (including the Keystone XL Pipeline), opening new offshore areas to oil and natural gas development, and finalizing the nuclear waste confidence rulemaking could transform the energy economy.
If policy makers want to take meaningful action to help our nation’s low income families, they must pursue actions that help lower – not raise – the cost of energy.
“We are living in an age when sleep is more comfortable than ever and yet more elusive.”
The Ancient Greeks believed that one fell asleep when the brain filled with blood and awakened once it drained back out. Nineteenth-century philosophers contended that sleep happened when the brain was emptied of ambitions and stimulating thoughts. “If sleep doesn’t serve an absolutely vital function, it is the greatest mistake evolution ever made,” biologist Allan Rechtschaffen once remarked. Even today, sleep remains one of the most poorly understood human biological functions, despite some recent strides in understanding the “social jetlag” of our internal clocks and the relationship between dreaming and depression. In Dreamland: Adventures in the Strange Science of Sleep (public library), journalist David K. Randall — who stumbled upon the idea after crashing violently into a wall while sleepwalking — explores “the largest overlooked part of your life and how it affects you even if you don’t have a sleep problem.” From gender differences to why some people snore and others don’t to why we dream, he dives deep into this mysterious third of human existence to illuminate what happens when night falls and how it impacts every aspect of our days.
Most of us will spend a full third of our lives asleep, and yet we don’t have the faintest idea of what it does for our bodies and our brains. Research labs offer surprisingly few answers. Sleep is one of the dirty little secrets of science. My neurologist wasn’t kidding when he said there was a lot that we don’t know about sleep, starting with the most obvious question of all — why we, and every other animal, need to sleep in the first place.
But before we get too anthropocentrically arrogant in our assumptions, it turns out the quantitative requirement of sleep isn’t correlated with how high up the evolutionary chain an organism is:
Lions and gerbils sleep about thirteen hours a day. Tigers and squirrels nod off for about fifteen hours. At the other end of the spectrum, elephants typically sleep three and a half hours at a time, which seems lavish compared to the hour and a half of shut-eye that the average giraffe gets each night.
Humans need roughly one hour of sleep for every two hours they are awake, and the body innately knows when this ratio becomes out of whack. Each hour of missed sleep one night will result in deeper sleep the next, until the body’s sleep debt is wiped clean.
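The one-to-two ratio and the "debt wiped clean" idea above can be sketched as a simple running tally. This is a hypothetical illustration of the arithmetic Randall describes, not a clinical model; the function name and the sample numbers are invented.

```python
# Hypothetical sleep-debt tally based on the rough 1:2 ratio quoted above:
# one hour of sleep "pays for" two hours of wakefulness.
def sleep_debt(days):
    """days: list of (hours_awake, hours_slept); returns cumulative debt in hours."""
    debt = 0.0
    for awake, slept in days:
        debt += awake / 2 - slept  # sleep needed minus sleep obtained
        debt = max(debt, 0.0)      # debt can be wiped clean, but not banked as surplus
    return debt

# Two short nights, then a longer recovery night:
print(sleep_debt([(16, 5), (16, 6), (14, 10)]))  # → 2.0
```

Under this rule, each shortfall carries forward until a deeper or longer night of sleep pays it down, which is the "sleep debt" behavior the passage describes.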
What, then, happens as we doze off, exactly? Like all science, our understanding of sleep seems to be a constant “revision in progress”:
Despite taking up so much of life, sleep is one of the youngest fields of science. Until the middle of the twentieth century, scientists thought that sleep was an unchanging condition during which time the brain was quiet. The discovery of rapid eye movements in the 1950s upended that. Researchers then realized that sleep is made up of five distinct stages that the body cycles through over roughly ninety-minute periods. The first is so light that if you wake up from it, you might not realize that you have been sleeping. The second is marked by the appearance of sleep-specific brain waves that last only a few seconds at a time. If you reach this point in the cycle, you will know you have been sleeping when you wake up. This stage marks the last drop before your brain takes a long ride away from consciousness. Stages three and four are considered deep sleep. In three, the brain sends out long, rhythmic bursts called delta waves. Stage four is known as slow-wave sleep for the speed of its accompanying brain waves. The deepest form of sleep, this is the farthest that your brain travels from conscious thought. If you are woken up while in stage four, you will be disoriented, unable to answer basic questions, and want nothing more than to go back to sleep, a condition that researchers call sleep drunkenness. The final stage is REM sleep, so named because of the rapid movements of your eyes dancing against your eyelids. In this stage of sleep, the brain is as active as it is when it is awake. This is when most dreams occur.
Randall’s most urgent point, however, echoes what we’ve already heard from German chronobiologist Till Roenneberg, who studies internal time — in our blind lust for the “luxuries” of modern life, with all its 24-hour news cycles, artificial lighting on demand, and expectations of round-the-clock telecommunications availability, we’ve thrown ourselves into a kind of circadian schizophrenia:
We are living in an age when sleep is more comfortable than ever and yet more elusive. Even the worst dorm-room mattress in America is luxurious compared to sleeping arrangements that were common not long ago. During the Victorian era, for instance, laborers living in workhouses slept sitting on benches, with their arms dangling over a taut rope in front of them. They paid for this privilege, implying that it was better than the alternatives. Families up to the time of the Industrial Revolution engaged in the nightly ritual of checking for rats and mites burrowing in the one shared bedroom. Modernity brought about a drastic improvement in living standards, but with it came electric lights, television, and other kinds of entertainment that have thrown our sleep patterns into chaos.
Work has morphed into a twenty-four-hour fact of life, bringing its own set of standards and expectations when it comes to sleep … Sleep is ingrained in our cultural ethos as something that can be put off, dosed with coffee, or ignored. And yet maintaining a healthy sleep schedule is now thought of as one of the best forms of preventative medicine.
Reflecting on his findings, Randall marvels:
As I spent more time investigating the science of sleep, I began to understand that these strange hours of the night underpin nearly every moment of our lives.
Indeed, Dreamland goes on to explore how sleep — its mechanisms, its absence, its cultural norms — affects everyone from police officers and truck drivers to artists and entrepreneurs, permeating everything from our decision-making to our emotional intelligence.
YOUR TURN: Now it’s up to Secretary of State John Kerry to make a recommendation to President Obama on the fate of the proposed Keystone XL pipeline.
By Deena Winter | Nebraska Watchdog Updated 4:05 p.m.
LINCOLN, Neb. — The U.S. State Department’s final environmental review of the proposed Keystone XL oil pipeline mirrors earlier conclusions that the pipeline wouldn’t significantly contribute to greenhouse gas emissions.
The report reiterated last year’s draft report conclusion that the pipeline is unlikely to significantly impact the rate of extraction of oil sands or the continued demand for heavy crude oil in the U.S.
Now that the State Department’s environmental review of TransCanada’s application for a federal permit to build the pipeline is complete, a 90-day review by various federal agencies will commence to determine whether the pipeline is in the national interest, since it crosses a national border. The final decision is expected to be made by Secretary of State John Kerry and President Obama.
Canadian pipeline company TransCanada first applied for permission to build the pipeline in late 2008, but it ran into a wall of opposition in Nebraska. Nebraska pipeline fighters have taken part in and helped organize protests from the governor’s mansion to Washington, D.C., even as most of the Republican statewide public officials have pushed for approval.
REPORT: The U.S. State Department released its final environmental report Friday on the proposed Keystone XL oil pipeline that would cross America. The report found the pipeline would not significantly affect greenhouse gas emissions.
The Keystone XL pipeline would bisect Nebraska, with nearly 200 miles of pipe buried in a dozen counties. A grassroots group called Bold Nebraska has battled against a foreign company having the power to take land from landowners and possible contamination of the massive Ogallala Aquifer by oil spills.
Pipeline opponents successfully lobbied Obama to reject TransCanada’s initial application in late 2011 and forced the company to reroute the pipeline around the ecologically fragile Sandhills. That’s the route reviewed in the latest State Department reports.
The new report noted that most pipeline spills are small: of the 1,692 incidents between 2002 and 2012, 79 percent were small (up to 2,100 gallons) and just 4 percent were large spills where the oil would migrate away from the release site. It also said modeling indicates “aquifer characteristics would inhibit the spread of released oil, and impacts from a release on water quality would be limited.”
Pipeline opponents in Nebraska have questioned why TransCanada didn’t build the pipeline parallel to its existing Keystone One pipeline that crosses eastern Nebraska, away from the Sandhills and aquifer. The report noted this, but concluded it wasn’t a reasonable alternative because it wouldn’t meet Keystone’s contractual obligations to transport 100,000 barrels per day of crude oil from the Bakken oil play in North Dakota. Also, the corridor would be longer, increasing the risk of spills.
The proposed pipeline has put Obama in a difficult position where he must decide whether to live up to his promises to combat climate change or appease labor unions that generally support the pipeline and jobs it would bring. Obama said last year the pipeline should only be built if it doesn’t increase carbon emissions.
Russ Girling, TransCanada president and chief executive officer, told reporters Friday that while opponents will continue to make noise, “The science continues to show that this pipeline can and will be built safely.”
“This pipeline certainly is in the national interest of the United States,” he said.
Bold Nebraska Executive Director Jane Kleeb saw victories in the fact that the report acknowledged the revised route still crosses the Sandhills, which she called a “big shift” from earlier reports.
Environmental groups vowed to keep the pressure on Obama to reject the project.
“Our side continues to gain ground because landowners and environmentalists are now working together,” Kleeb said Friday.
Regardless of the president’s final verdict, a Nebraska lawsuit could still throw another obstacle in the path of the proposed 1,179-mile pipeline. Landowners who oppose the pipeline sued the state, challenging the constitutionality of a law that changed the pipeline route approval process, giving the governor and state environmental regulators the authority to approve or deny the revised route through Nebraska, rather than the Public Service Commission.
If the route review process is deemed unconstitutional, TransCanada would have to go back to square one with siting. A district judge hasn’t yet made a ruling after a one-day trial in September. Contact Deena Winter at deena@nebraskawatchdog.org.
By Karen B. Roberts
(Phys.org) —A team of researchers at the University of Delaware has developed a highly selective catalyst capable of electrochemically converting carbon dioxide—a greenhouse gas—to carbon monoxide with 92 percent efficiency. The carbon monoxide then can be used to develop useful chemicals.
The researchers recently reported their findings in Nature Communications.
"Converting carbon dioxide to useful chemicals in a selective and efficient way remains a major challenge in renewable and sustainable energy research," according to Feng Jiao, assistant professor of chemical and biomolecular engineering and the project's lead researcher.
Co-authors on the paper include Qi Lu, a postdoctoral fellow, and Jonathan Rosen, a graduate student, working with Jiao.
The researchers found that when they used a nano-porous silver electrocatalyst, it was 3,000 times more active than polycrystalline silver, a catalyst commonly used in converting carbon dioxide to useful chemicals.
Silver is considered a promising material for a carbon dioxide reduction catalyst because it offers high selectivity—approximately 81 percent—and because it costs much less than other precious metal catalysts. Additionally, because it is inorganic, silver remains more stable under harsh catalytic environments.
The exceptionally high activity, Jiao said, is likely due to the UD-developed electrocatalyst's extremely large and highly curved internal surface, which is approximately 150 times larger and 20 times intrinsically more active than polycrystalline silver.
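The two factors Jiao cites line up with the roughly 3,000-fold activity gain reported earlier: a back-of-the-envelope check (illustrative only) simply multiplies the surface-area factor by the intrinsic-activity factor.

```python
# Consistency check of the ~3,000x activity gain cited above:
# a ~150x larger internal surface area times ~20x higher intrinsic activity.
surface_area_factor = 150  # nano-porous vs. polycrystalline silver
intrinsic_factor = 20
overall = surface_area_factor * intrinsic_factor
print(overall)  # → 3000
```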
A UD engineering research team led by Feng Jiao has developed a highly selective catalyst capable of electrochemically converting carbon dioxide to carbon monoxide with 92 percent efficiency.
Credit: Evan Krape
Jiao explained that the active sites on the curved internal surface required a much smaller than expected voltage to overcome the activation energy barrier needed to drive the reaction.
The resulting carbon monoxide, he continued, can be used as an industry feedstock for producing synthetic fuels, while reducing industrial carbon dioxide emissions by as much as 40 percent.
To validate whether their findings were unique, the researchers compared the UD-developed nano-porous silver catalyst with other potential carbon dioxide electrocatalysts including polycrystalline silver and other silver nanostructures such as nanoparticles and nanowires.
Testing under identical conditions confirmed the nano-porous silver catalyst's significant advantages over other silver catalysts in water environments.
Reducing greenhouse carbon dioxide emissions from fossil fuel use is considered critical for human society. Over the last 20 years, electrocatalytic carbon dioxide reduction has attracted attention because of the ability to use electricity from renewable energy sources such as wind, solar and wave.
Ideally, Jiao said, one would like to convert carbon dioxide produced in power plants, refineries and petrochemical plants to fuels or other chemicals through renewable energy use.
A 2007 Intergovernmental Panel on Climate Change report stated that 19 percent of greenhouse gas emissions resulted from industry in 2004, according to the Environmental Protection Agency's website.
"Selective conversion of carbon dioxide to carbon monoxide is a promising route for clean energy but it is a technically difficult process to accomplish," said Jiao. "We're hopeful that the catalyst we've developed can pave the way toward future advances in this area." More information: "A selective and efficient electrocatalyst for carbon dioxide reduction," Qi Lu, Jonathan Rosen, Yang Zhou, Gregory S. Hutchings, Yannick C. Kimmel, Jingguang G. Chen and Feng Jiao, Nature Communications 5, Article number: 3242. DOI: 10.1038/ncomms4242. Received 10 September 2013; accepted 10 January 2014; published 30 January 2014.
Natural forms inspire McGill researchers to develop a technique to make glass less brittle
Published: 29 Jan 2014
Normally when you drop a drinking glass on the floor it shatters. But, in future, thanks to a technique developed in McGill’s Department of Mechanical Engineering, when the same thing happens the glass is likely to simply bend and become slightly deformed. That’s because Prof. François Barthelat and his team have successfully taken inspiration from the mechanics of natural structures like seashells in order to significantly increase the toughness of glass.
“Mollusk shells are made up of about 95 per cent chalk, which is very brittle in its pure form,” says Barthelat. “But nacre, or mother-of-pearl, which coats the inner shells, is made up of microscopic tablets that are a bit like miniature Lego building blocks, and it is known to be extremely strong and tough, which is why people have been studying its structure for the past twenty years.”
Previous attempts to recreate the structures of nacre have proved to be challenging, according to Barthelat. “Imagine trying to build a Lego wall with microscopic building blocks. It’s not the easiest thing in the world.” Instead, what he and his team chose to do was to study the internal ‘weak’ boundaries or edges to be found in natural materials like nacre and then use lasers to engrave networks of 3D micro-cracks in glass slides in order to create similar weak boundaries. The results were dramatic.
The researchers were able to increase the toughness of glass slides (the kind of glass rectangles that get put under microscopes) 200 times compared to non-engraved slides. By engraving networks of micro-cracks into the surface of borosilicate glass in wavy-line configurations, similar to the interlocking edges of jigsaw-puzzle pieces, they were able to stop cracks from propagating and growing larger. They then filled these micro-cracks with polyurethane, although according to Barthelat this second step is not essential, since the patterns of micro-cracks are in themselves sufficient to stop the glass from shattering.
The researchers worked with glass slides simply because they were accessible, but Barthelat believes that the process will be very easy to scale up to any size of glass sheet, since people are already engraving logos and patterns on glass panels. He and his team are excited about the work that lies ahead for them.
“What we know now is that we can toughen glass, or other materials, by using patterns of micro-cracks to guide larger cracks, and in the process absorb the energy from an impact,” says Barthelat. “We chose to work with glass because we wanted to work with the archetypal brittle material. But we plan to go on to work with ceramics and polymers in future. Observing the natural world can clearly lead to improved man-made designs.”
To read the full paper: ‘Overcoming the brittleness of glass through bio-inspiration and micro-architecture’ by F. Barthelat et al in Nature Communications: http://www.nature.com/ncomms/2014/140128/ncomms4166/full/ncomms4166.html
The research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Foundation for Innovation (CFI), with partial support for one of the authors from a McGill Engineering Doctoral Award. The authors acknowledge useful technical advice by the company Vitro. To contact the researcher directly: François Barthelat | Dept. Of Mechanical Engineering | McGill University | 514-398-6318
From Leonardo da Vinci to Einstein, and Shakespeare to Stephen King, two data analysts have ranked the most significant people in history – do the results seem right?
The Literary Top 50 … top row from left: Shakespeare, Austen and Homer;
bottom row: King, Dickinson and Shelley
People love lists, and are perhaps even more fascinated by rankings – lists organised according to some measure of value or merit. Who were the most important women in history? The best writers or most influential artists? Our least illustrious political leaders? Who's bigger: Hitler or Napoleon? Picasso or Michelangelo? Charles Dickens or Jane Austen? John, Paul, George or Ringo?
We work in the fields of data and computer science and do not answer these questions as historians might, through a principled assessment of a person's achievements. Instead, we aggregate millions of opinions. We rank historical figures just as Google ranks web pages, by integrating a diverse set of measurements of reputation into a single consensus value.
Significance is related to fame but measures something different. According to our system, forgotten US President Chester A Arthur (who we rank at 499) is more historically significant than pop star Justin Bieber (ranked 8,633), even though Arthur may have a less devoted following and certainly has lower contemporary name recognition. We believe our computational, data-centric analysis provides new ways to understand and interpret the past.
Historically significant figures leave statistical evidence of their presence behind, if one knows where to look for it. We use several data sources to fuel our ranking algorithms. Most important is Wikipedia, the web-based, collaborative, multi-lingual encyclopedia. Wikipedia is enormous, featuring well over 3m articles in its English edition alone. But we use it in a manner quite different from the typical reader, by analysing the Wiki pages of more than 800,000 people to measure quantities that should correspond to historical significance. We would expect that more significant people should have longer Wikipedia pages than those less notable because they have greater accomplishments to report. The Wiki pages of people of higher significance should attract greater readership than those of lower significance. The elite should have pages linked to by other highly significant figures, meaning they should have a high PageRank, the measure of importance used by Google to identify important web pages. We combine these variables into a single number using a statistical method called factor analysis. But we need one final correction: to fairly compare contemporary figures such as Britney Spears against, say, Aristotle, we must adjust for the fact that today's stars will fade from living memory over the next several generations. By analysing traces left in millions of scanned books, we hope to measure just how fast this decay occurs, and correct for it.
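The pipeline described above (PageRank over the link graph between biography pages, plus other signals folded into a single score) can be sketched in miniature. Everything here is invented for illustration: the toy link graph, the page lengths and view counts, and the equal-weight combination that stands in for the factor analysis the authors actually use.

```python
# Toy sketch of the significance-scoring pipeline described above.
# All names and numbers are hypothetical; this is not the authors' algorithm.
import math

links = {  # who links to whom among a handful of imaginary Wiki pages
    "Aristotle": ["Plato"],
    "Plato": ["Socrates", "Aristotle"],
    "Socrates": ["Plato"],
    "Bieber": ["Socrates"],
}

def pagerank(graph, d=0.85, iters=50):
    """Basic power-iteration PageRank with damping factor d."""
    n = len(graph)
    rank = {p: 1.0 / n for p in graph}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in graph}
        for p, outs in graph.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

def zscores(values):
    """Standardize a list of values so signals on different scales are comparable."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values)) or 1.0
    return [(v - mean) / sd for v in values]

pr = pagerank(links)
pages = sorted(links)
length = {"Aristotle": 9000, "Plato": 8000, "Socrates": 7000, "Bieber": 4000}
views = {"Aristotle": 5e5, "Plato": 4e5, "Socrates": 3e5, "Bieber": 9e5}

# Equal-weight averaging of z-scored signals stands in for factor analysis.
score = {}
for p, z1, z2, z3 in zip(pages,
                         zscores([pr[p] for p in pages]),
                         zscores([length[p] for p in pages]),
                         zscores([views[p] for p in pages])):
    score[p] = (z1 + z2 + z3) / 3

print(sorted(score, key=score.get, reverse=True))
```

In this toy graph, high readership alone (the Bieber page) is not enough to outrank figures that long, well-linked pages point to, which is the distinction between fame and significance the authors draw.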
We have naturally received strong reactions from readers of our book Who's Bigger? complaining about our computational methodology. Certain historians have complained that Wiki cannot be trusted as a source for anything. This is pretty silly. People find Wikipedia articles to be generally accurate and informative, or else they wouldn't read them. Where do you head to read up on a new topic you are interested in? We think it is clear that anyone (or anything, like our algorithms) that has read all of Wikipedia would be in an excellent position to discourse about the most important people in recorded history.
More cogent is the complaint that our results are culturally biased because we analyse only the English edition of Wikipedia. How can we fairly assess the significance of Chinese poets against US presidents? We agree that any ranking of historical significance is indeed culturally dependent and so, yes, our rankings have an Anglocentric bias. But the depth of Wikipedia is so great that there are hundreds of articles about Chinese poets in the English edition.
Others highlight a few contemporary figures that they deem us to have overrated, such as Britney Spears (689) or Barack Obama (111), and use this anecdotal evidence to sneer. But we also conduct validation procedures, and compare our rankings to public opinion polls, Hall of Fame voting records, sports statistics, and even the prices of paintings and autographs.
Anecdotal evidence is not as compelling as it might seem. British readers have complained that our algorithms don't rank British figures high enough just as strongly as Spanish readers think we are unfair to their compatriots. But our book is designed in part to generate debate.
At No 1 … Leonardo da Vinci Photograph: Bettmann/CORBIS
Art has been a uniquely human activity for more than 40,000 years. But the names of artists went unrecorded for most of this period. The identities of several prominent Greek artists, most notably Phidias, survive through contemporary written accounts and Roman copies of their work. But the notion of artists with distinct identities then faded, not to be revived until the late middle ages. The great painters of the Renaissance dominate our rankings of the most significant pre-20th century artists.
Top of the list … Vincent Van Gogh. Photograph: Rex Features
The Impressionist painters and their successors are at the top of our table of the most significant modern artists. Later movements such as surrealism (Salvador Dalí, 1,021) and abstract expressionism (Jackson Pollock, 1,013) are represented, but by relatively few artists.
At No 2 … Charles Dickens. Photograph: Getty Images
Ranking the world's greatest literary figures is a parlour game – just like the ranking of presidents or prime ministers. It exposes the biases inherent in everyone's world-view. But our ranking, it turns out, agrees with others: our top 50 contains 39 members of Daniel Burt's The Literary 100, including his 11 highest-ranked figures. With our Anglocentric source bias, we feature a larger number of British and US writers (but Jane Austen and Emily Dickinson are the only women to make it into the top 50).
We generally score popular writers such as Oscar Wilde, Lewis Carroll and Mark Twain higher than we think the literary establishment would. We expect the establishment to be surprised by our rank for horror novelist Stephen King; no other contemporary writer came close to a spot in our Literary 50. But we consider King to be the Dickens [33] of our time, characterised by immense popularity, mind-boggling productivity, and even his embrace of the serial novel.