by Nancy Owano
(Phys.org) — A Friday report in Nature News revisits a well-publicized topic, the air quality in Beijing. That may seem like rather old news, but the report carries new information on the city's troubling air quality. Scientists were looking for information about the potential for pathogens and allergens in Beijing's air, and they turned to "metagenomics" as their study tool. The research team described what they were seeking in their paper, "Inhalable Microorganisms in Beijing's PM2.5 and PM10 Pollutants during a Severe Smog Event," for Environmental Science & Technology. "While the physical and chemical properties of PM pollutants have been extensively studied, much less is known about the inhalable microorganisms. Most existing data on airborne microbial communities using 16S or 18S rRNA gene sequencing to categorize bacteria or fungi into the family or genus levels do not provide information on their allergenic and pathogenic potentials. Here we employed metagenomic methods to analyze the microbial composition of Beijing's PM pollutants during a severe January smog event."
They took 14 air samples over seven consecutive days. Using genome sequencing, they found about 1,300 different microbial species in the heavy smog period of early last year. The scientists compared their results with a large gene database. What about their findings? Most of the microbes they found were benign, but a few are responsible for allergies and respiratory disease. As Nature News reported, the most abundant species identified was Geodermatophilus obscurus, a common soil bacterium. Streptococcus pneumoniae, which can cause pneumonia, was also part of the brew, along with Aspergillus fumigatus, a fungal allergen, and other bacteria typically found in faeces. "Our results," wrote the researchers, "suggested that the majority of the inhalable microorganisms were soil-associated and nonpathogenic to humans. Nevertheless, the sequences of several respiratory microbial allergens and pathogens were identified and their relative abundance appeared to have increased with increased concentrations of PM pollution."
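The tallying step described here, assigning sequenced reads to species against a reference gene database and tracking relative abundance, can be illustrated with a minimal sketch. This is not the authors' pipeline; the input file name and its two-column format (read ID, assigned species) are hypothetical stand-ins for the output of any read classifier:

```python
# Minimal sketch (not the study's actual pipeline): given reads that have
# already been classified against a reference database, tally how many reads
# map to each species and report relative abundance.
from collections import Counter

counts = Counter()
with open("classified_reads.tsv") as fh:          # hypothetical classifier output
    for line in fh:
        read_id, species = line.rstrip("\n").split("\t")
        counts[species] += 1

total = sum(counts.values())
for species, n in counts.most_common(10):
    # relative abundance = this species' reads / all classified reads
    print(f"{species}\t{n}\t{n / total:.2%}")
```

Comparing these per-sample abundance tables across smoggy and clear days is what lets the authors say a pathogen's relative abundance "appeared to have increased" with PM concentration.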
The authors suggested that their findings may provide an important reference for environmental scientists, health workers and city planners. The researchers also suggested, according to Nature News, that clinical studies explore signs of the same microbes in the sputum of patients with respiratory tract infections, to assess whether smoggier days lead to more infections.
Metagenomics, which can analyze microbial communities regardless of whether member organisms can be cultured in the laboratory, is recognized as a powerful approach. Nature Reviews describes metagenomics as "based on the genomic analysis of microbial DNA that is extracted directly from communities in environmental samples."

More information: Environmental Science & Technology paper: pubs.acs.org/doi/abs/10.1021/es4048472
“And when our children’s children look us in the eye,” said President Obama in his State of the Union address, “and ask if we did all we could to leave them a safer, more stable world, with new sources of energy, I want us to be able to say yes, we did.”
I do, too.
But the way to do this is the exact opposite of Obama’s prescribed policies of restricting fossil fuel use and giving energy welfare to producers of unreliable energy from solar and wind. Just consider the last three decades.
When I was born 33 years ago, politicians and leaders were telling my parents’ generation the same thing we’re being told today: that for the sake of future generations, we need to end our supposed addiction to fossil fuels.
Fossil fuels, they explained, were a fast-depleting resource that would inevitably run out. Fossil fuels, they explained, would inevitably make our environment dirtier and dirtier. And of course, fossil fuels would change the climate drastically and for the worse. It was the responsibility of my parents’ generation, they were told, to drastically restrict fossil fuels and live on solar and wind energy.
My generation should be eternally grateful they did the exact opposite.
Since 1980, when I was born, fossil fuel use here and around the world has only grown. In the U.S., between 1980 and 2012 the consumption of oil increased 3.9%, the consumption of natural gas increased 28.5%, and the consumption of coal increased 12.6%. (Had it not been for the economic downturn, these numbers would be higher.)
Globally, oil consumption increased 38.6%, natural gas increased 130.5%, and coal increased 106.8%. (Source: BP Statistical Review of World Energy, June 2013)
My parents’ generation was told to expect disastrous consequences.
In 1980, the “Global 2000 Report to the President” warned: “If present trends continue, . . . the world in 2000 will be more crowded, more polluted, less stable ecologically, and more vulnerable to disruption than the world we live in now. Serious stresses involving population, resources, and environment are clearly visible ahead.”
In 1989, the New Yorker’s star journalist Bill McKibben, claiming to represent a scientific consensus, prophesied:
We stand at the end of an era—the hundred years’ binge on oil, gas, and coal… The choice of doing nothing—of continuing to burn ever more oil and coal—is not a choice, in other words. It will lead us, if not straight to hell, then straight to a place with a similar temperature.
Al Gore, just before becoming Vice President, said the use of fossil fuels put us on the verge of an “ecological holocaust.”
What actually happened? Thanks in large part to our increasing use of fossil fuel energy and the technologies they power, life has gotten a lot better across the board for billions of people around the globe.
Life expectancy is way up. World life expectancy at birth was just 63 years in 1980; it has since risen to over 70. The U.S., already far above the average at 73 years in 1980, today enjoys an average life expectancy of 78. Mankind's infant mortality rate is less than half of what it used to be in 1980 (down from 80 to 35 per 1,000 live births).
Malnutrition and undernourishment have plummeted. Access to electricity, a proxy for development and health, is constantly increasing. Access to improved water sources, a necessity for basic hygiene standards and human health, has been possible for ever increasing portions of mankind, especially in poor countries (1990 = 76%, 2012 = 89%).
GDP per capita has constantly increased. The percentage of people who live on $2 a day in South Asia, a region with massive fossil fuel use, has been steadily on the decline from 87% in the early 1980s to 67% in 2010. (Source: World Bank Data.)
And then there is the statistic that the climate doomsayers will never acknowledge. Thanks to energy and technology, climate-related deaths have been plummeting for decades: you are 50X less likely to die from a climate-related cause (heat wave, cold snap, drought, flood, storm) than you would have been 80 years ago.
We are the longest-living, best-fed, healthiest, richest generation of humans on this planet and there is unending potential for further progress.
If, that is, we don’t listen to the anti-fossil-fuel “experts.”
Here’s a scary story. In the 1970s, arguably the most revered intellectuals on energy and environment were men named Amory Lovins and Paul Ehrlich. (They are still revered, which, for reasons that will become apparent within a paragraph, is a moral crime.)
In 1977, Lovins, considered an energy wunderkind for his supposedly innovative criticisms of fossil fuels and his support of solar power and reduced energy use, explained that we already used too much energy. And in particular, the kind of energy we least needed was . . . electricity: “[W]e don’t need any more big electric generating stations. We already have about twice as much electricity as we can use to advantage.”
Environmentalist legend Paul Ehrlich had famously declared: “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” Thanks in large part to a surge in energy use that led to a massive alleviation of global hunger, that prediction did not come to pass. But it might have, had we followed Ehrlich’s advice: “Except in special circumstances, all construction of power generating facilities should cease immediately, and power companies should be forbidden to encourage people to use more power. Power is much too cheap. It should certainly be made more expensive and perhaps rationed, in order to reduce its frivolous use.”
Had we listened to these two “experts,” billions of people—ultimately, every single person alive—would be worse off today. Or dead. You would certainly not be reading this on an “unnecessary” electronic device connecting to the electricity-hungry Internet.
This is what is at stake with energy. And while President Obama wasn’t extreme enough in his recent rhetoric for many environmentalist groups, he is on record as calling for outlawing 80% of fossil fuel use, which would mean not only prohibiting future power plants, but shutting down existing ones.
Whether he knows it or not, every time he attacks fossil fuels, President Obama is working to make his grandchildren’s world much, much worse.
A scientific first: team engineers photograph radiation beams in the human body through the Cherenkov effect. [Image: long-exposure photograph of Cherenkov emission and induced fluorescence from fluorescein dissolving in water during irradiation by a therapeutic LINAC beam. Credit: Adam Glaser.]

Jan 23, 2014
A scientific breakthrough may give the field of radiation oncology new tools to increase the precision and safety of radiation treatment in cancer patients by helping doctors "see" the powerful beams of a linear accelerator as they enter or exit the body.
We don't have X-ray vision. When we have an X-ray or mammogram, we cannot detect the radiation beam that passes through our bone or soft tissue, and neither can our doctor. But what if we could see X-rays? Then, when powerful X-rays were used for cancer treatment, we could watch how they hit the tumor. If we were off target, we could stop and make adjustments to improve accuracy. Pinpoint precision is important: the goal of radiation is to kill cancer cells without harming healthy tissue.
Safety in Radiation Oncology
As a way to make radiation safer and better, Dartmouth began to investigate a scientific phenomenon called the Cherenkov effect in 2011. Our scientists and engineers theorized that by using Cherenkov emissions the beam of radiation could be "visible" to the treatment team. The ability to capture an X-ray would show:
how the radiation signals travel through the body
the dose of radiation to the skin
any errors in dosage.
TV viewers may have seen images of sunken fuel rods from a nuclear power plant emitting a blue-green glow. That is the Cherenkov effect. When a particle with an electric charge travels through a non-conducting medium, like the human body or water, faster than light can travel in that medium, the medium becomes briefly polarized; as it relaxes from that polarization, it emits light. (Yes, for a brief period people glow during radiation treatment.)
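The condition behind that glow sets a concrete energy threshold. As an illustration using standard textbook relativity (my own worked example, not a calculation from the Dartmouth work), the Cherenkov condition v > c/n translates into a minimum kinetic energy for electrons in water or tissue:

```python
# Sketch: threshold kinetic energy for Cherenkov emission by an electron.
# Cherenkov condition: v > c/n, i.e. beta > 1/n, where n is the medium's
# refractive index (n ≈ 1.33 for water; soft tissue is similar).
import math

M_E_C2_MEV = 0.511  # electron rest energy in MeV

def cherenkov_threshold_ke_mev(n: float) -> float:
    """Kinetic energy (MeV) above which an electron emits Cherenkov light."""
    beta_threshold = 1.0 / n                              # minimum v as fraction of c
    gamma = 1.0 / math.sqrt(1.0 - beta_threshold ** 2)    # Lorentz factor at threshold
    return (gamma - 1.0) * M_E_C2_MEV

print(f"water (n = 1.33): {cherenkov_threshold_ke_mev(1.33):.3f} MeV")
# ≈ 0.26 MeV. A therapeutic LINAC beam delivers electrons and photons at
# several MeV, far above threshold, which is why irradiated tissue glows.
```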
The Cherenkov effect in the laboratory
As a first step, engineers at the Thayer School of Engineering at Dartmouth modified a regular camera with a night vision scope to take photos of radiation beams as they passed through water. What appeared on the photos is the Cherenkov effect, a luminescent blue glow. (An engineering student, Adam Glaser, explains how it works in this video.)
To refine the approach for use in radiation treatment, scientists used a mannequin of the human body. They measured and studied the results to refine their ability to capture the luminescence, experimenting with beam size, position, and wavelength.
Cherenkov imaging used for first time in treatment setting
With the clinical aspects refined, Geisel School of Medicine researchers photographed luminescence during the routine radiation treatment of a dog with an oral tumor.
This was the first time Cherenkov studies came out of the laboratory and into a treatment setting. The scientists coined the approach Cherenkoscopy. As they anticipated, during the session they were able to see detailed information about the treatment field and the dose. The results were published in the November 2013 issue of the Journal of Biomedical Optics.
"This first observation in the dog proved that we could image a radiation beam during treatment in real time," said David Gladstone, ScD, chief of Clinical Physics at Norris Cotton Cancer Center. "The images verified the shape of the beam as well as intended motion of the treatment machine."
First image of Cherenkov emissions during treatment of human breast
Now ready to use the technology with a human patient, the team prepared to view radiation as it entered the body of a female breast cancer patient undergoing radiation in July 2013.
"Breast cancer is suited for this because the imaging visualizes the superficial dose of radiation to the skin," said Lesley A. Jarvis, MD, radiation oncologist, Norris Cotton Cancer Center. Skin reactions, similar to sunburn, are a common and bothersome side effect during breast radiation. "By imaging and quantitating the surface dose in a way that has never been done before," said Jarvis, "we hope to learn more about the physical factors contributing to this skin reaction."
By seeing the effect of radiation on the body, radiation oncologists and physicists can make adjustments to avoid side effects to the skin. Most radiation patients undergo between 8 and 20 sessions. The Cherenkov images of the breast cancer patient showed a hot spot in her underarm, which physicians and physicists could work to prevent in future sessions.
"The actual images show that we are treating the exact correct location, with the appropriate beam modifications and with the precise dose of radiation," said Jarvis.
Clinical use of Cherenkov emissions proves successful
This trial showed that Cherenkov imaging is feasible for real-time use during radiation treatment. "We have learned the imaging is easy to incorporate into the patient's treatment, adding only minimal time to the treatments," said Jarvis.
"The time needed to acquire this information is negligible, even with our experimental, non-integrated system," said Gladstone. "Cherenkov images were found to contain much richer information than anticipated, specifically, we did not expect to visualize internal blood vessels."
Mapping blood vessels opens up opportunities for radiation oncology to use a person's internal anatomy to confirm precise locations. Skin tattoos on patients determine a preliminary alignment that is verified with X-rays, which show bony anatomy or implanted markers. Cherenkov imaging allows technicians to visualize soft tissue and internal vasculature as well.
A possible safety net for radiation treatment
By integrating Cherenkov imaging into routine clinical care, Gladstone says the technology could be used to verify that the proper dose is being delivered to patients, helping to avoid misadministration of radiation therapy, a rare, but dangerous occurrence.
Twelve patients are participating in a pilot study, which is almost complete. The research team plans to publish the results in a peer-reviewed journal. The Cherenkov effect project team includes Lesley Jarvis, MD, assistant professor of Medicine, Geisel School of Medicine; Brian Pogue, PhD, professor of Engineering, Thayer School, professor of Physics & Astronomy, Dartmouth College, and professor of Surgery, Geisel School of Medicine; David J. Gladstone, ScD, DABMP, associate professor of Medicine, Geisel School of Medicine; Adam Glaser, engineering student; Rongxiao Zhang, physics student; and Whitney Hitchcock, medical student.
David Strumfels: Watch Jonathan Eisen wring his hands over this. What? Scientists are above cognitive biases, unlike ordinary mortals (who require them to function)? And they -- the NYT -- present no evidence! I have bad news for him. Cognitive bias is built into how the human brain functions. High intelligence and years of education and research do not make it just go away -- far from it: by creating the illusion of being above it, the Eisens and every other scientist in the world (including me) are that much more susceptible to it. The truth is, it should be Eisen presenting evidence that scientists really are that far superior. I point to his outrage as a sign that he probably (albeit unconsciously) knows he hasn't a leg to stand on.
It is the cross-checking, competition, and self-correctiveness of science that allows it to make progress, not the perfect lack of biases and other human frailties among the people who do it. Eisen, and every scientist, should know this, know it by heart one might say, if at least not by mind.
End Strumfels
SCIENCE is in crisis, just when we need it most. Two years ago, C. Glenn Begley and Lee M. Ellis reported in Nature that they were able to replicate only six out of 53 “landmark” cancer studies. Scientists now worry that many published scientific results are simply not true. The natural sciences often offer themselves as a model to other disciplines. But this time science might look for help to the humanities, and to literary criticism in particular.
A major root of the crisis is selective use of data. Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions. Psychologists call this “confirmation bias”: We seek out information that confirms what we already believe. “We each begin probably with a little bias,” as Jane Austen writes in “Persuasion,” “and upon that bias build every circumstance in favor of it.”
Despite the popular belief that anything goes in literary criticism, the field has real standards of scholarly validity. In his 1967 book “Validity in Interpretation,” E. D. Hirsch writes that “an interpretive hypothesis” about a poem “is ultimately a probability judgment that is supported by evidence.” This is akin to the statistical approach used in the sciences; Mr. Hirsch was strongly influenced by John Maynard Keynes’s “A Treatise on Probability.”
However, Mr. Hirsch also finds that “every interpreter labors under the handicap of an inevitable circularity: All his internal evidence tends to support his hypothesis because much of it was constituted by his hypothesis.” This is essentially the problem faced by science today. According to Mr. Begley and Mr. Ellis’s report in Nature, some of the nonreproducible “landmark” studies inspired hundreds of new studies that tried to extend the original result without verifying if the original result was true. A claim is not likely to be disproved by an experiment that takes that claim as its starting point. Mr. Hirsch warns about falling “victim to the self-confirmability of interpretations.”
It’s a danger the humanities have long been aware of. In his 1960 book “Truth and Method,” the influential German philosopher Hans-Georg Gadamer argues that an interpreter of a text must first question “the validity of the fore-meanings dwelling within him.” However, “this kind of sensitivity involves neither ‘neutrality’ with respect to content nor the extinction of one’s self.”
Rather, “the important thing is to be aware of one’s own bias.” To deal with the problem of selective use of data, the scientific community must become self-aware and realize that it has a problem. In literary criticism, the question of how one’s arguments are influenced by one’s prejudgments has been a central methodological issue for decades.
Sometimes prejudgments are hard to resist. In December 2010, for example, NASA-funded researchers, perhaps eager to generate public excitement for new forms of life, reported the existence of a bacterium that used arsenic instead of phosphorus in its DNA. Later, this study was found to have major errors. Even if such influences don’t affect one’s research results, we should at least be able to admit that they are possible.
Austen might say that researchers should emulate Mr. Darcy in “Pride and Prejudice,” who submits, “I will venture to say that my investigations and decisions are not usually influenced by my hopes and fears.” At least Mr. Darcy acknowledges the possibility that his personal feelings might influence his investigations.
But it would be wrong to say that the ideal scholar is somehow unbiased or dispassionate. In my freshman physics class at Caltech, David Goodstein, who later became vice provost of the university, showed us Robert Millikan’s lab notebooks for his famed 1909 oil drop experiment with Harvey Fletcher, which first established the electric charge of the electron.
The notebooks showed many fits and starts and many “results” that were obviously wrong, but as they progressed, the results got cleaner, and Millikan could not help but include comments such as “Best yet — Beauty — Publish.” In other words, Millikan excluded the data that seemed erroneous and included data that he liked, embracing his own confirmation bias.
Mr. Goodstein’s point was that the textbook “scientific method” of dispassionately testing a hypothesis is not how science really works. We often have a clear idea of what we want the results to be before we run an experiment. We freshman physics students found this a bit hard to take. What Mr. Goodstein was trying to teach us was that science as a lived, human process is different from our preconception of it. He was trying to give us a glimpse of self-understanding, a moment of self-doubt.
When I began to read the novels of Jane Austen, I became convinced that Austen, by placing sophisticated characters in challenging, complex situations, was trying to explicitly analyze how people acted strategically. There was no fancy name for this kind of analysis in Austen’s time, but today we call it game theory. I believe that Austen anticipated the main ideas of game theory by more than a century.
As a game theorist myself, how do I know I am not imposing my own way of thinking on Austen? I present lots of evidence to back up my claim, but I cannot deny my own preconceptions and training. As Mr. Gadamer writes, a researcher “cannot separate in advance the productive prejudices that enable understanding from the prejudices that hinder it.” We all bring different preconceptions to our inquiries, whether about Austen or the electron, and these preconceptions can spur as well as blind us.
Perhaps because of its self-awareness about what Austen would call the “whims and caprices” of human reasoning, the field of psychology has been most aggressive in dealing with doubts about the validity of its research. In an open email in September 2012 to fellow psychologists, the Nobel laureate Daniel Kahneman suggests that “to deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating.”
Everyone, including natural scientists, social scientists and humanists, could use a little more self-awareness. Understanding science as fundamentally a human process might be necessary to save science itself.
Michael Suk-Young Chwe, a political scientist at U.C.L.A., is the author of “Jane Austen, Game Theorist.”
Researchers at Princeton University have used nanotechnology to develop a nanostructured mesh that increases the efficiency of organic solar cells nearly threefold.
The scientists published their findings in the journal Optics Express. The team was able to reduce reflectivity and capture more of the light that isn’t reflected. The resulting solar cell is thinner, less reflective, and combines sandwiched plastic and metal with the nanomesh. The so-called “Plasmonic Cavity with Subwavelength Hole array,” or “PlaCSH,” reduces light loss and reflects only 4% of direct sunlight, leading to a 52% increase in efficiency compared to conventional organic solar cells.
Benefits of using the nanomesh
PlaCSH is capable of capturing large amounts of sunlight even when the sunlight is dispersed on cloudy days, which translates to an increase of 81% in efficiency in indirect lighting conditions. PlaCSH is up to 175% more efficient than traditional solar cells.
The mesh is only 30 nanometers thick and the holes in it are only 175 nm in diameter. This replaces the thicker, traditional top layer that is made out of indium-tin-oxide (ITO). Since this mesh is smaller than the wavelength of the light it’s trying to collect, it is able to exploit the bizarre way that light works in subwavelength structures, allowing the cell to capture light once it enters the holes in the mesh instead of letting much of it reflect away.
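As a rough illustration of what "smaller than the wavelength" means here (my own back-of-the-envelope check, not a calculation from the paper), the 175 nm holes sit below the entire visible band of roughly 400-700 nm:

```python
# Sanity check: the mesh holes are subwavelength for all visible light,
# the regime in which light can be trapped rather than reflected.
HOLE_DIAMETER_NM = 175          # hole diameter reported for the PlaCSH mesh
VISIBLE_BAND_NM = (400, 700)    # approximate visible wavelength range

for wavelength in range(VISIBLE_BAND_NM[0], VISIBLE_BAND_NM[1] + 1, 50):
    ratio = HOLE_DIAMETER_NM / wavelength
    print(f"wavelength {wavelength} nm: hole/wavelength = {ratio:.2f} "
          f"(subwavelength: {ratio < 1})")
```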
The scientists believe that the cells can be made cost effectively and that this method should work for silicon and gallium arsenide solar cells as well.
By modifying the molecular structure of a polymer used in solar cells, an international team of researchers has increased solar cell efficiency by more than 30 percent.
Researchers from North Carolina State University and the Chinese Academy of Sciences have found an easy way to modify the molecular structure of a polymer commonly used in solar cells. Their modification can increase solar cell efficiency by more than 30 percent.
Polymer-based solar cells have two domains, consisting of an electron acceptor and an electron donor material. Excitons, bound electron-hole pairs, are the energy-carrying particles created when the cell absorbs light. In order to be harnessed effectively as an energy source, excitons must be able to travel quickly to the interface of the donor and acceptor domains and retain as much of the light’s energy as possible.
One way to increase solar cell efficiency is to adjust the offset between the highest occupied molecular orbital (HOMO) of the donor polymer and the lowest unoccupied molecular orbital (LUMO) of the acceptor, so that the exciton can be harvested with minimal energy loss. One of the most common ways to accomplish this is by adding a fluorine atom to the polymer’s molecular backbone, a difficult, multi-step process that can increase the solar cell’s performance but carries considerable material fabrication costs.
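For orientation, device physicists often estimate the attainable open-circuit voltage from this orbital offset with an empirical rule of thumb, the Scharber relation. This is an approximation supplied here for context, not a formula from this paper:

```latex
% Empirical Scharber relation for bulk-heterojunction solar cells:
% the open-circuit voltage tracks the donor-HOMO / acceptor-LUMO offset,
% less an empirically observed ~0.3 V loss.
V_{\mathrm{oc}} \approx \frac{1}{e}\left( \left|E_{\mathrm{HOMO}}^{\mathrm{donor}}\right| - \left|E_{\mathrm{LUMO}}^{\mathrm{acceptor}}\right| \right) - 0.3\,\mathrm{V}
```

Lowering the donor polymer's HOMO widens this offset, which is consistent with the higher open-circuit voltage reported for PBT-OP below.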
A team of chemists led by Jianhui Hou from the Chinese Academy of Sciences created a polymer known as PBT-OP from two commercially available monomers and one easily synthesized monomer. Wei Ma, a post-doctoral physics researcher from NC State and corresponding author on a paper describing the research, conducted the X-ray analysis of the polymer’s structure and the donor:acceptor morphology.
PBT-OP was not only easier to make than other commonly used polymers, but a simple manipulation of its chemical structure gave it a lower HOMO level than had been seen in other polymers with the same molecular backbone. PBT-OP showed an open circuit voltage (the voltage available from a solar cell) value of 0.78 volts, a 36 percent increase over the ~ 0.6 volt average from similar polymers.
According to NC State physicist and co-author Harald Ade, the team’s approach has several advantages. “The possible drawback in changing the molecular structure of these materials is that you may enhance one aspect of the solar cell but inadvertently create unintended consequences in devices that defeat the initial intent,” he says. “In this case, we have found a chemically easy way to change the electronic structure and enhance device efficiency by capturing a larger fraction of the light’s energy, without changing the material’s ability to absorb, create and transport energy.”
The researchers’ findings appear in Advanced Materials. The research was funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences and the Chinese Ministry of Science and Technology. Dr. Maojie Zhang synthesized the polymers; Xia Guo, Shaoqing Zhang and Lijun Huo from the Chinese Academy of Sciences also contributed to the work.
Publication: Maojie Zhang, et al., “An Easy and Effective Method to Modulate Molecular Energy Level of the Polymer Based on Benzodithiophene for the Application in Polymer Solar Cells,” Advanced Materials, 2013; doi: 10.1002/adma.201304631
Source: Tracey Peake, North Carolina State University
Image: Maojie Zhang, et al. doi: 10.1002/adma.201304631