Tuesday, December 17, 2013

More Walter Mitty than James Bond: How climate change expert posed as CIA agent for 10 years — RT USA

David Strumfels' comment:  Global warming deniers will probably make much of this, although it isn't relevant to the scientific evidence of global warming.  But it does make one wonder.

More Walter Mitty than James Bond: How climate change expert posed as CIA agent for 10 years

Published time: December 17, 2013 11:21
Edited time: December 17, 2013 17:37
John C. Beale (Photo from flickr.com/oversight)
A leading American expert on climate change and the Environmental Protection Agency’s highest-paid employee deserves to spend 30 months behind bars for lying to his bosses about being a CIA spy to avoid doing his real job, US federal prosecutors say.

Prosecutors described John Beale’s actions as “crimes of massive proportion” that were “offensive” to those who actually work for the CIA, NBC reported.

Beale pleaded guilty in September to defrauding the government of almost $1 million in salary and other benefits over a decade.

While Beale claimed to be living a James Bond-style life, in reality he was leading a Walter Mitty-style fantasy double life.

At one point, Beale in fact claimed to be urgently needed in Pakistan because the Taliban was torturing his CIA replacement, according to EPA Assistant Inspector General Patrick Sullivan, who headed the investigation into Beale’s activities.

“Due to recent events that you have probably read about, I am in Pakistan,” Beale wrote to Gina McCarthy, the EPA’s administrator, in an e-mail dated December 18, 2010. “Got the call Thurs and left Fri. Hope to be back for Christmas ….Ho, ho, ho.”

The EPA official also failed to show up at his workplace for long stretches of time, for instance an 18-month period starting in June 2011. During this period, he did “absolutely no work,” according to Beale’s lawyer.

In 2008, Beale didn’t show up at the EPA for six months, telling his boss that he was part of a special multi-agency election-year project relating to “candidate security.” He billed the government $57,000 for five trips to California that were made purely “for personal reasons,” his lawyer acknowledged. It turned out that Beale’s parents lived there.

He also claimed to be suffering from malaria that he got while serving in Vietnam – another story that turned out to be a lie.

Among his bonuses and travel expenses were first-class trips to London, where he stayed at five-star hotels and racked up thousands in bills for limos and taxis.

However, for most of the time Beale avoided work, he was at his Northern Virginia home riding bikes, doing housework and reading books, or at a vacation house on Cape Cod – all while he claimed he was at the CIA’s Northern Virginia headquarters.

“He’s never been to Langley (the CIA’s Virginia headquarters),” said Patrick Sullivan. “The CIA has no record of him ever walking through the door.”

At the same time, Beale had been the highest-paid official at the EPA, receiving $206,000 a year.
Beale’s scam was revealed in 2013 when it was noticed that he was still receiving his salary a year and a half after he retired.

When first questioned by EPA officials early in 2013 about his alleged CIA undercover work, Beale brushed them off by saying he couldn’t discuss it, according to Sullivan.

Weeks later, after being confronted again by investigators, Beale admitted to lying, but “didn’t show much remorse” and explained he acted this way to “puff up his own image.”

Through his lawyer, Beale has asked the prosecution for leniency, blaming his behavior on psychological problems.

“With the help of his therapist, Mr. Beale has come to recognize that, beyond the motive of greed, his theft and deception were animated by a highly self-destructive and dysfunctional need to engage in excessively reckless, risky behavior,” attorney John Kern wrote.

The lawyer added that the desire to manipulate people by making up grandiose lies stemmed from Beale’s “insecurities.”

The two sentencing memos and other documents from the case present new details of what has been branded one of the most audacious and creative federal frauds in history.

“I thought, ‘Oh my God, how could this possibly have happened in this agency?’ I’ve worked for the government for 35 years. I’ve never seen a situation like this,” Patrick Sullivan told NBC News.

One of the most important questions raised by the investigation was why it took the EPA administration so long to start looking into Beale’s grandiose stories.

“There’s a certain culture here at the EPA where the mission is the most important thing. They don’t think like criminal investigators. They tend to be very trusting and accepting,” Sullivan said.

It was revealed that Beale publicly retired but kept getting his salary for another year and a half, with his expense vouchers approved by a colleague whose conduct is now under investigation.

Beale is set to be sentenced in Washington on Wednesday.

Brain Neurons Subtract Images, Use Differences: ScienceDaily

ScienceDaily: Your source for the latest research news.  Original source of this article:

http://www.sciencedaily.com/releases/2013/12/131217104240.htm

Dec. 17, 2013 — Researchers have hitherto assumed that information supplied by the sense of sight was transmitted almost in its entirety from its entry point to higher brain areas, where visual sensation is generated. "It was therefore a surprise to discover that the data volumes are considerably reduced as early as in the primary visual cortex, the bottleneck leading to the cerebrum," says PD Dr Dirk Jancke from the Institute for Neural Computation at the Ruhr-Universität. "We intuitively assume that our visual system generates a continuous stream of images, just like a video camera. However, we have now demonstrated that the visual cortex suppresses redundant information and saves energy by frequently forwarding image differences."
 
Plus or minus: the brain's two coding strategies
The researchers recorded the neurons' responses to natural image sequences, for example vegetation landscapes or buildings. They created two versions of the images: a complete one and one in which they had systematically removed certain elements, specifically vertical or horizontal contours. If the time elapsing between the individual images was short, i.e. 30 milliseconds, the neurons represented complete image information. That changed when the time elapsing in the sequences was longer than 100 milliseconds. Now, the neurons represented only those elements that were new or missing, namely image differences. "When we analyse a scene, the eyes perform very fast miniature movements in order to register the fine details," explains Nora Nortmann, postgraduate student at the Institute of Cognitive Science at the University of Osnabrück and the RUB work group Optical Imaging. The information regarding those details is forwarded completely and immediately by the primary visual cortex. "If, on the other hand, the time elapsing between the gaze changes is longer, the cortex codes only those aspects in the images that have changed," continues Nora Nortmann. Thus, certain image sections stand out and interesting spots are easier to detect, as the researchers speculate.
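
The two coding strategies can be captured in a few lines of toy Python (a sketch of the behavior described above, not the study's model; the 100-millisecond switch point is taken from the text):

    import numpy as np

    # Toy sketch: fast sequences forward the full image, slow ones only the change.
    def cortical_code(prev_frame, curr_frame, interval_ms):
        if interval_ms < 100:
            return curr_frame              # complete image information
        return curr_frame - prev_frame     # only the image difference

    prev = np.random.rand(64, 64)          # toy "natural image" frame
    curr = prev.copy()
    curr[30:34, 30:34] += 0.5              # a small local change in the next frame

    full = cortical_code(prev, curr, interval_ms=30)
    diff = cortical_code(prev, curr, interval_ms=120)
    print(np.count_nonzero(full), np.count_nonzero(diff))  # 4096 vs 16

The difference code is sparse: in this toy example only the 16 changed pixels are non-zero, so far fewer values need to be forwarded, which is the data reduction the researchers describe.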

"Our brain is permanently looking into the future"
This study illustrates how activities of visual neurons are influenced by past events. "The neurons build up a short-term memory that incorporates constant input," explains Dirk Jancke. However, if something changes abruptly in the perceived image, the brain generates a kind of error message on the basis of the past images. Those signals do not reflect the current input, but the way the current input deviates from the expectations. Researchers have hitherto postulated that this so-called predictive coding only takes place in higher brain areas. "We demonstrated that the principle applies for earlier phases of cortical processing, too," concludes Jancke. "Our brain is permanently looking into the future and comparing current input with the expectations that arose based on past situations."
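
The "error message" idea can be illustrated with a minimal sketch in which the short-term memory is a simple running average of past frames (an assumption made here for illustration, not the mechanism reported in the study):

    import numpy as np

    # Toy predictive coding: forward the deviation from an expectation that is
    # built up from past input (here, an exponential running average).
    def predictive_errors(frames, decay=0.7):
        expectation = np.zeros_like(frames[0])
        errors = []
        for frame in frames:
            errors.append(frame - expectation)                        # error, not raw input
            expectation = decay * expectation + (1 - decay) * frame   # update memory
        return errors

    frames = [np.full((8, 8), 0.5) for _ in range(10)]
    frames.append(np.full((8, 8), 1.0))    # an abrupt change at the end
    errs = predictive_errors(frames)
    # The error shrinks while input matches the expectation, then spikes
    # when the input suddenly deviates from it.
    print(float(np.abs(errs[9]).mean()), float(np.abs(errs[10]).mean()))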

Observing brain activities in millisecond range
In order to monitor the dynamics of neuronal activities in the brain in the millisecond range, the scientists used voltage-dependent dyes. Those substances fluoresce when neurons receive electrical impulses and become active. Thanks to a high-resolution camera system and the subsequent computer-aided analysis, the neuronal activity can be measured across a surface of several square millimetres. The result is a temporally and spatially precise film of transmission processes within neuronal networks.
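
As a rough illustration of how such recordings become a "film" of activity, the toy code below normalizes a fluorescence image stack to relative changes. The ΔF/F normalization is a standard convention in fluorescence imaging and is assumed here; this is not the authors' pipeline:

    import numpy as np

    # Toy sketch: convert a fluorescence stack (time, height, width) into a
    # movie of relative activity changes (dF/F) against a resting baseline.
    def activity_movie(frames, baseline_frames=10):
        f0 = frames[:baseline_frames].mean(axis=0)   # resting fluorescence
        return (frames - f0) / f0                    # relative change per pixel

    stack = 1.0 + 0.01 * np.random.rand(100, 32, 32)   # toy stack near baseline
    stack[50:, 10:20, 10:20] += 0.05                   # a patch "activates" at t=50
    movie = activity_movie(stack)
    print(movie[60, 15, 15] > movie[10, 15, 15])       # True: activation shows up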

With Apologies to Christians, Jews, and Yiddish Speakers of all Stripes

Oy!  To the world, this schmuck has come,
To bother us all day!
With all his hymns and sermons,
He's worse than Uncle Hermann,
And on and on he goes,
And on and on he goes,
Until we all have a hole in the head.

Monday, December 16, 2013

A summary of the evidence that most published research is false | Simply Statistics

A summary of the evidence that most published research is false


One of the hottest topics in science comes down to two main claims:
  • Most published research is false
  • There is a reproducibility crisis in science
The first claim is often stated in a slightly different way: that most results of scientific experiments do not replicate. I recently got caught up in this debate and I frequently get asked about it.
So I thought I'd do a very brief review of the reported evidence for the two perceived crises. An important point is that all of the scientists below have made the best effort they can to tackle a fairly complicated problem, and these are early days in the study of science-wise false discovery rates. But the take home message is that there is currently no definitive evidence one way or another about whether most results are false.
  1. Paper: Why most published research findings are false. Main idea: People use hypothesis testing to determine if specific scientific discoveries are significant. This significance calculation is used as a screening mechanism in the scientific literature. Under assumptions about the way people perform these tests and report them, it is possible to construct a universe where most published findings are false positive results (see the sketch after this list for the arithmetic behind this claim). Important drawback: The paper contains no real data; it is purely based on conjecture and simulation.
  2. Paper: Drug development: Raise standards for preclinical research. Main idea: Many drugs fail when they move through the development process. Amgen scientists tried to replicate 53 high-profile basic research findings in cancer and could only replicate 6. Important drawback: This is not a scientific paper. The study design, replication attempts, selected studies, and the statistical methods to define "replicate" are not defined. No data is available or provided.
  3. Paper: An estimate of the science-wise false discovery rate and application to the top medical literature. Main idea: The paper collects P-values from published abstracts of papers in the medical literature and uses a statistical method to estimate the false discovery rate proposed in paper 1 above. Important drawback: The paper only collected data from major medical journals and the abstracts. P-values can be manipulated in many ways that could call into question the statistical results in the paper.
  4. Paper: Revised standards for statistical evidence. Main idea: The P-value cutoff of 0.05 is used by many journals to determine statistical significance. This paper proposes an alternative method for screening hypotheses based on Bayes factors. Important drawback: The paper is a theoretical and philosophical argument for simple hypothesis tests. The data analysis recalculates Bayes factors for reported t-statistics and plots the Bayes factor versus the t-test, then makes an argument for why one is better than the other.
  5. Paper: Contradicted and initially stronger effects in highly cited research. Main idea: This paper looks at studies that attempted to answer the same scientific question where the second study had a larger sample size or more robust (e.g. randomized trial) study design. Some effects reported in the second study do not match the results exactly from the first. Important drawback: The title does not match the results. 16% of studies were contradicted (meaning effect in a different direction), 16% reported a smaller effect size, 44% were replicated, and 24% were unchallenged. So 44% + 24% + 16% = 84% were not contradicted. Lack of replication is also not proof of error.
  6. Paper: Modeling the effects of subjective and objective decision making in scientific peer review. Main idea: This paper considers a theoretical model for how referees of scientific papers may behave socially. They use simulations to point out how an effect called "herding" (basically peer-mimicking) may lead to biases in the review process. Important drawback: The model makes major simplifying assumptions about human behavior and supports these conclusions entirely with simulation. No data is presented.
  7. Paper: Repeatability of published microarray gene expression analyses. Main idea: This paper attempts to collect the data used in published papers and to repeat one randomly selected analysis from the paper. For many of the papers the data was either not available or available in a format that made it difficult/impossible to repeat the analysis performed in the original paper. The types of software used were also not clear. Important drawback: This paper was written about 18 data sets in 2005-2006. This is both early in the era of reproducibility and not comprehensive in any way. This says nothing about the rate of false discoveries in the medical literature but does speak to the reproducibility of genomics experiments 10 years ago.
  8. Paper: Investigating variation in replicability: The "Many Labs" replication project (not yet published). Main idea: The idea is to take a bunch of published high-profile results and try to get multiple labs to replicate the results. They successfully replicated 10 out of 13 results, and the distribution of results you see is about what you'd expect (see embedded figure below). Important drawback: The paper isn't published yet and it only covers 13 experiments. That being said, this is by far the strongest, most comprehensive, and most reproducible analysis of replication among all the papers surveyed here.
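To make the screening argument from paper 1 concrete, here is a minimal sketch (my own illustration, not code from any of the papers) of the standard positive-predictive-value calculation: given an assumed prior probability that a tested hypothesis is true, a significance threshold, and statistical power, it computes what fraction of "significant" findings are true.

    # Positive predictive value of a "significant" finding under the
    # hypothesis-screening model. All inputs are assumed, illustrative numbers.
    def ppv(prior, alpha=0.05, power=0.8):
        true_pos = power * prior            # true effects correctly detected
        false_pos = alpha * (1 - prior)     # nulls slipping past the threshold
        return true_pos / (true_pos + false_pos)

    print(ppv(prior=0.10))               # ~0.64: most findings true
    print(ppv(prior=0.01, power=0.2))    # ~0.04: most findings false

Whether "most published research is false" then depends entirely on the assumed prior and power, which is exactly why the paper's conclusion rests on conjecture rather than data.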
I do think that the reviewed papers are important contributions because they draw attention to real concerns about the modern scientific process. Namely:
  • We need more statistical literacy
  • We need more computational literacy
  • We need to require code be published
  • We need mechanisms of peer review that deal with code
  • We need a culture that doesn't use reproducibility as a weapon
  • We need increased transparency in review and evaluation of papers
Some of these have simple fixes (more statistics courses, publishing code); some are much, much harder (changing publication/review culture).
The Many Labs project (Paper 8) points out that statistical research is proceeding in a fairly reasonable fashion. Some effects are overestimated in individual studies, some are underestimated, and some are just about right. Regardless, no single study should stand alone as the last word about an important scientific issue. It obviously won't be possible to replicate every study as intensely as those in the Many Labs project, but this is a reassuring piece of evidence that things aren't as bad as some paper titles and headlines may make it seem.
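
A quick simulation shows why that spread is expected (an illustration of sampling variation, not the Many Labs data):

    import numpy as np

    # 36 hypothetical labs each estimate the same true effect from n=50 samples;
    # the estimates scatter around the truth purely from sampling variation.
    rng = np.random.default_rng(0)
    true_effect, n, labs = 0.4, 50, 36
    estimates = [rng.normal(true_effect, 1.0, n).mean() for _ in range(labs)]
    print(min(estimates), np.mean(estimates), max(estimates))
    # Individual labs land well above or below 0.4; the average is close to it.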

Many labs data. Blue x's are original effect sizes. Other dots are effect sizes from replication experiments (http://rolfzwaan.blogspot.com/2013/11/what-can-we-learn-from-many-labs.html)
The Many Labs results suggest that the hype about the failures of science is, at the very least, premature. I think an equally important idea is that science has pretty much always worked with some number of false positive and irreplicable studies. This was beautifully described by Jared Horvath in this blog post from the Economist.  I think the take home message is that regardless of the rate of false discoveries, the scientific process has led to amazing and life-altering discoveries.

FDA examining antibacterial soaps, body washes - CNN.com

(CNN) -- Manufacturers of antibacterial hand soap and body wash will be required to prove their products are more effective than plain soap and water in preventing illness and the spread of infection, under a proposed rule announced Monday by the Food and Drug Administration.
Those manufacturers also will be required to prove their products are safe for long-term use, the agency said.
 
"Millions of Americans use antibacterial hand soap and body wash products," the agency said in a statement. "Although consumers generally view these products as effective tools to help prevent the spread of germs, there is currently no evidence that they are any more effective at preventing illness than washing with plain soap and water.
 
"Further, some data suggest that long-term exposure to certain active ingredients used in antibacterial products -- for example, triclosan (liquid soaps) and triclocarban (bar soaps) -- could pose health risks, such as bacterial resistance or hormonal effects."
About 2,000 individual products contain these ingredients, health officials said.
 
"Our goal is, if a company is making a claim that something is antibacterial and in this case promoting the concept that consumers who use these products can prevent the spread of germs, then there ought to be data behind that," said Dr. Sandra Kweder, deputy director of the Office of New Drugs in FDA's Center for Drug Evaluation and Research.
 
"We think that companies ought to have data before they make these claims."
Studies in rats have shown a decrease in thyroid hormones with long-term exposure, she said. Collecting data from humans is "very difficult" because the studies look at a long time period.
 
 
Before the proposed rule is finalized, companies will need to provide data to support their claims, or -- if they do not -- the products will need to be reformulated or relabeled to remain on the market.
"This is a good first step toward getting unsafe triclosan off the market," said Mae Wu, an attorney for the Natural Resources Defense Council. "FDA is finally taking concerns about triclosan seriously. Washing your hands with soap containing triclosan doesn't make them cleaner than using regular soap and water and can carry potential health risks.
 
The FDA first proposed removing triclosan from certain products in 1978, the council said, "but because the agency took no final action, triclosan has been found in more and more soaps."
In 2010, the council said it sued FDA to force it to issue a final rule. The new proposed rule stems from a settlement in that suit, according to the NRDC.
 
The rule is available for public comment for 180 days, with a concurrent one-year period for companies to submit new data and information, followed by a 60-day period for rebuttal comments, according to the FDA.
 
The target deadline for the public comment period is June 2014; companies will then have until December 2014 to submit data and studies. The FDA wants to finalize the rule and determine whether these products are "generally recognized as safe and effective" by September 2016.
"Antibacterial soaps and body washes are used widely and frequently by consumers in everyday home, work, school and public settings, where the risk of infection is relatively low," said Dr. Janet Woodcock, director of the FDA's Center for Drug Evaluation and Research.
 
"Due to consumers' extensive exposure to the ingredients in antibacterial soaps, we believe there should be a clearly demonstrated benefit from using antibacterial soap to balance any potential risk."
The action is part of FDA's ongoing review of antibacterial active ingredients, the agency said.
Hand sanitizers, wipes and antibacterial products used in health care settings are not affected.
Most hand sanitizers contain 60% alcohol (ethanol) and are generally recognized as safe for use when water isn't available, Kweder said. However, health officials still believe washing hands with soap and water is the best method.

Cassini reveals clues about Titan’s hydrocarbon lakes and seas | Science Recorder

P.J. O’Rourke - American Satirist, Journalist and Author | Point of Inquiry

Lie point symmetry

From Wikipedia, the free encyclopedia: https://en.wikipedia.org/wiki/Lie_point_symmetry ...