
Monday, January 27, 2014

Global Warming Alarmists Caught Doctoring '97-Percent Consensus' Claims

                   
Peter Ferrara


Global warming graphic (Photo credit: Wikipedia)
Global warming alarmists and their allies in the liberal media have been caught doctoring the results of a widely cited paper asserting there is a 97-percent scientific consensus regarding human-caused global warming. After taking a closer look at the paper, investigative journalists report the authors’ claims of a 97-percent consensus relied on the authors misclassifying the papers of some of the world’s most prominent global warming skeptics. At the same time, the authors deliberately presented a meaningless survey question so they could twist the responses to fit their own preconceived global warming alarmism.

Global warming alarmist John Cook, founder of the misleadingly named blog site Skeptical Science, published a paper with several other global warming alarmists claiming they reviewed nearly 12,000 abstracts of studies published in the peer-reviewed climate literature. Cook reported that he and his colleagues found that 97 percent of the papers that expressed a position on human-caused global warming “endorsed the consensus position that humans are causing global warming.”
As is the case with other ‘surveys’ alleging an overwhelming scientific consensus on global warming, the question surveyed had absolutely nothing to do with the issues of contention between global warming alarmists and global warming skeptics. The question Cook and his alarmist colleagues surveyed was simply whether humans have caused some global warming. The question is meaningless regarding the global warming debate because most skeptics as well as most alarmists believe humans have caused some global warming. The issue of contention dividing alarmists and skeptics is whether humans are causing global warming of such negative severity as to constitute a crisis demanding concerted action.

Either through idiocy, ignorance, or both, global warming alarmists and the liberal media have been reporting that the Cook study shows a 97 percent consensus that humans are causing a global warming crisis. However, that was clearly not the question surveyed.

Investigative journalists at Popular Technology looked into precisely which papers were classified within Cook’s asserted 97 percent. The investigative journalists found Cook and his colleagues strikingly classified papers by such prominent, vigorous skeptics as Willie Soon, Craig Idso, Nicola Scafetta, Nir Shaviv, Nils-Axel Morner and Alan Carlin as supporting the 97-percent consensus.
Cook and his colleagues, for example, classified a peer-reviewed paper by scientist Craig Idso as explicitly supporting the ‘consensus’ position on global warming “without minimizing” the asserted severity of global warming. When Popular Technology asked Idso whether this was an accurate characterization of his paper, Idso responded, “That is not an accurate representation of my paper.

The papers examined how the rise in atmospheric CO2 could be inducing a phase advance in the spring portion of the atmosphere’s seasonal CO2 cycle. Other literature had previously claimed a measured advance was due to rising temperatures, but we showed that it was quite likely the rise in atmospheric CO2 itself was responsible for the lion’s share of the change. It would be incorrect to claim that our paper was an endorsement of CO2-induced global warming.”

When Popular Technology asked physicist Nicola Scafetta whether Cook and his colleagues accurately classified one of his peer-reviewed papers as supporting the ‘consensus’ position, Scafetta similarly criticized the Skeptical Science classification.

“Cook et al. (2013) is based on a straw man argument because it does not correctly define the IPCC AGW theory, which is NOT that human emissions have contributed 50%+ of the global warming since 1900 but that almost 90-100% of the observed global warming was induced by human emission,” Scafetta responded. “What my papers say is that the IPCC [United Nations Intergovernmental Panel on Climate Change] view is erroneous because about 40-70% of the global warming observed from 1900 to 2000 was induced by the sun.”

“What it is observed right now is utter dishonesty by the IPCC advocates. … They are gradually engaging into a metamorphosis process to save face. … And in this way they will get the credit that they do not merit, and continue in defaming critics like me that actually demonstrated such a fact since 2005/2006,” Scafetta added.

Astrophysicist Nir Shaviv similarly objected to Cook and colleagues claiming he explicitly supported the ‘consensus’ position about human-induced global warming. Asked if Cook and colleagues accurately represented his paper, Shaviv responded, “Nope… it is not an accurate representation. The paper shows that if cosmic rays are included in empirical climate sensitivity analyses, then one finds that different time scales consistently give a low climate sensitivity. i.e., it supports the idea that cosmic rays affect the climate and that climate sensitivity is low. This means that part of the 20th century [warming] should be attributed to the increased solar activity and that 21st century warming under a business as usual scenario should be low (about 1°C).”

“I couldn’t write these things more explicitly in the paper because of the refereeing, however, you don’t have to be a genius to reach these conclusions from the paper,” Shaviv added.

To manufacture their misleading asserted consensus, Cook and his colleagues also misclassified various papers as taking “no position” on human-caused global warming. When Cook and his colleagues determined a paper took no position on the issue, they simply pretended, for the purpose of their 97-percent claim, that the paper did not exist.
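To see how much that exclusion matters, here is a rough back-of-the-envelope sketch in Python. The counts below are assumed, round numbers chosen only to mirror the structure of the classification; they are not the study’s exact figures.

```python
# Illustrative sketch (not the authors' code): how dropping "no position"
# abstracts from the denominator changes the headline percentage.
# All counts below are assumed, round numbers for illustration only.

abstracts = {
    "endorse": 3900,      # rated as endorsing human-caused warming
    "reject": 80,         # rated as rejecting or minimizing it
    "uncertain": 40,      # rated as uncertain
    "no_position": 7900,  # rated as taking no position
}

total = sum(abstracts.values())

# Share of ALL abstracts that endorse the position
share_of_all = abstracts["endorse"] / total

# Share computed the way the 97-percent figure is derived:
# "no position" abstracts are removed from the denominator
expressing = total - abstracts["no_position"]
share_of_expressing = abstracts["endorse"] / expressing

print(f"Endorsing, as a share of all abstracts:        {share_of_all:.1%}")
print(f"Endorsing, as a share of position-takers only: {share_of_expressing:.1%}")
```

With these assumed counts, roughly a third of all abstracts endorse the position, yet the position-takers-only calculation produces a figure of about 97 percent.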

Morner, a sea level scientist, told Popular Technology that Cook classifying one of his papers as “no position” was “Certainly not correct and certainly misleading. The paper is strongly against AGW [anthropogenic global warming], and documents its absence in the sea level observational facts. Also, it invalidates the mode of sea level handling by the IPCC.”

Soon, an astrophysicist, similarly objected to Cook classifying his paper as “no position.”

“I am sure that this rating of no position on AGW by CO2 is nowhere accurate nor correct,” said Soon.

“I hope my scientific views and conclusions are clear to anyone that will spend time reading our papers. Cook et al. (2013) is not the study to read if you want to find out about what we say and conclude in our own scientific works,” Soon emphasized.

Viewing the Cook paper in the best possible light, Cook and colleagues can perhaps claim a small amount of wiggle room in their classifications because the explicit wording of the question they analyzed is simply whether humans have caused some global warming. By restricting the question to such a minimalist, largely irrelevant question in the global warming debate and then demanding an explicit, unsolicited refutation of the assertion in order to classify a paper as a ‘consensus’ contrarian, Cook and colleagues misleadingly induce people to believe 97 percent of publishing scientists believe in a global warming crisis when that is simply not the case.

Misleading the public about consensus opinion regarding global warming, of course, is precisely what the Cook paper sought to accomplish. This is a tried and true ruse perfected by global warming alarmists. Global warming alarmists use their own biased, subjective judgment to misclassify published papers according to criteria that are largely irrelevant to the central issues in the global warming debate. Then, by carefully parsing the language of their survey questions and their published results, the alarmists encourage the media and fellow global warming alarmists to cite these biased, subjective, totally irrelevant surveys as conclusive evidence for the lie that nearly all scientists believe humans are creating a global warming crisis.

These biased, misleading, and totally irrelevant “surveys” form the best “evidence” global warming alarmists can muster in the global warming debate. And this truly shows how embarrassingly feeble their alarmist theory really is.

Sunday, January 26, 2014

Climate change needs a new kind of scientist

Jan 17 2014

Scientific discoveries of recent decades have generated a wealth of knowledge on forests and climate change spanning many different sectors and disciplines. Sustainable development, poverty eradication, the rights of indigenous and local communities to land and resources, conservation of biodiversity, governance, water management, pollution (and all the policies and economic factors related to these sectors) are just some of the issues that scientists studying the relationship between forests and climate change must consider.

Such knowledge generation has also laid the foundation for a broader mission to assist in developing integrated solutions. This is not something that we, as climate scientists, have been traditionally trained to do.

It’s clear that developing integrated solutions to such complex problems will require a new kind of climate scientist. A scientist who can think across biophysical and social disciplines. A scientist who can work across scales to engage all members of society in their research. A scientist who can understand the policy implications of their work.

One group that seems particularly enthusiastic to take on such a role is young researchers. In this article, I’ll go through a few examples where young researchers have led the “out of the box” thinking needed to tackle climate change problems.

Interdisciplinary science is at the heart of CIFOR’s Global Comparative Study on REDD+, which aims to inform policy makers, practitioners and donors about what works in reducing emissions from deforestation and forest degradation, and enhancement of forest carbon stocks (REDD+) in tropical countries.

In this study, a diverse group of foresters, biologists, sociologists, economists, political scientists, and anthropologists works together to understand how REDD+ can be implemented effectively, efficiently and equitably, and how it can promote both social and environmental co-benefits.

I help coordinate a component of the study that focuses on measuring the impacts of subnational REDD+ initiatives. Through this part of the study, we have collected data in 170 villages with over 4,000 families in 6 countries: Brazil, Peru, Cameroon, Tanzania, Vietnam and Indonesia.
From 2010 to 2012, we hired nearly 80 undergraduate, Masters and PhD students in Latin America to collect baseline data on livelihoods and land-use at multiple sites across the Amazon. These young scientists have been critical in helping us share project knowledge in different ways.

KNOWLEDGE SHARING
As I was finishing my PhD six years ago, I joined forces with a bunch of graduate students who were thinking about how, within our confined academic research environment, we could share knowledge in different ways.

The “knowledge exchange pyramid” (Fig 1) outlines how graduate students can exchange knowledge with local stakeholders during research.

There are three levels of knowledge exchange: (1) information sharing; (2) skill building; and (3) knowledge generation. The black circle represents the researchers while the white circle represents local stakeholders — communities, practitioners, policymakers.
Duchelle, A.E, K. Biedenweg, C. Lucas, A. Virapongse, J. Radachowsky, D. Wojcik, M. Londres, W.L. Bartels, D. Alvira, K.A. Kainer. 2009. Graduate students and knowledge exchange with local stakeholders: Possibilities and preparation. Biotropica 41(5): 578-585.

At the base of the pyramid (the simplest form of knowledge exchange) is information sharing – a primarily one-way transmission of ideas to stakeholders using presentations, brochures and posters. Our experience showed that these tools are particularly appropriate when time is limited, specific facts need to be shared, and information is not controversial.

If your goal is to change attitudes, you need to ensure that stakeholders are given a more active role in interpreting information, whether through community forums, presentations that allow discussion time, or short workshops.

An example of information sharing is returning research results to local stakeholders. One year after the teams collected baseline data for the Global Comparative Study on REDD+, we went back to all of the research sites and shared the results with the local communities and with the organizations (NGOs and governments) that implement the REDD+ activities.

While many researchers do go back to the places where they collected their data to share results, it has bothered me for years when you go to communities where you know there have been multiple research groups and they say to you, ‘you guys are the first group that come back with the information’.


Sharing results with local stakeholders is an incredibly important learning process and it makes good scientific sense. It allows community members to interpret the information and verify survey data before final analysis. We have found that when a community says ‘that doesn’t make sense, why would it be that way?’, it helps us rethink our interpretation of the data. All it takes is some time, some creativity, and a bit of money.

In the Global Comparative Study on REDD+, I envisaged us returning the results in a pretty conventional way. What blew me away was the innovation of our young researchers – they used art, games and even theatre to make the science interesting and relevant to local communities.
At the second level of the pyramid (a slightly more complex form of knowledge exchange) is skill building, which encourages stakeholders to use knowledge to develop new skills. This is often a response to local demands for skills such as data collection and analysis, grant writing, or manuscript preparation.
While we were conducting research in Ucayali, Peru, forest communities requested training in Global Positioning Systems (GPS) to help them locate and record specific trees in timber harvest areas required by their forest management plans. They also wanted to learn how to measure timber so they could determine a reasonable purchase price (and ensure they weren’t being swindled when it came time to sell the tree trunks or boards).

Skill building activities require more time, resources, and preparation than information sharing but can be incredibly important for building trust with communities. They can also be fun (friendly soccer games are a common feature of fieldwork in Latin America!).
At the highest level of the pyramid is knowledge generation, which includes communities, practitioners or policy makers as partners in different aspects of the research process (one example is “action research”). Together with the graduate researcher, they can create the research questions, implement the research, and analyze and disseminate the results.
While this is the most innovative type of knowledge exchange, it is also the hardest for young researchers to get involved in. Graduate students and their research partners will need to invest a lot of time and energy as well as obtain institutional support.

I know a Brazilian researcher who, before and during her master’s at the University of Florida’s Tropical Conservation and Development Program, developed long-term action research on the ecology of locally important tree species with one remote community in the Amazon estuary. She involved community members in all steps of the research process, from setting research priorities to collecting data and training.

After assessing the research findings, community members took several actions. They more than doubled the number of local volunteers collecting data (and diversified to include youth, women, and community leaders), they presented research findings at community meetings and they shared those findings with nearby communities struggling to improve their livelihoods in a sustainable way.
Engaging local stakeholders in research is possible at any level. Young researchers are the ones who can help us think of new and innovative ways to do this.

MAKE USE OF THIS INFORMATION
If you are in academia, encourage your students to embark on this kind of knowledge exchange in their research. Plug them into your networks and create courses to help broaden their skill sets.
If you are a practitioner, welcome students and young researchers into your work. Be willing to develop research with them, be willing to learn from students and help them become better professionals.

If you are a donor, support the research that shows real and genuine knowledge exchange with relevant stakeholders.

If you are a student or young researcher, recognize the unique moment you are in right now in your career and expand your skill set. Some of the conventional academic pressures weigh less heavily on you at the early stages of your career, so use the opportunity you have now to engage and innovate.

This article was first published on ForestsClimateChange.org

Saturday, January 25, 2014

What Happens When You Can't Get an Abortion?

Thu Jun. 13, 2013 7:59 AM GMT


What happens to women who want abortions but can't get them? Abortion clinics all have "gestational deadlines" and will turn away women who are further into pregnancy than their rules allow, and this gave Diana Greene Foster, a professor of obstetrics and gynecology at UC San Francisco, an idea for a study. Instead of comparing women who have abortions to women who elect to carry their pregnancy to term, she compared a group of women who all wanted to have an abortion but didn't all get one:

When she looked at more objective measures of mental health over time — rates of depression and anxiety — she also found no correlation between having an abortion and increased symptoms....Turnaways did [] suffer from higher levels of anxiety, but six months out, there were no appreciable differences between the two groups.
Where the turnaways had more significant negative outcomes was in their physical health and economic stability....Women in the turnaway group suffered more ill effects, including higher rates of hypertension and chronic pelvic pain....Even “later abortions are significantly safer than childbirth,” she says.
....Economically, the results are even more striking. Adjusting for any previous differences between the two groups, women denied abortion were three times as likely to end up below the federal poverty line two years later. Having a child is expensive, and many mothers have trouble holding down a job while caring for an infant. Had the turnaways not had access to public assistance for women with newborns, Foster says, they would have experienced greater hardship.
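The "three times as likely" claim is a relative risk: the rate of poverty in the turnaway group divided by the rate in the group that obtained abortions. A minimal sketch of that calculation, using invented counts rather than the study's data:

```python
# Hypothetical illustration of the "three times as likely" comparison.
# The group sizes and poverty counts below are invented for the example;
# they are not figures from Foster's study.

def relative_risk(events_a, n_a, events_b, n_b):
    """Risk (event rate) in group A divided by risk in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Group A: women turned away (carried the pregnancy to term)
# Group B: women who received the abortion they sought
rr = relative_risk(events_a=60, n_a=200, events_b=20, n_b=200)
print(f"Relative risk of falling below the poverty line: {rr:.1f}x")  # 3.0x
```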
The whole story is worth a read. In the long run, women who want abortions but don't get them adjust to their new lives. They aren't unhappy at becoming mothers. But there's not much question that their lives suffer, and as more and more states put more and more roadblocks in the way of abortion providers, that suffering will increase—with no mitigation from increased social services, since the red states that oppose abortion also generally don't think highly of providing much in the way of services to mothers in poverty.

Future solar cells may be made of wood


Jan 23, 2014 by Lisa Zyga

(a) A schematic of the hierarchical structure of a tree in which each structure is broken down to the level of the elementary fibrils. (b) Regular paper with microfibers has a microporous structure that causes light scattering. (c) The new …

(Phys.org) —A new kind of paper that is made of wood fibers yet is 96% transparent could be a revolutionary material for next-generation solar cells. Coming from plants, the paper is inexpensive and more environmentally friendly than the plastic substrates often used in solar cells. However, its most important advantage is that it overcomes the tradeoff between optical transparency and optical haze that burdens most materials.

A team of researchers from the University of Maryland, the South China University of Technology, and the University of Nebraska-Lincoln has published a paper on the new material in a recent issue of Nano Letters.

As the researchers explain, solar cell performance benefits when materials possess both a high optical transparency (to allow for good light transmission) and a high optical haze (to increase the scattering, and therefore the absorption, of the transmitted light within the material). But so far, materials with high transparency values (of about 90%) have had very low optical haze values (of less than 20%).

The new wood-based paper has an ultrahigh transparency of 96% and ultrahigh optical haze of 60%, which is the highest optical haze value reported among transparent substrates.
The main reason for this good performance in both areas is that the paper has a nanoporous rather than microporous structure. Regular paper is made of wood fibers and has low optical transparency due to the microcavities that exist within the porous structure that cause light scattering. In the new paper, these micropores are eliminated in order to improve the optical transparency. To do this, the researchers used a treatment called TEMPO to weaken the hydrogen bonds between the microfibers that make up the wood fibers, which causes the wood fibers to swell up and collapse into a dense, tightly packed structure containing nanopores rather than micropores.
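Optical haze is conventionally reported as the share of transmitted light that is scattered (diffuse) rather than passing straight through, so both headline figures come from simple transmittance measurements. A minimal sketch of that arithmetic, using made-up measurement values chosen only to be consistent with the 96% and 60% figures quoted above:

```python
# Minimal sketch, assuming the conventional definitions:
#   total transmittance  T = transmitted light / incident light
#   optical haze         H = diffuse (scattered) transmittance / total transmittance
# The measurement values below are invented for illustration.

def transmittance(transmitted_power, incident_power):
    return transmitted_power / incident_power

def optical_haze(diffuse_transmitted, total_transmitted):
    return diffuse_transmitted / total_transmitted

incident = 100.0            # arbitrary units of incident light at 550 nm
total_transmitted = 96.0    # light that makes it through the substrate
diffuse_transmitted = 57.6  # the portion of that light scattered off-axis

T = transmittance(total_transmitted, incident)             # 0.96 -> "96% transparency"
H = optical_haze(diffuse_transmitted, total_transmitted)   # 0.60 -> "60% haze"

print(f"Transparency: {T:.0%}, optical haze: {H:.0%}")
```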
Optical transmission haze versus transmittance for different substrates at 550 nm. Glass and PET are in the green area, which are suitable for displays due to their low haze and high transparency; the transparent paper developed in this work …
"The papers are made of ribbon-like materials that can stack well without microsize cavities for high transmittance, but with nanopores for high optical haze," coauthor Liangbing Hu, Assistant Professor in the Department of Materials Science and Engineering at the University of Maryland, told Phys.org.
To test the paper for solar cell applications, the researchers coated the wood fiber paper onto the surface of a silicon slab. Experiments showed that the light-harvesting device can collect light with a 10% increase in efficiency. Due to the simplicity of this laminating process, solar cells that have already been installed and are in use could benefit similarly from the additional paper layer.

Although there are other papers made of nanofibers, this paper demonstrates a much higher optical transmittance while using much less energy and time for processing. With these advantages, the highly transparent, high-haze paper could offer an inexpensive way to enhance the efficiency of solar panels, solar roofs, and solar windows.

"We would like to work with solar cell and display companies to evaluate the applications," Hu said. "We are also interested in the manufacturing of such paper."
More information: Zhiqiang Fang, et al. "Novel Nanostructured Paper with Ultrahigh Transparency and Ultrahigh Haze for Solar Cells." Nano Letters. DOI: 10.1021/nl404101p

Read more at: http://phys.org/news/2014-01-future-solar-cells-wood.html#jCp

Origins of Massive Star Explosions May Be Found

by Miriam Kramer, SPACE.com Staff Writer   |   March 07, 2013 02:01pm ET
http://www.space.com/20110-supernova-star-explosions-origins.html?cmpid=514639_20140125_17554014

Robotics and automation, employment, and aging Baby Boomers

By Kenneth Anderson  

Tech companies likely would have done much of this anyway.  There’s a convergence of interest in these technologies from many different directions.  Still, the aging of the Baby Boomer population bears noting as an indirect economic driver for new machines, to make aging Baby Boomer care and maintenance more possible and affordable. 

Mobile, connected, supplied, and independent within one’s own home for as long as possible. Well, here you have all four: Google’s self-driving cars, Apple’s evolving iPhones as personal controller of your assistive devices, Amazon’s home delivery, and the in-home assistive robots many companies are trying to design. 

The tech companies, it’s only partly an exaggeration to say, are firms whose business plans are based on old people.  It’s the services they want and need as they (“we,” let me be honest) grow older, striving to maintain autonomy and independence for as long as possible.  I might think (I do think) it’s unbelievably cool and not-old at all if one day I were to get a delivery by Amazon drone.  But actually, after the (thrilling) first time, it’s just because I’m old, don’t really feel like leaving the house, and anyway am too infirm to do so.  At least, not without a neuro-robotic “weak-limb support suit” for my legs, so I don’t lose my balance in the street and fall, and a Google car to get me to … the doctor’s office.

“Indirect” economic driver, however, because the Baby Boomers would not be paying out of their own pockets directly for many of the technologies that might be most important to them in retirement; government would wind up paying.  The concomitant uncertainties, political and otherwise, that would affect what amount to “procurement” decisions within Medicare and such programs, make decisions to go into expensive and technologically thus far unproven research and development (particularly with respect to the most ambitious artificial intelligence robotics) economically uncertain, contingent propositions.

But now a familiar debate has broken out – around the employment effects that are likely to come from these new technologies. (The Economist magazine summarizes the debate and comments in this week’s edition.)  On the one hand, innovative, disruptive technological change is nothing new.  The result has always been short-to-medium term creative destruction, sometimes including the destruction of whole occupational categories – followed by longer term job growth enabled by the new technologies or the increased wealth they create. 

In any case, over the long run, increases in the standard of living can only come through innovation and technological advance that allows greater economic output to be extracted from the same or smaller labor input.  In a world of many elderly, retired Baby Boomers and a historically smaller worker base bearing much of the cost of elderly living and health care, that has to matter a great deal.  Ben Miller and Robert D. Atkinson make the positive case for automation and robotics along these lines in a September 2013 report from the Information Technology and Innovation Foundation, Are Robots Taking Our Jobs, or Making Them?  

On the other hand, maybe this time is different.  That’s what MIT professors Erik Brynjolfsson and Andrew McAfee argued in their 2012 book, Race Against the Machine, and reprise in a more nuanced way in their new book, released last week, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Machines.   Maybe “brilliant machines” will replace many workers – permanently.  Even without making sci-fi leaps of imagination for the capabilities of the short-to-medium term “artificial intelligence,” the emerging machines (as their designers intend them) aim to be not just “disembodied” AI computers, but instead genuine “robots,” possessed of mechanical capabilities for movement and mobility, and sensors, both of which are rapidly advancing technologies – not just AI computational abilities.  Perhaps this combination – the AI robot able to enter and interact in ordinary human social spaces – does signify a break from our past experiences of innovation as (eventual) producer of net new jobs over time.  Maybe significant new categories of work don’t emerge this time around – because as soon as one does, someone (or some thing) breaks it down into an algorithm, and then comes up with mechanical devices and sensors capable of executing the task – intelligently.  As a Babbage column in the Economist put it several years ago, in this scenario, capital becomes labor.  

What makes these machines special, then, is not just AI but, instead, AI embedded in a machine that can sense, decide, and then act, with mechanical movement and mobility, in the physical world – in the human social world.  In that case, economically, changes could be unprecedented, with implications for both manual labor and even for the “knowledge workers” previously insulated from both industrial automation and global outsourcing.  Unlike past technological revolutions, the result might not be whole new job categories gradually emerging (or at least ones that employ large numbers of workers). 

It’s striking, however, that the argument over employment doesn’t quite directly concern the elderly.  It doesn’t rationally engage aging Baby Boomers as consumers wanting these technologies and their services, or at least not Baby Boomers as both consumers and workers, with an interested foot in each camp.  They’ll already be retired and so their own employment won’t be their issue.  The employment issue in which they will indirectly have an interest, however, is the cost of human labor to care for them as elderly and infirm; that care is strongly labor intensive, in both unskilled and highly skilled (nursing) labor.  The difficulties of machine interactions in ordinary human environments – versus, say, the highly controlled factory or assembly line floor – have meant that this sector has not so far been widely automated.  But that is liable to significant change if these technologies move forward – and if they are successful, they will have to become part of the calculation of the cost of care of the elderly, given that the technological shifts are not going to be cheap, while the cost of both skilled and unskilled elder-care labor is only going to rise under current scenarios.  

Which is to say, no matter where you stand on the automation-robotics-employment debate, if tech’s business plan is significantly about the growing ranks of the elderly as the target market, then to that extent, the employment debate is less important to the elderly as regards their own employment, but (at least if the technologies are successful) squarely in the cross-hairs of public policy on the costs of care for the elderly.   It’s much more complicated than that, of course, and this is not to suggest that these are the only or even most significant factors.  The business model for robotics is not simply about retirees; the market for self-driving cars, for example, if they come to work as hoped by Google and others, will be far wider than that; eventually it becomes about everyone.   The point for now is only that the elderly form an important, though far from dominant, part of the markets for automation and robotics.

My own view is that Miller and Atkinson are mostly right about long-run job generation.  The “this time is different” view seems to me overstated – as is so often the case with AI, as Gary Marcus has noted.    One should never rule out paradigm-shifting advances, but so far as I can tell, the conceptual pathways as laid down for AI today are not going to lead – even over the long run – to what sci-fi has already given us in imagination.  Siri is not “Her” – as even Siri herself noted in a recent Tweet.  For the future we can foresee, in the short-to-medium term, we’ll be more likely to have machines that (as ever) extend, but do not replace, human capabilities; in other cases, human capabilities will extend the machines.  The foreseeable future, I suspect, remains the process (long underway) of tag-teaming humans and machines.  Which is to say (mostly), same as it ever was.    

The significant new job categories (I speculate) run toward skilled manual labor of a new kind. The “maker movement”; new US manufacturing trends toward highly automated, but still human-run and staffed factories; new high technology, but still human-controlled, energy exploitation such as fracking; complex and crucial robotic machines under the supervision of nurses whose whole new skill sets put them in a new job category we might call nursing technologist – these are the areas of work that point the way forward.

Quasi-manual labor – but highly skilled, highly value-added, and value-added because it services the machines.  Or as economist Tyler Cowen put it in his 2013 Average Is Over, the next generation of workers will be defined by their relationships with the machine:  You can do very well if you are able to use technology to leverage your own productivity.  You can also do very well if you are able to use your human skills to leverage the productivity of the machine.  If you can’t do either, though, you might gradually fall into a welfare-supported underclass – because the world of work, even apparently merely “manual” labor, is largely out of your reach. 

Annals of the Presidency

On and off the road with Barack Obama.

January 27, 2014

http://www.newyorker.com/reporting/2014/01/27/140127fa_fact_remnick?utm_source=tny&utm_campaign=generalsocial&utm_medium=twitter

Obama’s Presidency is on the clock. Hard as it has been to pass legislation, the coming year is a marker, the final interval before the fight for succession becomes politically all-consuming. Photograph by Pari Dukovic.
                                         
On the Sunday afternoon before Thanksgiving, Barack Obama sat in the office cabin of Air Force One wearing a look of heavy-lidded annoyance. The Affordable Care Act, his signature domestic achievement and, for all its limitations, the most ambitious social legislation since the Great Society, half a century ago, was in jeopardy. His approval rating was down to forty per cent—lower than George W. Bush’s in December of 2005, when Bush admitted that the decision to invade Iraq had been based on intelligence that “turned out to be wrong.” Also, Obama said thickly, “I’ve got a fat lip.”

That morning, while playing basketball at F.B.I. headquarters, Obama went up for a rebound and came down empty-handed; he got, instead, the sort of humbling reserved for middle-aged men who stubbornly refuse the transition to the elliptical machine and Gentle Healing Yoga. This had happened before. In 2010, after taking a self-described “shellacking” in the midterm elections, Obama caught an elbow in the mouth while playing ball at Fort McNair. He wound up with a dozen stitches. The culprit then was one Reynaldo Decerega, a member of the Congressional Hispanic Caucus Institute. Decerega wasn’t invited to play again, though Obama sent him a photograph inscribed “For Rey, the only guy that ever hit the President and didn’t get arrested. Barack.”

This time, the injury was slighter and no assailant was named—“I think it was the ball,” Obama said—but the President needed little assistance in divining the metaphor in this latest insult to his person. The pundits were declaring 2013 the worst year of his Presidency. The Republicans had been sniping at Obamacare since its passage, nearly four years earlier, and HealthCare.gov, a Web site that was undertested and overmatched, was a gift to them. There were other beribboned boxes under the tree: Edward Snowden’s revelations about the National Security Agency; the failure to get anything passed on gun control or immigration reform; the unseemly waffling over whether the Egyptian coup was a coup; the solidifying wisdom in Washington that the President was “disengaged,” allergic to the forensic and seductive arts of political persuasion. The congressional Republicans quashed nearly all legislation as a matter of principle and shut down the government for sixteen days, before relenting out of sheer tactical confusion and embarrassment—and yet it was the President’s miseries that dominated the year-end summations.

Obama worried his lip with his tongue and the tip of his index finger. He sighed, slumping in his chair. The night before, Iran had agreed to freeze its nuclear program for six months. A final pact, if one could be arrived at, would end the prospect of a military strike on Iran’s nuclear facilities and the hell that could follow: terror attacks, proxy battles, regional war—take your pick. An agreement could even help normalize relations between the United States and Iran for the first time since the Islamic Revolution, in 1979. Obama put the odds of a final accord at less than even, but, still, how was this not good news?
The answer had arrived with breakfast. The Saudis, the Israelis, and the Republican leadership made their opposition known on the Sunday-morning shows and through diplomatic channels. Benjamin Netanyahu, the Israeli Prime Minister, called the agreement a “historic mistake.” Even a putative ally like New York Senator Chuck Schumer could go on “Meet the Press” and, fearing no retribution from the White House, hint that he might help bollix up the deal. Obama hadn’t tuned in. “I don’t watch Sunday-morning shows,” he said. “That’s been a well-established rule.” Instead, he went out to play ball.

Usually, Obama spends Sundays with his family. Now he was headed for a three-day fund-raising trip to Seattle, San Francisco, and Los Angeles, rattling the cup in one preposterous mansion after another. The prospect was dispiriting. Obama had already run his last race, and the chances that the Democratic Party will win back the House of Representatives in the 2014 midterm elections are slight. The Democrats could, in fact, lose the Senate.

For an important trip abroad, Air Force One is crowded with advisers, military aides, Secret Service people, support staff, the press pool. This trip was smaller, and I was along for the ride, sitting in a guest cabin with a couple of aides and a staffer who was tasked with keeping watch over a dark suit bag with a tag reading “The President.”

Obama spent his flight time in the private quarters in the nose of the plane, in his office compartment, or in a conference room. At one point on the trip from Andrews Air Force Base to Seattle, I was invited up front for a conversation. Obama was sitting at his desk watching the Miami Dolphins–Carolina Panthers game. Slender as a switch, he wore a white shirt and dark slacks; a flight jacket was slung over his high-backed leather chair. As we talked, mainly about the Middle East, his eyes wandered to the game. Reports of multiple concussions and retired players with early-onset dementia had been in the news all year, and so, before I left, I asked if he didn’t feel at all ambivalent about following the sport. He didn’t.

“I would not let my son play pro football,” he conceded. “But, I mean, you wrote a lot about boxing, right? We’re sort of in the same realm.”

The Miami defense was taking on a Keystone Kops quality, and Obama, who had lost hope on a Bears contest, was starting to lose interest in the Dolphins. “At this point, there’s a little bit of caveat emptor,” he went on. “These guys, they know what they’re doing. They know what they’re buying into. It is no longer a secret. It’s sort of the feeling I have about smokers, you know?”

Note: this is only one of eighteen pages.

Musings of Neil deGrasse Tyson

Posted on January 24, 2014 at 1:00 pm
Neil deGrasse Tyson giving the Vulcan salute. Photo Credit: Business Insider.

“For me, I am driven by two main philosophies: know more today about the world than I knew yesterday and lessen the suffering of others. You’d be surprised how far that gets you.”

If you don’t know who Neil deGrasse Tyson is, I would have to guess you have been living under a rock for the past ten years or so. For those of you that do know of him, how much do you really know about the man who is sometimes referred to as the Carl Sagan of the 21st century?


HUMBLE BEGINNINGS:



After seeing the stars during a visit to Pennsylvania, Tyson became entranced with astronomy at the tender age of nine, and he continued to study astronomy throughout his teens. He even earned a foothold and slight fame in the astronomy community by giving lectures on the subject at the young age of fifteen.  Unfortunately for Carl Sagan, who wished to recruit Tyson to study at Cornell University, Tyson decided instead to attend Harvard University, where he eventually earned his Bachelor of Arts degree in physics in 1980.  Three years later, in 1983, he went on to receive his Master of Arts degree in astronomy at the University of Texas at Austin. In the following years, while under the supervision of Professor Michael Rich at Columbia University, he received both a Master of Philosophy and a Doctor of Philosophy degree in astrophysics.


EARLY CAREER:



NASA Distinguished Public Service Medal awarded to Tyson in 2004. Photo Credit: NASA

Tyson started writing science books early in his career; his first book, from 1989, was titled ‘Merlin’s Tour of the Universe.’ It is a science fiction book in which the main character, Merlin – a fictional character from the Andromeda Galaxy – befriends many of Earth’s most famous scientists. This was only the start for Tyson: from 1989 to 2012 he wrote 12 separate science books, mostly revolving around the subjects of astronomy and astrophysics. In 1998 he even wrote a companion novel to ‘Merlin’s Tour of the Universe,’ titled ‘Just Visiting This Planet: Merlin Answers More Questions about Everything under the Sun, Moon, and Stars.’  In 1995, Tyson began writing a column for Natural History magazine, simply called “Universe” (many of the archived articles can still be viewed here).  While working on his thesis, Tyson observed at the Cerro Tololo Inter-American Observatory, a ground-based optical telescope in the Coquimbo Region of Chile; during that time he collaborated with Calán/Tololo Survey astronomers on their work on Type Ia supernovae.

It was these papers that formed a portion of the discovery papers relating the use of Type Ia supernovae to measuring distances, which in turn led to a refined value of the Hubble constant and, later on, the discovery of dark energy. Because of this, in 2001, President George W. Bush appointed Tyson (along with the second man on the Moon, Buzz Aldrin) to serve on the Commission on the Future of the United States Aerospace Industry. In 2004, he was appointed by the president to serve on the President’s Commission on Implementation of United States Space Exploration Policy; shortly afterwards, he was awarded NASA’s ‘Distinguished Public Service Medal,’ the highest honor given to a civilian by NASA.


THE HAYDEN PLANETARIUM:



Hayden Planetarium as seen at night time. Photo Credit: Wiki Commons user Alfred Gracombe.

Neil deGrasse Tyson started working at the Hayden Planetarium in 1996, not long after receiving his doctorate. Unfortunately, I cannot find anything that specifies exactly when he was appointed as the first Frederick P. Rose Director at the planetarium, a position that Tyson still holds today.


THE PLUTO FILES:



pluto-planet
As director of the planetarium, Tyson decided to toss out traditional thinking in order to keep Pluto from being referred to as the ninth planet.  Tyson explained that he wanted to get away from simply counting the planets in the solar system. Instead, he wanted to look at the similarities between the terrestrial planets, the similarities between the gas giants, and the similarities of Pluto with other objects found in the Kuiper belt.  Tyson has stated on many television shows, such as The Colbert Report and The Daily Show, that his decision to stand against the traditional definition of a planet resulted in a flood of hate mail, much of it from children.  In 2006 the International Astronomical Union confirmed Tyson’s assessment by changing the classification of Pluto from a fully-fledged planet to a dwarf planet.


THE PLANETARY SOCIETY:



We’re soul mates, Bill! Photo Credit: StarTalk Radio.

I was hoping to highlight Tyson’s tenure with The Planetary Society; however, after spending a few days scouring the internet for information, I honestly haven’t found much useful info. I couldn’t even find a date for when he joined, though I imagine it was fairly early.  The Planetary Society, which was founded in 1980 by Carl Sagan, Bruce Murray and Louis Friedman, abides by its objective to advance the exploration of space and to continue the search for extraterrestrial life.  Tyson was the Vice President of the Planetary Society for three years, until 2005, when he passed the torch to his friend and confidant, Bill Nye. Still, Tyson has continued to use his prominence with The Planetary Society, in conjunction with his ability to describe scientific processes in a fairly easy-to-understand manner, to help educate the public on certain science-related issues.


 PENNY4NASA ADVOCATE:



Take a penny, Leave a penny, Give NASA a damn penny! Photo Credit: PENNY4NASA.
Similarly, it should be no surprise to anyone that Neil is an outspoken advocate of the Penny4NASA campaign, which aims to increase the budget of NASA, thereby expanding its operations.  A lot of people believe that NASA’s budget is something like 5 to 10 cents on the dollar; however, this is horribly incorrect, as Tyson frequently points out.  Currently, the budget for NASA is half a penny on the dollar, a far cry from what people thought it was (this figure has since changed slightly, as the budget was increased by 800 million dollars for 2014).  In March 2012, Tyson testified before the United States Senate Committee on Commerce, Science, and Transportation that: “Right now, NASA’s annual budget is half a penny on your tax dollar. For twice that—a penny on a dollar—we can transform the country from a sullen, dispirited nation, weary of economic struggle, to one where it has reclaimed its 20th century birthright to dream of tomorrow.” A full written transcription of the testimony can be read here.
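The “penny on the dollar” framing is just NASA’s budget divided by total federal spending. A quick back-of-the-envelope sketch, using approximate, assumed figures rather than official budget numbers:

```python
# Back-of-the-envelope check of the "half a penny on the dollar" framing.
# Budget figures are approximate assumptions for illustration, not official numbers.

nasa_budget = 17.6e9       # ~ $17.6 billion (assumed, roughly FY2014)
federal_outlays = 3.5e12   # ~ $3.5 trillion in total federal spending (assumed)

cents_per_tax_dollar = nasa_budget / federal_outlays * 100
print(f"NASA receives about {cents_per_tax_dollar:.1f} cents of every federal dollar")
# -> about 0.5 cents, i.e. "half a penny on the dollar"

# Tyson's proposal: double it to a full penny on the dollar
doubled = 0.01 * federal_outlays
print(f"A full penny on the dollar would be about ${doubled / 1e9:.0f} billion per year")
```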


OTHER TASTY TIDBITS:



Credit: U.S. Air Force photo/Denise Gould

Most people don’t know this, but Tyson, a New Yorker, was an eye witness to the September 11th terrorist attacks on the World Trade Center. Living near what would come to be known world-wide as Ground Zero, Tyson watched the devastation as if unfolded.  His footage was included in a 2008 documentary, entitled ‘102 Minutes That Changed America’, Tyson also wrote a widely circulated letter in response to the tragedy.


  •  Tyson has also collaborated with PETA to create a public service announcement, stating “You don’t have to be a rocket scientist to know that kindness is a virtue.” At the same time, he did an interview with PETA that discussed the concept of intelligence in both humans and other animals, the failure of humans to communicate meaningfully with other animals, and the need for empathy in humanity.

  • Tyson is a wine aficionado to the extent that his wine collection has been featured in two different magazines, the first time was in May 2000 in Wine Spectator, and again in 2005, in The World of Fine Wine.


In March, the much beloved book and television series, Carl Sagan’s ‘Cosmos,’ will see a continuation on Fox. Neil deGrasse Tyson will narrate the reboot. Seth MacFarlane (the brains behind ‘Family Guy’) and Ann Druyan (Carl Sagan’s widow) are also intimately involved in the production of the series. Stay tuned!

Natural Selection Questions Answered Using Engineering Plus Evolutionary Analyses

Image Caption: Glossophaga soricina, a nectarivorous bat, feeding on the flowers of a banana plant. Nectar feeding bats comprised one of three evolutionary optima for mechanical advantage among New World Leaf-nosed bats. Credit: Beth Clare, Queen Mary University of London    
   
University of Massachusetts at Amherst
January 24, 2014
Introducing a new approach that combines evolutionary and engineering analyses to identify the targets of natural selection, researchers report in the current issue of Evolution that the new tool opens a way of discovering evidence for selection for biomechanical function in very diverse organisms and of reconstructing skull shapes in long-extinct ancestral species.
 
Evolutionary biologist Elizabeth Dumont and mechanical engineer Ian Grosse at the University of Massachusetts Amherst, with evolutionary biologist Liliana Dávalos of Stony Brook University and support from the National Science Foundation, studied the evolutionary histories of the adaptive radiation of New World leaf-nosed bats based on their dietary niches.
 
As the authors point out, adaptive radiations, that is, the explosive evolution of species into new ecological niches, have generated much of the biological diversity seen in the world today. “Natural selection is the driving force behind adaptation to new niches, but it can be difficult to identify which features are the targets of selection. This is especially the case when selection was important in the distant past of a group whose living members now occupy very different niches,” they note.
 
They set out to tackle this by examining almost 200 species of New World leaf-nosed bats that exploit many different food niches: Insects, frogs, lizards, fruit, nectar and even blood. The bats’ skulls of today reflect this dietary diversity. Species with long, narrow snouts eat nectar, while short-faced bats have exceptionally short, wide palates for eating hard fruits. Species that eat other foods have snouts shaped somewhere in between.
 
Dumont explains further, “We knew diet was associated with those things, but there was no evidence that natural selection acted to make those changes in the skull. The engineering model allowed us to identify the biomechanical functions that natural selection worked on. Some form or function helps an animal to perform better in its environment, but it can be hard to demonstrate exactly what that form or function is. We studied the engineering results using the evolutionary tree, which is a very cool new thing about this work.”
 
She and colleagues built an engineering model of a bat skull that can morph into the shape of any species, and used it to create skulls with all possible combinations of snout length and width. Then they ran engineering analyses on all the models to assess their structural strength and mechanical advantage, a measure of how efficiently and how hard bats can bite.
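A jaw’s mechanical advantage is often approximated as a simple lever ratio: the in-lever (jaw joint to muscle attachment) divided by the out-lever (jaw joint to bite point). The sketch below uses that simplified approximation with invented measurements; it is not the authors’ morphing finite-element model, only an illustration of why a longer snout trades away bite force:

```python
# Simplified lever approximation of jaw mechanical advantage (MA):
#   MA = in-lever / out-lever
# A longer snout pushes the bite point farther from the jaw joint,
# lengthening the out-lever and lowering MA (a weaker but faster bite).
# The measurements below are invented for illustration only.

def mechanical_advantage(in_lever_mm, out_lever_mm):
    return in_lever_mm / out_lever_mm

snouts = {
    "short-faced fruit bat (short, wide snout)": {"in_lever": 4.0, "out_lever": 8.0},
    "nectar feeder (long, narrow snout)":        {"in_lever": 4.0, "out_lever": 16.0},
}

for name, lever in snouts.items():
    ma = mechanical_advantage(lever["in_lever"], lever["out_lever"])
    print(f"{name}: MA = {ma:.2f}")

# The short-faced configuration yields the higher MA (harder bite), while the
# long-snouted configuration yields the lower MA, matching the trade-off
# described in the text.
```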
 
Analyzing the engineering results over hundreds of evolutionary trees of New World leaf-nosed bats revealed three optimal snout shapes favored by natural selection, they report. One was the long, narrow snout of nectar feeders, the second was the extremely short and wide snout of short-faced bats, and the third optimum included all other species. Overall, selection for mechanical advantage was more important in determining the optima than was selection for structural strength, they add.
 
“Thanks to this new approach,” Dumont says, “we were able to answer our original question about natural selection in the evolution of these bats. It favored the highest mechanical advantage in short-faced bats, which gives them the high bite forces needed to pierce through the hardest figs. Nectar feeders have very low mechanical advantage, which is a trade-off for having long, narrow snouts that fit into the flowers in which they find nectar.”
Source: University of Massachusetts at Amherst      

Friday, January 24, 2014

Antarctic sea ice hit 35-year record high Saturday

 
Antarctic sea ice extent Sunday compared to 1979-2000 normal (NSIDC)
Antarctic sea ice extent on September 22 compared to 1981-2010 median depicted by orange curve (NSIDC)
Antarctic sea ice has grown to a record large extent for a second straight year, baffling scientists seeking to understand why this ice is expanding rather than shrinking in a warming world.
On Saturday, the ice extent reached 19.51 million square kilometers, according to data posted on the National Snow and Ice Data Center Web site.  That number bested record high levels set earlier this month and in 2012 (of 19.48 million square kilometers). Records date back to October 1978.

The increasing ice is especially perplexing since the water beneath the ice has warmed, not cooled.
“The overwhelming evidence is that the Southern Ocean is warming,” said Jinlun Zhang, a University of Washington scientist, studying Antarctic ice. “Why would sea ice be increasing? Although the rate of increase is small, it is a puzzle to scientists.”

In a new study in the Journal of Climate, Zhang finds both strengthening and converging winds around the South Pole can explain 80 percent of the increase in ice volume which has been observed.

“The polar vortex that swirls around the South Pole is not just stronger than it was when satellite records began in the 1970s, it has more convergence, meaning it shoves the sea ice together to cause ridging,” the study’s press release explains. “Stronger winds also drive ice faster, which leads to still more deformation and ridging. This creates thicker, longer-lasting ice, while exposing surrounding water and thin ice to the blistering cold winds that cause more ice growth.”

But no one seems to have a conclusive answer as to why winds are behaving this way.
“I haven’t seen a clear explanation yet of why the winds have gotten stronger,” Zhang told Michael Lemonick of Climate Central.

Some point to stratospheric ozone depletion, but a new study published in the Journal of Climate notes that computer models simulate declining – not increasing – Antarctic sea ice in recent decades due to this phenomenon (aka the ozone “hole”).

“This modeled Antarctic sea ice decrease in the last three decades is at odds with observations, which show a small yet statistically significant increase in sea ice extent,” says the study, led by Colorado State University atmospheric scientist Elizabeth Barnes.

A recent study by Lorenzo Polvani and Karen Smith of Columbia University says the model-defying sea ice increase may just reflect natural variability.

If the increase in ice is due to natural variability, Zhang says, warming from manmade greenhouse gases should eventually overcome it and cause the ice to begin retreating.

“If the warming continues, at some point the trend will reverse,” Zhang said.

However, a conclusion of the Barnes study is that the recovery of the stratospheric ozone layer – now underway – may slow/delay Antarctic warming and ice melt.

Ultimately, it’s apparent the relationship between ozone depletion, climate warming from greenhouse gases, natural variability, and how Antarctic ice responds is all very complicated. In sharp contrast, in the Arctic, there seems to be a relatively straightforward relationship between temperature and ice extent.

Related: Arctic sea ice has *not* recovered, in 7 visuals
Thus, in the Antarctic, we shouldn’t necessarily expect to witness the kind of steep decline in ice that has occurred in the Arctic.

“…the seeming paradox of Antarctic ice increasing while Arctic ice is decreasing is really no paradox at all,” explains Climate Central’s Lemonick. “The Arctic is an ocean surrounded by land, while the Antarctic is land surrounded by ocean. In the Arctic, moreover, you’ve got sea ice decreasing in the summer; at the opposite pole, you’ve got sea ice increasing in the winter. It’s not just an apples-and-oranges comparison: it’s more like comparing apple pie with orange juice.”

Genetically-modified purple tomatoes heading for shops

Purple tomatoes: the new tomatoes could improve the nutritional value of everyday foods

 
The prospect of genetically modified purple tomatoes reaching the shelves has come a step closer.

Their dark pigment is intended to give tomatoes the same potential health benefits as fruit such as blueberries.

Developed in Britain, the tomatoes are now in large-scale production in Canada, with the first 1,200 litres of purple tomato juice ready for shipping.

The pigment, known as anthocyanin, is an antioxidant which studies on animals show could help fight cancer.

Scientists say the new tomatoes could improve the nutritional value of everything from ketchup to pizza topping.

The tomatoes were developed at the John Innes Centre in Norwich where Prof Cathie Martin hopes the first delivery of large quantities of juice will allow researchers to investigate its potential.

"With these purple tomatoes you can get the same compounds that are present in blueberries and cranberries that give them their health benefits - but you can apply them to foods that people actually eat in significant amounts and are reasonably affordable," she said.


The tomatoes are part of a new generation of GM plants designed to appeal to consumers - the first types were aimed specifically at farmers as new tools in agriculture.

The purple pigment is the result of the transfer of a gene from a snapdragon plant - the modification triggers a process within the tomato plant allowing the anthocyanin to develop.

Although the invention is British, Prof Martin says European Union restrictions on GM encouraged her to look abroad to develop the technology.

Canadian regulations are seen as more supportive of GM and that led to a deal with an Ontario company, New Energy Farms, which is now producing enough purple tomatoes in a 465 square metre (5,000sq ft) greenhouse to make 2,000 litres (440 gallons) of juice.

According to Prof Martin, the Canadian system is "very enlightened".

"They look at the trait not the technology and that should be a way we start changing our thinking - asking if what you're doing is safe and beneficial, not 'Is it GM and therefore we're going to reject it completely'.

"It is frustrating that we've had to go to Canada to do a lot of the growing and the processing and I hope this will serve as a vanguard product where people can have access to something that is GM but has benefits for them."

The first 1,200 litres are due to be shipped to Norwich shortly - and because all the seeds will have been removed, there is no genetic material to risk any contamination.

Camelina plants
Scientists at Rothamsted hope to produce a GM plant that provides "fish oil"

The aim is to use the juice in research to conduct a wide range of tests including examining whether the anthocyanin has positive effects on humans. Earlier studies show benefits as an anti-inflammatory and in slowing cancers in mice.

A key question is whether a GM product that may have health benefits will influence public opinion.

A major survey across the European Union in 2010 found opponents outnumbered supporters by roughly three to one. The last approval for a GM food crop in the EU came in 1998.

Prof Martin hopes that the purple tomato juice will have a good chance of being approved for sale to consumers in North America in as little as two years' time.

She and other plant researchers in the UK hope that GM will come to be seen in a more positive light.

Legacy of distrust

Earlier on Friday, scientists at Rothamsted Research in Hertfordshire announced that they were seeking permission for field trials for a GM plant that could produce a "fish oil".

In a parallel project, they have been cultivating a type of GM wheat that is designed to release a pheromone that deters aphids.

Professor Nick Pidgeon, an environmental psychologist at Cardiff University, has run opinion polls and focus groups on GM and other technologies.

He says that a legacy of distrust, including from the time of mad cow disease, will cause lasting concern.

"Highlighting benefits will make a difference but it's only one part of the story which is quite complex.

"People will still be concerned that this is a technology that potentially interferes with natural systems - they'll be concerned about big corporations having control over the technology and, at the end of the day, you feed it to yourself and your children and that will be a particular concern for families across the UK."

"To change that quite negative view that people had 10-15 years ago will take quite a long time - it'll take a demonstration of safety, a demonstration of good regulation and of the ability to manage the technology in a safe way. And that doesn't happen overnight."

Information asymmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inf...