
Saturday, January 25, 2014

Robotics and automation, employment, and aging Baby Boomers

By Kenneth Anderson  

Tech companies likely would have done much of this anyway.  There’s a convergence of interest in these technologies from many different directions.  Still, the aging of the Baby Boomer population bears noting as an indirect economic driver for new machines, to make aging Baby Boomer care and maintenance more possible and affordable. 

Mobile, connected, supplied, and independent within one’s own home for as long as possible. Well, here you have all four: Google’s self-driving cars, Apple’s evolving iPhones as personal controllers of your assistive devices, Amazon’s home delivery, and the in-home assistive robots many companies are trying to design. 

The tech companies, it’s only partly an exaggeration to say, are firms whose business plans are based on old people.  It’s the services they want and need as they (“we,” let me be honest) grow older, striving to maintain autonomy and independence for as long as possible.  I might think (I do think) it’s unbelievably cool and not-old at all if one day I were to get a delivery by Amazon drone.  But actually, after the (thrilling) first time, it’s just because I’m old, don’t really feel like leaving the house, and anyway am too infirm to do so.  At least, not without a neuro-robotic “weak-limb support suit” for my legs, so I don’t lose my balance in the street and fall, and a Google car to get me to … the doctor’s office.

“Indirect” economic driver, however, because the Baby Boomers would not be paying out of their own pockets directly for many of the technologies that might be most important to them in retirement; government would wind up paying.  The concomitant uncertainties, political and otherwise, that would affect what amount to “procurement” decisions within Medicare and similar programs make investment in expensive and thus-far technologically unproven research and development (particularly with respect to the most ambitious artificial intelligence robotics) an economically uncertain, contingent proposition.

But now a familiar debate has broken out – around the employment effects that are likely to come from these new technologies. (The Economist magazine summarizes the debate and comments in this week’s edition.)  On the one hand, innovative, disruptive technological change is nothing new.  The result has always been short-to-medium term creative destruction, sometimes including the destruction of whole occupational categories – followed by longer term job growth enabled by the new technologies or the increased wealth they create. 

In any case, over the long run, increases in the standard of living can only come through innovation and technological advance that allows greater economic output to be extracted from the same or smaller labor input.  In a world of many elderly, retired Baby Boomers and a historically smaller worker base bearing much of the elderly’s living and health-care costs, that has to matter a great deal.  Ben Miller and Robert D. Atkinson make the positive case for automation and robotics along these lines in a September 2013 report from the Information Technology and Innovation Foundation, Are Robots Taking Our Jobs, or Making Them?  

On the other hand, maybe this time is different.  That’s what MIT professors Erik Brynjolfsson and Andrew McAfee argued in their 2012 book, Race Against the Machine, and reprise in a more nuanced way in their new book, released last week, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Machines.  Maybe “brilliant machines” will replace many workers – permanently.  Even without making sci-fi leaps of imagination for the capabilities of the short-to-medium term “artificial intelligence,” the emerging machines (as their designers intend them) aim to be not just “disembodied” AI computers, but instead genuine “robots,” possessed of mechanical capabilities for movement and mobility, and of sensors, both of which are rapidly advancing technologies – not just AI computational abilities.  Perhaps this combination – the AI robot able to enter and interact in ordinary human social spaces – does signify a break from our past experiences of innovation as (eventual) producer of net new jobs over time.  Maybe significant new categories of work don’t emerge this time around – because as soon as one does, someone (or some thing) breaks it down into an algorithm, and then comes up with mechanical devices and sensors capable of executing the task – intelligently.  As a Babbage column in the Economist put it several years ago, in this scenario, capital becomes labor.  

What makes these machines special, then, is not just AI but, instead, AI embedded in a machine that can sense, decide, and then act, with mechanical movement and mobility, in the physical world – in the human social world.  In that case, the economic changes could be unprecedented, with implications for manual labor and even for the “knowledge workers” previously insulated from both industrial automation and global outsourcing.  Unlike past technological revolutions, the result might not be whole new job categories gradually emerging (or at least ones that employ large numbers of workers). 

It’s striking, however, that the argument over employment doesn’t quite directly concern the elderly.  It doesn’t rationally engage aging Baby Boomers as consumers wanting these technologies and their services, or at least not Baby Boomers as both consumers and workers, with an interested foot in each camp.  They’ll already be retired, and so their own employment won’t be their issue.  The employment issue in which they will indirectly have an interest, however, is the cost of human labor to care for them as elderly and infirm; that care is strongly labor-intensive, in both unskilled and highly skilled (nursing) labor.  The difficulties of machine interaction in ordinary human environments – versus, say, the highly controlled factory or assembly-line floor – have meant that this sector has not so far been widely automated.  But that is liable to change significantly if these technologies move forward – and if they are successful, they will have to become part of the calculation of the cost of care of the elderly, given that the technological shifts are not going to be cheap, but the cost of both skilled and unskilled elder-care labor is only going to rise under current scenarios.  

Which is to say, no matter where you stand on the automation-robotics-employment debate, if tech’s business plan is significantly about the growing ranks of the elderly as the target market, then to that extent, the employment debate is less important to the elderly as regards their own employment, but (at least if the technologies are successful) squarely in the cross-hairs of public policy on the costs of care for the elderly.   It’s much more complicated than that, of course, and this is not to suggest that these are the only or even most significant factors.  The business model for robotics is not simply about retirees; the market for self-driving cars, for example, if they come to work as hoped by Google and others, will be far wider than that; eventually it becomes about everyone.   The point for now is only that the elderly form an important, though far from dominant, part of the markets for automation and robotics.

My own view is that Miller and Atkinson are mostly right about long-run job generation.  The “this time is different” view seems to me overstated – as is so often the case with AI, as Gary Marcus has noted.  One should never rule out paradigm-shifting advances, but so far as I can tell, the conceptual pathways laid down for AI today are not going to lead – even over the long run – to what sci-fi has already given us in imagination.  Siri is not “Her” – as even Siri herself noted in a recent tweet.  For the future we can foresee, in the short-to-medium term, we’ll be more likely to have machines that (as ever) extend, but do not replace, human capabilities; in other cases, human capabilities will extend the machines.  The foreseeable future, I suspect, remains the process (long underway) of tag-teaming humans and machines.  Which is to say (mostly), same as it ever was.    

The significant new job categories (I speculate) run toward skilled manual labor of a new kind. The “maker movement”; new US manufacturing trends toward highly automated, but still human-run and staffed factories; new high technology, but still human-controlled, energy exploitation such as fracking; complex and crucial robotic machines under the supervision of nurses whose whole new skill sets put them in a new job category we might call nursing technologist – these are the areas of work that point the way forward.

Quasi-manual labor – but highly skilled, highly value-added, and value-added because it services the machines.  Or as economist Tyler Cowen put it in his 2013 Average Is Over, the next generation of workers will be defined by their relationships with the machine:  You can do very well if you are able to use technology to leverage your own productivity.  You can also do very well if you are able to use your human skills to leverage the productivity of the machine.  If you can’t do either, though, you might gradually fall into a welfare-supported underclass – because the world of work, even apparently merely “manual” labor, is largely out of your reach. 

Annals of the Presidency

On and off the road with Barack Obama.

by David Remnick, January 27, 2014

http://www.newyorker.com/reporting/2014/01/27/140127fa_fact_remnick?utm_source=tny&utm_campaign=generalsocial&utm_medium=twitter

Obama’s Presidency is on the clock. Hard as it has been to pass legislation, the coming year is a marker, the final interval before the fight for succession becomes politically all-consuming. Photograph by Pari Dukovic.
                                         
On the Sunday afternoon before Thanksgiving, Barack Obama sat in the office cabin of Air Force One wearing a look of heavy-lidded annoyance. The Affordable Care Act, his signature domestic achievement and, for all its limitations, the most ambitious social legislation since the Great Society, half a century ago, was in jeopardy. His approval rating was down to forty per cent—lower than George W. Bush’s in December of 2005, when Bush admitted that the decision to invade Iraq had been based on intelligence that “turned out to be wrong.” Also, Obama said thickly, “I’ve got a fat lip.”

That morning, while playing basketball at F.B.I. headquarters, Obama went up for a rebound and came down empty-handed; he got, instead, the sort of humbling reserved for middle-aged men who stubbornly refuse the transition to the elliptical machine and Gentle Healing Yoga. This had happened before. In 2010, after taking a self-described “shellacking” in the midterm elections, Obama caught an elbow in the mouth while playing ball at Fort McNair. He wound up with a dozen stitches. The culprit then was one Reynaldo Decerega, a member of the Congressional Hispanic Caucus Institute. Decerega wasn’t invited to play again, though Obama sent him a photograph inscribed “For Rey, the only guy that ever hit the President and didn’t get arrested. Barack.”

This time, the injury was slighter and no assailant was named—“I think it was the ball,” Obama said—but the President needed little assistance in divining the metaphor in this latest insult to his person. The pundits were declaring 2013 the worst year of his Presidency. The Republicans had been sniping at Obamacare since its passage, nearly four years earlier, and HealthCare.gov, a Web site that was undertested and overmatched, was a gift to them. There were other beribboned boxes under the tree: Edward Snowden’s revelations about the National Security Agency; the failure to get anything passed on gun control or immigration reform; the unseemly waffling over whether the Egyptian coup was a coup; the solidifying wisdom in Washington that the President was “disengaged,” allergic to the forensic and seductive arts of political persuasion. The congressional Republicans quashed nearly all legislation as a matter of principle and shut down the government for sixteen days, before relenting out of sheer tactical confusion and embarrassment—and yet it was the President’s miseries that dominated the year-end summations.

Obama worried his lip with his tongue and the tip of his index finger. He sighed, slumping in his chair. The night before, Iran had agreed to freeze its nuclear program for six months. A final pact, if one could be arrived at, would end the prospect of a military strike on Iran’s nuclear facilities and the hell that could follow: terror attacks, proxy battles, regional war—take your pick. An agreement could even help normalize relations between the United States and Iran for the first time since the Islamic Revolution, in 1979. Obama put the odds of a final accord at less than even, but, still, how was this not good news?

The answer had arrived with breakfast. The Saudis, the Israelis, and the Republican leadership made their opposition known on the Sunday-morning shows and through diplomatic channels. Benjamin Netanyahu, the Israeli Prime Minister, called the agreement a “historic mistake.” Even a putative ally like New York Senator Chuck Schumer could go on “Meet the Press” and, fearing no retribution from the White House, hint that he might help bollix up the deal. Obama hadn’t tuned in. “I don’t watch Sunday-morning shows,” he said. “That’s been a well-established rule.” Instead, he went out to play ball.

Usually, Obama spends Sundays with his family. Now he was headed for a three-day fund-raising trip to Seattle, San Francisco, and Los Angeles, rattling the cup in one preposterous mansion after another. The prospect was dispiriting. Obama had already run his last race, and the chances that the Democratic Party will win back the House of Representatives in the 2014 midterm elections are slight. The Democrats could, in fact, lose the Senate.

For an important trip abroad, Air Force One is crowded with advisers, military aides, Secret Service people, support staff, the press pool. This trip was smaller, and I was along for the ride, sitting in a guest cabin with a couple of aides and a staffer who was tasked with keeping watch over a dark suit bag with a tag reading “The President.”

Obama spent his flight time in the private quarters in the nose of the plane, in his office compartment, or in a conference room. At one point on the trip from Andrews Air Force Base to Seattle, I was invited up front for a conversation. Obama was sitting at his desk watching the Miami Dolphins–Carolina Panthers game. Slender as a switch, he wore a white shirt and dark slacks; a flight jacket was slung over his high-backed leather chair. As we talked, mainly about the Middle East, his eyes wandered to the game. Reports of multiple concussions and retired players with early-onset dementia had been in the news all year, and so, before I left, I asked if he didn’t feel at all ambivalent about following the sport. He didn’t.

“I would not let my son play pro football,” he conceded. “But, I mean, you wrote a lot about boxing, right? We’re sort of in the same realm.”

The Miami defense was taking on a Keystone Kops quality, and Obama, who had lost hope on a Bears contest, was starting to lose interest in the Dolphins. “At this point, there’s a little bit of caveat emptor,” he went on. “These guys, they know what they’re doing. They know what they’re buying into. It is no longer a secret. It’s sort of the feeling I have about smokers, you know?”

Note: this is only one of eighteen pages.

Musings of Neil deGrasse Tyson

Posted on January 24, 2014 at 1:00 pm
Neil deGrasse Tyson giving the Vulcan salute. Photo Credit: Business Insider.

“For me, I am driven by two main philosophies: know more today about the world than I knew yesterday and lessen the suffering of others. You’d be surprised how far that gets you.”

If you don’t know who Neil deGrasse Tyson is, I would have to guess you have been living under a rock for the past ten years or so. For those of you who do know of him, how much do you really know about the man who is sometimes referred to as the Carl Sagan of the 21st century?


HUMBLE BEGINNINGS:



After seeing the stars during a visit to Pennsylvania, Tyson became entranced with astronomy at the tender age of nine, and he continued to study astronomy throughout his teens. He even earned a foothold and slight fame in the astronomy community by giving lectures on the subject at the young age of fifteen.  Unfortunately for Carl Sagan, who wished to recruit Tyson to study at Cornell University, Tyson decided instead to attend Harvard University, where he eventually earned his Bachelor of Arts degree in physics in 1980.  Three years later, in 1983, he went on to receive his ‘Master of Arts’ degree in astronomy at the University of Texas at Austin. In the following years, while under the supervision of Professor Michael Rich at Columbia University, he received both a Master of Philosophy and a Doctor of Philosophy degree in astrophysics.


EARLY CAREER:



NASA Distinguished Public Service Medal awarded to Tyson in 2004. Photo Credit: NASA

Tyson started writing science books early in his career; his first book, from 1989, was titled ‘Merlin’s Tour of the Universe.’ It is a science fiction book in which the main character, Merlin – a fictional character from the Andromeda Galaxy – befriends many of Earth’s most famous scientists. This was only the start for Tyson: from 1989 to 2012 he wrote 12 separate science books, mostly revolving around the subjects of astronomy and astrophysics, and in 1998 he even wrote a companion novel to Merlin’s Tour of the Universe, titled ‘Just Visiting This Planet: Merlin Answers More Questions about Everything under the Sun, Moon, and Stars.’  In 1995, Tyson began writing a column for Natural History magazine, simply called “Universe” (many of the archived articles can still be viewed online).  While working on his thesis, Tyson observed at the Cerro Tololo Inter-American Observatory, a ground-based optical telescope in the Coquimbo Region of Chile; during that time he collaborated with Calán/Tololo Survey astronomers on their work on Type Ia supernovae.

These papers formed a portion of the discovery papers relating the use of Type Ia supernovae to measure distances, which in turn led to the improvement of the Hubble constant and, later on, the discovery of dark energy. Because of this, in 2001, President George W. Bush appointed Tyson (along with the second man on the moon, Buzz Aldrin) to serve on the Commission on the Future of the United States Aerospace Industry. In 2004, he was again appointed by the president, this time to the President’s Commission on Implementation of United States Space Exploration Policy; shortly afterwards, he was awarded NASA’s ‘Distinguished Public Service Medal,’ the highest honor given by NASA to a civilian.


THE HAYDEN PLANETARIUM:



Hayden Planetarium as seen at night time. Photo Credit: Wiki Commons user Alfred Gracombe.

Neil deGrasse Tyson started working at the Hayden Planetarium in 1996, not long after receiving his doctorate. Unfortunately, I cannot find anything that specifies exactly when he was appointed as the first Frederick P. Rose Director of the planetarium, a position that Tyson still holds today.


THE PLUTO FILES:



As director of the planetarium, Tyson decided to toss out traditional thinking in order to keep Pluto from being referred to as the ninth planet.  Tyson explained that he wanted to get away from simply counting the planets in the solar system. Instead, he wanted to look at the similarities among the terrestrial planets, the similarities among the gas giants, and the similarities between Pluto and other objects found in the Kuiper belt.  Tyson has stated on many television shows, such as The Colbert Report and The Daily Show, that his decision to stand against the traditional definition of a planet resulted in a flood of hate mail, most of it from children.  In 2006, the International Astronomical Union confirmed Tyson’s assessment by changing the classification of Pluto from a fully fledged planet to a dwarf planet.


THE PLANETARY SOCIETY:



We’re soul mates, Bill! Photo Credit: StarTalk Radio.

I was hoping to highlight Tyson’s tenure with The Planetary Society; however, after spending a few days scouring the internet for information, I honestly haven’t found a lot of useful info; I couldn’t even find a date for when he joined, though I imagine it was fairly early.  The Planetary Society, which was founded in 1980 by Carl Sagan, Bruce Murray and Louis Friedman, abides by its objective to advance the exploration of space and to continue the search for extraterrestrial life.  Tyson was the Vice President of the Planetary Society for three years, until 2005, when he passed the torch to his friend and confidant, Bill Nye. Still, Tyson has continued to use his prominence with The Planetary Society, in conjunction with his ability to describe scientific processes in a fairly easy-to-understand manner, to help educate the public on certain science-related issues.


 PENNY4NASA ADVOCATE:



Take a penny, Leave a penny, Give NASA a damn penny! Photo Credit: PENNY4NASA.
Similarly, it should be no surprise to anyone that Neil is an outspoken advocate of the Penny4NASA campaign, which aims to increase the budget of NASA, thereby expanding its operations.  A lot of people believe that NASA’s budget is something like 5 to 10 cents on the dollar; however, this is horribly incorrect, as Tyson frequently points out.  Currently, the budget for NASA is half a penny on the dollar, a far cry from what people thought it was (the figure has since shifted slightly, as the budget was increased by 800 million dollars for 2014).  In March 2012, Tyson testified before the United States Senate Committee on Commerce, Science, and Transportation that: “Right now, NASA’s annual budget is half a penny on your tax dollar. For twice that—a penny on a dollar—we can transform the country from a sullen, dispirited nation, weary of economic struggle, to one where it has reclaimed its 20th century birthright to dream of tomorrow.” A full written transcript of the testimony can be read here.
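
As a quick back-of-the-envelope check of that “half a penny” figure, the arithmetic looks roughly like this (the dollar amounts below are approximate assumptions for illustration, not numbers taken from Tyson’s testimony):

    # Rough sanity check of the "half a penny on the dollar" claim.
    # Both figures are approximate assumptions for illustration only.
    federal_spending = 3.5e12   # assumed total annual US federal spending, ~$3.5 trillion
    nasa_budget = 17.6e9        # assumed annual NASA budget, ~$17.6 billion

    share = nasa_budget / federal_spending
    print(f"NASA's share of each federal dollar: {share:.4f} (about {share * 100:.2f} cents)")
    # Prints roughly 0.0050, i.e. about half a cent of every federal dollar.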


OTHER TASTY TIDBITS:



Credit: U.S. Air Force photo/Denise Gould

Most people don’t know this, but Tyson, a New Yorker, was an eyewitness to the September 11th terrorist attacks on the World Trade Center. Living near what would come to be known worldwide as Ground Zero, Tyson watched the devastation as it unfolded.  His footage was included in a 2008 documentary entitled ‘102 Minutes That Changed America,’ and Tyson also wrote a widely circulated letter in response to the tragedy.


  •  Tyson has also collaborated with PETA to create a public service announcement, stating “You don’t have to be a rocket scientist to know that kindness is a virtue.” At the same time, he did an interview with PETA that discussed the concept of intelligence in both humans and other animals, the failure of humans to communicate meaningfully with other animals, and the need for empathy in humanity.

  • Tyson is a wine aficionado, to the extent that his wine collection has been featured in two different magazines: first in May 2000 in Wine Spectator, and again in 2005 in The World of Fine Wine.


In March, the much-beloved book and television series, Carl Sagan’s ‘Cosmos,’ will see a continuation on Fox. Neil deGrasse Tyson will narrate the reboot. Seth MacFarlane (the brains behind ‘Family Guy’) and Ann Druyan (Carl Sagan’s widow) are also intimately involved in the production of the series. Stay tuned!

Natural Selection Questions Answered Using Engineering Plus Evolutionary Analyses

Image Caption: Glossophaga soricina, a nectarivorous bat, feeding on the flowers of a banana plant. Nectar feeding bats comprised one of three evolutionary optima for mechanical advantage among New World Leaf-nosed bats. Credit: Beth Clare, Queen Mary University of London    
   
University of Massachusetts at Amherst
January 24, 2014
 
                                                                                                                                                                                                                                 
Introducing a new approach that combines evolutionary and engineering analyses to identify the targets of natural selection, researchers report in the current issue of Evolution that the new tool opens a way of discovering evidence for selection for biomechanical function in very diverse organisms and of reconstructing skull shapes in long-extinct ancestral species.
 
Evolutionary biologist Elizabeth Dumont and mechanical engineer Ian Grosse at the University of Massachusetts Amherst, with evolutionary biologist Liliana Dávalos of Stony Brook University and support from the National Science Foundation, studied the evolutionary histories of the adaptive radiation of New World leaf-nosed bats based on their dietary niches.
 
As the authors point out, adaptive radiations, that is, the explosive evolution of species into new ecological niches, have generated much of the biological diversity seen in the world today. “Natural selection is the driving force behind adaptation to new niches, but it can be difficult to identify which features are the targets of selection. This is especially the case when selection was important in the distant past of a group whose living members now occupy very different niches,” they note.
 
They set out to tackle this by examining almost 200 species of New World leaf-nosed bats that exploit many different food niches: insects, frogs, lizards, fruit, nectar and even blood. The bats’ skulls today reflect this dietary diversity. Species with long, narrow snouts eat nectar, while short-faced bats have exceptionally short, wide palates for eating hard fruits. Species that eat other foods have snouts shaped somewhere in between.
 
Dumont explains further, “We knew diet was associated with those things, but there was no evidence that natural selection acted to make those changes in the skull. The engineering model allowed us to identify the biomechanical functions that natural selection worked on. Some form or function helps an animal to perform better in its environment, but it can be hard to demonstrate exactly what that form or function is. We studied the engineering results using the evolutionary tree, which is a very cool new thing about this work.”
 
She and colleagues built an engineering model of a bat skull that can morph into the shape of any species, and used it to create skulls with all possible combinations of snout length and width. Then they ran engineering analyses on all the models to assess their structural strength and mechanical advantage, a measure of how efficiently and how hard bats can bite.
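 
To make “mechanical advantage” concrete, here is a minimal sketch that treats the jaw as a simple lever; this toy calculation is my own illustration of the general idea, with made-up moment arms, and is not the morphing engineering model the researchers actually built:

    # Toy lever model of jaw mechanical advantage (illustration only; the study
    # used a far more detailed morphing skull model and engineering analyses).
    def mechanical_advantage(in_lever: float, out_lever: float) -> float:
        """Ratio of the jaw muscle's moment arm (in-lever) to the bite point's
        moment arm (out-lever); higher values mean a larger share of muscle
        force is delivered as bite force."""
        return in_lever / out_lever

    # Hypothetical moment arms (in centimetres) for two contrasting snout shapes:
    short_wide = mechanical_advantage(in_lever=0.8, out_lever=1.2)   # short-faced, hard-fruit eater
    long_narrow = mechanical_advantage(in_lever=0.8, out_lever=2.4)  # long-snouted nectar feeder

    print(f"short, wide snout:  MA = {short_wide:.2f}")   # ~0.67: harder bite
    print(f"long, narrow snout: MA = {long_narrow:.2f}")  # ~0.33: weaker bite, but the snout fits into flowers

The same trade-off, on a rigorous footing, is what the evolutionary analysis described below picks out as the basis for the different snout-shape optima.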
 
Analyzing the engineering results over hundreds of evolutionary trees of New World leaf-nosed bats revealed three optimal snout shapes favored by natural selection, they report. One was the long, narrow snout of nectar feeders, the second was the extremely short and wide snout of short-faced bats, and the third optimum included all other species. Overall, selection for mechanical advantage was more important in determining the optima than was selection for structural strength, they add.
 
“Thanks to this new approach,” Dumont says, “we were able to answer our original question about natural selection in the evolution of these bats. It favored the highest mechanical advantage in short-faced bats, which gives them the high bite forces needed to pierce through the hardest figs. Nectar feeders have very low mechanical advantage, which is a trade-off for having long, narrow snouts that fit into the flowers in which they find nectar.”
Source: University of Massachusetts at Amherst      

Friday, January 24, 2014

Antarctic sea ice hit 35-year record high Saturday

 
Antarctic sea ice extent on September 22 compared to 1981-2010 median depicted by orange curve (NSIDC)
Antarctic sea ice has grown to a record large extent for a second straight year, baffling scientists seeking to understand why this ice is expanding rather than shrinking in a warming world.
On Saturday, the ice extent reached 19.51 million square kilometers, according to data posted on the National Snow and Ice Data Center Web site.  That number bested record high levels set earlier this month and in 2012 (of 19.48 million square kilometers). Records date back to October 1978.

The increasing ice is especially perplexing since the water beneath the ice has warmed, not cooled.
“The overwhelming evidence is that the Southern Ocean is warming,” said Jinlun Zhang, a University of Washington scientist studying Antarctic ice. “Why would sea ice be increasing? Although the rate of increase is small, it is a puzzle to scientists.”

In a new study in the Journal of Climate, Zhang finds that strengthening and converging winds around the South Pole can explain 80 percent of the observed increase in ice volume.

“The polar vortex that swirls around the South Pole is not just stronger than it was when satellite records began in the 1970s, it has more convergence, meaning it shoves the sea ice together to cause ridging,” the study’s press release explains. “Stronger winds also drive ice faster, which leads to still more deformation and ridging. This creates thicker, longer-lasting ice, while exposing surrounding water and thin ice to the blistering cold winds that cause more ice growth.”

But no one seems to have a conclusive answer as to why winds are behaving this way.
“I haven’t seen a clear explanation yet of why the winds have gotten stronger,” Zhang told Michael Lemonick of Climate Central.

Some point to stratospheric ozone depletion, but a new study published in the Journal of Climate notes that computer models simulate declining – not increasing – Antarctic sea ice in recent decades due to this phenomenon (aka the ozone “hole”).

“This modeled Antarctic sea ice decrease in the last three decades is at odds with observations, which show a small yet statistically significant increase in sea ice extent,” says the study, led by Colorado State University atmospheric scientist Elizabeth Barnes.

A recent study by Lorenzo Polvani and Karen Smith of Columbia University says the model-defying sea ice increase may just reflect natural variability.

If the increase in ice is due to natural variability, Zhang says, warming from manmade greenhouse gases should eventually overcome it and cause the ice to begin retreating.

“If the warming continues, at some point the trend will reverse,” Zhang said.

However, a conclusion of the Barnes study is that the recovery of the stratospheric ozone layer – now underway – may slow/delay Antarctic warming and ice melt.

Ultimately, it’s apparent that the relationship between ozone depletion, climate warming from greenhouse gases, natural variability, and how Antarctic ice responds is all very complicated. In sharp contrast, in the Arctic, there seems to be a relatively straightforward relationship between temperature and ice extent.

Thus, in the Antarctic, we shouldn’t necessarily expect to witness the kind of steep decline in ice that has occurred in the Arctic.

“…the seeming paradox of Antarctic ice increasing while Arctic ice is decreasing is really no paradox at all,” explains Climate Central’s Lemonick. “The Arctic is an ocean surrounded by land, while the Antarctic is land surrounded by ocean. In the Arctic, moreover, you’ve got sea ice decreasing in the summer; at the opposite pole, you’ve got sea ice increasing in the winter. It’s not just an apples-and-oranges comparison: it’s more like comparing apple pie with orange juice.”

Genetically-modified purple tomatoes heading for shops

Purple tomatoes: the new tomatoes could improve the nutritional value of everyday foods

 
The prospect of genetically modified purple tomatoes reaching the shelves has come a step closer.

Their dark pigment is intended to give tomatoes the same potential health benefits as fruit such as blueberries.

Developed in Britain, the tomatoes are now in large-scale production in Canada, with the first 1,200 litres of purple tomato juice ready for shipping.

The pigment, known as anthocyanin, is an antioxidant which studies on animals show could help fight cancer.

Scientists say the new tomatoes could improve the nutritional value of everything from ketchup to pizza topping.

The tomatoes were developed at the John Innes Centre in Norwich where Prof Cathie Martin hopes the first delivery of large quantities of juice will allow researchers to investigate its potential.

"With these purple tomatoes you can get the same compounds that are present in blueberries and cranberries that give them their health benefits - but you can apply them to foods that people actually eat in significant amounts and are reasonably affordable," she said.


The tomatoes are part of a new generation of GM plants designed to appeal to consumers - the first types were aimed specifically at farmers as new tools in agriculture.

The purple pigment is the result of the transfer of a gene from a snapdragon plant - the modification triggers a process within the tomato plant allowing the anthocyanin to develop.

Although the invention is British, Prof Martin says European Union restrictions on GM encouraged her to look abroad to develop the technology.

Canadian regulations are seen as more supportive of GM and that led to a deal with an Ontario company, New Energy Farms, which is now producing enough purple tomatoes in a 465 square metre (5,000sq ft) greenhouse to make 2,000 litres (440 gallons) of juice.

According to Prof Martin, the Canadian system is "very enlightened".

"They look at the trait not the technology and that should be a way we start changing our thinking - asking if what you're doing is safe and beneficial, not 'Is it GM and therefore we're going to reject it completely'.

"It is frustrating that we've had to go to Canada to do a lot of the growing and the processing and I hope this will serve as a vanguard product where people can have access to something that is GM but has benefits for them."

The first 1,200 litres are due to be shipped to Norwich shortly - and because all the seeds will have been removed, there is no genetic material to risk any contamination.

Camelina plants
Scientists at Rothamsted hope to produce a GM plant that provides "fish oil"

The aim is to use the juice in research to conduct a wide range of tests including examining whether the anthocyanin has positive effects on humans. Earlier studies show benefits as an anti-inflammatory and in slowing cancers in mice.

A key question is whether a GM product that may have health benefits will influence public opinion.

A major survey across the European Union in 2010 found opponents outnumbered supporters by roughly three to one. The last approval for a GM food crop in the EU came in 1998.

Prof Martin hopes that the purple tomato juice will have a good chance of being approved for sale to consumers in North America in as little as two years' time.

She and other plant researchers in the UK hope that GM will come to be seen in a more positive light.

Legacy of distrust

Earlier on Friday, scientists at Rothamsted Research in Hertfordshire announced that they were seeking permission for field trials for a GM plant that could produce a "fish oil".

In a parallel project, they have been cultivating a type of GM wheat that is designed to release a pheromone that deters aphids.

Professor Nick Pidgeon, an environmental psychologist at Cardiff University, has run opinion polls and focus groups on GM and other technologies.

He says that a legacy of distrust, including from the time of mad cow disease, will cause lasting concern.

"Highlighting benefits will make a difference but it's only one part of the story which is quite complex.

"People will still be concerned that this is a technology that potentially interferes with natural systems - they'll be concerned about big corporations having control over the technology and, at the end of the day, you feed it to yourself and your children and that will be a particular concern for families across the UK."

"To change that quite negative view that people had 10-15 years ago will take quite a long time - it'll take a demonstration of safety, a demonstration of good regulation and of the ability to manage the technology in a safe way. And that doesn't happen overnight."

A New Physics Theory of Life

 
Jeremy England
Katherine Taylor for Quanta Magazine

Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life.


Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.
Cells from the moss Plagiomnium affine with visible chloroplasts, organelles that conduct photosynthesis by capturing sunlight. Credit: Kristian Peters.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”
A computer simulation by Jeremy England and colleagues shows a system of particles confined inside a viscous fluid in which the turquoise particles are driven by an oscillating force. Over time (from top to bottom), the force triggers the formation of more bonds among the particles. Courtesy of Jeremy England.

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.
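
The probabilistic argument in the paragraph above is the content of Boltzmann’s entropy formula, a standard textbook relation included here as background rather than taken from the article:

    S = k_B \ln W

where W is the number of microscopic arrangements (microstates) consistent with what we observe macroscopically and k_B is Boltzmann’s constant. Spread-out configurations correspond to an astronomically larger W than concentrated ones, so a system drifts toward them simply because there are vastly more ways to be spread out.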

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

This situation changed in the late 1990s, due primarily to the work of Chris Jarzynski, now at the University of Maryland, and Gavin Crooks, now at Lawrence Berkeley National Laboratory. Jarzynski and Crooks showed that the entropy produced by a thermodynamic process, such as the cooling of a cup of coffee, corresponds to a simple ratio: the probability that the atoms will undergo that process divided by their probability of undergoing the reverse process (that is, spontaneously interacting in such a way that the coffee warms up). As entropy production increases, so does this ratio: A system’s behavior becomes more and more “irreversible.” The simple yet rigorous formula could in principle be applied to any thermodynamic process, no matter how fast or far from equilibrium. “Our understanding of far-from-equilibrium statistical mechanics greatly improved,” Grosberg said. England, who is trained in both biochemistry and physics, started his own lab at MIT two years ago and decided to apply the new knowledge of statistical physics to biology.
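
Written schematically, the Jarzynski–Crooks result described above takes the form of a fluctuation relation (a standard statement of the idea, given as background rather than quoted from the article):

    \frac{P(\text{forward})}{P(\text{reverse})} = e^{\Delta S / k_B}

so the more entropy \Delta S a process generates, the more lopsided this ratio becomes and the more effectively irreversible the process is, no matter how fast or how far from equilibrium it runs.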

Using Jarzynski and Crooks’ formulation, he derived a generalization of the second law of thermodynamics that holds for systems of particles with certain characteristics: The systems are strongly driven by an external energy source such as an electromagnetic wave, and they can dump heat into a surrounding bath. This class of systems includes all living things. England then determined how such systems tend to evolve over time as they increase their irreversibility. “We can show very simply from the formula that the more likely evolutionary outcomes are going to be the ones that absorbed and dissipated more energy from the environment’s external drives on the way to getting there,” he said. The finding makes intuitive sense: Particles tend to dissipate more energy when they resonate with a driving force, or move in the direction it is pushing them, and they are more likely to move in that direction than any other at any given moment.
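
In symbols, the generalized bound England derives takes roughly the following form (my paraphrase of the inequality in his self-replication paper; the notation here is schematic):

    \beta \langle Q \rangle_{\mathrm{I} \to \mathrm{II}} + \ln \frac{\pi(\mathrm{II} \to \mathrm{I})}{\pi(\mathrm{I} \to \mathrm{II})} + \Delta S_{\mathrm{int}} \geq 0

where \beta is the inverse temperature of the surrounding bath, \langle Q \rangle is the average heat released into the bath as the system moves from macrostate I to macrostate II, \pi denotes the transition probabilities between the macrostates, and \Delta S_{\mathrm{int}} is the change in the system’s internal entropy. Transitions that are hard to reverse (a small \pi(\mathrm{II} \to \mathrm{I})) must, on average, be paid for with more heat dumped into the bath, which is the sense in which the likelier outcomes are the more dissipative ones.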

“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.
Self-Replicating Microstructures. Courtesy of Michael Brenner/Proceedings of the National Academy of Sciences.
Self-Replicating Sphere Clusters: According to new research at Harvard, coating the surfaces of microspheres can cause them to spontaneously assemble into a chosen structure, such as a polytetrahedron (red), which then triggers nearby spheres into forming an identical structure.

Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time. As England put it, “A great way of dissipating more is to make more copies of yourself.” In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating. He also showed that RNA, the nucleic acid that many scientists believe served as the precursor to DNA-based life, is a particularly cheap building material. Once RNA arose, he argues, its “Darwinian takeover” was perhaps not surprising.

The chemistry of the primordial soup, random mutations, geography, catastrophic events and countless other factors have contributed to the fine details of Earth’s diverse flora and fauna. But according to England’s theory, the underlying principle driving the whole process is dissipation-driven adaptation of matter.

This principle would apply to inanimate matter as well. “It is very tempting to speculate about what phenomena in nature we can now fit under this big tent of dissipation-driven adaptive organization,” England said. “Many examples could just be right under our nose, but because we haven’t been looking for them we haven’t noticed them.”

Scientists have already observed self-replication in nonliving systems. According to new research led by Philip Marcus of the University of California, Berkeley, and reported in Physical Review Letters in August, vortices in turbulent fluids spontaneously replicate themselves by drawing energy from shear in the surrounding fluid. And in a paper appearing online this week in Proceedings of the National Academy of Sciences, Michael Brenner, a professor of applied mathematics and physics at Harvard, and his collaborators present theoretical models and simulations of microstructures that self-replicate. These clusters of specially coated microspheres dissipate energy by roping nearby spheres into forming identical clusters. “This connects very much to what Jeremy is saying,” Brenner said.

Besides self-replication, greater structural organization is another means by which strongly driven systems ramp up their ability to dissipate energy. A plant, for example, is much better at capturing and routing solar energy through itself than an unstructured heap of carbon atoms. Thus, England argues that under certain conditions, matter will spontaneously self-organize. This tendency could account for the internal order of living things and of many inanimate structures as well. “Snowflakes, sand dunes and turbulent vortices all have in common that they are strikingly patterned structures that emerge in many-particle systems driven by some dissipative process,” he said. Condensation, wind and viscous drag are the relevant processes in these particular cases.

“He is making me think that the distinction between living and nonliving matter is not sharp,” said Carl Franck, a biological physicist at Cornell University, in an email. “I’m particularly impressed by this notion when one considers systems as small as chemical circuits involving a few biomolecules.”
Snowflake. Credit: Wilson Bentley.
If a new theory is correct, the same physics it identifies as responsible for the origin of living things could explain the formation of many other patterned structures in nature. Snowflakes, sand dunes and self-replicating vortices in the protoplanetary disk may all be examples of dissipation-driven adaptation.

England’s bold idea will likely face close scrutiny in the coming years. He is currently running computer simulations to test his theory that systems of particles adapt their structures to become better at dissipating energy. The next step will be to run experiments on living systems.

Prentiss, who runs an experimental biophysics lab at Harvard, says England’s theory could be tested by comparing cells with different mutations and looking for a correlation between the amount of energy the cells dissipate and their replication rates. “One has to be careful because any mutation might do many things,” she said. “But if one kept doing many of these experiments on different systems and if [dissipation and replication success] are indeed correlated, that would suggest this is the correct organizing principle.”

Brenner said he hopes to connect England’s theory to his own microsphere constructions and determine whether the theory correctly predicts which self-replication and self-assembly processes can occur — “a fundamental question in science,” he said.

Having an overarching principle of life and evolution would give researchers a broader perspective on the emergence of structure and function in living things, many of the researchers said. “Natural selection doesn’t explain certain characteristics,” said Ard Louis, a biophysicist at Oxford University, in an email. These characteristics include a heritable change to gene expression called methylation, increases in complexity in the absence of natural selection, and certain molecular changes Louis has recently studied.

If England’s approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that “the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve,” Louis said.

“People often get stuck in thinking about individual problems,” Prentiss said.  Whether or not England’s ideas turn out to be exactly right, she said, “thinking more broadly is where many scientific breakthroughs are made.”

Emily Singer contributed reporting.

Correction: This article was revised on January 22, 2014, to reflect that Ilya Prigogine won the Nobel Prize in chemistry, not physics.
