Friday, December 27, 2013

The rise and fall of the Hockey Stick

The rise of the so-called Hockey Stick graph is pivotal to the story of the rise of alarm about man-made global warming.

Reprinted from A Sceptical Mind

The fall of the Hockey Stick graph is pivotal to the rise of scepticism about man-made global warming.

Here is the story of the rise and fall of the Hockey Stick.

The Background
A central and critical plank of the alarmist global warming case is that the current phase of warming that started in the late 19th century is unprecedented.

Why is this claim so important?

Because if a similar or greater warming phase occurred in the very recent past, before human CO2 emissions had caused CO2 levels to rise, then clearly any such warming must have been natural and was not caused by CO2. And if a recent similar warming phase was natural, then the current phase of warming could also be a natural phenomenon.

If the current phase of warming could be natural then those arguing that it was primarily caused by human CO2 emissions would have to prove their hypothesis. And this is something they cannot do.
The only “proof” that CO2 is currently forcing up global temperatures is the claim that the current warming is somehow unusual, unique and unnatural. That is the total argument for CO2 forcing: something unprecedented is happening to the climate, and CO2 is the only candidate for what is causing this unique phenomenon.

It’s certainly true that the well understood physics of CO2 in the atmosphere demonstrates (see “CO2 the basic facts“) that CO2 is indeed a greenhouse gas and will have a warming impact. No one disputes that. The issue is the scale of the impact that this CO2 warming is having on the overall climate system. Is the effect of the CO2 big enough to drive the temperature of the whole planet up far enough to actually alter the climate?

This is a much harder question to answer because no one has a model of the total climate system that actually works and that verifiably produces even remotely accurate forecasts about climate trends.
So without a working model of the total climate system, the only way to “prove” that CO2 is driving climate change is to prove that something truly unique is happening to the climate, that there is unprecedented warming occurring, and then propose man-made CO2 as the only candidate cause of this ‘unprecedented’ warming.

The “problem” of the Medieval Warm Period

Until the 1990s there were many, many references in the scientific and historical literature to a period labelled the Medieval Warm Period (MWP), lasting from about AD 800–1300. It was followed by a much cooler period termed the Little Ice Age. Based on both temperature reconstructions using proxy measures and voluminous historical references, it was accepted that the Medieval Warm Period had been a period when global temperatures were a bit hotter than today’s. Until about the mid-1990s the Medieval Warm Period was, for climate researchers, an undisputed fact. Its existence was accepted without question and noted in the IPCC’s first assessment report of 1990. On page 202 of that 1990 report was figure 7c (see below), in which the Medieval Warm Period was portrayed as clearly warmer than the present.

By the time of the second IPCC report in 1995, in which CO2 forcing began for the first time to be proposed more prominently as a cause for serious alarm, the Medieval Warm Period had been sidelined in the text and narrative. An important way this was done was to alter the diagram of recent climate history by simply shortening the time period it covered, so that it now started after the Medieval Warm Period. All that was shown was the long slow recovery from the Little Ice Age to today’s temperatures, i.e. a long period of increasing temperatures. But clearly this was only a short-term solution. The way that the Medieval Warm Period dominated the recent climate graph challenged the basic argument for CO2 forcing, which was that the late 20th century climate was somehow unique. As Jay Overpeck, an IPCC participant, said in his email to Professor Deming, “We have to get rid of the Medieval Warm Period”.

In order to prove CO2 forcing the Medieval Warm Period had to be eliminated.

The Rise of the Hockey Stick

Between the 1995 second IPCC report and the 2001 third IPCC report there was a complete revision in the way that recent climate history was portrayed. The supporters of the theory that CO2 changes were driving temperatures up had succeeded in their goal of eliminating the Medieval Warm Period. This rewriting of climate history and the elimination of the Medieval Warm Period was achieved through the famous Hockey Stick graph.

To understand the scale of the revision that had taken place, compare the two graphs below. The one on the left is diagram 7c from page 202 of the 1990 IPCC report, in which the Medieval Warm Period was portrayed as clearly warmer than the present. On the right is the Hockey Stick graph from the 2001 IPCC report, in which the Medieval Warm Period and the Little Ice Age have all but disappeared and recent climate history is dominated by a rapid temperature rise in the late 20th century.


The first blow against the accepted understanding of climate history came in 1995, when the English climatologist Keith Briffa (based at the Climatic Research Unit at East Anglia) published in the journal Nature a study with sensational results. According to his studies of tree rings from the polar Urals in Siberia, there had never been a Medieval Warm Period, and the 20th century suddenly appeared as the warmest of the last 1000 years. The most recent part of this study is known as the Yamal study, after the region it was done in, and it has recently been discredited – see here.
Briffa’s work boldly proposed that the 20th century had experienced the warmest climate of the millennium, and this claim now became the central battlefield in the scientific argument about CO2 forcing. It ignored, of course, the Climatic Optimum (see Happy Holocene) between 5000 and 9000 years ago, when temperatures were significantly higher than today. But most people (and certainly the media and politicians) regard 5000 years as a long time ago, so there was no need to undermine the Climatic Optimum in order to win wide public support for the CO2 forcing hypothesis. Hottest in the last 1000 years would do.

Briffa’s work had an impact and laid the groundwork, but the real knock-out blow that finally succeeded in eliminating the Medieval Warm Period was a paper published in 1998 in Nature by Mann, Bradley and Hughes entitled “Global-scale temperature patterns and climate forcing over the past six centuries” (you can download it here). This was the original peer-reviewed Hockey Stick article.

Michael Mann of the Department of Geosciences, University of Massachusetts, the primary author of the paper, had in one scientific coup overturned the whole of climate history. Using tree rings as a basis for assessing past temperature changes back to the year 1000 AD, supplemented by other proxies from more recent centuries, Mann completely redrew climate history, turning the Medieval Warm Period and Little Ice Age into non-events. In the new Hockey Stick diagram the Medieval Warm Period and Little Ice Age have disappeared, replaced by a largely benign and slightly cooling linear trend in climate until 1900 AD, after which Mann’s new graph shows the temperature shooting up in the 20th century in an apparently anomalous and accelerating fashion.

In every other science, when such a drastic revision of previously accepted knowledge is promulgated, there is considerable debate and initial scepticism, with the new theory facing a gauntlet of criticism and intense review. Only if a new idea survives that process does it become broadly accepted by the scientific peer group and the public at large.

This never happened with Mann’s Hockey Stick. The coup was total, bloodless and swift, as Mann’s paper was greeted with a chorus of uncritical approval from the increasingly politically committed supporters of the CO2 greenhouse theory. Within the space of only 12 months the new theory had become entrenched as a new orthodoxy. The ultimate consummation of the new theory came with the release of the draft Third Assessment Report of the IPCC in 2000. Based solely on this new paper from a relatively unknown and young scientist, the IPCC could now boldly state:

“It is likely that the rate and duration of the warming of the 20th century is larger than any other time during the last 1,000 years. The 1990s are likely to have been the warmest decade of the millennium in the Northern Hemisphere, and 1998 is likely to have been the warmest year.”

Overturning its own previous view from the 1995 report, the IPCC presented the Hockey Stick as the new orthodoxy, with hardly an apology or explanation for the abrupt U-turn. The IPCC could show almost no supporting scientific justification, because other than Mann’s Hockey Stick paper and Briffa’s Siberian tree ring study there was little in the way of research confirming the new line.

The Hockey Stick graph, the new orthodoxy, was blown up to a wall-sized display and used as a backdrop for the public launch of the 2001 IPCC report.

Within months of the IPCC draft release, the long-awaited draft U.S. ‘National Assessment’ Overview document featured the Hockey Stick as the first of many climatic graphs and charts in its report, affirming the crucial importance placed in it by the authors and by the active pro-CO2-warming campaign at large. This was now not an esoteric theory about the distant past but the core foundation upon which the offensive on global warming was being mounted.

Soon the Hockey Stick was everywhere, and with it went the new, simple and catchy campaigning slogans: “it’s hotter now than at any time in the last 1000 years!“, “1998 was the hottest year for 1000 years!“

Not long after the 2001 IPCC report the Government of Canada sent the hockey stick to schools across the country, and its famous conclusion about the 1990s being the warmest decade of the millennium was the opening line of a pamphlet sent to every household in Canada to promote the Kyoto Protocol.

Al Gore’s Oscar-winning and hugely popular film “An Inconvenient Truth” was virtually built around the Hockey Stick (although Gore couldn’t resist tweaking it to make it look even more compelling, changing the way the graph data was displayed along the axes so that the temperature trend line looked even steeper and starker).

In the UK the Government announced that the DVD of “An Inconvenient Truth” would be sent to every school in the country as a teaching aid.

The Hockey Stick seemed to be carrying all before it. Dr Mann was promoted, given a central position in the IPCC and became a star of the media.

And then it all went horribly wrong.

The Fall of the Hockey Stick

In the years immediately after the 2001 IPCC report it seemed as if the sudden adoption of the Hockey Stick model of the earth’s recent climate past had created a new orthodoxy which could not be challenged. Some scientists quietly worried that the new account of the past climate had been adopted far too quickly, or were unhappy that satellite temperature readings didn’t seem to fit the Hockey Stick model, or noticed that new individual proxy studies still seemed to keep showing that the Medieval Warm Period was hotter than today. But they mostly stayed silent; they didn’t want to be branded as ‘deniers’, after all.

Then an unlikely hero emerged in the shape of Stephen McIntyre, a retired mineralogist from Toronto. McIntyre is not a scientist or an economist, but he does know a lot about statistics, maths and data analysis, and he is a curious guy. He didn’t start off as a climate sceptic; he was just someone interested in the nuts and bolts of these new and apparently exciting ideas about climate change, curious about how the Hockey Stick graph was made and keen to see whether the raw data looked like hockey sticks too. In the spring of 2003 McIntyre requested from Mann the raw data set used in the Hockey Stick paper. After some delay Mann arranged provision of a file which he said was the one used in the original 1998 Hockey Stick paper, and McIntyre began to look at how Mann had processed the data from the numerous different proxy studies cited as his source material and how they had been combined to produce the average that was the basis of the famous Hockey Stick shape.

About this time Steve McIntyre linked up with Ross McKitrick, a Canadian economist specialising in environmental economics and policy analysis. Together McIntyre and McKitrick began to dig down into the data that Mann had used in his paper and the statistical techniques used to create the single blended average behind the Hockey Stick. They immediately began to find problems.

Some of these problems seemed the sort of errors caused by sloppy data handling: wrong location labels, use of obsolete editions, unexplained truncations of available series, and so on.
Such errors should have been spotted in the peer review process, and they would adversely affect the quality of Mann’s conclusions, but they had a relatively small effect on the final results.

But McIntyre and McKitrick found one major error, an error so big that it invalidated the entire conclusion of the whole paper. A whopper of an error.

As we have seen, what Mann had done was blend together lots of different proxy studies of the past climate going back 1000 years and then produce an average of all these studies as a single graph showing the trend. Clearly the validity of the techniques used to blend together and average the data from the various studies was absolutely critical to the validity of the final conclusions and of the resulting Hockey Stick graph. This sort of blending of data sets is a very common statistical exercise and there are well-established techniques for undertaking it, most notably principal component analysis (if you want to know a lot more about the technical details then download McKitrick’s paper from here). What McIntyre and McKitrick discovered was that Mann had used a very unusual, non-standard version of the principal component calculation, and that this choice had drastically skewed the outcome of the blending and averaging exercise. Effectively, Mann’s odd statistical procedure selected data that had any sort of Hockey Stick shape and hugely increased its weight in the averaging process. Using Mann’s technique, almost any data was certain to produce a spurious Hockey Stick shape.

Here is an example of the sort of thing Mann was doing to the raw data.


Above are two separate temperature reconstructions running from 1400 AD; both use tree rings, one from California and one from Arizona. Both were part of the data used by Mann and included in the Hockey Stick average. The top one shows a temperature uptick at the end, in the 20th century, like the final Hockey Stick; the other shows a relatively flat temperature for the 20th century. Mann’s statistical trick gives the top series, the one with the desired Hockey Stick shape, a weighting in the data that is 390 times that of the bottom series, just because it has a Hockey Stick bend at the end.

This means that whatever data is fed into Mann’s statistical manipulations is almost bound to produce a Hockey Stick shape, whether one is actually in the data or not.
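The non-standard step at the heart of McIntyre and McKitrick’s complaint was, in their account, the way each proxy series was centred on its twentieth-century calibration mean rather than on its full-length mean before the principal components were computed. Below is a toy sketch of why that matters. It is not Mann’s code; the two series and all the numbers are invented, but the mechanism is the same: short-segment centring inflates the apparent “variance” of any series with a modern uptick, so a variance-maximising principal component leans on it disproportionately.

```python
# Toy illustration (invented data, not Mann's code): how centring a proxy on the
# mean of only its modern calibration segment inflates its weight in a
# variance-maximising principal component calculation.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1400, 1981)                        # 581 simulated "years"
calib = years >= 1902                                # the short modern segment

flat   = rng.normal(0.0, 1.0, years.size)            # proxy with no modern trend
uptick = rng.normal(0.0, 1.0, years.size)
uptick[calib] += np.linspace(0.0, 3.0, calib.sum())  # proxy with a 20th-century rise

def mean_square(x, short=True):
    """Mean square about the calibration-segment mean (short=True)
    or about the full-period mean (short=False)."""
    centre = x[calib].mean() if short else x.mean()
    return np.mean((x - centre) ** 2)

for label, short in [("short-segment centring", True), ("full-period centring", False)]:
    ratio = mean_square(uptick, short) / mean_square(flat, short)
    print(f"{label}: weight ratio uptick/flat ~ {ratio:.1f}")
# Under short-segment centring the uptick series carries a substantially larger
# mean square than under conventional full-period centring, so the leading
# principal component is pulled towards whichever proxies rise at the end.
```

(The real MBH procedure involved further normalisation steps, so the ratio in this toy example is far smaller than the 390-fold weighting described above.)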

McIntyre and McKitrick then took their critical analysis a step further. When you apply a statistical manipulation to a set of data, it is important to make sure that what you are doing is not distorting the data so much that you are really just creating something new, spurious and false in the numbers. One way to check is to take the statistical manipulation in question and apply it to many sets of random numbers (this is sometimes called a red noise test). To simplify: you use random numbers as input data and apply the statistical technique you are testing to them; if the technique is sound, you should get random numbers coming out the other end of the calculations. No false shape should be imparted to the random noise by the technique itself, so if what comes out is still shapeless noise, the technique is not adding anything artificial to the numbers. This is what McIntyre and McKitrick did with the techniques Mann had used in the Hockey Stick paper. And the results were staggering.
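A minimal sketch of this kind of red-noise benchmark is shown below. It is not McIntyre and McKitrick’s code: they generated noise with persistence modelled on the real proxies and emulated Mann’s full procedure, whereas the AR(1) persistence, panel size and “hockey stick index” here are purely illustrative choices. The point is only to show the shape of the test: feed trendless random series into the short-centred principal component step and ask how often the leading component comes out with a pronounced modern uptick.

```python
# Sketch of a red-noise test (illustrative only): generate trendless AR(1)
# "proxies", apply a short-centred principal component step, and measure how
# often the leading component ends with a pronounced modern uptick.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies, n_trials = 581, 50, 100
calib = np.arange(n_years) >= n_years - 79          # the modern calibration segment

def red_noise_panel(phi=0.9):
    """n_years x n_proxies panel of independent AR(1) series."""
    eps = rng.normal(size=(n_years, n_proxies))
    x = np.empty_like(eps)
    x[0] = eps[0]
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def short_centred_pc1(panel):
    """Leading principal component after centring each proxy on its calibration mean."""
    centred = panel - panel[calib].mean(axis=0)
    u, s, _ = np.linalg.svd(centred, full_matrices=False)
    return u[:, 0] * s[0]

count = 0
for _ in range(n_trials):
    pc1 = short_centred_pc1(red_noise_panel())
    # Hockey-stick index: offset of the modern-segment mean from the long-term
    # mean, in units of the series' standard deviation (sign is arbitrary).
    hsi = abs(pc1[calib].mean() - pc1.mean()) / pc1.std()
    count += hsi > 1.0
print(f"red-noise trials whose PC1 shows a hockey-stick bend: {count}/{n_trials}")
```

In McIntyre and McKitrick’s fuller emulation, which also reproduced Mann’s proxy rescaling and used noise fitted to the real proxy series, the leading component came out hockey-stick shaped in the overwhelming majority of trials, which is where the figure quoted in the next paragraph comes from.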

What they found was that 99% of the time you could process random data using Mann’s techniques and it would generate a Hockey Stick shape. This meant that Mann’s claim that the Hockey Stick graph represented an accurate reconstruction of the past climate was in tatters.

Here are some examples. Below are eight graphs. Seven were made by processing random numbers using Mann’s techniques. The eighth is the actual Hockey Stick chart from Mann’s paper. See if you can spot which is which.


McIntyre and McKitrick submitted a letter to Nature about the serious flaws they had uncovered in the methodology of the Hockey Stick paper. After a long (eight-month) reviewing process Nature notified them that it would not publish the letter, concluding that the material could not be explained within the 500-word limit the journal was prepared to give them; one of the referees added that he found the material quite technical and unlikely to be of interest to general readers!

Instead of publishing anything from McIntyre and McKitrick explaining the serious errors they had found, Nature allowed Mann to make a coy correction in an online supplement (but not in the printed text itself) in which he revealed the non-standard method he had used and added the unsupported claim that it did not affect the results.

Eventually, in 2003, McIntyre and McKitrick published an article entitled “Corrections to the Mann et al. (1998) Proxy Data Base and Northern Hemisphere Average Temperature Series” in the journal Energy and Environment, raising concerns about what they had found in Mann’s Hockey Stick paper.
By this point, following further work analysing Mann’s paper, McIntyre and McKitrick had shown that the data-mining procedure did not just pull out a random group of proxies; instead it pulled out a single eccentric group of bristlecone pine chronologies published by Graybill and Idso in 1993, known as the Sheep Mountain series. The original authors of the bristlecone study have always stressed that these trees are not proper climate proxies, that their study was not attempting a climate reconstruction, and that they were surprised Mann included it in the Hockey Stick data set. McIntyre and McKitrick discovered that simply removing this odd series from Mann’s proxy set and then applying Mann’s own eccentric statistical averaging caused the Hockey Stick shape to disappear. That is how fragile this revolutionary new model of the recent climate past was, and it revealed the Hockey Stick graph as a carefully worked artificial creation.

In the graph below the dotted line is the original Hockey Stick chart as published by Mann and as adopted and promoted by the IPCC. The solid line shows the past temperature reconstruction obtained if the data used by Mann are averaged using conventional statistical techniques rather than Mann’s unconventional ones. As can be seen, the familiar Medieval Warm Period re-emerges and the 1990s cease to be the hottest decade of the millennium; that title is now claimed by the early 1400s.


In doing this research McIntyre and McKitrick had legitimately accessed Mann’s public university web server in order to obtain much of the source material, and while doing so they found the data that prompted them to look at the bristlecone series in a folder entitled “Censored”. It seems that Mann had done this very experiment himself and discovered that the climate graph loses its hockey stick shape when the bristlecone series are removed. In doing so he discovered that the hockey stick was not an accurate chart of the recent global climate pattern; it was an artificial creation that hinged on a flawed group of US proxies that are not even valid climate indicators. But Mann did not disclose this fatal weakness in his results, and it only came to light because of McIntyre and McKitrick’s laborious efforts.

You can download McKitrick’s own account of the whole Hockey Stick saga here, and this web page compiled by McIntyre and McKitrick has a list of links and documents relating to the Hockey Stick controversy.

Following the publication of McIntyre and McKitrick’s critique of Mann’s work there was an immediate counter-attack by some climatologists who had worked closely with Mann in the past. Their defence really boiled down to saying that of course the Hockey Stick disappeared if you stopped using Mann’s techniques, and that if you carried on using Mann’s techniques you could get the Hockey Stick back!

Eventually a US congressional committee of inquiry was set up under the chairmanship of Edward Wegman, a highly respected professor of mathematics and statistics, and in 2006 his report was published. You can download it here.

The report examined the background to Mann’s Hockey Stick paper, the paper itself and the critique of it by McIntyre and McKitrick, and took evidence from all the key players. Interestingly, Wegman’s committee also commissioned some original research into how the small world of climatology actually worked. This study of the social networking of the paleoclimatology community showed how closed it was and how often a small group of scientists both co-wrote and peer-reviewed each other’s papers. For work that depended so heavily on statistical claims about trends, it was striking that no statisticians ever seemed to be involved in either the research itself or its peer review.

The key finding of the Wegman Report was that “Our committee believes that the assessments that the decade of the 1990s was the hottest decade in a millennium and that 1998 was the hottest year in a millennium cannot be supported by the MBH98/99 [the technical name of Mann's original Hockey Stick paper]”

The other conclusions of the Wegman Report are also very interesting. It listed the following:
 
Conclusion 1. The politicization of academic scholarly work leads to confusing public debates. Scholarly papers published in peer reviewed journals are considered the archival record of research. There is usually no requirement to archive supplemental material such as code and data. Consequently, the supplementary material for academic work is often poorly documented and archived and is not sufficiently robust to withstand intense public debate. In the present example there was too much reliance on peer review, which seemed not to be sufficiently independent.

Conclusion 2. Sharing of research materials, data, and results is haphazard and often grudgingly done. We were especially struck by Dr. Mann’s insistence that the code he developed was his intellectual property and that he could legally hold it personally without disclosing it to peers. When code and data are not shared and methodology is not fully disclosed, peers do not have the ability to replicate the work and thus independent verification is impossible.

Conclusion 3. As statisticians, we were struck by the isolation of communities such as the paleoclimate community that rely heavily on statistical methods, yet do not seem to be interacting with the mainstream statistical community. The public policy implications of this debate are financially staggering and yet apparently no independent statistical expertise was sought or used.

Conclusion 4. While the paleoclimate reconstruction has gathered much publicity because it reinforces a policy agenda, it does not provide insight and understanding of the physical mechanisms of climate change except to the extent that tree ring, ice cores and such give physical evidence such as the prevalence of green-house gases. What is needed is deeper understanding of the physical mechanisms of climate change.

Generally the response of the IPCC, the supporters of the CO2 hypothesis and the broader coalition of climate campaigners to all this was a cross between a sneer and a yawn, and the Hockey Stick continued to be used widely as a campaigning and propaganda tool.

It is still being used today.

In 2008 the BBC paid for a large truck to tour central London displaying a giant version of Mann’s Hockey Stick as part of the promotion of its very pro-CO2-warming miniseries “Climate Wars”.

The Selfish Gene: A defence against ID Attacks


Levan Gvelesiani shared the following post:


As for other topics: the"Evolution theory" is no more science. It has so much holes and problems, that you can not take it as "scienific theory" Sorry but it is the reality: science which checks the facts has conclusions which are against the darwinistic model.



David J Strumfels' response:

I've read these references, and all are spurious or irrelevant. Moran basically agrees with Dawkins, and only complains that "selfish alleles" would be a better name than "selfish genes". I have no quibble here, and doubt Dawkins would either. Moran's other complaint is Dawkins' alleged extreme reliance on natural selection, as if he ignored all other forces acting on genes, such as luck. That is a false characterization of Dawkins, who most certainly recognizes forces other than natural selection acting on genes: read The Blind Watchmaker for whole chapters on genetic drift and so forth. This is a false but common criticism from people who have not bothered to read Dawkins.

As far as I can make out, Edwards uses a mix of incomprehensible pseudophilosophy to claim that Dawkins' ideas are equivalent to Social Darwinism, a claim Dawkins has demolished many times himself. Then there are statements like "With regard to the ‘selfish’ gene concept where is the gene for selfishness?" which so completely misconstrue the selfish gene theory that they utterly miss Dawkins' point. Dawkins has pointed out innumerable times that of course genes don't have selfish intentions; it is just that, in bodies and species, selfishness is a useful way of modeling their apparent intentions.

As for Hogenboom, she simply goes all out in misunderstanding selfish genes, without, apparently, a trace of serious thought about the subject. She actually thinks selfish genes must lead to selfish individual behavior, and never to cooperation! I have to emphasize this because the fact that selfish genes can lead to highly cooperative behaviors (individuals in a species share a large number of genes) has been well understood by scientists for decades, and no one seriously questions it. Dawkins would just shake his head at this invincible ignorance, I suspect.

Your "As for other topics: the"Evolution theory" is no more science. It has so much holes and problems, that you can not take it as "scienific theory" Sorry but it is the reality: science which checks the facts has conclusions which are against the darwinistic model." is itself merely a claim made without reading Dawkins (at all, I'll wager), only his ill-informed critics, and believing this is scientific research. But it is not. It is only wishful thinking (there has to be an intelligent creator out there!), with statements and claims cherry picked to appear to support your preconceived believed, while science in full stubbornly refuses to cooperate. Please, please Mr. Gvelesiani, stop deluding yourself.

Thursday, December 26, 2013

New Study Brings Scientists Closer to the Origin of RNA

Full article at New Study Brings Scientists Closer to the Origin of RNA

New Study Brings Scientists Closer to the Origin of RNA

Dec 24, 2013 by John Toon

Atomic force microscopy image of structures formed by the self-assembly of TAP-ribose nucleoside with cyanuric acid. Credit: Nicholas Hud.


 (Phys.org) —One of the biggest questions in science is how life arose from the chemical soup that existed on early Earth. One theory is that RNA, a close relative of DNA, was the first genetic molecule to arise around 4 billion years ago, but in a primitive form that later evolved into the RNA and DNA molecules that we have in life today. New research shows one way this chain of events might have started.


Today, genetic information is stored in DNA. RNA is created from DNA to put that information into action. RNA can direct the creation of proteins and perform other essential functions that DNA can't do. RNA's versatility is one reason that scientists think this polymer came first, with DNA evolving later as a better way to store genetic information for the long haul. But like DNA, RNA also could be a product of evolution, scientists theorize.

Chemists at the Georgia Institute of Technology have shown how molecules that may have been present on early Earth can self-assemble into structures that could represent a starting point of RNA. The spontaneous formation of RNA is seen as a crucial step in the origin of life, but one that scientists have struggled with for decades.

"In our study, we demonstrate a reaction that we see as important for the formation of the earliest RNA-like molecules," said Nicholas Hud, professor of Chemistry and Biochemistry at Georgia Tech, where he's also the director of the Center for Chemical Evolution.

The study was published Dec. 14 online in the Journal of the American Chemical Society. The research was funded by the National Science Foundation and NASA.

RNA is perfect for the roles it plays in life today, Hud said, but chemically it's extraordinarily difficult to make. This suggests that RNA evolved from simpler chemical couplings. As life became more chemically complex and enzymes were born, evolutionary pressures would have driven pre-RNA into the more refined modern RNA.

RNA is made of three chemical components: the sugar ribose, the bases and phosphate. A ribose-base-phosphate unit links together with other ribose-base-phosphate units to form an RNA polymer.
Figuring out how the bond between the bases and ribose first formed has been a difficult problem to address in the origins of life field, Hud said.

In the study, Hud's team investigated bases that are chemically related to the bases of modern RNA, but that might be able to spontaneously bond with ribose and assemble with other bases through the same interactions that enable DNA and RNA to store information. They homed in on a molecule called triaminopyrimidine (TAP).

The researchers mixed TAP with ribose under conditions meant to mimic a drying pond on early Earth. TAP and ribose reacted together in high yield, with up to 80 percent of TAP being converted into nucleosides, which is the name for the ribose-base unit of RNA. Previous attempts to form a ribose-base bond with the current RNA bases in similar reactions had either failed or produced nucleosides in very low yields.

"This study is important in showing a feasible step for how we get the start of an RNA-like molecule, but also how the building blocks of the first RNA-like polymers could have found each other and self-assembled in what would have been a very complex mixture of chemicals," Hud said.

The researchers demonstrated this property of the TAP nucleosides by adding another molecule to their reaction mixture, called cyanuric acid, which is known to interact with TAP. Even in the unpurified reaction mixture, noncovalent polymers formed with thousands of paired nucleosides.

"It is amazing that these nucleosides and bases actually assemble on their own, as life today requires complex enzymes to bring together RNA building blocks and to spatially order them prior to polymerization,"said Brian Cafferty, a graduate student at Georgia Tech and co-author of the study

The study demonstrated one possible way that the building blocks for an ancestor of RNA could have come together on early Earth. TAP is an intriguing candidate for one of the first bases that eventually led to modern RNA molecules, but there are certainly others, Hud said.

Future work, in Hud's lab and by other laboratories in the Center for Chemical Evolution, will investigate the origins of RNA's phosphate backbone, as well as other pathways toward modern RNA.

"We're looking for a simple, robust chemistry that can explain the earliest origin of RNA or its ancestor," Hud said.


December 2013 guide to the five visible planets | Astronomy Essentials | EarthSky

From December 2013 guide to the five visible planets | Astronomy Essentials | EarthSky

December 2013 guide to the five visible planets

Venus pops out at dusk. Jupiter rises at early evening, and Mars comes up after midnight. Saturn before dawn. Info and charts here.

Jupiter rises as Venus sets on December 2013 evenings.
In December 2013, the planet Jupiter shines in front of Gemini, the radiant for the Geminid meteor shower.
Find Jupiter and the Winter Circle on these December nights!
Moon, Mars out between midnight and dawn on December 25 and 26.
 
Only one planet is easily visible at dusk and nightfall throughout December 2013: Venus. It is shining at its brightest now; you can’t miss it. Venus! It’s the beautiful evening star. Plus Jupiter can be seen in the evening sky. In early December, look for the giant planet to rise in the east about three hours after sunset, or at about the time that Venus sets. By late December, Jupiter will be up by dusk or nightfall, or roughly an hour before Venus sets in the west. If you have an unobstructed horizon, you should be able to see Venus and Jupiter shining pretty much opposite of each other at early evening, starting around the second week of December. They are the sky’s two brightest planets, and they’ll be like bright bookends, briefly, enclosing the evening sky.

Meanwhile, if you’re a night owl or early riser, watch for Mars, Saturn and Mercury in the morning sky. Mars shines in front of the constellation Virgo the Maiden, rising in the east about an hour after midnight in early December and coming up around midnight on New Year’s Eve.
Mars reaches its highest point for the night at or near dawn all through December. At mid-northern latitudes, Saturn rises about two hours before the sun in early December, and nearly four hours before sunrise by late December. You might catch Mercury before sunrise in early December, but this world quickly sinks into the glare of sunrise each day thereafter, to pass from the morning to evening sky by the month’s end.

Follow the links below to learn more about planets and special sky events in December 2013.

How did the universe get so complicated?

By Marcus Chown

Where does the complexity of the universe come from? Why are there galaxies and stars, atoms and iPhones, rainbows and roses? A vital clue comes from seeing the reflection of your face in a window.
Staring outwards, maybe you see cars driving past, trees swaying in the breeze, a dog being walked. But you also see a faint reflection of your face, because the glass is not perfectly transparent. About 95 per cent of light goes through and about 5 per cent is reflected back.
This simple observation became extremely difficult to understand at the start of the 20th century when physicists discovered that light is a stream of tiny machine-gun bullets called photons – all identical. If they are all identical, surely they should be affected identically by a window pane? Either they should all go through or they should all be reflected. There is only one way to explain 95 per cent going through and 5 per cent bouncing back: if photons have a 95 per cent chance of being transmitted and a 5 per cent chance of being turned back. But this means that if you could follow an individual photon as it headed towards a window pane, you could never know for sure whether it would be reflected or transmitted. Its behaviour is fundamentally unpredictable.
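A quick way to see what “fundamentally unpredictable yet statistically stable” means is a little simulation. This is only an illustration of the statistics, not of the underlying quantum mechanics: each simulated photon is independently transmitted with probability 0.95.

```python
# Illustrative only: independent 95%/5% random outcomes reproduce the stable
# transmitted/reflected split even though no single outcome is predictable.
import numpy as np

rng = np.random.default_rng(42)
photons = rng.random(1_000_000) < 0.95      # True = transmitted, False = reflected
print(f"transmitted: {photons.mean():.3%}, reflected: {1 - photons.mean():.3%}")
print("first ten photons:", ["through" if p else "back" for p in photons[:10]])
# Each photon's fate is a coin toss you cannot call in advance, yet the overall
# split settles at about 95 to 5 -- exactly the window-pane observation.
```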

And what is true of photons is true of all denizens of the sub-microscopic world: atoms, electrons, neutrinos, everything. The universe is fundamentally unpredictable, fundamentally random. This so shocked Einstein that he famously declared: “God does not play dice with the universe.” (Less well known is Niels Bohr’s retort: “Stop telling God where to throw his dice.”) But not only was Einstein wrong, he was spectacularly wrong.

Here’s why: the universe is expanding, its constituent galaxies flying apart like pieces of cosmic shrapnel in the aftermath of the Big Bang. If the expansion is imagined running backwards, like a movie in reverse, the universe gets smaller and smaller. But the universe is “quantum” – which means it is not only unpredictable but grainy. Everything comes in “quanta” – indivisible grains that cannot be cut any smaller: matter, energy, even space. So, if you could see space on the smallest scale with some kind of supermicroscope it would look like a chessboard, with squares that could not be made any smaller.

Now, if we imagine the space shrinking as we run the expansion of the universe backwards, the chessboard gets smaller but the chess squares cannot shrink. So there are fewer and fewer of them. In fact, at the beginning of the universe, at a time known as the “inflationary epoch”, there were only about a thousand chess squares. That’s only a thousand places to either put energy or not put energy.
If you are into computers, you will understand that the universe was describable by only 1,000 bits of information. I have 16GB of flash memory on my key ring. That is 16 billion bytes – roughly 128 billion bits – so, on it, I could store the information for about 128 million universes!

Fast-forward to today. In order to describe the universe, it would be necessary to record the location and type of every atom, as well as the energy state of every electron in every atom. Instead of 1,000 bits, 1 followed by 89 zeroes bits would be needed to describe the universe. So the big question is: if the universe started out so simple, with pretty much no information, where did all the information, all the complexity, come from?

This is where the window pane comes in. Information is the same as randomness. If I have a number that is non-random – say, 1 repeated a billion times – I can tell you what it is in a few words: “1 repeated a billion times”. It therefore contains hardly any information. But say I have a random number a billion digits long. To tell you it, I must tell you each and every one of the billion digits. It therefore contains a lot of information.

So here is the answer to the conundrum of where the universe’s information ultimately comes from. Every random quantum event since the Big Bang has injected information into the universe. Every time an atom spat out a photon – or did not – it injected information; every time an atomic nucleus decayed, or did not decay, it injected information.

Einstein was wrong when he said: “God does not play dice with the universe.” Not only does his metaphorical God play dice with the universe; if he did not, there would be no universe, certainly not one of the complexity needed for humans to have arisen and for you to be reading these words. We live in a random reality. We live in a universe ultimately generated by the quantum roll of the dice.

Marcus Chown’s What a Wonderful World: One Man’s Attempt to Explain the Big Stuff (Faber & Faber) is out now

Does Warming Lead to More Hurricanes?



From:  https://en.wikipedia.org/wiki/List_of_Atlantic_hurricane_records

Increasing temperature is likely to lead to increasing precipitation[6][7] but the effects on storms are less clear. Extratropical storms partly depend on the temperature gradient, which is predicted to weaken in the northern hemisphere as the polar region warms more than the rest of the hemisphere.

David Strumfels -- However, from https://en.wikipedia.org/wiki/Physical_impacts_of_climate_change:

Storm strength leading to extreme weather is increasing, such as the power dissipation index of hurricane intensity.[17] Kerry Emanuel writes that hurricane power dissipation is highly correlated with temperature, reflecting global warming.[18] However, a further study by Emanuel using current model output concluded that the increase in power dissipation in recent decades cannot be completely attributed to global warming.[19] Hurricane modeling has produced similar results, finding that hurricanes, simulated under warmer, high-CO2 conditions, are more intense, however, hurricane frequency will be reduced.[20] Worldwide, the proportion of hurricanes reaching categories 4 or 5 – with wind speeds above 56 metres per second – has risen from 20% in the 1970s to 35% in the 1990s.[21] Precipitation hitting the US from hurricanes has increased by 7% over the 20th century.[22][23][24] The extent to which this is due to global warming as opposed to the Atlantic Multidecadal Oscillation is unclear. Some studies have found that the increase in sea surface temperature may be offset by an increase in wind shear, leading to little or no change in hurricane activity.[25] Hoyos et al. (2006) have linked the increasing trend in number of category 4 and 5 hurricanes for the period 1970–2004 directly to the trend in sea surface temperatures.
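For reference, the power dissipation index mentioned above is, in Emanuel's formulation, essentially the integral of the cube of a storm's maximum sustained wind speed over its lifetime, summed across all storms in a season. Here is a minimal sketch of that calculation; the 6-hourly wind values are invented for illustration.

```python
# Sketch of a power-dissipation-index calculation (invented wind data):
# PDI ~ sum over storms of the integral of (max sustained wind speed)^3 dt.
import numpy as np

def pdi(vmax_series, dt_hours=6.0):
    """Integrate the cube of 6-hourly maximum sustained wind speed (m/s)."""
    v = np.asarray(vmax_series, dtype=float)
    return np.sum(v ** 3) * dt_hours * 3600.0      # units: m^3 s^-2

storm_weak   = [18, 25, 33, 40, 38, 30, 22]        # peaks near 40 m/s
storm_strong = [20, 30, 45, 60, 70, 65, 50, 35]    # peaks near 70 m/s
print(f"season PDI: {pdi(storm_weak) + pdi(storm_strong):.2e}")
print(f"strong/weak storm PDI ratio: {pdi(storm_strong) / pdi(storm_weak):.1f}")
# Because wind speed enters cubed, a modest rise in peak intensity raises the
# index sharply, which is why the PDI can climb even if storm counts stay flat.
```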


From http://www.principia-scientific.org/evaluation-of-climate-change-reconsidered-ii-physical-science.html:

As to extreme weather events that some claim are occurring with greater intensity and frequency because of the increase in atmospheric CO2, the data do not support that claim. There has been no recent increase in the intensity or frequency of hurricanes or typhoons either globally or in any specific ocean area. Nor has there been any significant increase in stormy weather or precipitation frequency or magnitude.

From https://www.google.com/search?q=hurricane+frequency&safe=off&hl=en&qscrl=1&rlz=1T4GGLS_enUS532US532&tbm=isch&tbo=u&source=univ&sa=X&ei=RXG8Uoy7OMuvsQSV5IDgAw&sqi=2&ved=0CCwQsAQ&biw=1219&bih=768#facrc=_&imgdii=_&imgrc=Pn-UM9t-lJ3krM%3A%3BBtsIDrdHWR7hjM%3Bhttp%253A%252F%252Fimages.sciencedaily.com%252F2009%252F09%252F090922112207-large.jpg%3Bhttp%253A%252F%252Fwww.sciencedaily.com%252Freleases%252F2009%252F09%252F090922112207.htm%3B600%3B489

 

Sorry if some of these images are hard to read.  Yet they do seem to challenge the IPCC and media's assumption that higher CO2/temperature must lead to more and more intense hurricanes.

Scientists highlight the resurrection of extinct animals as both a strong possibility and a major potential conservation issue

          
Thylacine, or Tasmanian tiger. Credit: E.J. Keller Baker

(Phys.org) —Scientists from across the world have "scanned the horizon" in order to identify potentially significant medium and long-term threats to conservation efforts.

Resurrection of several extinct species, the increasingly accelerated loss of wild rhinoceroses and a disastrous financial response to unburnable carbon are just some of the future global conservation issues flagged up in this year's Horizon Scan, recently published in Trends in Ecology and Evolution.

Professor William Sutherland and Dr Mark Spalding are amongst the 18 scientists who took part in this year's Horizon Scan, seeking to identify potential future conservation issues in order to reduce the "probability of sudden confrontation with major social or environmental changes".

One such plausible issue is the resurrection or re-construction of extinct species, such as the woolly mammoth, or the thylacine (a carnivorous marsupial). However, though there may be many benefits to the restoration of these animals, such a high-profile project could lead to attention and resources being diverted from attempts to thwart current threats to non-extinct species' survival.

Professor Sutherland said: 'There has been discussion of this idea for some time but it is now looking more practical and the idea is being taken seriously. A key issue is whether this is really a conservation priority.'

Though the last woolly mammoths died around 4000 years ago, methods such as back-breeding, cloning and genetic engineering may lead to their resurrection. Not only could these animals, and others such as the thylacine and the passenger pigeon, be re-constructed and returned to their native environments, they could potentially be used to "provide tools for outreach and education".

However, though this would be a conservational triumph, it could also hamper efforts to protect animals that are currently facing extinction, as both attention and resources would be diverted from preserving existing species and their habitats. Furthermore, there has not been any investigation into the "viability, ethics and safety of releasing resurrected species", nor the effect their presence may have on indigenous flora and fauna.

Another potential conservational issue identified by the Horizon Scan further highlights the problems facing species today. The loss of wild rhinoceroses and elephants is set to reaccelerate within the next few years, partially stimulated by a growing desire for ivory and horn.

In 2013, it is estimated that over 600 rhinoceroses were poached for their horn in South Africa alone, out of a total global population of less than 26,000. Though an increased human population and proximity to growing infrastructure is partially responsible, organised crime syndicates and intensive hunting carry the weight of the blame. In the Asian countries that use it, rhinoceros horn is more expensive than gold. Demand for the precious horn is ever increasing, resulting in elevated levels of poaching. If attention and resources are diverted from the protection of these majestic animals, we may have yet more candidates for resurrection in the future.

Altogether, this group of scientists identified the top 15 potential conservation issues (out of an initial group of 81 issues). In addition to the above topics, extensive land loss in southeast Asia from subsidence of peatlands, carbon solar cells as an alternative source of renewable energy, and an emerging fungal disease amongst snakes, have also been voted as plausible threats that need to be stopped before they can be realised.
More information: William J. Sutherland, Rosalind Aveling, Thomas M. Brooks, Mick Clout, Lynn V. Dicks, Liz Fellman, Erica Fleishman, David W. Gibbons, Brandon Keim, Fiona Lickorish, Kathryn A. Monk, Diana Mortimer, Lloyd S. Peck, Jules Pretty, Johan Rockström, Jon Paul Rodríguez, Rebecca K. Smith, Mark D. Spalding, Femke H. Tonneijck, Andrew R. Watkinson, "A horizon scan of global conservation issues for 2014," Trends in Ecology & Evolution, Volume 29, Issue 1, January 2014, Pages 15-22, ISSN 0169-5347, dx.doi.org/10.1016/j.tree.2013.11.004.
Journal reference: Trends in Ecology and Evolution

Researchers team up on potential fuel cell advance from Phys.Org

Dec 19, 2013 by Lori Ann White @ http://phys.org/news/2013-12-team-potential-fuel-cell-advance.html
          
SLAC researchers Hernan Sanchez Casalongue (left) and Hirohito Ogasawara tune the custom fuel cell built for SSRL Beam Line 13-2. Credit: Brad Plummer/SLAC

Scientists at SLAC National Accelerator Laboratory put together clues from experiments and theory to discover subtle variations in the way fuel cells generate electricity – an advance that could lead to ways to make the cells more efficient.
As reported today in Nature Communications, researchers focused powerful X-rays from SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) on one half of a tiny but functional fuel cell and watched it combine oxygen and hydrogen to make water. They saw something they didn't expect.
"We were surprised to find two possible routes for this reaction to take place," said Hirohito Ogasawara, a staff scientist at SSRL and with the SLAC/Stanford SUNCAT Center for Interface Science and Catalysis. What's more, one route uses less of the fuel cell's energy to complete – leaving more energy to power a car, for example.

However, the news wasn't a surprise to SUNCAT theorists, who had already proposed the existence of such variations in fuel cell chemistry. These variations are important because fuel cells turn chemical energy to electricity, and even a subtle difference can add up to a considerable amount of electricity over time.

On one side of a fuel cell, hydrogen gas is split into protons and electrons, which travel to the other side of the cell along different paths, providing electricity along the way. There they combine with oxygen gas to form water, a process that requires a catalyst to propel the reaction along. The most commonly used catalyst is platinum, a metal more costly than gold; research has focused on ways to decrease the amount of platinum needed by making the catalyst as efficient as possible. This has been a difficult task without tools that show each reaction as it takes place, step by step.

Ogasawara and colleagues used a technique called ambient pressure photoelectron spectroscopy (APXPS) at SSRL to watch the reactions taking place on the surface of the platinum catalyst in minute detail, and under realistic conditions.

"At first, what was new was the technique, and that we could see what was happening under working conditions," said Hernan Sanchez Casalongue, a graduate student in chemistry who designed and built the miniature fuel cell used to help test the efficacy of APXPS in this research. "But as we analyzed our results, we saw there were two different kinds of hydroxide on the surface of the platinum."

Hydroxide is an "intermediate species" that briefly forms on the way to the creation of water. It consists of one hydrogen nucleus bonded to one oxygen atom – O-H instead of H2O. In the fuel cell the researchers found that one type of hydroxide is "hydrated," or loosely bonded with a water molecule, and the other is not, and the one that's not hydrated requires less energy to take that final step to becoming H2O.

Ogasawara and Sanchez Casalongue took their discovery to SUNCAT theorists, who had already theorized that a change in the voltage applied to the fuel cell could affect the formation of hydroxide.

"This led us to the insight that tuning the hydration of hydroxide may lead to more efficient catalysis, but at the time there was no experimental evidence to back that up," said Venkat Viswanathan, then a graduate student at SUNCAT and now a faculty member at Carnegie Mellon.

Ogasawara's experiment, possible only with APXPS, provided Viswanathan and the other SUNCAT theorists with their experimental evidence. It also gives scientists another tool for improving fuel cells: Figure out how to make more of the non-hydrated hydroxides, and the fuel cell efficiency will improve.

Anders Nilsson, deputy director of SUNCAT and a co-author on the paper, said, "This represents a real breakthrough in electrocatalysis. These intermediate chemical species have long been speculated on but have never before been directly observed. This discovery could lead to more efficient catalysts."

Ogasawara said the researchers can't give any firm numbers on how much this can boost energy production from fuel cells – "This was a proof-of-concept experiment" – but it's an encouraging development, and they are looking at reactions involving other catalysts for similar phenomena.
They're also going to use APXPS to study the other side of the reaction – splitting water to make hydrogen and oxygen.

More information: "Direct observation of the oxygenated species during oxygen reduction on a platinum fuel cell cathode." Hernan Sanchez Casalongue, Sarp Kaya, Venkatasubramanian Viswanathan, Daniel J. Miller, Daniel Friebel, Heine A. Hansen, Jens K. Nørskov, Anders Nilsson, Hirohito Ogasawara. Nature Communications 4, Article number: 2817 DOI: 10.1038/ncomms3817

If Your Holy Book Tells You How to Treat Your Slaves ...


2004 Indian Ocean earthquake and tsunami

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/2004_tsunami   

Tsunami strikes Ao Nang, Thailand.
Date: 00:58:53, 26 December 2004 (UTC)[1]
Magnitude: 9.1–9.3 Mw[1]
Depth: 30 km (19 mi)[1]
Epicentre: 3°18′58″N 95°51′14″E[1]
Type: Undersea (subduction)
Countries or regions affected: Indonesia (mainly in Aceh), Sri Lanka, India (mostly in Tamil Nadu), Thailand, Maldives, Somalia
Tsunami: Yes
Casualties: 230,210–280,000 deaths[2][3][4]
 
The 2004 Indian Ocean earthquake was an undersea megathrust earthquake that occurred at 00:58:53 UTC on Sunday, 26 December 2004, with an epicentre off the west coast of Sumatra, Indonesia. The quake itself is known by the scientific community as the Sumatra–Andaman earthquake.[5][6] The resulting tsunami was given various names, including the 2004 Indian Ocean tsunami, South Asian tsunami, Indonesian tsunami, the Christmas tsunami and the Boxing Day tsunami.[7]

The earthquake was caused when the Indian Plate was subducted by the Burma Plate and triggered a series of devastating tsunamis along the coasts of most landmasses bordering the Indian Ocean, killing over 230,000 people in fourteen countries, and inundating coastal communities with waves up to 30 meters (98 ft) high.[8] It was one of the deadliest natural disasters in recorded history. Indonesia was the hardest-hit country, followed by Sri Lanka, India, and Thailand.

With a magnitude of Mw 9.1–9.3, it is the third largest earthquake ever recorded on a seismograph. The earthquake had the longest duration of faulting ever observed, between 8.3 and 10 minutes. It caused the entire planet to vibrate as much as 1 centimetre (0.4 inches)[9] and triggered other earthquakes as far away as Alaska.[10] Its epicentre was between Simeulue and mainland Indonesia.[11]
The plight of the affected people and countries prompted a worldwide humanitarian response. In all, the worldwide community donated more than $14 billion (2004 US$) in humanitarian aid.[12]


Earthquake characteristics

The earthquake was initially documented as moment magnitude 8.8. In February 2005 scientists revised the estimate of the magnitude to 9.0.[13] Although the Pacific Tsunami Warning Center has accepted these new numbers, the United States Geological Survey has so far not changed its estimate of 9.1. The most recent studies in 2006 have obtained a magnitude of Mw 9.1–9.3. Dr. Hiroo Kanamori of the California Institute of Technology believes that Mw 9.2 is a good representative value for the size of this great earthquake.[14]
The hypocentre of the main earthquake was approximately 160 km (100 mi) off the western coast of northern Sumatra, in the Indian Ocean just north of Simeulue island, at a depth of 30 km (19 mi) below mean sea level (initially reported as 10 km (6.2 mi)). The northern section of the Sunda megathrust ruptured over a length of about 1,300 km (810 mi).[11] The earthquake (followed by the tsunami) was felt simultaneously in Bangladesh, India, Malaysia, Myanmar, Thailand, Singapore and the Maldives.[15] Splay faults, or secondary "pop up" faults, caused long, narrow parts of the sea floor to pop up in seconds. This quickly elevated the height and increased the speed of waves, causing the complete destruction of the nearby Indonesian town of Lhoknga.[16]
 
Indonesia lies between the Pacific Ring of Fire along the north-eastern islands adjacent to New Guinea, and the Alpide belt that runs along the south and west from Sumatra, Java, Bali, Flores to Timor.

Great earthquakes such as the Sumatra-Andaman event, which are invariably associated with megathrust events in subduction zones, have seismic moments that can account for a significant fraction of the global earthquake moment across century-scale time periods. Of all the seismic moment released by earthquakes in the 100 years from 1906 through 2005, roughly one-eighth was due to the Sumatra-Andaman event. This quake, together with the Good Friday Earthquake (Alaska, 1964) and the Great Chilean Earthquake (1960), accounts for almost half of the total moment. For perspective, the 1906 San Francisco earthquake was much smaller but still catastrophic. Mw denotes the magnitude of an earthquake on the moment magnitude scale.
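
To get a feel for how steeply seismic moment scales with magnitude, here is a minimal Python sketch (not part of the original article) that converts Mw to seismic moment using the standard Hanks-Kanamori relation and compares the events just mentioned. The magnitudes plugged in are representative round values assumed for illustration, so the ratios are indicative only.

    def seismic_moment(mw):
        """Seismic moment M0 in newton-metres from moment magnitude Mw,
        using the Hanks-Kanamori relation Mw = (2/3) * (log10(M0) - 9.1)."""
        return 10 ** (1.5 * mw + 9.1)

    # Representative (assumed) magnitudes for the events mentioned above.
    events = {
        "1960 Great Chilean": 9.5,
        "1964 Good Friday (Alaska)": 9.2,
        "2004 Sumatra-Andaman": 9.2,
        "1906 San Francisco": 7.9,
    }

    reference = seismic_moment(events["2004 Sumatra-Andaman"])
    for name, mw in events.items():
        m0 = seismic_moment(mw)
        print(f"{name:26s} Mw {mw}  M0 = {m0:.2e} N*m  "
              f"({m0 / reference:.2f}x the Sumatra-Andaman moment)")

Running it shows that a Mw 9.5 event releases roughly 2.8 times the moment of a Mw 9.2 event, while a Mw 7.9 quake like San Francisco 1906 releases only about one percent as much, which is why a handful of great megathrust events can dominate a whole century's moment budget.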

Since 1900 the only earthquakes recorded with a greater magnitude were the 1960 Great Chilean Earthquake (magnitude 9.5) and the 1964 Good Friday Earthquake in Prince William Sound (9.2). The only other recorded earthquakes of magnitude 9.0 or greater were off Kamchatka, Russia, on 4 November 1952 (magnitude 9.0)[17] and Tōhoku, Japan (magnitude 9.0) in March 2011. Each of these megathrust earthquakes also spawned tsunamis in the Pacific Ocean. However, the death tolls from those tsunamis were significantly lower, primarily because of the lower population density along the coasts nearest the affected areas, the much greater distances to more populated coasts, and the superior infrastructure and warning systems in MEDCs (More Economically Developed Countries) such as Japan.

Other very large megathrust earthquakes occurred in 1868 (Peru, Nazca Plate and South American Plate); 1827 (Colombia, Nazca Plate and South American Plate); 1812 (Venezuela, Caribbean Plate and South American Plate) and 1700 (western North America, Juan de Fuca Plate and North American Plate). All of them are believed to be greater than magnitude 9, but no accurate measurements were available at the time.

Is the Cambrian Explosion View Outdated? (With thanks to John Hunter)

Source:  https://en.wikipedia.org/wiki/Cambrian_explosion


The fossil record as Darwin knew it seemed to suggest that the major metazoan groups appeared in a few million years of the early to mid-Cambrian, and even in the 1980s this still appeared to be the case.[15][16]

However, evidence of Precambrian metazoa is gradually accumulating. If the Ediacaran Kimberella was a mollusc-like protostome (one of the two main groups of coelomates),[20][61] the protostome and deuterostome lineages must have split significantly before 550 million years ago (deuterostomes are the other main group of coelomates).[94] Even if it is not a protostome, it is widely accepted as a bilaterian.[65][94] Since fossils of rather modern-looking Cnidarians (jellyfish-like organisms) have been found in the Doushantuo lagerstätte, the Cnidarian and bilaterian lineages must have diverged well over 580 million years ago.[94]

Trace fossils[59] and predatory borings in Cloudina shells provide further evidence of Ediacaran animals.[95] Some fossils from the Doushantuo formation have been interpreted as embryos and one (Vernanimalcula) as a bilaterian coelomate, although these interpretations are not universally accepted.[48][49][96] Earlier still, predatory pressure has acted on stromatolites and acritarchs since around 1,250 million years ago.[44]

The presence of Precambrian animals somewhat dampens the "bang" of the explosion: not only was the appearance of animals gradual, but their evolutionary radiation ("diversification") may also not have been as rapid as once thought. Indeed, statistical analysis shows that the Cambrian explosion was no faster than any of the other radiations in animals' history.[note 5] However, it does seem that some innovations linked to the explosion – such as resistant armour – only evolved once in the animal lineage; this makes a lengthy Precambrian animal lineage harder to defend.[98] Further, the conventional view that all the phyla arose in the Cambrian is flawed; while the phyla may have diversified in this time period, representatives of the crown-groups of many phyla do not appear until much later in the Phanerozoic.[53] Further, the mineralized phyla that form the basis of the fossil record may not be representative of other phyla, since most mineralized phyla originated in a benthic setting. The fossil record is consistent with a Cambrian Explosion that was limited to the benthos, with pelagic phyla evolving much later.[53]

Ecological complexity among marine animals increased in the Cambrian, as well as later in the Ordovician.[5] However, recent research has overthrown the once-popular idea that disparity was exceptionally high throughout the Cambrian, before subsequently decreasing.[99] In fact, disparity remains relatively low throughout the Cambrian, with modern levels of disparity only attained after the early Ordovician radiation.[5]

The diversity of many Cambrian assemblages is similar to today's,[100][91] and at a high (class/phylum) level, diversity is thought by some to have risen relatively smoothly through the Cambrian, stabilizing somewhat in the Ordovician.[101] This interpretation, however, glosses over the astonishing and fundamental pattern of basal polytomy and phylogenetic telescoping at or near the Cambrian boundary, as seen in most major animal lineages.[102] Thus Harry Blackmore Whittington's questions regarding the abrupt nature of the Cambrian explosion remain, and have yet to be satisfactorily answered.[103]

Wednesday, December 25, 2013

The first principle is that you must not fool yourself and you are the easiest person to fool.

 


Epigenetics enigma resolved: First structure of enzyme that removes methylation

Read more at: http://phys.org/news/2013-12-epigenetics-enigma-enzyme-methylation.html#jCp
This is the structure of the Tet enzyme with DNA. Note the purple ball at the active site, close to which one DNA base is flipped out of the double helix. Also note the degree to which the double helix is bent. Credit: Xiaodong Cheng, Emory University

The finding is important for the field of epigenetics because Tet enzymes chemically modify DNA, changing signposts that tell the cell's machinery "this gene is shut off" into other signs that say "ready for a change."

Tet enzymes' roles have come to light only in the last five years; they are needed for stem cells to maintain their multipotent state, and are involved in early embryonic and brain development and in cancer.

The results, which could help scientists understand how Tet enzymes are regulated and look for drugs that manipulate them, are scheduled for publication in Nature.

Researchers led by Xiaodong Cheng, PhD, determined the structure of a Tet family member from Naegleria gruberi by X-ray crystallography. The structure shows how the enzyme interacts with its target DNA, bending the double helix and flipping out the base that is to be modified.
"This base flipping mechanism is also used by other enzymes that modify and repair DNA, but we can see from the structure that the Tet family enzymes interact with the DNA in a distinct way," Cheng says.

Cheng is professor of biochemistry at Emory University School of Medicine and a Georgia Research Alliance Eminent Scholar. The first author of the paper is research associate Hideharu Hashimoto, PhD. A team led by Yu Zheng, PhD, a senior research scientist at New England Biolabs, contributed to the paper by analyzing the enzymatic activity of Tet using liquid chromatography–mass spectrometry.

Using oxygen, Tet enzymes change 5-methylcytosine into 5-hydroxymethylcytosine and other oxidized forms of methylcytosine. 5-methylcytosine (5-mC) and 5-hydroxymethylcytosine (5-hmC) are both epigenetic modifications of DNA, which change how DNA is regulated without altering the letters of the genetic code itself.

5-mC is generally found on genes that are turned off, or on repetitive regions of the genome. 5-mC helps shut off genes that aren't supposed to be turned on (depending on the cell type), and changes in 5-mC's distribution underpin a healthy cell's transformation into a cancer cell.

In contrast to 5-mC, 5-hmC appears to be enriched on active genes, especially in brain cells. Having a Tet enzyme form 5-hmC seems to be a way for cells to erase or at least modify the "off" signal provided by 5-mC, although the functions of 5-hmC are an active topic of investigation, Cheng says.
Alterations of the Tet enzymes have been found in forms of leukemia, so having information on the enzymes' structure could help scientists design drugs that interfere with them.
N. gruberi is a single-celled organism found in soil or fresh water that can take the form of an amoeba or a flagellate; its close relative N. fowleri can cause deadly brain infections. Cheng says his team chose to study the enzyme from Naegleria because it was smaller and simpler, and thus easier to crystallize, than mammalian forms of the enzyme, yet it still resembles the mammalian forms in protein sequence.

Mammalian Tet enzymes appear to have an additional regulatory domain that the Naegleria forms do not; understanding how that domain works will be a new puzzle opened up by having the Naegleria structure, Cheng says.
Journal reference: Nature
Provided by Emory University


 

How Rare Am I? Genographic Project Results Demonstrate Our Extended Family Tree

Most participants of National Geographic’s Genographic Project can recite their haplogroup as readily as their mother’s maiden name. Yet outside consumer genetics, the word haplogroup is still unknown. Your haplogroup, or genetic branch of the human family tree, tells you about your deep ancestry—often thousands of years ago—and shows you the possible paths of migration taken by these ancient ancestors.  Your haplogroup also places you within a community of relatives, some distant, with whom you unmistakably share an ancestor way back when.
DNA Molecule

Haplogroup H1, Genographic’s most common lineage.
Let's focus here on mitochondrial DNA haplogroup H1, as it is the Genographic Project's most common maternal lineage result. You inherited your mitochondrial DNA purely from your mother, who inherited it from her mother, and her mother, and so on. Yet, unlike a mother's maiden name, which often is not passed on, her maternal haplogroup is handed down through the generations. Today, all members of haplogroup H1 are direct descendants of the first H1 woman, who lived thousands of years ago. Most H1 members may know their haplogroup as H1a or H1b2 or H1c1a, etc., yet as a single genetic branch, H1 accounts for 15% of Genographic participants. What's more, in the past few years, anthropologists have discovered and named an astonishing 200 new branches within haplogroup H1, and that number continues to grow.
Haplogroup H3, sister branch to H1

The origin of haplogroup H1 also continues to be debated. Most researchers suggest it was born in the Middle East between 10,000 and 15,000 years ago, and spread from there to Europe and North Africa. However, ancient DNA studies show that its ancestral haplogroup H first appears in Central Europe just 8,000 years ago. Its vast diversity and high concentration in Spain and Portugal suggest H1 may have existed there during the last Ice Age and spread north after the glaciers melted. Yet others postulate that its young age and high frequency indicate it spread as agriculture took shape in Europe.
Any of these scenarios is possible. As technology improves, as more DNA is extracted and sequenced from ancient bones, and as more people contribute their DNA to the Genographic Project, we will keep learning about H1 and all the other haplogroups. It is because participants contribute their DNA, their stories, and their hypotheses to science that we can carry forward this exciting work uncovering our deep genetic connections.

Happy Haplogroups!

What does it mean to be conscious?

   
A patient in a vegetative state was not just aware, but paying attention
Image courtesy of University of Cambridge

By Patricia Salber

A study published today (10/31/2013) in the online open-access journal NeuroImage: Clinical further blurs the boundaries of what it means to be conscious. Although the title, Dissociable endogenous and exogenous attention in disorders of consciousness, and the research methodology are almost indecipherable to those of us not inside the beltway of chronic Disorders of Consciousness (DoC) research, the University of Cambridge translates for us on its website.

Basically, the researchers, led by Dr. Srivas Chennu at the University of Cambridge, were trying to see if patients diagnosed as either in a vegetative state (VS) or minimally conscious state (MCS) could pay attention to (count) certain words, called the attended words, when they were embedded in a string of other randomly presented words, called the distracting words. Normal brain wave responses were established by performing the word testing on 8 healthy volunteers. The same testing was then applied to 21 brain-damaged individuals, 9 with a clinical diagnosis of vegetative state and 12 with a diagnosis of minimally conscious state. Most of the patients did not respond to the presentation of words as the normal volunteers did. But one did.

The patient, described as Patient P1, suffered a traumatic brain injury 4 months prior to testing. He was diagnosed as "behaviorally vegetative," based on a Coma Recovery Scale-Revised (CRS-R) score of 7 (8 or greater = MCS). In addition to being able to consciously attend to the key words, this patient could also follow simple commands to imagine playing tennis.

Dr. Chennu was quoted as saying, “we are progressively building up a fuller picture of the sensory, perceptual and cognitive abilities in patients” with vegetative and minimally conscious states.  Yes, this is true.  But what does it mean if someone previously diagnosed as vegetative can now be shown to perform this sort of task?  Dr. Chennu hopes that this information will spur the development of “future technology to help patients in a vegetative state communicate with the outside world.”

I think this is fascinating research, and it offers new insights into how the brain functions, but it also raises a number of important questions. For example, if I can attend to words, does it change my prognosis? Patient P1 was found to have minimal cortical atrophy. Perhaps he is just slow to transition from a vegetative state to a MCS. If attending to words is associated with a better prognosis, should that make me a candidate for intensive and expensive rehabilitation? If so, who should pay for this? If I have an advance directive that says I don't want to continue to live in a persistent vegetative state, does this level of awareness mean I am not really vegetative? As more and more resources are poured into care for people with severe brain damage, does it come at a societal cost? What trade-offs are we making, and what services are we forgoing, as we spend money developing tools to improve communication with patients in vegetative states?

Of course, no one has the answers to these questions, and I suspect that as researchers like those at Cambridge continue to learn more about the functioning of the severely injured brain, it will become ever more difficult to say clearly what it really means to be "aware."

Cooperative

From Wikipedia, the free encyclopedia ...