Saturday, January 4, 2014

Biofuels Vital Graphics - Powering Green Economy

Biofuels Vital Graphics visualizes the opportunities, the need for safeguards, and the options that help ensure the sustainability of biofuels and make them a cornerstone of a green economy. Stories from around the world are highlighted to exemplify possible approaches, lessons learned, risks and opportunities. Biofuels Vital Graphics is meant as a communications tool.

It builds on an earlier report by the International Panel for Sustainable Resource Management of the United Nations Environment Programme, Towards Sustainable Production and Use of Resources: Assessing Biofuels, as well as research produced since.

Read online: 
Web | Mobile | PDF (4mb) | E-book (flash) | iTunes app | Maps & Graphics collection

Liquid, gaseous or solid biofuels hold great promise to deliver an increasing share of the energy required to power a new global green economy. Many in government and the energy industry believe this modern bioenergy can play a significant role in reducing pollution and greenhouse gases, and promoting development through new business opportunities and jobs. Modern bioenergy can be a mechanism for economic development enabling local communities to secure the energy they need, with farmers earning additional income and achieving greater price stability for their production.

Third generation photovoltaic cell

From Wikipedia, the free encyclopedia
Third generation photovoltaic cells are solar cells that are potentially able to overcome the Shockley–Queisser limit of 31-41% power efficiency for single bandgap solar cells. This includes a range of alternatives to the so-called "first generation solar cells" (which are solar cells made of semiconducting p-n junctions) and "second generation solar cells" (based on reducing the cost of first generation cells by employing thin film technologies). Common third-generation systems include multi-layer ("tandem") cells made of amorphous silicon or gallium arsenide, while more theoretical developments include frequency conversion, hot-carrier effects and other multiple-carrier ejection.[1][2][3][4]


Solar cells can be thought of as visible light counterparts to radio receivers. A receiver consists of three basic parts: an antenna that converts the radio waves (light) into wave-like motions of electrons in the antenna material; an electronic valve that traps the electrons as they pop off the end of the antenna; and a tuner that amplifies electrons of a selected frequency. It is possible to build a solar cell identical to a radio, a system known as an optical rectenna, but to date these have not been practical.

Instead, the vast majority of the solar electric market is made up of silicon-based devices. In silicon cells, the silicon acts as both the antenna (or electron donor, technically) as well as the electronic valve. Silicon is almost ideal as a solar cell material; it is widely available, relatively inexpensive, and has a bandgap that is ideal for solar collection. On the downside it is energetically expensive to produce silicon in bulk, and great efforts have been made to reduce or eliminate the silicon in a cell. Moreover it is mechanically fragile, which typically requires a sheet of strong glass to be used as mechanical support and protection from the elements. The glass alone is a significant portion of the cost of a typical solar module.

According to the Shockley–Queisser analysis, the majority of a cell's theoretical efficiency loss is due to the mismatch between the bandgap energy and the energy of solar photons. Any photon with more energy than the bandgap can cause photoexcitation, but any energy above and beyond the bandgap energy is lost. Consider the solar spectrum: only a small portion of the light reaching the ground is blue, but each blue photon carries roughly two and a half times silicon's bandgap energy. Silicon's bandgap is 1.1 eV, corresponding to a photon in the near-infrared, so the extra energy contained in blue light is lost as heat in a silicon cell. If the bandgap is tuned higher, say to blue, that energy is now captured, but only at the cost of rejecting all the lower-energy photons.
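The arithmetic behind that loss can be sketched in a few lines. The wavelengths below are illustrative choices, not figures from the article:

```python
def photon_energy_ev(wavelength_nm):
    """Photon energy in eV from wavelength in nm (E = hc/lambda, hc ~ 1239.84 eV*nm)."""
    return 1239.84 / wavelength_nm

SILICON_BANDGAP_EV = 1.1  # silicon's bandgap, as quoted in the text

for name, nm in [("blue", 450), ("red", 700)]:
    e = photon_energy_ev(nm)
    excess = e - SILICON_BANDGAP_EV  # energy above the gap, lost as heat
    print(f"{name} ({nm} nm): {e:.2f} eV photon, {excess:.2f} eV wasted")
```

A 450 nm blue photon comes out near 2.76 eV, so well over half its energy is thrown away by a 1.1 eV cell.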

It is possible to greatly improve on a single-junction cell by stacking extremely thin cells with different bandgaps on top of each other - the "tandem cell" or "multi-junction" approach. Traditional silicon preparation methods do not lend themselves to this approach. There has been some progress using thin-films of amorphous silicon, notably Uni-Solar's products, but other issues have prevented these from matching the performance of traditional cells. Most tandem-cell structures are based on higher performance semiconductors, notably gallium arsenide (GaAs). Three-layer GaAs cells hold the production record of 41.6% for experimental examples.[5]

Numerical analysis shows that the "perfect" single-layer solar cell should have a bandgap of 1.13 eV, almost exactly that of silicon. Such a cell can have a maximum theoretical power conversion efficiency of 33.7% - the solar power below the bandgap (in the infrared) is lost, and the extra energy of the higher-energy colors is also lost. For a two-layer cell, one layer should be tuned to 1.64 eV and the other to 0.94 eV, with a theoretical performance of 44%. A three-layer cell should be tuned to 1.83, 1.16 and 0.71 eV, with an efficiency of 48%. A theoretical "infinity-layer" cell would have a theoretical efficiency of 64%.

Thursday, January 2, 2014

Home electricity use in US falling to 2001 levels

Dec 30, 2013, by Jonathan Fahey


This combination of Associated Press file photos shows, left, a Cingular "Fast Forward" cradle and Motorola mobile phone in New York on Tuesday Nov. 4, 2003, and an Apple ultracompact USB Power Adapter, on Friday, Sept. 19, 2008, in New York.
The average amount of electricity consumed in U.S. homes has fallen to levels last seen more than a decade ago, back when the smartest device in people's pockets was a Palm pilot and anyone talking about a tablet was probably an archaeologist or a preacher.
Because of more energy-efficient housing, appliances and gadgets, usage is on track to decline in 2013 for the third year in a row, to its lowest point since 2001, even though our lives are more electrified.
Here's a look at what has changed since the last time consumption was so low.
In the early 2000s, as energy prices rose, more states adopted or toughened building codes to force builders to better seal homes so heat or air-conditioned air doesn't seep out so fast. That means newer homes waste less energy.
Also, insulated windows and other building technologies have dropped in price, making retrofits of existing homes more affordable. In the wake of the financial crisis, billions of dollars in Recovery Act funding was directed toward home-efficiency programs.
Big appliances such as refrigerators and air conditioners have gotten more efficient thanks to federal energy standards that get stricter every few years as technology evolves.
A typical room air conditioner, one of the biggest power hogs in the home, uses 20 percent less electricity per hour of full operation than it did in 2001, according to the Association of Home Appliance Manufacturers.
This combination of Associated Press file photos shows, top, Switch75 LED light bulbs in clear and frosted, on Tuesday, Nov. 8, 2011 in New York and, bottom, a 100-watt incandescent light bulb at Royal Lighting in Los Angeles on Jan. 21, 2011.

Central air conditioners, refrigerators, dishwashers, water heaters, washing machines and dryers also have gotten more efficient.
Other devices are using less juice, too. Some 40-inch (1-meter) LED televisions bought today use 80 percent less power than the cathode ray tube televisions of the past. Some use just $8 worth of electricity over a year when used five hours a day—less than a 60-watt incandescent bulb would use.
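The article's dollar figure is easy to sanity-check. The wattage and electricity rate below are assumptions (roughly 40 W for an efficient LED TV and 11 cents per kWh), not numbers from the article:

```python
def annual_cost_dollars(watts, hours_per_day, dollars_per_kwh=0.11):
    """Annual electricity cost: convert watts to kWh over a year, then apply the rate."""
    kwh_per_year = watts * hours_per_day * 365 / 1000.0
    return kwh_per_year * dollars_per_kwh

# A ~40 W LED TV run 5 hours a day lands near the article's "$8 a year" figure.
print(f"${annual_cost_dollars(40, 5):.2f}")
```

The same function shows why the comparison to a 60-watt incandescent holds: at 5 hours a day the bulb would cost about 50 percent more.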
Incandescent bulbs, meanwhile, are being replaced with compact fluorescents and LEDs that use 70 to 80 percent less power. According to the Energy Department, widespread use of LED bulbs could save output equivalent to that of 44 large power plants by 2027.

The move to mobile also is helping. Desktop computers with big CRT monitors are being replaced with laptops, tablet computers and smart phones, and these mobile devices are specifically designed to sip power to prolong battery life.

It costs $1.36 to power an iPad for a year, compared with $28.21 for a desktop computer, according to the Electric Power Research Institute.  
We are using more devices, and that is offsetting what would otherwise be a more dramatic reduction in power consumption.

This combination of Associated Press file photos shows, top, a house in Duluth, Minn., with triple-paned, south-facing windows that draw heat from the sun, and, bottom, an undated photo provided by Lowe's showing weatherstripping being applied.
 DVRs spin at all hours of the day, often under more than one television in a home. Game consoles are getting more sophisticated to process better graphics and connect with other players, and therefore use more power.

More homes have central air conditioners instead of window units. They are more efficient, but people use them more often.

Still, Jennifer Amman, the buildings program director at the American Council for an Energy-Efficient Economy, says she is encouraged.
In this combination of Associated Press file photos, a man, top, looks at the back of a Sony 4K XBR LED television in Las Vegas, on Monday, Jan. 7, 2013, and, bottom, a man looks at a CRT television in Redwood City, Calif., on Wednesday, Oct.

"It's great to see this movement, to see the shift in the national numbers," she says. "I expect we'll see greater improvement over time. There is so much more that can be done."

The Energy Department predicts average residential electricity use per customer will fall again in 2014, by 1 percent.

In a world first, Japan extracted natural gas from frozen undersea deposits this year.

By Lisa Raffensperger @

Global fuel supplies may soon be dramatically enlarged thanks to new techniques to tap into huge reserves of natural gas trapped under the seafloor. In March, Japan became the first country to successfully extract methane from frozen undersea deposits called gas hydrates. 
These lacy structures of ice, found around the globe buried under permafrost and the ocean floor, have pores filled with highly flammable gas. By some estimates, hydrates could store more than 10 quadrillion cubic feet of harvestable methane — enough to fulfill the present gas needs of the entire United States for the next 400 years.
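The "400 years" figure follows from simple division. The US annual consumption value below is an assumption chosen for the check (roughly 25 trillion cubic feet per year), not a number from the article:

```python
hydrate_reserve_cuft = 10e15   # 10 quadrillion cubic feet (from the text)
us_annual_use_cuft = 25e12     # assumed ~25 trillion cubic feet per year

years_of_supply = hydrate_reserve_cuft / us_annual_use_cuft
print(years_of_supply)  # -> 400.0
```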
The following is from Wikipedia, under methane clathrates.
Methane clathrate (CH4•5.75H2O[1]), also called methane hydrate, hydromethane, methane ice, fire ice, natural gas hydrate, or gas hydrate, is a solid clathrate compound (more specifically, a clathrate hydrate) in which a large amount of methane is trapped within a crystal structure of water, forming a solid similar to ice.[2] Originally thought to occur only in the outer regions of the Solar System where temperatures are low and water ice is common, significant deposits of methane clathrate have been found under sediments on the ocean floors of Earth.[3]
Methane clathrates are common constituents of the shallow marine geosphere, and they occur both in deep sedimentary structures, and form outcrops on the ocean floor. Methane hydrates are believed to form by migration of gas from depth along geological faults, followed by precipitation, or crystallization, on contact of the rising gas stream with cold sea water. Methane clathrates are also present in deep Antarctic ice cores, and record a history of atmospheric methane concentrations, dating to 800,000 years ago.[4] The ice-core methane clathrate record is a primary source of data for global warming research, along with oxygen and carbon dioxide.

The sedimentary methane hydrate reservoir probably contains 2–10 times the currently known reserves of conventional natural gas, as of 2013.[25] This represents a potentially important future source of hydrocarbon fuel. However, in the majority of sites deposits are thought to be too dispersed for economic extraction.[18] Other problems facing commercial exploitation are detection of viable reserves and development of the technology for extracting methane gas from the hydrate deposits.

A research and development project in Japan is aiming for commercial-scale extraction near Aichi Prefecture by 2016.[26][27] In August 2006, China announced plans to spend 800 million yuan (US$100 million) over the next 10 years to study natural gas hydrates.[28] A potentially economic reserve in the Gulf of Mexico may contain approximately 100 billion cubic metres (3.5×10^12 cu ft) of gas.[18] Bjørn Kvamme and Arne Graue at the Institute for Physics and Technology at the University of Bergen have developed a method for injecting CO2 into hydrates and reversing the process, thereby extracting CH4 by direct exchange.[29] The University of Bergen's method is being field tested by ConocoPhillips and state-owned Japan Oil, Gas and Metals National Corporation (JOGMEC), and partially funded by the U.S. Department of Energy. The project reached the injection phase and was analyzing the resulting data as of March 12, 2012.[30]

On March 12, 2013, JOGMEC researchers announced that they had successfully extracted natural gas from frozen methane hydrate.[31] In order to extract the gas, specialized equipment was used to drill into and depressurize the hydrate deposits, causing the methane to separate from the ice. The gas was then collected and piped to surface where it was ignited to prove its presence.[32] According to an industry spokesperson, "It [was] the world's first offshore experiment producing gas from methane hydrate".[31] Previously, gas had been extracted from onshore deposits, but never from offshore deposits which are much more common.[32] The hydrate field from which the gas was extracted is located 50 kilometres (31 mi) from central Japan in the Nankai Trough, 300 metres (980 ft) under the sea.[31][32] A spokesperson for JOGMEC remarked "Japan could finally have an energy source to call its own".[32] The experiment will continue for two weeks before it is determined how efficient the gas extraction process has been.[32] Marine geologist Mikio Satoh remarked "Now we know that extraction is possible. The next step is to see how far Japan can get costs down to make the technology economically viable."[32] Japan estimates that there are at least 1.1 trillion cubic meters of methane trapped in the Nankai Trough, enough to meet the country's needs for more than ten years.[32]
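As a rough check on the figures quoted above, the unit conversion and the years-of-supply arithmetic can be reproduced directly. Japan's annual consumption below is an assumed round number consistent with the "more than ten years" claim, not a figure from the text:

```python
M3_TO_CUFT = 35.3147  # cubic feet per cubic metre

gulf_m3 = 100e9                 # Gulf of Mexico reserve, m^3 (from the text)
print(gulf_m3 * M3_TO_CUFT)     # ~3.53e12 cu ft, matching the quoted 3.5e12

nankai_m3 = 1.1e12              # Nankai Trough methane, m^3 (from the text)
japan_annual_m3 = 100e9         # assumed annual consumption, m^3/year
print(nankai_m3 / japan_annual_m3)  # ~11 years of supply
```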

NASA's Cold Fusion Folly

Posted by Buzz Skyline

I am sad - horrified really - to learn that some NASA scientists have caught cold fusion madness. As is so often the case with companies and research groups that get involved in this fruitless enterprise, they tend to make their case by first pointing out how nice it would be to have a clean, cheap, safe, effectively limitless source of power. Who could say no to that?
NASA Langley scientists are hoping to build spacecraft powered with cold fusion. Image courtesy of NASA.
Here's a word of caution: anytime anyone, especially a scientist, starts by telling you about glorious, nigh-unbelievable futuristic applications of their idea, be very, very skeptical.

NASA, for example, is promoting a cold fusion scheme that they say will power your house and car, and even a space plane that is apparently under development, despite the fact that  cold fusion power supplies don't exist yet and almost certainly never will. And if that's not enough, NASA's brand of cold fusion can solve our climate change problems by converting carbon directly into nitrogen.

The one hitch in the plan, unfortunately, is that they're going to have to violate some very well established physics to make it happen. To say the least, I wouldn't count on it.

To be clear, cold fusion does indeed work - provided you use a heavier cousin of the electron, known as a muon, to make it happen. There is no question that muon-catalyzed fusion is a perfectly sound, well-understood process that would be an abundant source of energy, if only we could find or create a cheap source of muons. Unfortunately, it takes way more energy to create the muons that go into muon-catalyzed fusion than comes out of the reaction.

Cold fusion that doesn't involve muons, on the other hand, doesn't work. In fact, the very same physics principles that make muon-catalyzed fusion possible are the ones that guarantee that the muon-less version isn't possible.

To get around the problem presented by nature and her physical laws, NASA's scientists have joined other cold fusion advocates in rebranding their work under the deceptively scientific moniker LENR (Low Energy Nuclear Reactions), and backing it up with various sketchy theories.

The main theory currently in fashion among cold fusion people is the Widom-Larsen LENR theory, which claims that neutrons can result from interactions with "heavy electrons" and protons in a lump of material in a cold fusion experiment. These neutrons, so the argument goes, can then be absorbed in a material (copper is a popular choice) which becomes unstable and decays to form a lighter material (nickel, assuming you start with copper), giving off energy in the process.

At least one paper argues that Widom and Larsen made some serious errors in their calculations that thoroughly undermine their theory. But even if you assume the Widom-Larsen paper is correct, then there should be detectable neutrons produced in cold fusion experiments. (Coincidentally, it's primarily because no neutrons were detected in the original cold fusion experiments of Pons and Fleischmann that physicists were first clued into the fact no fusion was happening at all.)

Some proponents claim that the neutrons produced in the Widom-Larsen theory are trapped in the sample material and rapidly absorbed by atoms. But because the neutrons are formed at room temperature, they should have energies typical of thermal neutrons, which move on average at about 2000 meters a second. That means that a large fraction of them should escape the sample and be easily detectable. Those that don't escape but are instead absorbed by atoms would also lead to detectable radiation as the neutron-activated portions of the material decay. Either way, it would be pretty dangerous to be near an experiment like that, if it worked. The fact that cold fusion researchers are alive is fairly good evidence that their experiments aren't doing what they think they're doing.
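The "about 2000 meters a second" figure is standard kinetic theory and can be reproduced in a few lines, using the most probable speed of a Maxwell-Boltzmann distribution at room temperature:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
M_NEUTRON = 1.67492749e-27  # neutron mass, kg
T = 293.0                   # room temperature, K

# Most probable speed of a thermal (Maxwell-Boltzmann) distribution: sqrt(2kT/m)
v = math.sqrt(2 * K_B * T / M_NEUTRON)
print(f"{v:.0f} m/s")  # ~2200 m/s, consistent with the text's "about 2000"
```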

But if you're willing to believe Widom-Larsen, and you suspend your disbelief long enough to accept that the neutrons exclusively stay in the sample for some reason, and that the energy released as a result doesn't include any radiation, it should still be pretty easy to determine if the experiments work. All you'd have to do is look for nickel in a sample that initially consisted of pure copper. If published proof exists, I haven't found it yet (please send links to peer-reviewed publications, if you've seen something).

Instead, people like NASA's Dennis Bushnell are happy with decidedly unscientific evidence for cold fusion. Among other things, Bushnell notes that " . . . several labs have blown up studying LENR and windows have melted, indicating when the conditions are "right" prodigious amounts of energy can be produced and released."

Of course, chemical reactions can blow things up and melt glass too. There's no reason to conclude nuclear reactions were responsible. And it certainly isn't publishable proof of cold fusion. Considering that most of these experiments involve hydrogen gas and electricity, it's not at all surprising that labs go up in flames on occasion.

On a related note, a recent article in Forbes magazine reported that Lewis Larsen, of the above-mentioned Widom-Larsen theory, claims that measurements of the isotopes of mercury in compact fluorescent bulbs indicate that LENR reactions are taking place in light fixtures everywhere. If only it were true, it would offer serious support for the Widom-Larsen theory.

It's too bad the paper Larsen cites says nothing of the sort. According to an article in Chemical and Engineering News, the scientists who performed the study of gas in fluorescent bulbs were motivated by the knowledge that some mercury isotopes are absorbed in the glass of the bulbs more readily than others. The isotope ratio inside isn't changing because of nuclear reactions, but instead by soaking into the glass at different rates. Sorry Lewis Larsen, nice try.

Chimpanzee–human last common ancestor

From Wikipedia, the free encyclopedia

The chimpanzee–human last common ancestor (CHLCA, CLCA, or C/H LCA) is the last species that humans, bonobos and chimpanzees share as a common ancestor.

In human genetic studies, the CHLCA is useful as an anchor point for calculating single-nucleotide polymorphism (SNP) rates in human populations where chimpanzees are used as an outgroup. The CHLCA is frequently cited as an anchor for molecular time to most recent common ancestor (TMRCA) determination because the two species of the genus Pan, the bonobos and the chimpanzee, are the species most genetically similar to Homo sapiens.

Time estimates

The age of the CHLCA is an estimate. The fossil finds of Ardipithecus kadabba, Sahelanthropus tchadensis, and Orrorin tugenensis are closest in age and expected morphology to the CHLCA and suggest the LCA (last common ancestor) is older than 7 million years. The earliest studies of apes suggested the CHLCA may have been as old as 25 million years; however, protein studies in the 1970s suggested the CHLCA was less than 8 million years old. Genetic methods based on orangutan/human and gibbon/human LCA times were then used to estimate a chimpanzee/human LCA of 6 million years, and LCA times between 5 and 7 million years ago are currently used in the literature.[note 1]
One no longer has the option of considering a fossil older than about eight million years as a hominid no matter what it looks like.
—V. Sarich, Background for man[1]
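Molecular-clock estimates like the six-million-year figure above rest on simple arithmetic: divergence accumulates along both lineages, so the split time is the observed sequence divergence divided by twice the substitution rate. A minimal sketch with hypothetical input values, not taken from any study cited here:

```python
def divergence_time_years(fraction_diverged, subs_per_site_per_year):
    """T = d / (2r): divergence d accumulates along both descendant lineages."""
    return fraction_diverged / (2 * subs_per_site_per_year)

# Hypothetical inputs: 1.2% sequence divergence at 1e-9 substitutions/site/year
print(divergence_time_years(0.012, 1e-9))  # ~6 million years
```

Note the key caveat discussed in the notes below: if the clock rate itself slowed along the ape lineage, a constant-rate calculation of this kind will misdate the split.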

Because chimps and humans share a matrilineal ancestor, establishing the geological age of that last ancestor allows the estimation of the mutation rate. However, fossils of the exact last common ancestor would be an extremely rare find. The CHLCA is frequently cited as an anchor for mt-TMRCA determination because chimpanzees are the species most genetically similar to humans. However, no known fossils represent the CHLCA, and no proto-chimpanzee or proto-gorilla fossils have been clearly identified. However, Richard Dawkins, in his book The Ancestor's Tale, proposes that robust australopithecines such as Paranthropus are the ancestors of gorillas, whereas some of the gracile australopithecines are the ancestors of chimpanzees (see Homininae).
In effect, there is now no a priori reason to presume that human-chimpanzee split times are especially recent, and the fossil evidence is now fully compatible with older chimpanzee-human divergence dates [7 to 10 Ma...
—White et al. (2009), [2]

Some researchers tried to estimate the age of the CHLCA (TCHLCA) using biopolymer structures which differ slightly between closely related animals. Among these researchers, Allan C. Wilson and Vincent Sarich were pioneers in the development of the molecular clock for humans. Working on protein sequences they eventually determined that apes were closer to humans than some paleontologists perceived based on the fossil record.[note 2] Later Vincent Sarich concluded that the TCHLCA was no greater than 8 million years in age, with a favored range between 4 and 6 million years before present.

This paradigmatic age persisted in molecular anthropology until the late 1990s, when others began questioning the certainty of the assumption. Currently, the estimation of the TCHLCA is less certain, and there is genetic as well as paleontological support for an older TCHLCA. A TCHLCA of 13 million years is one proposed age.[2][3]

A source of confusion in determining the exact age of the Pan–Homo split is evidence of a more complex speciation process rather than a clean split between the two lineages. Different chromosomes appear to have split at different times, possibly over as much as a 4-million-year period, indicating a long, drawn-out speciation process with large-scale hybridization events between the two emerging lineages.[4] The X chromosome in particular shows very little difference between humans and chimpanzees, though this effect may also partly be the result of rapid evolution of the X chromosome in the last common ancestors.[5] Complex speciation and incomplete lineage sorting of genetic sequences seem also to have occurred in the split between our lineage and that of the gorilla, indicating that "messy" speciation is the rule rather than the exception in large-bodied primates.[6][7] Such a scenario would explain why the divergence age between Homo and Pan has varied with the chosen method and why a single point has so far been hard to pin down.

Richard Wrangham argued that the CHLCA was so similar to chimpanzee (Pan troglodytes), that it should be classified as a member of the Pan genus, and called Pan prior.[8]


  1. Studies have pointed to the slowing molecular clock as monkeys evolved into apes and apes evolved into humans. In particular, macaque monkey mtDNA has evolved 30% more rapidly than African ape mtDNA.
  2. "If man and old world monkeys last shared a common ancestor 30 million years ago, then man and African apes shared a common ancestor 5 million years ago..." Sarich & Wilson (1971)


  1. Background for man: readings in physical anthropology, 1971
  2. White TD, Asfaw B, Beyene Y, et al. (October 2009). "Ardipithecus ramidus and the paleobiology of early hominids". Science 326 (5949): 75–86. doi:10.1126/science.1175802. PMID 19810190.
  3. Arnason U, Gullberg A, Janke A (December 1998). "Molecular timing of primate divergences as estimated by two nonprimate calibration points". J. Mol. Evol. 47 (6): 718–27. doi:10.1007/PL00006431. PMID 9847414.
  4. Patterson N, Richter DJ, Gnerre S, Lander ES, Reich D (June 2006). "Genetic evidence for complex speciation of humans and chimpanzees". Nature 441 (7097): 1103–8. doi:10.1038/nature04789. PMID 16710306.
  5. Wakeley J (March 2008). "Complex speciation of humans and chimpanzees". Nature 452 (7184): E3–4; discussion E4. doi:10.1038/nature06805. PMID 18337768.
  6. Scally A, Dutheil JY, Hillier LW, et al. (March 2012). "Insights into hominid evolution from the gorilla genome sequence". Nature 483 (7388): 169–75. doi:10.1038/nature10842. PMC 3303130. PMID 22398555.
  7. Van Arsdale, A.P. "Go, go, Gorilla genome". The Pleistocene Scene – A.P. Van Arsdale Blog. Retrieved 16 November 2012.
  8. De Waal, Frans B. M (2002-10-15). Tree of Origin: What Primate Behavior Can Tell Us About Human Social Evolution. pp. 124–126. ISBN 9780674010048.

Viewpoint: Human evolution, from tree to braid

One and the same: What many thought of as three separate species may in fact be just one

If one human evolution paper published in 2013 sticks in my mind above all others, it has to be the wonderful report in the 18 October issue of the journal Science.

The article in question described the beautiful fifth skull from Dmanisi in Georgia. Most commentators and colleagues were full of praise, but controversy soon reared its ugly head.

What was, in my view, a logical conclusion reached by the authors was too much for some researchers to take.

The conclusion of the Dmanisi study was that the variation in skull shape and morphology observed in this small sample, derived from a single population of Homo erectus, matched the entire variation observed among African fossils ascribed to three species - H. erectus, H. habilis and H. rudolfensis.

The five highly variable Dmanisi fossils belonged to a single population of H. erectus, so how could we argue any longer that similar variation among spatially and temporally widely distributed fossils in Africa reflected differences between species? They all had to be the same species.

I have long been advocating that the morphological differences observed among fossils typically ascribed to Homo sapiens (the so-called modern humans) and the Neanderthals fall within the variation observable in a single species.

It was not surprising to find that Neanderthals and modern humans interbred, a clear expectation of the biological species concept.

But most people were surprised with that particular discovery, as indeed they were with the fifth skull and many other recent discoveries, for example the "Hobbit" from the Indonesian island of Flores.

It seems that almost every other discovery in palaeoanthropology is reported as a surprise. I wonder when the penny will drop: when we have five pieces of a 5,000-piece jigsaw puzzle, every new bit that we add is likely to change the picture.

Did we really think that having just a minuscule residue of our long and diverse past was enough for us to tell humanity's story?

If the fossils of 1.8 or so million years ago and those of the more recent Neanderthal-modern human era were all part of a single, morphologically diverse, species with a wide geographical range, what is there to suggest that it would have been any different in the intervening periods?

Probably not so different if we take the latest finds from the Altai Mountains in Siberia into account. Denisova Cave has produced yet another surprise, revealing that, not only was there gene flow between Neanderthals, Denisovans and modern humans, but that a fourth player was also involved in the gene-exchange game.

The identity of the fourth player remains unknown but it was an ancient lineage that had been separate for probably over a million years. H. erectus seems a likely candidate. Whatever the name we choose to give this mystery lineage, what these results show is that gene flow was possible not just among contemporaries but also between ancient and more modern lineages.

Pit of Bones: A femur recovered from the famed "Pit of Bones" site in Spain yielded 400,000-year-old DNA

Just to show how little we really know of the human story, another genetic surprise has confounded palaeoanthropologists. Scientists succeeded in extracting the most ancient mitochondrial DNA so far, from the Sima de los Huesos site in Atapuerca, Spain.

The morphology of these well-known Middle Pleistocene (approximately 400,000 years old) fossils has long been thought to represent a lineage leading to the Neanderthals.

When the results came in, they were actually closer to the 40,000-year-old Denisovans from Siberia. We can speculate on the result, but others have offered enough alternatives for me not to have to add to them.

The conclusion that I derive takes me back to Dmanisi: We have built a picture of our evolution based on the morphology of fossils and it was wrong.

We just cannot place so much taxonomic weight on a handful of skulls when we know how plastic - or easily changeable - skull shape is in humans. And our paradigms must also change.

The Panel of Hands at El Castillo Cave, Spain: Old assumptions are being challenged as new thinking emerges

Some time ago we replaced a linear view of our evolution with one represented by a branching tree. It is now time to replace that in turn with an interwoven plexus of genetic lineages that branch out and fuse once again with the passage of time.

This means, of course, that we must abandon, once and for all, views of modern human superiority over archaic (ancient) humans. The terms "archaic" and "modern" lose all meaning as do concepts of modern human replacement of all other lineages.

It also releases us from the deep-rooted shackles that have sought to link human evolution with stone tool-making technological stages - the Stone Ages - even when we have known that these have overlapped with each other for half-a-million years in some instances.

The world of our biological and cultural evolution was far too fluid for us to constrain it into a few stages linked by transitions.

The challenge must now be to try and learn as much as we can of the detail. We have to flesh out the genetic information and this is where archaeology comes into the picture. We may never know how the Denisovans earned a living - after all, we have mere fragments of their anatomy at our disposal - let alone other populations that we may not even be aware of.

What we can do is try to understand the spectrum of potential responses of human populations to different environmental conditions and how culture has intervened in these relationships. The Neanderthals will be central to our understanding of the possibilities because they have been so well studied.

A recent paper, for example, supports the view that Neanderthals at La Chapelle-aux-Saints in France intentionally buried their dead which contrasts with reports of cannibalistic behaviour not far away at El Sidron in northern Spain.

Here we have two very different behavioural patterns within Neanderthals. Similarly, modern humans in south-western Europe painted on cave walls for a limited period but many contemporaries did not. Some Neanderthals, it seems, did something similar in a completely different way, by selecting raptor feathers of particular colours. Rather than focus on differences between modern humans and Neanderthals, what these examples show is the range of possibilities open to humans (Neanderthals included) in different circumstances.

The future of human origins research will need to focus along three axes:

  • further genetic research to clarify the relationship of lineages and the history of humans;
  • research using new technology on old archaeological sites, as at La Chapelle; and
  • research at sites that currently retain huge potential for new discoveries.

Sites in the latter category are few and far between. In Europe at least, many were excavated during the last century but there are some outstanding examples remaining. Gorham's and Vanguard Caves in Gibraltar, where I work, are among those because they span over 100,000 years of occupation and are veritable repositories of data.

There is another dimension to this story. It seems that the global community is coming round to recognising the value of key sites that document human evolution.

In 2012, the caves on Mount Carmel were inscribed on the Unesco World Heritage List and the UK Government will be putting Gorham's and associated caves on the Rock of Gibraltar forward for similar status in January 2015. It is recognition of the value of these caves as archives of the way of life and the environments of people long gone but who are very much a part of our story.

Prof Clive Finlayson is director of the Gibraltar Museum and author of the book The Improbable Primate.

Earth's temperature could rise by more than 4°C by 2100, claim some scientists.

Research by the University of New South Wales found that the global climate is more affected by carbon dioxide than previously thought.
The scientists believe temperatures could rise by more than 8°C by 2200 if CO2 emissions are not reduced.

By Sarah Griffiths
Global temperatures could soar by at least 4°C by 2100 if carbon dioxide emissions aren't slashed, new research warns.
Climate scientists claim that temperatures could rise by at least 4°C by 2100 and potentially more than 8°C by 2200, which could have disastrous results for the planet.
The research, published in the journal Nature, found that the global climate is more affected by carbon dioxide than previously thought.
Scientists added that temperatures could rise by more than 8°C by 2200 if CO2 emissions are not reduced. The research found that the global climate is more affected by carbon dioxide than previously thought


Fewer clouds form as the planet warms so that less sunlight is reflected back into space, driving temperature on Earth higher.
When water evaporates from oceans, vapour can rise nine miles into the atmosphere to create rain clouds that reflect light, or can rise just a few miles and drift back down without forming clouds.
While both processes occur in the real world, current climate models place too much emphasis on the amount of clouds that form on a daily basis.
By looking at how clouds form on the planet, scientists are able to create more realistic climate models, which are used to predict future temperatures.
Scientists have long debated how clouds affect global warming.
It could also solve one of the mysteries of climate sensitivity - the role of cloud formation and whether it has a positive or negative effect on global warming.
Researchers now believe that existing climate models significantly overestimate the number of clouds protecting our atmosphere from overheating.
The study suggests that fewer clouds form as the planet warms, so that less sunlight is reflected back into space, driving temperatures up on Earth.
Professor Steven Sherwood, from the University of New South Wales, said: 'Our research has shown climate models indicating a low temperature response to a doubling of carbon dioxide from pre-industrial times are not reproducing the correct processes that lead to cloud formation.'
'When the processes are correct in the climate models, the level of climate sensitivity is far higher.'
Protective: Researchers now believe that existing climate models significantly overestimate the number of clouds protecting the atmosphere from overheating

'Previously, estimates of the sensitivity of global temperature to a doubling of carbon dioxide ranged from 1.5°C to 5°C.

'This new research takes away the lower end of climate sensitivity estimates, meaning that global average temperatures will increase by 3°C to 5°C with a doubling of carbon dioxide.'
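These doubling figures follow the standard logarithmic approximation for CO2 warming, in which equilibrium temperature change scales with the base-2 logarithm of the concentration ratio: ΔT ≈ S · log2(C/C0), where S is the climate sensitivity per doubling. A minimal sketch of that arithmetic (the function name and the 280 ppm pre-industrial baseline are illustrative conventions, not taken from the Nature paper itself):

```python
import math

def equilibrium_warming(sensitivity_per_doubling, co2_ppm, co2_baseline_ppm=280.0):
    """Warming in °C under the simple logarithmic CO2 approximation.

    sensitivity_per_doubling: warming (°C) produced by each doubling of CO2.
    co2_ppm: atmospheric CO2 concentration of interest.
    co2_baseline_ppm: pre-industrial reference concentration (~280 ppm).
    """
    return sensitivity_per_doubling * math.log2(co2_ppm / co2_baseline_ppm)

# One doubling (280 -> 560 ppm) yields exactly the sensitivity value:
print(equilibrium_warming(3.0, 560.0))  # 3.0 °C at the low end of the new range
print(equilibrium_warming(5.0, 560.0))  # 5.0 °C at the high end
```

Under this approximation, narrowing the sensitivity range from 1.5-5°C to 3-5°C per doubling is what raises the lower bound of the projected warming.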

Professor Sherwood told The Guardian that a rise of 4°C would likely be 'catastrophic' rather than just dangerous.

'For example, it would make life difficult, if not impossible, in much of the tropics, and would guarantee the eventual melting of the Greenland ice sheet and some of the Antarctic ice sheet,' he said.

The costs of extreme weather events have risen dramatically, climate scientists warned last week.

The national science academies of EU Member States believe Europe needs to plan for future probabilities of extreme weather, such as heat waves, floods and storms.
Highlighting a 60 per cent rise over the last 30 years in the cost of damage from extreme weather events across Europe, the European Academies' Science Advisory Council (EASAC) warned of the grave economic and social consequences if European policy makers do not use the latest estimates of future droughts, floods and storms in their planning while adapting to global warming and the resulting climate disruption.

The report urges EU nations to prepare for heat waves and think about how to reduce the number of deaths. Flood defence is also an area that requires improvement, as rising sea levels will leave coastal areas at serious risk from storm surges.

Researchers also believe climate research and adaptation plans should be given more priority.

The key to this narrower but higher estimate can be found by looking at the role of water vapour in cloud formation.

When water vapour is taken up by the atmosphere through evaporation, the updraughts can rise up to nine miles (15km) and form clouds that produce heavy rains.

They can, however, also rise just a few kilometres before returning to the surface without forming the rain clouds that reflect light away from the Earth's surface.
When they rise only a few kilometres, they reduce total cloud cover because they pull more vapour away from the higher clouds forming.

Researchers found that climate models predicting a lesser rise in the Earth's temperature do not include enough of this lower-level water vapour process.

Most models show nearly all updraughts rising to nine miles and forming clouds, reflecting more sunlight; as a result, the global temperature in these models is less sensitive in its response to atmospheric carbon dioxide.
The scientists warned that such a rise in temperatures on Earth would lead to droughts (pictured) and make life difficult for people living in the tropics. A hotter planet would also likely lead to the melting of the Greenland ice sheet and some of the Antarctic ice sheet
When the models are made more realistic, the water vapour is taken to a wider range of heights in the atmosphere, causing fewer clouds to form as the climate warms.
This increases the amount of sunlight and heat entering the atmosphere and as a result increases the sensitivity of our climate to carbon dioxide or any other perturbation.

The result is that when the models are correct, the doubling of carbon dioxide expected in the next 50 years will see a temperature increase of at least 4°C by 2100.
Professor Sherwood said: 'Climate sceptics like to criticise climate models for getting things wrong and we are the first to admit they are not perfect, but what we are finding is that the mistakes are being made by those models that predict less warming, not those that predict more.
'Rises in global average temperatures of this magnitude will have profound impacts on the world and the economies of many countries if we don’t urgently start to curb our emissions.'

Wednesday, January 1, 2014

Jaw-Dropping Views of Saturn Cap 2013 for NASA's Cassini Spacecraft (Photos)

by Stephanie Pappas, Staff Writer   |   December 30, 2013 10:08am ET