
Saturday, January 18, 2014

Again: Did the Human-Chimp ancestor walk upright?

Sahelanthropus

From Wikipedia, the free encyclopedia
Cast of a Sahelanthropus tchadensis skull (Toumaï)
 

 
Sahelanthropus tchadensis is an extinct hominine species that is dated to about 7 million years ago, possibly very close to the time of the chimpanzee/human divergence, and so it is unclear whether it can be regarded as a member of the Hominini tribe.[1] Few, if any, specimens are known, other than the partial skull nicknamed Toumaï ("hope of life").
 
Existing fossils include a relatively small cranium known as Toumaï ("hope of life" in the local Dazaga language of Chad in central Africa), five pieces of jaw, and some teeth, making up a head that has a mixture of derived and primitive features. The braincase, being only 320 cm³ to 380 cm³ in volume, is similar to that of extant chimpanzees and is notably less than the approximate human volume of 1350 cm³.[citation needed]
 
The teeth, brow ridges, and facial structure differ markedly from those found in Homo sapiens. Cranial features show a flatter face, a U-shaped dental arcade, small canines, an anterior foramen magnum, and heavy brow ridges. No postcranial remains have been recovered. The fossil suffered considerable distortion between fossilisation and discovery.[citation needed]
Because no postcranial remains (i.e., bones below the skull) have been discovered, it is not known definitively whether Sahelanthropus tchadensis was indeed bipedal, although claims for an anteriorly placed foramen magnum suggest that this may have been the case. Some paleontologists[who?] have disputed[why?] this interpretation of the basicranium. Its canine wear is similar to that of other Miocene apes.[2] Moreover, according to recent information, the femur of a hominid might have been discovered alongside the cranium but never published.[3]
 
The fossils were discovered in the Djurab Desert of Chad by a team of four led by Michel Brunet: three Chadians (Adoum Mahamat, Djimdoumalbaye Ahounta, and Gongdibé Fanoné) and the Frenchman Alain Beauvilain.[4][5] All known material of Sahelanthropus was found between July 2001 and March 2002 at three sites (TM 247, TM 266, which yielded most of the material, and TM 292). The discoverers claimed that S. tchadensis is the oldest known human ancestor after the split of the human line from that of chimpanzees.[6]
 
The bones were found far from most previous hominin fossil finds, which are from Eastern and Southern Africa. However, an Australopithecus bahrelghazali mandible was found in Chad by Beauvilain A., Brunet M. and Moutaye A.H.E. as early as 1995.[6] With the sexual dimorphism known to have existed in early hominids, the difference between Ardipithecus and Sahelanthropus may not be large enough to warrant a separate species for the latter.[7]
 
Sahelanthropus may represent a common ancestor of humans and chimpanzees; no consensus has been reached yet by the scientific community. The original placement of this species as a human ancestor but not a chimpanzee ancestor would complicate the picture of human phylogeny. In particular, if Toumaï is a direct human ancestor, then its facial features bring into doubt the status of Australopithecus because its thickened brow ridges were reported to be similar to those of some later fossil hominids (notably Homo erectus), whereas this morphology differs from that observed in all australopithecines, most fossil hominids and extant humans.
 
Another possibility is that Toumaï is related to both humans and chimpanzees, but is the ancestor of neither. Brigitte Senut and Martin Pickford, the discoverers of Orrorin tugenensis, suggested that the features of S. tchadensis are consistent with a female proto-gorilla. Even if this claim is upheld, then the find would lose none of its significance, for at present, precious few chimpanzee or gorilla ancestors have been found anywhere in Africa. Thus if S. tchadensis is an ancestral relative of the chimpanzees (or gorillas), then it represents the first known member of their lineage. Furthermore, S. tchadensis does indicate that the last common ancestor of humans and chimpanzees is unlikely to resemble chimpanzees very much, as had been previously supposed by some paleontologists.[8][9]
 
A further possibility, highlighted by research published in 2012, is that the human/chimpanzee split occurred earlier than previously thought, with a possible range of 7 to 13 million years ago (the more recent end of this range being favoured by most researchers), based on slower-than-expected changes between generations in human DNA. Indeed, some researchers (such as Tim D. White of the University of California) consider that suggestions that Sahelanthropus is too early to be a human ancestor have evaporated.[10]
 
Isotopic analysis of cosmogenic nuclides in the sediment yielded an age of about 7 million years.[11] In this case, however, the fossils were found exposed in loose sand; co-discoverer Beauvilain cautions that such sediment can be easily moved by the wind, unlike packed earth.[12]
In fact, Toumaï was probably reburied in the recent past. Taphonomic analysis reveals the likelihood of one, perhaps two, burials, which seemingly occurred after the introduction of Islam in the region. Two other hominid fossils (a left femur and a mandible) were in the same “grave”, along with various mammal remains. The sediment surrounding the fossils might thus not be the material in which the bones were originally deposited, making it necessary to corroborate the fossil's age by some other means.[13] The fauna found at the site – namely the anthracotheriid Libycosaurus petrochii and the suid Nyanzachoerus syrticus – suggests an age of more than 6 million years, since both species are thought to have gone extinct by then.
 

Orrorin

From Wikipedia, the free encyclopedia
 
Orrorin tugenensis is a postulated early species of Homininae, estimated at 6.1 to 5.7 million years (Ma) old and discovered in 2000. It is not confirmed how Orrorin is related to modern humans. Its discovery was an argument against the hypothesis that australopithecines are human ancestors, although that remains the most prevalent hypothesis of human evolution as of 2012.[1]

The name of genus Orrorin (plural Orroriek) means "original man" in Tugen,[2][3] and the name of the only classified species, O. tugenensis, derives from Tugen Hills in Kenya, where the first fossil was found in 2000.[3] As of 2007, 20 fossils of the species have been found.[4]

The 20 specimens found as of 2007 include: the posterior part of a mandible in two pieces; a symphysis and several isolated teeth; three fragments of femora; a partial humerus; a proximal phalanx; and a distal thumb phalanx. [4]

Orrorin had small teeth relative to its body size. Its dentition differs from that of Australopithecus in that its cheek teeth are smaller and less elongated mesiodistally, and from that of Ardipithecus in that its enamel is thicker. The dentition differs from both these species in the presence of a mesial groove on the upper canines. The canines are ape-like but reduced, like those found in Miocene apes and female chimpanzees. Orrorin had small post-canines and was microdont, like modern humans, whereas robust australopithecines were megadont.[4]

In the femur, the head is spherical and rotated anteriorly; the neck is elongated and oval in section; and the lesser trochanter protrudes medially. While this suggests that Orrorin was bipedal, the rest of the postcranium indicates that it climbed trees. While the proximal phalanx is curved, the distal pollical phalanx is of human proportions and has thus been associated with toolmaking; in this context, however, it should probably be associated with grasping abilities useful for tree-climbing.[4]

After the fossils were found in 2000, they were held at the Kipsaraman village community museum, but the museum was subsequently closed. Since then, according to the Community Museums of Kenya chairman Eustace Kitonga, the fossils are stored at a secret bank vault in Nairobi.[5]

If Orrorin proves to be a direct human ancestor, then australopithecines such as Australopithecus afarensis ("Lucy") may be considered a side branch of the hominid family tree: Orrorin is both earlier, by almost 3 million years, and more similar to modern humans than is A. afarensis. The main similarity is that the Orrorin femur is morphologically closer to that of H. sapiens than is Lucy's; there is, however, some debate over this point. [6]

Other fossils (leaves and many mammals) found in the Lukeino Formation show that Orrorin lived in dry evergreen forest environment, not the savanna assumed by many theories of human evolution.[6]

The team that found these fossils in 2000 was led by Brigitte Senut and Martin Pickford[2] from the Muséum national d'histoire naturelle. The discoverers conclude that Orrorin is a hominin on the basis of its bipedal locomotion and dental anatomy; based on this, they date the split between hominins and African great apes to at least 7 million years ago, in the Messinian. This date is markedly different from those derived using the molecular clock approach, but has found general acceptance among paleoanthropologists.

The 20 fossils have been found at four sites in the Lukeino Formation: of these, the fossils at Cheboit and Aragai are the oldest (6.1 Ma), while those in Kapsomin and Kapcheberek are found in the upper levels of the formation (5.7 Ma).

Is a mini ice age on the way? Scientists warn the Sun has 'gone to sleep' and say it could cause temperatures to plunge

2013 was due to be the year of the 'solar maximum'. Researchers say solar activity is at a fraction of what they expect. Conditions are 'very similar' to those in 1645, when a mini ice age hit.

The Sun's activity is at its lowest for 100 years, scientists have warned. They say the conditions are eerily similar to those before the Maunder Minimum, a period beginning in 1645 when a mini ice age hit, freezing London's River Thames. Researchers believe the solar lull could have far-reaching effects, and say there is a 20% chance it could lead to 'major changes' in temperatures.

Sunspot numbers are well below their values from 2011, and strong solar flares have been infrequent, as this image shows - despite Nasa forecasting major solar storms

THE SOLAR CYCLE
Conventional wisdom holds that solar activity swings back and forth like a simple pendulum.  At one end of the cycle, there is a quiet time with few sunspots and flares.  At the other end, solar max brings high sunspot numbers and frequent solar storms.  It’s a regular rhythm that repeats every 11 years.
Reality is more complicated.
Astronomers have been counting sunspots for centuries, and they have seen that the solar cycle is not perfectly regular.   'Whatever measure you use, solar peaks are coming down,' Richard Harrison of the Rutherford Appleton Laboratory in Oxfordshire told the BBC.
 
'I've been a solar physicist for 30 years, and I've never seen anything like this.'

He says the phenomenon could lead to colder winters similar to those during the Maunder Minimum.

'There were cold winters, almost a mini ice age. You had a period when the River Thames froze.' Lucie Green of UCL believes that things could be different this time due to human activity. 'We have 400 years of observations, and it is in a very similar phase to the one it was in during the run-up to the Maunder Minimum. The world we live in today is very different; human activity may counteract this - it is difficult to say what the consequences are.'

Mike Lockwood of the University of Reading says that the lower temperatures could affect the global jet stream, causing weather systems to collapse. 'We estimate that within 40 years there is a 10-20% probability we will be back in Maunder Minimum territory,' he said. Last year Nasa warned that 'something unexpected' is happening on the Sun. This year was supposed to be the year of 'solar maximum', the peak of the 11-year sunspot cycle. But as this image reveals, solar activity is relatively low.

THE MAUNDER MINIMUM

The Maunder Minimum (also known as the prolonged sunspot minimum) is the name used for the period starting in about 1645 and continuing to about 1715 when sunspots became exceedingly rare, as noted by solar observers of the time.
It caused London's River Thames to freeze over, and 'frost fairs' became popular.
The Frozen Thames, 1677 - an oil painting by Abraham Hondius shows the old London Bridge during the Maunder Minimum
This period of solar inactivity also corresponds to a climatic period called the "Little Ice Age" when rivers that are normally ice-free froze and snow fields remained year-round at lower altitudes.
There is evidence that the Sun has had similar periods of inactivity in the more distant past, Nasa says. The connection between solar activity and terrestrial climate is an area of on-going research.
'Sunspot numbers are well below their values from 2011, and strong solar flares have been infrequent,' the space agency says.

The image above shows the Earth-facing surface of the Sun on February 28, 2013, as observed by the Helioseismic and Magnetic Imager (HMI) on NASA's Solar Dynamics Observatory.
 
It observed just a few small sunspots on an otherwise clean face, which is usually riddled with many spots during peak solar activity.  Experts have been baffled by the apparent lack of activity - with many wondering if NASA simply got it wrong.

However, solar physicist Dean Pesnell of NASA’s Goddard Space Flight Center believes he has a different explanation. 'This is solar maximum,' he says. 'But it looks different from what we expected because it is double-peaked. The last two solar maxima, around 1989 and 2001, had not one but two peaks.'

Solar activity went up, dipped, then rose again, performing a mini-cycle that lasted about two years, he said.
Researchers have recently captured massive sunspots on the solar surface - and believe we should have seen more

The same thing could be happening now, as sunspot counts jumped in 2011 and dipped in 2012, he believes.  Pesnell expects them to rebound in 2013: 'I am comfortable in saying that another peak will happen in 2013 and possibly last into 2014.'

He spotted a similarity between Solar Cycle 24 and Solar Cycle 14, which had a double-peak during the first decade of the 20th century.
If the two cycles are twins, 'it would mean one peak in late 2013 and another in 2015'.

Scientists are saying that the Sun is in a phase of "solar lull" - meaning that it has fallen asleep - and it is baffling them.

History suggests that periods of unusual "solar lull" coincide with bitterly cold winters.
Rebecca Morelle reports for BBC Newsnight on the effect this inactivity could have on our current climate, and what the implications might be for global warming.
 
David Strumfels: 
 
 
The two graphs above show the relationship between solar activity and global temperatures from 1550-2000. Clearly, the two track each other closely. The second graph shows the same relationship from 1880-2005. Again, there is strong agreement until ~1980, when the two part ways. Solar activity and global temperatures are (as common sense dictates) closely aligned; the parting around 1980 I suspect is due to CO2 build-up overcoming (temporarily?) this alignment. If solar activity remains low for an extended period, it should slow warming (as from ~2005-2013) or even reverse it; a 20-25 year delay is expected because of the warming of the oceans, which release their added heat slowly.

Read more: http://www.dailymail.co.uk/sciencetech/article-2541599/Is-mini-ice-age-way-Scientists-warn-Sun-gone-sleep-say-cause-temperatures-plunge.html#ixzz2qmAXPJ5K

Friday, January 17, 2014

Outwitting the Perfect Pathogen | The Scientist Magazine®

Outwitting the Perfect Pathogen | The Scientist Magazine®
Tuberculosis is exquisitely adapted to the human body. Researchers need a new game plan for beating it.  By | January 1, 2014

WORLDWIDE PATHOGEN: About one-third of the human population is infected with Mycobacterium tuberculosis (cultures shown above), some 13 million of whom are actually sick with TB. CDC/GEORGE KUBICA

In 2009, an international consortium of researchers initiated an efficacy trial for a new tuberculosis (TB) vaccine—the first in more than 80 years. With high hopes, a team led by the South African Tuberculosis Vaccine Initiative inoculated 2,797 infants in the country, half with a vaccine called MVA85A and half with a placebo. They followed the children for up to three years and finally announced the result last February. It was not good news (Lancet, 381:1021-28, 2013).
“It did not work,” says Thomas Evans, president and CEO of Aeras, the Rockville, Maryland-based nonprofit that sponsored the trial. The vaccine did not protect children against the deadly disease.

“The whole field was disappointed,” says Robert Ryall, TB vaccine project leader at Sanofi Pasteur, who was not involved in the trial. “And unfortunately the field did not learn much.” The vaccine developers still do not know why MVA85A didn’t work.

The only vaccine currently available in the fight against TB is Bacille Calmette-Guérin (BCG), a live vaccine first used in 1921 and originally derived from a cow tuberculosis strain. Though the exact mechanism of the vaccine’s protection remains unclear, researchers do know that it doesn’t work well: it reduces the risk of a form of TB that is especially lethal to infants, but it does not reliably protect against TB lung infections, which kill more than a million adults worldwide each year.

With every cough or sneeze of an infected individual, TB bacilli fly through the air, and to date have spread to one-third of the world’s population. In most individuals, Mycobacterium tuberculosis (Mtb) lie dormant, never causing sickness. In others, however, the bacteria cause life-threatening lung infections. Some 13 million people around the world are actively sick with TB, and someone dies of the disease approximately every 20 seconds, according to the World Health Organization (WHO).

“The need for a TB vaccine is enormous,” says David Sherman, a tuberculosis expert at the nonprofit Seattle Biomedical Research Institute. And an inadequate vaccine is not the field’s only problem: the four main drugs currently used to treat tuberculosis are also decades old, take six months to rid the body of the bacilli, and are becoming obsolete due to the spread of multidrug-resistant and extensively drug-resistant TB. Despite the gloomy outlook, many researchers are still plugging away, through pharmaceutical-nonprofit partnerships and redesigned basic research efforts, to achieve a happy ending.

Ancient foe

Tuberculosis has plagued humans for thousands of years. Even ancient Egyptians were ravaged by TB, as evidence from mummies has shown. And over those millennia, Mtb has learned to quietly, carefully live within the human body.

“It’s not just a pathogen; in some ways it’s commensal,” says Evans. “It’s been dealing with the human immune system for a long period of time and knows how to go latent and keep itself transmitted.” Of the roughly 2 billion people infected with Mtb, about 90 percent will never get sick, though they are a vast reservoir of the bacteria, fueling the epidemic. And when illness occurs, unlike many infections that involve an acute sickness as the host’s immune system battles the pathogen, tuberculosis infection resembles a chronic disease. “Everything about the infection is slowed down, frankly, in ways we don’t understand,” says Sherman.

E. coli, for example, replicates so quickly—about once every 20 minutes—that one cell can grow into a colony of a million overnight. Mtb, on the other hand, only doubles once every 20 hours, and would take three weeks to grow a colony of similar size. Additionally, the human immune system produces antibodies against most pathogens in roughly 5 to 7 days. Antibody production against Mtb takes three weeks, likely because the bacteria are slow to travel to the lymph nodes where an adaptive immune response commences. “TB is exquisitely adapted to long-term survival in a human host,” says Sherman.
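The arithmetic behind those figures is easy to check; here is a minimal sketch (the 20-minute and 20-hour doubling times are from the text, and the one-million-cell colony size is illustrative):

```python
import math

def hours_to_colony(doubling_time_hours, colony_size=1_000_000):
    """Hours for a single cell to reach colony_size by repeated doubling."""
    doublings = math.log2(colony_size)  # ~19.9 doublings to reach a million
    return doublings * doubling_time_hours

print(hours_to_colony(20 / 60))   # E. coli, ~6.6 hours: "a million overnight"
print(hours_to_colony(20) / 24)   # Mtb, ~16.6 days: on the order of three weeks
```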

The current TB drug regimen relies on a six-month treatment of four antibiotics, all discovered in the 1950s and ’60s and which primarily inhibit cell-wall and RNA synthesis. (See illustration.) Worldwide, about 3.6 percent of new TB cases and 20 percent of recurring infections are multidrug resistant, according to the WHO.
Mtb is not just a pathogen; in some ways it’s commensal.
—­Thomas Evans, Aeras
Unfortunately, there isn’t a deep pipeline of drug candidates to fall back on. It wasn’t until December 2012, some 50 years after the last first-in-class approvals, that the US Food and Drug Administration approved a TB drug with a new mechanism of action. Janssen Therapeutics’ bedaquiline (Sirturo) inhibits an ATP synthase enzyme in the bacterium’s cell membrane to prevent the pathogen from generating energy and replicating. (See illustration.) No other anti-TB drugs are close to approval.

TB drug development has been slow for several reasons. For one, the drugs are difficult and expensive to make, and they are primarily needed in developing countries that can’t afford to pay top dollar for a six-month drug regimen. “Working in TB will not drive profit for pharmaceutical companies,” says Manos Perros, head of AstraZeneca’s Boston-based Infection Innovative Medicines Unit. As a result, most recent TB drug development has involved collaborations between big pharma and government institutions or nonprofit advocacy organizations, as well as academia. These are “partnerships that bring resources and funding that make this kind of work, frankly, possible,” says Perros. “This is a space where competitions between pharma and academia are unfruitful.”

Other pharma companies share that sentiment. In February 2013, GlaxoSmithKline (GSK) opened up the closely guarded doors of its laboratories to share information with the TB research community about 177 compounds from the company’s pharmaceutical library that appear to inhibit Mtb (ChemMedChem, 8:313-21, 2013). The set of compounds has already been sent to nine groups in the U.K., U.S., Canada, The Netherlands, France, Australia, Argentina, and India, according to GSK spokesperson Melinda Stubbee.

But even with this collaborative attitude, the research community has struggled to develop successful new TB drugs, in part because the bacterium hides latent inside cells such as macrophages, and unpredictably becomes active in different sites in the lung. “TB drug development is extremely challenging because a drug has to kill not only the replicating but the nonreplicating bacteria,” says Feng Wang of the California Institute for Biomedical Research in La Jolla. To tackle this problem, Wang, along with Peter Schultz at Scripps Research Institute, also in La Jolla, and William Jacobs at Albert Einstein College of Medicine in New York, used a novel screening method to test the effect of 70,000 compounds on a biofilm of Mtb that simulates the latent version of the bacterium. One compound popped out of the screen: TCA1 killed both replicating and nonreplicating Mtb (PNAS, 110:E2510-17, 2013). It appeared to attack on two fronts: preventing bacterial cell-wall synthesis and inhibiting a bacterial enzyme involved in cofactor biosynthesis, which is likely what makes it effective against nonreplicating Mtb. (See illustration.) The compound has since proven successful in both acute and chronic animal models of TB, and the team is tweaking the chemistry to try and make it even more potent, says Wang.

Pharmaceutical company AstraZeneca is similarly developing a drug that is active against latent bacteria. AZD5847, a type of antibiotic called an oxazolidinone that is typically used to treat staph infections, is able to reach and kill Mtb lodging inside macrophages. The company is currently testing the drug in a Phase 2 efficacy trial in South Africa involving 75 patients. But developing the compound wasn’t easy, notes Perros. “We’ve been investing for a decade. It really takes a long time.”

Seeking a boost

But even if quick-acting, potent drugs were available, Mtb is so abundant and so well adapted to the human population that the only true path to eradication is not treatment, but prevention. “There’s no endgame without a vaccine,” says Aeras’s Evans. “No matter how much we think we should work on drugs or diagnostics, if we’re not working on vaccines, we’ll never get to our final goal.”

The failure of the MVA85A vaccine trial in South Africa last year was disappointing, but at least a dozen other TB vaccine candidates continue in clinical trials. Most of these reflect one of two general strategies for preventing tuberculosis: improve the existing BCG vaccine or, more commonly, boost its effect with a secondary vaccine. BCG, which is given to infants, primes the immune response early in life, so booster vaccines are usually designed to protect adolescents and adults from later infection. The MVA85A vaccine, for example, was a modified viral vector expressing Mtb antigen 85A designed as a booster to BCG.

Vaccine development, however, is hindered by lack of cellular or molecular markers that directly correlate with immune protection from TB, making it difficult to predict how well a vaccine might protect against TB based on the responses of a handful of individuals. “The only tool we have to make sure a vaccine works is a very large, very expensive field trial,” says Evans. And that high price tag, as in TB drug development, has turned numerous pharmaceutical companies off the pursuit of a TB vaccine.

But with financial and research support from nonprofit partners like Aeras—funded by the Bill & Melinda Gates Foundation, among others—a few companies are still in the game. In collaboration with Aeras, Sanofi Pasteur is developing a BCG booster vaccine that began Phase 1/2a safety trials in South Africa last July. It is a recombinant vaccine made up of two TB proteins fused together and coupled with an adjuvant called IC31, which really “drives the immune response,” says Sanofi’s Ryall. Aeras also has another big-pharma partnership with GSK on a vaccine called M72/AS01e, which has been in Phase 1 and 2 clinical trials since 2004, including an ongoing trial in Taiwan and Estonia. The vaccine combines a GSK recombinant antigen called M72, derived from two tuberculosis-expressed proteins, and a GSK adjuvant called AS01e.

Fresh start

With TB drugs and vaccines still in early clinical phases, some scientists are going back to the basics to see if a better molecular understanding of the bacterium itself could assist these programs. “We need to develop vaccines, and we need to develop products, but as we do, it’s very clear that we need to be learning a lot more about the immunobiology [of TB],” says Evans.

Last July, for example, Sherman and colleagues published the first large-scale map of the bacterium’s regulatory and metabolic networks (Nature, 499:178-83, 2013). The team initially plotted the relationships of 50 Mtb transcription factors, and later, all 200, which control the expression of the rest of the bacterium’s genes. “Our hope is that by looking at it in this different way, we can describe different kinds of drug targets than we have ever done before,” says Sherman.

The team found that Mtb is remarkably well networked, so that if a mutation or drug stymies one gene or protein, others step in as backups, allowing the bacterium to continue functioning normally. But targeting transcription factors that control whole networks could shut down an entire system, backups and all. One such network already looks like a promising drug target—transcription factors controlling a group of proteins in the bacterium’s cell membrane that pump antibiotics and other drugs out of the cell. Mtb has so many such pumps that it is extremely difficult to target multiple pumps for treatment, but genes that activate numerous pumps at the same time are a far more promising drug target.

The idea that scientists will soon develop new, better TB drugs and vaccines “helps get me up in the morning,” says Sherman. It’s going to take more breakthroughs than are on the immediate horizon, he adds, “but if we keep at it, we will get there.”

From funding agencies to scientific agency

From funding agencies to scientific agency | EMBO Reports

Collective allocation of science funding as an alternative to peer review

Publicly funded research involves the distribution of a considerable amount of money. Funding agencies such as the US National Science Foundation (NSF), the US National Institutes of Health (NIH) and the European Research Council (ERC) give billions of dollars or euros of taxpayers' money to individual researchers, research teams, universities, and research institutes each year. Taxpayers accordingly expect that governments and funding agencies will spend their money prudently and efficiently.
 
Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from. Nevertheless, most funders rely on a tried and tested method in line with the tradition of the scientific community: the peer review of individual proposals to identify the most promising projects for funding. This method has been considered the gold standard for assessing the scientific value of research projects essentially since the end of the Second World War.
Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from
However, there is mounting critique of the use of peer review to direct research funding. High on the list of complaints is the cost, both in terms of time and money. In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals [1]. Reviewers generally spend a considerable time and effort to assess and rate proposals of which only a minority can eventually get funded. Of course, such a high rejection rate is also frustrating for the applicants. Scientists spend an increasing amount of time writing and submitting grant proposals. Overall, the scientific community invests an extraordinary amount of time, energy, and effort into the writing and reviewing of research proposals, most of which end up not getting funded at all. This time would be better invested in conducting the research in the first place.
 
Peer review may also be subject to biases, inconsistencies, and oversights. The need for review panels to reach consensus may lead to sub‐optimal decisions owing to the inherently stochastic nature of the peer review process. Moreover, in a period where the money available to fund research is shrinking, reviewers may tend to “play it safe” and select proposals that have a high chance of producing results, rather than more challenging and ambitious projects. Additionally, the structuring of funding around calls‐for‐proposals to address specific topics might inhibit serendipitous discovery, as scientists work on problems for which funding happens to be available rather than trying to solve more challenging problems.
 
The scientific community holds peer review in high regard, but it may not actually be the best possible system for identifying and supporting promising science. Many proposals have been made to reform funding systems, ranging from incremental changes to peer review—including careful selection of reviewers [2] and post‐hoc normalization of reviews [3]—to more radical proposals such as opening up review to the entire online population [4] or removing human reviewers altogether by allocating funds through an objective performance measure [5].
Overall, the scientific community invests an extraordinary amount of time, energy and effort into the writing and reviewing of research proposals…
We would like to add another alternative inspired by the mathematical models used to search the internet for relevant information: a highly decentralized funding model in which the wisdom of the entire scientific community is leveraged to determine a fair distribution of funding. It would still require human insight and decision‐making, but it would drastically reduce the overhead costs and may alleviate many of the issues and inefficiencies of the proposal submission and peer review system, such as bias, “playing it safe”, or reluctance to support curiosity‐driven research.
 
Our proposed system would require funding agencies to give all scientists within their remit an unconditional, equal amount of money each year. However, each scientist would then be required to pass on a fixed percentage of their previous year's funding to other scientists whom they think would make best use of the money (Fig 1). Every year, then, scientists would receive a fixed basic grant from their funding agency combined with an elective amount of funding donated by their peers. As a result of each scientist having to distribute a given percentage of their previous year's budget to other scientists, money would flow through the scientific community. Scientists who are generally anticipated to make the best use of funding will accumulate more.
Figure 1. Proposed funding system
 
Illustrations of the existing (left) and the proposed (right) funding systems, with reviewers in blue and investigators in red. In most current funding models, like those used by NSF and NIH, investigators write proposals in response to solicitations from funding agencies. These proposals are reviewed by small panels and funding agencies use these reviews to help make funding decisions, providing awards to some investigators. In the proposed system, all scientists are both investigators and reviewers: every scientist receives a fixed amount of funding from the government and discretionary distributions from other scientists, but each is required in turn to redistribute some fraction of the total they received to other investigators.
It may help to illustrate the idea with an example. Suppose that the basic grant is set to US$100,000—this corresponds to roughly the entire 2012 NSF budget divided by the total number of researchers that it funded [1]—and the required fraction that any scientist is required to donate is set to f = 0.5 or 50%. Suppose, then, that Scientist K received a basic grant of $100,000 and $200,000 from her peers, which gave her a funding total of $300,000. In 2013, K can spend 50% of that total sum, $150,000, on her own research program, but must donate 50% to other scientists for their 2014 budget. Rather than painstakingly submitting and reviewing project proposals, K and her colleagues can donate to one another by logging into a centralized website and entering the names of the scientists they choose to donate to and how much each should receive.
 
More formally, suppose that a funding agency's total budget is $t_y$ in year $y$, and that it simply maintains a set of funding accounts for $n$ qualified scientists, chosen according to criteria such as academic appointment status, number of publications and other bibliometric indicators, or area of research. The amount of funding in these accounts in year $y$ is represented as an $n$-vector $\alpha_y$, where each entry $\alpha_y(i)$ corresponds to the amount of funding in the account of scientist $i$ in year $y$. Each year, the funding agency deposits a fixed amount into each account, equal to the total funding budget divided by the total number of scientists: $t_y/n$. In addition, in each year $y$ scientist $i$ must distribute a fixed fraction $f \in [0,1]$ of the funding he or she received to other scientists. We represent all of these choices by an $n \times n$ funding transfer matrix $D_y$, where $D_y(i,j)$ contains the fraction of his or her funds that scientist $i$ will give to scientist $j$. By construction, this matrix satisfies the following properties: all entries are between 0 and 1 inclusive; $D_y(i,i) = 0$, so that no scientist can donate money to him or herself; and $\sum_j D_y(i,j) = f$, so that every scientist is required to donate a fraction $f$ of the previous year's funding to others. The distribution of funding over scientists for year $y+1$ is thus expressed by:

$$\alpha_{y+1} \;=\; \frac{t_{y+1}}{n}\,\mathbf{1} \;+\; D_y^{\top}\,\alpha_y$$
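To make the bookkeeping concrete, here is a minimal sketch of one funding cycle under the update rule as reconstructed above. The three-scientist transfer matrix is an illustrative assumption (in the proposal, $D_y$ would be assembled from choices entered on the centralized website); the numbers echo the Scientist K example, with a basic grant of $100,000 and f = 0.5.

```python
import numpy as np

def next_year_funding(alpha, D, budget):
    """One step of the proposed scheme. alpha[i] is scientist i's current
    funding, D[i, j] the fraction of alpha[i] donated to scientist j
    (each row sums to f), and budget is the agency's pot for next year."""
    n = len(alpha)
    base_grant = budget / n        # equal, unconditional share for everyone
    donations = D.T @ alpha        # what each scientist receives from peers
    return base_grant + donations  # the kept (1 - f) share is spent this year

f = 0.5
# Scientist K (index 0) holds $300,000; two colleagues hold $100,000 each.
alpha = np.array([300_000.0, 100_000.0, 100_000.0])
D = np.array([
    [0.00, 0.25, 0.25],  # K splits her required 50% between her colleagues
    [0.50, 0.00, 0.00],  # colleague 1 donates his whole required share to K
    [0.50, 0.00, 0.00],  # colleague 2 does the same
])
assert np.allclose(D.sum(axis=1), f)  # everyone donates exactly the fraction f

print(next_year_funding(alpha, D, budget=300_000))
# [200000. 175000. 175000.] -- K's peer donations plus her $100,000 base grant
```

Note that money is conserved: the $300,000 of new agency budget plus the $250,000 donated (f times the $500,000 held) equals the $550,000 distributed.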
 
This form assumes that the portion of a scientist's funding that remains after donation is either spent or stored in a separate research account for later years. An interesting and perhaps necessary modification may be that redistribution pertains to the entirety of funding that a scientist has accumulated over many years, not just the amount received in a particular year. This would ensure that unused funding is gradually re‐injected into the system while still preserving long‐term stability of funding.
 
Network and computer scientists will recognize the general outline of these equations. Google pioneered a similar heuristic approach to rank web pages by transferring “importance” [6] via the web's network of page links; pages that accumulate “importance” rank higher in search results. A similar principle has been successfully used to determine the impact of scientific journals [7] and scholarly authors [8].
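For readers who want to see the mechanics of that ranking heuristic, here is a minimal power-iteration sketch in the PageRank style. The three-page link matrix and the 0.85 damping factor are illustrative assumptions, not anything taken from the article.

```python
import numpy as np

# Column-stochastic link matrix: L[i, j] is the probability of following a
# link from page j to page i (each column sums to 1).
L = np.array([
    [0.0, 0.5, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
])
d = 0.85                    # damping factor from the classic formulation
n = L.shape[0]
rank = np.full(n, 1.0 / n)  # start from a uniform distribution

# Power iteration: repeatedly transfer "importance" along the links, with a
# small uniform term playing the role the basic grant plays in the funding model.
for _ in range(100):
    rank = (1 - d) / n + d * (L @ rank)

print(rank)  # pages that accumulate more "importance" rank higher
```

The parallel with the funding scheme is direct: the uniform $(1-d)/n$ term corresponds to the unconditional basic grant, and the link-following term corresponds to peer donations.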
 
Instead of attributing “impact” or “relevance”, our approach distributes actual money. We believe that this simple, highly distributed, self‐organizing process can yield sophisticated behavior at a global level. Respected and productive scientists are likely to receive a comparatively large number of donations. They must in turn distribute a fraction of this larger total to others; their high status among scientists thus affords them greater influence over how funding is distributed. The unconditional yearly basic grant in turn ensures stability and gives all scientists greater autonomy for serendipitous discovery, rather than having to chase available funding. As the priorities and preferences of the scientific community change over time, reflected in the values of Dy, the flow of funding will gradually change accordingly. Rather than converging on a stationary distribution, the system will dynamically adjust funding levels to where they are most needed as scientists collectively assess and re‐assess each other's merits. Last but not least, the proposed scheme would fund people instead of projects: it would liberate researchers from peer pressure and funding cycles and would give them much greater flexibility to spend their allocation as they see fit.
 
Of course, funding agencies and governments may still wish or need to play a guiding role, for instance to foster advances in certain areas of national interest or to encourage diversity. This capacity could be included in the outlined system in a number of straightforward ways. Traditional peer‐reviewed, project‐based funding could be continued in parallel. In addition, funding agencies could vary the base funding rate to temporarily inject more money into certain disciplines or research areas. Scientists may be offered the option to donate to special aggregated “large‐scale projects” to support research projects that develop or rely on large‐scale scientific infrastructure. The system could also include some explicit temporal dampening to prevent sudden large changes. Scientists could, for example, be allowed to save surplus funding from previous years in “slush” funds to protect against lean times in the future.
 
In practice, the system will require stringent conflict‐of‐interest rules similar to the ones that have been widely adopted to keep traditional peer review fair and unbiased. For example, scientists might be prevented from donating to themselves, advisors, advisees, close collaborators, or even researchers at their own institution. Funding decisions must remain confidential so scientists can always make unbiased decisions; should groups of people attempt to affect global funding distribution they will lack the information to do so effectively. At the very least, the system will allow funding agencies to confidentially study and monitor the flow of funding in the aggregate; potential abuse such as circular funding schemes can be identified and remediated. This data will furthermore support Science of Science efforts to identify new emerging areas of research and future priorities.
Peer review of proposals has served science well for decades, but funding agencies may want to consider alternative approaches to public funding of research…
Such an open and dynamic funding system might also induce profound changes in scholarly communication. Scientists and researchers may feel more strongly compelled to openly and freely share results with the public and their community if this attracts the interest of colleagues and therefore potential donors. A “publish or perish” strategy may matter less than clearly and compellingly communicating the outcomes, scientific merit, broader impact, vision, and agenda of one's research programs so as to convince the scientific community to contribute to it.
 
Peer review of proposals has served science well for decades, but perhaps it's time for funding agencies to consider alternative approaches to public funding of research—based on advances in mathematics and modern technology—to optimize their return on investment. The system proposed here requires a fraction of the costs associated with traditional peer review, but may yield comparable or even better results. The savings of financial and human resources could be used to identify new targets of opportunity, to support the translation of scientific results into products and jobs, and to help communicate advances in science and technology.

Acknowledgments

The authors acknowledge support by the National Science Foundation under grant SBE #0914939, the Andrew W. Mellon Foundation, and National Institutes of Health award U01 GM098959.

Footnotes

  • The authors declare that they have no conflict of interest.


It is Time for Greenpeace to be Prosecuted for Crimes Against Humanity

Standing Up for GMOs

Bruce Alberts, Roger Beachy, David Baulcombe, Gunter Blobel, Swapan Datta, Nina Fedoroff, Donald Kennedy, Gurdev S. Khush, Jim Peacock, Martin Rees, Phillip Sharp

Author Affiliations
  1. Bruce Alberts is President Emeritus of the U.S. National Academy of Sciences and former Editor-in-Chief of Science.
  2. Roger Beachy is a Wolf Prize laureate; President Emeritus of the Donald Danforth Plant Science Center, St. Louis, MO, USA; and former director of the U.S. National Institute of Food and Agriculture.
  3. David Baulcombe is a Wolf Prize laureate and Royal Society Professor in the Department of Plant Sciences of the University of Cambridge, Cambridge, UK. He receives research funding from Syngenta and is a consultant for Syngenta.
  4. Gunter Blobel is a Nobel laureate and the John D. Rockefeller Jr. Professor at the Rockefeller University, New York, NY, USA.
  5. Swapan Datta is Deputy Director General (Crop Science) of the Indian Council of Agricultural Research, New Delhi, India; the Rash Behari Ghosh Chair Professor at Calcutta University, India; and a former scientist at ETH-Zurich, Switzerland, and at IRRI, Philippines.
  6. Nina Fedoroff is a National Medal of Science laureate; a Distinguished Professor at the King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; an Evan Pugh Professor at Pennsylvania State University, University Park, PA, USA; and former President of AAAS.
  7. Donald Kennedy is President Emeritus of Stanford University, Stanford, CA, USA, and former Editor-in-Chief of Science.
  8. Gurdev S. Khush is a World Food Prize laureate, Japan Prize laureate, and former scientist at IRRI, Los Baños, Philippines.
  9. Jim Peacock is a former Chief Scientist of Australia and former Chief of the Division of Plant Industry at the Commonwealth Scientific and Industrial Research Organization, Canberra, Australia.
  10. Martin Rees is President Emeritus of the Royal Society, Fellow of Trinity College, and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, Cambridge, UK.
  11. Phillip Sharp is a Nobel laureate; an Institute Professor at the Massachusetts Institute of Technology, Cambridge, MA, USA; and President of AAAS.
Figure credit: IRRI
On 8 August 2013, vandals destroyed a Philippine “Golden Rice” field trial. Officials and staff of the Philippine Department of Agriculture that conduct rice tests for the International Rice Research Institute (IRRI) and the Philippine Rice Research Institute (PhilRice) had gathered for a peaceful dialogue. They were taken by surprise when protesters invaded the compound, overwhelmed police and village security, and trampled the rice. Billed as an uprising of farmers, the destruction was actually carried out by protesters trucked in overnight in a dozen jeepneys.
 
The global scientific community has condemned the wanton destruction of these field trials, gathering thousands of supporting signatures in a matter of days.* If ever there was a clear-cut cause for outrage, it is the concerted campaign by Greenpeace and other nongovernmental organizations, as well as by individuals, against Golden Rice. Golden Rice is a strain that is genetically modified by molecular techniques (and therefore labeled a genetically modified organism or GMO) to produce β-carotene, a precursor of vitamin A. Vitamin A is an essential component of the light-absorbing molecule rhodopsin in the eye. Severe vitamin A deficiency results in blindness, and half of the roughly half-million children who are blinded by it die within a year. Vitamin A deficiency also compromises immune system function, exacerbating many kinds of illnesses. It is a disease of poverty and poor diet, responsible for 1.9 to 2.8 million preventable deaths annually, mostly of children under 5 years old and women.
 
Rice is the major dietary staple for almost half of humanity, but white rice grains lack vitamin A. Research scientists Ingo Potrykus and Peter Beyer and their teams developed a rice variety whose grains accumulate β-carotene. It took them, in collaboration with IRRI, 25 years to develop and test varieties that express sufficient quantities of the precursor that a few ounces of cooked rice can provide enough β-carotene to eliminate the morbidity and mortality of vitamin A deficiency. It took time, as well, to obtain the right to distribute Golden Rice seeds, which contain patented molecular constructs, free of charge to resource-poor farmers.
 
The rice has been ready for farmers to use since the turn of the 21st century, yet it is still not available to them. Escalating requirements for testing have stalled its release for more than a decade. IRRI and PhilRice continue to patiently conduct the required field tests with Golden Rice, despite the fact that these tests are driven by fears of “potential” hazards, with no evidence of actual hazards. Introduced into commercial production over 17 years ago, GM crops have had an exemplary safety record. And precisely because they benefit farmers, the environment, and consumers, GM crops have been adopted faster than any other agricultural advance in the history of humanity.
 
New technologies often evoke rumors of hazard. These generally fade with time when, as in this case, no real hazards emerge. But the anti-GMO fever still burns brightly, fanned by electronic gossip and well-organized fear-mongering that profits some individuals and organizations. We, and the thousands of other scientists who have signed the statement of protest, stand together in staunch opposition to the violent destruction of required tests on valuable advances such as Golden Rice that have the potential to save millions of impoverished fellow humans from needless suffering and death.
  • * B. Chassy et al., “Global scientific community condemns the recent destruction of field trials of Golden Rice in the Philippines”; http://chn.ge/143PyHo (2013).
  • E. Mayo-Wilson et al., Br. Med. J. 343, d5094 (2011).
  • G. Tang et al., Am. J. Clin. Nutr. 96, 658 (2012).

Astrophysics, the Impossible Science -- More Than Quantum Mechanics?

Last week, Nobel Laureate Martinus Veltman gave a talk at the Simons Center. After the talk, a number of people asked him questions about several things he didn’t know much about, including supersymmetry and dark matter. After deflecting a few such questions, he proceeded to go on a brief rant against astrophysics, professing suspicion of the field’s inability to do experiments and making fun of an astrophysicist colleague’s imprecise data. The rant was a rather memorable feat of curmudgeonliness, and apparently typical Veltman behavior. It left several of my astrophysicist friends fuming. For my part, it inspired me to write a positive piece on astrophysics, highlighting something I don’t think is brought up enough.
 
The thing about astrophysics, see, is that astrophysics is impossible.
Imagine, if you will, an astrophysical object. As an example, picture a black hole swallowing a star.
Are you picturing it?
 
Now think about where you’re looking from. Chances are, you’re at some point up above the black hole, watching the star swirl around, seeing something like this:
Where are you in this situation? On a spaceship? Looking through a camera on some probe?
 
Astrophysicists don’t have spaceships that can go visit black holes. Even the longest-ranging probes have barely left the solar system. If an astrophysicist wants to study a black hole swallowing a star, they can’t just look at a view like that. Instead, they look at something like this:
The image on the right is an artist’s idea of what a black hole looks like. The three on the left?
 
They’re what the astrophysicist actually sees. And even that is cleaned up a bit, the raw output can be even more opaque.
 
A black hole swallowing a star? Just a few blobs of light, pixels on screen. You can measure brightness and dimness, filter by color from gamma rays to radio waves, and watch how things change with time. You don't even get a whole lot of pixels for distant objects. You can't do experiments, either; you just have to wait for something interesting to happen and try to learn from the results.
 
It’s like staring at the static on a TV screen, day after day, looking for patterns, until you map out worlds and chart out new laws of physics and infer a space orders of magnitude larger than anything anyone’s ever experienced.
 
And naively, that’s just completely and utterly impossible.
And yet…and yet…and yet…it works!
 
Crazy people staring at a screen can’t successfully make predictions about what another part of the screen will look like. They can’t compare results and hone their findings. They can’t demonstrate principles (like General Relativity) that change technology here on Earth. Astrophysics builds on itself, discovery by discovery, in a way that can only be explained by accepting that it really does work (a theme that I’ve had occasion to harp on before).
 
Physics began with astrophysics. Trying to explain the motion of dots in a telescope and objects on the ground with the same rules led to everything we now know about the world. Astrophysics is hard, arguably impossible…but impossible or not, there are people who spend their lives successfully making it work.
 
 
(David Strumfels) -- With a chemistry background, not astrophysics, I have to wonder how quantum mechanics stacks up. To give one example, the hydrogen atom:
 
 
We see the electron orbiting the proton nucleus, an image we probably first saw in high school, and the quantized orbits added by Bohr don't alter what we see significantly (though they are a significant addition). Now, physics teaches us that an object in orbit about another possesses angular momentum -- which means it is changing direction continuously.
 
But the electron here possesses no angular momentum, according to quantum mechanics. It's worse than that: the electron has no exact position at any time we specify. It is attracted to the nucleus, yes, but beyond that it could be anywhere in the universe, though most likely close to the nucleus. I hesitate to go into this further, except to say that the electron occupies well-defined orbitals, which describe its spatial distribution through all space. The orbitals are squares of the wave function describing the electron, which has a simple formula like this:
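The formula image is missing here; for reference, a standard form of the hydrogen ground-state (1s) wave function, whose square gives the orbital's spatial distribution, is

```latex
\psi_{1s}(r) = \frac{1}{\sqrt{\pi a_0^{3}}}\, e^{-r/a_0},
\qquad
|\psi_{1s}(r)|^{2} = \frac{1}{\pi a_0^{3}}\, e^{-2r/a_0}
```

where $a_0 \approx 0.529$ Å is the Bohr radius.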
And this is just the simplest of all atoms, hydrogen. Try to work out more complicated atoms, and you run up against the three-body problem, meaning there is no exact solution at all. The same goes for molecules ... you get the idea.
 
In the end I won't judge, because I understand neither astrophysics nor quantum mechanics well enough to draw a comparison. As for molecules, I can only give a picture, in this case of hemoglobin. Here there is structure built upon structure, built upon structure -- the final structure being the atomic orbitals of hydrogen and other atoms.
 
 
 
 
 

Entropy (information theory)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Entropy_(information_theory) In info...