
Friday, January 17, 2014

From funding agencies to scientific agency

From funding agencies to scientific agency | EMBO Reports

Collective allocation of science funding as an alternative to peer review


Publicly funded research involves the distribution of a considerable amount of money. Funding agencies such as the US National Science Foundation (NSF), the US National Institutes of Health (NIH) and the European Research Council (ERC) give billions of dollars or euros of taxpayers' money to individual researchers, research teams, universities, and research institutes each year. Taxpayers accordingly expect that governments and funding agencies will spend their money prudently and efficiently.
 
Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from. Nevertheless, most funders rely on a tried and tested method in line with the tradition of the scientific community: the peer review of individual proposals to identify the most promising projects for funding. This method has been considered the gold standard for assessing the scientific value of research projects essentially since the end of the Second World War.
However, there is mounting critique of the use of peer review to direct research funding. High on the list of complaints is the cost, both in terms of time and money. In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals [1]. Reviewers generally spend considerable time and effort assessing and rating proposals, of which only a minority eventually get funded. Of course, such a high rejection rate is also frustrating for the applicants. Scientists spend an increasing amount of time writing and submitting grant proposals. Overall, the scientific community invests an extraordinary amount of time, energy, and effort into the writing and reviewing of research proposals, most of which end up not getting funded at all. This time would be better invested in conducting the research in the first place.
 
Peer review may also be subject to biases, inconsistencies, and oversights. The need for review panels to reach consensus may lead to sub‐optimal decisions owing to the inherently stochastic nature of the peer review process. Moreover, in a period where the money available to fund research is shrinking, reviewers may tend to “play it safe” and select proposals that have a high chance of producing results, rather than more challenging and ambitious projects. Additionally, the structuring of funding around calls‐for‐proposals to address specific topics might inhibit serendipitous discovery, as scientists work on problems for which funding happens to be available rather than trying to solve more challenging problems.
 
The scientific community holds peer review in high regard, but it may not actually be the best possible system for identifying and supporting promising science. Many proposals have been made to reform funding systems, ranging from incremental changes to peer review—including careful selection of reviewers [2] and post‐hoc normalization of reviews [3]—to more radical proposals such as opening up review to the entire online population [4] or removing human reviewers altogether by allocating funds through an objective performance measure [5].
We would like to add another alternative inspired by the mathematical models used to search the internet for relevant information: a highly decentralized funding model in which the wisdom of the entire scientific community is leveraged to determine a fair distribution of funding. It would still require human insight and decision‐making, but it would drastically reduce the overhead costs and may alleviate many of the issues and inefficiencies of the proposal submission and peer review system, such as bias, “playing it safe”, or reluctance to support curiosity‐driven research.
 
Our proposed system would require funding agencies to give all scientists within their remit an unconditional, equal amount of money each year. However, each scientist would then be required to pass on a fixed percentage of their previous year's funding to other scientists whom they think would make best use of the money (Fig 1). Every year, then, scientists would receive a fixed basic grant from their funding agency combined with an elective amount of funding donated by their peers. As a result of each scientist having to distribute a given percentage of their previous year's budget to other scientists, money would flow through the scientific community. Scientists who are generally anticipated to make the best use of funding will accumulate more.
Figure 1. Proposed funding system
 
Illustrations of the existing (left) and the proposed (right) funding systems, with reviewers in blue and investigators in red. In most current funding models, like those used by NSF and NIH, investigators write proposals in response to solicitations from funding agencies. These proposals are reviewed by small panels and funding agencies use these reviews to help make funding decisions, providing awards to some investigators. In the proposed system, all scientists are both investigators and reviewers: every scientist receives a fixed amount of funding from the government and discretionary distributions from other scientists, but each is required in turn to redistribute some fraction of the total they received to other investigators.
It may help to illustrate the idea with an example. Suppose that the basic grant is set to US$100,000—this corresponds to roughly the entire 2012 NSF budget divided by the total number of researchers that it funded [1]—and the required fraction that any scientist is required to donate is set to f = 0.5 or 50%. Suppose, then, that Scientist K received a basic grant of $100,000 and $200,000 from her peers, which gave her a funding total of $300,000. In 2013, K can spend 50% of that total sum, $150,000, on her own research program, but must donate 50% to other scientists for their 2014 budget. Rather than painstakingly submitting and reviewing project proposals, K and her colleagues can donate to one another by logging into a centralized website and entering the names of the scientists they choose to donate to and how much each should receive.
 
More formally, suppose that a funding agency's total budget is ty in year y, and it simply maintains a set of funding accounts for n qualified scientists chosen according to criteria such as academic appointment status, number of publications and other bibliometric indicators, or area of research. The amount of funding in these accounts in year y is represented as an n-dimensional vector αy, where each entry αy(i) corresponds to the amount of funding in the account of scientist i in year y. Each year, the funding agency deposits a fixed amount into each account, equal to the total funding budget divided by the total number of scientists: ty/n. In addition, in each year y scientist i must distribute a fixed fraction f ∈ [0,1] of the funding he or she received to other scientists. We represent all of these choices by an n × n funding transfer matrix Dy, where Dy(i, j) contains the fraction of his or her funds that scientist i will give to scientist j. By construction, this matrix satisfies the properties that all entries are between 0 and 1 inclusive; Dy(i,i) = 0, so that no scientist can donate money to him or herself; and Σj Dy(i,j) = f, so that every scientist is required to donate a fraction f of the previous year's funding to others. The distribution of funding over scientists for year y + 1 is thus expressed entry-wise by αy+1(i) = ty+1/n + Σj Dy(j,i) αy(j), or in matrix form αy+1 = (ty+1/n)·1 + DyT·αy, where DyT denotes the transpose of Dy and 1 is the vector of ones.
 
This form assumes that the portion of a scientist's funding that remains after donation is either spent or stored in a separate research account for later years. An interesting and perhaps necessary modification may be that redistribution pertains to the entirety of funding that a scientist has accumulated over many years, not just the amount received in a particular year. This would ensure that unused funding is gradually re‐injected into the system while still preserving long‐term stability of funding.
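To see how the update rule behaves, here is a minimal simulation sketch using the basic-grant and donation-fraction figures from the example above; the number of scientists and the random donation choices are invented for illustration and are not part of the proposal itself.

```python
import numpy as np

# Toy simulation of the proposed rule: alpha_{y+1} = t/n + Dy^T @ alpha_y,
# where row i of Dy holds the fractions scientist i donates and each row sums to f.
# All numbers below are illustrative only.

rng = np.random.default_rng(42)
n, f = 5, 0.5                        # 5 scientists, each donates 50% per year
basic_grant = 100_000.0              # fixed deposit t/n per scientist per year
t = n * basic_grant                  # total agency budget per year

alpha = np.full(n, basic_grant)      # year 0: everyone holds just the basic grant

for year in range(1, 4):
    # Random donation choices stand in for each scientist's real judgment.
    D = rng.random((n, n))
    np.fill_diagonal(D, 0.0)                      # no self-donations
    D = f * D / D.sum(axis=1, keepdims=True)      # each row sums to exactly f
    alpha = t / n + D.T @ alpha                   # basic grant + donations received
    print(f"year {year}:", np.round(alpha, 0))
```

Under this rule the total amount held in accounts approaches t/(1 − f) in steady state, since each year the agency injects t and a fraction f of last year's holdings is passed along rather than spent.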
 
Network and computer scientists will recognize the general outline of these equations. Google pioneered a similar heuristic approach to rank web pages by transferring “importance” [6] via the web's network of page links; pages that accumulate “importance” rank higher in search results. A similar principle has been successfully used to determine the impact of scientific journals [7] and scholarly authors [8].
 
Instead of attributing “impact” or “relevance”, our approach distributes actual money. We believe that this simple, highly distributed, self‐organizing process can yield sophisticated behavior at a global level. Respected and productive scientists are likely to receive a comparatively large number of donations. They must in turn distribute a fraction of this larger total to others; their high status among scientists thus affords them greater influence over how funding is distributed. The unconditional yearly basic grant in turn ensures stability and gives all scientists greater autonomy for serendipitous discovery, rather than having to chase available funding. As the priorities and preferences of the scientific community change over time, reflected in the values of Dy, the flow of funding will gradually change accordingly. Rather than converging on a stationary distribution, the system will dynamically adjust funding levels to where they are most needed as scientists collectively assess and re‐assess each other's merits. Last but not least, the proposed scheme would fund people instead of projects: it would liberate researchers from peer pressure and funding cycles and would give them much greater flexibility to spend their allocation as they see fit.
 
Of course, funding agencies and governments may still wish or need to play a guiding role, for instance to foster advances in certain areas of national interest or to encourage diversity. This capacity could be included in the outlined system in a number of straightforward ways. Traditional peer‐reviewed, project‐based funding could be continued in parallel. In addition, funding agencies could vary the base funding rate to temporarily inject more money into certain disciplines or research areas. Scientists may be offered the option to donate to special aggregated “large‐scale projects” to support research projects that develop or rely on large‐scale scientific infrastructure. The system could also include some explicit temporal dampening to prevent sudden large changes. Scientists could, for example, be allowed to save surplus funding from previous years in “slush” funds to protect against lean times in the future.
 
In practice, the system will require stringent conflict‐of‐interest rules similar to the ones that have been widely adopted to keep traditional peer review fair and unbiased. For example, scientists might be prevented from donating to themselves, advisors, advisees, close collaborators, or even researchers at their own institution. Funding decisions must remain confidential so scientists can always make unbiased decisions; should groups of people attempt to affect global funding distribution they will lack the information to do so effectively. At the very least, the system will allow funding agencies to confidentially study and monitor the flow of funding in the aggregate; potential abuse such as circular funding schemes can be identified and remediated. This data will furthermore support Science of Science efforts to identify new emerging areas of research and future priorities.
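As a purely hypothetical illustration of the aggregate monitoring described above, an agency could scan the donation matrix for pairs of scientists who each send an unusually large share of their donations to the other; the threshold and the toy matrix below are invented for illustration.

```python
import numpy as np

def flag_reciprocal_pairs(D, threshold=0.25):
    """Return pairs (i, j) where scientists i and j each direct at least
    `threshold` of their donated budget to one another -- a crude signal of
    the circular funding abuse described above."""
    n = D.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if D[i, j] >= threshold and D[j, i] >= threshold]

# Toy donation-fraction matrix for 4 scientists (each row sums to f = 0.5)
D = np.array([
    [0.00, 0.40, 0.05, 0.05],
    [0.45, 0.00, 0.03, 0.02],
    [0.10, 0.10, 0.00, 0.30],
    [0.20, 0.20, 0.10, 0.00],
])
print(flag_reciprocal_pairs(D))   # [(0, 1)]: scientists 0 and 1 mostly fund each other
```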
Such an open and dynamic funding system might also induce profound changes in scholarly communication. Scientists and researchers may feel more strongly compelled to openly and freely share results with the public and their community if this attracts the interest of colleagues and therefore potential donors. A “publish or perish” strategy may matter less than clearly and compellingly communicating the outcomes, scientific merit, broader impact, vision, and agenda of one's research programs so as to convince the scientific community to contribute to it.
 
Peer review of proposals has served science well for decades, but perhaps it's time for funding agencies to consider alternative approaches to public funding of research—based on advances in mathematics and modern technology—to optimize their return on investment. The system proposed here requires a fraction of the costs associated with traditional peer review, but may yield comparable or even better results. The savings of financial and human resources could be used to identify new targets of opportunity, to support the translation of scientific results into products and jobs, and to help communicate advances in science and technology.

Acknowledgments

The authors acknowledge support by the National Science Foundation under grant SBE #0914939, the Andrew W. Mellon Foundation, and National Institutes of Health award U01 GM098959.

Footnotes

  • The authors declare that they have no conflict of interest.


It is Time for Greenpeace to be Prosecuted for Crimes Against Humanity

Standing Up for GMOs

Author Affiliations
  1. Bruce Alberts is President Emeritus of the U.S. National Academy of Sciences and former Editor-in-Chief of Science.
  2. Roger Beachy is a Wolf Prize laureate; President Emeritus of the Donald Danforth Plant Science Center, St. Louis, MO, USA; and former director of the U.S. National Institute of Food and Agriculture.
  3. David Baulcombe is a Wolf Prize laureate and Royal Society Professor in the Department of Plant Sciences of the University of Cambridge, Cambridge, UK. He receives research funding from Syngenta and is a consultant for Syngenta.
  4. Gunter Blobel is a Nobel laureate and the John D. Rockefeller Jr. Professor at the Rockefeller University, New York, NY, USA.
  5. Swapan Datta is Deputy Director General (Crop Science) of the Indian Council of Agricultural Research, New Delhi, India; the Rash Behari Ghosh Chair Professor at Calcutta University, India; and a former scientist at ETH-Zurich, Switzerland, and at IRRI, Philippines.
  6. Nina Fedoroff is a National Medal of Science laureate; a Distinguished Professor at the King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; an Evan Pugh Professor at Pennsylvania State University, University Park, PA, USA; and former President of AAAS.
  7. Donald Kennedy is President Emeritus of Stanford University, Stanford, CA, USA, and former Editor-in-Chief of Science.
  8. Gurdev S. Khush is a World Food Prize laureate, Japan Prize laureate, and former scientist at IRRI, Los Baños, Philippines.
  9. Jim Peacock is a former Chief Scientist of Australia and former Chief of the Division of Plant Industry at the Commonwealth Scientific and Industrial Research Organization, Canberra, Australia.
  10. Martin Rees is President Emeritus of the Royal Society, Fellow of Trinity College, and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, Cambridge, UK.
  11. Phillip Sharp is a Nobel laureate; an Institute Professor at the Massachusetts Institute of Technology, Cambridge, MA, USA; and President of AAAS.
Figure credit: IRRI
On 8 August 2013, vandals destroyed a Philippine “Golden Rice” field trial. Officials and staff of the Philippine Department of Agriculture that conduct rice tests for the International Rice Research Institute (IRRI) and the Philippine Rice Research Institute (PhilRice) had gathered for a peaceful dialogue. They were taken by surprise when protesters invaded the compound, overwhelmed police and village security, and trampled the rice. Billed as an uprising of farmers, the destruction was actually carried out by protesters trucked in overnight in a dozen jeepneys.
 
The global scientific community has condemned the wanton destruction of these field trials, gathering thousands of supporting signatures in a matter of days.* If ever there was a clear-cut cause for outrage, it is the concerted campaign by Greenpeace and other nongovernmental organizations, as well as by individuals, against Golden Rice. Golden Rice is a strain that is genetically modified by molecular techniques (and therefore labeled a genetically modified organism or GMO) to produce β-carotene, a precursor of vitamin A. Vitamin A is an essential component of the light-absorbing molecule rhodopsin in the eye. Severe vitamin A deficiency results in blindness, and half of the roughly half-million children who are blinded by it die within a year. Vitamin A deficiency also compromises immune system function, exacerbating many kinds of illnesses. It is a disease of poverty and poor diet, responsible for 1.9 to 2.8 million preventable deaths annually, mostly of children under 5 years old and women.
 
Rice is the major dietary staple for almost half of humanity, but white rice grains lack vitamin A. Research scientists Ingo Potrykus and Peter Beyer and their teams developed a rice variety whose grains accumulate β-carotene. It took them, in collaboration with IRRI, 25 years to develop and test varieties that express sufficient quantities of the precursor that a few ounces of cooked rice can provide enough β-carotene to eliminate the morbidity and mortality of vitamin A deficiency. It took time, as well, to obtain the right to distribute Golden Rice seeds, which contain patented molecular constructs, free of charge to resource-poor farmers.
 
The rice has been ready for farmers to use since the turn of the 21st century, yet it is still not available to them. Escalating requirements for testing have stalled its release for more than a decade. IRRI and PhilRice continue to patiently conduct the required field tests with Golden Rice, despite the fact that these tests are driven by fears of “potential” hazards, with no evidence of actual hazards. Introduced into commercial production over 17 years ago, GM crops have had an exemplary safety record. And precisely because they benefit farmers, the environment, and consumers, GM crops have been adopted faster than any other agricultural advance in the history of humanity.
 
New technologies often evoke rumors of hazard. These generally fade with time when, as in this case, no real hazards emerge. But the anti-GMO fever still burns brightly, fanned by electronic gossip and well-organized fear-mongering that profits some individuals and organizations. We, and the thousands of other scientists who have signed the statement of protest, stand together in staunch opposition to the violent destruction of required tests on valuable advances such as Golden Rice that have the potential to save millions of impoverished fellow humans from needless suffering and death.
  • * B. Chassy et al., “Global scientific community condemns the recent destruction of field trials of Golden Rice in the Philippines”; http://chn.ge/143PyHo (2013).
  • E. Mayo-Wilson et al., Br. Med. J. 343, d5094 (2011).
  • G. Tang et al., Am. J. Clin. Nutr. 96, 658 (2012).

Astrophysics, the Impossible Science -- More Than Quantum Mechanics?

Last week, Nobel Laureate Martinus Veltman gave a talk at the Simons Center. After the talk, a number of people asked him questions about several things he didn’t know much about, including supersymmetry and dark matter. After deflecting a few such questions, he proceeded to go on a brief rant against astrophysics, professing suspicion of the field’s inability to do experiments and making fun of an astrophysicist colleague’s imprecise data. The rant was a rather memorable feat of curmudgeonliness, and apparently typical Veltman behavior. It left several of my astrophysicist friends fuming. For my part, it inspired me to write a positive piece on astrophysics, highlighting something I don’t think is brought up enough.
 
The thing about astrophysics, see, is that astrophysics is impossible.
Imagine, if you will, an astrophysical object. As an example, picture a black hole swallowing a star.
Are you picturing it?
 
Now think about where you’re looking from. Chances are, you’re at some point up above the black hole, watching the star swirl around, seeing something like this:
Where are you in this situation? On a spaceship? Looking through a camera on some probe?
 
Astrophysicists don’t have spaceships that can go visit black holes. Even the longest-ranging probes have barely left the solar system. If an astrophysicist wants to study a black hole swallowing a star, they can’t just look at a view like that. Instead, they look at something like this:
The image on the right is an artist’s idea of what a black hole looks like. The three on the left?
 
They’re what the astrophysicist actually sees. And even that is cleaned up a bit, the raw output can be even more opaque.
 
A black hole swallowing a star? Just a few blobs of light, pixels on a screen. You can measure brightness and dimness, filter by color from gamma rays to radio waves, and watch how things change with time. You don't even get a whole lot of pixels for distant objects. You can't do experiments, either; you just have to wait for something interesting to happen and try to learn from the results.
 
It’s like staring at the static on a TV screen, day after day, looking for patterns, until you map out worlds and chart out new laws of physics and infer a space orders of magnitude larger than anything anyone’s ever experienced.
 
And naively, that’s just completely and utterly impossible.
And yet…and yet…and yet…it works!
 
Crazy people staring at a screen can’t successfully make predictions about what another part of the screen will look like. They can’t compare results and hone their findings. They can’t demonstrate principles (like General Relativity) that change technology here on Earth. Astrophysics builds on itself, discovery by discovery, in a way that can only be explained by accepting that it really does work (a theme that I’ve had occasion to harp on before).
 
Physics began with astrophysics. Trying to explain the motion of dots in a telescope and objects on the ground with the same rules led to everything we now know about the world. Astrophysics is hard, arguably impossible…but impossible or not, there are people who spend their lives successfully making it work.
 
 
(David Strumfels) -- With a chemistry background, not astrophysics, I have to wonder how quantum mechanics stacks up. To give one example, the hydrogen atom:
 
 
We see the electron orbiting about the proton nucleus, an image we probably saw in high school, and the quantized orbits added by Bohr don't alter what we see significantly (though it is a significant addition).  Now, physics teaches us that an object in orbit about another possesses angular momentum -- which means it is changing direction continuously.
 
But the electron here possesses no angular momentum, according to quantum mechanics.  It's worse than that; the electron has no exact position at any time we specify.  It is attracted to the nucleus, yes, but outside of that it could be anywhere in the universe, though most likely close to the nucleus.  I hesitate to go into this further, except to say that the electron occupies well-defined orbitals, which describe its spatial distribution through all space.  The orbitals are squares of the wave function describing the electron, which has a simple formula like this:
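For the ground state of hydrogen, the textbook form (presumably what is meant here) is:

$$\psi_{1s}(r) = \frac{1}{\sqrt{\pi a_0^3}}\, e^{-r/a_0}, \qquad |\psi_{1s}(r)|^2 = \frac{1}{\pi a_0^3}\, e^{-2r/a_0},$$

where a0 ≈ 0.529 Å is the Bohr radius and the squared wave function gives the probability density of finding the electron at distance r from the nucleus.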
 
 
And this is just the simplest of all atoms, hydrogen.  Try to work out more complicated atoms, and you run up against the three-body problem, meaning there is no exact solution at all.  Same with molecules ... you get the idea.
 
In the end I won't judge, because I understand neither astrophysics nor quantum mechanics well enough to draw a comparison.  As for molecules, I can only give a picture, in this case of hemoglobin.  Here there is structure built upon structure, built upon structure -- the final structure being the atomic orbitals of hydrogen and other atoms.
 
 
 
 
 

Are There 'Laws' in Social Science?

by Ross Pomeroy in Think Big 
January 17, 2014, 12:29 PM
This post originally appeared in the Newton blog on RealClearScience.
You can read the original here.

Richard Feynman rarely shied away from debate. When asked for his opinions, he gave them, honestly and openly. In 1981, he put forth this one:

"Social science is an example of a science which is not a science... They follow the forms... but they don't get any laws."

Many modern social scientists will certainly say they've gotten somewhere. They can point to the law of supply and demand or Zipf's law for proof-at-first-glance -- they have the word "law" in their title! The law of supply and demand, of course, states that the market price for a certain good will fluctuate based upon the quantity demanded by consumers and the quantity supplied by producers. Zipf's law statistically models the frequency of words uttered in a given natural language.
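For concreteness, Zipf's law is usually written as a rank-frequency relationship (the standard textbook form, not a formula quoted from the post):

$$f(r) \propto \frac{1}{r^{s}}, \qquad s \approx 1,$$

where f(r) is the frequency of the r-th most common word, so the second most common word appears roughly half as often as the most common one.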

But are social science "laws" really laws? A scientific law is "a statement based on repeated experimental observations that describes some aspect of the world. A scientific law always applies under the same conditions, and implies that there is a causal relationship involving its elements." The natural and physical sciences are rife with laws. It is, for example, a law that non-linked genes assort independently, or that the total energy of an isolated system is conserved.

But what about the poster child of social science laws: supply and demand? Let's take it apart. Does it imply a causal relationship? Yes, argues MIT professor Harold Kincaid.

"A demand or supply curve graphs how much individuals are willing to produce or buy at any given price. When there is a shift in price, that causes corresponding changes in the amount produced and purchased. A shift in the supply or demand curve is a second causal process – when it gets cheaper to produce some commodity, for example, the amount supplied for each given price may increase."

Are there repeated experimental observations for it? Yes, again, says Kincaid (PDF).

"The observational evidence comes from many studies of diverse commodities – ranging from agricultural goods to education to managerial reputations – in different countries over the past 75 years. Changes in price, demand, and supply are followed over time. Study after study finds the proposed connections."

Does supply and demand occur under the same conditions? That is difficult to discern. In the real world, unseen factors lurk behind every observation. Economists can do their best to control variables, but how can we know if the conditions are precisely identical?

Still, supply and demand holds up very well. Has Mr. Feynman been proved wrong? Perhaps. And if social science can produce laws, is it, too, a science? By Feynman's definition, it seems so.

The reason why social science and its purveyors often get such a bad rap has less to do with the rigor of their methods and more to do with the perplexity of their subject matter. Humanity and its cultural constructs are more enigmatic than much of the natural world. Even Feynman recognized this.

"Social problems are very much harder than scientific ones," he noted. Social science itself may be an enterprise doomed, not necessarily to fail, just to never fully succeed. Utilizing science to study something inherently unscientific is a tricky business.

Of lice and men (and chimps): Study tracks pace of molecular evolution

Jan 07, 2014 from Phys.Org

Figure: Study leader Kevin Johnson of the Illinois Natural History Survey (seated, at left), with (from left to right) entomology professor Barry Pittendrigh, animal biology professor Ken Paige and postdoctoral researcher Julie Allen.
 
A new study compares the relative rate of molecular evolution between humans and chimps with that of their lice. The researchers wanted to know whether evolution marches on at a steady pace in all creatures or if subtle changes in genes – substitutions of individual letters of the genetic code – occur more rapidly in some groups than in others.
 
A report of the study appears in the Proceedings of the Royal Society B.
The team chose its study subjects because humans, chimps and their lice share a common historical fate: When the ancestors of humans and chimps went their separate ways, evolutionarily speaking, so did their lice.

"Humans are chimps' closest relatives and chimps are humans' closest relatives – and their lice are each others' closest relatives," said study leader Kevin Johnson, an ornithologist with the Illinois Natural History Survey at the University of Illinois. "Once the hosts were no longer in contact with each other, the parasites were not in contact with each other because they spend their entire life cycle on their hosts."

This fact, a mutual divergence that began at the same point in time (roughly 5 million to 6 million years ago), allowed Johnson and his colleagues to determine whether molecular evolution occurs faster in primates or in their parasites.

Previous studies had looked at the rate of molecular changes between parasites and their hosts, but most focused on single genes in the mitochondria, tiny energy-generating structures outside the nucleus of the cell that are easier to study. The new analysis is the first to look at the pace of molecular change across the genomes of different groups. It compared a total of 1,534 genes shared by the primates and their parasites. To do this, the team had to first assemble a rough genome sequence of the louse of the chimpanzee (Pan troglodytes schweinfurthii), the only one of the four organisms for which a full genome sequence was unavailable.

The team also tracked whether changes in gene sequence altered the structure of the proteins for which the genes coded (they looked only at protein-coding genes). For every gene they analyzed, they determined whether sequence changes resulted in a different amino acid being added to a protein at a given location.

They found that – at the scale of random changes to gene sequence – the lice are winning the molecular evolutionary race. This confirmed what previous, more limited studies had hinted at.
"For every single gene we looked at, the lice had more differences (between them) than (were found) between humans and chimps. On average, the parasites had almost 15 times more changes," Johnson said. "Often in parasites you see these faster rates," he said. There have been several hypotheses as to why, he said.

Humans and chimps had a greater percentage of sequence changes that led to changes in protein structure, the researchers found. That means that even though the louse genes are changing at a faster rate, most of those changes are "silent," having no effect on the proteins for which they code. Since these changes make no difference to the life of the organism, they are tolerated, Johnson said. Those sequence changes that actually do change the structure of proteins in lice are likely to be harmful and are being eliminated by natural selection, he said.
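The distinction between "silent" substitutions and substitutions that change the protein can be illustrated with a toy calculation; the short sequences, the trimmed codon table, and the helper function below are invented for illustration and are not part of the study's actual genome-wide analysis.

```python
# Classify single-codon differences between two aligned coding sequences as
# synonymous (no amino-acid change) or non-synonymous (amino-acid change).

CODON_TABLE = {
    "TTT": "F", "TTC": "F", "CTT": "L", "CTC": "L",
    "ATT": "I", "ATC": "I", "GTT": "V", "GTC": "V",
    "GCT": "A", "GCC": "A", "GAT": "D", "GAC": "D",
}  # small subset of the standard genetic code, enough for the toy example

def classify_substitutions(seq_a: str, seq_b: str):
    """Compare two aligned coding sequences codon by codon."""
    synonymous = nonsynonymous = 0
    for i in range(0, min(len(seq_a), len(seq_b)) - 2, 3):
        codon_a, codon_b = seq_a[i:i+3], seq_b[i:i+3]
        if codon_a == codon_b:
            continue
        if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
            synonymous += 1      # "silent" change: same amino acid
        else:
            nonsynonymous += 1   # change alters the protein
    return synonymous, nonsynonymous

# Toy aligned sequences differing at two codons
seq_one = "TTTCTTGCT"
seq_two = "TTCCTTGAT"   # TTT->TTC is silent; GCT->GAT changes Ala to Asp
print(classify_substitutions(seq_one, seq_two))  # (1, 1)
```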

In humans and chimps, the higher proportion of amino acid changes suggests that some of those genes are under the influence of "positive selection," meaning that the altered proteins give the primates some evolutionary advantage, Johnson said. Most of the genes that changed more quickly or slowly in primates followed the same pattern in their lice, Johnson said.

"The most likely explanation for this is that certain genes are more important for the function of the cell and can't tolerate change as much," Johnson said.

The new study begins to answer fundamental questions about changes at the molecular level that eventually shape the destinies of all organisms, Johnson said.

"Any difference that we see between species at the morphological level almost certainly has a genetic basis, so understanding how different genes are different from each other helps us understand why different species are different from each other," he said. "Fundamentally, we want to know which genetic differences matter, which don't, and why certain genes might change faster than others, leading to those differences."
More information: "Rates of Genomic Divergence in Humans, Chimpanzees and Their Lice," rspb.royalsocietypublishing.org/lookup/doi/10.1098/rspb.2013.2174
Journal reference: Proceedings of the Royal Society B

Read more at: http://phys.org/news/2014-01-lice-men-chimps-tracks-pace.html#jCp



New form of quantum matter: Natural 3D counterpart to graphene discovered

17 hours ago by Lynn Yarris in Phys.org
A topological Dirac semi-metal state is realized at the critical point in the phase transition from a normal insulator to a topological insulator. The + and - signs denote the even and odd parity of the energy bands. Credit: Yulin Chen, Oxford

The discovery of what is essentially a 3D version of graphene – the 2D sheets of carbon through which electrons race at many times the speed at which they move through silicon - promises exciting new things to come for the high-tech industry, including much faster transistors and far more compact hard drives. A collaboration of researchers at the DOE's Lawrence Berkeley National Laboratory (Berkeley Lab) has discovered that sodium bismuthate can exist as a form of quantum matter called a three-dimensional topological Dirac semi-metal (3DTDS). This is the first experimental confirmation of 3D Dirac fermions in the interior or bulk of a material, a novel state that was only recently proposed by theorists.
"A 3DTDS is a natural three-dimensional counterpart to graphene with similar or even better electron mobility and velocity," says Yulin Chen, a physicist with Berkeley Lab's Advanced Light Source (ALS) when he initiated the study that led to this discovery, and now with the University of Oxford. "Because of its 3D Dirac fermions in the bulk, a 3DTDS also features intriguing non-saturating linear magnetoresistance that can be orders of magnitude higher than the GMR materials now used in hard drives, and it opens the door to more efficient optical sensors."
Chen is the corresponding author of a paper in Science reporting the discovery. The paper is titled "Discovery of a Three-dimensional Topological Dirac Semimetal, Na3Bi." Co-authors were Zhongkai Liu, Bo Zhou, Yi Zhang, Zhijun Wang, Hongming Weng, Dharmalingam Prabhakaran, Sung-Kwan Mo, Zhi-Xun Shen, Zhong Fang, Xi Dai and Zahid Hussain.


Two of the most exciting new materials in the world of high technology today are graphene and topological insulators, crystalline materials that are electrically insulating in the bulk but conducting on the surface. Both feature 2D Dirac fermions (fermions that aren't their own antiparticle), which give rise to extraordinary and highly coveted physical properties. Topological insulators also possess a unique electronic structure, in which bulk electrons behave like those in an insulator while surface electrons behave like those in graphene.
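The "Dirac fermions" mentioned here are carriers whose energy grows linearly with momentum rather than quadratically; the 3DTDS described in this article extends that behaviour to all three momentum directions. In standard textbook notation (not an equation from the Science paper itself):

$$E_{\pm}(\mathbf{k}) = \pm\, \hbar v_F \sqrt{k_x^2 + k_y^2 + k_z^2},$$

where vF is the Fermi velocity, in contrast with the quadratic dispersion E(k) ∝ k² of carriers in an ordinary semiconductor.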

Beamline 10.0.1 at Berkeley Lab's Advanced Light Source is optimized for the study of electron structures and correlated electron systems. Credit: Roy Kaltschmidt, Berkeley Lab

"The swift development of graphene and topological insulators has raised questions as to whether there are 3D counterparts and other materials with unusual topology in their electronic structure," says Chen. "Our discovery answers both questions. In the sodium bismuthate we studied, the bulk conduction and valence bands touch only at discrete points and disperse linearly along all three momentum directions to form bulk 3D Dirac fermions. Furthermore, the topology of a 3DTDS electronic structure is also as unique as that of topological insulators."

Read more at: http://phys.org/news/2014-01-quantum-natural-3d-counterpart-graphene.html#jCp

Thursday, January 16, 2014

Quantum Experiment Shows How Time ‘Emerges’ from Entanglement

https://medium.com/the-physics-arxiv-blog/d5d3dc850933
 

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it


When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.
 
It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities making it impossible to make sense of the results.
 
Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinities—a huge advance.
 
But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.
 
This conundrum, which physicists call ‘the problem of time’, has proved to be a thorn in the flesh of modern physicists, who have tried to ignore it but with little success.
Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.
 
Entanglement is a deep and powerful link and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.
 
But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.
 
In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.
 
But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of the entangled particles compared with everything else is an important measure of time.
 
This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.
 
Of course, without experimental verification, Page and Wootters' ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.
 
Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of Page and Wootters’ ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.
 
The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.
 
The experimental details are straightforward. The entangled photons each have a polarisation that can be changed by passing the photon through a birefringent plate. In the first set up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.
 
In the second set up, the photons again both pass through the birefringent plates which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.
 
In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.
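A small numerical sketch can make the internal-versus-external distinction concrete. This is a toy version of the general Page-Wootters "history state" construction, not a simulation of the actual photonic experiment; the clock size, Hamiltonian, and time step below are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Toy Page-Wootters "history state": a clock register with N ticks entangled
# with a single two-level system. Conditioning on the clock reading reveals
# evolution; the global state itself carries no time label.

N = 4                                   # number of clock ticks (illustrative)
dt = 0.3                                # time step per tick (illustrative)
psi0 = np.array([1.0, 0.0])             # system starts in |0>
H = np.array([[0.0, 1.0], [1.0, 0.0]])  # system Hamiltonian (Pauli X)

# |Psi> = sum_t |t>_clock (x) exp(-i H t dt) |psi0>_system
history = np.zeros(2 * N, dtype=complex)
for t in range(N):
    clock = np.zeros(N)
    clock[t] = 1.0
    history += np.kron(clock, expm(-1j * H * t * dt) @ psi0)
history /= np.linalg.norm(history)

# An "internal" observer who reads the clock sees the system evolving:
for t in range(N):
    block = history[2 * t: 2 * t + 2]
    block = block / np.linalg.norm(block)
    print(f"clock tick {t}: P(system in |1>) = {abs(block[1])**2:.3f}")
# An "external" observer who never reads the clock holds only the single
# global state `history`, in which nothing singles out a particular "now".
```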
 
“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.
 
That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.
 
What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That's why Moreva and co's work is significant. It places an abstract and exotic idea on firm experimental footing for the first time.
 
Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.
 
The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons, it’s quite another to show how it emerges for larger things such as humans and train timetables.
 
And therein lies another challenge.
Ref: arxiv.org/abs/1310.4691: Time From Quantum Entanglement: An Experimental Illustration

Brain on Autopilot

Neuroscience News
How the architecture of the brain shapes its functioning.
The structure of the human brain is complex, reminiscent of a circuit diagram with countless connections. But what role does this architecture play in the functioning of the brain? To answer this question, researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and University Hospital Freiburg, have for the first time analysed 1.6 billion connections within the brain simultaneously. They found the highest agreement between structure and information flow in the “default mode network,” which is responsible for inward-focused thinking such as daydreaming.
These images show the brain connections.
A daydreaming brain: the yellow areas depict the default mode network from three different perspectives; the coloured fibres show the connections amongst each other and with the remainder of the brain. Credit Max Planck Institute.

Everybody’s been there: You’re sitting at your desk, staring out the window, your thoughts wandering. Instead of getting on with what you’re supposed to be doing, you start mentally planning your next holiday or find yourself lost in a thought or a memory. It’s only later that you realize what has happened: Your brain has simply “changed channels”—and switched to autopilot.

For some time now, experts have been interested in the competition among different networks of the brain, which are able to suppress one another’s activity. If one of these approximately 20 networks is active, the others remain more or less silent. So if you’re thinking about your next holiday, it is almost impossible to follow the content of a text at the same time.

To find out how the anatomical structure of the brain impacts its functional networks, a team of researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and the University Hospital Freiburg, have analysed the connections between a total of 40,000 tiny areas of the brain. Using functional magnetic resonance imaging, they examined a total of 1.6 billion possible anatomical connections between these different regions in 19 participants aged between 21 and 31 years. The research team compared these connections with the brain signals actually generated by the nerve cells.
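One simple way to quantify the kind of structure-function comparison described above is to correlate the entries of a structural connectivity matrix with those of a functional connectivity matrix; the sketch below uses random placeholder data and is only loosely inspired by, not taken from, the study's actual analysis.

```python
import numpy as np

# Toy comparison of a structural connectivity matrix with a functional one,
# using the correlation of their region-pair entries as an "agreement" score.
# All data below are random placeholders.

rng = np.random.default_rng(0)
n_regions = 40                                   # toy stand-in for the ~40,000 brain areas

structural = rng.random((n_regions, n_regions))
structural = (structural + structural.T) / 2     # symmetric "anatomical connection" weights

timeseries = rng.standard_normal((200, n_regions))   # toy fMRI signals (time x regions)
functional = np.corrcoef(timeseries, rowvar=False)   # functional connectivity matrix

iu = np.triu_indices(n_regions, k=1)                 # unique region pairs only
agreement = np.corrcoef(structural[iu], functional[iu])[0, 1]
print(f"structure-function agreement (Pearson r): {agreement:.3f}")
```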

Their results showed the highest agreement between brain structure and brain function in areas forming part of the “default mode network”, which is associated with daydreaming, imagination, and self-referential thought.

“In comparison to other networks, the default mode network uses the most direct anatomical connections. We think that neuronal activity is automatically directed to level off at this network whenever there are no external influences on the brain,” says Andreas Horn, lead author of the study and researcher in the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.

Living up to its name, the default mode network seems to become active in the absence of external influences. In other words, the anatomical structure of the brain seems to have a built-in autopilot setting. It should not, however, be confused with an idle state. On the contrary, daydreaming, imagination, and self-referential thought are complex tasks for the brain.

“Our findings suggest that the structural architecture of the brain ensures that it automatically switches to something useful when it is not being used for other activities,” says Andreas Horn. “But the brain only stays on autopilot until an external stimulus causes activity in another network, putting an end to the daydreaming. A buzzing fly, a loud bang in the distance, or focused concentration on a text, for example.”

The researchers hope that their findings will contribute to a better understanding of brain functioning in healthy people, but also of neurodegenerative disorders such as Alzheimer’s disease and psychiatric conditions such as schizophrenia. In follow-up studies, the research team will compare the brain structures of patients with neurological disorders with those of healthy controls.

Notes about this neuroscience and neuroimaging research
Contact: Nicole Siller – Max Planck Gesellschaft
Source: Max Planck Gesellschaft press release
Image Source: The image is adapted from the Max Planck Gesellschaft press release.
Original Research: Abstract for “The structural–functional connectome and the default mode network of the human brain” by Andreas Horn, Dirk Ostwald, Marco Reisert, and Felix Blankenburg in NeuroImage. Published online October 4, 2013. doi:10.1016/j.neuroimage.2013.09.069

Right to exist

From Wikipedia, the free encyclopedia. French historian Ernest Renan de...