
Friday, January 17, 2014

From funding agencies to scientific agency

From funding agencies to scientific agency | EMBO Reports

Collective allocation of science funding as an alternative to peer review

Publicly funded research involves the distribution of a considerable amount of money. Funding agencies such as the US National Science Foundation (NSF), the US National Institutes of Health (NIH) and the European Research Council (ERC) give billions of dollars or euros of taxpayers' money to individual researchers, research teams, universities, and research institutes each year. Taxpayers accordingly expect that governments and funding agencies will spend their money prudently and efficiently.
 
Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from. Nevertheless, most funders rely on a tried and tested method in line with the tradition of the scientific community: the peer review of individual proposals to identify the most promising projects for funding. This method has been considered the gold standard for assessing the scientific value of research projects essentially since the end of the Second World War.
However, there is mounting critique of the use of peer review to direct research funding. High on the list of complaints is the cost, both in terms of time and money. In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals [1]. Reviewers generally spend considerable time and effort assessing and rating proposals, only a minority of which can eventually be funded. Such a high rejection rate is, of course, also frustrating for the applicants, who spend an increasing amount of time writing and submitting grant proposals. Overall, the scientific community invests an extraordinary amount of time, energy, and effort into the writing and reviewing of research proposals, most of which end up not getting funded at all. That time would be better invested in conducting the research in the first place.
 
Peer review may also be subject to biases, inconsistencies, and oversights. The need for review panels to reach consensus may lead to sub‐optimal decisions owing to the inherently stochastic nature of the peer review process. Moreover, in a period where the money available to fund research is shrinking, reviewers may tend to “play it safe” and select proposals that have a high chance of producing results, rather than more challenging and ambitious projects. Additionally, the structuring of funding around calls‐for‐proposals to address specific topics might inhibit serendipitous discovery, as scientists work on problems for which funding happens to be available rather than trying to solve more challenging problems.
 
The scientific community holds peer review in high regard, but it may not actually be the best possible system for identifying and supporting promising science. Many proposals have been made to reform funding systems, ranging from incremental changes to peer review—including careful selection of reviewers [2] and post‐hoc normalization of reviews [3]—to more radical proposals such as opening up review to the entire online population [4] or removing human reviewers altogether by allocating funds through an objective performance measure [5].
We would like to add another alternative inspired by the mathematical models used to search the internet for relevant information: a highly decentralized funding model in which the wisdom of the entire scientific community is leveraged to determine a fair distribution of funding. It would still require human insight and decision‐making, but it would drastically reduce the overhead costs and may alleviate many of the issues and inefficiencies of the proposal submission and peer review system, such as bias, “playing it safe”, or reluctance to support curiosity‐driven research.
 
Our proposed system would require funding agencies to give all scientists within their remit an unconditional, equal amount of money each year. However, each scientist would then be required to pass on a fixed percentage of their previous year's funding to other scientists whom they think would make best use of the money (Fig 1). Every year, then, scientists would receive a fixed basic grant from their funding agency combined with an elective amount of funding donated by their peers. As a result of each scientist having to distribute a given percentage of their previous year's budget to other scientists, money would flow through the scientific community. Scientists who are generally anticipated to make the best use of funding will accumulate more.
Figure 1. Proposed funding system
 
Illustrations of the existing (left) and the proposed (right) funding systems, with reviewers in blue and investigators in red. In most current funding models, like those used by NSF and NIH, investigators write proposals in response to solicitations from funding agencies. These proposals are reviewed by small panels and funding agencies use these reviews to help make funding decisions, providing awards to some investigators. In the proposed system, all scientists are both investigators and reviewers: every scientist receives a fixed amount of funding from the government and discretionary distributions from other scientists, but each is required in turn to redistribute some fraction of the total they received to other investigators.
It may help to illustrate the idea with an example. Suppose that the basic grant is set to US$100,000—this corresponds to roughly the entire 2012 NSF budget divided by the total number of researchers that it funded [1]—and that the fraction every scientist is required to donate is set to f = 0.5, or 50%. Suppose, then, that Scientist K received a basic grant of $100,000 and $200,000 from her peers, giving her a funding total of $300,000. In 2013, K can spend 50% of that total, $150,000, on her own research program, but must donate the other 50% to other scientists for their 2014 budgets. Rather than painstakingly submitting and reviewing project proposals, K and her colleagues donate to one another by logging into a centralized website and entering the names of the scientists they choose to donate to and how much each should receive.
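For readers who prefer code, the arithmetic of this example can be written as a few lines (a minimal sketch in Python; the amounts and the donation fraction are simply those quoted above):

    # A minimal sketch of the worked example above. The basic grant, the peer
    # donations and the donation fraction f are the figures from the text.
    BASIC_GRANT = 100_000      # fixed yearly grant from the funding agency
    PEER_DONATIONS = 200_000   # elective funding received from other scientists
    f = 0.5                    # fraction that must be passed on the following year

    total = BASIC_GRANT + PEER_DONATIONS   # $300,000 available to Scientist K
    spendable = (1 - f) * total            # $150,000 for K's own research program
    to_donate = f * total                  # $150,000 to redistribute for 2014

    print(f"total ${total:,}, spendable ${spendable:,.0f}, to donate ${to_donate:,.0f}")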
 
More formally, suppose that a funding agency's total budget is t_y in year y, and that it simply maintains a set of funding accounts for n qualified scientists, chosen according to criteria such as academic appointment status, number of publications and other bibliometric indicators, or area of research. The amount of funding in these accounts in year y is represented by a vector α_y of length n, where each entry α_y(i) corresponds to the amount of funding in the account of scientist i in year y. Each year, the funding agency deposits a fixed amount into each account, equal to the total funding budget divided by the total number of scientists: t_y/n. In addition, in each year y scientist i must distribute a fixed fraction f ∈ [0, 1] of the funding he or she received to other scientists. We represent all of these choices by an n × n funding transfer matrix D_y, where D_y(i, j) contains the fraction of his or her funds that scientist i will give to scientist j. By construction, this matrix satisfies the following properties: all entries are between 0 and 1 inclusive; D_y(i, i) = 0, so that no scientist can donate money to him or herself; and ∑_j D_y(i, j) = f for every i, so that every scientist is required to donate a fraction f of the previous year's funding to others. The distribution of funding received by scientists in year y + 1 is thus given by:

α_{y+1} = t_{y+1}/n + D_y^T · α_y

where the basic grant t_{y+1}/n is added to every account and the second term is the sum of donations each scientist receives from his or her peers.
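To make the notation concrete, the following sketch (in Python with NumPy; the three scientists, their balances, and the donation matrix are invented purely for illustration) performs one year of the update α_{y+1} = t_{y+1}/n + D_y^T · α_y and checks the constraints on D_y:

    import numpy as np

    # A small numerical sketch of the update rule, using made-up numbers
    # for three scientists; all values are illustrative only.
    n = 3                      # number of qualified scientists
    t = 300_000.0              # total agency budget for the year
    f = 0.5                    # mandatory donation fraction

    # Funding held by each scientist in year y.
    a_y = np.array([150_000.0, 100_000.0, 50_000.0])

    # Donation choices: D[i, j] is the fraction of scientist i's funds given to j.
    # The diagonal is zero (no self-donation) and every row sums to f.
    D = np.array([
        [0.00, 0.30, 0.20],
        [0.40, 0.00, 0.10],
        [0.25, 0.25, 0.00],
    ])
    assert np.allclose(D.sum(axis=1), f) and np.all(np.diag(D) == 0)

    # Next year's funding: equal basic grant plus what peers chose to pass on.
    a_next = t / n + D.T @ a_y
    print(a_next)   # [152500. 157500. 140000.]

As a sanity check, the accounts after the update sum to t plus f times the previous year's total, i.e. the new budget plus exactly the fraction that had to be passed on.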
 
This form assumes that the portion of a scientist's funding that remains after donation is either spent or stored in a separate research account for later years. An interesting and perhaps necessary modification may be that redistribution pertains to the entirety of funding that a scientist has accumulated over many years, not just the amount received in a particular year. This would ensure that unused funding is gradually re‐injected into the system while still preserving long‐term stability of funding.
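One way to write down this variant is sketched below, under the assumption that the donation obligation is computed from the full accumulated balance; spending is omitted for simplicity, and the function name and arguments are illustrative only:

    import numpy as np

    def yearly_update(balance, D, t, f):
        """One year of the accumulated-balance variant.

        balance : current account balance of each scientist (length-n array)
        D       : donation matrix with zero diagonal and rows summing to f
        t       : total agency budget for the year
        f       : mandatory donation fraction
        """
        n = len(balance)
        donated_in = D.T @ balance       # what each scientist receives from peers
        kept = (1 - f) * balance         # the part of the whole balance not donated
        return kept + donated_in + t / n # kept funds + donations + basic grant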
 
Network and computer scientists will recognize the general outline of these equations. Google pioneered a similar heuristic approach to rank web pages by transferring “importance” [6] via the web's network of page links; pages that accumulate “importance” rank higher in search results. A similar principle has been successfully used to determine the impact of scientific journals [7] and scholarly authors [8].
 
Instead of attributing “impact” or “relevance”, our approach distributes actual money. We believe that this simple, highly distributed, self‐organizing process can yield sophisticated behavior at a global level. Respected and productive scientists are likely to receive a comparatively large number of donations. They must in turn distribute a fraction of this larger total to others; their high status among scientists thus affords them greater influence over how funding is distributed. The unconditional yearly basic grant in turn ensures stability and gives all scientists greater autonomy for serendipitous discovery, rather than having to chase available funding. As the priorities and preferences of the scientific community change over time, reflected in the values of D_y, the flow of funding will gradually change accordingly. Rather than converging on a stationary distribution, the system will dynamically adjust funding levels to where they are most needed as scientists collectively assess and re‐assess each other's merits. Last but not least, the proposed scheme would fund people instead of projects: it would liberate researchers from peer pressure and funding cycles and would give them much greater flexibility to spend their allocation as they see fit.
 
Of course, funding agencies and governments may still wish or need to play a guiding role, for instance to foster advances in certain areas of national interest or to encourage diversity. This capacity could be included in the outlined system in a number of straightforward ways. Traditional peer‐reviewed, project‐based funding could be continued in parallel. In addition, funding agencies could vary the base funding rate to temporarily inject more money into certain disciplines or research areas. Scientists may be offered the option to donate to special aggregated “large‐scale projects” to support research projects that develop or rely on large‐scale scientific infrastructure. The system could also include some explicit temporal dampening to prevent sudden large changes. Scientists could, for example, be allowed to save surplus funding from previous years in “slush” funds to protect against lean times in the future.
 
In practice, the system will require stringent conflict‐of‐interest rules similar to the ones that have been widely adopted to keep traditional peer review fair and unbiased. For example, scientists might be prevented from donating to themselves, advisors, advisees, close collaborators, or even researchers at their own institution. Funding decisions must remain confidential so scientists can always make unbiased decisions; should groups of people attempt to affect global funding distribution they will lack the information to do so effectively. At the very least, the system will allow funding agencies to confidentially study and monitor the flow of funding in the aggregate; potential abuse such as circular funding schemes can be identified and remediated. This data will furthermore support Science of Science efforts to identify new emerging areas of research and future priorities.
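As a sketch of how such aggregate monitoring might work, circular funding corresponds to cycles in the directed graph of transfers, which are straightforward to enumerate. The snippet below uses the networkx library; the threshold, field names, and the idea of filtering by amount are assumptions made purely for illustration, not a description of any agency's actual procedure:

    import networkx as nx

    def suspicious_cycles(donations, min_amount=10_000):
        """Return simple cycles in the donation graph, keeping only sizeable transfers.

        donations : iterable of (donor, recipient, amount) tuples
        """
        G = nx.DiGraph()
        for donor, recipient, amount in donations:
            if amount >= min_amount:
                G.add_edge(donor, recipient, amount=amount)
        return list(nx.simple_cycles(G))

    # Example: two scientists repeatedly funding each other show up as a 2-cycle.
    print(suspicious_cycles([("A", "B", 50_000), ("B", "A", 45_000), ("B", "C", 5_000)]))
    # -> [['A', 'B']] (the order of nodes within a cycle may vary)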
Such an open and dynamic funding system might also induce profound changes in scholarly communication. Scientists and researchers may feel more strongly compelled to openly and freely share results with the public and their community if this attracts the interest of colleagues and therefore potential donors. A “publish or perish” strategy may matter less than clearly and compellingly communicating the outcomes, scientific merit, broader impact, vision, and agenda of one's research programs so as to convince the scientific community to contribute to it.
 
Peer review of proposals has served science well for decades, but perhaps it is time for funding agencies to consider alternative approaches to the public funding of research—based on advances in mathematics and modern technology—to optimize their return on investment. The system proposed here would incur only a fraction of the costs associated with traditional peer review, yet may yield comparable or even better results. The savings of financial and human resources could be used to identify new targets of opportunity, to support the translation of scientific results into products and jobs, and to help communicate advances in science and technology.

Acknowledgments

The authors acknowledge support by the National Science Foundation under grant SBE #0914939, the Andrew W. Mellon Foundation, and National Institutes of Health award U01 GM098959.

Footnotes

  • The authors declare that they have no conflict of interest.

References
