The replication crisis (also called the replicability crisis and the reproducibility crisis) is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate or reproduce. The replication crisis most severely affects the social sciences and medicine. The phrase was coined in the early 2010s as part of a growing awareness of the problem. The replication crisis represents an important body of research in the field of metascience.
Because the reproducibility of experimental results is an essential part of the scientific method, an inability to replicate the studies of others has potentially grave consequences for many fields of science in which significant theories are grounded on unreproducible experimental work. The replication crisis has been particularly widely discussed in the field of medicine, where a number of efforts have been made to re-investigate classic results, to determine both the reliability of the results and, if found to be unreliable, the reasons for the failure of replication.
Scope
Overall
A 2016 poll of 1,500 scientists reported that 70% of them had failed to reproduce at least one other scientist's experiment (50% had failed to reproduce one of their own experiments). In 2009, 2% of scientists admitted to falsifying studies at least once and 14% admitted to personally knowing someone who did. Misconduct was reported more frequently by medical researchers than by others.
In psychology
Several factors have combined to put psychology at the center of the controversy. According to a 2018 survey of 200 meta-analyses, "psychological research is, on average, afflicted with low statistical power". Much of the focus has been on the area of social psychology, although other areas of psychology such as clinical psychology, developmental psychology, and educational research have also been implicated.
Firstly, questionable research practices (QRPs) have been identified as common in the field. Such practices, while not intentionally fraudulent, involve capitalizing on the gray area of acceptable scientific practices or exploiting flexibility in data collection, analysis, and reporting, often in an effort to obtain a desired outcome. Examples of QRPs include selective reporting or partial publication of data (reporting only some of the study conditions or collected dependent measures in a publication), optional stopping (choosing when to stop data collection, often based on statistical significance of tests), post-hoc storytelling (framing exploratory analyses as confirmatory analyses), and manipulation of outliers (either removing outliers or leaving outliers in a dataset to cause a statistical test to be significant). A survey of over 2,000 psychologists indicated that a majority of respondents admitted to using at least one QRP. Publication bias (see the "Causes" section below) leads to an elevated number of false-positive results. It is augmented by the pressure to publish as well as authors' own confirmation bias, and is an inherent hazard in the field, requiring a certain degree of skepticism on the part of readers.
Secondly, psychology, and social psychology in particular, has found itself at the center of several scandals involving outright fraudulent research, most notably the admitted data fabrication by Diederik Stapel as well as allegations against others. However, most scholars acknowledge that fraud is, perhaps, a lesser contributor to the replication crisis.
Thirdly, several effects in psychological science had been found difficult to replicate even before the current replication crisis. For example, the scientific journal Judgment and Decision Making has published several studies over the years that fail to provide support for the unconscious thought theory. Replications appear particularly difficult when research trials are pre-registered and conducted by research groups not highly invested in the theory in question.
These three elements together have resulted in renewed attention to replication, supported by psychologist Daniel Kahneman. Scrutiny of many effects has shown that several core beliefs are hard to replicate. A 2014 special edition of the journal Social Psychology focused on replication studies, and a number of previously held beliefs were found to be difficult to replicate. A 2012 special edition of the journal Perspectives on Psychological Science also focused on issues, ranging from publication bias to null-aversion, that contribute to the replication crisis in psychology. In 2015, the first open empirical study of reproducibility in psychology was published, called the Reproducibility Project. Researchers from around the world collaborated to replicate 100 empirical studies from three top psychology journals. Fewer than half of the attempted replications were successful at producing statistically significant results in the expected directions, though most of the attempted replications did produce trends in the expected directions.
Many research trials and meta-analyses are compromised by poor quality and conflicts of interest that involve both authors and professional advocacy organizations, resulting in many false positives regarding the effectiveness of certain types of psychotherapy.
Although the British newspaper The Independent wrote that the results of the reproducibility project show that much of the published research is just "psycho-babble", the replication crisis does not necessarily mean that psychology is unscientific. Rather, this pruning is part of the scientific process, in which old ideas and those that cannot withstand careful scrutiny are discarded, although the pruning is not always effective. The consequence is that some areas of psychology once considered solid, such as social priming, have come under increased scrutiny due to failed replications.
Nobel laureate and professor emeritus in psychology Daniel Kahneman argued that the original authors should be involved in the replication effort, because the published methods are often too vague. Others, such as Andrew Wilson, disagree and argue that the methods should instead be written down in detail. An investigation of replication rates in psychology in 2012 indicated higher replication success rates in studies where there was author overlap with the original authors (91.7% successful replications with author overlap, compared to 64.6% without).
Focus on the replication crisis has led to other renewed efforts in the discipline to re-test important findings. In response to concerns about publication bias and p-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are accepted not after completion on the basis of their findings, but before data collection, on the basis of the methodological rigor of their experimental designs and the theoretical justification for their planned statistical analyses. Early analysis of this procedure has estimated that 61 percent of result-blind studies have led to null results, in contrast to an estimated 5 to 20 percent in earlier research. In addition, large-scale collaborations between researchers working in multiple labs in different countries, which regularly make their data openly available for different researchers to assess, have become much more common in the field.
Psychology replication rates
A report by the Open Science Collaboration in August 2015 that was coordinated by Brian Nosek estimated the reproducibility of 100 studies in psychological science from three high-ranking psychology journals. Overall, 36% of the replications yielded significant findings (p-value below 0.05), compared to 97% of the original studies that had significant effects. The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies.
The same paper examined the reproducibility rates and effect sizes by journal (Journal of Personality and Social Psychology [JPSP], Journal of Experimental Psychology: Learning, Memory, and Cognition [JEP:LMC], Psychological Science [PSCI]) and discipline (social psychology, cognitive psychology). Study replication rates were 23% for JPSP, 48% for JEP:LMC, and 38% for PSCI. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%).
An analysis of the publication history in the top 100 psychology journals between 1900 and 2012 indicated that approximately 1.6% of all psychology publications were replication attempts. Articles were considered a replication attempt if the term "replication" appeared in the text. A randomly selected subset of 500 of those studies was examined more closely; only 342 of them (68.4%) were actual replications, implying an adjusted replication rate of 1.07%. Within that subset, 78.9% of published replication attempts were successful.
A study published in 2018 in Nature Human Behaviour sought to replicate 21 social and behavioral science papers from Nature and Science, finding that only 13 could be successfully replicated. Similarly, in a study conducted under the auspices of the Center for Open Science, a team of 186 researchers from 60 different laboratories (representing 36 different nationalities from 6 different continents) conducted replications of 28 classic and contemporary findings in psychology. The focus of the study was not only on whether or not the findings from the original papers replicated, but also on the extent to which findings varied as a function of variations in samples and contexts. Overall, 14 of the 28 findings failed to replicate despite massive sample sizes. However, if a finding replicated, it replicated in most samples, while if a finding was not replicated, it failed to replicate with little variation across samples and contexts. This evidence is inconsistent with a popular explanation that failures to replicate in psychology are likely due to changes in the sample between the original and replication study.
A disciplinary social dilemma
Highlighting the social structure that discourages replication in psychology, Brian D. Earp and Jim A. C. Everett enumerated five points as to why replication attempts are uncommon:
- "Independent, direct replications of others' findings can be time-consuming for the replicating researcher"
- "[Replications] are likely to take energy and resources directly away from other projects that reflect one's own original thinking"
- "[Replications] are generally harder to publish (in large part because they are viewed as being unoriginal)"
- "Even if [replications] are published, they are likely to be seen as 'bricklaying' exercises, rather than as major contributions to the field"
- "[Replications] bring less recognition and reward, and even basic career security, to their authors"
For these reasons, the authors argued that psychology faces a disciplinary social dilemma, in which the interests of the discipline are at odds with the interests of the individual researcher.
"Methodological terrorism" controversy
With the replication crisis in psychology attracting attention, Princeton University psychologist Susan Fiske drew controversy for calling out critics of psychology. She labeled these unidentified "adversaries" with names such as "methodological terrorist" and "self-appointed data police", and said that criticism of psychology should only be expressed in private or through contacting the journals. Columbia University statistician and political scientist Andrew Gelman responded to Fiske, saying that she had found herself willing to tolerate the "dead paradigm" of faulty statistics and had refused to retract publications even when errors were pointed out. He added that her tenure as editor had been abysmal and that a number of published papers edited by her were found to be based on extremely weak statistics; one of Fiske's own published papers had a major statistical error and "impossible" conclusions.
In medicine
Out of 49 medical studies from 1990–2003 with more than 1,000 citations, 45 claimed that the studied therapy was effective. Of these studies, 16% were contradicted by subsequent studies, 16% had found stronger effects than did subsequent studies, 44% were replicated, and 24% remained largely unchallenged. The US Food and Drug Administration in 1977–1990 found flaws in 10–20% of medical studies. In a paper published in 2012, Glenn Begley, a biotech consultant working at Amgen, and Lee Ellis, at the University of Texas, found that only 11% of 53 pre-clinical cancer studies could be replicated. The irreproducible studies had a number of features in common: investigators were not blinded to the experimental versus the control arms, experiments were not repeated, positive and negative controls were lacking, not all of the data were shown, statistical tests were used inappropriately, and reagents were not appropriately validated.
A survey on cancer researchers found that half of them had been unable to reproduce a published result. A similar survey by Nature on 1,576 researchers who took a brief online questionnaire on reproducibility showed that more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. "Although 52% of those surveyed agree there is a significant 'crisis' of reproducibility, less than 31% think failure to reproduce published results means the result is probably wrong, and most say they still trust the published literature."
A 2016 article by John Ioannidis, Professor of Medicine and of Health Research and Policy at Stanford University School of Medicine and a Professor of Statistics at Stanford University School of Humanities and Sciences, elaborated on "Why Most Clinical Research Is Not Useful". In the article Ioannidis laid out some of the problems and called for reform, characterizing certain points for medical research to be useful again; one example he made was the need for medicine to be "patient centered" (e.g. in the form of the Patient-Centered Outcomes Research Institute) instead of the current practice to mainly take care of "the needs of physicians, investigators, or sponsors".
In marketing
Marketing is another discipline with a "desperate need" for replication. Many famous marketing studies fail to hold up when replicated, a notable example being the "too-many-choices" effect, in which a large number of product choices makes a consumer less likely to purchase. In addition to the previously mentioned arguments, replication studies in marketing are needed to examine the applicability of theories and models across countries and cultures, which is especially important because of possible influences of globalization.
In economics
A 2016 study in the journal Science found that one-third of 18 experimental studies from two top-tier economics journals (American Economic Review and the Quarterly Journal of Economics) failed to successfully replicate. A 2017 study in the Economic Journal suggested that "the majority of the average effects in the empirical economics literature are exaggerated by a factor of at least 2 and at least one-third are exaggerated by a factor of 4 or more".
In sports science
A 2018 study took the field of exercise and sports science to task for insufficient replication studies, limited reporting of both null and trivial results, and insufficient research transparency. Statisticians have criticized sports science for common use of a controversial statistical method called "magnitude-based inference" which has allowed sports scientists to extract apparently significant results from noisy data where ordinary hypothesis testing would have found none.
In water resource management
A 2019 study in Scientific Data suggested that only a small number of articles in water resources and management journals could be reproduced, while the majority of articles were not replicable due to data unavailability. The study estimated with 95% confidence that "results might be reproduced for only 0.6% to 6.8% of all 1,989 articles".
Political repercussions
In the US, science's reproducibility crisis has become a topic of political contention, linked to attempts to weaken regulations (e.g., of emissions of pollutants) with the argument that these regulations are based on non-reproducible science. Previous attempts with the same aim accused studies used by regulators of being non-transparent.
Public awareness and perceptions
Concerns have been expressed within the scientific community that the general public may consider science less credible due to failed replications. Research supporting this concern is sparse, but a nationally representative survey in Germany showed that more than 75% of Germans have not heard of replication failures in science. The study also found that most Germans have positive perceptions of replication efforts: Only 18% think that non-replicability shows that science cannot be trusted, while 65% think that replication research shows that science applies quality control, and 80% agree that errors and corrections are part of science.
Causes
A major cause of low reproducibility is publication bias and selection bias, which arise because statistically insignificant results are rarely published or discussed in publications on multiple potential effects. Among potential effects that are nonexistent (or tiny), statistical tests nevertheless show significance (at the usual 5% level) with 5% probability. If a large number of such effects are screened in a chase for significant results, these erroneously significant findings crowd out the genuine ones, and a replication attempt of such a false positive will itself succeed, again erroneously, with only 5% probability. A growing proportion of such studies thus progressively lowers the replication rate relative to studies of plausibly relevant effects. Erroneously significant results may also come from questionable practices in data analysis, such as data dredging (p-hacking), HARKing, and exploiting researcher degrees of freedom.
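The mechanism can be illustrated with a short simulation (a minimal sketch; the share of real effects, the statistical power, and the replication model are illustrative assumptions, not estimates from the literature):

```python
import numpy as np

rng = np.random.default_rng(0)
n_effects = 10_000   # hypotheses screened in the chase for significance
prop_real = 0.10     # assumed share of genuinely non-null effects
alpha, power = 0.05, 0.80

real = rng.random(n_effects) < prop_real
# A real effect reaches significance with probability `power`;
# a null effect does so (a false positive) with probability `alpha`.
significant = np.where(real,
                       rng.random(n_effects) < power,
                       rng.random(n_effects) < alpha)

# Publication bias: only significant results get published.
true_pos = int(np.sum(real & significant))     # ~ 800
false_pos = int(np.sum(~real & significant))   # ~ 450

# A replication succeeds with probability `power` for a real effect
# and probability `alpha` for a false positive:
replication_rate = (true_pos * power + false_pos * alpha) / (true_pos + false_pos)
print(f"expected replication rate: {replication_rate:.2f}")  # ~ 0.53
```

Even with nothing worse than honest testing at the 5% level, roughly a third of the published findings in this toy literature are false positives, and the expected replication rate falls to about one half.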
Glenn Begley and John Ioannidis proposed the following causes of the intensifying chase for significance:
- Generation of new data/publications at an unprecedented rate.
- The majority of these discoveries will not stand the test of time.
- Failure to adhere to good scientific practice and the desperation to publish or perish.
- Multiple varied stakeholders.
They conclude that no party is solely responsible, and no single solution will suffice.
These issues may lead to the canonization of false facts.
In fact, some predictions of an impending crisis in the quality control mechanism of science can be traced back several decades, especially among scholars in science and technology studies (STS). Derek de Solla Price – considered the father of scientometrics – predicted that science could reach 'senility' as a result of its own exponential growth. Some present-day literature seems to vindicate this 'overflow' prophecy, lamenting the decay in both attention and quality.
Philosopher and historian of science Jerome R. Ravetz predicted in his 1971 book Scientific Knowledge and Its Social Problems that science – in its progression from "little" science composed of isolated communities of researchers, to "big" science or "techno-science" – would suffer major problems in its internal system of quality control. Ravetz recognized that the incentive structure for modern scientists could become dysfunctional – what is now known as the 'publish or perish' challenge – creating perverse incentives to publish any findings, however dubious. According to Ravetz, quality in science is maintained only when there is a community of scholars linked by a set of shared norms and standards, all of whom are willing and able to hold one another accountable.
Historian Philip Mirowski offered a similar diagnosis in his 2011 book Science Mart. The 'Mart' in the title alludes to the retail giant Walmart, used by Mirowski as a metaphor for the commodification of science. In Mirowski's analysis, the quality of science collapses when it becomes a commodity traded in a market. Mirowski argues his case by tracing the decay of science to the decision of major corporations to close their in-house laboratories. They outsourced their work to universities in an effort to reduce costs and increase profits. The corporations subsequently moved their research away from universities to an even cheaper option – contract research organizations (CROs).
The crisis of science's quality control system is affecting the use of science for policy. This is the thesis of a recent work by a group of STS scholars, who identify 'evidence-based (or evidence-informed) policy' as a present point of tension. Economist Noah Smith suggests that a factor in the crisis has been the overvaluing of research in academia and undervaluing of teaching ability, especially in fields with few major recent discoveries.
Social systems theory, due to the German sociologist Niklas Luhmann, offers another reading of the crisis. According to this theory, each of the systems such as 'economy', 'science', 'religion', and 'media' communicates using its own code: true/false for science, profit/loss for the economy, news/no-news for the media. According to some sociologists, science's mediatization, its commodification, and its politicization – as a result of the structural coupling among systems – have led to a confusion of the original system codes. If science's true/false code is substituted by those of the other systems, such as profit/loss or news/no-news, science's operation enters into an internal crisis.
Response
Replication has been referred to as "the cornerstone of science". Replication studies attempt to evaluate whether published results reflect true findings or false positives. The integrity of scientific findings and reproducibility of research are important as they form the knowledge foundation on which future studies are built.
Metascience
Metascience is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing waste. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science." In the words of John Ioannidis, "Science is the best thing that has happened to human beings ... but we can do it better."
Meta-research continues to be conducted to identify the roots of the crisis and to address them. Methods of addressing the crisis include pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methodology and reporting. There are continuing efforts to reform the system of academic incentives, to improve the peer review process, to reduce the misuse of statistics, to combat bias in scientific literature, and to increase the overall quality and efficiency of the scientific process.
Tackling publication bias with pre-registration of studies
A recent innovation in scientific publishing to address the replication crisis is the registered report format. The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan is vetted through peer review, publication of the findings is provisionally guaranteed, contingent on the authors following the proposed protocol. One goal of registered reports is to circumvent the publication bias toward significant findings that can lead to implementation of questionable research practices, and to encourage publication of studies with rigorous methods.
The journal Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor-in-chief also noted that the editorial staff will be asking for replication of studies with surprising findings based on small sample sizes before allowing the manuscripts to be published.
Moreover, only a very small proportion of academic journals in psychology and the neurosciences explicitly state that they welcome submissions of replication studies in their aims and scope or instructions to authors. This does not encourage the reporting of, or even attempts at, replication studies.
Shift to a complex systems paradigm
It has been argued that research endeavours working within the conventional linear paradigm necessarily end up in replication difficulties. Problems arise if the causal processes in the system under study are "interaction-dominant" instead of "component-dominant", multiplicative instead of additive, and with many small non-linear interactions producing macro-level phenomena that are not reducible to their micro-level components. In the context of such complex systems, conventional linear models produce answers that are not reasonable, because it is not in principle possible to decompose the variance as suggested by the General Linear Model (GLM) framework – aiming to reproduce such a result is hence evidently problematic. The same questions are currently being asked in many fields of science, where researchers are starting to question assumptions underlying classical statistical methods.
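A toy contrast makes the point concrete (a minimal sketch; the data-generating processes are invented purely for illustration). In a purely multiplicative, interaction-dominant system, a main-effects GLM explains essentially none of the variance, even though the outcome is a deterministic function of its inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)

y_additive = x1 + x2        # component-dominant: effects simply add up
y_interactive = x1 * x2     # interaction-dominant: purely multiplicative

def r2_main_effects(y, x1, x2):
    """R^2 of an OLS fit with main effects only (the GLM decomposition)."""
    X = np.column_stack([np.ones_like(x1), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"additive system:    R^2 = {r2_main_effects(y_additive, x1, x2):.3f}")    # ~ 1.000
print(f"interactive system: R^2 = {r2_main_effects(y_interactive, x1, x2):.3f}") # ~ 0.000
```

Any main effect such a model "detects" in the interactive system is sampling noise, so a replication attempt has no stable effect to recover.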
Emphasizing replication attempts in teaching
Based on coursework in experimental methods at MIT, Stanford, and the University of Washington, it has been suggested that methods courses in psychology and other fields emphasize replication attempts rather than original studies. Such an approach would help students learn scientific methodology and provide numerous independent replications of meaningful scientific findings that would test the replicability of scientific findings. Some have recommended that graduate students should be required to publish a high-quality replication attempt on a topic related to their doctoral research prior to graduation.
Reducing the p-value required for claiming significance of new results
Many publications require a p-value of p < 0.05 to claim statistical significance. The paper "Redefine statistical significance", signed by a large number of scientists and mathematicians, proposes that in "fields where the threshold for defining statistical significance for new discoveries is p < 0.05, we propose a change to p < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields."
Their rationale is that "a leading cause of non-reproducibility (is that the) statistical standards of evidence for claiming new discoveries in many fields of science are simply too low. Associating 'statistically significant' findings with p < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems."
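A back-of-the-envelope calculation illustrates this rationale (a sketch; the prior proportion of real effects π = 0.1 and the power 1 − β = 0.8 are illustrative assumptions, not values from the paper). Among significant findings, the proportion that are false positives is

```latex
\mathrm{FPR} \;=\; \frac{\alpha\,(1-\pi)}{\alpha\,(1-\pi) + (1-\beta)\,\pi}
```

With π = 0.1 and 1 − β = 0.8, a threshold of α = 0.05 gives FPR = 0.045 / 0.125 ≈ 36%, while α = 0.005 gives FPR = 0.0045 / 0.0845 ≈ 5%: tightening the threshold tenfold cuts the share of false positives among "discoveries" roughly sevenfold.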
This call was subsequently criticised by another large group, who argued that "redefining" the threshold would not fix current problems, would lead to some new ones, and that in the end, all thresholds needed to be justified case-by-case instead of following general conventions.
Addressing the misinterpretation of p-values
Although statisticians are unanimous that the use of p < 0.05 provides weaker evidence than is generally appreciated, there is a lack of unanimity about what should be done about it. Some have advocated that Bayesian methods should replace p-values. This has not happened on a wide scale, partly because it is complicated, and partly because many users distrust the specification of prior distributions in the absence of hard data. A simplified version of the Bayesian argument, based on testing a point null hypothesis, was suggested by Colquhoun (2014, 2017). The logical problems of inductive inference were discussed in "The problem with p-values" (2016).
The hazards of reliance on p-values were emphasized by pointing out that even an observation of p = 0.001 is not necessarily strong evidence against the null hypothesis. Although the likelihood ratio in favour of the alternative hypothesis over the null is close to 100, if the hypothesis was implausible, with a prior probability of a real effect of 0.1, even the observation of p = 0.001 would have a false positive risk of 8 percent. It would not even reach the 5 percent level.
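The 8 percent figure follows directly from Bayes' theorem in odds form (a sketch of the arithmetic, using the likelihood ratio of about 100 quoted above):

```latex
\underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}} \times L
= \frac{0.1}{0.9} \times 100 \approx 11.1,
\qquad
P(H_0 \mid p = 0.001) = \frac{1}{1 + 11.1} \approx 0.083 \approx 8\%.
```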
It was recommended that the terms "significant" and "non-significant" should not be used. p-values and confidence intervals should still be specified, but they should be accompanied by an indication of the false positive risk. It was suggested that the best way to do this is to calculate the prior probability that one would need to believe in order to achieve a false positive risk of, say, 5%. The calculations can be done with R scripts that are provided, or, more simply, with a web calculator. This so-called reverse Bayesian approach, which was suggested by Matthews (2001), is one way to avoid the problem that the prior probability is rarely known.
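The reverse Bayesian step is a simple rearrangement of the odds form of Bayes' theorem. The sketch below is written in Python rather than the R scripts referred to above, and it takes the likelihood ratio as given instead of deriving it from the test statistic; it illustrates the idea and is not a reimplementation of the published calculator:

```python
def required_prior(likelihood_ratio: float, target_fpr: float) -> float:
    """Prior P(real effect) needed for P(H0 | data) to equal target_fpr.

    Posterior odds of a real effect are prior_odds * likelihood_ratio,
    and the false positive risk is 1 / (1 + posterior odds); solve backwards.
    """
    posterior_odds = (1 - target_fpr) / target_fpr  # odds of H1 implied by the target
    prior_odds = posterior_odds / likelihood_ratio  # undo the evidence
    return prior_odds / (1 + prior_odds)

# Evidence with a likelihood ratio of ~100 (roughly p = 0.001, as above)
# requires a prior of about 0.16 to bring the false positive risk down to 5%:
print(f"{required_prior(likelihood_ratio=100, target_fpr=0.05):.2f}")  # 0.16
```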
Encouraging larger sample sizes
To improve the quality of replications, larger sample sizes than those used in the original study are often needed. Larger sample sizes are needed because estimates of effect sizes in published work are often exaggerated due to publication bias and the large sampling variability associated with small sample sizes in an original study. Further, using significance thresholds usually leads to inflated effect estimates because, particularly with small sample sizes, only the largest effects will reach significance.
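This inflation, sometimes called the significance filter or winner's curse, is easy to demonstrate in simulation (a minimal sketch; the true effect size, sample size, and threshold are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, n, alpha, trials = 0.2, 20, 0.05, 10_000  # small effect, small samples

significant_estimates = []
for _ in range(trials):
    sample = rng.normal(true_effect, 1.0, size=n)  # true Cohen's d = 0.2
    t, p = stats.ttest_1samp(sample, 0.0)
    if p < alpha and t > 0:                        # the "significance filter"
        significant_estimates.append(sample.mean())

print(f"true effect:                    {true_effect}")
print(f"mean estimate when significant: {np.mean(significant_estimates):.2f}")
# With n = 20 the test is underpowered, so only samples that happen to
# overestimate the effect reach significance; the significant (publishable)
# estimates are inflated two- to three-fold relative to the true effect.
```

A replication planned around the inflated published estimate will itself be underpowered, which is why replication sample sizes are typically chosen to detect a substantially smaller effect than originally reported.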
Sharing raw data in online repositories
Online repositories where data, protocols, and findings can be stored and evaluated by the public seek to improve the integrity and reproducibility of research. Examples of such repositories include the Open Science Framework, Registry of Research Data Repositories, and Psychfiledrawer.org. Sites like Open Science Framework offer badges for using open science practices in an effort to incentivize scientists. However, there has been concern that those who are most likely to provide their data and code for analyses are the researchers that are likely the most sophisticated. John Ioannidis at Stanford University suggested that "the paradox may arise that the most meticulous and sophisticated and method-savvy and careful researchers may become more susceptible to criticism and reputation attacks by reanalyzers who hunt for errors, no matter how negligible these errors are".
Funding for replication studies
In July 2016 the Netherlands Organisation for Scientific Research made €3 million available for replication studies. The funding is for replication based on reanalysis of existing data and replication by collecting and analysing new data. Funding is available in the areas of social sciences, health research and healthcare innovation.
In 2013 the Laura and John Arnold Foundation funded the launch of the Center for Open Science with a $5.25 million grant, and by 2017 it had provided an additional $10 million in funding. It also funded the launch of the Meta-Research Innovation Center at Stanford University, run by John Ioannidis and Steven Goodman, to study ways to improve scientific research. It also provided funding for the AllTrials initiative led in part by Ben Goldacre.
Emphasize triangulation, not just replication
Marcus R. Munafò and George Davey Smith argue, in a piece published by Nature, that research should emphasize triangulation, not just replication. They claim that,
replication alone will get us only so far (and) might actually make matters worse ... We believe that an essential protection against flawed ideas is triangulation. This is the strategic use of multiple approaches to address one question. Each approach has its own unrelated assumptions, strengths and weaknesses. Results that agree across different methodologies are less likely to be artefacts. ... Maybe one reason replication has captured so much interest is the often-repeated idea that falsification is at the heart of the scientific enterprise. This idea was popularized by Karl Popper's 1950s maxim that theories can never be proved, only falsified. Yet an overemphasis on repeating experiments could provide an unfounded sense of certainty about findings that rely on a single approach. ... philosophers of science have moved on since Popper. Better descriptions of how scientists actually work include what epistemologist Peter Lipton called in 1991 "inference to the best explanation".
Raise the overall standards of methods presentation
Some authors have argued that the insufficient communication of experimental methods is a major contributor to the reproducibility crisis and that improving the quality of how experimental design and statistical analyses are reported would help improve the situation. These authors tend to call both for a broad cultural change in how the scientific community regards statistics and for a more coercive push from scientific journals and funding bodies.
Implications for the pharmaceutical industry
Pharmaceutical companies and venture capitalists maintain research laboratories or contract with private research service providers (e.g. Envigo and Smart Assays Biotechnologies) whose job is to replicate academic studies, in order to test whether they are accurate prior to investing in or trying to develop a new drug based on that research. The financial stakes are high for the company and investors, so it is cost-effective for them to invest in exact replications. Execution of replication studies consumes resources. Further, doing an expert replication requires not only generic expertise in research methodology, but specific expertise in the often narrow topic of interest. Sometimes research requires specific technical skills and knowledge, and only researchers dedicated to a narrow area of research might have those skills. At present, funding agencies are rarely interested in bankrolling replication studies, and most scientific journals are not interested in publishing such results. Amgen Oncology's cancer researchers were only able to replicate 11 percent of the innovative studies they selected to pursue over a 10-year period; a 2011 analysis by researchers with pharmaceutical company Bayer found that the company's in-house findings agreed with the original results only a quarter of the time, at most. The analysis also revealed that, when Bayer scientists were able to reproduce a result in a direct replication experiment, it tended to translate well into clinical applications, meaning that reproducibility is a useful marker of clinical potential.