In empirical research, the term may refer to authors under-reporting
unexpected or undesirable experimental results and attributing them to
sampling or measurement error, while being more trusting of expected or
desirable results, even though both may be subject to the same sources
of error. In this context, reporting bias can eventually lead to a
status quo in which multiple investigators discover and discard the
same results, and later experimenters justify their own reporting bias
by observing that previous experimenters reported different results.
Thus, each incident of reporting bias can make future incidents more
likely.
Reporting biases in research
Research
can only contribute to knowledge if it is communicated from
investigators to the community. The generally accepted primary means of
communication is “full” publication of the study methods and results in
an article published in a scientific journal. Sometimes, investigators
also choose to present their findings at a scientific meeting, as
either an oral or a poster presentation. These presentations enter the
scientific record as brief “abstracts”, which may or may not be recorded
in publicly accessible documents of the kind typically found in
libraries or on the World Wide Web.
Sometimes, investigators fail to publish the results of entire studies. The Declaration of Helsinki and other consensus documents have outlined the ethical obligation to make results from clinical research publicly available.
Reporting bias occurs when the dissemination of research findings
is influenced by the nature and direction of the results, a problem
that affects, for instance, systematic reviews. “Positive results” is a
commonly used term for a study finding that one intervention is better
than another.
Various attempts have been made to overcome the effects of the
reporting biases, including statistical adjustments to the results of
published studies.
None of these approaches has proved satisfactory, however, and there is
increasing acceptance that reporting biases must be tackled by
establishing registers of controlled trials and by promoting good
publication practice. Until these problems have been addressed,
estimates of the effects of treatments based on published evidence may
be biased.
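As an illustration of this family of methods, the sketch below implements Egger's regression test, a widely used funnel-plot diagnostic whose asymmetry signal adjustment techniques such as trim-and-fill then attempt to correct. The helper name eggers_test and the effect data are invented for the example; this is a minimal sketch under those assumptions, not a substitute for a dedicated meta-analysis package.

```python
import numpy as np

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    an intercept far from zero suggests small-study effects such as
    publication bias. Returns (intercept, SE of intercept, t statistic).
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses        # standardized effects
    precision = 1.0 / ses    # predictor

    # Ordinary least squares: z = b0 + b1 * precision
    X = np.column_stack([np.ones_like(precision), precision])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)

    # Standard error of the intercept from the residual variance
    resid = z - X @ coef
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    se_b0 = np.sqrt(cov[0, 0])
    return coef[0], se_b0, coef[0] / se_b0

# Invented effect estimates (log odds ratios) and standard errors
effects = [0.42, 0.38, 0.55, 0.61, 0.30, 0.70]
ses = [0.10, 0.15, 0.20, 0.25, 0.12, 0.30]
b0, se_b0, t = eggers_test(effects, ses)
print(f"Egger intercept = {b0:.2f} (SE {se_b0:.2f}), t = {t:.2f}")
```

A markedly nonzero intercept indicates the small-study asymmetry that adjustment methods then try to remove; as noted above, no such correction has proved fully satisfactory.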
Case study
Litigation brought by consumers and health insurers against Pfizer over
fraudulent sales practices in the marketing of the drug gabapentin in
2004 revealed a comprehensive publication strategy that employed
elements of reporting bias. Spin was used to emphasize findings
favorable to gabapentin and to explain away findings unfavorable to the
drug. In this case, favorable secondary outcomes became the focus in
place of the original primary outcome, which was unfavorable. Other
changes found in outcome reporting include the introduction of a new
primary outcome, failure to distinguish between primary and secondary
outcomes, and failure to report one or more protocol-defined primary
outcomes.
The decision to publish certain findings in certain journals is another
strategy. Trials with statistically significant findings were published
in higher-circulation academic journals more often than trials with
nonsignificant findings. The timing of publication was also managed:
the company tried to optimize the interval between the release of two
studies, and trials with nonsignificant findings were published in a
staggered fashion so that two consecutive trials would not appear
without salient findings. Ghost authorship was also an issue:
professional medical writers who drafted the published reports were not
properly acknowledged.
As of 2014, ten years after the initial litigation, Pfizer was still settling fallout from the case.
Types of reporting bias
Publication bias
The publication or nonpublication of research findings, depending on
the nature and direction of the results. Although medical writers have
acknowledged the problem of reporting biases for over a century,
it was not until the second half of the 20th century that researchers
began to investigate its sources and size.
Over the past two decades, evidence has accumulated that failure
to publish research studies, including clinical trials testing
intervention effectiveness, is pervasive. Almost all failure to publish is due to failure of the investigator to submit; only a small proportion of studies are not published because of rejection by journals.
The most direct evidence of publication bias in the medical field
comes from follow-up studies of research projects identified at the
time of funding or ethics approval.
These studies have shown that the presence of “positive findings” is
the principal factor associated with subsequent publication:
researchers report that the reason they do not write up and submit
their findings for publication is usually that they are “not
interested” in the results (editorial rejection by journals is a rare
cause of failure to publish).
Even those investigators who have initially published their
results as conference abstracts are less likely to publish their
findings in full unless the results are “significant”.
This is a problem because data presented in abstracts are frequently
preliminary or interim results and thus may not be reliable
representations of what is found once all data have been collected and
analyzed.
In addition, abstracts are often not accessible to the public through
journals, MEDLINE, or easily accessed databases. Many are published in
conference programs, conference proceedings, or on CD-ROM, and are made
available only to meeting registrants.
The main factor associated with failure to publish is the presence of negative or null findings. Controlled trials that are eventually reported in full are published more rapidly if their results are positive.
Publication bias leads to overestimates of treatment effect in
meta-analyses, which in turn can lead doctors and decision makers to
believe a treatment is more useful than it is.
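The mechanism behind this overestimation can be demonstrated with a short simulation. In the sketch below, all parameters are invented and a plain average of published effects stands in for a formal pooled estimate: many small two-arm trials of a modest true effect are generated, only the statistically significant ones are “published”, and the published trials overstate the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.10   # modest true standardized effect (invented)
n_per_arm = 50
n_trials = 2000

effects, zs = [], []
for _ in range(n_trials):
    treat = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    diff = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / n_per_arm +
                 control.var(ddof=1) / n_per_arm)
    effects.append(diff)
    zs.append(diff / se)

effects, zs = np.array(effects), np.array(zs)
published = zs > 1.96   # only significant, positive trials are "published"

print(f"mean effect, all trials:       {effects.mean():.3f}")
print(f"mean effect, published trials: {effects[published].mean():.3f}")
```

With these numbers the published trials average several times the true effect, which is exactly the kind of inflation a meta-analysis restricted to the published literature would inherit.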
It is now well-established that publication bias is associated with the source of funding for the study.
Time lag bias
The
rapid or delayed publication of research findings, depending on the
nature and direction of the results. In a systematic review of the
literature, Hopewell and her colleagues found that overall, trials with
“positive results” (statistically significant in favor of the
experimental arm) were published about a year sooner than trials with
“null or negative results” (not statistically significant or
statistically significant in favor of the control arm).
Multiple (duplicate) publication bias
The
multiple or singular publication of research findings, depending on the
nature and direction of the results. Investigators may also publish the
same findings multiple times using a variety of patterns of “duplicate”
publication.
Many duplicates are published in journal supplements, a literature that
can be difficult to access. Positive results appear to be published
more often in duplicate, which can lead to overestimates of a treatment
effect.
Location bias
The
publication of research findings in journals with different ease of
access or levels of indexing in standard databases, depending on the
nature and direction of results. There is also evidence that, compared
to negative or null results, statistically significant results are on
average published in journals with greater impact factors,
and that publication in the mainstream (non-grey) literature is
associated with an overall greater treatment effect compared to the grey
literature.
Citation bias
The
citation or non-citation of research findings, depending on the nature
and direction of the results. Authors tend to cite positive results over
negative or null results, and this has been established over a broad
cross section of topics.
Differential citation may lead to a perception in the community that an
intervention is effective when it is not, and it may lead to
over-representation of positive findings in systematic reviews if those
left uncited are difficult to locate.
Selective pooling of results in a meta-analysis is a form of
citation bias that is particularly insidious in its potential to
influence knowledge. To minimize bias, pooling of results from similar
but separate studies requires an exhaustive search for all relevant
studies. That is, a meta-analysis (or pooling of data from multiple
studies) must always have emerged from a systematic review (not a
selective review of the literature), even though a systematic review
does not always have an associated meta-analysis.
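To make this concrete, the sketch below (with invented numbers and a hypothetical helper, pool_fixed_effect) pools trial results using standard fixed-effect, inverse-variance weighting, first over all six trials and then over only the two “positive” ones; the selective pool substantially inflates the apparent effect.

```python
import numpy as np

def pool_fixed_effect(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Invented effects (e.g., mean differences) from six similar trials
effects = np.array([0.50, 0.45, 0.05, -0.10, 0.40, 0.00])
ses = np.array([0.20, 0.25, 0.15, 0.20, 0.30, 0.25])

full, full_se = pool_fixed_effect(effects, ses)
picked, picked_se = pool_fixed_effect(effects[:2], ses[:2])  # only the "positive" trials
print(f"all six trials:      {full:.2f} (SE {full_se:.2f})")
print(f"two positive trials: {picked:.2f} (SE {picked_se:.2f})")
```

Here the exhaustive pool yields an estimate near 0.17, while pooling only the two favorable trials yields about 0.48; nothing in the arithmetic flags the omission, which is why the systematic search must come first.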
Language bias
The
publication of research findings in a particular language, depending on
the nature and direction of the results. There is a longstanding
question about whether investigators choose to publish their negative
findings in non-English-language journals and reserve their positive
findings for English-language journals. Some research has shown that
language restrictions in systematic reviews can change the results of
the review; in other cases, authors have not found that such a bias
exists.
Knowledge reporting bias
The
frequency with which people write about actions, outcomes, or
properties is not a reflection of real-world frequencies or the degree
to which a property is characteristic of a class of individuals. People
write about only some parts of the world around them; much of the
information is left unsaid.
Outcome reporting bias
The selective reporting of some outcomes but not others, depending on the nature and direction of the results. A study may be published in full, but with pre-specified outcomes omitted or misrepresented.
Efficacy outcomes that are statistically significant have a higher
chance of being fully published compared to those that are not
statistically significant.
Selective reporting of suspected or confirmed adverse treatment
effects is an area for particular concern because of the potential for
patient harm. In a study of adverse drug events submitted to
Scandinavian drug licensing authorities, reports for published studies
were less likely than reports for unpublished studies to record adverse
events (for example, 56% vs. 77%, respectively, for Finnish trials
involving psychotropic drugs).
Recent attention in the lay and scientific media on failure to
accurately report adverse events for drugs (e.g., selective serotonin
reuptake inhibitors, rosiglitazone, rofecoxib) has resulted in additional
publications, too numerous to review, indicating substantial selective
outcome reporting (mainly suppression) of known or suspected adverse
events.