
Wednesday, January 2, 2019

Randomized controlled trial

From Wikipedia, the free encyclopedia

Flowchart of four phases (enrollment, intervention allocation, follow-up, and data analysis) of a parallel randomized trial of two groups, modified from the CONSORT (Consolidated Standards of Reporting Trials) 2010 Statement
 
A randomized controlled trial (or randomized control trial; RCT) is a type of scientific (often medical) experiment which aims to reduce bias when testing a new treatment. The people participating in the trial are randomly allocated to either the group receiving the treatment under investigation or to a group receiving standard treatment (or placebo treatment) as the control. Randomization minimizes selection bias and the different comparison groups allow the researchers to determine any effects of the treatment when compared with the no treatment (control) group, while other variables are kept constant. The RCT is often considered the gold standard for a clinical trial. RCTs are often used to test the efficacy or effectiveness of various types of medical intervention and may provide information about adverse effects, such as drug reactions. Random assignment of intervention is done after subjects have been assessed for eligibility and recruited, but before the intervention to be studied begins.

Random allocation in real trials is complex, but conceptually the process is like tossing a coin. After randomization, the two (or more) groups of subjects are followed in exactly the same way, and the only differences between them should be the care they receive: procedures, tests, outpatient visits, and follow-up calls, for example, should be those intrinsic to the treatments being compared. The most important advantage of proper randomization is that it minimizes allocation bias, balancing both known and unknown prognostic factors in the assignment of treatments.

The terms "RCT" and randomized trial are sometimes used synonymously, but the methodologically sound practice is to reserve the "RCT" name only for trials that contain control groups, in which groups receiving the experimental treatment are compared with control groups receiving no treatment (a placebo-controlled study) or a previously tested treatment (a positive-control study). The term "randomized trials" omits mention of controls and can describe studies that compare multiple treatment groups with each other (in the absence of a control group). Similarly, although the "RCT" name is sometimes expanded as "randomized clinical trial" or "randomized comparative trial", the methodologically sound practice, to avoid ambiguity in the scientific literature, is to retain "control" in the definition of "RCT" and thus reserve that name only for trials that contain controls. Not all randomized clinical trials are randomized controlled trials (and some of them could never be, in cases where controls would be impractical or unethical to institute). The term randomized controlled clinical trials is a methodologically sound alternate expansion for "RCT" in RCTs that concern clinical research; however, RCTs are also employed in other research areas, including many of the social sciences.

History

The first reported clinical trial was conducted by James Lind in 1747 to identify a treatment for scurvy. Randomized experiments appeared in psychology, where they were introduced by Charles Sanders Peirce, and in education. Later, randomized experiments appeared in agriculture, due to Jerzy Neyman and Ronald A. Fisher. Fisher's experimental research and his writings popularized randomized experiments.

The first published RCT in medicine appeared in the 1948 paper entitled "Streptomycin treatment of pulmonary tuberculosis", which described a Medical Research Council investigation. One of the authors of that paper was Austin Bradford Hill, who is credited as having conceived the modern RCT.

By the late 20th century, RCTs were recognized as the standard method for "rational therapeutics" in medicine. As of 2004, more than 150,000 RCTs were in the Cochrane Library. To improve the reporting of RCTs in the medical literature, an international group of scientists and editors published Consolidated Standards of Reporting Trials (CONSORT) Statements in 1996, 2001 and 2010, and these have become widely accepted. Randomization is the process of assigning trial subjects to treatment or control groups using an element of chance to determine the assignments in order to reduce the bias.

Ethics

Although the principle of clinical equipoise ("genuine uncertainty within the expert medical community... about the preferred treatment") common to clinical trials has been applied to RCTs, the ethics of RCTs have special considerations. For one, it has been argued that equipoise itself is insufficient to justify RCTs. For another, "collective equipoise" can conflict with a lack of personal equipoise (e.g., a personal belief that an intervention is effective). Finally, Zelen's design, which has been used for some RCTs, randomizes subjects before they provide informed consent, which may be ethical for RCTs of screening and selected therapies, but is likely unethical "for most therapeutic trials."

Although subjects almost always provide informed consent for their participation in an RCT, studies since 1982 have documented that RCT subjects may believe that they are certain to receive treatment that is best for them personally; that is, they do not understand the difference between research and treatment. Further research is necessary to determine the prevalence of and ways to address this "therapeutic misconception".

Variations on the RCT method may also create cultural effects that are not well understood. For example, patients with terminal illness may join trials in the hope of being cured, even when treatments are unlikely to be successful.

Trial registration

In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that all trials starting enrolment after July 1, 2005 must be registered prior to consideration for publication in one of the 12 member journals of the committee. However, trial registration may still occur late or not at all. Medical journals have been slow in adopting policies requiring mandatory clinical trial registration as a prerequisite for publication.

Classifications

By study design

One way to classify RCTs is by study design. From most to least common in the healthcare literature, the major categories of RCT study designs are:
  • Parallel-group – each participant is randomly assigned to a group, and all the participants in the group receive (or do not receive) an intervention.
  • Crossover – over time, each participant receives (or does not receive) an intervention in a random sequence.
  • Cluster – pre-existing groups of participants (e.g., villages, schools) are randomly selected to receive (or not receive) an intervention.
  • Factorial – each participant is randomly assigned to a group that receives a particular combination of interventions or non-interventions (e.g., group 1 receives vitamin X and vitamin Y, group 2 receives vitamin X and placebo Y, group 3 receives placebo X and vitamin Y, and group 4 receives placebo X and placebo Y).
An analysis of the 616 RCTs indexed in PubMed during December 2006 found that 78% were parallel-group trials, 16% were crossover, 2% were split-body, 2% were cluster, and 2% were factorial.

By outcome of interest (efficacy vs. effectiveness)

RCTs can be classified as "explanatory" or "pragmatic." Explanatory RCTs test efficacy in a research setting with highly selected participants and under highly controlled conditions. In contrast, pragmatic RCTs (pRCTs) test effectiveness in everyday practice with relatively unselected participants and under flexible conditions; in this way, pragmatic RCTs can "inform decisions about practice."

By hypothesis (superiority vs. noninferiority vs. equivalence)

Another classification of RCTs categorizes them as "superiority trials", "noninferiority trials", and "equivalence trials", which differ in methodology and reporting. Most RCTs are superiority trials, in which one intervention is hypothesized to be superior to another in a statistically significant way. Some RCTs are noninferiority trials "to determine whether a new treatment is no worse than a reference treatment." Other RCTs are equivalence trials in which the hypothesis is that two interventions are indistinguishable from each other.
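To make the noninferiority logic concrete, here is a minimal Python sketch (not part of the original article) that compares a normal-approximation confidence interval for the difference in success rates against a prespecified noninferiority margin; the counts, the 10% margin, and the variable names are all hypothetical.

```python
# Minimal sketch of a noninferiority comparison between a new treatment and a
# reference treatment, using a normal-approximation confidence interval for
# the difference in success proportions. All numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

success_new, n_new = 172, 200        # hypothetical results, new treatment
success_ref, n_ref = 175, 200        # hypothetical results, reference treatment
margin = 0.10                        # prespecified noninferiority margin

p_new = success_new / n_new
p_ref = success_ref / n_ref
diff = p_new - p_ref                 # positive favours the new treatment

# Standard error of the difference and a two-sided 95% confidence interval
se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
z = norm.ppf(0.975)
lower, upper = diff - z * se, diff + z * se

print(f"difference = {diff:.3f}, 95% CI = ({lower:.3f}, {upper:.3f})")
# Noninferiority is claimed only if the whole interval lies above -margin,
# i.e. the new treatment is at worst 'margin' less effective than the reference.
print("noninferior" if lower > -margin else "noninferiority not shown")
```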

Randomization

The advantages of proper randomization in RCTs include:
  • "It eliminates bias in treatment assignment," specifically selection bias and confounding.
  • "It facilitates blinding (masking) of the identity of treatments from investigators, participants, and assessors."
  • "It permits the use of probability theory to express the likelihood that any difference in outcome between treatment groups merely indicates chance."
There are two processes involved in randomizing patients to different interventions. First is choosing a randomization procedure to generate an unpredictable sequence of allocations; this may be a simple random assignment of patients to any of the groups at equal probabilities, may be "restricted", or may be "adaptive." A second and more practical issue is allocation concealment, which refers to the stringent precautions taken to ensure that the group assignments of patients are not revealed prior to definitively allocating them to their respective groups. Non-random "systematic" methods of group assignment, such as alternating subjects between one group and the other, can cause "limitless contamination possibilities" and can cause a breach of allocation concealment.

However, empirical evidence that adequate randomization changes outcomes relative to inadequate randomization has been difficult to detect.

Procedures

The treatment allocation is the desired proportion of patients in each treatment arm. 

An ideal randomization procedure would achieve the following goals:
  • Maximize statistical power, especially in subgroup analyses. Generally, equal group sizes maximize statistical power; however, unequal group sizes may be more powerful for some analyses (e.g., multiple comparisons of placebo versus several doses using Dunnett’s procedure), and are sometimes desired for non-analytic reasons (e.g., patients may be more motivated to enroll if there is a higher chance of getting the test treatment, or regulatory agencies may require a minimum number of patients exposed to treatment).
  • Minimize selection bias. This may occur if investigators can consciously or unconsciously preferentially enroll patients between treatment arms. A good randomization procedure will be unpredictable so that investigators cannot guess the next subject's group assignment based on prior treatment assignments. The risk of selection bias is highest when previous treatment assignments are known (as in unblinded studies) or can be guessed (perhaps if a drug has distinctive side effects).
  • Minimize allocation bias (or confounding). This may occur when covariates that affect the outcome are not equally distributed between treatment groups, and the treatment effect is confounded with the effect of the covariates (i.e., an "accidental bias"). If the randomization procedure causes an imbalance in covariates related to the outcome across groups, estimates of effect may be biased if not adjusted for the covariates (which may be unmeasured and therefore impossible to adjust for).
However, no single randomization procedure meets those goals in every circumstance, so researchers must select a procedure for a given study based on its advantages and disadvantages.

Simple

This is a commonly used and intuitive procedure, similar to "repeated fair coin-tossing." Also known as "complete" or "unrestricted" randomization, it is robust against both selection and accidental biases. However, its main drawback is the possibility of imbalanced group sizes in small RCTs. It is therefore recommended only for RCTs with over 200 subjects.
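The coin-tossing analogy translates directly into code. The following minimal Python sketch (the group labels, sample size, and seed are illustrative choices, not from the article) assigns each subject independently with equal probability and shows why small trials can end up with unbalanced groups.

```python
# Minimal sketch of simple ("complete"/"unrestricted") randomization:
# each subject is independently assigned to treatment or control with
# probability 0.5, like repeated fair coin-tossing. Labels are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)   # fixed seed only for reproducibility

n_subjects = 30                        # deliberately small, to show imbalance
assignments = rng.choice(["treatment", "control"], size=n_subjects)

print(assignments[:10].tolist())
print("treatment:", (assignments == "treatment").sum(),
      "control:", (assignments == "control").sum())
# With only 30 subjects the split is often far from 15/15, which is why
# simple randomization is usually reserved for larger trials.
```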

Restricted

To balance group sizes in smaller RCTs, some form of "restricted" randomization is recommended. The major types of restricted randomization used in RCTs are:
  • Permuted-block randomization or blocked randomization: a "block size" and "allocation ratio" (number of subjects in one group versus the other group) are specified, and subjects are allocated randomly within each block (a code sketch of this procedure follows this list). For example, a block size of 6 and an allocation ratio of 2:1 would lead to random assignment of 4 subjects to one group and 2 to the other. This type of randomization can be combined with "stratified randomization", for example by center in a multicenter trial, to "ensure good balance of participant characteristics in each group." A special case of permuted-block randomization is random allocation, in which the entire sample is treated as one block. The major disadvantage of permuted-block randomization is that even if the block sizes are large and randomly varied, the procedure can lead to selection bias. Another disadvantage is that "proper" analysis of data from permuted-block-randomized RCTs requires stratification by blocks.
  • Adaptive biased-coin randomization methods (of which urn randomization is the most widely known type): In these relatively uncommon methods, the probability of being assigned to a group decreases if the group is overrepresented and increases if the group is underrepresented. The methods are thought to be less affected by selection bias than permuted-block randomization.
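As a rough illustration of the permuted-block procedure described above, here is a minimal Python sketch using the block size of 6 and the 2:1 allocation ratio from the example in the text; the group labels and the helper function are hypothetical.

```python
# Minimal sketch of permuted-block randomization with block size 6 and a
# 2:1 allocation ratio: within each block, 4 subjects go to group A and 2 to
# group B, in a random order, so group sizes stay balanced during recruitment.
import random
random.seed(7)   # fixed seed only for reproducibility

def permuted_blocks(n_subjects, block=("A", "A", "A", "A", "B", "B")):
    """Return one assignment per subject, shuffling each block independently."""
    sequence = []
    while len(sequence) < n_subjects:
        shuffled = list(block)
        random.shuffle(shuffled)      # random order within this block
        sequence.extend(shuffled)
    return sequence[:n_subjects]

allocation = permuted_blocks(18)
print(allocation)
# After every complete block of 6 the running ratio is exactly 2:1, which is
# how blocking keeps group sizes close to the target ratio throughout.
```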

Adaptive

At least two types of "adaptive" randomization procedures have been used in RCTs, but much less frequently than simple or restricted randomization:
  • Covariate-adaptive randomization, of which one type is minimization: The probability of being assigned to a group varies in order to minimize "covariate imbalance." Minimization is reported to have "supporters and detractors"; because only the first subject's group assignment is truly chosen at random, the method does not necessarily eliminate bias on unknown factors. (A minimal code sketch of minimization follows this list.)
  • Response-adaptive randomization, also known as outcome-adaptive randomization: The probability of being assigned to a group increases if the responses of the prior patients in the group were favorable. Although arguments have been made that this approach is more ethical than other types of randomization when the probability that a treatment is effective or ineffective increases during the course of an RCT, ethicists have not yet studied the approach in detail.
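The minimization idea can be sketched as follows. This is a simplified, single-covariate illustration with hypothetical group labels and covariate levels, not a full implementation; real procedures typically balance several covariates at once and assign the "minimizing" group only with high probability rather than always.

```python
# Minimal sketch of covariate-adaptive minimization with one hypothetical
# binary covariate (e.g. sex). Each new patient is assigned to whichever group
# currently has fewer patients sharing their covariate level; ties are broken
# at random. Labels and data are illustrative only.
import random
random.seed(1)

counts = {"A": {"male": 0, "female": 0}, "B": {"male": 0, "female": 0}}

def assign(covariate):
    a, b = counts["A"][covariate], counts["B"][covariate]
    if a < b:
        group = "A"
    elif b < a:
        group = "B"
    else:
        group = random.choice(["A", "B"])   # balanced so far: randomize
    counts[group][covariate] += 1
    return group

patients = ["male", "female", "female", "male", "male", "female", "male"]
print([assign(c) for c in patients])
print(counts)   # covariate levels stay closely balanced across the two groups
```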

Allocation concealment

"Allocation concealment" (defined as "the procedure for protecting the randomization process so that the treatment to be allocated is not known before the patient is entered into the study") is important in RCTs. In practice, clinical investigators in RCTs often find it difficult to maintain impartiality. Stories abound of investigators holding up sealed envelopes to lights or ransacking offices to determine group assignments in order to dictate the assignment of their next patient. Such practices introduce selection bias and confounders (both of which should be minimized by randomization), possibly distorting the results of the study. Adequate allocation concealment should defeat patients and investigators from discovering treatment allocation once a study is underway and after the study has concluded. Treatment related side-effects or adverse events may be specific enough to reveal allocation to investigators or patients thereby introducing bias or influencing any subjective parameters collected by investigators or requested from subjects. 

Some standard methods of ensuring allocation concealment include sequentially numbered, opaque, sealed envelopes (SNOSE); sequentially numbered containers; pharmacy controlled randomization; and central randomization. It is recommended that allocation concealment methods be included in an RCT's protocol, and that the allocation concealment methods should be reported in detail in a publication of an RCT's results; however, a 2005 study determined that most RCTs have unclear allocation concealment in their protocols, in their publications, or both. On the other hand, a 2008 study of 146 meta-analyses concluded that the results of RCTs with inadequate or unclear allocation concealment tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective.
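The intent behind central randomization and sequentially numbered assignments can be illustrated with a small sketch: the allocation list is generated up front, kept away from investigators, and an assignment is revealed only after a participant has been irrevocably registered. The class, the method names, and the 1:1 scheme below are purely illustrative and do not describe any real system.

```python
# Minimal sketch of the allocation-concealment idea behind central
# randomization: the allocation sequence is pre-generated and concealed, and
# an assignment is revealed only after a participant is registered.
import random
random.seed(2024)

class CentralRandomizer:
    def __init__(self, n_slots):
        # Pre-generate a concealed allocation list (here: simple 1:1 randomization).
        self._sequence = [random.choice(["treatment", "control"])
                          for _ in range(n_slots)]
        self._next = 0
        self.log = []                      # audit trail of enrolments

    def enrol(self, participant_id):
        """Register a participant first, then reveal their assignment."""
        arm = self._sequence[self._next]
        self.log.append((self._next + 1, participant_id, arm))
        self._next += 1
        return arm

site = CentralRandomizer(n_slots=100)
print(site.enrol("P-001"))
print(site.enrol("P-002"))
print(site.log)   # investigators never see assignments ahead of enrolment
```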

Sample size

The number of treatment units (subjects or groups of subjects) assigned to the control and treatment groups affects an RCT's reliability. If the effect of the treatment is small, the number of treatment units in either group may be insufficient for rejecting the null hypothesis in the respective statistical test. The failure to reject the null hypothesis would imply that the treatment shows no statistically significant effect on the treated in a given test. But as the sample size increases, the same RCT may be able to demonstrate a significant effect of the treatment, even if this effect is small.
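The dependence of an RCT's power on effect size and sample size can be made concrete with the standard normal-approximation formula for comparing two means; the effect sizes and standard deviation below are hypothetical.

```python
# Minimal sketch of a two-group sample-size calculation using the usual
# normal-approximation formula:
#   n per group = 2 * ((z_{alpha/2} + z_{beta}) * sd / delta)^2
# The effect sizes and standard deviation are hypothetical.
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate subjects per group to detect a mean difference `delta`."""
    z_alpha = norm.ppf(1 - alpha / 2)    # two-sided significance level
    z_beta = norm.ppf(power)             # desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# A small effect relative to the outcome's variability needs many subjects...
print(n_per_group(delta=2.0, sd=10.0))   # ~393 per group
# ...while a larger effect needs far fewer.
print(n_per_group(delta=5.0, sd=10.0))   # ~63 per group
```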

Blinding

An RCT may be blinded (also called "masked") by "procedures that prevent study participants, caregivers, or outcome assessors from knowing which intervention was received." Unlike allocation concealment, blinding is sometimes inappropriate or impossible to perform in an RCT; for example, if an RCT involves a treatment in which active participation of the patient is necessary (e.g., physical therapy), participants cannot be blinded to the intervention.

Traditionally, blinded RCTs have been classified as "single-blind", "double-blind", or "triple-blind"; however, in 2001 and 2006 two studies showed that these terms have different meanings for different people. The 2010 CONSORT Statement specifies that authors and editors should not use the terms "single-blind", "double-blind", and "triple-blind"; instead, reports of blinded RCTs should discuss "If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how."

RCTs without blinding are referred to as "unblinded", "open", or (if the intervention is a medication) "open-label". In 2008 a study concluded that the results of unblinded RCTs tended to be biased toward beneficial effects only if the RCTs' outcomes were subjective as opposed to objective; for example, in an RCT of treatments for multiple sclerosis, unblinded neurologists (but not the blinded neurologists) felt that the treatments were beneficial. In pragmatic RCTs, although the participants and providers are often unblinded, it is "still desirable and often possible to blind the assessor or obtain an objective source of data for evaluation of outcomes."

Analysis of data

The choice of statistical methods in an RCT depends on the characteristics of the data. Regardless of the statistical methods used, important considerations in the analysis of RCT data include:
  • Whether an RCT should be stopped early due to interim results. For example, RCTs may be stopped early if an intervention produces "larger than expected benefit or harm", or if "investigators find evidence of no important difference between experimental and control interventions."
  • The extent to which the groups can be analyzed exactly as they existed upon randomization (i.e., whether a so-called "intention-to-treat analysis" is used). A "pure" intention-to-treat analysis is "possible only when complete outcome data are available" for all randomized subjects; when some outcome data are missing, options include analyzing only cases with known outcomes and using imputed data. Nevertheless, the more that analyses can include all participants in the groups to which they were randomized, the less bias the RCT will be subject to. (A sketch contrasting intention-to-treat with per-protocol analysis follows this list.)
  • Whether subgroup analysis should be performed. These are "often discouraged" because multiple comparisons may produce false positive findings that cannot be confirmed by other studies.
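To make the intention-to-treat idea concrete, the sketch below contrasts an intention-to-treat analysis (everyone analysed in the arm they were randomized to) with a per-protocol analysis that drops non-adherent participants; the toy data, column names, and adherence pattern are hypothetical.

```python
# Minimal sketch contrasting intention-to-treat (ITT) analysis with a
# per-protocol analysis on a toy dataset. Columns and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "randomized_arm": ["treatment"] * 5 + ["control"] * 5,
    "adhered":        [True, True, False, True, False,
                       True, True, True, False, True],
    "outcome":        [1, 1, 0, 1, 0, 0, 1, 0, 0, 1],  # 1 = recovered
})

# ITT: every participant is analysed in the arm they were randomized to,
# regardless of adherence, preserving the balance created by randomization.
itt = df.groupby("randomized_arm")["outcome"].mean()

# Per-protocol: only adherent participants are analysed, which can reintroduce
# the kind of selection effects that randomization was meant to remove.
per_protocol = df[df["adhered"]].groupby("randomized_arm")["outcome"].mean()

print("ITT recovery rates:\n", itt)
print("Per-protocol recovery rates:\n", per_protocol)
```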

Reporting of results

The CONSORT 2010 Statement is "an evidence-based, minimum set of recommendations for reporting RCTs." The CONSORT 2010 checklist contains 25 items (many with sub-items) focusing on "individually randomised, two group, parallel trials" which are the most common type of RCT.

For other RCT study designs, "CONSORT extensions" have been published; some examples are:
  • Consort 2010 Statement: Extension to Cluster Randomised Trials
  • Consort 2010 Statement: Non-Pharmacologic Treatment Interventions

Relative importance and observational studies

Two studies published in The New England Journal of Medicine in 2000 found that observational studies and RCTs overall produced similar results. The authors of the 2000 findings questioned the belief that "observational studies should not be used for defining evidence-based medical care" and that RCTs' results are "evidence of the highest grade." However, a 2001 study published in Journal of the American Medical Association concluded that "discrepancies beyond chance do occur and differences in estimated magnitude of treatment effect are very common" between observational studies and RCTs.

Two other lines of reasoning question RCTs' contribution to scientific knowledge beyond other types of studies:
  • If study designs are ranked by their potential for new discoveries, then anecdotal evidence would be at the top of the list, followed by observational studies, followed by RCTs.
  • RCTs may be unnecessary for treatments that have dramatic and rapid effects relative to the expected stable or progressively worse natural course of the condition treated. One example is combination chemotherapy including cisplatin for metastatic testicular cancer, which increased the cure rate from 5% to 60% in a 1977 non-randomized study.

Interpretation of statistical results

Like all statistical methods, RCTs are subject to both type I ("false positive") and type II ("false negative") statistical errors. Regarding Type I errors, a typical RCT will use 0.05 (i.e., 1 in 20) as the probability that the RCT will falsely find two equally effective treatments significantly different. Regarding Type II errors, despite the publication of a 1978 paper noting that the sample sizes of many "negative" RCTs were too small to make definitive conclusions about the negative results, by 2005-2006 a sizeable proportion of RCTs still had inaccurate or incompletely reported sample size calculations.
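The meaning of the 0.05 type I error rate can be checked by simulation: when two treatments are truly identical, roughly 1 trial in 20 still crosses the conventional significance threshold. The outcome distribution, trial size, and seed in the sketch below are hypothetical.

```python
# Minimal sketch of the type I error rate: simulate many two-arm trials in
# which both treatments are equally effective and count how often a t-test
# comes out "significant" at alpha = 0.05. Outcome distributions are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=0)
alpha, n_trials, n_per_arm = 0.05, 5000, 50

false_positives = 0
for _ in range(n_trials):
    # Both arms are drawn from the same distribution: the null hypothesis is true.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(false_positives / n_trials)   # close to 0.05, i.e. about 1 trial in 20
```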

Peer review

Peer review of results is an important part of the scientific method. Reviewers examine the study results for potential problems with design that could lead to unreliable results (for example by creating a systematic bias), evaluate the study in the context of related studies and other evidence, and evaluate whether the study can be reasonably considered to have proven its conclusions. To underscore the need for peer review and the danger of over-generalizing conclusions, two Boston-area medical researchers performed a randomized controlled trial in which they randomly assigned either a parachute or an empty backpack to 23 volunteers who jumped from either a biplane or a helicopter. The study was able to accurately report that parachutes fail to reduce injury compared to empty backpacks. The key context that limited the general applicability of this conclusion was that the aircraft were parked on the ground, and participants had only jumped about two feet.

Advantages

RCTs are considered to be the most reliable form of scientific evidence in the hierarchy of evidence that influences healthcare policy and practice because RCTs reduce spurious causality and bias. Results of RCTs may be combined in systematic reviews, which are increasingly being used in the conduct of evidence-based practice, and many scientific organizations consider RCTs or systematic reviews of RCTs to be the highest-quality evidence available.
Notable RCTs with unexpected results that contributed to changes in clinical practice include:
  • After Food and Drug Administration approval, the antiarrhythmic agents flecainide and encainide came to market in 1986 and 1987 respectively. The non-randomized studies concerning the drugs were characterized as "glowing", and their sales increased to a combined total of approximately 165,000 prescriptions per month in early 1989. In that year, however, a preliminary report of an RCT concluded that the two drugs increased mortality. Sales of the drugs then decreased.
  • Prior to 2002, based on observational studies, it was routine for physicians to prescribe hormone replacement therapy for post-menopausal women to prevent myocardial infarction. In 2002 and 2004, however, published RCTs from the Women's Health Initiative claimed that women taking hormone replacement therapy with estrogen plus progestin had a higher rate of myocardial infarctions than women on a placebo, and that estrogen-only hormone replacement therapy caused no reduction in the incidence of coronary heart disease. Possible explanations for the discrepancy between the observational studies and the RCTs involved differences in methodology, in the hormone regimens used, and in the populations studied. The use of hormone replacement therapy decreased after publication of the RCTs.

Disadvantages

Many papers discuss the disadvantages of RCTs. What follows are among the most frequently cited drawbacks.

Time and costs

RCTs can be expensive; one study found 28 Phase III RCTs funded by the National Institute of Neurological Disorders and Stroke prior to 2000 with a total cost of US$335 million, for a mean cost of US$12 million per RCT. Nevertheless, the return on investment of RCTs may be high, in that the same study projected that the 28 RCTs produced a "net benefit to society at 10-years" of 46 times the cost of the trials program, based on evaluating a quality-adjusted life year as equal to the prevailing mean per capita gross domestic product.

An RCT can take several years to conduct and publish, so the data are unavailable to the medical community for long periods and may be less relevant by the time of publication.

It is costly to maintain RCTs for the years or decades that would be ideal for evaluating some interventions.

Interventions to prevent events that occur only infrequently (e.g., sudden infant death syndrome) and uncommon adverse outcomes (e.g., a rare side effect of a drug) would require RCTs with extremely large sample sizes and may therefore best be assessed by observational studies.

Due to the costs of running RCTs, they usually examine only one variable or very few variables, rarely reflecting the full picture of a complicated medical situation; whereas the case report, for example, can detail many aspects of the patient's medical situation (e.g. patient history, physical examination, diagnosis, psychosocial aspects, follow up).

Conflict of interest dangers

A 2011 study of the disclosure of possible conflicts of interest in the research studies underlying medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interest in the underlying studies were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and 3 from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed an aggregate of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219 (69%) industry funded. 132 of the 509 RCTs reported author conflict of interest disclosures, with 91 studies (69%) disclosing industry financial ties with one or more authors. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised."
 
Some RCTs are fully or partly funded by the health care industry (e.g., the pharmaceutical industry) as opposed to government, nonprofit, or other sources. A systematic review published in 2003 found four 1986–2002 articles comparing industry-sponsored and nonindustry-sponsored RCTs, and in all the articles there was a correlation between industry sponsorship and positive study outcomes. A 2004 study of 1999–2001 RCTs published in leading medical and surgical journals determined that industry-funded RCTs "are more likely to be associated with statistically significant pro-industry findings." These results have been mirrored in trials in surgery, where industry funding did not affect the rate of trial discontinuation but was associated with lower odds of publication for completed trials. One possible reason for the pro-industry results in industry-funded published RCTs is publication bias. Other authors have cited the differing goals of academic and industry sponsored research as contributing to the difference. Commercial sponsors may be more focused on performing trials of drugs that have already shown promise in early stage trials, and on replicating previous positive results to fulfill regulatory requirements for drug approval.

Ethics

If a disruptive innovation in medical technology is developed, it may be difficult to test it ethically in an RCT if it becomes "obvious" that the control subjects have poorer outcomes, either because of earlier testing or within the initial phase of the RCT itself. Ethically it may be necessary to abort the RCT prematurely, and obtaining ethics approval (and patient agreement) to withhold the innovation from the control group in future RCTs may not be feasible.

Historical control trials (HCT) exploit the data of previous RCTs to reduce the sample size; however, these approaches are controversial in the scientific community and must be handled with care.

In social science

Due to the recent emergence of RCTs in social science, the use of RCTs in social sciences is a contested issue. Some writers from a medical or health background have argued that existing research in a range of social science disciplines lacks rigour, and should be improved by greater use of randomized control trials.

Transport science

Researchers in transport science argue that public spending on programs such as school travel plans could not be justified unless their efficacy is demonstrated by randomized controlled trials. Graham-Rowe and colleagues reviewed 77 evaluations of transport interventions found in the literature, categorising them into 5 "quality levels". They concluded that most of the studies were of low quality and advocated the use of randomized controlled trials wherever possible in future transport research.

Dr. Steve Melia took issue with these conclusions, arguing that claims about the advantages of RCTs in establishing causality and avoiding bias have been exaggerated. He proposed the following 8 criteria for the use of RCTs in contexts where interventions must change human behaviour to be effective:

The intervention:
  1. Has not been applied to all members of a unique group of people (e.g. the population of a whole country, all employees of a unique organization etc.)
  2. Is applied in a context or setting similar to that which applies to the control group
  3. Can be isolated from other activities – and the purpose of the study is to assess this isolated effect
  4. Has a short timescale between its implementation and maturity of its effects.
And the causal mechanisms:
  1. Are either known to the researchers, or else all possible alternatives can be tested
  2. Do not involve significant feedback mechanisms between the intervention group and external environments
  3. Have a stable and predictable relationship to exogenous factors
  4. Would act in the same way if the control group and intervention group were reversed.

International development

RCTs are currently being used by a number of international development experts to measure the impact of development interventions worldwide. Development economists at research organizations including Abdul Latif Jameel Poverty Action Lab (J-PAL) and Innovations for Poverty Action have used RCTs to measure the effectiveness of poverty, health, and education programs in the developing world. While RCTs can be useful in policy evaluation, it is necessary to exercise care in interpreting the results in social science settings. For example, interventions can inadvertently induce socioeconomic and behavioral changes that can confound the relationships (Bhargava, 2008).

For some development economists, the main benefit to using RCTs compared to other research methods is that randomization guards against selection bias, a problem present in many current studies of development policy. In one notable example of a cluster RCT in the field of development economics, Olken (2007) randomized 608 villages in Indonesia in which roads were about to be built into six groups (no audit vs. audit, and no invitations to accountability meetings vs. invitations to accountability meetings vs. invitations to accountability meetings along with anonymous comment forms). After estimating "missing expenditures" (a measure of corruption), Olken concluded that government audits were more effective than "increasing grassroots participation in monitoring" in reducing corruption. Overall, it is important in social sciences to account for the intended as well as the unintended consequences of interventions for policy evaluations.

Criminology

A 2005 review found 83 randomized experiments in criminology published in 1982-2004, compared with only 35 published in 1957-1981. The authors classified the studies they found into five categories: "policing", "prevention", "corrections", "court", and "community". Focusing only on offending behavior programs, Hollin (2008) argued that RCTs may be difficult to implement (e.g., if an RCT required "passing sentences that would randomly assign offenders to programs") and therefore that experiments with quasi-experimental design are still necessary.

Education

RCTs have been used in evaluating a number of educational interventions. Between 1980 and 2016, over 1,000 reports of RCTs were published. For example, a 2009 study randomized 260 elementary school teachers' classrooms to receive or not receive a program of behavioral screening, classroom intervention, and parent training, and then measured the behavioral and academic performance of their students. Another 2009 study randomized classrooms for 678 first-grade children to receive a classroom-centered intervention, a parent-centered intervention, or no intervention, and then followed their academic outcomes through age 19.

Mock randomized controlled trials, or simulations using confectionery, can be conducted in the classroom to teach students and health professionals the principles of RCT design and critical appraisal.

Bad Science (book)

From Wikipedia, the free encyclopedia

Bad Science
First edition cover
Author: Ben Goldacre
Country: United Kingdom
Language: English
Subject: Pseudoscience
Genre: Non-fiction
Publisher: Fourth Estate
Publication date: September 2008
Media type: Print (Paperback)
Pages: 338
ISBN: 978-0-00-724019-7
OCLC: 259713114
Dewey Decimal: 500
LC Class: Q172.5.E77 G65 2008

Bad Science is a book by Ben Goldacre, criticising mainstream media reporting on health and science issues. It was published by Fourth Estate in September 2008. It has been positively reviewed by the British Medical Journal and the Daily Telegraph and has reached the Top 10 bestseller list for Amazon Books. It was shortlisted for the 2009 Samuel Johnson Prize. Bad Science or BadScience is also the title of Goldacre's column in The Guardian and his website.

Contents

Introduction

A brief introduction (by Goldacre) touching on subjects covered by subsequent chapters. It bemoans the widespread lack of understanding of evidence-based science.

Chapter 1: Matter

Chapter 1 is entitled Matter, but is really concerned with the modern trend for Detoxification. Goldacre looks at three supposed detox treatments: aqua detox (a footbath detox), Hopi Ear Candles, and detox patches. Each of these so-called treatments is intended to remove toxins and impurities from the body. 

According to Goldacre, the manufacturers of these detox remedies are unable or unwilling to state which toxins exactly are being removed from the body. Goldacre debunks the claims made for each of these products and says that the whole idea of detox is an invention. There are no such toxins floating around the body in excessive quantities, waiting to be removed by detox treatments.

Goldacre has no problem with the idea of someone choosing to give their body a rest after overindulging, say. That is just common sense. But he sees detox treatments as the modern equivalent of religious rituals of purification or abstinence. Clearly, such rituals fill a human need in some way - Goldacre has no problem with that. What is wrong, however, is to pretend that these detox rituals are based in science. They are not. At worst, they are supposed quick-fixes that distract from the genuine lifestyle risk factors for ill health that affect us over the long term.

Chapter 2: Brain Gym

Brain Gym is a set of exercises and activities that are supposed to 'enhance the experience of whole brain learning'. At the time when Goldacre’s book was written, Brain Gym was promoted by the Department for Education and used in hundreds of state schools across the country. But Goldacre says this is pseudoscience dressed up in clever long phrases and jargon. He goes on to describe a 2008 study that suggested that people will tend to believe a bad explanation written in sciencey terms, rather than a good explanation that isn't decorated with sciencey words.

Some of the underlying ideas in Brain Gym are sensible: regular breaks, intermittent light exercise and drinking plenty of water are likely to help children learn. But Goldacre sees the pseudoscientific explanation around it (such as the 26 special movements and the ‘brain buttons’ concept) as an attempt to 'proprietorialise' common sense. That is, turn it into something that you can patent, own, sell and make profit from. He sees this trend particularly strongly among nutritionists. The corrosive side effect of this ‘privatisation of common sense’ is that we become dependent on outside systems and people, instead of taking control ourselves.

Chapter 3: The Progenium XY Complex

In this short chapter, Goldacre looks at cosmetics - specifically, moisturizers. According to Goldacre, expensive moisturisers tend to contain three groups of ingredients: powerful chemicals that were effective in making skin look younger, before they had to be watered down because of their side effects; vegetable protein, which does actually shrink wrinkles temporarily; and esoteric chemicals that are meant to 'make you believe that all sorts of claims are being made'. But the manufacturers are very careful to claim only that the moisturiser as a whole will have beneficial effects - they don't make specific claims about their 'magic ingredients', because such claims could be easily challenged by the regulator. 

Instead, the magic ingredients (such as the made-up Progenium XY Complex) are only included to make it sound like some complicated science is involved. And that is Goldacre's main complaint: The cosmetics companies sell their products by appealing to the misleading idea that science is complicated, incomprehensible, and impenetrable. This is bad because the target audience who are bombarded with this dubious world view are young women, a group who are under-represented in science.

Chapter 4: Homeopathy

Goldacre provides an overview of the origins of Homeopathy (its ‘invention’ by Samuel Hahnemann in the late eighteenth century) and the basic ideas that characterise it: ‘like cures like’, the increase in potency by dilution, succussion, proving, and the collation of remedies in a reference book. He shows that the levels of dilution used in preparing homeopathic remedies are so high that the final ‘medicine’ contains no active ingredient. He dismisses the idea of ‘water memory’, which has been used in more recent times to explain why homeopathic remedies would still work in spite of extreme dilution.

As far as Goldacre is concerned, it’s fine if someone wants to take a homeopathic remedy because ‘it made me feel better last time’. However, the experience of an individual (or a small group of people) cannot be used as a basis for saying that homeopathy works or that it is science. First of all, an individual has no way of knowing whether they got better because of the homeopathic remedy they took, the placebo effect or regression to the mean (that is, the natural cycle of the disease). Secondly, homeopathic remedies should be subjected to a ‘fair test’: a placebo-controlled trial. In fact, such tests have been carried out for homeopathic remedies and it has been shown that they are no better than placebo.

Goldacre says that some individual trials have shown that a homeopathic remedy works. But usually these trials are found to have methodological flaws. Typical problems with these trials have included a poor quality approach to blinding or randomization. Another problem is that trials of homeopathic remedies often don’t provide full information about the methods used. Poor quality research studies tend to exaggerate positive results. Goldacre provides a summary of a paper by Ernst et al, which suggested that this has occurred in studies of homeopathic arnica. That said, Goldacre does concede that the overall experience of going to see a homeopath does seem to have a positive effect on some patients, and that would be worth investigating further.

What we really need, says Goldacre, is meta-analyses. This is when the results of smaller research studies are pooled and analysed together as a single group. The Cochrane Collaboration was set up to carry out systematic reviews and meta-analyses. A landmark study by Shang et al (2005), which looked at a vast number of homeopathic trials, again found that homeopathic remedies perform no better than placebo. 

Goldacre criticises the homeopathic community for their lack of understanding of how to carry out high quality research, their lack of openness and transparency, their unwillingness to submit their research to full and proper scrutiny, their rejection of justified academic criticism and their overall aggressiveness. Using the specific example of an interview with Elizabeth Thompson, he illustrates how homeopaths will use nuanced language to avoid actually admitting that their pills don’t work.

Chapter 5: The Placebo Effect

Examples of the power of the mind over pain, anxiety and depression are presented with studies showing how higher prices, fancy packaging, theatrical procedures and a confident attitude in the doctor all contribute to the relief of symptoms. In patients with no specific diagnosed condition, even a fake diagnosis and prognosis with no other treatment helps recovery, but ethical and time constraints usually prevent doctors from giving this reassurance. Exploiting the placebo effect is presented as possibly justifiable if used in conjunction with effective conventional treatments. The author links its use by alternative medicine practitioners with the diversion of patients away from effective treatments and the undermining of public health campaigns on AIDS and malaria.

Chapter 6: The Nonsense du Jour

Nutritionists are accused of misusing science and mystifying diet to bamboozle the public. Misrepresentations of the results of legitimate scientific research to lend bogus authority to nutritionist theories, while ignoring alternative explanations, are cited in evidence. The use of weak circumstantial associations between diet and health found in observational studies as if they proved nutritionist claims is criticised. The unjustified over-interpretation of surrogate outcomes in animal (or tissue culture) experiments as proving human health benefits is explored. The cherry picking of published research to support a favored view is contrasted with the systematic review designed to minimise such bias. The supposed benefits of antioxidants are questioned with studies showing they may be ineffective or even harmful in some cases. The methods used by the food supplement industry to manufacture doubt about any critical scientific reports are likened to those previously used by the tobacco and asbestos industries.

Chapter 7: Dr Gillian McKeith PhD

The Scottish TV diet guru and self-styled "doctor" Gillian McKeith and her scientific claims are dissected. Statements exemplifying her scientific knowledge include that the consumption of dark-leaved vegetables like spinach "will really oxygenate your blood" as they are high in chlorophyll, and that "each sprouting seed is packed with the nutritional energy needed to create a fully-grown, healthy plant". She is described masquerading as a genuine medical doctor on her TV reality/health shows. Her publications are compared with a Melanesian cargo cult; superficially correct but lacking any scientific substance. Her belief in the special nutritional value of plant enzymes (which are broken down in the gut like any other proteins) is ridiculed. The general problems involved in establishing any firm links between diet and health are examined.

Chapter 8: 'Pill Solves Complex Social Problem'

The claim that fish oil capsules make children smarter is examined. The book probes the methodological weaknesses of the widely publicised "Durham trial" where the pills were given to children to improve their school performance and behaviour, but without any control groups and wide open to a range of confounding factors. The failure to publish any results and backtracking on earlier claims by the education authorities is slated, with their refusal to divulge any data through Freedom of Information Requests specifically mentioned. The media's preference for simple science stories and role in promoting dubious health products is highlighted. Parallels are drawn between the Equazen company behind the Durham fish oil trials and the Efamol company's promotion of evening primrose oil.

Chapter 9: Professor Patrick Holford

The influence of the best-selling author, media commentator, businessman and founder of the Institute for Optimum Nutrition (which has trained most of the UK's "nutrition therapists") is acknowledged. Holford's success in presenting nutritionism as a scientific discipline in the media, and forging links with some British universities is also noted. The book judges that his success is based on misinterpreting and cherry-picking favourable results from the medical literature, in order to market his vitamin pills. His promotion of vitamin C in preference to AZT as a treatment for AIDS, vitamin E to prevent heart attacks, and vitamin A to treat autism are all condemned as lacking in sound evidential support. His reliance on the work of discredited fellow nutritionist Dr. R.K. Chandra is likewise slated. The Universities of Luton and Teesside are criticised for their past associations with Holford and the ION.

Chapter 10: Is Mainstream Medicine Evil?

The book remarks on the relatively low percentage of conventional medical activity (50 to 80%) which could be called "evidence-based". The efforts of the medical profession to weed out bad treatments are seen to be hampered by the withholding or distortion of evidence by drug companies. The science and economics of drug development are outlined, with criticism of the lack of independence of industrial research and the neglect of Third World diseases. Some underhand tricks used by drug companies to engineer positive trial results for their products are explored. The publication bias produced by researchers not publishing negative results is illustrated with funnel plots. Examples are made of the SSRI antidepressants and Vioxx drugs. Reform of trials registers to prevent abuses is proposed. The ethics of drug advertising and manipulation of patient advocacy groups are questioned.

Chapter 11: How the Media Promote the Public Misunderstanding of Science

The misrepresentation of science and scientists in the media is attributed to the preponderance of humanities graduates in journalism. The dumbing-down of science to produce easily assimilated wacky, breakthrough or scare stories is criticised. Wacky "formula stories" like those for "the perfect boiled egg" or "most depressing day of the year" are revealed to be the product of PR companies using biddable academics to add weight to their marketing campaigns. Among other examples, the speculation by Dr. Oliver Curry (a political theorist at the LSE) that the human race will evolve into two separate races, presented as a science story across the British media, is exposed as a PR stunt for a men's TV channel. The relative scarcity of sensational medical breakthroughs since a golden age of discovery between 1935 and 1975, is seen as motivating the production of dumbed-down stories which trumpet unpublished research and ill-founded speculation. An inability to evaluate the soundness of scientific evidence is seen to give undeserved prominence to marginal figures with fringe views.

Chapter 12: Why Clever People Believe Stupid Things

This chapter is a brief introduction to the research on cognitive biases, which, Goldacre argues, explain some of the appeal of alternative medicine ideas. Biases mentioned include confirmation bias, the availability heuristic, illusory superiority and the clustering illusion (the misperception of random data). It also discusses Solomon Asch's classic study of social conformity.

Chapter 13: Bad Stats

This chapter covers the cases of Sally Clark and Lucia de Berk, in which the author says poor understanding and presentation of statistics played an important part in their criminal trials.

Chapter 14: Health Scares

In this chapter, the author claims that the press selectively used a "laboratory" that gave positive MRSA results where other pathology labs found none, creating an "expert" out of Chris Malyszewicz, who worked from a garden shed.

Goldacre notes how the Daily Mirror once managed to combine "three all-time classic bogus science stories" into one editorial: the Arpad Pusztai affair of GM crops, Andrew Wakefield and the MMR vaccine controversy and Chris Malyszewicz and the MRSA hoax. On the other hand, journalists were very poor in uncovering or reporting on the thalidomide tragedy - only covering well the ultimate political issue of compensation.

Chapter 15: The Media's MMR Hoax

This chapter covers Andrew Wakefield and the MMR vaccine controversy. The author also continues the discussion of the laboratory results from the previous chapter and the MRSA mix-up in hospitals, in which the wrong patients received the wrong results.

Index

The hardback and first paperback editions did not include an index. Several indexes were prepared by bloggers, including one prepared by Oliblog. The latest paperback issue includes a full index.

Previously unpublished chapter: "The Doctor Will Sue You Now"

Following the release of the book, the legal status of one of its chapters was resolved when Goldacre won a libel case filed against him by Matthias Rath. The post dated 9 April 2009 states: "This is the 'missing chapter' about vitamin pill salesman Matthias Rath. Sadly I was unable to write about him at the time that book was initially published, as he was suing my ass in the High Court."

The full chapter has been made universally available under a Creative Commons license with the title "The Doctor Will Sue You Now". Additionally, this full chapter is included as chapter 10 in the New Paperback Edition.

In this chapter the author explains its origin and the reasons for its exclusion, and describes his personal tribulations during the legal proceedings. It contains an account of his anger at being gagged due to legal/financial restrictions, his support by the Guardian (for whom he writes) and his now encyclopedic knowledge of the subject in question.

Bad Pharma

From Wikipedia, the free encyclopedia

Bad Pharma
Author: Ben Goldacre
Subject: Pharmaceutical industry
Publisher: Fourth Estate (UK), Faber & Faber (US), Signal (Canada)
Publication date: 25 September 2012
Media type: Print (Hardcover and Paperback)
Pages: 430 (first edition)
ISBN: 978-0-00-735074-2
Preceded by: Bad Science

Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients is a book by the British physician and academic Ben Goldacre about the pharmaceutical industry, its relationship with the medical profession, and the extent to which it controls academic research into its own products. It was published in the UK in September 2012 by the Fourth Estate imprint of HarperCollins, and in the United States in February 2013 by Faber and Faber.

Goldacre argues in the book that "the whole edifice of medicine is broken", because the evidence on which it is based is systematically distorted by the pharmaceutical industry. He writes that the industry finances most of the clinical trials into its own products and much of doctors' continuing education, that clinical trials are often conducted on small groups of unrepresentative subjects and negative data is routinely withheld, and that apparently independent academic papers may be planned and even ghostwritten by pharmaceutical companies or their contractors, without disclosure. Describing the situation as a "murderous disaster", he makes suggestions for action by patients' groups, physicians, academics and the industry itself.

Responding to the book's publication, the Association of the British Pharmaceutical Industry issued a statement in 2012 arguing that the examples the book offers were historical, that the concerns had been addressed, that the industry is among the most regulated in the world, and that it discloses all data in accordance with international standards.

In January 2013 Goldacre joined the Cochrane Collaboration, British Medical Journal and others in setting up AllTrials, a campaign calling for the results of all past and current clinical trials to be reported. The British House of Commons Public Accounts Committee expressed concern in January 2014 that drug companies were still only publishing around 50 percent of clinical-trial results.

Author

After graduating in 1995 with a first-class honors degree in medicine from Magdalen College, Oxford, Goldacre obtained an MA in philosophy from King's College London, then undertook clinical training at UCL Medical School, qualifying as a medical doctor in 2000 and as a psychiatrist in 2005. As of 2014 he was Wellcome Research Fellow in Epidemiology at the London School of Hygiene and Tropical Medicine.

Goldacre is known for his "Bad Science" column in the Guardian, which he has written since 2003, and for his first book, Bad Science (2008). This unpicked the claims of several forms of alternative medicine, and criticized certain physicians and the media for a lack of critical thinking. It also looked at the MMR vaccine controversy, AIDS denialism, the placebo effect and the misuse of statistics. Goldacre was recognized in June 2013 by Health Service Journal as having done "more than any other single individual to shine a light on how science and research gets distorted by the media, politicians, quacks, PR and the pharmaceutical industry."

Synopsis

Introduction

Goldacre writes in the introduction of Bad Pharma that he aims to defend the following:
Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques which are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don't like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug's true effects. Regulators see most of the trial data, but only from early on in a drug's life, and even then they don't give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion.


In their forty years of practice after leaving medical school, doctors hear about what works through ad hoc oral traditions, from sales reps, colleagues or journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are even owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it's not in anyone's financial interest to conduct any trials at all.

Chapter 1: "Missing Data"

In "Missing Data," Goldacre argues that the clinical trials undertaken by drug companies routinely reach conclusions favorable to the company. For example, in a 2007 journal article published in PLOS Medicine, researchers studied every published trial on statins, drugs prescribed to reduce cholesterol levels. In the 192 trials they looked at, industry-funded trials were 20 times more likely to produce results that favored the drug.

He writes that these positive results are achieved in a number of ways. Sometimes the industry-sponsored studies are flawed by design (for example by comparing the new drug to an existing drug at an inadequate dose), and sometimes patients are selected to make a positive result more likely. In addition, the data is analysed as the trial progresses. If the trial seems to be producing negative data it is stopped prematurely and the results are not published, or if it is producing positive data it may be stopped early so that longer-term effects are not examined. He writes that this publication bias, where negative results remain unpublished, is endemic within medicine and academia. As a consequence, he argues, doctors may have no idea what the effects are of the drugs they prescribe.

An example he gives of the difficulty of obtaining missing data from drug companies is that of oseltamivir (Tamiflu), manufactured by Roche to reduce the complications of bird flu. Governments spent billions of pounds stockpiling this, based in large part on a meta-analysis that was funded by the industry. Bad Pharma charts the efforts of independent researchers, particularly Tom Jefferson of the Cochrane Collaboration Respiratory Group, to gain access to information about the drug.

Chapter 2: "Where Do New Drugs Come From?"

In the second chapter, the book describes the process by which new drugs move from animal testing through phase 1 (first-in-man study), phase 2, and phase 3 clinical trials. Phase 1 participants are referred to as volunteers, but in the US they are paid $200–$400 per day; because studies can last several weeks and subjects may volunteer several times a year, the earning potential becomes the main reason for taking part. Participants are usually drawn from the poorest groups in society, and outsourcing increasingly means that trials are conducted by contract research organizations (CROs) in lower-wage countries. The number of clinical trials is growing by 20 percent a year in India, 27 percent in Argentina, and 47 percent in China, while trials in the UK have fallen by 10 percent a year and in the US by six percent.

The shift to outsourcing raises issues about data integrity, regulatory oversight, language difficulties, the meaning of informed consent among a much poorer population, the standards of clinical care, the extent to which corruption may be regarded as routine in certain countries, and the ethical problem of raising a population's expectations for drugs that most of that population cannot afford. It also raises the question of whether the results of clinical trials using one population can invariably be applied elsewhere. There are both social and physical differences: Goldacre asks whether patients diagnosed with depression in China are really the same as patients diagnosed with depression in California, and notes that people of Asian descent metabolize drugs differently from Westerners.

There have also been cases of effective treatment being withheld during clinical trials. During a 1996 meningitis outbreak in Kano, Nigeria, the drug company Pfizer compared a new antibiotic against a competing antibiotic given at a lower dose than the one known to be effective. Goldacre writes that 11 children died, divided almost equally between the two groups. The families taking part in the trial were apparently not told that the competing antibiotic, at the effective dose, was available from Médecins Sans Frontières in the building next door.

Chapter 3: "Bad Regulators"

Chapter three describes the concept of "regulatory capture," whereby a regulator – such as the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK, or the Food and Drug Administration (FDA) in the United States – ends up advancing the interests of the drug companies rather than the interests of the public. Goldacre writes that this happens for a number of reasons, including the revolving door of employees between the regulator and the companies, and the fact that friendships develop between regulator and company employees simply because they have knowledge and interests in common. The chapter also discusses surrogate outcomes and accelerated approval, and the difficulty of having ineffective drugs removed from the market once they have been approved. He argues that regulators do not require that new drugs offer an improvement over what is already available, or even that they be particularly effective.

Chapter 4: "Bad Trials"

"Bad Trials" examines the ways in which clinical trials can be flawed. Goldacre writes that this happens by design and by analysis, and that it has the effect of maximizing a drug's benefits and minimizing harm. There have been instances of fraud, though he says these are rare. More common are what he calls the "wily tricks, close calls, and elegant mischief at the margins of acceptability."

These include testing drugs on unrepresentative, "freakishly ideal" patients; comparing new drugs to something known to be ineffective, or effective at a different dose or if used differently; conducting trials that are too short or too small; and stopping trials early or late. It also includes measuring uninformative outcomes; packaging the data so that it is misleading; ignoring patients who drop out (i.e. using per-protocol analysis, where only patients who complete the trial are counted in the final results, rather than intention-to-treat analysis, where everyone who starts the trial is counted); changing the main outcome of the trial once it has finished; producing subgroup analyses that show apparently positive outcomes for certain tightly defined groups (such as Chinese men between the ages of 56 and 71), thereby hiding an overall negative outcome; and conducting "seeding trials," where the objective is to persuade physicians to use the drug.
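To make the difference between the two counting methods concrete, the following is a minimal Python sketch of my own, not taken from the book; the patients, drop-out pattern and response rates are entirely hypothetical. It shows how per-protocol analysis, which ignores drop-outs, can make a drug look better than intention-to-treat analysis does.

    # Hypothetical illustration: per-protocol vs intention-to-treat analysis.
    def response_rate(patients):
        """Fraction of patients counted as having responded to treatment."""
        return sum(p["responded"] for p in patients) / len(patients)

    # Ten hypothetical patients randomized to the new drug; four drop out
    # (for example because of side effects) and never respond.
    drug_arm = (
        [{"completed": True, "responded": True}] * 5
        + [{"completed": True, "responded": False}] * 1
        + [{"completed": False, "responded": False}] * 4
    )

    # Intention-to-treat: everyone who was randomized is counted.
    itt = response_rate(drug_arm)                              # 5/10 = 0.50

    # Per-protocol: only patients who completed the trial are counted.
    completers = [p for p in drug_arm if p["completed"]]
    per_protocol = response_rate(completers)                   # 5/6 ≈ 0.83

    print(f"intention-to-treat response rate: {itt:.2f}")
    print(f"per-protocol response rate: {per_protocol:.2f}")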

Another criticism is that outcomes are presented in terms of relative risk reduction to exaggerate the apparent benefits of the treatment. For example, he writes, if four people out of 1,000 will have a heart attack within the year, but on statins only two will, that is a 50 percent reduction if expressed as relative risk reduction. But if expressed as absolute risk reduction, it is a reduction of just 0.2 percent.
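As a minimal sketch of that arithmetic, using the same hypothetical figures (4 heart attacks per 1,000 people without the drug versus 2 per 1,000 with it), the two ways of expressing the benefit can be computed directly:

    # Hypothetical figures: 4 heart attacks per 1,000 people without the drug,
    # 2 per 1,000 with it.
    control_risk = 4 / 1000    # 0.4 percent
    treated_risk = 2 / 1000    # 0.2 percent

    absolute_risk_reduction = control_risk - treated_risk             # 0.002
    relative_risk_reduction = absolute_risk_reduction / control_risk  # 0.5

    print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")  # 0.2%
    print(f"relative risk reduction: {relative_risk_reduction:.0%}")  # 50%

Both figures describe the same result; the relative figure simply sounds far more impressive.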

Chapter 5: "Bigger, Simpler Trials"

In chapter five Goldacre suggests using the General Practice Research Database in the UK, which contains the anonymized records of several million patients, to conduct randomized trials to determine the most effective of competing treatments. For example, to compare two statins, atorvastatin and simvastatin, doctors would randomly assign patients to one or the other. The patients would be followed up through the data about their cholesterol levels, heart attacks, strokes and deaths already held in their computerized medical records. The trials would not be blinded – patients would know which statin they had been prescribed – but Goldacre writes that patients are unlikely to hold beliefs about which statin is preferable that are firm enough to affect their health outcomes.
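As an illustration of the kind of pragmatic randomization the chapter proposes – an illustration only, not Goldacre's actual protocol, with hypothetical patient identifiers – the sketch below randomly allocates patients to one of the two statins; follow-up data would then be read back from routine records.

    # Hypothetical sketch of pragmatic randomization within routine care.
    import random

    def allocate(patient_ids, arms=("atorvastatin", "simvastatin"), seed=42):
        """Randomly assign each patient identifier to one treatment arm."""
        rng = random.Random(seed)
        return {pid: rng.choice(arms) for pid in patient_ids}

    allocation = allocate(["patient-001", "patient-002", "patient-003", "patient-004"])
    for pid, statin in allocation.items():
        print(pid, "->", statin)

    # Outcomes (cholesterol levels, heart attacks, strokes, deaths) would be
    # extracted later from each patient's computerized medical record rather
    # than collected through dedicated trial visits.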

Chapter 6: "Marketing"

In the final chapter, Goldacre looks at how doctors are persuaded to prescribe "me-too drugs," brand-name drugs that are no more effective than significantly cheaper off-patent ones. He cites as examples the statins atorvastatin (Lipitor, made by Pfizer) and simvastatin (Zocor), which he writes seem to be equally effective, or at least there is no evidence to suggest otherwise. Simvastatin came off patent several years ago, yet there are still three million prescriptions a year in the UK for atorvastatin, costing the National Health Service (NHS) an annual £165 million extra.

He addresses the issue of medicalization of certain conditions (or, as he argues, of personhood), whereby pharmaceutical companies "widen the boundaries of diagnosis" before offering solutions. Female sexual dysfunction was highlighted in 1999 by a study published in the Journal of the American Medical Association, which alleged that 43 percent of women were suffering from it. After the article appeared, the New York Times wrote that two of its three authors had worked as consultants for Pfizer, which at the time was preparing to launch UK-414,495, known as female Viagra. The journal's editor said that the failure to disclose the relationship with Pfizer was the journal's mistake.

The chapter also examines celebrity endorsement of certain drugs, the extent to which claims in advertisements aimed at doctors are appropriately sourced, and whether direct-to-consumer advertising (currently permitted in the US and New Zealand) ought to be allowed. It discusses how PR firms promote stories from patients who complain in the media that certain drugs are not made available by the funder, which in the UK is the NHS and the National Institute for Health and Clinical Excellence (NICE). Two breast-cancer patients who campaigned in the UK in 2006 for trastuzumab (Herceptin) to be available on the NHS were being handled by a law firm working for Roche, the drug's manufacturer. The historian Lisa Jardine, who was suffering from breast cancer, told the Guardian that she had been approached by a PR firm working for the company.

The chapter also covers the influence of drug reps, how ghostwriters are employed by the drug companies to write papers for academics to publish, how independent the academic journals really are, how the drug companies finance doctors' continuing education, and how patients' groups are often funded by industry.

Afterword: "Better Data"

In the afterword and throughout the book, Goldacre makes suggestions for action by doctors, medical students, patients, patient groups and the industry. He advises doctors, nurses and managers to stop seeing drug reps, to ban them from clinics, hospitals and medical schools, to declare online and in waiting rooms all gifts and hospitality received from the industry, and to remove all drug company promotional material from offices and waiting rooms. (He praises the website of the American Medical Student Association – www.amsascorecard.org – which ranks institutions according to their conflict-of-interest policies, writing that it makes him "feel weepy.") He also suggests that regulations be introduced to prevent pharmacists from sharing doctors' prescribing records with drug reps.

He asks academics to lobby their universities and academic societies to forbid academics from being involved in ghostwriting, and to lobby for "film credit" contributions at the end of every academic paper, listing everyone involved, including who initiated the idea of publishing the paper. He also asks for full disclosure of all past clinical trial results, and a list of academic papers that were, as he puts it, "rigged" by industry, so that they can be retracted or annotated. He asks drug company employees to become whistleblowers, either by writing an anonymous blog, or by contacting him.

He advises patients to ask their doctors whether they accept drug-company hospitality or sponsorship, and if so to post details in their waiting rooms, and to make clear whether it is acceptable to the patient for the doctor to discuss his or her medical history with drug reps. Patients who are invited to take part in a trial are advised to ask, among other things, for a written guarantee that the trial has been publicly registered, and that the main outcome of the trial will be published within a year of its completion. He advises patient groups to write to drug companies with the following: "We are living with this disease; is there anything at all that you're withholding? If so, tell us today."

Reception

The book was generally well received. The Economist described it as "slightly technical, eminently readable, consistently shocking, occasionally hectoring, and unapologetically polemical". Helen Lewis in the New Statesman called it an important book, while Luisa Dillner, writing in the Guardian, described it as a "thorough piece of investigative medical journalism".

Andrew Jack wrote in the Financial Times that Goldacre is "at his best in methodically dissecting poor clinical trials. ... He is less strong in explaining the complex background reality, such as the general constraints and individual slips of regulators and pharma companies' employees." Jack also argued that the book failed to reflect how many lives have been improved by the current system, for example with new treatments for HIV, rheumatoid arthritis and cancer.

Max Pemberton, a psychiatrist, wrote in the Daily Telegraph that "this is a book to make you enraged ... because it's about how big business puts profits over patient welfare, allows people to die because they don't want to disclose damning research evidence, and the tricks they play to make sure doctors do not have all the evidence when it comes to appraising whether a drug really works or not."

The Association of the British Pharmaceutical Industry (ABPI) replied in the New Statesman that Goldacre was "stuck in a bygone era where pharmaceutical companies wine and dine doctors in exchange for signing on the dotted line". The ABPI issued a press release, writing that the pharmaceutical industry is responsible for the discovery of 90 percent of all medicines, and that it takes an average of 10–12 years and £1.1bn to introduce a medicine to the market, with just one in 5,000 new compounds receiving regulatory approval, which it said makes research and development an expensive and risky business. They wrote that the industry is one of the most heavily regulated in the world, and is committed to ensuring full transparency in the research and development of new medicines. They also maintained that the examples Goldacre offered were "long documented and historical, and the companies concerned have long addressed these issues". Goldacre argues in the book that "the most dangerous tactic of all is the industry's enduring claim that these problems are all in the past".

Humphrey Rang of the British Pharmacological Society wrote that Goldacre had chosen his target well and had produced some shocking examples of secrecy and dishonesty, particularly the nondisclosure of data on the antidepressant reboxetine (chapter one), in which only one trial out of seven was published (the published study showed positive results, while the unpublished trials suggested otherwise). He argued that Goldacre had gone "over the top" in devoting a whole chapter (chapter five) to recommending large clinical trials using electronic patient data from general practitioners, without fully pointing out how problematic these can be; such trials raise issues, for example, about informed consent and regulatory oversight. Rang also criticized Goldacre's style, describing the book as too long, repetitive, hyperbolic, and in places too conversational. He particularly objected to the line, "medicine is broken", calling it a "foolish remark".

AllTrials

Following the book's publication, Goldacre co-founded AllTrials with David Tovey, editor-in-chief of the Cochrane Library, together with the British Medical Journal, the Centre for Evidence-based Medicine, and others in the UK, and Dartmouth College's Geisel School of Medicine and the Dartmouth Institute for Health Policy and Clinical Practice in the US. Set up in January 2013, the group campaigns for all past and current clinical trials of all treatments in current use to be registered and their results reported.

The British House of Commons Public Accounts Committee produced a report in January 2014, after hearing evidence from Goldacre, Fiona Godlee, editor-in-chief of the British Medical Journal, and others, about the stockpiling of Tamiflu and the withholding of data about the drug by its manufacturer, Roche. The committee said it was "surprised and concerned" to learn that information from clinical trials is routinely withheld from doctors, and recommended that the Department of Health take steps to ensure that all clinical-trial data be made available for currently prescribed treatments.

Publication details

  • Bad Pharma: How drug companies mislead doctors and harm patients, Fourth Estate, 2012 (UK). ISBN 978-0-00-735074-2
  • Faber and Faber, 2013 (US). ISBN 978-0-86547-800-8
  • Signal, 2013 (Canada). ISBN 978-0-7710-3629-3
  • As of December 2012 foreign rights had been sold for Brazil, the Czech Republic, Netherlands, Germany, Israel, Italy, Korea, Norway, Poland, Portugal, Spain and Turkey.
