Engineering is the profession aimed at modifying the natural environment, through the design, manufacture and maintenance of artifacts and technological systems. It might then be contrasted with science, the aim of which is to understand nature. Engineering at its core is about causing change, and therefore management of change
is central to engineering practice. The philosophy of engineering is
then the consideration of philosophical issues as they apply to
engineering. Such issues might include the objectivity of experiments,
the ethics of engineering activity in the workplace and in society, the
aesthetics of engineered artifacts, etc.
While engineering seems historically to have meant devising, the distinction between art, craft and technology isn't clearcut. The Latin root ars, the Germanic root kraft and the Greek root techne all originally meant the skill or ability to produce something, as opposed to, say, athletic ability. The something might be tangible, like a sculpture or a building, or less tangible, like a work of literature. Nowadays, art is commonly applied to the visual, performing or literary fields, especially the so-called fine arts ('the art of writing'), craft usually applies to the manual skill involved in the manufacture of an object, whether embroidery or aircraft ('the craft of typesetting') and technology tends to mean the products and processes currently used in an industry ('the technology of printing'). In contrast, engineering is the activity of effecting change through the design and manufacture of artifacts ('the engineering of print technology').
What distinguishes engineering design
from artistic design is the requirement for the engineer to make
quantitative predictions of the behavior and effect of the artifact
prior to its manufacture. Such predictions may be more or less accurate
but usually include the effects on individuals and/or society. In this
sense, engineering can be considered a social as well as a technological
discipline and judged not just by whether its artifacts work, in a
narrow sense, but also by how they influence and serve social values. What engineers do is subject to moral evaluation.
Modeling
Socio-technical systems, such as transport, utilities and their related infrastructures, comprise human elements as well as artifacts. Traditional mathematical and physical modeling techniques may not take adequate account of the effects of engineering on people and culture. The Civil Engineering discipline makes elaborate attempts to ensure
that a structure meets its specifications and other requirements prior
to its actual construction. The methods employed are well known as
Analysis and Design. Systems Modelling and Description makes an effort to extract the generic unstated principles behind the engineering approach.
The traditional engineering disciplines seem discrete but the
engineering of artifacts has implications that extend beyond such
disciplines into areas that might include psychology, finance and sociology.
The design of any artifact will then take account of the conditions
under which it will be manufactured, the conditions under which it will
be used, and the conditions under which it will be disposed of. Engineers
can consider such "life cycle" issues without losing the precision and
rigor necessary to design functional systems.
Research ethics
Research ethics is a discipline within the study of applied ethics. Its scope ranges from general scientific integrity and misconduct to the treatment of human and animal subjects. The social responsibilities of scientists and researchers are not traditionally included and are less well defined.
The discipline is most developed in medical research. Beyond the issues of falsification, fabrication, and plagiarism that arise in every scientific field, research design in human subject research and animal testing are the areas that raise ethical questions most often.
The list of historic cases includes many large-scale violations and crimes against humanity, such as Nazi human experimentation and the Tuskegee syphilis experiment, which led to international codes of research ethics. Medical ethics
developed out of centuries of general malpractice and science motivated
only by results. Medical ethics in turn led to today's broader
understanding of bioethics.
First introduced in the 19th century by Charles Babbage, the concept of research integrity came to the fore in the late 1970s. A series of publicized scandals in the United States
led to heightened debate on the ethical norms of sciences and the
limitations of the self-regulation processes implemented by scientific
communities and institutions. Formalized definitions of scientific misconduct, and codes of conduct, became the main policy response after 1990. In the 21st century, codes of conduct or ethics codes
for research integrity are widespread. Along with codes of conduct at
institutional and national levels, major international texts include the
European Charter for Researchers
(2005), the Singapore Statement on Research Integrity (2010), the
European Code of Conduct for Research Integrity (2011 & 2017) and
the Hong Kong Principles for Assessing Researchers (2020).
Scientific literature on research integrity falls mostly into two
categories: first, mapping of the definitions and categories,
especially in regard to scientific misconduct, and second, empirical
surveys of the attitudes and practices of scientists. Following the development of codes of conduct, taxonomies of
unethical practices have been significantly expanded beyond the
long-established forms of scientific fraud (plagiarism, falsification
and fabrication of results). Definitions of "questionable research
practices" and the debate over reproducibility also target a grey area of dubious scientific results, which may not be the outcome of deliberate manipulation.
The concrete impact of codes of conduct and other measures put in
place to ensure research integrity remains uncertain. Several case
studies have highlighted that while the principles of typical codes of
conduct adhere to common scientific ideals, they are seen as remote from
actual work practices, and their effectiveness has been criticized.
After 2010, debates on research integrity have been increasingly linked to open science.
International codes of conduct and national legislation on research
integrity have officially endorsed open sharing of scientific output
(publications, data, and code used to perform statistical analyses on
the data)
as ways to limit questionable research practices and to enhance
reproducibility. Having both the data and the actual code enables others
to reproduce the results for themselves (or to uncover problems in the
analyses when trying to do so). The European Code of Conduct for
Research Integrity (2023) states, for example, that
"Researchers, research institutions, and organisations ensure that
access to data is as open as possible, as closed as necessary, and where
appropriate in line with the FAIR Principles (Findable, Accessible,
Interoperable and Reusable)
for data management" and that "Researchers, research institutions, and
organisations are transparent about how to access and gain permission to
use data,
metadata, protocols, code, software, and other research materials". References to open science have incidentally opened up the debate over
scientific integrity beyond academic communities, as it increasingly
concerns a wider audience of scientific readers.
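The reproduction step that open code and data enable can be sketched in a few lines: given a shared dataset and the exact analysis code, anyone can recompute a published statistic and detect a mismatch. Everything here (the CSV contents, the "published" effect of 0.9) is hypothetical; in practice the data would be downloaded from the repository cited in the paper.

```python
import csv
import io
from statistics import mean

# Stand-in for a shared CSV file; contents are hypothetical.
SHARED_DATA = """group,score
treatment,5.1
treatment,4.9
control,4.2
control,4.0
"""

def reanalyze(csv_text: str) -> float:
    """Recompute the reported effect: the difference in group means."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    t = [float(r["score"]) for r in rows if r["group"] == "treatment"]
    c = [float(r["score"]) for r in rows if r["group"] == "control"]
    return mean(t) - mean(c)

reported_effect = 0.9  # value claimed in the (hypothetical) paper
# A mismatch here would flag a problem in the published analysis.
assert abs(reanalyze(SHARED_DATA) - reported_effect) < 1e-9
```

If the recomputed value disagrees with the paper, the reader has uncovered either an error in the analysis or a discrepancy between the shared materials and the published report.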
Scientific misconduct
A reconstruction of the skull purportedly belonging to the Piltdown Man, a long-lasting case of scientific misconduct
A Lancet review on Handling of Scientific Misconduct in Scandinavian countries provides the following sample definitions, reproduced in The COPE report 1999:
Danish definition: "Intention or gross negligence leading to
fabrication of the scientific message or a false credit or emphasis
given to a scientist"
Swedish definition: "Intention[al] distortion of the research
process by fabrication of data, text, hypothesis, or methods from
another researcher's manuscript form or publication; or distortion of
the research process in other ways."
The consequences of scientific misconduct can be damaging for perpetrators and journal audiences, and for any individual who exposes it. In addition, there are public health implications attached to the
promotion of medical or other interventions based on false or fabricated
research findings. Scientific misconduct can result in loss of public trust in the integrity of science.
Three percent of the 3,475 research institutions that report to the US Department of Health and Human Services' Office of Research Integrity indicate some form of scientific misconduct. However the ORI will only investigate allegations of impropriety where
research was funded by federal grants. They routinely monitor such
research publications for red flags and their investigation is subject
to a statute of limitations. Other private organizations, like the
International Committee of Medical Journal Editors (ICMJE), can only
police their own members.
Medical ethics
Medical ethics is an applied branch of ethics which analyzes the practice of clinical medicine and related scientific research. Medical ethics is based on a set of values that professionals can refer
to in the case of any confusion or conflict. These values include the
respect for autonomy, non-maleficence, beneficence, and justice. Such tenets may allow doctors, care providers, and families to create a treatment plan and work towards the same common goal. These four values are not ranked in order of importance or relevance;
each bears on medical ethics in its own right. However, a conflict may arise, leading to the need for a hierarchy in an
ethical system, such that some moral elements overrule others with the
purpose of applying the best moral judgement to a difficult medical
situation. Medical ethics is particularly relevant in decisions regarding involuntary treatment and involuntary commitment.
There are several codes of conduct. The Hippocratic Oath, which dates to the fifth century BCE, discusses basic principles for medical professionals. The Declaration of Helsinki (1964) and the Nuremberg Code
(1947) are two well-known and well-respected documents contributing to
medical ethics. Other important milestones in the history of medical
ethics include Roe v. Wade in 1973 and the development of hemodialysis in the 1960s. With hemodialysis
now available, but a limited number of dialysis machines to treat
patients, an ethical question arose on which patients to treat and which
ones not to treat, and which factors to use in making such a decision. More recently, new gene-editing techniques
aimed at treating, preventing, and curing diseases
are raising important moral questions about their applications
in medicine and treatment, as well as their societal impact on future
generations.
As this field continues to develop and change throughout history,
the focus remains on fair, balanced, and moral thinking across all
cultural and religious backgrounds around the world. The field of medical ethics encompasses both practical application in clinical settings and scholarly work in philosophy, history, and sociology.
Medical ethics encompasses beneficence, autonomy, and justice as they
relate to conflicts such as euthanasia, patient confidentiality,
informed consent, and conflicts of interest in healthcare. In addition, medical ethics and culture are interconnected, as different
cultures implement ethical values differently, sometimes placing more
emphasis on family values and downplaying the importance of autonomy.
This leads to an increasing need for culturally sensitive physicians and ethical committees in hospitals and other healthcare settings.
Bioethics
Bioethics is both a field of study and a professional practice concerned with ethical issues related to health (primarily focused on humans, but increasingly including animal ethics), including those emerging from advances in biology, medicine,
and technology. It fosters discussion about moral discernment in
society (what decisions are "good" or "bad" and why) and is often
related to medical policy and practice, but also to broader questions such as
the environment, well-being and public health. Bioethics is concerned with the ethical questions that arise in the relationships among life sciences, biotechnology, medicine, politics, law, theology and philosophy. It includes the study of values relating to primary care, other branches of medicine ("the ethics of the ordinary"), ethical education in science, animal and environmental ethics, and public health.
Study
participants are entitled to some degree of autonomy in deciding their
participation. One measure for safeguarding this right is the use of informed consent for clinical research. Researchers refer to populations with limited autonomy as "vulnerable
populations"; these are subjects who may not be able to fairly decide
for themselves whether to participate. Examples of vulnerable
populations include children, prisoners, soldiers, people under detention, migrants, persons with conditions that preclude their autonomy,
and, to a lesser
extent, any population for which there is reason to believe that the
research study could seem particularly or unfairly persuasive or
misleading. Ethical problems particularly encumber the use of children in clinical trials.
Society
Consequences for the environment, for society and for future generations must be considered.
In Canada, there are different committees for different agencies: the Research Ethics Board (REB), along with two panels that split their duties between responsible conduct of research (PRCR) and research ethics (PRE).
The European Union only sets guidelines for its members' ethics committees.
Large international organizations like the WHO have their own ethics committees.
In Canada, mandatory research ethics training is required for students, professors and others who work in research. The US first legislated institutional review board procedures in the 1974 National Research Act.
Criticism
In a 2009 article in Social Science & Medicine, several authors suggested that research ethics in a medical context is dominated by principlism.
Preregistration
Preregistration is the practice of registering the hypotheses, methods, or analyses of a scientific study before it is conducted. Clinical trial registration is similar, although it may not require the registration of a study's analysis protocol. Finally, registered reports include the peer review and in-principle acceptance of a study protocol prior to data collection.
The primary goal of preregistration is to allow transparent evaluation of the severity of hypothesis tests; it can also serve a number of secondary goals (which can be achieved without preregistering),
including (a) facilitating and documenting research plans, (b)
identifying and reducing questionable research practices and researcher
biases, (c) distinguishing between confirmatory and exploratory analyses, and, in the case of Registered Reports, (d) facilitating results-blind peer review, and (e) reducing publication bias.
A number of research practices, such as p-hacking, publication bias, data dredging, inappropriate forms of post hoc analysis, and HARKing, increase the probability of incorrect claims. Although the idea of preregistration is old, the practice of preregistering studies has gained prominence as a way to mitigate some of the issues that underlie the replication crisis.
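How one such practice inflates false positives can be demonstrated with a small simulation of optional stopping (a form of p-hacking): an analyst who peeks at the data after each batch and stops at the first "significant" result claims far more false positives than one who tests only once. This is an illustrative sketch; the batch size, number of peeks, and the simplified z-test are all arbitrary choices, not a model of any particular study.

```python
import random
from statistics import NormalDist, mean

def p_value(xs, ys):
    """Two-sided p-value for a difference in means, treating the
    population SD as known (= 1), i.e. a simple z-test."""
    n = len(xs)
    z = (mean(xs) - mean(ys)) / (2 / n) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

def one_study(rng, peek, batch=20, max_batches=5):
    """Both groups come from the SAME distribution, so every
    'significant' result is a false positive.  With peek=True the
    analyst tests after each batch and stops at the first p < .05
    (optional stopping)."""
    xs, ys = [], []
    for _ in range(max_batches):
        xs += [rng.gauss(0, 1) for _ in range(batch)]
        ys += [rng.gauss(0, 1) for _ in range(batch)]
        if peek and p_value(xs, ys) < 0.05:
            return True
    return p_value(xs, ys) < 0.05

rng = random.Random(1)
sims = 2000
honest = sum(one_study(rng, peek=False) for _ in range(sims)) / sims
hacked = sum(one_study(rng, peek=True) for _ in range(sims)) / sims
# honest stays near the nominal 5%; hacked is substantially inflated
print(f"single final test: {honest:.3f}  optional stopping: {hacked:.3f}")
```

With these settings the single-final-test arm stays near the nominal 5% false-positive rate, while the peeking arm inflates it well above that level, despite there being no true effect in either arm.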
Types
Standard preregistration
In
the standard preregistration format, researchers prepare a research
protocol document prior to conducting their research. Ideally, this
document indicates the research hypotheses, sampling procedure, sample
size, research design, testing conditions, stimuli, measures, data
coding and aggregation method, criteria for data exclusions, and
statistical analyses, including potential variations on those analyses.
This preregistration document is then posted on a publicly available
website such as the Open Science Framework or AsPredicted.
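Of the items listed above, the planned sample size is the one most often derived by formula. A minimal sketch of such a derivation, using the normal approximation to the two-sample t-test; the effect size d = 0.5 and the 80% power target are hypothetical planning values, and the exact t-based answer would be slightly larger.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means, via the normal approximation to the t-test."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# A preregistration might then record: "n = 63 per group, powered at 80%
# to detect a standardized effect of d = 0.5 at alpha = .05 (two-sided)."
print(n_per_group(0.5))  # 63
```

Writing the planned n, effect size, alpha, and power into the protocol document makes later deviations (e.g., stopping early) visible to reviewers.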
The preregistered study is then conducted, and a report of the study
and its results are submitted for publication together with access to
the preregistration document. This preregistration approach allows peer
reviewers and subsequent readers to cross-reference the preregistration
document with the published research article in order to identify the
presence of any opportunistic deviations from the preregistration that
reduce the severity of tests. Deviations from the preregistration are
possible and common in practice, but they should be transparently
reported, and the consequences for the severity of the test should be
evaluated.
Registered reports
The
registered report format requires authors to submit a description of
the study methods and analyses prior to data collection. Once the theoretical introduction, method, and analysis plan have been
peer reviewed (Stage 1 peer review), publication of the findings is
provisionally guaranteed (in principle acceptance). The proposed study
is then performed, and the research report is submitted for Stage 2 peer
review. Stage 2 peer review confirms that the actual research methods
are consistent with the preregistered protocol, that quality thresholds
are met (e.g., manipulation checks confirm the validity of the
experimental manipulation), and that the conclusions follow from the
data. Because studies are accepted for publication regardless of whether
the results are statistically significant, Registered Reports prevent
publication bias. Meta-scientific research has shown that the percentage
of non-significant results in Registered Reports is substantially
higher than in standard publications.
Specialised preregistration
Preregistration can be used in relation to a variety of different research designs and methods, including:
Quantitative research in psychology
Qualitative research
Preexisting data
Single case designs
Electroencephalogram research
Experience sampling
Exploratory research
Animal research
Clinical trial registration
Clinical trial registration is the practice of documenting clinical trials before they are performed in a clinical trials registry so as to combat publication bias and selective reporting. Registration of clinical trials is required in some countries and is increasingly being standardized. Some top medical journals will only publish the results of trials that have been pre-registered.
A clinical trials registry is a platform which catalogs registered clinical trials. ClinicalTrials.gov, run by the United States National Library of Medicine
(NLM), was the first online registry for clinical trials, and remains
the largest and most widely used. In addition to combating bias,
clinical trial registries serve to increase transparency and access to
clinical trials for the public. Clinical trials registries are often
searchable (e.g. by disease/indication, drug, location, etc.). Trials
are registered by the pharmaceutical, biotech or medical device company
(Sponsor) or by the hospital or foundation which is sponsoring the
study, or by another organization, such as a contract research organization (CRO) which is running the study.
There has been a push from governments and international
organizations, especially since 2005, to make clinical trial information
more widely available and to standardize registries and processes of
registering. The World Health Organization is working toward "achieving consensus on both the minimal and the optimal operating standards for trial registration".[28]
Creation and development
For
many years, scientists and others have worried about reporting biases
such that negative or null results from initiated clinical trials may be
less likely to be published than positive results, thus skewing the
literature and our understanding of how well interventions work. This worry has been international and written about for over 50 years. One of the proposals to address this potential bias was a
comprehensive register of initiated clinical trials that would inform
the public which trials had been started. Ethical issues were those that seemed to interest the public most, as
trialists (including those with potential commercial gain) benefited
from those who enrolled in trials but were not required to “give back”
by telling the public what they had learned.
Those who were particularly concerned by the double standard were
systematic reviewers, those who summarize what is known from clinical
trials. If the literature is skewed, then the results of a systematic
review are also likely to be skewed, possibly favoring the test
intervention even though the accumulated data, had they all been made
public, would not show this.
ClinicalTrials.gov was originally developed largely as a result of breast cancer consumer lobbying, which led to authorizing language in the FDA Modernization Act of 1997
(Food and Drug Administration Modernization Act of 1997. Pub L No.
105-115, §113 Stat 2296), but the law provided neither funding nor a
mechanism of enforcement. In addition, the law required that
ClinicalTrials.gov only include trials of serious and life-threatening
diseases.
Then, two events occurred in 2004 that increased public awareness
of the problems of reporting bias. First, the then-New York State
Attorney General Eliot Spitzer sued GlaxoSmithKline (GSK) because it had failed to reveal results from trials showing that certain antidepressants might be harmful.
Shortly thereafter, the International Committee of Medical Journal Editors
(ICMJE) announced that their journals would not publish reports of
trials unless they had been registered. The ICMJE action was probably
the most important motivator for trial registration, as investigators
wanted to preserve the possibility of publishing their results
in prestigious journals.
In 2007, the Food and Drug Administration Amendments Act of 2007
(FDAAA) clarified the requirements for registration and also set
penalties for non-compliance (Public Law 110-85, the Food and Drug
Administration Amendments Act of 2007).
International participation
The International Committee of Medical Journal Editors
(ICMJE) decided that from July 1, 2005, no trials would be considered for
publication unless they were included in a clinical trials registry. The World Health Organization has begun the push for clinical trial registration with the initiation of the International Clinical Trials Registry Platform.
There has also been action from the pharmaceutical industry, which has
released plans to make clinical trial data more transparent and publicly
available. The revised Declaration of Helsinki, released in October 2008,
states that "Every clinical trial must be registered in a publicly
accessible database before recruitment of the first subject."
The World Health Organization maintains an international registry portal at http://apps.who.int/trialsearch/. WHO states that the international registry's mission is "to ensure that
a complete view of research is accessible to all those involved in
health care decision making. This will improve research transparency and
will ultimately strengthen the validity and value of the scientific
evidence base."
Since 2007, the International Committee of Medical Journal Editors
(ICMJE) has accepted all primary registries in the WHO network in addition to
ClinicalTrials.gov. Clinical trial registration in registries
other than ClinicalTrials.gov has increased irrespective of study design
since 2014.
Reporting compliance
Various
studies have measured the extent to which trials are in
compliance with the reporting standards of their registry.
Worldwide, there is a growing number of registries. A 2013 study identified the following top five registries (numbers as of August 2013):
1. ClinicalTrials.gov: 150,551
2. EU register: 21,060
3. Japan registries network (JPRN): 12,728
4. ISRCTN: 11,794
5. Australia and New Zealand (ANZCTR): 8,216
Overview of preclinical study registries
Similar
to clinical research, preregistration can help to improve transparency
and quality of research data in preclinical research. In contrast to clinical research, where preregistration is mandatory in
large part, it is still new in preclinical research. A large part of
preclinical and basic biomedical research relies on animal experiments.
The non-publication of results gained from animal experiments not only
distorts the state of research by reinforcing publication bias; it
also represents an ethical issue. Preregistration is discussed as a measure that could counteract this
problem. The following registries are suited for the preregistration of
preclinical studies.
Over 200 journals offer a registered reports option (Centre for Open Science, 2019), and the number of journals that are adopting registered reports is approximately doubling each year (Chambers et al., 2019).
Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor-in-chief
also noted that the editorial staff will ask for replication of
studies with surprising findings obtained from small samples
before allowing the manuscripts to be published.
Nature Human Behaviour
has adopted the registered report format, as it “shift[s] the emphasis
from the results of research to the questions that guide the research
and the methods used to answer them”.
European Journal of Personality
defines this format: “In a registered report, authors create a study
proposal that includes theoretical and empirical background, research
questions/hypotheses, and pilot data (if available). Upon submission,
this proposal will then be reviewed prior to data collection, and if
accepted, the paper resulting from this peer-reviewed procedure will be
published, regardless of the study outcomes.”
Note that only a very small proportion of academic journals in
psychology and neuroscience explicitly state that they welcome
submissions of replication studies in their aims and scope or
instructions to authors. This does not encourage the reporting, or even the attempting, of replication studies.
Overall, the number of participating journals is increasing, as indicated by the Center for Open Science, which maintains a list of journals encouraging the submission of registered reports.
Benefits
Several articles have outlined the rationale for preregistration (e.g., Lakens, 2019; Nosek et al., 2018; Wagenmakers et al., 2012). The primary goal of preregistration is to improve the transparency of
reported hypothesis tests, which allows readers to evaluate the extent
to which decisions during the data analysis were pre-planned
(maintaining statistical error control) or data-driven (increasing the
Type 1 or Type 2 error rate).
Meta-scientific research has revealed additional benefits.
Researchers report that preregistering a study leads to more carefully
thought-through research hypotheses, experimental designs, and
statistical analyses. In addition, preregistration has been shown to encourage better
learning of open science concepts; students felt that it helped them understand
their dissertation, improved the clarity of their manuscript
writing, promoted rigour, and made them more likely to avoid questionable
research practices. It also becomes a tool that supervisors can use to train students to combat questionable research practices.
A 2024 study in the Journal of Political Economy: Microeconomics
of preregistration in economics journals found that preregistration
reduced p-hacking and publication bias if the preregistration was
accompanied by a pre-analysis plan, but not if the preregistration did
not specify the planned analyses.
Criticisms
Proponents of preregistration have argued that it is "a method to increase the credibility of published results" (Nosek & Lakens, 2014), that it "makes your science better by increasing the credibility of your results" (Centre for Open Science), and that it "improves the interpretability and credibility of research findings" (Nosek et al., 2018, p. 2605). This argument assumes that on average non-preregistered analyses are
less "credible" and/or "interpretable" than preregistered analyses
because researchers may opportunistically abuse flexibility in the data
analysis to reduce the severity of the tests. Some critics have argued
that preregistration is not necessary to identify circular reasoning
during exploratory analyses (Rubin, 2020),
as it can be identified by analysing the reasoning per se without
needing to know whether that reasoning was preregistered. However, this
criticism has itself been criticized: "Authors who have raised this
criticism on preregistration fail to provide any real-life examples of
theories that sufficiently constrain how they can be tested, nor do they
provide empirical support for their
hypothesis that peers can identify systematic bias".
Critics have also noted that the idea that preregistration
improves research credibility may deter researchers from undertaking
non-preregistered exploratory analyses (Coffman & Niederle, 2015; see also Collins et al., 2021, Study 1). In response, preregistration advocates have stressed (a) that exploratory analyses were rarely published to begin with, (b) that exploratory analyses are permitted in preregistered studies,
and (c) that the results of these analyses retain some value for
hypothesis generation rather than hypothesis testing. Preregistration
merely makes the distinction between confirmatory and exploratory
research clearer (Nosek et al., 2018; Nosek & Lakens, 2014; Wagenmakers et al., 2012). Hence, although preregistration is supposed to reduce researcher degrees of freedom during the data analysis stage, it is also supposed to be “a plan, not a prison” (Dehaven, 2017). Deviations are sometimes improvements, and should be transparently reported so that others can evaluate the consequences of the deviation.
Finally, and more fundamentally, critics have argued that the
distinction between confirmatory and exploratory analyses is unclear
and/or irrelevant (Devezer et al., 2020; Rubin, 2020; Szollosi & Donkin, 2019).
However, more recent work has provided a more principled definition of
'exploratory' and 'confirmatory' by arguing that "hypothesis tests are
confirmatory when their error rates are controlled, and exploratory when
the error rates are not controlled", which both clarifies the distinction and demonstrates its relevance for preregistration.
Additional concerns have been raised that inflated familywise
error rates are unjustified when those error rates refer to abstract,
atheoretical studywise hypotheses that are not being tested (Rubin, 2020, 2021; Szollosi et al., 2020).
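For context, the familywise error rate at issue is the probability of at least one false positive across a set of tests, and it grows quickly with the number of tests. A minimal sketch of the standard arithmetic, with m = 10 independent tests as an arbitrary illustration:

```python
# With m independent tests each at level alpha, the familywise error
# rate (chance of at least one false positive) is 1 - (1 - alpha)^m.
alpha, m = 0.05, 10
fwer_uncorrected = 1 - (1 - alpha) ** m        # ~0.40: far above alpha
fwer_bonferroni = 1 - (1 - alpha / m) ** m     # ~0.049: Bonferroni restores control

print(round(fwer_uncorrected, 3), round(fwer_bonferroni, 3))
```

The criticism above is that this studywise correction is beside the point when no one is actually testing the joint "studywise" hypothesis; the arithmetic itself is not in dispute.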
There are also concerns about the practical implementation of
preregistration. Many preregistered protocols leave plenty of room for p-hacking (Bakker et al., 2020; Heirene et al., 2021; Ikeda et al., 2019; Singh et al., 2021; Van den Akker et al., 2023), and researchers rarely follow the exact research methods and analyses that they preregister (Abrams et al., 2020; Claesen et al., 2019; Heirene et al., 2021; Clayson et al., 2025; see also Boghdadly et al., 2018; Singh et al., 2021; Sun et al., 2019). For example, preregistered studies appear to be of higher quality than
non-preregistered studies only when the former include a power analysis and a larger
sample size; otherwise, preregistration does not seem to
prevent p-hacking and HARKing, as both the proportion of positive
results and effect sizes are similar between preregistered and
non-preregistered studies (Van den Akker et al., 2023). In addition, a survey of 27 preregistered studies found that researchers deviated from their preregistered plans in all cases (Claesen et al., 2019). The most frequent deviations were with regard to the planned sample
size, exclusion criteria, and statistical model. Hence, what were
intended as preregistered confirmatory tests ended up as unplanned
exploratory tests. Again, preregistration advocates argue that
deviations from preregistered plans are acceptable as long as they are
reported transparently and justified. They also point out that even
vague preregistrations help to reduce researcher degrees of freedom and make any residual flexibility transparent (Simmons et al., 2021, p. 180). A larger study of 92 EEG/ERP studies showed that only 60% of studies
adhered to their preregistrations or disclosed all deviations. Notably, registered reports had higher adherence rates (92%) than unreviewed preregistrations (60%).
However, critics have argued that it is not useful to identify or
justify deviations from preregistered plans when those plans do not
reflect high quality theory and research practice. As Rubin (2020) explained, “we should be more interested in the rationale for the current method and analyses than in the rationale for historical changes that have led up to the current method and analyses” (pp. 378–379). In addition, pre-registering a study requires careful deliberation
about the study's hypotheses, research design and statistical analyses.
This deliberation is supported by preregistration templates that provide
detailed guidance on what to include and why (Bowman et al., 2016; Haven & Van Grootel, 2019; Van den Akker et al., 2021). Many preregistration templates stress the importance of a power
analysis but not the importance of justifying the methodology
used. In addition to the concerns raised about its practical
implementation in quantitative research, critics have also argued that
preregistration is less applicable, or even unsuitable, for qualitative
research. They contend that preregistration imposes rigidity, limiting researchers' ability to
adapt to emerging data and evolving contexts, which are essential to
capturing the richness of participants' lived experiences (Souza-Neto & Moyle, 2025). Additionally, it conflicts with the inductive and flexible nature of
theory-building in qualitative research, constraining the exploratory
approach that is central to this methodology (Souza-Neto & Moyle, 2025).
Finally, some commentators have argued that, under some
circumstances, preregistration may actually harm science by providing a
false sense of credibility to research studies and analyses (Devezer et al., 2020; McPhetres, 2020; Pham & Oh, 2020; Szollosi et al., 2020). Consistent with this view, there is some evidence that researchers view
registered reports as being more credible than standard reports on a
range of dimensions (Soderberg et al., 2020; see also Field et al., 2020 for inconclusive evidence), although it is unclear whether this represents a "false" sense of
credibility due to pre-existing positive community attitudes about
preregistration or a genuine causal effect of registered reports on
quality of research.
An experiment is a procedure carried out to support or refute a hypothesis, or determine the efficacy or likelihood of something previously untried. Experiments provide insight into cause-and-effect
by demonstrating what outcome occurs when a particular factor is
manipulated. Experiments vary greatly in goal and scale but always rely
on repeatable procedure and logical analysis of the results. Natural
experimental studies also exist.
A child may carry out basic experiments to understand how things
fall to the ground, while teams of scientists may take years of
systematic investigation to advance their understanding of a phenomenon.
Experiments and other types of hands-on activities are very important
to student learning in the science classroom. Experiments can raise test
scores and help a student become more engaged and interested in the
material they are learning, especially when used over time. Experiments can vary from personal and informal natural comparisons
(e.g. tasting a range of chocolates to find a favorite), to highly
controlled (e.g. tests requiring complex apparatus overseen by many
scientists who hope to discover information about subatomic particles).
Uses of experiments vary considerably between the natural and human sciences.
Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables
in an experiment are controlled (accounted for by the control
measurements) and none are uncontrolled. In such an experiment, if all
controls work as expected, it is possible to conclude that the
experiment works as intended, and that results are due to the effect of
the tested variables.
Overview
In the scientific method, an experiment is an empirical procedure that arbitrates competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them.
An experiment usually tests a hypothesis,
which is an expectation about how a particular process or phenomenon
works. However, an experiment may also aim to answer a "what-if"
question, without a specific expectation about what the experiment
reveals, or to confirm prior results. If an experiment is carefully
conducted, the results usually either support or disprove the
hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis, it can only add support. On the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis, but a theory can always be salvaged by appropriate ad hoc modifications at the expense of simplicity.
In engineering and the physical sciences,
experiments are a primary component of the scientific method. They are
used to test theories and hypotheses about how physical processes work
under particular conditions (e.g., whether a particular engineering
process can produce a desired chemical compound). Typically, experiments
in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon.
In medicine and the social sciences,
the prevalence of experimental research varies widely across
disciplines. When used, however, experiments typically follow the form
of the clinical trial,
where experimental units (usually individual human beings) are randomly
assigned to a treatment or control condition where one or more outcomes
are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study typically does not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis.
There are various differences in experimental practice in each of the branches of science. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers), while experimental economics
often involves experimental tests of theorized human behaviors without
relying on random assignment of individuals to treatment and control
conditions.
One of the first methodical approaches to experiments in the modern
sense is visible in the works of the Arab mathematician and scholar Ibn al-Haytham. He conducted his experiments in the field of optics—going back to optical and mathematical problems in the works of Ptolemy—by
controlling his experiments through self-criticism, reliance on the
visible results of the experiments, and a critical attitude toward
earlier results. He was one of the first scholars to use an
inductive-experimental method for achieving results. In his Book of Optics he describes the fundamentally new approach to knowledge and research in an experimental sense:
We should, that is, recommence the
inquiry into its principles and premisses, beginning our investigation
with an inspection of the things that exist and a survey of the
conditions of visible objects. We should distinguish the properties of
particulars, and gather by induction what pertains to the eye when
vision takes place and what is found in the manner of sensation to be
uniform, unchanging, manifest and not subject to doubt. After which we
should ascend in our inquiry and reasonings, gradually and orderly,
criticizing premisses and exercising caution in regard to
conclusions—our aim in all that we make subject to inspection and review
being to employ justice, not to follow prejudice, and to take care in
all that we judge and criticize that we seek the truth and not to be
swayed by opinion. We may in this way eventually come to the truth that
gratifies the heart and gradually and carefully reach the end at which
certainty appears; while through criticism and caution we may seize the
truth that dispels disagreement and resolves doubtful matters. For all
that, we are not free from that human turbidity which is in the nature
of man; but we must do our best with what we possess of human power.
From God we derive support in all things.
According to his explanation, a strictly controlled test execution
with a sensibility for the subjectivity and susceptibility of outcomes
due to the nature of man is necessary. Furthermore, a critical view on
the results and outcomes of earlier scholars is necessary:
It is thus the duty of the man who
studies the writings of scientists, if learning the truth is his goal,
to make himself an enemy of all that he reads, and, applying his mind to
the core and margins of its content, attack it from every side. He
should also suspect himself as he performs his critical examination of
it, so that he may avoid falling into either prejudice or leniency.
Thus, a comparison of earlier results with the experimental results
is necessary for an objective experiment—the visible results being more
important. In the end, this may mean that an experimental researcher
must find enough courage to discard traditional opinions or results,
especially if these results are not experimental but results from a
logical/mental derivation. In this process of critical consideration,
the man himself should not forget that he tends to subjective
opinions—through "prejudices" and "leniency"—and thus has to be critical
about his own way of building hypotheses.
Francis Bacon (1561–1626), an English philosopher and scientist active in the 17th century, became an influential supporter of experimental science in the English renaissance. He disagreed with the method of answering scientific questions by deduction—similar to Ibn al-Haytham—and
described it as follows: "Having first determined the question
according to his will, man then resorts to experience, and bending her
to conformity with his placets, leads her about like a captive in a
procession." Bacon wanted a method that relied on repeatable observations, or
experiments. He was notably the first to set out the scientific method as we
understand it today.
There
remains simple experience; which, if taken as it comes, is called
accident, if sought for, experiment. The true method of experience first
lights the candle [hypothesis], and then by means of the candle shows
the way [arranges and delimits the experiment]; commencing as it does
with experience duly ordered and digested, not bungling or erratic, and
from it deducing axioms [theories], and from established axioms again
new experiments.
In the centuries that followed, people who applied the scientific
method in different areas made important advances and discoveries. For
example, Galileo Galilei
(1564–1642) accurately measured time and experimented to make accurate
measurements and conclusions about the speed of a falling body. Antoine Lavoisier (1743–1794), a French chemist, used experiment to describe new areas, such as combustion and biochemistry and to develop the theory of conservation of mass (matter). Louis Pasteur (1822–1895) used the scientific method to disprove the prevailing theory of spontaneous generation and to develop the germ theory of disease. Because of the importance of controlling potentially confounding variables, the use of well-designed laboratory experiments is preferred when possible.
A considerable amount of progress on the design and analysis of
experiments occurred in the early 20th century, with contributions from
statisticians such as Ronald Fisher (1890–1962), Jerzy Neyman (1894–1981), Oscar Kempthorne (1919–2000), Gertrude Mary Cox (1900–1978), and William Gemmell Cochran (1909–1980), among others.
Types
Experiments
might be categorized according to a number of dimensions, depending
upon professional norms and standards in different fields of study.
In some disciplines (e.g., psychology or political science), a 'true experiment' is a method of social research in which there are two kinds of variables. The independent variable is manipulated by the experimenter, and the dependent variable is measured. The signifying characteristic of a true experiment is that it randomly allocates the subjects to neutralize experimenter bias, and ensures, over a large number of iterations of the experiment, that it controls for all confounding factors.
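The random allocation that defines a 'true experiment' can be sketched in a few lines. This is an illustrative procedure with hypothetical subject labels, not a prescription from any particular discipline; any unbiased shuffling scheme with a recorded seed would serve:

```python
import random

# Randomly allocate subjects to treatment and control groups so that,
# in expectation, confounding factors are balanced between the groups.
# Subject identifiers are hypothetical placeholders.

def randomize(subjects, seed=0):
    pool = list(subjects)
    random.Random(seed).shuffle(pool)   # unbiased random permutation
    half = len(pool) // 2
    return pool[:half], pool[half:]     # (treatment, control)

treatment, control = randomize(["s1", "s2", "s3", "s4", "s5", "s6"])
print(treatment, control)
```

Recording the seed makes the allocation reproducible and auditable, which matters when the randomization scheme itself guides the later statistical analysis.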
Depending on the discipline, experiments can be conducted to accomplish different but not mutually exclusive goals: test theories, search for and document phenomena, develop theories, or
advise policymakers. These goals also relate differently to validity concerns.
A controlled experiment often compares the results obtained from experimental samples against control
samples, which are practically identical to the experimental sample
except for the one aspect whose effect is being tested (the independent variable). A good example would be a drug trial. The sample or group receiving the drug would be the experimental group (treatment group); and the one receiving the placebo or regular treatment would be the control one. In many laboratory experiments it is good practice to have several replicate samples for the test being performed and have both a positive control and a negative control.
The results from replicate samples can often be averaged, or if one of
the replicates is obviously inconsistent with the results from the other
samples, it can be discarded as being the result of an experimental
error (some step of the test procedure may have been mistakenly omitted
for that sample). Most often, tests are done in duplicate or triplicate.
A positive control is a procedure similar to the actual experimental
test but is known from previous experience to give a positive result. A
negative control is known to give a negative result. The positive
control confirms that the basic conditions of the experiment were able
to produce a positive result, even if none of the actual experimental
samples produce a positive result. The negative control demonstrates the
base-line result obtained when a test does not produce a measurable
positive result. Most often the value of the negative control is treated
as a "background" value to subtract from the test sample results.
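The background subtraction described above can be sketched as follows, with hypothetical replicate readings standing in for real instrument output:

```python
# Subtract the negative-control "background" reading from averaged
# replicate measurements. All readings below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def corrected_signal(sample_replicates, negative_control_replicates):
    """Replicate-averaged sample reading minus the background value."""
    return mean(sample_replicates) - mean(negative_control_replicates)

sample = [0.52, 0.50, 0.54]    # duplicate/triplicate test readings
negative = [0.06, 0.05, 0.07]  # reagents only, no analyte (background)
print(corrected_signal(sample, negative))  # ~0.46
```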
Sometimes the positive control takes the form of a standard curve.
An example that is often used in teaching laboratories is a controlled protein assay.
Students might be given a fluid sample containing an unknown (to the
student) amount of protein. It is their job to correctly perform a
controlled experiment in which they determine the concentration of
protein in the fluid sample (usually called the "unknown sample"). The
teaching lab would be equipped with a protein standard solution
with a known protein concentration. Students could make several
positive control samples containing various dilutions of the protein
standard. Negative control samples would contain all of the reagents for
the protein assay but no protein. In this example, all samples are
performed in duplicate. The assay is a colorimetric assay in which a spectrophotometer
can measure the amount of protein in samples by detecting a colored
complex formed by the interaction of protein molecules and molecules of
an added dye. The results for the diluted test samples can then be
compared to the results of the standard curve to estimate the amount of
protein in the unknown sample.
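Reading an unknown off a standard curve amounts to fitting a line through the positive-control points and inverting it. A minimal sketch with hypothetical absorbance readings (real assays would also check linearity and replicate agreement):

```python
# Fit a standard curve (absorbance vs. known concentration) by ordinary
# least squares, then invert it to estimate an unknown concentration.
# All readings below are hypothetical.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx        # (slope, intercept)

conc = [0.0, 0.5, 1.0, 1.5, 2.0]         # mg/mL, protein standards
absorbance = [0.02, 0.26, 0.51, 0.74, 1.00]

slope, intercept = fit_line(conc, absorbance)
unknown_abs = 0.62                        # reading for the unknown sample
print((unknown_abs - intercept) / slope)  # estimated concentration, mg/mL
```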
Controlled experiments can be performed when it is difficult to
exactly control all the conditions in an experiment. In this case, the
experiment begins by creating two or more sample groups that are
probabilistically equivalent, which means that measurements of traits
should be similar among the groups and that the groups should respond in
the same manner if given the same treatment. This equivalency is
determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry,
where there is very little variation between individuals and the group
size is easily in the millions, these statistical methods are often
bypassed and simply splitting a solution into equal parts is assumed to
produce identical sample groups.
Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. Human experimentation requires special safeguards against outside variables such as the placebo effect. Such experiments are generally double blind,
meaning that neither the volunteer nor the researcher knows which
individuals are in the control group or the experimental group until
after all of the data have been collected. This ensures that any effects
on the volunteer are due to the treatment itself and are not a response
to the knowledge that they are being treated.
In human experiments, researchers may give a subject (person) a stimulus that the subject responds to. The goal of the experiment is to measure the response to the stimulus by a test method.
In the design of experiments, two or more "treatments" are applied to estimate the difference between the mean responses
for the treatments. For example, an experiment on baking bread could
estimate the difference in the responses associated with quantitative
variables, such as the ratio of water to flour, and with qualitative
variables, such as strains of yeast. Experimentation is the step in the scientific method that helps people decide between two or more competing explanations—or hypotheses.
These hypotheses suggest reasons to explain a phenomenon or predict the
results of an action. An example might be the hypothesis that "if I
release this ball, it will fall to the floor": this suggestion can then
be tested by carrying out the experiment of letting go of the ball, and
observing the results. Formally, a hypothesis is compared against its
opposite or null hypothesis
("if I release this ball, it will not fall to the floor"). The null
hypothesis is that there is no explanation or predictive power of the
phenomenon through the reasoning that is being investigated. Once
hypotheses are defined, an experiment can be carried out and the results
analysed to confirm, refute, or define the accuracy of the hypotheses.
The term "experiment" usually implies a controlled experiment, but
sometimes controlled experiments are prohibitively difficult,
impossible, unethical or illegal. In this case researchers resort to
natural experiments or quasi-experiments. Natural experiments rely solely on observations of the variables of the system
under study, rather than manipulation of just one or a few variables as
occurs in controlled experiments. To the degree possible, they attempt
to collect data for the system in such a way that contribution from all
variables can be determined, and where the effects of variation in
certain variables remain approximately constant so that the effects of
other variables can be discerned. The degree to which this is possible
depends on the observed correlation between explanatory variables in the observed data. When these variables are not
well correlated, natural experiments can approach the power of
controlled experiments. Usually, however, there is some correlation
between these variables, which reduces the reliability of natural
experiments relative to what could be concluded if a controlled
experiment were performed. Also, because natural experiments usually
take place in uncontrolled environments, variables from undetected
sources are neither measured nor held constant, and these may produce
illusory correlations in variables under study.
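The dependence on correlation between explanatory variables can be made concrete with the Pearson correlation coefficient. In the hypothetical observational data below the two explanatory variables move in lockstep, so their separate effects on any outcome could not be disentangled; a controlled experiment would break this dependence by design:

```python
# Pearson correlation between two explanatory variables in hypothetical
# observational data. Near-zero correlation lets their effects be
# separated; strong correlation confounds them.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

income = [30, 45, 60, 75, 90]        # hypothetical explanatory variable 1
education = [10, 12, 14, 16, 18]     # hypothetical explanatory variable 2
print(pearson_r(income, education))  # ~1.0: effects cannot be separated
```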
Much research in several science disciplines, including economics, human geography, archaeology, sociology, cultural anthropology, geology, paleontology, ecology, meteorology, and astronomy,
relies on quasi-experiments. For example, in astronomy it is clearly
impossible, when testing the hypothesis "Stars are collapsed clouds of
hydrogen", to start out with a giant cloud of hydrogen, and then perform
the experiment of waiting a few billion years for it to form a star.
However, by observing various clouds of hydrogen in various states of
collapse, and other implications of the hypothesis (for example, the
presence of various spectral emissions from the light of stars), we can
collect data we require to support the hypothesis. An early example of
this type of experiment was the first verification in the 17th century
that light does not travel from place to place instantaneously, but
instead has a measurable speed. Observations of the appearances of the
moons of Jupiter were slightly delayed when Jupiter was farther from
Earth, as opposed to when Jupiter was closer to Earth; and this
phenomenon was used to demonstrate that the difference in the time of
appearance of the moons was consistent with a measurable speed.
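The size of that delay can be checked with a back-of-the-envelope calculation: light crossing the diameter of Earth's orbit (about 2 astronomical units) takes roughly a quarter of an hour, which is the order of the shift in eclipse timings observed between Jupiter's nearest and farthest positions:

```python
# Rough light-travel-time delay across the diameter of Earth's orbit,
# the effect used in the 17th century to show light has a finite speed.

AU_M = 1.496e11      # astronomical unit, in metres
C_M_S = 2.998e8      # speed of light, in metres per second

delay_s = 2 * AU_M / C_M_S
print(delay_s / 60)  # ~16.6 minutes
```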
Field experiments are so named to distinguish them from laboratory
experiments, which enforce scientific control by testing a hypothesis
in the artificial and highly controlled setting of a laboratory. Often
used in the social sciences, and especially in economic analyses of
education and health interventions, field experiments have the advantage
that outcomes are observed in a natural setting rather than in a
contrived laboratory environment. For this reason, field experiments are
sometimes seen as having higher external validity
than laboratory experiments. However, like natural experiments, field
experiments suffer from the possibility of contamination: experimental
conditions can be controlled with more precision and certainty in the
lab. Yet some phenomena (e.g., voter turnout in an election) cannot be
easily studied in a laboratory.
Observational studies
The black box model for observation (input and output are observables). When there is feedback under the observer's control, the observation is also an experiment.
An observational study
is used when it is impractical, unethical, cost-prohibitive (or
otherwise inefficient) to fit a physical or social system into a
laboratory setting, to completely control confounding factors, or to
apply random assignment. It can also be used when confounding factors
are either limited or known well enough to analyze the data in light of
them (though this may be rare when social phenomena are under
examination). For an observational science to be valid, the experimenter
must know and account for confounding
factors. In these situations, observational studies have value because
they often suggest hypotheses that can be tested with randomized
experiments or by collecting fresh data.
Fundamentally, however, observational studies are not
experiments. By definition, observational studies lack the manipulation
required for Baconian experiments.
In addition, observational studies (e.g., in biological or social
systems) often involve variables that are difficult to quantify or
control. Observational studies are limited because they lack the
statistical properties of randomized experiments. In a randomized
experiment, the method of randomization specified in the experimental
protocol guides the statistical analysis, which is usually specified
also by the experimental protocol. Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model. Inferences from subjective models are unreliable in theory and practice. In fact, there are several cases where carefully conducted
observational studies consistently give wrong results, that is, where
the results of the observational studies are inconsistent and also
differ from the results of experiments. For example, epidemiological
studies of colon cancer consistently show beneficial correlations with
broccoli consumption, while experiments find no benefit.
A particular problem with observational studies involving human
subjects is the great difficulty attaining fair comparisons between
treatments (or exposures), because such studies are prone to selection bias,
and groups receiving different treatments (exposures) may differ
greatly according to their covariates (age, height, weight, medications,
exercise, nutritional status, ethnicity, family medical history, etc.).
In contrast, randomization implies that for each covariate, the mean
for each group is expected to be the same. For any randomized trial,
some variation from the mean is expected, of course, but the
randomization ensures that the experimental groups have mean values that
are close, due to the central limit theorem and Markov's inequality.
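The covariate balance that randomization delivers can be checked by a small simulation. The sketch below draws hypothetical age values, randomly splits them into two groups, and shows that the group means differ only by chance variation (which shrinks as group size grows):

```python
import random

# Randomly split hypothetical subjects into two groups and compare the
# group means of a covariate (age): under randomization the means are
# expected to be close, differing only by sampling noise.

def mean(xs):
    return sum(xs) / len(xs)

random.seed(42)
ages = [random.gauss(50, 10) for _ in range(2000)]  # hypothetical covariate
random.shuffle(ages)
group_a, group_b = ages[:1000], ages[1000:]
print(abs(mean(group_a) - mean(group_b)))  # small, by chance alone
```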
With inadequate randomization or low sample size, the systematic
variation in covariates between the treatment groups (or exposure
groups) makes it difficult to separate the effect of the treatment
(exposure) from the effects of the other covariates, most of which have
not been measured. The mathematical models used to analyze such data
must consider each differing covariate (if measured), and results are
not meaningful if a covariate is neither randomized nor included in the
model.
To avoid conditions that render an experiment far less useful, physicians conducting medical trials—say for U.S. Food and Drug Administration
approval—quantify and randomize the covariates that can be identified.
Researchers attempt to reduce the biases of observational studies with matching methods such as propensity score matching,
which require large populations of subjects and extensive information
on covariates. However, propensity score matching is no longer
recommended as a technique because it can increase, rather than
decrease, bias. Outcomes are also quantified when possible (bone density, the amount of
some cell or substance in the blood, physical strength or endurance,
etc.) and not based on a subject's or a professional observer's opinion.
In this way, the design of an observational study can render the
results more objective and therefore, more convincing.
By placing the distribution of the independent variable(s) under the
control of the researcher, an experiment—particularly when it involves human subjects—introduces
potential ethical considerations, such as balancing benefit and harm,
fairly distributing interventions (e.g., treatments for a disease), and informed consent.
For example, in psychology or health care, it is unethical to provide a
substandard treatment to patients. Therefore, ethical review boards are
supposed to stop clinical trials and other experiments unless a new
treatment is believed to offer benefits as good as current best
practice. It is also generally unethical (and often illegal) to conduct
randomized experiments on the effects of substandard or harmful
treatments, such as the effects of ingesting arsenic on human health. To
understand the effects of such exposures, scientists sometimes rely on
observational studies instead.
Even when experimental research does not directly involve human
subjects, it may still present ethical concerns. For example, the
nuclear bomb experiments conducted by the Manhattan Project
implied the use of nuclear reactions to harm human beings even though
the experiments did not directly involve any human subjects.