
Monday, April 19, 2021

Scientific method

From Wikipedia, the free encyclopedia

The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century. It involves careful observation and rigorous skepticism about what is observed, since cognitive assumptions can distort how one interprets an observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.

Although procedures vary from one field of inquiry to another, the underlying process is frequently the same from one field to another. The process in the scientific method involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments or empirical observations based on those predictions. A hypothesis is a conjecture, based on knowledge obtained while seeking answers to the question. The hypothesis might be very specific, or it might be broad. Scientists then test hypotheses by conducting experiments or studies. A scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.

The purpose of an experiment is to determine whether observations agree with or conflict with the predictions derived from a hypothesis. Experiments can take place anywhere from a garage to CERN's Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.

History

Aristotle (384–322 BCE). "As regards his method, Aristotle is recognized as the inventor of scientific method because of his refined analysis of logical implications contained in demonstrative discourse, which goes well beyond natural logic and does not owe anything to the ones who philosophized before him." – Riccardo Pozzo
 
Ibn al-Haytham (965–1039). A polymath, considered by some to be the father of modern scientific methodology, due to his emphasis on experimental data and reproducibility of its results.
 
Johannes Kepler (1571–1630). "Kepler shows his keen logical sense in detailing the whole process by which he finally arrived at the true orbit. This is the greatest piece of Retroductive reasoning ever performed." – C. S. Peirce, c. 1896, on Kepler's reasoning through explanatory hypotheses
 
Galileo Galilei (1564–1642). According to Albert Einstein, "All knowledge of reality starts from experience and ends in it. Propositions arrived at by purely logical means are completely empty as regards reality. Because Galileo saw this, and particularly because he drummed it into the scientific world, he is the father of modern physics – indeed, of modern science altogether."

Important debates in the history of science concern rationalism, especially as advocated by René Descartes; inductivism and/or empiricism, as argued for by Francis Bacon, and rising to particular prominence with Isaac Newton and his followers; and hypothetico-deductivism, which came to the fore in the early 19th century.

The term "scientific method" emerged in the 19th century, when a significant institutional development of science was taking place and terminologies establishing clear boundaries between science and non-science, such as "scientist" and "pseudoscience", appeared. During the 1830s and 1850s, when Baconianism was popular, naturalists such as William Whewell, John Herschel, and John Stuart Mill engaged in debates over "induction" and "facts" and focused on how to generate knowledge. In the late 19th and early 20th centuries, a debate over realism versus antirealism was conducted as powerful scientific theories extended beyond the realm of the observable.

The term "scientific method" came into popular use in the twentieth century, appearing in dictionaries and science textbooks, although there was little scientific consensus over its meaning. Although use of the term grew through the middle of the twentieth century, by the 1960s and 1970s numerous influential philosophers of science, such as Thomas Kuhn and Paul Feyerabend, had questioned the universality of the "scientific method" and in doing so largely replaced the notion of science as a homogeneous and universal method with that of a heterogeneous and local practice. In particular, Paul Feyerabend, in the 1975 first edition of his book Against Method, argued against there being any universal rules of science. Later examples include physicist Lee Smolin's 2013 essay "There Is No Scientific Method" and historian of science Daniel Thurs's chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization. Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a metamethodology.

Overview

The scientific method is the process by which science is carried out. As in other areas of inquiry, science (through the scientific method) can build on previous knowledge and develop a more sophisticated understanding of its topics of study over time. This model can be seen to underlie the scientific revolution.

The ubiquitous element in the scientific method is empiricism. This is in opposition to stringent forms of rationalism: the scientific method embodies the position that reason alone cannot solve a particular scientific problem. A strong formulation of the scientific method is not always aligned with a form of empiricism in which empirical data is put forward in the form of experience or other abstracted forms of knowledge; in current scientific practice, however, the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. The scientific method is of necessity also an expression of an opposition to claims that, e.g., revelation, political or religious dogma, appeals to tradition, commonly held beliefs, common sense, or, importantly, currently held theories are the only possible means of demonstrating truth.

Different early expressions of empiricism and the scientific method can be found throughout history, for instance with the ancient Stoics, Epicurus, Alhazen, Roger Bacon, and William of Ockham. From the 16th century onwards, experiments were advocated by Francis Bacon, and performed by Giambattista della Porta, Johannes Kepler, and Galileo Galilei. There was particular development aided by theoretical works by Francisco Sanches, John Locke, George Berkeley, and David Hume.

The hypothetico-deductive model, formulated in the 20th century, is the ideal, although it has undergone significant revision since it was first proposed. Staddon (2017) argues that it is a mistake to try to follow rules, which are best learned through careful study of examples of scientific investigation.

Process

The overall process involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions to determine whether the original conjecture was correct. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, these actions are better considered as general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order. As noted by scientist and philosopher William Whewell (1794–1866), "invention, sagacity, [and] genius" are required at every step.

Formulation of a question

The question can refer to the explanation of a specific observation, as in "Why is the sky blue?" but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation.

Hypothesis

A hypothesis is a conjecture, based on knowledge obtained while formulating the question, that may explain any given behavior. The hypothesis might be very specific; for example, Einstein's equivalence principle or Francis Crick's "DNA makes RNA makes protein", or it might be broad; for example, unknown species of life dwell in the unexplored depths of the oceans. A statistical hypothesis is a conjecture about a given statistical population. For example, the population might be people with a particular disease. The conjecture might be that a new drug will cure the disease in some of those people. Terms commonly associated with statistical hypotheses are null hypothesis and alternative hypothesis. A null hypothesis is the conjecture that the statistical hypothesis is false; for example, that the new drug does nothing and that any cure is caused by chance. Researchers normally want to show that the null hypothesis is false. The alternative hypothesis is the desired outcome, that the drug does better than chance. A final point: a scientific hypothesis must be falsifiable, meaning that one can identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, it cannot be meaningfully tested.
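The null-versus-alternative distinction can be sketched with a toy one-sided binomial test; all of the counts and probabilities below are hypothetical, chosen only to illustrate the reasoning:

```python
# Toy illustration of null vs. alternative hypotheses.
# H0 (null): the drug does nothing, so each patient recovers by chance
# with probability 0.5. H1 (alternative): the drug does better than chance.
from math import comb

n, cured = 20, 14   # hypothetical trial: 14 of 20 patients recovered
p_chance = 0.5      # recovery probability under H0

# One-sided p-value: probability of seeing 14 or more recoveries if H0
# is true. With p = 0.5, each outcome with k successes has probability
# C(n, k) * 0.5**n.
p_value = sum(comb(n, k) * p_chance**n for k in range(cured, n + 1))

print(f"p-value under H0: {p_value:.4f}")  # 0.0577
# At the conventional 0.05 level this evidence is suggestive but not
# sufficient to reject H0; more data would be needed.
```

Note that the test never proves the alternative directly; it only measures how surprising the data would be if the null hypothesis were true.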

Prediction

This step involves determining the logical consequences of the hypothesis. One or more predictions are then selected for further testing. The less likely it is that a prediction would be correct simply by coincidence, the more convincing it is when the prediction is fulfilled; evidence is also stronger if the answer to the prediction is not already known, due to the effects of hindsight bias (see also postdiction). Ideally, the prediction must also distinguish the hypothesis from likely alternatives; if two hypotheses make the same prediction, observing the prediction to be correct is not evidence for either one over the other. (These statements about the relative strength of evidence can be mathematically derived using Bayes' theorem.)
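The Bayesian point about evidence strength can be sketched numerically; the probabilities below are purely illustrative, not drawn from any real experiment:

```python
# Why an unlikely-by-coincidence prediction is stronger evidence,
# via Bayes' theorem. All probabilities here are illustrative.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) computed by Bayes' theorem."""
    joint_h = p_e_given_h * prior
    joint_not_h = p_e_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

prior = 0.5  # agnostic prior belief in hypothesis H

# Case 1: the prediction would likely come true by coincidence anyway.
weak = posterior(prior, p_e_given_h=1.0, p_e_given_not_h=0.5)

# Case 2: the prediction is very unlikely unless H is true.
strong = posterior(prior, p_e_given_h=1.0, p_e_given_not_h=0.01)

print(f"weak evidence:   P(H|E) = {weak:.3f}")    # 0.667
print(f"strong evidence: P(H|E) = {strong:.3f}")  # 0.990
```

The same observed outcome moves the posterior far more when the prediction would almost never hold by coincidence.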

Testing

This is an investigation of whether the real world behaves as predicted by the hypothesis. Scientists (and other people) test hypotheses by conducting experiments. The purpose of an experiment is to determine whether observations of the real world agree with or conflict with the predictions derived from a hypothesis. If they agree, confidence in the hypothesis increases; otherwise, it decreases. Agreement does not assure that the hypothesis is true; future experiments may reveal problems. Karl Popper advised scientists to try to falsify hypotheses, i.e., to search for and test those experiments that seem most doubtful. Large numbers of successful confirmations are not convincing if they arise from experiments that avoid risk. Experiments should be designed to minimize possible errors, especially through the use of appropriate scientific controls. For example, tests of medical treatments are commonly run as double-blind tests. Test personnel, who might unwittingly reveal to test subjects which samples are the desired test drugs and which are placebos, are kept ignorant of which are which. Such hints can bias the responses of the test subjects. Furthermore, failure of an experiment does not necessarily mean the hypothesis is false. Experiments always depend on several hypotheses, e.g., that the test equipment is working properly, and a failure may be a failure of one of the auxiliary hypotheses. (See the Duhem–Quine thesis.) Experiments can be conducted in a college lab, on a kitchen table, at CERN's Large Hadron Collider, at the bottom of an ocean, on Mars (using one of the working rovers), and so on. Astronomers do experiments, searching for planets around distant stars. Finally, most individual experiments address highly specific topics for reasons of practicality. As a result, evidence about broader topics is usually accumulated gradually.
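The double-blind control described above can be sketched as a randomized assignment whose unblinding key is kept away from test personnel; the subject identifiers, group sizes, and seed are all hypothetical:

```python
# Sketch of randomized double-blind assignment. Test personnel see only
# opaque subject codes; the code-to-treatment key is held separately
# until the trial is unblinded.
import random

subjects = [f"subject-{i:02d}" for i in range(1, 21)]
rng = random.Random(42)  # fixed seed so this sketch is reproducible

shuffled = subjects[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
unblinding_key = {s: "drug" for s in shuffled[:half]}
unblinding_key.update({s: "placebo" for s in shuffled[half:]})

drug_group = [s for s, arm in unblinding_key.items() if arm == "drug"]
placebo_group = [s for s, arm in unblinding_key.items() if arm == "placebo"]
print(len(drug_group), len(placebo_group))  # 10 10
```

Randomizing the assignment and withholding the key from everyone who interacts with subjects is what prevents the hinting bias described above.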

Analysis

This involves determining what the results of the experiment show and deciding on the next actions to take. The predictions of the hypothesis are compared to those of the null hypothesis, to determine which is better able to explain the data. In cases where an experiment is repeated many times, a statistical analysis such as a chi-squared test may be required. If the evidence has falsified the hypothesis, a new hypothesis is required; if the experiment supports the hypothesis but the evidence is not strong enough for high confidence, other predictions from the hypothesis must be tested. Once a hypothesis is strongly supported by evidence, a new question can be asked to provide further insight on the same topic. Evidence from other scientists and experience are frequently incorporated at any stage in the process. Depending on the complexity of the experiment, many iterations may be required to gather sufficient evidence to answer a question with confidence or to build up many answers to highly specific questions in order to answer a single broader question.
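A chi-squared goodness-of-fit comparison of the kind mentioned above can be sketched with the standard library alone; the coin-flip counts are hypothetical, and 3.841 is the standard 0.05-level critical value for one degree of freedom:

```python
# Minimal chi-squared goodness-of-fit test with hypothetical data.
# H0 (null hypothesis): the coin is fair.
observed = [62, 38]      # hypothetical: 62 heads, 38 tails in 100 flips
expected = [50.0, 50.0]  # counts predicted by the null hypothesis

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

CRITICAL_1DF_05 = 3.841  # 0.05-level critical value, 1 degree of freedom
print(f"chi-squared = {chi2:.2f}")  # 5.76
if chi2 > CRITICAL_1DF_05:
    print("reject H0: the data do not fit a fair coin at the 0.05 level")
```

Here the statistic exceeds the critical value, so the null hypothesis of a fair coin is rejected at the 0.05 level and a new hypothesis is required.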

DNA example

The basic elements of the scientific method are illustrated by the following example from the discovery of the structure of DNA:

  • Question: Previous investigation of DNA had determined its chemical composition (the four nucleotides), the structure of each individual nucleotide, and other properties. X-ray diffraction patterns of DNA by Florence Bell in her Ph.D. thesis (1939) were similar to (although not as good as) "photo 51", but this research was interrupted by the events of World War II. DNA had been identified as the carrier of genetic information by the Avery–MacLeod–McCarty experiment in 1944, but the mechanism of how genetic information was stored in DNA was unclear.
  • Hypothesis: Linus Pauling, Francis Crick and James D. Watson hypothesized that DNA had a helical structure.
  • Prediction: If DNA had a helical structure, its X-ray diffraction pattern would be X-shaped. This prediction was determined using the mathematics of the helix transform, which had been derived by Cochran, Crick and Vand (and independently by Stokes). This prediction was a mathematical construct, completely independent from the biological problem at hand.
  • Experiment: Rosalind Franklin used pure DNA to perform X-ray diffraction to produce photo 51. The results showed an X-shape.
  • Analysis: When Watson saw the detailed diffraction pattern, he immediately recognized it as a helix. He and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing.

The discovery became the starting point for many further studies involving the genetic material, such as the field of molecular genetics, and the discoverers were awarded the Nobel Prize in 1962. Each step of the example is examined in more detail later in the article.

Other components

The scientific method also includes other components required even when all the iterations of the steps above have been completed:

Replication

If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work. Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. 

Typically an experimental group gets the treatment, such as a drug, and the control group gets a placebo. John Ioannidis pointed out in 2005 that the methods being used have led to many findings that cannot be replicated.

External review

The process of peer review involves evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewers, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.

Data recording and sharing

Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others. Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain.

Scientific inquiry

Scientific inquiry generally aims to obtain knowledge in the form of testable explanations that scientists can use to predict the results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those which explain and make accurate predictions in a wide range of circumstances – are often called scientific theories.

Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Subsequent researchers often re-formulate explanations over time, or combine explanations to produce new ones.

Tow sees the scientific method in terms of an evolutionary algorithm applied to science and technology.

Properties of scientific inquiry

Scientific knowledge is closely tied to empirical findings and can remain subject to falsification if new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory can be argued to relate to how long it has persisted without major alteration to its core principles.

Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases independent, unconnected, scientific observations can be connected to each other, unified by principles of increasing explanatory power.

Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors. For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.

Beliefs and biases

The "flying gallop", as depicted in this painting (Théodore Géricault, 1821), has been falsified; see below.
 
Muybridge's photographs of The Horse in Motion, 1878, were used to answer the question of whether all four feet of a galloping horse are ever off the ground at the same time. This demonstrates a use of photography as an experimental tool in science.

Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy.

The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).

A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.

Another important human bias that plays a role is a preference for new, surprising statements (see appeal to novelty), which can result in a search for evidence that the new is true. Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.

Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn". When a narrative is constructed, its elements become easier to believe. For more on the narrative fallacy, see also Fleck 1979, p. 27: "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it." Sometimes these narratives have their elements assumed a priori, or contain some other logical or methodological flaw in the process that produced them. Donald M. MacKay has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.

Elements of the scientific method

There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.

The scientific method is an iterative, cyclical process through which information is continually revised. It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions:

  • Characterizations (observations, definitions, and measurements of the subject of inquiry)
  • Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject)
  • Predictions (inductive and deductive reasoning from the hypothesis or theory)
  • Experiments (tests of all of the above)

Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). The elements above are often taught in the educational system as "the scientific method".

The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work.

A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:

  1. Define a question
  2. Gather information and resources (observe)
  3. Form an explanatory hypothesis
  4. Test the hypothesis by performing an experiment and collecting data in a reproducible manner
  5. Analyze the data
  6. Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
  7. Publish results
  8. Retest (frequently done by other scientists)

The iterative cycle inherent in this step-by-step method runs from point 3 through point 6 and then back to point 3 again.

While this schema outlines a typical hypothesis/testing method, a number of philosophers, historians, and sociologists of science, including Paul Feyerabend, claim that such descriptions of scientific method have little relation to the ways that science is actually practiced.

Characterizations

The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting.

The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.

I am not accustomed to saying anything with certainty after only one or two observations.

— Andreas Vesalius (1546)

Uncertainty

Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
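Both approaches, repeated measurement and propagation of individual uncertainties, can be sketched as follows (the measurement values themselves are hypothetical):

```python
# Estimating uncertainty from repeated measurements, plus simple
# propagation for a sum of independent quantities.
import math
import statistics

# Hypothetical repeated readings of some quantity (e.g. an acceleration).
measurements = [9.78, 9.82, 9.80, 9.79, 9.81]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)      # spread of individual readings
sem = stdev / math.sqrt(len(measurements))  # uncertainty of the mean

print(f"{mean:.3f} +/- {sem:.3f}")  # 9.800 +/- 0.007

# Propagation: for q = a + b with independent uncertainties u_a and u_b,
# the combined uncertainty adds in quadrature.
u_a, u_b = 0.03, 0.04
u_sum = math.hypot(u_a, u_b)  # sqrt(u_a**2 + u_b**2) = 0.05
```

The standard error shrinks as the square root of the number of repetitions, which is why repetition reduces (but never eliminates) the reported uncertainty.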

Definition

Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electric current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France.
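The silver-deposition example can be sketched numerically using the historical "international ampere" convention of roughly 1.118 mg of silver deposited per coulomb; the deposited mass and run duration below are hypothetical:

```python
# Sketch of an operational definition of electric current: the historical
# "international ampere" defined one ampere by the rate of silver
# deposition in a silver voltameter (about 1.11800 mg per coulomb).
SILVER_PER_COULOMB_G = 0.00111800  # grams of silver deposited per coulomb

def current_amperes(silver_mass_g, seconds):
    """Current inferred operationally from deposited silver over a timed run."""
    charge_coulombs = silver_mass_g / SILVER_PER_COULOMB_G
    return charge_coulombs / seconds

# Hypothetical run: 6.708 g of silver deposited over 6000 s.
print(f"{current_amperes(6.708, 6000):.3f} A")  # 1.000 A
```

The point of the sketch is that "current" is defined entirely by the measurement procedure: a mass, a clock, and a conversion constant, with no appeal to an idealized notion of charge flow.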

The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.

New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them.

DNA-characterizations

The history of the discovery of the structure of DNA is a classic example of the elements of the scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.


Another example: precession of Mercury

Precession of the perihelion – exaggerated in the case of Mercury, but observed in the case of S2's apsidal precession around Sagittarius A*

The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to fully record the motion of planet Earth. Newton was able to incorporate those measurements into consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion, as Leverrier pointed out in 1859. The observed difference for Mercury's precession between Newtonian theory and observation was one of the things that occurred to Albert Einstein as a possible early test of his theory of general relativity. His relativistic calculations matched observation much more closely than did Newtonian theory. The difference is approximately 43 arc-seconds per century.
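Einstein's correction can be reproduced with a short calculation. The sketch below (Python; the constants are standard published values for the solar gravitational parameter and Mercury's orbital elements, and the function name is ours) applies the relativistic perihelion-advance formula per orbit and accumulates it over a Julian century:

```python
import math

# Standard physical constants and Mercury's orbital elements (assumed values)
GM_SUN = 1.32712440018e20   # solar gravitational parameter, m^3/s^2
C = 299792458.0             # speed of light, m/s
A = 5.7909e10               # Mercury's semi-major axis, m
E = 0.2056                  # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969        # Mercury's orbital period, days

def gr_perihelion_shift_arcsec_per_century():
    # Relativistic perihelion advance per orbit, in radians:
    #   dphi = 6 * pi * G * M / (c^2 * a * (1 - e^2))
    dphi = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))
    orbits_per_century = 36525.0 / PERIOD_DAYS   # one Julian century
    return dphi * orbits_per_century * (180 / math.pi) * 3600  # to arc-seconds

print(round(gr_perihelion_shift_arcsec_per_century(), 1))  # ≈ 43.0
```

The result matches the unexplained residual in Mercury's observed precession, which is what made the calculation so persuasive as an early test of the theory.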

Hypothesis development

A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena.

Normally hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.

Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles." Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.

William Glen observes that

the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate ... bald suppositions and areas of vagueness.

In general scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that is in accordance with the known facts, but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
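Occam's preference for the simpler of two equally explanatory hypotheses can be made quantitative. In the hedged sketch below (Python with NumPy; the data, degrees, and seed are invented for illustration), both a simple and an overly flexible model fit the same noisy, underlyingly linear data, but only the simpler one predicts held-out points well:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 32)
y = 2 * x + 1 + rng.normal(0, 0.2, x.size)   # underlying law is linear

x_train, y_train = x[::2], y[::2]            # fit on every other point
x_test, y_test = x[1::2], y[1::2]            # judge on the points held out

def holdout_error(degree):
    # Fit a polynomial hypothesis of the given complexity, then score it
    # on data it has never seen.
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

simple, complex_ = holdout_error(1), holdout_error(15)
assert simple < complex_   # the simpler hypothesis generalizes better
```

The degree-15 polynomial reproduces the training points almost exactly, yet oscillates between them; the razor's rule of thumb corresponds to preferring the hypothesis that explains the data without such excess flexibility.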

To minimize the confirmation bias which results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses.

DNA-hypotheses

Linus Pauling proposed that DNA might be a triple helix. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong and that Pauling would soon admit his difficulties with that structure. So, the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race).

Predictions from the hypothesis

Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.

It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
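This point can be stated precisely with Bayes' theorem. In the sketch below (Python; the numerical probabilities are invented for illustration), a risky prediction shifts the probability of the hypothesis when it succeeds, while an outcome that was certain in advance regardless of the hypothesis leaves the probability untouched:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: probability of hypothesis H after observing evidence E."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# A risky prediction: likely if H is true, unlikely otherwise.
print(posterior(0.5, 0.9, 0.3))   # 0.75 — a successful test raises P(H)

# An outcome already known to occur either way carries no information.
print(posterior(0.5, 1.0, 1.0))   # 0.5 — the prior is unchanged
```

Only when the evidence is more probable under the hypothesis than under its alternatives does observing it increase the probability that the hypothesis is true.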

If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, there is no known experiment that can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science.

DNA-predictions

James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'X-shaped'. This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces X-shaped patterns.

In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".

Another example: general relativity

Einstein's theory of general relativity makes several specific predictions about the observable structure of spacetime, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.
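The precision of this prediction is what made the eclipse test decisive: general relativity predicts a deflection for light grazing the Sun of 4GM/(c²R), exactly twice the value obtained from a Newtonian corpuscular treatment. The sketch below (Python; standard published values for the solar parameters) computes both:

```python
import math

GM_SUN = 1.32712440018e20   # solar gravitational parameter, m^3/s^2
C = 299792458.0             # speed of light, m/s
R_SUN = 6.957e8             # solar radius, m (light grazing the limb)

RAD_TO_ARCSEC = (180 / math.pi) * 3600

# Deflection angle for a light ray grazing the solar limb.
gr_deflection = 4 * GM_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC         # general relativity
newtonian_deflection = 2 * GM_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC  # Newtonian value

print(round(gr_deflection, 2))         # ≈ 1.75 arc-seconds
print(round(newtonian_deflection, 2))  # ≈ 0.88 arc-seconds
```

Eddington's 1919 measurements were consistent with the larger, relativistic value, which is why they were read as supporting general relativity rather than Newtonian gravitation.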

Experiments

Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes experiments are conducted incorrectly, or are poorly designed when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
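One of Mill's canons, the method of difference, can be sketched in a few lines. In the hypothetical example below (Python; the trial conditions and outcomes are invented), two trials agree in every controlled condition except one, implicating that condition as the factor behind the differing outcome:

```python
def method_of_difference(trial_a, trial_b):
    """Mill's method of difference: if two trials with different outcomes
    differ in exactly one condition, that condition is implicated."""
    diffs = [k for k in trial_a if trial_a[k] != trial_b[k]]
    return diffs[0] if len(diffs) == 1 else None

# Hypothetical trials: identical except for one controlled condition.
control   = {"temperature": 20, "catalyst": False, "stirred": True}
treatment = {"temperature": 20, "catalyst": True,  "stirred": True}
print(method_of_difference(control, treatment))  # catalyst
```

Holding every other condition fixed is what lets the contrast between the two samples isolate what has changed; if more than one condition varied, no single factor could be singled out.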

Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamical hypotheses used for constructing the plane.

Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. Such records also assist others in reproducing the experimental results. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Jābir ibn Hayyān (721–815 CE), al-Battani (853–929) and Alhazen (965–1039).

DNA-experiments

Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images which showed an X-shape and was able to confirm the structure was helical. This rekindled Watson and Crick's model building and led to the correct structure.

Evaluation and improvement

The scientific method is iterative. At any stage, it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.

Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.

DNA-iterations

After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.

Confirmation

Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.

To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.

Models of scientific inquiry

Classical model

The classical model of scientific inquiry derives from Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.

Hypothetico-deductive model

The hypothetico-deductive model or method is a proposed description of scientific method. Here, predictions from the hypothesis are central: if you assume the hypothesis to be true, what consequences follow?

If subsequent empirical investigation does not demonstrate that these consequences or predictions correspond to the observable world, the hypothesis can be concluded to be false.

Pragmatic model

In 1877, Charles Sanders Peirce (1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, to a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless. He outlined four methods of settling opinion, ordered from least to most successful:

  1. The method of tenacity (policy of sticking to initial belief) – which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory.
  2. The method of authority – which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past.
  3. The method of the a priori – which promotes conformity less brutally but fosters opinions as something like tastes, arising in conversation and comparisons of perspectives in terms of "what is agreeable to reason." Thereby it depends on fashion in paradigms and goes in circles over time. It is more intellectual and respectable but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubt it.
  4. The scientific method – the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself.

Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research, which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. The scientific method excels the others by being deliberately designed to arrive – eventually – at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method.

For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points. In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus, his theory of inquiry boils down to "Do the science." Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of definite increase of knowledge. As inference, "logic is rooted in the social principle" since it depends on a standpoint that is, in a sense, unlimited.

Paying special attention to the generation of explanations, Peirce outlined the scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in §III–IV in "A Neglected Argument" except as otherwise noted):

  1. Abduction (or retroduction). Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multi-generational) toward new truths. Coordinative method leads from abducing a plausible hypothesis to judging it for its testability and for how its trial would economize inquiry itself. Peirce calls his pragmatism "the logic of abduction". His pragmatic maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object". 
His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects – a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as an example the game of Twenty Questions), breadth, and incomplexity. One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the leap, so to speak, of abduction and governs its art.
  2. Deduction. Two stages:
    1. Explication. Unclearly premised, but deductive, analysis of the hypothesis in order to render its parts as clear as possible.
    2. Demonstration: Deductive argumentation, Euclidean in procedure. Explicit deduction of hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, theorematic.
  3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general) that the real is only the object of the final opinion to which adequate investigation would lead; anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages:
    1. Classification. Unclearly premised, but inductive, classing of objects of experience under general ideas.
    2. Probation: direct inductive argumentation. Crude (the enumeration of instances) or gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual induction is qualitative or quantitative; if qualitative, then dependent on weightings of qualities or characters; if quantitative, then dependent on measurements, or on statistics, or on countings.
    3. Sentential Induction. "... which, by inductive reasonings, appraises the different probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".
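Peirce's "gradual" quantitative induction can be sketched as a running re-estimate. In the toy example below (Python; the sequence of test outcomes is invented for illustration), the estimated proportion of truth in the hypothesis is revised after each new test, as the gradual mode of probation prescribes:

```python
def gradual_induction(outcomes):
    """Re-estimate the proportion of successful tests after each new result,
    a simple quantitative reading of Peirce's gradual induction."""
    estimates, successes = [], 0
    for n, ok in enumerate(outcomes, start=1):
        successes += ok
        estimates.append(round(successes / n, 3))
    return estimates

# Each entry is the revised estimate after one more test of the hypothesis.
print(gradual_induction([1, 1, 0, 1, 1]))  # [1.0, 1.0, 0.667, 0.75, 0.8]
```

Persisted in long enough, such a procedure drives the estimation error below any predesignated degree, which is the sense in which Peirce held induction to be self-correcting.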

Invariant explanation

Model of DNA with David Deutsch, proponent of invariant scientific explanations (2009)

In a 2009 TED talk, Deutsch expounded a criterion for scientific explanation, which is to formulate invariants: "State an explanation [publicly, so that it can be dated and verified by others later] that remains invariant [in the face of apparent change, new information, or unexpected conditions]".

"A bad explanation is easy to vary.":minute 11:22
"The search for hard-to-vary explanations is the origin of all progress":minute 15:05
"That the truth consists of hard-to-vary assertions about reality is the most important fact about the physical world.":minute 16:15

Invariance as a fundamental aspect of a scientific account of reality had long been part of philosophy of science: for example, Friedel Weinert's book The Scientist as Philosopher (2004) noted the presence of the theme in many writings from the turn of the 20th century onward, such as works by Henri Poincaré (1902), Ernst Cassirer (1920), Max Born (1949 and 1953), Paul Dirac (1958), Olivier Costa de Beauregard (1966), Eugene Wigner (1967), Lawrence Sklar (1974), Michael Friedman (1983), John D. Norton (1992), Nicholas Maxwell (1993), Alan Cook (1994), Alistair Cameron Crombie (1994), Margaret Morrison (1995), Richard Feynman (1997), Robert Nozick (2001), and Tim Maudlin (2002).

Science of complex systems

Science applied to complex systems can involve elements such as transdisciplinarity, systems theory and scientific modelling. The Santa Fe Institute studies such systems; Murray Gell-Mann interconnects these topics with message passing.

In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method.

Communication and community

Frequently the scientific method is employed not only by a single person but also by several people cooperating directly or indirectly. Such cooperation can be regarded as an important element of a scientific community. Various standards of scientific methodology are used within such an environment.

Peer review evaluation

Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three, and usually anonymous) fellow scientists familiar with the field for evaluation. In certain journals, the journal itself selects the referees, while in others (especially journals that are extremely specialized), the manuscript author might recommend referees. The referees may or may not recommend publication, or they might recommend publication with suggested modifications, or sometimes, publication in another journal. This standard is practiced to various degrees by different journals, and can have the effect of keeping the literature free of obvious errors and of generally improving the quality of the material, especially in the journals that apply the standard most rigorously. The peer-review process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.

Documentation and replication

Sometimes experimenters may make systematic errors during their experiments, veer from standard methods and practices (pathological science) for various reasons, or, in rare cases, deliberately report false results. Because of such risks, other scientists might attempt to repeat the experiments in order to duplicate the results.

Archiving

Researchers sometimes practice scientific data archiving, such as in compliance with the policies of government funding agencies and scientific journals. In these cases, detailed records of their experimental procedures, raw data, statistical analyses and source code can be preserved in order to provide evidence of the methodology and practice of the procedure and assist in any potential future attempts to reproduce the result. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery.

Data sharing

When additional information is needed before a study can be reproduced, the author of the study might be asked to provide it. The author might provide it; if the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research.

Limitations

Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being 'theory-laden'.

Philosophy and sociology of science

Analytical philosophy

Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist, that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form a basis on which science may be grounded. Logical Positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.

Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.

Norwood Russell Hanson, Imre Lakatos and Thomas Kuhn have done extensive work on the "theory-laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description. He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn and Feyerabend acknowledge the pioneering significance of his work.

Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".

Post-modernism and science wars

Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be 'anything goes'. Criticisms such as his led to the strong programme, a radical approach to the sociology of science.

The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.

Anthropology and sociology

In anthropology and sociology, following the field research in an academic scientific laboratory by Latour and Woolgar, Karin Knorr Cetina conducted a comparative study of two scientific fields (namely high energy physics and molecular biology) and concluded that the epistemic practices and reasoning within the two scientific communities differ enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept.

Role of chance in discovery

Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.

Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.

Relationship with mathematics

Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proven; at such a stage, that statement would be called a conjecture. But when a statement has attained mathematical proof, that statement gains a kind of immortality which is highly prized by mathematicians, and to which some mathematicians devote their lives.

Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, while timelessness was long a hallmark of mathematics. But today the Poincaré conjecture has been proven using time as a mathematical concept in which objects can flow (see Ricci flow).

Nevertheless, the connection between mathematics and reality (and so science, to the extent it describes reality) remains obscure. Eugene Wigner's paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, is a well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.

George Pólya's work on problem solving, the construction of mathematical proofs, and heuristics shows that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in their use of iterative or recursive steps.


    Mathematical method   Scientific method
  1 Understanding         Characterization from experience and observation
  2 Analysis              Hypothesis: a proposed explanation
  3 Synthesis             Deduction: prediction from the hypothesis
  4 Review/Extend         Test and experiment

In Pólya's view, understanding involves restating unfamiliar definitions in one's own words, resorting to geometrical figures, and questioning what is and is not already known; analysis, which Pólya takes from Pappus, involves the free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of the step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.

Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).

Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work. In like manner to science, where truth is sought but certainty is not found, what Lakatos tried to establish in Proofs and Refutations (1976) was that no theorem of informal mathematics is final or perfect. This means that we should not think a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. In this continuous way our knowledge accumulates, through the logic and process of proofs and refutations. (If axioms are given for a branch of mathematics, however, Lakatos claimed that proofs from those axioms are tautological, i.e. logically true by rewriting, as Poincaré also held (Proofs and Refutations, 1976).)

Lakatos proposed an account of mathematical knowledge based on Pólya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.

Relationship with statistics

When the scientific method employs statistics as part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of its output. This is described in a widely cited 2005 paper, "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience. Much research in metascience seeks to identify poor use of statistics and to improve it.

The particular points raised are statistical ("The smaller the studies conducted in a scientific field, the less likely the research findings are to be true" and "The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true") and economic ("The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true" and "The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true"). Hence: "Most research findings are false for most research designs and for most fields" and "As shown, the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings." However: "Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds," which means that new discoveries will come from research that had low or very low odds of succeeding when it began. Hence, if the scientific method is used to expand the frontiers of knowledge, research into areas outside the mainstream will yield most new discoveries.
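The statistical core of Ioannidis's argument can be sketched with the positive predictive value (PPV) formula from his paper: given pre-study odds R that a tested relationship is true, significance level α, and power 1 − β, the probability that a claimed finding is actually true (ignoring bias) is PPV = (1 − β)R / (R + α − βR). A minimal sketch, with illustrative numbers assumed here rather than taken from the paper:

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value: probability that a claimed finding is true.

    R     -- pre-study odds that the tested relationship is true
    alpha -- type I error rate (significance threshold)
    power -- 1 - beta, probability of detecting a true effect
    """
    beta = 1.0 - power
    return ((1 - beta) * R) / (R + alpha - beta * R)

# Exploratory research with very low pre-study odds: most "findings" are false.
print(round(ppv(R=0.01), 3))
# Confirmatory research with even pre-study odds fares far better.
print(round(ppv(R=1.0), 3))
```

With R = 0.01 the PPV is only about 0.14, i.e. most claimed findings in such a field would be false, matching the paper's headline claim; with R = 1 the PPV exceeds 0.9.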

Media bias


Media bias is the bias of journalists and news producers within the mass media in the selection of the events and stories that are reported and how they are covered. The term "media bias" implies a pervasive or widespread bias contravening the standards of journalism, rather than the perspective of an individual journalist or article. The direction and degree of media bias in various countries is widely disputed.

Practical limitations to media neutrality include the inability of journalists to report all available stories and facts, and the requirement that selected facts be linked into a coherent narrative. Government influence, including overt and covert censorship, biases the media in some countries, for example China, North Korea and Myanmar. Market forces that result in a biased presentation include the ownership of the news source, concentration of media ownership, the subjective selection of staff, or the preferences of an intended audience.

There are a number of national and international watchdog groups that report on bias of the media.

Types

The most commonly discussed types of bias occur when the (allegedly partisan) media support or attack a particular political party, candidate, or ideology.

D'Alessio and Allen list three forms of media bias as the most widely studied:

  • Coverage bias (also known as visibility bias), when actors or issues are more or less visible in the news.
  • Gatekeeping bias (also known as selectivity or selection bias), when stories are selected or deselected, sometimes on ideological grounds. It is sometimes also referred to as agenda bias, when the focus is on political actors and whether they are covered based on their preferred policy issues.
  • Statement bias (also known as tonality bias or presentation bias), when media coverage is slanted towards or against particular actors or issues.

Other common forms of political and non-political media bias include:

  • Advertising bias, when stories are selected or slanted to please advertisers.
  • Concision bias, a tendency to report views that can be summarized succinctly, crowding out more unconventional views that take time to explain.
  • Corporate bias, when stories are selected or slanted to please corporate owners of media.
  • Mainstream bias, a tendency to report what everyone else is reporting, and to avoid stories that will offend anyone.
  • Partisan bias, a tendency to slant reporting to serve a particular political party or leaning.
  • Sensationalism, bias in favor of the exceptional over the ordinary, giving the impression that rare events, such as airplane crashes, are more common than common events, such as automobile crashes.
  • Structural bias, when an actor or issue receives more or less favorable coverage as a result of newsworthiness and media routines, not as the result of ideological decisions (e.g. incumbency bonus).
  • False balance, when an issue is presented as more evenly balanced between opposing viewpoints than the evidence supports.
  • Undue weight, when a story is given much greater significance or portent than a neutral journalist or editor would give.
  • Speculative content, when stories focus not on what has occurred, but primarily on what might occur, using words like "could," "might," or "what if," without labeling the article as analysis or opinion.
  • False timeliness, implying that an event is new (and thus notable) without addressing past events of the same kind.
  • Ventriloquism, when experts or witnesses are quoted in a way that intentionally voices the author's own opinion.

Other forms of bias include reporting that favors or attacks a particular race, religion, gender, age, sexual orientation, ethnic group, or even an individual person.

History

Political bias has been a feature of the mass media since its birth with the invention of the printing press. The expense of early printing equipment restricted media production to a limited number of people. Historians have found that publishers often served the interests of powerful social groups.

John Milton's pamphlet Areopagitica, a Speech for the Liberty of Unlicensed Printing, published in 1644, was one of the first publications advocating freedom of the press.

In the 19th century, journalists began to recognize the concept of unbiased reporting as an integral part of journalistic ethics. This coincided with the rise of journalism as a powerful social force. Even today, though, the most conscientiously objective journalists cannot avoid accusations of bias.

Like newspapers, the broadcast media (radio and television) have been used as a mechanism for propaganda from their earliest days, a tendency made more pronounced by the initial ownership of broadcast spectrum by national governments. Although a process of media deregulation has placed the majority of the western broadcast media in private hands, there still exists a strong government presence, or even monopoly, in the broadcast media of many countries across the globe. At the same time, the concentration of media ownership in private hands, and frequently amongst a comparatively small number of individuals, has also led to accusations of media bias.

There are many examples of accusations of bias being used as a political tool, sometimes resulting in government censorship.

  • In the United States, in 1798, Congress passed the Alien and Sedition Acts, which prohibited newspapers from publishing "false, scandalous, or malicious writing" against the government, including any public opposition to any law or presidential act. This act was in effect until 1801.
  • During the American Civil War, President Abraham Lincoln accused newspapers in the border states of bias in favor of the Southern cause, and ordered many newspapers closed.
  • Anti-Semitic politicians who favored the United States entering World War II on the Nazi side asserted that the international media were controlled by Jews, and that reports of German mistreatment of Jews were biased and without foundation. Hollywood was accused of Jewish bias, and films such as Charlie Chaplin’s The Great Dictator were offered as alleged proof.
  • In the US during the labor union movement and the civil rights movement, newspapers supporting liberal social reform were accused by conservative newspapers of communist bias. Film and television media were accused of bias in favor of mixing of the races, and many television programs with racially mixed casts, such as I Spy and Star Trek, were not aired on Southern stations.
  • During the war between the United States and North Vietnam, Vice President Spiro Agnew accused newspapers of anti-American bias, and in a famous speech delivered in San Diego in 1970, called anti-war protesters "the nattering nabobs of negativism."

Not all accusations of bias are political. Science writer Martin Gardner has accused the entertainment media of anti-science bias. He claims that television programs such as The X-Files promote superstition. In contrast, the Competitive Enterprise Institute, which is funded by businesses, accuses the media of being biased in favor of science and against business interests, and of credulously reporting science that shows that greenhouse gases cause global warming.

Confirmation bias

A major problem in studies is confirmation bias. Research into studies of media bias in the United States shows that liberal experimenters tend to get results that say the media has a conservative bias, while conservative experimenters tend to get results that say the media has a liberal bias, and those who do not identify themselves as either liberal or conservative get results indicating little bias, or mixed bias.

The study "A Measure of Media Bias", by political scientist Timothy J. Groseclose of UCLA and economist Jeffrey D. Milyo of the University of Missouri-Columbia, purports to rank news organizations in terms of identifying with liberal or conservative values relative to each other. They used the Americans for Democratic Action (ADA) scores as a quantitative proxy for political leanings of the referential organizations. Thus their definition of "liberal" includes the RAND Corporation, a nonprofit research organization with strong ties to the Defense Department. Their work claims to detect a bias towards liberalism in the American media.

United States political bias

Media bias in the United States occurs when the media in the United States systematically emphasizes one particular point of view in a manner that contravenes the standards of professional journalism. Claims of media bias in the United States include claims of liberal bias, conservative bias, mainstream bias, corporate bias and activist/cause bias. To combat this, a variety of watchdog groups that attempt to find the facts behind both biased reporting and unfounded claims of bias have been founded.

Research about media bias is now a subject of systematic scholarship in a variety of disciplines.

Scholarly treatment in the United States and United Kingdom

Media bias is studied at schools of journalism, university departments (including Media studies, Cultural studies and Peace studies) and by independent watchdog groups from various parts of the political spectrum. In the United States, many of these studies focus on issues of a conservative/liberal balance in the media. Other focuses include international differences in reporting, as well as bias in reporting of particular issues such as economic class or environmental interests. Currently, most of these analyses are performed manually, requiring exacting and time-consuming effort. However, an interdisciplinary literature review from 2018 found that automated methods, mostly from computer science and computational linguistics, are available or could with comparably low effort be adapted for the analysis of the various forms of media bias. Employing or adapting such techniques would help to further automate the analyses in the social sciences, such as content analysis and frame analysis.

Martin Harrison's TV News: Whose Bias? (1985) criticized the methodology of the Glasgow Media Group, arguing that the GMG identified bias selectively, via their own preconceptions about what phrases qualify as biased descriptions. For example, the GMG treated the word "idle", used to describe striking workers, as pejorative, even though the word was used by strikers themselves.

Herman and Chomsky (1988) proposed a propaganda model hypothesizing systematic biases of U.S. media from structural economic causes. They hypothesize media ownership by corporations, funding from advertising, the use of official sources, efforts to discredit independent media ("flak"), and "anti-communist" ideology as the filters that bias news in favor of U.S. corporate interests.

Many of the positions in the preceding study are supported by a 2002 study by Jim A. Kuypers: Press Bias and Politics: How the Media Frame Controversial Issues. In this study of 116 mainstream US papers, including The New York Times, the Washington Post, the Los Angeles Times, and the San Francisco Chronicle, Kuypers found that the mainstream print press in America operates within a narrow range of liberal beliefs. Those who expressed points of view further to the left were generally ignored, whereas those who expressed moderate or conservative points of view were often actively denigrated or labeled as holding a minority point of view. In short, if a political leader, regardless of party, spoke within the press-supported range of acceptable discourse, he or she would receive positive press coverage. If a politician, again regardless of party, were to speak outside of this range, he or she would receive negative press or be ignored. Kuypers also found that liberal points of view expressed on editorial and opinion pages carried over into hard news coverage of the same issues. Although focusing primarily on the issues of race and homosexuality, Kuypers found that the press injected opinion into its news coverage of other issues such as welfare reform, environmental protection, and gun control, in all cases favoring a liberal point of view.

Henry Silverman (2011) of Roosevelt University analyzed a sample of fifty news-oriented articles on the Middle East conflict published on the Reuters.com websites for the use of classic propaganda techniques, logical fallacies and violations of the Reuters Handbook of Journalism, a manual of guiding ethical principles for the company's journalists. Across the articles, over 1,100 occurrences of propaganda, fallacies and handbook violations in 41 categories were identified and classified. In the second part of the study, a group of thirty-three university students were surveyed, before and after reading the articles, to assess their attitudes and motivation to support one or the other belligerent parties in the Middle East conflict, i.e., the Palestinians/Arabs or the Israelis. The study found that on average, subject sentiment shifted significantly following the readings in favor of the Arabs and that this shift was associated with particular propaganda techniques and logical fallacies appearing in the stories. Silverman inferred from the evidence that Reuters engages in systematically biased storytelling in favor of the Arabs/Palestinians and is able to influence audience affective behavior and motivate direct action along the same trajectory.

Studies reporting perceptions of bias in the media are not limited to studies of print media. A joint study by the Joan Shorenstein Center on Press, Politics and Public Policy at Harvard University and the Project for Excellence in Journalism found that people see media bias in television news media such as CNN. Although both CNN and Fox were perceived in the study as not being centrist, CNN was perceived as being more liberal than Fox. Moreover, the study's findings concerning CNN's perceived bias are echoed in other studies. There is also a growing economics literature on mass media bias, both on the theoretical and the empirical side. On the theoretical side the focus is on understanding to what extent the political positioning of mass media outlets is mainly driven by demand or supply factors. This literature is surveyed by Andrea Prat of Columbia University and David Stromberg of Stockholm University.

According to Dan Sutter of the University of Oklahoma, a systematic liberal bias in the U.S. media could depend on the fact that owners and/or journalists typically lean to the left.

Along the same lines, David Baron of Stanford GSB presents a game-theoretic model of mass media behaviour in which, given that the pool of journalists systematically leans towards the left or the right, mass media outlets maximise their profits by providing content that is biased in the same direction. They can do so because it is cheaper to hire journalists who write stories that are consistent with their political position. A concurrent theory is that supply and demand should drive the media toward a neutral balance, because consumers would naturally gravitate towards media they agreed with. This argument fails to consider the imbalance in journalists' self-reported political allegiances, which distorts any market analogy with respect to supply: "Indeed, in 1982, 85 percent of Columbia Graduate School of Journalism students identified themselves as liberal, versus 11 percent conservative" (Lichter, Rothman, and Lichter 1986: 48), quoted in Sutter, 2001.

By the same argument, a more balanced outlet could increase its profits by far more than the slight added cost of hiring unbiased journalists, notwithstanding the extreme rarity of self-reported conservative journalists (Sutter, 2001).

As mentioned above, Tim Groseclose of UCLA and Jeff Milyo of the University of Missouri at Columbia use think tank quotes to estimate the relative position of mass media outlets in the political spectrum. The idea is to trace out which think tanks are quoted by various mass media outlets within news stories, and to match these think tanks with the political position of members of the U.S. Congress who quote them in a non-negative way. Using this procedure, Groseclose and Milyo obtain the stark result that all sampled news providers (except Fox News' Special Report and the Washington Times) are located to the left of the average member of Congress, i.e. there are signs of a liberal bias in the US news media.
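The scoring idea behind this procedure can be sketched in miniature. The think tank names, ADA-style scores, and quote counts below are all invented for illustration; the actual Groseclose–Milyo study uses a far more elaborate statistical model.

```python
# Toy illustration (invented data): score an outlet by the average ADA-style
# score of legislators who favorably cite the same think tanks the outlet quotes.

# Hypothetical scores (0 = most conservative, 100 = most liberal) of
# legislators who cite each think tank in a non-negative way.
ada_by_think_tank = {
    "ThinkTankA": [85, 90, 78],   # cited mostly by liberal legislators
    "ThinkTankB": [20, 15, 30],   # cited mostly by conservative legislators
}

def outlet_score(quote_counts):
    """Quote-weighted average legislator score over an outlet's cited think tanks."""
    total, weight = 0.0, 0
    for tank, n in quote_counts.items():
        avg = sum(ada_by_think_tank[tank]) / len(ada_by_think_tank[tank])
        total += n * avg
        weight += n
    return total / weight

# An outlet quoting ThinkTankA 8 times and ThinkTankB twice lands
# closer to the liberal end of the scale.
print(round(outlet_score({"ThinkTankA": 8, "ThinkTankB": 2}), 1))
```

The outlet's position can then be compared with the score of the average legislator to decide which side of the spectrum it falls on, which is the essence of the comparison the study reports.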

The methods Groseclose and Milyo used to calculate this bias have been criticized by Mark Liberman, a professor of Linguistics at the University of Pennsylvania. Liberman concludes by saying he thinks "that many if not most of the complaints directed against G&M are motivated in part by ideological disagreement – just as much of the praise for their work is motivated by ideological agreement. It would be nice if there were a less politically fraught body of data on which such modeling exercises could be explored."

Sendhil Mullainathan and Andrei Shleifer of Harvard University construct a behavioural model, which is built around the assumption that readers and viewers hold beliefs that they would like to see confirmed by news providers. When news customers share common beliefs, profit-maximizing media outlets find it optimal to select and/or frame stories in order to pander to those beliefs. On the other hand, when beliefs are heterogeneous, news providers differentiate their offer and segment the market, by providing news stories that are slanted towards the two extreme positions in the spectrum of beliefs.

Matthew Gentzkow and Jesse Shapiro of Chicago GSB present another demand-driven theory of mass media bias. If readers and viewers have a priori views on the current state of affairs and are uncertain about the quality of the information about it being provided by media outlets, then the latter have an incentive to slant stories towards their customers' prior beliefs, in order to build and keep a reputation for high-quality journalism. The reason for this is that rational agents would tend to believe that pieces of information that go against their prior beliefs in fact originate from low-quality news providers.
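The updating logic in this reputation story can be illustrated with a toy Bayesian calculation. All numbers below are assumptions for illustration, not parameters from the Gentzkow–Shapiro model: a consumer thinks state "L" is likely, a high-quality outlet reports the true state with high probability, and a low-quality outlet reports at random.

```python
# Toy Bayesian sketch (invented numbers): a report that confirms the consumer's
# prior raises the outlet's perceived quality; a contradicting report lowers it.

def posterior_quality(report, p_state_l=0.8, p_hq=0.5, hq_acc=0.9):
    """P(outlet is high quality | its report), for a binary state {"L", "R"}.

    p_state_l -- consumer's prior that the true state is "L"
    p_hq      -- prior probability that the outlet is high quality
    hq_acc    -- probability a high-quality outlet reports the true state
                 (a low-quality outlet is assumed to report at random)
    """
    p_state = {"L": p_state_l, "R": 1 - p_state_l}
    # Likelihood of this report coming from a high-quality outlet.
    like_hq = p_state[report] * hq_acc + (1 - p_state[report]) * (1 - hq_acc)
    like_lq = 0.5  # low-quality outlet: uninformative report
    return p_hq * like_hq / (p_hq * like_hq + (1 - p_hq) * like_lq)

print(round(posterior_quality("L"), 3))  # report confirms the prior
print(round(posterior_quality("R"), 3))  # report contradicts the prior
```

The confirming report leaves the outlet looking more reliable than the contradicting one, which is exactly the incentive to slant toward customers' priors that the model describes.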

Given that different groups in society have different beliefs, priorities, and interests, to which group would the media tailor its bias? David Stromberg constructs a demand-driven model where media bias arises because different audiences have different effects on media profits. Advertisers pay more for affluent audiences and media may tailor content to attract this audience, perhaps producing a right-wing bias. On the other hand, urban audiences are more profitable to newspapers because of lower delivery costs. Newspapers may for this reason tailor their content to attract the profitable predominantly liberal urban audiences. Finally, because of the increasing returns to scale in news production, small groups such as minorities are less profitable. This biases media content against the interest of minorities.

Steve Ansolabehere, Rebecca Lessem and Jim Snyder of the Massachusetts Institute of Technology analyze the political orientation of endorsements by U.S. newspapers. They find an upward trend in the average propensity to endorse a candidate, and in particular an incumbent one. There are also some changes in the average ideological slant of endorsements: while in the 1940s and in the 1950s there was a clear advantage to Republican candidates, this advantage continuously eroded in subsequent decades, to the extent that in the 1990s the authors find a slight Democratic lead in the average endorsement choice.

John Lott and Kevin Hassett of the American Enterprise Institute study the coverage of economic news by looking at a panel of 389 U.S. newspapers from 1991 to 2004, and from 1985 to 2004 for a subsample comprising the top 10 newspapers and the Associated Press. For each release of official data about a set of economic indicators, the authors analyze how newspapers decide to report on them, as reflected by the tone of the related headlines. The idea is to check whether newspapers display some kind of partisan bias, by giving more positive or negative coverage to the same economic figure, as a function of the political affiliation of the incumbent president. Controlling for the economic data being released, the authors find that there are between 9.6 and 14.7 percent fewer positive stories when the incumbent president is a Republican.

Riccardo Puglisi of the Massachusetts Institute of Technology looks at the editorial choices of the New York Times from 1946 to 1997. He finds that the Times displays Democratic partisanship, with some watchdog aspects: during presidential campaigns the Times systematically gives more coverage to the Democratic topics of civil rights, health care, labor and social welfare, but only when the incumbent president is a Republican. These topics are classified as Democratic ones because Gallup polls show that on average U.S. citizens think Democratic candidates would be better at handling the problems related to them. According to Puglisi, in the post-1960 period the Times displays a more symmetric type of watchdog behaviour, because during presidential campaigns it also gives more coverage to the typically Republican issue of defense when the incumbent president is a Democrat, and less so when the incumbent is a Republican.

Alan Gerber and Dean Karlan of Yale University use an experimental approach to examine not whether the media are biased, but whether the media influence political decisions and attitudes. They conduct a randomized controlled trial just prior to the November 2005 gubernatorial election in Virginia and randomly assign individuals in Northern Virginia to (a) a treatment group that receives a free subscription to the Washington Post, (b) a treatment group that receives a free subscription to the Washington Times, or (c) a control group. They find that those who are assigned to the Washington Post treatment group are eight percentage points more likely to vote for the Democrat in the elections. The report also found that "exposure to either newspaper was weakly linked to a movement away from the Bush administration and Republicans."

A self-described "progressive" media watchdog group, Fairness and Accuracy in Reporting (FAIR), in consultation with the Survey and Evaluation Research Laboratory at Virginia Commonwealth University, sponsored a 1998 survey in which 141 Washington bureau chiefs and Washington-based journalists were asked a range of questions about how they did their work and about how they viewed the quality of media coverage in the broad area of politics and economic policy. "They were asked for their opinions and views about a range of recent policy issues and debates. Finally, they were asked for demographic and identifying information, including their political orientation". The responses were then compared with those of the public on the same or similar questions in Gallup and Pew Trust polls. The study concluded that a majority of journalists, although relatively liberal on social policies, were significantly to the right of the public on economic, labor, health care and foreign policy issues.

This study continues: "we learn much more about the political orientation of news content by looking at sourcing patterns rather than journalists' personal views. As this survey shows, it is government officials and business representatives to whom journalists 'nearly always' turn when covering economic policy. Labor representatives and consumer advocates were at the bottom of the list. This is consistent with earlier research on sources. For example, analysts from the non-partisan Brookings Institution and from conservative think tanks such as the Heritage Foundation and the American Enterprise Institute are those most quoted in mainstream news accounts."

In direct contrast to the FAIR survey, in 2014, media communication researcher Jim A. Kuypers published a 40-year longitudinal, aggregate study of the political beliefs and actions of American journalists. In every category (social, economic, unions, health care, and foreign policy), he found that nationwide, print and broadcast journalists and editors as a group were "considerably" to the political left of the majority of Americans, and that these political beliefs found their way into news stories. Kuypers concluded, "Do the political proclivities of journalists influence their interpretation of the news? I answer that with a resounding, yes. As part of my evidence, I consider testimony from journalists themselves. ... [A] solid majority of journalists do allow their political ideology to influence their reporting."

Jonathan M. Ladd, who has conducted intensive studies of media trust and media bias, concluded that the primary cause of belief in media bias is media telling their audience that particular media are biased. People who are told that a medium is biased tend to believe that it is biased, and this belief is unrelated to whether that medium is actually biased or not. The only other factor with as strong an influence on belief that media is biased is extensive coverage of celebrities. A majority of people see such media as biased, while at the same time preferring media with extensive coverage of celebrities.

Starting in 2017, the Knight Foundation and Gallup conducted research to try to understand the effect of reader bias on the reader's perception of news source bias. They created the NewsLens site to present news from a variety of sources without labeling where the article came from. Their research showed that those with more extreme political views tend to provide more biased ratings of news. NewsLens became generally available in 2020, with the goals of expanding on the research and helping the US public to read and share news with less bias. However, as of January 2021, the platform was closed.

Efforts to correct bias

A technique used to avoid bias is the "point/counterpoint" or "round table", an adversarial format in which representatives of opposing views comment on an issue. This approach theoretically allows diverse views to appear in the media. However, the person organizing the report still has the responsibility to choose people who really represent the breadth of opinion, to ask them non-prejudicial questions, and to edit or arbitrate their comments fairly. When done carelessly, a point/counterpoint can be as unfair as a simple biased report, by suggesting that the "losing" side lost on its merits. Ongoing research into the impact of such techniques for presenting opposing viewpoints has so far shown mixed effects.

Using this format can also lead to accusations that the reporter has created a misleading appearance that viewpoints have equal validity (sometimes called "false balance"). This may happen when a taboo exists around one of the viewpoints, or when one of the representatives habitually makes claims that are easily shown to be inaccurate.

One such allegation of misleading balance came from Mark Halperin, political director of ABC News. He stated in an internal e-mail message that reporters should not "artificially hold George W. Bush and John Kerry 'equally' accountable" to the public interest, and that complaints from Bush supporters were an attempt to "get away with ... renewed efforts to win the election by destroying Senator Kerry." When the conservative web site the Drudge Report published this message, many Bush supporters viewed it as "smoking gun" evidence that Halperin was using ABC to propagandize against Bush to Kerry's benefit, by interfering with reporters' attempts to avoid bias. An academic content analysis of election news later found that coverage at ABC, CBS, and NBC was more favorable toward Kerry than Bush, while coverage at Fox News Channel was more favorable toward Bush.

Scott Norvell, the London bureau chief for Fox News, stated in a May 20, 2005 interview with the Wall Street Journal that:

"Even we at Fox News manage to get some lefties on the air occasionally, and often let them finish their sentences before we club them to death and feed the scraps to Karl Rove and Bill O'Reilly. And those who hate us can take solace in the fact that they aren't subsidizing Bill's bombast; we payers of the BBC license fee don't enjoy that peace of mind.
Fox News is, after all, a private channel and our presenters are quite open about where they stand on particular stories. That's our appeal. People watch us because they know what they are getting. The Beeb's (the British Broadcasting Corporation's) institutionalized leftism would be easier to tolerate if the corporation was a little more honest about it."

Another technique used to avoid bias is disclosure of affiliations that may constitute a conflict of interest. This is especially apparent when a news organization reports a story with some relevance to the organization itself, its owners, or its parent conglomerate. Often this disclosure is mandated by laws or regulations pertaining to stocks and securities. Commentators on news stories involving stocks are often required to disclose any ownership interest in those corporations or in their competitors.

In rare cases, a news organization may dismiss or reassign staff members who appear biased. This approach was used in the Killian documents affair and after Peter Arnett's interview with the Iraqi press. This approach is presumed to have been employed in the case of Dan Rather over a story that he ran on 60 Minutes in the month prior to the 2004 election that attempted to impugn the military record of George W. Bush by relying on allegedly fake documents that were provided by Bill Burkett, a retired Lieutenant Colonel in the Texas Army National Guard.

Finally, some countries have laws enforcing balance in state-owned media. Since 1991, the CBC and Radio-Canada, its French-language counterpart, have been governed by the Broadcasting Act. This act states, among other things:

...the programming provided by the Canadian broadcasting system should:

  • (i) be varied and comprehensive, providing a balance of information, enlightenment and entertainment for men, women and children of all ages, interests and tastes,

(...)

  • (iv) provide a reasonable opportunity for the public to be exposed to the expression of differing views on matters of public concern

Besides these manual approaches, several (semi-)automated approaches have been developed by social scientists and computer scientists. These approaches identify differences in news coverage that potentially result from media bias by analyzing text and metadata, such as author and publishing date. For instance, NewsCube is a news aggregator that extracts key phrases that describe a topic differently. Other approaches make use of both text and metadata; for example, matrix-based news aggregation spans a matrix over two dimensions, such as publisher countries (in which articles have been published) and mentioned countries (on which country an article reports). As a result, each cell contains only articles that have been published in one country and that report on another country. Particularly for international news topics, matrix-based news aggregation helps to reveal differences in media coverage between the countries involved. Attempts have also been made to use machine learning to analyze the bias of text.
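The matrix-based aggregation idea can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the code of any published system; the field names (`publisher`, `mentions`) and the sample articles are invented for the example.

```python
from collections import defaultdict

# Hypothetical sample articles: "publisher" is the country where the
# article was published, "mentions" the country it reports on.
articles = [
    {"title": "Trade talks stall", "publisher": "US", "mentions": "CN"},
    {"title": "Tariffs announced", "publisher": "CN", "mentions": "US"},
    {"title": "Summit in Beijing", "publisher": "FR", "mentions": "CN"},
    {"title": "Election results",  "publisher": "US", "mentions": "US"},
]

def build_matrix(articles):
    """Span a matrix over (publisher country, mentioned country).

    Each cell collects only the articles published in one country
    that report on another: reading across a row shows how one
    country's press covers the rest of the world, and reading down
    a column shows how the world covers one country.
    """
    matrix = defaultdict(list)
    for article in articles:
        cell = (article["publisher"], article["mentions"])
        matrix[cell].append(article["title"])
    return dict(matrix)

matrix = build_matrix(articles)
# Articles published in the US that report on China:
print(matrix[("US", "CN")])  # ['Trade talks stall']
```

Comparing the contents of cells such as `("US", "CN")` and `("CN", "US")` is what lets such a system surface coverage differences between the countries involved.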

National and ethnic viewpoint

Many news organizations reflect, or are perceived to reflect in some way, the viewpoint of the geographic, ethnic, and national population that they primarily serve. Media within countries are sometimes seen as being sycophantic or unquestioning about the country's government.

Western media are often criticized in the rest of the world (including eastern Europe, Asia, Africa, and the Middle East) as being pro-Western with regard to a variety of political, cultural and economic issues. Al Jazeera is frequently criticized both in the West and in the Arab world.

The Israeli–Palestinian conflict and wider Arab–Israeli issues are a particularly controversial area, and nearly all coverage of any kind generates accusation of bias from one or both sides. This topic is covered in a separate article.

Anglophone bias in the world media

It has been observed that the world's principal suppliers of news, the news agencies, and the main buyers of news are Anglophone corporations and this gives an Anglophone bias to the selection and depiction of events. Anglophone definitions of what constitutes news are paramount; the news provided originates in Anglophone capitals and responds first to their own rich domestic markets.

Despite the plethora of news services, most news printed and broadcast throughout the world each day comes from only a few major agencies, the three largest of which are the Associated Press, Reuters and Agence France-Presse. Although these agencies are 'global' in the sense of their activities, they each retain significant associations with particular nations, namely the United States (AP), the United Kingdom (Reuters) and France (AFP). Chambers and Tinckell suggest that the so-called global media are agents of Anglophone values which privilege norms of 'competitive individualism, laissez-faire capitalism, parliamentary democracy and consumerism.' They see the presentation of the English language as international as a further feature of Anglophone dominance.

Religious bias

The media are often accused of bias favoring a particular religion or of bias against a particular religion. In some countries, only reporting approved by a state religion is permitted. In other countries, derogatory statements about any belief system are considered hate crimes and are illegal.

According to the Encyclopedia of Social Work (19th edition), the news media play an influential role in the general public's perception of cults. As reported in several studies, the media have depicted cults as problematic, controversial, and threatening from the beginning, tending to favor sensationalistic stories over balanced public debates. The analysis further notes that media reports on cults rely heavily on police officials and cult "experts" who portray cult activity as dangerous and destructive, and that when divergent views are presented, they are often overshadowed by horrific stories of ritualistic torture, sexual abuse, mind control, and other such practices. Furthermore, when unfounded allegations are proved untrue, the refutation receives little or no media attention.

In 2012, Huffington Post columnist Jacques Berlinerblau argued that secularism has often been misinterpreted in the media as another word for atheism, stating: "Secularism must be the most misunderstood and mangled ism in the American political lexicon. Commentators on the right and the left routinely equate it with Stalinism, Nazism and Socialism, among other dreaded isms. In the United States, of late, another false equation has emerged. That would be the groundless association of secularism with atheism. The religious right has profitably promulgated this misconception at least since the 1970s."

According to Stuart A. Wright, there are six factors that contribute to media bias against minority religions: first, the knowledge and familiarity of journalists with the subject matter; second, the degree of cultural accommodation of the targeted religious group; third, limited economic resources available to journalists; fourth, time constraints; fifth, sources of information used by journalists; and finally, the front-end/back-end disproportionality of reporting. According to Yale Law professor Stephen Carter, "it has long been the American habit to be more suspicious of—and more repressive toward—religions that stand outside the mainline Protestant-Roman Catholic-Jewish troika that dominates America's spiritual life." As for front-end/back-end disproportionality, Wright says: "news stories on unpopular or marginal religions frequently are predicated on unsubstantiated allegations or government actions based on faulty or weak evidence occurring at the front-end of an event. As the charges weighed in against material evidence, these cases often disintegrate. Yet rarely is there equal space and attention in the mass media given to the resolution or outcome of the incident. If the accused are innocent, often the public is not made aware."

Social media bias

One of the hotbeds of political bias is social media. In July 2020, the Pew Research Center reported that 64% of Americans believed social media had a toxic effect on society and culture, while only 10% believed it had a positive effect. Even so, 72% of US adults had at least one social media account in 2019, and 65% of Americans believe social media is an effective way to reach out to politicians. Some of the main concerns with social media lie with the spread of deliberately false or misinterpreted information and the spread of hate and extremism. Social scientists attribute the growth of misinformation and hate to the increase in echo chambers.

Fueled by confirmation bias, online echo chambers allow users to be steeped within their own ideology. Because social media is tailored to a user's interests and selected friends, it is an easy outlet for political echo chambers. Another Pew Research poll in 2019 showed that 28% of US adults "often" find their news through social media, and 55% get their news from social media either "often" or "sometimes". Additionally, more people are reported to be turning to social media for news as the coronavirus pandemic has restricted politicians to online campaigns and social media live streams. GCF Global encourages online users to avoid echo chambers by interacting with different people and perspectives and by resisting the temptation of confirmation bias.

How people view media

In 1997, two-thirds (67%) of Americans agreed with the statement: "In dealing with political and social issues, news organizations tend to favor one side." That was up 14 points from the 53 percent who gave that answer in 1985. Those who believed the media "deal fairly with all sides" fell from 34 percent to 27 percent. "In one of the most telling complaints, a majority (54%) of Americans believe the news media gets in the way of society solving its problems," Pew reported. Republicans "are more likely to say news organizations favor one side than are Democrats or independents (77 percent vs. 58 percent and 69 percent, respectively)." The percentage who felt "news organizations get the facts straight" fell from 55 percent to 37 percent.

Role of language

Bias is often reflected both in which language is used and in how that language is used. Mass media have a worldwide reach, but must communicate with each linguistic group in a language it understands. The use of language may be neutral, or may attempt to be as neutral as possible, using careful translation and avoiding culturally charged words and phrases. Or it may be intentionally or accidentally biased, using mistranslations and trigger words targeting particular groups.

For example, in Bosnia and Herzegovina there are three mutually intelligible languages, Bosnian, Croatian, and Serbian. Media that try to reach as large an audience as possible use words common to all three languages. Media that want to target just one group may choose words that are unique to that group. In the United States, while most media is in English, in the 2020 election both major political parties used Spanish language advertising to reach out to Hispanic voters. Al Jazeera originally used Arabic, to reach its target audience, but in 2003 launched Al Jazeera English to broaden that audience.

Attempts to use language designed to appeal to a particular cultural group can backfire, as when Kimberly Guilfoyle, speaking at the Republican National Convention in 2020, said she was proud that her mother was an immigrant from Puerto Rico. Puerto Ricans were quick to point out that they are born American citizens, and are not immigrants.

There are also false flag broadcasts that pretend to favor one group while using language deliberately chosen to anger the target audience.

Language may also introduce a more subtle form of bias. The selection of metaphors and analogies, or the inclusion of personal information in one situation but not another can introduce bias, such as a gender bias. Use of a word with positive or negative connotations rather than a more neutral synonym can form a biased picture in the audience's mind. For example, it makes a difference whether the media calls a group "terrorists" or "freedom fighters" or "insurgents". A 2005 memo to the staff of the CBC states:

Rather than calling assailants "terrorists," we can refer to them as bombers, hijackers, gunmen (if we're sure no women were in the group), militants, extremists, attackers or some other appropriate noun.

In a widely criticized episode, initial online BBC reports of the 7 July 2005 London bombings identified the perpetrators as terrorists, in contradiction to the BBC's internal policy. But by the next day, journalist Tom Gross noted that the online articles had been edited, replacing "terrorists" with "bombers". In another case, on March 28, 2007, the BBC paid almost $400,000 in legal fees in a London court to keep an internal memo dealing with alleged anti-Israeli bias from becoming public. The BBC has also been accused of having a pro-Israel bias.

Other influences

The apparent bias of media is not always specifically political in nature. The news media tend to appeal to a specific audience, which means that stories that affect a large number of people on a global scale often receive less coverage in some markets than local stories, such as a public school shooting, a celebrity wedding, a plane crash, a "missing white woman", or similarly glamorous or shocking stories. For example, the deaths of millions of people in an ethnic conflict in Africa might be afforded scant mention in American media, while the shooting of five people in a high school is analyzed in depth. Bias is also known to exist in sports broadcasting; in the United States, broadcasters tend to favor teams on the East Coast, teams in major markets, older and more established teams and leagues, teams based in their respective country (in international sport) and teams that include high-profile celebrity athletes. These types of bias are a function of what the public wants to watch and of what producers and publishers believe the public wants to watch.

Bias has also been claimed in instances referred to as conflict of interest, whereby the owners of media outlets have vested interests in other commercial enterprises or political parties. In such cases in the United States, the media outlet is required to disclose the conflict of interest.

However, the decisions of the editorial department of a newspaper and the corporate parent frequently are not connected, as the editorial staff retains freedom to decide what is covered as well as what is not. Biases, real or implied, frequently arise when it comes to deciding what stories will be covered and who will be called for those stories.

Accusations that a source is biased, if accepted, may cause media consumers to distrust certain kinds of statements, and place added confidence on others.

Christian privilege

From Wikipedia, the free encyclopedia

Christian privilege is a social advantage bestowed upon Christians in society. It arises from the presumption that Christian belief is a social norm, which leads to the marginalization of the nonreligious and members of other religions through institutional religious discrimination or religious persecution. Christian privilege can also lead to the neglect of outsiders' cultural heritage and religious practices.

Overview

Christian privilege is a type of dominant group privilege where the unconscious or conscious attitudes and beliefs of Christians advantage Christians over non-Christians. Examples include opinions that non-Christian beliefs are inferior or dangerous, or that those who adhere to non-Christian beliefs are amoral, immoral, sinful, or misguided. Such prejudices pervade established social institutions, are reinforced by the broader American society, and have evolved as part of its history.

Lewis Z. Schlosser observes that the exposure of Christian privilege breaks a "sacred taboo", and that "both subtle and obvious pressures exist to ensure that these privileges continue to be in the sole domain of Christians. This process is quite similar to the way in which whites and males continue to (consciously and unconsciously) ensure the privilege of their racial and gender groups".

There is a hierarchy of Christian privilege in the United States. White mainstream Protestant denominations have greater degrees of privilege than minority Christian denominations. Such groups include African American churches, Hispanic and Latino Christians, Amish people, Mennonites, Quakers, Seventh-day Adventists, Jehovah's Witnesses, adherents of the Eastern Orthodox Church, Christian Scientists, Mormons, and in some instances, Catholics.

When the dominant Christian groups impose their cultural norms, values, and perspectives on people with different beliefs, those people are seen in social justice terms as oppressed. These values are imposed "on institutions by individuals and on individuals by institutions". These social and cultural values define ideas of good and evil, health and sickness, normality and deviance, and how one should live one's life. The dominant group's social values serve to justify and rationalize social oppression, while the dominant group's members may not be aware of the ways in which they are privileged because of their own social identity. "Unpacking" McIntosh's allegorical knapsack of privilege (of any kind) means becoming aware of its existence and developing a critical consciousness of how it impacts the daily lives of both those with and those without this privilege.

History

Alexis de Tocqueville, the French political scientist and diplomat, traveled across the United States for nine months between 1831 and 1832, conducting research for his book Democracy in America. He noted a paradox of religion in the U.S. On the one hand, the United States promoted itself around the world as a country that valued both the "separation of church and state" and religious freedom and tolerance. On the other hand, "There is no country in the world where the Christian religion retains a greater influence over the souls of men than in America". He explained this paradox by proposing that, with no officially sanctioned governmental religion, Christian denominations were compelled to compete with one another and promote themselves in order to attract and keep parishioners, thereby making religion even stronger. While the government did not support Christian churches as such, Tocqueville argued that religion should be considered the first political institution because of the enormous influence that churches had on the political process.

Although Tocqueville favored U.S.-style democracy, he found its major limitation to be its restriction of independent thought and independent beliefs. In a country that promoted the notion that the majority rules, minorities were effectively silenced by what Tocqueville termed the "tyranny of the majority". Without specific guarantees of minority rights—in this case minority religious rights—there is a danger of religious domination over religious minorities and non-believers. The religious majority in the U.S. has historically consisted of adherents of mainline Protestant Christian denominations, who often assume that their values and standards apply equally to others.

Another traveler to the United States, social theorist Gunnar Myrdal, examined U.S. society following World War II and noted a contradiction, which he termed "an American dilemma": an overriding commitment to democracy, liberty, freedom, human dignity, and egalitarian values coexisting alongside deep-seated patterns of racial discrimination, the privileging of white people, and the subordination of peoples of color. This contradiction has been reframed for contemporary consideration by the religious scholar Diana Eck:

"The new American dilemma is real religious pluralism, and it poses challenges to America's Christian churches that are as difficult and divisive as those of race. Today, the invocation of a Christian America takes on a new set of tensions as our population of Muslim, Hindu, Sikh, and Buddhist neighbors grows. The ideal of a Christian America stands in contradiction to the spirit, if not the letter, of America's foundational principle of religious freedom"

Christian hegemony

The concept of hegemony describes the ways in which a dominant group, in this case mainly U.S. Christians, disseminate their dominant social constructions as common sense, normative, or even universal, even though most of the world's inhabitants are not Christian. Christian hegemony also accepts Christianity as part of the natural order, even at times by those who are marginalized, disempowered, or rendered invisible by it. Thus, Christian hegemony helps to maintain the marginality of other religions and beliefs. According to Beaman, "the binary opposition of sameness/difference is reflected in Protestant/minority religion in which mainstream Protestantism is representative of the 'normal'".

The French philosopher Michel Foucault described how a dominant group's hegemony is advanced through "discourses". Discourses include the ideas, written expressions, theoretical foundations, and language of the dominant culture. According to Foucault, dominant-group discourses pervade networks of social and political control, which he called "regimes of truth", and which function to legitimize what can be said, who has the authority to speak and be heard, and what is authorized as true or as the truth.

Pervasiveness

Christian privilege at the individual level occurs in proselytizing to convert or reconvert non-Christians to Christianity. While many Christians view proselytizing as offering the gift of Jesus to the non-Christians, some non-believers and people of other faiths may view this as an imposition, manipulation, or oppression.

Social institutions—including but not limited to educational, governmental, and religious bodies—often maintain and perpetuate policies that explicitly or implicitly privilege and promote some groups while limiting access, excluding, or rendering invisible other groups based on social identity and social status.

Overt forms of oppression, when a dominant group tyrannizes a subordinate group, for example, apartheid, slavery and ethnic cleansing, are obvious. However, dominant group privilege is not as obvious, especially to members of dominant groups. Oppression in its fullest sense refers to structural or systemic constraints imposed on groups, even within constitutional democracies, and its "causes are embedded in unquestioned norms, habits, and symbols, in the assumptions underlying institutional rules and the collective consequences of following those rules".

Christian dominance is facilitated by its relative invisibility, and because of this invisibility, it is not analyzed, scrutinized, or confronted. Dominance is perceived as unremarkable or "normal". For example, some symbolism and rituals associated with religious holidays may appear to be free of religion. However, this very secularization can fortify Christian privilege and perpetuate Christian hegemony by making it harder to recognize and thus circumvent the constitutional requirements for the separation of religion and government.

Christian privilege and religious oppression exist in a symbiotic relationship. Oppression toward non-Christians gives rise to Christian privilege, and Christian privilege maintains oppression toward non-Christian individuals and faith communities.

Criticism

According to Schlosser, many Christians reject the notion that they have any privilege by claiming that all religions are essentially the same. Thus, they have no more and no fewer benefits accorded to them than members of other faith communities. Blumenfeld notes the objections that some of his university students raise when discussing Christian privilege as connected with the celebration of Christian holidays. The students, he notes, state that many of the celebrations and decorations have nothing to do with religion as such, and do not represent Christianity, but are rather part of American culture—however, this could be considered a further example of privilege.

Similarly, some claim that the religious significance of cultural practices stems not from Christianity, but rather from a Judeo-Christian tradition. Beaman argues that "this obscures the pervasiveness of anti-Semitism in the modern world".

Scholars and jurists debate the exact scope of religious liberty protected by the First Amendment. It is unclear whether the amendment requires religious minorities to be exempted from neutral laws and whether the Free Exercise Clause requires Congress to exempt religious pacifists from conscription into the military. At a minimum, it prohibits Congress from, in the words of James Madison, compelling "men to worship God in any manner contrary to their conscience".
