
Scientific method




An 18th-century depiction of early experimentation in the field of chemistry.
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.[1] To be termed scientific, a method of inquiry is commonly based on empirical or measurable evidence subject to specific principles of reasoning.[2] The Oxford English Dictionary defines the scientific method as "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."[3]

Although procedures vary from one field of inquiry to another, identifiable features are frequently shared among them. The overall process of the scientific method involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions.[4][5] A hypothesis is a conjecture, based on knowledge obtained while formulating the question. The hypothesis might be very specific or it might be broad. Scientists then test hypotheses by conducting experiments. Under modern interpretations, a scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.

The purpose of an experiment is to determine whether observations agree with or conflict with the predictions derived from a hypothesis.[6] Experiments can take place in a college lab, on a kitchen table, at CERN's Large Hadron Collider, at the bottom of an ocean, on Mars, and so on. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (or to the same degree), and they are not always performed in the same order.[8]

Overview

The DNA example below is a synopsis of this method.

Ibn al-Haytham (Alhazen), 965–1039 Iraq. The Muslim scholar who is considered by some to be the father of modern scientific methodology, due to his emphasis on experimental data and the reproducibility of results.[9][10]

Johannes Kepler (1571–1630). "Kepler shows his keen logical sense in detailing the whole process by which he finally arrived at the true orbit. This is the greatest piece of Retroductive reasoning ever performed." – C. S. Peirce, c. 1896, on Kepler's reasoning through explanatory hypotheses[11]

According to Morris Kline,[12] "Modern science owes its present flourishing state to a new scientific method which was fashioned almost entirely by Galileo Galilei" (1564−1642). Dudley Shapere[13] takes a more measured view of Galileo's contribution.
The scientific method is the process by which science is carried out.[14] As in other areas of inquiry, science (through the scientific method) can build on previous knowledge and develop a more sophisticated understanding of its topics of study over time.[15][16][17][18][19][20] This model can be seen to underlie the scientific revolution.[21] One thousand years ago, Alhazen argued for the importance of forming questions and subsequently testing them,[22] an approach which was advocated by Galileo in 1638 with the publication of Two New Sciences.[23] The current method is based on a hypothetico-deductive model[24] formulated in the 20th century, although it has undergone significant revision since first proposed (for a more formal discussion, see below).

Process

The overall process involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions to determine whether the original conjecture was correct.[4] There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, these steps are better considered as general principles.[25] Not all steps take place in every scientific inquiry (or to the same degree), and they are not always performed in the same order. As noted by William Whewell (1794–1866), "invention, sagacity, [and] genius"[26] are required at every step.

Formulation of a question

The question can refer to the explanation of a specific observation, as in "Why is the sky blue?", but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves looking up and evaluating evidence from previous experiments, personal scientific observations or assertions, and/or the work of other scientists. If the answer is already known, a different question that builds on the previous evidence can be posed. When applying the scientific method to scientific research, determining a good question can be very difficult and affects the final outcome of the investigation.[27]

Hypothesis

A hypothesis is a conjecture, based on knowledge obtained while formulating the question, that may explain the observed behavior of a part of our universe. The hypothesis might be very specific, e.g., Einstein's equivalence principle or Francis Crick's "DNA makes RNA makes protein",[28] or it might be broad, e.g., unknown species of life dwell in the unexplored depths of the oceans. A statistical hypothesis is a conjecture about some population. For example, the population might be people with a particular disease. The conjecture might be that a new drug will cure the disease in some of those people. Terms commonly associated with statistical hypotheses are null hypothesis and alternative hypothesis. A null hypothesis is the conjecture that the statistical hypothesis is false, e.g., that the new drug does nothing and that any cures are due to chance effects. Researchers normally want to show that the null hypothesis is false. The alternative hypothesis is the desired outcome, e.g., that the drug does better than chance. A final point: a scientific hypothesis must be falsifiable, meaning that one can identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, it cannot be meaningfully tested.
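
To make the drug example concrete, here is a minimal sketch of such a null-hypothesis test; the trial numbers are hypothetical, not drawn from any real study:

    # Hypothetical trial: assume 30% of patients recover by chance alone,
    # and suppose 24 of 50 treated patients recovered.
    # Null hypothesis H0: the drug does nothing (recovery probability p = 0.3).
    # Alternative H1: the drug does better than chance (p > 0.3).
    from math import comb

    n, k, p0 = 50, 24, 0.30

    # One-sided p-value: probability of at least k recoveries under H0,
    # where the number of recoveries is Binomial(n, p0).
    p_value = sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))
    print(f"P(>= {k} recoveries by chance) = {p_value:.4f}")

    # A very small p-value is evidence against the null hypothesis; it does
    # not by itself prove the alternative, which is why further predictions
    # and experiments are still needed.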

Prediction

This step involves determining the logical consequences of the hypothesis. One or more predictions are then selected for further testing. The less likely it is that a prediction would be correct simply by coincidence, the more convincing its fulfillment would be; evidence is also stronger if the answer to the prediction is not already known, due to the effects of hindsight bias (see also postdiction). Ideally, the prediction must also distinguish the hypothesis from likely alternatives; if two hypotheses make the same prediction, observing the prediction to be correct is not evidence for either one over the other. (These statements about the relative strength of evidence can be mathematically derived using Bayes' theorem.)[29]
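
As an illustration (with made-up numbers), Bayes' theorem shows why a prediction that is unlikely to hold by coincidence carries more evidential weight when it is fulfilled:

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
    # with P(E) = P(E|H)*P(H) + P(E|not H)*(1 - P(H)).
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    prior = 0.1  # hypothetical initial credence in the hypothesis

    # A prediction that would very likely hold by coincidence anyway
    # (P(E|not H) = 0.9) barely moves the posterior...
    print(posterior(prior, p_e_given_h=1.0, p_e_given_not_h=0.9))   # ~0.11

    # ...while one unlikely to hold by coincidence (P(E|not H) = 0.01)
    # moves it dramatically.
    print(posterior(prior, p_e_given_h=1.0, p_e_given_not_h=0.01))  # ~0.92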

Testing

This is an investigation of whether the real world behaves as predicted by the hypothesis. Scientists (and other people) test hypotheses by conducting experiments. The purpose of an experiment is to determine whether observations of the real world agree with or conflict with the predictions derived from a hypothesis. If they agree, confidence in the hypothesis increases; otherwise, it decreases. Agreement does not assure that the hypothesis is true; future experiments may reveal problems. Karl Popper advised scientists to try to falsify hypotheses, i.e., to search for and test those experiments that seem most doubtful. Large numbers of successful confirmations are not convincing if they arise from experiments that avoid risk.[30] Experiments should be designed to minimize possible errors, especially through the use of appropriate scientific controls. For example, tests of medical treatments are commonly run as double-blind tests: test personnel, who might unwittingly reveal to test subjects which samples are the desired test drugs and which are placebos, are kept ignorant of which are which, since such hints can bias the responses of the test subjects. Furthermore, failure of an experiment does not necessarily mean the hypothesis is false. Experiments always depend on several hypotheses, e.g., that the test equipment is working properly, and a failure may be a failure of one of the auxiliary hypotheses (see the Duhem–Quine thesis). Experiments can be conducted in a college lab, on a kitchen table, at CERN's Large Hadron Collider, at the bottom of an ocean, on Mars (using one of the working rovers), and so on. Astronomers do experiments, searching for planets around distant stars.
Finally, most individual experiments address highly specific topics for reasons of practicality. As a result, evidence about broader topics is usually accumulated gradually.
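
The blinding logic described above can be sketched as follows; this is a toy illustration only, not a real trial protocol, and the subject IDs and sample codes are invented:

    # Toy sketch of double-blinding: subjects are randomly assigned to drug or
    # placebo, but test personnel see only neutral sample codes, so they
    # cannot unwittingly hint at who received what.
    import random

    subjects = [f"subject-{i}" for i in range(1, 9)]      # hypothetical IDs
    shuffled = random.sample(subjects, len(subjects))
    half = len(shuffled) // 2
    assignment = {s: "drug" for s in shuffled[:half]}
    assignment.update({s: "placebo" for s in shuffled[half:]})

    # The code table is all that testers ever see; the assignment key stays
    # sealed until every response has been recorded.
    codes = {s: f"sample-{i:03d}" for i, s in enumerate(shuffled)}
    print(sorted(codes.values()))   # the blinded view
    # 'assignment' is consulted only after unblinding, at analysis time.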

Analysis

This involves determining what the results of the experiment show and deciding on the next actions to take. The predictions of the hypothesis are compared to those of the null hypothesis, to determine which is better able to explain the data. In cases where an experiment is repeated many times, a statistical analysis such as a chi-squared test may be required. If the evidence has falsified the hypothesis, a new hypothesis is required; if the experiment supports the hypothesis but the evidence is not strong enough for high confidence, other predictions from the hypothesis must be tested. Once a hypothesis is strongly supported by evidence, a new question can be asked to provide further insight on the same topic. Evidence from other scientists and experience are frequently incorporated at any stage in the process. Depending on the complexity of the experiment, many iterations may be required to gather sufficient evidence to answer a question with confidence, or to build up many answers to highly specific questions in order to answer a single broader question.
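
For example (a sketch with invented counts, and assuming SciPy is available), a chi-squared test can compare observed outcomes against the counts the null hypothesis predicts:

    # Illustrative only: comparing observed outcomes in a treated group
    # against the counts expected under the null hypothesis.
    from scipy.stats import chisquare

    observed = [24, 26]   # hypothetical: recovered / not recovered (treated)
    expected = [15, 35]   # expected under H0 (30% recovery by chance)

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-squared = {stat:.2f}, p = {p_value:.4f}")
    # A small p suggests the data are unlikely under the null hypothesis, so
    # the alternative explains the data better; a large p leaves the null
    # standing and sends us back for more predictions to test.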

DNA example

The basic elements of the scientific method are illustrated by the following example from the discovery of the structure of DNA:
  • Question: Previous investigation of DNA had determined its chemical composition (the four nucleotides), the structure of each individual nucleotide, and other properties. It had been identified as the carrier of genetic information by the Avery–MacLeod–McCarty experiment in 1944,[31] but the mechanism of how genetic information was stored in DNA was unclear.
  • Hypothesis: Linus Pauling, Francis Crick and James D. Watson hypothesized that DNA had a helical structure.[32]
  • Prediction: If DNA had a helical structure, its X-ray diffraction pattern would be X-shaped.[33][34] This prediction was determined using the mathematics of the helix transform, which had been derived by Cochran, Crick and Vand[35] (and independently by Stokes). This prediction was a mathematical construct, completely independent from the biological problem at hand.
  • Experiment: Rosalind Franklin crystallized pure DNA and performed X-ray diffraction to produce Photo 51. The results showed an X-shape.
  • Analysis: When Watson saw the detailed diffraction pattern, he immediately recognized it as a helix.[36][37] He and Crick then produced their model, using this information along with the previously known information about DNA's composition and about molecular interactions such as hydrogen bonds.[38]
The discovery became the starting point for many further studies involving the genetic material, such as the field of molecular genetics, and Watson, Crick, and Wilkins shared a Nobel Prize for it in 1962. Each step of the example is examined in more detail later in the article.

Other components

The scientific method also includes other components required even when all the iterations of the steps above have been completed:[39]

Replication

If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work.[40]

External review

The process of peer review involves evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.[41]

Data recording and sharing

Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others.[42] Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain.[43]

Scientific inquiry

Scientific inquiry generally aims to obtain knowledge in the form of testable explanations that can be used to predict the results of future experiments. This allows scientists to gain a better understanding of the topic being studied, and later be able to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it is to continue explaining a body of evidence better than its alternatives. The most successful explanations, which explain and make accurate predictions in a wide range of circumstances, are often called scientific theories.

Most experimental results do not result in large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science.[44] Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often the explanations are altered over time, or explanations are combined to produce new explanations.

Properties of scientific inquiry


Muybridge's photographs of The Horse in Motion, 1878, were used to answer the question of whether all four feet of a galloping horse are ever off the ground at the same time; they falsified the traditional "flying gallop" depiction. This demonstrates a use of photography in science.

Scientific knowledge is closely tied to empirical findings, and can remain subject to falsification if new experimental observations incompatible with it are found. That is, no theory can ever be considered final, since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory can be argued to be related to how long it has persisted without major alteration to its core principles.

Theories can also be subject to subsumption by other theories. For example, thousands of years of scientific observations of the planets were explained almost perfectly by Newton's laws. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases independent, unconnected, scientific observations can be connected to each other, unified by principles of increasing explanatory power.[45]

Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors.[45] For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world;[46][47] its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.

Beliefs and biases

Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy. The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).

A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touches the ground, to the point that this image was included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.[48] Another important human bias that plays a role is a preference for new, surprising statements (see appeal to novelty), which can result in a search for evidence that the new is true.[1] In contrast to this standard in the scientific method, poorly attested beliefs can be believed and acted upon via a less rigorous heuristic,[49] sometimes taking advantage of the narrative fallacy: when a narrative is constructed, its elements become easier to believe.[50][51] Sometimes, these have their elements assumed a priori, or contain some other logical or methodological flaw in the process that ultimately produced them.[52]

Elements of the scientific method

There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
Four essential elements[53][54][55] of the scientific method[56] are iterations,[57][58] recursions,[59] interleavings, or orderings of the following:
  • Characterizations (observations, definitions, and measurements of the subject of inquiry)
  • Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject)
  • Predictions (reasoning, including logical deduction, from the hypothesis or theory)
  • Experiments (tests of all of the above)
Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). The elements above are often taught in the educational system as "the scientific method".[66]

The scientific method is not a single recipe: it requires intelligence, imagination, and creativity.[67] In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase our confidence in Newton's work.

A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:[68]
  1. Define a question
  2. Gather information and resources (observe)
  3. Form an explanatory hypothesis
  4. Test the hypothesis by performing an experiment and collecting data in a reproducible manner
  5. Analyze the data
  6. Interpret the data and draw conclusions that serve as a starting point for new hypotheses
  7. Publish results
  8. Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step method goes from point 3 through point 6 and back to 3 again.

While this schema outlines a typical hypothesis/testing method,[69] a number of philosophers, historians, and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways that science is actually practiced.

The "operational" paradigm combines the concepts of operational definition, instrumentalism, and utility:

The essential elements of scientific method are operations, observations, models, and a utility function for evaluating models.[70][not in citation given]

Characterizations

The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it took a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting.

The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
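
A small sketch (with synthetic readings, purely for illustration) of the kind of tabulation, correlation, and regression mentioned above:

    # Illustrative sketch with synthetic measurements: tabulate paired
    # readings, then compute a correlation and fit a regression line.
    import numpy as np

    temperature_c = np.array([10.0, 15.0, 20.0, 25.0, 30.0])   # hypothetical
    reaction_rate = np.array([0.8, 1.3, 1.9, 2.4, 3.1])        # hypothetical

    r = np.corrcoef(temperature_c, reaction_rate)[0, 1]
    slope, intercept = np.polyfit(temperature_c, reaction_rate, 1)

    print(f"correlation r = {r:.3f}")
    print(f"fit: rate = {slope:.3f} * T + {intercept:.3f}")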
"I am not accustomed to saying anything with certainty after only one or two observations." – Andreas Vesalius (1546)

Uncertainty

Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity.
Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
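
A minimal sketch of the first approach, assuming six hypothetical repeated readings of the same length:

    # Illustrative: estimating a quantity and its uncertainty from
    # repeated measurements of the same length.
    import statistics

    readings_mm = [12.03, 11.97, 12.10, 12.01, 11.95, 12.06]   # hypothetical

    mean = statistics.mean(readings_mm)
    stdev = statistics.stdev(readings_mm)        # sample standard deviation
    stderr = stdev / len(readings_mm) ** 0.5     # standard error of the mean

    print(f"length = {mean:.3f} +/- {stderr:.3f} mm")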

Definition

Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France.
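
As a worked illustration of that operational definition (using standard physical constants; the deposited mass and duration are hypothetical), Faraday's law of electrolysis recovers the current from the mass of silver deposited:

    # Worked example (Faraday's law of electrolysis): recover the current
    # from the mass of silver deposited in a silver coulometer.
    F = 96485.0      # Faraday constant, C/mol
    M_AG = 107.868   # molar mass of silver, g/mol (Ag+ carries one charge)

    mass_g = 0.6708  # hypothetical: silver deposited on the electrode...
    time_s = 600.0   # ...over ten minutes

    current_a = mass_g * F / (M_AG * time_s)
    print(f"I = {current_a:.4f} A")   # ~1.0 A, i.e. about 1.118 mg/s per ampere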

The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.

New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood.[72] In Crick's study of consciousness, he found it easier to study awareness in the visual system than to study free will, for example. His cautionary example was the gene: the gene was much more poorly understood before Watson and Crick's discovery of the structure of DNA, and it would have been counterproductive to spend much time on defining the gene before then.

DNA-characterizations

The history of the discovery of the structure of DNA is a classic example of the elements of the scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle).[31] But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.[73]

Another example: precession of Mercury


Precession of the perihelion (exaggerated)

The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to fully record the motion of planet Earth. Newton was able to subsume those measurements under the consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion (see the diagram above), as Le Verrier pointed out in 1859. The observed difference for Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of general relativity. His relativistic calculations matched observation much more closely than did Newtonian theory. The difference is approximately 43 arc-seconds per century.
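
The quoted figure can be checked against the standard first-order relativistic formula for perihelion advance per orbit, delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)), using published orbital constants:

    # Worked check of the ~43 arc-seconds-per-century figure for Mercury.
    from math import pi

    GM_SUN = 1.32712e20    # G*M for the Sun, m^3/s^2
    C = 2.99792458e8       # speed of light, m/s
    A = 5.7909e10          # Mercury's semi-major axis, m
    E = 0.2056             # Mercury's orbital eccentricity
    PERIOD_DAYS = 87.969   # Mercury's orbital period

    per_orbit = 6 * pi * GM_SUN / (C**2 * A * (1 - E**2))   # radians per orbit
    orbits_per_century = 36525 / PERIOD_DAYS
    arcsec = per_orbit * orbits_per_century * (180 / pi) * 3600
    print(f"{arcsec:.1f} arc-seconds per century")          # ~43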

Hypothesis development

A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena.
Normally hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.

Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, induction, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.

William Glen observes that
the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate … bald suppositions and areas of vagueness.[74]
In general scientists tend to look for theories that are "elegant" or "beautiful". In contrast to the usual English use of these terms, they here refer to a theory in accordance with the known facts, which is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.

DNA-hypotheses

Linus Pauling proposed that DNA might be a triple helix.[75] This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong[76] and that Pauling would soon admit his difficulties with that structure. So, the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race).

Predictions from the hypothesis

Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.

If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. Thus, much scientifically based speculation might convince one (or many) that the hypothesis that other intelligent species exist is true. But since there is no experiment now known which can test this hypothesis, science itself can have little to say about the possibility. In the future, a new technique might lead to an experimental test and the speculation would then become part of accepted science.

DNA-predictions

James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be X-shaped.[34][77] This prediction followed from the work of Cochran, Crick and Vand[35] (and independently by Stokes). The Cochran–Crick–Vand–Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces X-shaped patterns.
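
A toy numerical illustration of the theorem's content (not the Cochran–Crick–Vand derivation itself; the helix radius, pitch, and sampling below are arbitrary): the diffraction intensity of points on a helix peaks farther from the meridian on each successive layer line, tracing out the arms of the X.

    # Toy check of the helix-transform idea: on the n-th "layer line"
    # (qz = 2*pi*n / pitch) the intensity peaks move outward as n grows.
    import cmath
    from math import cos, pi

    R, PITCH, TURNS, PTS = 1.0, 3.4, 20, 2000       # arbitrary helix parameters
    zs = [PITCH * TURNS * i / PTS for i in range(PTS)]
    xs = [R * cos(2 * pi * z / PITCH) for z in zs]  # projected helix (a sinusoid)

    for n in range(1, 6):                           # layer lines 1..5
        qz = 2 * pi * n / PITCH
        best_qx, best_int = 0.0, 0.0
        for k in range(1, 200):
            qx = 0.05 * k
            f = sum(cmath.exp(-1j * (qx * x + qz * z)) for x, z in zip(xs, zs))
            if abs(f) ** 2 > best_int:
                best_qx, best_int = qx, abs(f) ** 2
        print(f"layer line {n}: intensity peaks near qx = {best_qx:.2f}")
    # The peak positions grow roughly linearly with n (they follow the first
    # maxima of the Bessel functions J_n), giving the X shape seen in fiber
    # diffraction photographs such as Photo 51.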

In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".[78]

Another example: general relativity


Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.[79]
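
The numbers behind that test are easy to reproduce from the textbook first-order results (standard constants below): general relativity predicts a deflection of 4GM/(c^2 R) for light grazing the Sun, twice the Newtonian corpuscular value.

    # Worked check of the classic 1919 eclipse test.
    from math import pi

    GM_SUN = 1.32712e20   # G*M for the Sun, m^3/s^2
    C = 2.99792458e8      # speed of light, m/s
    R_SUN = 6.957e8       # solar radius, m

    gr = 4 * GM_SUN / (C**2 * R_SUN)   # relativistic deflection, radians
    newton = gr / 2                    # Newtonian value, radians
    to_arcsec = (180 / pi) * 3600
    print(f"GR: {gr * to_arcsec:.2f} arcsec, Newtonian: {newton * to_arcsec:.2f}")
    # Eddington's measurements favored the larger, relativistic value (~1.75").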

Experiments

Once predictions are made, they can be tested by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed, when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is.[80] Factor analysis is one technique for discovering the important factor in an effect.

Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study, or an archaeological excavation. Even taking a plane from New York to Paris is an experiment which tests the aerodynamic hypotheses used for constructing the plane.

Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential: it aids in reporting the experimental results, supports the effectiveness and integrity of the procedure, and assists others in reproducing the results. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the equinoxes, while controlled experiments can be seen in the works of Jābir ibn Hayyān (721–815 CE), al-Battani (853–929) and Alhazen (965–1039).[81]

DNA-experiments

Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images, which showed an X-shape, and was able to confirm that the structure was helical.[36][37] This rekindled Watson and Crick's model building and led to the correct structure.

Evaluation and improvement

The scientific method is iterative. At any stage it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration.
Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.

Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.

DNA-iterations

After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts,[82][83][84] Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it.[38][85] They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.

Confirmation

Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.[86]

To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.

Models of scientific inquiry

Classical model

The classical model of scientific inquiry derives from Aristotle,[87] who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.

Pragmatic model

In 1877,[15] Charles Sanders Peirce (/ˈpɜrs/ like "purse"; 1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, toward a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless.[88] He outlined four methods of settling opinion, ordered from least to most successful:
  1. The method of tenacity (policy of sticking to initial belief) – which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory.[89]
  2. The method of authority – which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past.
  3. The method of the a priori – which promotes conformity less brutally but fosters opinions as something like tastes, arising in conversation and comparisons of perspectives in terms of "what is agreeable to reason." Thereby it depends on fashion in paradigms and goes in circles over time. It is more intellectual and respectable but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubt it.
  4. The scientific method – the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself.
Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research,[90] which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry.[91] The scientific method excels the others by being deliberately designed to arrive – eventually – at the most secure beliefs, upon which the most successful practices can be based.
Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method.[15][18]

For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points.[92] In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus, his theory of inquiry boils down to "Do the science." Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of definite increase of knowledge.[93] As inference, "logic is rooted in the social principle" since it depends on a standpoint that is, in a sense, unlimited.[94]

Paying special attention to the generation of explanations, Peirce outlined the scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in §III–IV in "A Neglected Argument"[4] except as otherwise noted):

1. Abduction (or retroduction). Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multi-generational) toward new truths.[95] Coordinative method leads from abducing a plausible hypothesis to judging it for its testability[96] and for how its trial would economize inquiry itself.[97] Peirce calls his pragmatism "the logic of abduction".[98] His pragmatic maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object".[92] His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects – a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, and incomplexity.[99] One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the leap, so to speak, of abduction and governs its art.[97]

2. Deduction. Two stages:
i. Explication. Unclearly premissed, but deductive, analysis of the hypothesis in order to render its parts as clear as possible.
ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, Theorematic.
3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general[92]) that the real is only the object of the final opinion to which adequate investigation would lead;[100] anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages:
i. Classification. Unclearly premissed, but inductive, classing of objects of experience under general ideas.
ii. Probation: direct Inductive Argumentation. Crude (the enumeration of instances) or Gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual Induction is Qualitative or Quantitative; if Qualitative, then dependent on weightings of qualities or characters;[101] if Quantitative, then dependent on measurements, or on statistics, or on countings.
iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".

Communication and community

Frequently the scientific method is employed not only by a single person, but also by several people cooperating directly or indirectly. Such cooperation can be regarded as an important element of a scientific community. Various standards of scientific methodology are used within such an environment.

Peer review evaluation

Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. In certain journals, the journal itself selects the referees, while in others (especially journals that are extremely specialized) the manuscript author might recommend referees. The referees may or may not recommend publication, or they might recommend publication with suggested modifications, or, sometimes, publication in another journal. This standard is practiced to various degrees by different journals, and can have the effect of keeping the literature free of obvious errors and of generally improving the quality of the material, especially in the journals that apply the standard most rigorously. The peer review process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.[102]

Documentation and replication

Sometimes experimenters may make systematic errors during their experiments, veer from standard methods and practices (pathological science) for various reasons, or, in rare cases, deliberately report false results. Because of these risks, other scientists might attempt to repeat the experiments in order to duplicate the results.

Archiving

Researchers sometimes practice scientific data archiving, such as in compliance with the policies of government funding agencies and scientific journals. In these cases, detailed records of their experimental procedures, raw data, statistical analyses and source code can be preserved in order to provide evidence of the methodology and practice of the procedure and assist in any potential future attempts to reproduce the result. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery.

Data sharing

When additional information is needed before a study can be reproduced, the author of the study might be asked to provide it. The author might provide it; if the author refuses to share the data, appeals can be made to the journal editors who published the study or to the institution which funded the research.

Limitations

Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being 'theory-laden'.

Dimensions of practice

The primary constraints on contemporary science are:
  • Publication, i.e. Peer review
  • Resources (mostly funding)
It has not always been like this: in the old days of the "gentleman scientist", funding (and, to a lesser extent, publication) were far weaker constraints.

Both of these constraints indirectly require scientific method – work that violates the constraints will be difficult to publish and difficult to get funded. Journals require submitted papers to conform to "good scientific practice" and to a degree this can be enforced by peer review. Originality, importance and interest are more important – see for example the author guidelines for Nature.

Philosophy and sociology of science

Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist, that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world.[103] These assumptions from methodological naturalism form a basis on which science may be grounded. Logical Positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.
Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.

Norwood Russell Hanson, Imre Lakatos and Thomas Kuhn have done extensive work on the "theory laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description.[104] He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn[105] and Feyerabend[106] acknowledge the pioneering significance of his work.

Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".[107]

Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be 'anything goes'.[108] Criticisms such as his led to the strong programme, a radical approach to the sociology of science.

The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.[109]

Role of chance in discovery

Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky.[110] Louis Pasteur is credited with the famous saying that "Chance favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research shows that scientists are taught various heuristics that tend to harness chance and the unexpected.[110][111] This is what Nassim Nicholas Taleb calls "anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.[19]
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.[110][111]

History

Aristotle, 384 BCE – 322 BCE. "As regards his method, Aristotle is recognized as the inventor of scientific method because of his refined analysis of logical implications contained in demonstrative discourse, which goes well beyond natural logic and does not owe anything to the ones who philosophized before him." – Riccardo Pozzo[112]

The development of the scientific method is inseparable from the history of science itself. Ancient Egyptian documents describe empirical methods in astronomy,[113] mathematics,[114] and medicine.[115] The ancient Greek philosopher Thales in the 6th century BCE refused to accept supernatural, religious or mythological explanations for natural phenomena, proclaiming that every event had a natural cause. The development of deductive reasoning by Plato was an important step towards the scientific method. Empiricism seems to have been formalized by Aristotle, who believed that universal truths could be reached via induction.

On the beginnings of the scientific method, Karl Popper writes of Parmenides (fl. 5th century BCE): "So what was really new in Parmenides was his axiomatic-deductive method, which Leucippus and Democritus turned into a hypothetical-deductive method, and thus made part of scientific methodology."[116]

According to David Lindberg, Aristotle (4th century BCE) wrote about the scientific method even if he and his followers did not actually follow what he said.[65] Lindberg also notes that Ptolemy (2nd century CE) and Ibn al-Haytham (11th century CE) are among the early examples of people who carried out scientific experiments.[117] John Losee adds that "the Physics and the Metaphysics contain discussions of certain aspects of scientific method", and that "Aristotle viewed scientific inquiry as a progression from observations to general principles and back to observations."[118]

Early Christian leaders such as Clement of Alexandria (150–215) and Basil of Caesarea (330–379) encouraged future generations to view the Greek wisdom as "handmaidens to theology" and science was considered a means to more accurate understanding of the Bible and of God.[119] Augustine of Hippo (354–430) who contributed great philosophical wealth to the Latin Middle Ages, advocated the study of science and was wary of philosophies that disagreed with the Bible, such as astrology and the Greek belief that the world had no beginning.[119] This Christian accommodation with Greek science "laid a foundation for the later widespread, intensive study of natural philosophy during the Late Middle Ages."[119] However, the division of Latin-speaking Western Europe from the Greek-speaking East,[119] followed by barbarian invasions, the Plague of Justinian, and the Islamic invasion,[120] resulted in the West largely losing access to Greek wisdom.

By the 8th century Islam had overrun the Christian lands[121] of Syria, Iraq, Iran and Egypt.[122] This swift occupation further severed Western Europe from many of the great works of Aristotle, Plato, Euclid and others, many of which had been housed in the great library of Alexandria. Having come upon such a wealth of knowledge, the Arabs, who viewed non-Arab languages as inferior, even as a source of pollution,[123] employed conquered Christians and Jews to translate these works from the native Greek and Syriac into Arabic.[124]

Thus equipped, Arab philosopher Alhazen (Ibn al-Haytham) performed optical and physiological experiments, reported in his manifold works, the most famous being Book of Optics (1021).[125] He was thus a forerunner of scientific method, having understood that a controlled environment involving experimentation and measurement is required in order to draw educated conclusions. Other Arab polymaths of the same era produced copious works on mathematics, philosophy, astronomy and alchemy. Most stuck closely to Aristotle, being hesitant to admit that some of Aristotle's thinking was errant,[126] while others strongly criticized him.

During these years an occasional paraphrased translation from the Arabic, itself translated from Greek and Syriac, might make its way to the West for scholarly study. It was not until 1204, when the Latins conquered Constantinople from the Byzantines in the Fourth Crusade, that renewed scholarly interest in the original Greek manuscripts began to grow. With this new, easier access to the libraries of Constantinople, Western scholars began a revival in the study and analysis of the original Greek texts.[127] From that point a functional scientific method that would launch modern science was on the horizon.

Grosseteste (1175–1253), an English statesman, scientist and Christian theologian, was "the principal figure" in bringing about "a more adequate method of scientific inquiry" by which "medieval scientists were able eventually to outstrip their ancient European and Muslim teachers" (Dales 1973:62). ... His thinking influenced Roger Bacon, who spread Grosseteste's ideas from Oxford to the University of Paris during a visit there in the 1240s. From the prestigious universities in Oxford and Paris, the new experimental science spread rapidly throughout the medieval universities: "And so it went to Galileo, William Gilbert, Francis Bacon, William Harvey, Descartes, Robert Hooke, Newton, Leibniz, and the world of the seventeenth century" (Crombie 1962:15). So it went to us as well. – Hugh G. Gauch, 2003.[128]

Roger Bacon (c. 1214 – 1294) is sometimes credited as one of the earliest European advocates of the modern scientific method inspired by the works of Aristotle.[129]

Roger Bacon (1214–1294), an English thinker and experimenter, is recognized by many to be the father of modern scientific method. His view that mathematics was essential to a correct understanding of natural philosophy was considered to be 400 years ahead of its time.[130] He was viewed as "a lone genius proclaiming the truth about time", having correctly calculated the calendar.[130] His work in optics provided the platform on which Newton, Descartes, Huygens and others later transformed the science of light. Bacon's groundbreaking advances were due largely to his discovery that experimental science must be based on mathematics (pp. 186–187). His works Opus Majus and De Speculis Comburentibus contain many "carefully drawn diagrams showing Bacon's meticulous investigations into the behavior of light."[130] He gives detailed descriptions of systematic studies using prisms and measurements by which he shows how a rainbow functions.[130]

Others who advanced scientific method during this era included Albertus Magnus (c. 1193 – 1280), Theodoric of Freiberg, (c. 1250 – c. 1310), William of Ockham (c. 1285 – c. 1350), and Jean Buridan (c. 1300 – c. 1358). These were not only scientists but leaders of the church – Christian archbishops, friars and priests.

By the late 15th century, the physician-scholar Niccolò Leoniceno was finding errors in Pliny's Natural History. As a physician, Leoniceno was concerned about these botanical errors propagating to the materia medica on which medicines were based.[131] To counter this, a botanical garden was established at the Orto botanico di Padova, University of Padua (in use for teaching by 1546), so that medical students might have empirical access to the plants of a pharmacopoeia. The philosopher and physician Francisco Sanches was led by his medical training at Rome, 1571–73, and by the philosophical skepticism recently placed in the European mainstream by the publication of Sextus Empiricus' Outlines of Pyrrhonism, to search for a true method of knowing (modus sciendi), since nothing clear can be known by the methods of Aristotle and his followers[132] – for example, syllogism fails upon circular reasoning. Following the physician Galen's method of medicine, Sanches lists the methods of judgement and experience, which are faulty in the wrong hands,[133] and we are left with the bleak statement That Nothing is Known (1581). This challenge was taken up by René Descartes in the next generation (1637); but at the least, Sanches warns us that we ought to refrain from the methods, summaries, and commentaries on Aristotle if we seek scientific knowledge. In this he is echoed by Francis Bacon, who was also influenced by skepticism; Sanches cites the humanist Juan Luis Vives, who sought a better educational system, as well as a statement of human rights, as a pathway for improving the lot of the poor.

The modern scientific method crystallized no later than the 17th and 18th centuries. In his work Novum Organum (1620) – a reference to Aristotle's Organon – Francis Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism.[134] Then, in 1637, René Descartes established the framework for the scientific method's guiding principles in his treatise, Discourse on Method. The writings of Alhazen, Bacon and Descartes are considered critical in the historical development of the modern scientific method, as are those of John Stuart Mill.[135]

In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the development of current scientific methodology generally. Peirce accelerated the progress on several fronts. Firstly, speaking in a broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both deduction and induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume, who wrote in the mid-to-late 18th century). Secondly, and of more direct importance to modern method, Peirce put forth the basic schema for hypothesis testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that, as discussed above in this article, play a role in inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself – indeed this was his primary specialty.

Beginning in the 1930s, Karl Popper argued that there is no such thing as inductive reasoning.[136] All inferences ever made, including in science, are purely[137] deductive according to this view. Accordingly, he claimed that the empirical character of science has nothing to do with induction but with the deductive property of falsifiability that scientific hypotheses have. Contrasting his views with inductivism and positivism, he even denied the existence of the scientific method: "(1) There is no method of discovering a scientific theory; (2) There is no method for ascertaining the truth of a scientific hypothesis, i.e., no method of verification; (3) There is no method for ascertaining whether a hypothesis is 'probable', or probably true".[138] Instead, he held that there is only one universal method, a method not particular to science: the negative method of criticism, colloquially termed trial and error. It covers not only all products of the human mind, including science, mathematics, philosophy, art and so on, but also the evolution of life. Following Peirce and others, Popper argued that science is fallible and has no authority.[138] In contrast to empiricist-inductivist views, he welcomed metaphysics and philosophical discussion and even gave qualified support to myths[139] and pseudosciences.[140] Popper's view has become known as critical rationalism.

Although science in a broad sense existed before the modern era, and in many historical civilizations (as described above), modern science is so distinct in its approach and successful in its results that it now defines what science is in the strictest sense of the term.[141]

Relationship with mathematics

Science is the process of gathering, comparing, and evaluating proposed models against observables.
A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines can clearly distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proven; at such a stage, that statement would be called a conjecture. But when a statement has attained mathematical proof, that statement gains a kind of immortality which is highly prized by mathematicians, and to which some mathematicians devote their lives.[142]
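To make this concrete, the following minimal Python sketch (the measurements and both model functions are invented for illustration, not taken from any cited study) evaluates two proposed models against observables and provisionally rejects the worse fit:

# Hypothetical observations of a falling object: (time in s, distance in m).
observations = [(0.5, 1.3), (1.0, 4.8), (1.5, 11.1), (2.0, 19.5)]

def model_linear(t):
    # Rival conjecture: distance grows linearly with time.
    return 9.8 * t

def model_quadratic(t):
    # Galilean model: d = (1/2) * g * t^2.
    return 0.5 * 9.8 * t ** 2

def sum_squared_error(model):
    # Compare the model's predictions against the observables.
    return sum((model(t) - d) ** 2 for t, d in observations)

for model in (model_linear, model_quadratic):
    print(model.__name__, round(sum_squared_error(model), 2))

The quadratic model fits these data far better, so the linear conjecture is provisionally rejected; as with any falsifiable model, the loser is disproved by the evidence, while the survivor is retained rather than proved true.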

Mathematical work and scientific work can inspire each other.[143] For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proven using time as a mathematical concept in which objects can flow (see Ricci flow).
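For context, the Ricci flow referred to here (the equation is standard, though not given in the original text) deforms a Riemannian metric g over an auxiliary time parameter t:

    ∂g/∂t = −2 Ric(g)

where Ric(g) is the Ricci curvature of g; running this flow, with surgery, was the central tool in Perelman's proof of the Poincaré conjecture.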

Nevertheless, the connection between mathematics and reality (and so science, to the extent it describes reality) remains obscure. Eugene Wigner's paper The Unreasonable Effectiveness of Mathematics in the Natural Sciences is a well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.

George Pólya's work on problem solving,[144] the construction of mathematical proofs, and heuristic[145][146] shows that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.

Mathematical method – Scientific method
1. Understanding – Characterization from experience and observation
2. Analysis – Hypothesis: a proposed explanation
3. Synthesis – Deduction: prediction from the hypothesis
4. Review/Extend – Test and experiment

In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus,[147] involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details[148] of the proof; review involves reconsidering and re-examining the result and the path taken to it.

Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).[149]

Imre Lakatos argued that mathematicians actually use contradiction, criticism and revision as principles for improving their work.[150] In Proofs and Refutations (1976) he tried to establish that, much as in science, where truth is sought but certainty is not found, no theorem of informal mathematics is final or perfect. We should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting or not explained by the theorem, is found, we adjust the theorem, possibly changing the domain of its validity. In this continuous way, through the logic and process of proofs and refutations, our knowledge accumulates. (If axioms are given for a branch of mathematics, however, Lakatos claimed that proofs from those axioms were tautological, i.e. logically true by rewriting, as Poincaré had also held.)
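As a concrete illustration of this proofs-and-refutations loop, here is a minimal Python sketch; the conjecture used (Euler's prime-generating polynomial) is chosen for illustration and is not one of Lakatos's own examples, which concern polyhedra:

# Conjecture: n*n + n + 41 is prime for every n >= 0.
def is_prime(n):
    if n < 2:
        return False
    return all(n % k for k in range(2, int(n ** 0.5) + 1))

# Hold the conjecture only until a counterexample turns up.
counterexample = next(n for n in range(10_000) if not is_prime(n * n + n + 41))
print(counterexample)  # 40, since 40*40 + 40 + 41 = 41*41 is not prime

# Adjusted, weaker theorem: the claim holds on the domain 0 <= n < 40.
assert all(is_prime(n * n + n + 41) for n in range(40))

No finite search proves the original conjecture; a single counterexample refutes it, and the theorem survives only with its domain of validity adjusted.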

Lakatos proposed an account of mathematical knowledge based on Polya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.[151]

Blog


From Wikipedia, the free encyclopedia

A blog (a truncation of the expression weblog)[1] is a discussion or informational site published on the World Wide Web and consisting of discrete entries ("posts") typically displayed in reverse chronological order (the most recent post appears first). Until 2009 blogs were usually the work of a single individual[citation needed], occasionally of a small group, and often covered a single subject.
More recently "multi-author blogs" (MABs) have developed, with posts written by large numbers of authors and professionally edited. MABs from newspapers, other media outlets, universities, think tanks, advocacy groups and similar institutions account for an increasing quantity of blog traffic. The rise of Twitter and other "microblogging" systems helps integrate MABs and single-author blogs into societal newstreams. Blog can also be used as a verb, meaning to maintain or add content to a blog.

The emergence and growth of blogs in the late 1990s coincided with the advent of web publishing tools that facilitated the posting of content by non-technical users. (Previously, a knowledge of such technologies as HTML and FTP had been required to publish content on the Web.)

A majority are interactive, allowing visitors to leave comments and even message each other via GUI widgets on the blogs, and it is this interactivity that distinguishes them from other static websites.[2] In that sense, blogging can be seen as a form of social networking service. Indeed, bloggers do not only produce content to post on their blogs, but also build social relations with their readers and other bloggers.[3] There are high-readership blogs which do not allow comments, such as Daring Fireball.

Many blogs provide commentary on a particular subject; others function as more personal online diaries; others function more as online brand advertising of a particular individual or company. A typical blog combines text, images, and links to other blogs, Web pages, and other media related to its topic. The ability of readers to leave comments in an interactive format is an important contribution to the popularity of many blogs. Most blogs are primarily textual, although some focus on art (art blogs), photographs (photoblogs), videos (video blogs or "vlogs"), music (MP3 blogs), and audio (podcasts). Microblogging is another type of blogging, featuring very short posts. In education, blogs can be used as instructional resources. These blogs are referred to as edublogs.

On 16 February 2011, there were over 156 million public blogs in existence. On 20 February 2014, there were around 172 million Tumblr[4] and 75.8 million WordPress[5] blogs in existence worldwide. According to critics and other bloggers, Blogger is the most popular blogging service used today; however, Blogger does not offer public statistics.[6][7] Technorati listed 1.3 million blogs as of 22 February 2014.[8]

History


Early example of a "diary" style blog consisting of text and images transmitted wirelessly in real time from a wearable computer with head-up display, 22 February 1995

The term "weblog" was coined by Jorn Barger[9] on 17 December 1997. The short form, "blog", was coined by Peter Merholz, who jokingly broke the word weblog into the phrase we blog in the sidebar of his blog Peterme.com in April or May 1999.[10][11][12] Shortly thereafter, Evan Williams at Pyra Labs used "blog" as both a noun and verb ("to blog", meaning "to edit one's weblog or to post to one's weblog") and devised the term "blogger" in connection with Pyra Labs' Blogger product, leading to the popularization of the terms.[13]

Origins

Before blogging became popular, digital communities took many forms, including Usenet, commercial online services such as GEnie, BiX and the early CompuServe, e-mail lists[14] and Bulletin Board Systems (BBS). In the 1990s, Internet forum software created running conversations with "threads". Threads are topical connections between messages on a virtual "corkboard".
From 14 June 1993, the NCSA's Mosaic team maintained their "What's New"[15] list of new websites, updated daily and archived monthly. The page was accessible by a special "What's New" button in the Mosaic web browser.

The modern blog evolved from the online diary, where people would keep a running account of their personal lives. Most such writers called themselves diarists, journalists, or journalers. Justin Hall, who began personal blogging in 1994 while a student at Swarthmore College, is generally recognized as one of the earlier bloggers,[16] as is Jerry Pournelle.[17] Dave Winer's Scripting News is also credited with being one of the older and longer running weblogs.[18][19] The Australian Netguide magazine maintained the Daily Net News[20] on their web site from 1996. Daily Net News ran links and daily reviews of new websites, mostly in Australia. Another early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site in 1994. This practice of semi-automated blogging with live video together with text was referred to as sousveillance, and such journals were also used as evidence in legal matters.

Early blogs were simply manually updated components of common Web sites. However, the evolution of tools to facilitate the production and maintenance of Web articles posted in reverse chronological order made the publishing process feasible to a much larger, less technical, population. Ultimately, this resulted in the distinct class of online publishing that produces blogs we recognize today. For instance, the use of some sort of browser-based software is now a typical aspect of "blogging". Blogs can be hosted by dedicated blog hosting services, or they can be run using blog software, or on regular web hosting services.

Some early bloggers, such as The Misanthropic Bitch, who began in 1997, actually referred to their online presence as a zine, before the term blog entered common usage.

Rise in popularity

After a slow start, blogging rapidly gained in popularity. Blog usage spread during 1999 and the years following, being further popularized by the near-simultaneous arrival of the first hosted blog tools.

Political impact

On 6 December 2002, Josh Marshall's talkingpointsmemo.com blog called attention to U.S. Senator Lott's comments regarding Senator Thurmond. Senator Lott was eventually to resign his Senate leadership position over the matter.

An early milestone in the rise in importance of blogs came in 2002, when many bloggers focused on comments by U.S. Senate Majority Leader Trent Lott.[22] Senator Lott, at a party honoring U.S. Senator Strom Thurmond, praised Senator Thurmond by suggesting that the United States would have been better off had Thurmond been elected president. Lott's critics saw these comments as a tacit approval of racial segregation, a policy advocated by Thurmond's 1948 presidential campaign. This view was reinforced by documents and recorded interviews dug up by bloggers. (See Josh Marshall's Talking Points Memo.) Though Lott's comments were made at a public event attended by the media, no major media organizations reported on his controversial comments until after blogs broke the story. Blogging helped to create a political crisis that forced Lott to step down as majority leader.

Similarly, blogs were among the driving forces behind the "Rathergate" scandal. To wit: television journalist Dan Rather presented documents (on the CBS show 60 Minutes) that conflicted with accepted accounts of President Bush's military service record. Bloggers declared the documents to be forgeries and presented evidence and arguments in support of that view. Consequently, CBS apologized for what it said were inadequate reporting techniques (see Little Green Footballs). Many bloggers view this scandal as the advent of blogs' acceptance by the mass media, both as a source of news and opinion and as a means of applying political pressure.[original research?]

The impact of these stories gave greater credibility to blogs as a medium of news dissemination. Though often seen as partisan gossips,[citation needed] bloggers sometimes lead the way in bringing key information to public light, with mainstream media having to follow their lead. More often, however, news blogs tend to react to material already published by the mainstream media. Meanwhile, an increasing number of experts blogged, making blogs a source of in-depth analysis.[original research?]

In Russia, some political bloggers have started to challenge the dominance of official, overwhelmingly pro-government media. Bloggers such as Rustem Adagamov and Alexei Navalny have many followers, and the latter's nickname for the ruling United Russia party, the "party of crooks and thieves", has been adopted by anti-regime protesters.[23] This led to the Wall Street Journal calling Navalny "the man Vladimir Putin fears most" in March 2012.[24]

Mainstream popularity

By 2004, the role of blogs became increasingly mainstream, as political consultants, news services, and candidates began using them as tools for outreach and opinion forming. Blogging was established by politicians and political candidates to express opinions on war and other issues and cemented blogs' role as a news source. (See Howard Dean and Wesley Clark.) Even politicians not actively campaigning, such as the UK's Labour Party's MP Tom Watson, began to blog to bond with constituents.

In January 2005, Fortune magazine listed eight bloggers whom business people "could not ignore": Peter Rojas, Xeni Jardin, Ben Trott, Mena Trott, Jonathan Schwartz, Jason Goldman, Robert Scoble, and Jason Calacanis.[25]

Israel was among the first national governments to set up an official blog.[26] Under David Saranga, the Israeli Ministry of Foreign Affairs became active in adopting Web 2.0 initiatives, including an official video blog[26] and a political blog.[27] The Foreign Ministry also held a microblogging press conference via Twitter about its war with Hamas, with Saranga answering questions from the public in common text-messaging abbreviations during a live worldwide press conference.[28] The questions and answers were later posted on IsraelPolitik, the country's official political blog.[29]

The impact of blogging upon the mainstream media has also been acknowledged by governments. In 2009, the presence of the American journalism industry had declined to the point that several newspaper corporations were filing for bankruptcy, resulting in less direct competition between newspapers within the same circulation area. Discussion emerged as to whether the newspaper industry would benefit from a stimulus package by the federal government. U.S. President Barack Obama acknowledged the emerging influence of blogging upon society by saying "if the direction of the news is all blogosphere, all opinions, with no serious fact-checking, no serious attempts to put stories in context, then what you will end up getting is people shouting at each other across the void but not a lot of mutual understanding”.[30]

Between 2009 and 2012 an Orwell Prize for blogging was awarded.

Types

There are many different types of blogs, differing not only in the type of content, but also in the way that content is delivered or written.
Personal blogs
The personal blog is an ongoing diary or commentary written by an individual.
Microblogging
Microblogging is the practice of posting small pieces of digital content (text, pictures, links, short videos, or other media) on the Internet. Microblogging offers a portable communication mode that feels organic and spontaneous to many and has captured the public imagination. Friends use it to keep in touch, business associates use it to coordinate meetings or share useful resources, and celebrities and politicians (or their publicists) microblog about concert dates, lectures, book releases, or tour schedules. A wide and growing range of add-on tools enables sophisticated updates and interaction with other applications, and the resulting profusion of functionality is helping to define new possibilities for this type of communication.[31] Examples include Twitter, Facebook, Tumblr and, by far the largest, Weibo.
Corporate and organizational blogs
A blog can be private, as in most cases, or it can be for business purposes. Blogs used internally to enhance the communication and culture in a corporation or externally for marketing, branding or public relations purposes are called corporate blogs. Similar blogs for clubs and societies are called club blogs, group blogs, or by similar names; typical use is to inform members and other interested parties of club and member activities.
By genre
Some blogs focus on a particular subject, such as political blogs, health blogs, travel blogs (also known as travelogs), gardening blogs, house blogs,[32][33] fashion blogs, project blogs, education blogs, niche blogs, classical music blogs, quizzing blogs, legal blogs (often referred to as blawgs) and dreamlogs. How-to/tutorial blogs are becoming increasingly popular.[34] Two common types of genre blogs are art blogs and music blogs. A blog featuring discussions especially about home and family is often called a mom blog; one popular example is Womenonthefence.com, created by Erica Diamond, which is syndicated to over two million readers monthly.[35][36][37][38][39][40] While not a legitimate type of blog, one used for the sole purpose of spamming is known as a splog.
By media type
A blog comprising videos is called a vlog, one comprising links is called a linklog, a site containing a portfolio of sketches is called a sketchblog, and one comprising photos is called a photoblog. Blogs with shorter posts and mixed media types are called tumblelogs. Blogs that are written on typewriters and then scanned are called typecast or typecast blogs; see typecasting (blogging).
A rare type of blog hosted on the Gopher Protocol is known as a Phlog.
By device
Blogs can also be defined by which type of device is used to compose it. A blog written by a mobile device like a mobile phone or PDA could be called a moblog.[41] One early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site. This practice of semi-automated blogging with live video together with text was referred to as sousveillance. Such journals have been used as evidence in legal matters.[citation needed]
Reverse blog
A Reverse Blog is composed by its users rather than a single blogger. This system has the characteristics of a blog, and the writing of several authors. These can be written by several contributing authors on a topic, or opened up for anyone to write. There is typically some limit to the number of entries to keep it from operating like a Web Forum.[citation needed]

Community and cataloging

The Blogosphere
The collective community of all blogs is known as the blogosphere. Since all blogs are on the internet by definition, they may be seen as interconnected and socially networked, through blogrolls, comments, linkbacks (refbacks, trackbacks or pingbacks) and backlinks. Discussions "in the blogosphere" are occasionally used by the media as a gauge of public opinion on various issues. Because new, untapped communities of bloggers and their readers can emerge in the space of a few years, Internet marketers pay close attention to "trends in the blogosphere".[42]
Blog search engines
Several blog search engines are used to search blog contents, such as Bloglines, BlogScope, and Technorati. Technorati, which is among the more popular blog search engines, provides current information on both popular searches and tags used to categorize blog postings.[43] The research community is working on going beyond simple keyword search, by inventing new ways to navigate through huge amounts of information present in the blogosphere, as demonstrated by projects like BlogScope, which was shut down in 2012.[citation needed]
Blogging communities and directories
Several online communities exist that connect people to blogs and bloggers to other bloggers, including BlogCatalog and MyBlogLog.[44] Interest-specific blogging platforms are also available. For instance, Blogster has a sizable community of political bloggers among its members. Global Voices aggregates international bloggers, "with emphasis on voices that are not ordinarily heard in international mainstream media."[45]
Blogging and advertising
It is common for blogs to feature advertisements either to financially benefit the blogger or to promote the blogger's favorite causes. The popularity of blogs has also given rise to "fake blogs" in which a company will create a fictional blog as a marketing tool to promote a product.[46]

Popularity

Researchers have actively analyzed the dynamics of how blogs become popular. There are essentially two measures of this: popularity through citations, as well as popularity through affiliation (i.e., blogroll). The basic conclusion from studies of the structure of blogs is that while it takes time for a blog to become popular through blogrolls, permalinks can boost popularity more quickly, and are perhaps more indicative of popularity and authority than blogrolls, since they denote that people are actually reading the blog's content and deem it valuable or noteworthy in specific cases.[47]
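A minimal Python sketch (the link data is invented) of the citation measure described above: counting inbound permalinks per blog as a rough signal of authority:

from collections import Counter

# Who links to whom, as (source, target) permalink pairs.
links = [
    ("blogA", "blogC"), ("blogB", "blogC"), ("blogC", "blogA"),
    ("blogD", "blogC"), ("blogD", "blogA"),
]

# Inbound permalink counts, most-cited blog first.
inbound = Counter(target for _, target in links)
for blog, count in inbound.most_common():
    print(blog, count)  # blogC 3, blogA 2

A blogroll-based measure would instead count static affiliation links; the studies cited above suggest that per-post permalinks track actual readership more closely.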

The blogdex project was launched by researchers in the MIT Media Lab to crawl the Web and gather data from thousands of blogs in order to investigate their social properties. Information was gathered by the tool for over four years, during which it autonomously tracked the most contagious information spreading in the blog community, ranking it by recency and popularity. It can therefore[original research?] be considered the first instantiation of a memetracker. The project was replaced by tailrank.com which in turn has been replaced by spinn3r.com.

Blogs are given rankings by blog search engine Technorati based on the number of incoming links and Alexa Internet (Web hits of Alexa Toolbar users). In August 2006, Technorati found that the most linked-to blog on the internet was that of Chinese actress Xu Jinglei.[48] Chinese media Xinhua reported that this blog received more than 50 million page views, claiming it to be the most popular blog in the world.[49] Technorati rated Boing Boing to be the most-read group-written blog.[48]

Blurring with the mass media

Many bloggers, particularly those engaged in participatory journalism, differentiate themselves from the mainstream media, while others are members of that media working through a different channel.
Some institutions see blogging as a means of "getting around the filter" and pushing messages directly to the public. Some critics[who?] worry that bloggers respect neither copyright nor the role of the mass media in presenting society with credible news. Bloggers and other contributors to user-generated content were behind Time magazine's decision to name "You" its 2006 Person of the Year.

Many mainstream journalists, meanwhile, write their own blogs—well over 300, according to CyberJournalist.net's J-blog list.[citation needed] The first known use of a blog on a news site was in August 1998, when Jonathan Dube of The Charlotte Observer published one chronicling Hurricane Bonnie.[50]

Some bloggers have moved over to other media. The following bloggers (and others) have appeared on radio and television: Duncan Black (known widely by his pseudonym, Atrios), Glenn Reynolds (Instapundit), Markos Moulitsas Zúniga (Daily Kos), Alex Steffen (Worldchanging), Ana Marie Cox (Wonkette), Nate Silver (FiveThirtyEight.com), and Ezra Klein (Ezra Klein blog in The American Prospect, now in the Washington Post). In counterpoint, Hugh Hewitt exemplifies a mass media personality who has moved in the other direction, adding to his reach in "old media" by being an influential blogger. Similarly, it was Emergency Preparedness and Safety Tips On Air and Online blog articles that captured Surgeon General of the United States Richard Carmona's attention and earned his kudos for the associated broadcasts by talk show host Lisa Tolliver and Westchester Emergency Volunteer Reserves-Medical Reserve Corps Director Marianne Partridge.[51][52][53][54]

Blogs have also had an influence on minority languages, bringing together scattered speakers and learners; this is particularly so with blogs in Gaelic languages. Minority language publishing (which may lack economic feasibility) can find its audience through inexpensive blogging.

There are many examples of bloggers who have published books based on their blogs, e.g., Salam Pax, Ellen Simonetti, Jessica Cutler, ScrappleFace. Blog-based books have been given the name blook. A prize for the best blog-based book was initiated in 2005,[55] the Lulu Blooker Prize.[56] However, success has been elusive offline, with many of these books not selling as well as their blogs. Only blogger Tucker Max made The New York Times Best Seller list.[57] The book based on Julie Powell's blog "The Julie/Julia Project" was made into the film Julie & Julia, reportedly the first film based on a blog.

Consumer-generated advertising in blogs

Consumer-generated advertising is a relatively new and controversial development, and it has created a new model of marketing communication from businesses to consumers. Among the various forms of advertising on blogs, the most controversial are sponsored posts.[58] These are blog entries or posts that may take the form of feedback, reviews, opinion, videos, etc., and usually contain a link back to the desired site using one or more keywords.

Blogs have led to some disintermediation and a breakdown of the traditional advertising model, in which advertising agencies were previously the only interface with the customer; companies can now skip the agencies and contact customers directly. On the other hand, new companies specialised in blog advertising have been established to take advantage of this development.

However, there are many people who look negatively on this new development. Some believe that any form of commercial activity on blogs will destroy the blogosphere’s credibility.[59]

Legal and social consequences

Blogging can result in a range of legal liabilities and other unforeseen consequences.[60]

Defamation or liability

Several cases have been brought before the national courts against bloggers concerning issues of defamation or liability. U.S. payouts related to blogging totaled $17.4 million by 2009; in some cases these have been covered by umbrella insurance.[61] The courts have returned with mixed verdicts. Internet Service Providers (ISPs), in general, are immune from liability for information that originates with third parties (U.S. Communications Decency Act and the EU Directive 2000/31/EC).

In Doe v. Cahill, the Delaware Supreme Court held that stringent standards had to be met to unmask the anonymous bloggers, and also took the unusual step of dismissing the libel case itself (as unfounded under American libel law) rather than referring it back to the trial court for reconsideration.[62] In a bizarre twist, the Cahills were able to obtain the identity of John Doe, who turned out to be the person they suspected: the town's mayor, Councilman Cahill's political rival. The Cahills amended their original complaint, and the mayor settled the case rather than going to trial.

In January 2007, two prominent Malaysian political bloggers, Jeff Ooi and Ahirudin Attan, were sued by a pro-government newspaper, The New Straits Times Press (Malaysia) Berhad, Kalimullah bin Masheerul Hassan, Hishamuddin bin Aun and Brenden John a/l John Pereira over an alleged defamation. The plaintiff was supported by the Malaysian government.[63] Following the suit, the Malaysian government proposed to "register" all bloggers in Malaysia in order to better control parties against their interest.[64] This is the first such legal case against bloggers in the country.

In the United States, blogger Aaron Wall was sued by Traffic Power for defamation and publication of trade secrets in 2005.[65] According to Wired Magazine, Traffic Power had been "banned from Google for allegedly rigging search engine results."[66] Wall and other "white hat" search engine optimization consultants had exposed Traffic Power in what they claim was an effort to protect the public. The case addressed the murky legal question of who is liable for comments posted on blogs.[67] The case was dismissed for lack of personal jurisdiction, and Traffic Power failed to appeal within the allowed time.[68]

In 2009, a controversial and landmark decision by The Hon. Mr Justice Eady refused to grant an order to protect the anonymity of Richard Horton. Horton was a police officer in the United Kingdom who blogged about his job under the name "NightJack".[69]

In 2009, NDTV issued a legal notice to Indian blogger Kunte for a blog post criticizing their coverage of the Mumbai attacks.[70] The blogger unconditionally withdrew his post, which resulted in several Indian bloggers criticizing NDTV for trying to silence critics.[71]

Employment

Employees who blog about elements of their place of employment can begin to affect the brand recognition of their employer. In general, attempts by employee bloggers to protect themselves by maintaining anonymity have proved ineffective.[72]

Delta Air Lines fired flight attendant Ellen Simonetti because she posted photographs of herself in uniform on an airplane and because of comments posted on her blog "Queen of Sky: Diary of a Flight Attendant" which the employer deemed inappropriate.[73][74] This case highlighted the issue of personal blogging and freedom of expression versus employer rights and responsibilities, and so it received wide media attention. Simonetti took legal action against the airline for "wrongful termination, defamation of character and lost future wages".[75] The suit was postponed while Delta was in bankruptcy proceedings (court docket).[76]

In early 2006, Erik Ringmar, a tenured senior lecturer at the London School of Economics, was ordered by the convenor of his department to "take down and destroy" his blog in which he discussed the quality of education at the school.[77]

Mark Cuban, owner of the Dallas Mavericks, was fined during the 2006 NBA playoffs for criticizing NBA officials on the court and in his blog.[78]

Mark Jen was terminated in 2005 after 10 days of employment as an Assistant Product Manager at Google for discussing corporate secrets on his personal blog, then called 99zeros and hosted on the Google-owned Blogger service.[79] He blogged about unreleased products and company finances a week before the company's earnings announcement. He was fired two days after he complied with his employer's request to remove the sensitive material from his blog.[80]

In India, blogger Gaurav Sabnis resigned from IBM after his posts questioned the claims of a management school IIPM.[81]

Jessica Cutler, aka "The Washingtonienne",[82] blogged about her sex life while employed as a congressional assistant. After the blog was discovered and she was fired,[83] she wrote a novel based on her experiences and blog: The Washingtonienne: A Novel. Cutler is presently being sued by one of her former lovers in a case that could establish the extent to which bloggers are obligated to protect the privacy of their real life associates.[84]

Catherine Sanderson, a.k.a. Petite Anglaise, lost her job in Paris at a British accountancy firm because of blogging.[85] Although the blog was written fairly anonymously, some of its descriptions of the firm and some of its people were less than flattering. Sanderson later won a compensation claim case against the British firm, however.[86]

On the other hand, Penelope Trunk wrote an upbeat article in the Boston Globe back in 2006, entitled "Blogs 'essential' to a good career".[87] She was one of the first journalists to point out that a large portion of bloggers are professionals and that a well-written blog can help attract employers.

Political dangers

Blogging can sometimes have unforeseen consequences in politically sensitive areas. Blogs are much harder to control than broadcast or even print media. As a result, totalitarian and authoritarian regimes often seek to suppress blogs and/or to punish those who maintain them.

In Singapore, two ethnic Chinese were imprisoned under the country’s anti-sedition law for posting anti-Muslim remarks in their blogs.[88]

Egyptian blogger Kareem Amer was charged with insulting the Egyptian president Hosni Mubarak and an Islamic institution through his blog. It is the first time in the history of Egypt that a blogger was prosecuted. After a brief trial session that took place in Alexandria, the blogger was found guilty and sentenced to prison terms of three years for insulting Islam and inciting sedition, and one year for insulting Mubarak.[89]

Egyptian blogger Abdel Monem Mahmoud was arrested in April 2007 for anti-government writings in his blog.[90] Monem is a member of the then banned Muslim Brotherhood.

After the 2011 Egyptian revolution, the Egyptian blogger Maikel Nabil Sanad was charged with insulting the military for an article he wrote on his personal blog and sentenced to 3 years.[91]

After expressing opinions in his personal blog about the state of the Sudanese armed forces, Jan Pronk, United Nations Special Representative for the Sudan, was given three days' notice to leave Sudan. The Sudanese army had demanded his deportation.[92][93]

In Myanmar, Nay Phone Latt, a blogger, was sentenced to 20 years in jail for posting a cartoon critical of head of state Than Shwe.[94]

Personal safety

One consequence of blogging is the possibility of attacks or threats against the blogger, sometimes without apparent reason. Kathy Sierra, author of the innocuous blog "Creating Passionate Users",[95] was the target of such vicious threats and misogynistic insults that she canceled her keynote speech at a technology conference in San Diego, fearing for her safety.[96] While a blogger's anonymity is often tenuous, Internet trolls who would attack a blogger with threats or insults can be emboldened by anonymity. Sierra and supporters initiated an online discussion aimed at countering abusive online behavior[97] and developed a blogger's code of conduct.

Behavior

The Blogger's Code of Conduct is a proposal by Tim O'Reilly for bloggers to enforce civility on their blogs by being civil themselves and moderating comments on their blog. The code was proposed in 2007 due to threats made to blogger Kathy Sierra.[98] The idea of the code was first reported by BBC News, who quoted O'Reilly saying, "I do think we need some code of conduct around what is acceptable behaviour, I would hope that it doesn't come through any kind of regulation it would come through self-regulation."[99]

O'Reilly and others came up with a list of seven proposed ideas:[100][101][102][103]
  1. Take responsibility not just for your own words, but for the comments you allow on your blog.
  2. Label your tolerance level for abusive comments.
  3. Consider eliminating anonymous comments.
  4. Ignore the trolls.
  5. Take the conversation offline, and talk directly, or find an intermediary who can do so.
  6. If you know someone who is behaving badly, tell them so.
  7. Don't say anything online that you wouldn't say in person.
These ideas were, predictably, intensely discussed on the Web and in the media. While the internet has continued to grow, with online activity and discourse picking up in both positive and negative ways in terms of blog interaction, the proposed code has drawn wider attention to the need to monitor blogging activity, and to the fact that social norms are as important online as offline.

Scientists Claim That The Universe is a Giant Brain

Written by Steven Bancarz

We often speak of the universe being a reflection of ourselves, and point to how the eye, veins, and brain cells mirror visual phenomena in the natural universe. As above, so below, right? Well, check this out. How about the idea that the universe is a giant brain? The idea of the universe as a 'giant brain' has been proposed by scientists and science fiction writers for decades, but now physicists say there may be some evidence that it's actually true (in a sense).

According to a study published in Nature's Scientific Reports, the universe may be growing in the same way as a giant brain – with the electrical firing between brain cells 'mirrored' by the shape of expanding galaxies. The results of a computer simulation suggest that "natural growth dynamics" – the way that systems evolve – are the same for different kinds of networks, whether it's the internet, the human brain or the universe as a whole.

When the team compared the universe’s history with growth of social networks and brain circuits, they found all the networks expanded in similar ways: They balanced links between similar nodes with ones that already had many connections. For instance, a cat lover surfing the Internet may visit mega-sites such as Google or Yahoo, but will also browse cat fancier websites or YouTube kitten videos. In the same way, neighboring brain cells like to connect, but neurons also link to such “Google brain cells” that are hooked up to loads of other brain cells.


"The new study suggests a single fundamental law of nature may govern these networks," said physicist Kevin Bassler of the University of Houston. "For a physicist it's an immediate signal that there is some missing understanding of how nature works," said Dmitri Krioukov of the University of California, San Diego.

As summarized by the original study: “Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks.”
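For readers who want to experiment, here is a minimal Python sketch of one standard growth dynamic of this kind, preferential attachment, using the third-party networkx library. This illustrates power-law network growth in general and is not the paper's causal-set model; note also that the basic Barabási–Albert model reproduces the heavy-tailed degree distribution but not, by itself, the strong clustering the study reports:

import networkx as nx

# Grow a network in which new nodes prefer to attach to well-connected nodes.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("largest degrees (hubs):", degrees[:5])
print("median degree:", degrees[len(degrees) // 2])
print("average clustering:", round(nx.average_clustering(G), 4))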


So this may seem weird, but let’s think about this.  Scientists always talk about consciousness being the underlying fabric of the universe from which all things emerge (M-theory, Unified Field Theory, etc. see work of Dr. Amit Goswami and Dr. John Hagelin). So not only is the fabric of the universe conscious like a brain, it is growing like a brain as well. But here’s a question…a brain to what? Is it possible we exist as a thought within the mind of some Super Intelligence? Are we just brain cells operating within a Cosmic Mind? Maybe, maybe not, but it’s fascinating to think about. 

About the author: My name is Steven Bancarz, and I am the creator of “Spirit Science and Metaphysics”.  Thank you for reading this article! Within the next month, I plan to have my first YouTube video out called “How To Meditate”, and I am also currently building an online conscious forum to bring truth-seekers together to connect and share advice with one another.  If you are interested in staying connected, feel free to subscribe to my newsletter HERE.

Sources:
NBC: http://www.nbcnews.com/id/49971212/ns/technology_and_science-science/#.UTUg3zCLbzk
Original study: http://www.nature.com/srep/2012/121113/srep00793/full/srep00793.html
Huffington Post UK: http://www.huffingtonpost.co.uk/2012/11/27/physicists-universe-giant-brain_n_2196346.html
Space.com: http://www.space.com/18630-universe-grows-like-brain.html
