
Monday, February 16, 2015

Cognitive bias



From Wikipedia, the free encyclopedia

A cognitive bias is a pattern of deviation in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion.[1] Individuals create their own "subjective social reality" from their perception of the input.[2] An individual's construction of social reality, not the objective input, may dictate their behaviour in the social world.[3] Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.[4][5][6]

Some cognitive biases are presumably adaptive. Cognitive biases may lead to more effective actions in a given context.[7] Furthermore, cognitive biases enable faster decisions when timeliness is more valuable than accuracy, as heuristics illustrate.[8] Other cognitive biases are a "by-product" of human processing limitations,[9] resulting from a lack of appropriate mental mechanisms (bounded rationality), or simply from a limited capacity for information processing.[10]

A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics. Cognitive biases are important to study because "systematic errors" highlight the "psychological processes that underlie perception and judgement" (Tversky & Kahneman, 1999, p. 582). Moreover, Kahneman and Tversky (1996) argue that cognitive biases have important practical implications for areas including clinical judgment.[11]

Overview


Bias arises from various processes that are sometimes difficult to distinguish. These include:
  • mental noise
  • the mind's limited information processing capacity[13]
  • emotional and moral motivations[14]
  • social influence[15]
The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972[16] and grew out of their experience of people's innumeracy, or inability to reason intuitively with greater orders of magnitude. Tversky, Kahneman, and colleagues demonstrated several replicable ways in which human judgments and decisions differ from rational choice theory. Tversky and Kahneman explained human differences in judgement and decision making in terms of heuristics.
Heuristics involve mental shortcuts which provide swift estimates about the possibility of uncertain occurrences (Baumeister & Bushman, 2010, p. 141). Heuristics are simple for the brain to compute but sometimes introduce "severe and systematic errors" (Tversky & Kahneman, 1974, p. 1125).[17]

For example, the representativeness heuristic is defined as the tendency to "judge the frequency or likelihood" of an occurrence by the extent to which the event "resembles the typical case" (Baumeister & Bushman, 2010, p. 141). The "Linda Problem" illustrates the representativeness heuristic (Tversky & Kahneman, 1983).[18] Participants were given a description of "Linda" that suggests Linda might well be a feminist (e.g., she is said to be concerned about discrimination and social justice issues). They were then asked whether they thought Linda was more likely to be a "(a) bank teller" or a "(b) bank teller and active in the feminist movement". A majority chose answer (b). This error (mathematically, answer (b) cannot be more likely than answer (a)) is an example of the "conjunction fallacy"; Tversky and Kahneman argued that respondents chose (b) because it seemed more "representative" or typical of persons who might fit the description of Linda. The representativeness heuristic may lead to errors such as activating stereotypes and inaccurate judgements of others (Haselton et al., 2005, p. 726).
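The mathematical point behind the conjunction fallacy is simply that a conjunction can never be more probable than either of its conjuncts: P(A and B) ≤ P(A). A toy enumeration in Python (the population counts below are purely hypothetical, chosen only for illustration) makes the inequality concrete:

```python
# Toy illustration of the conjunction rule behind the "Linda problem".
# The population counts are hypothetical, chosen only for illustration.
population = (
    [("bank teller", "feminist")] * 5 +       # feminist bank tellers
    [("bank teller", "not feminist")] * 10 +  # other bank tellers
    [("other job", "feminist")] * 40 +
    [("other job", "not feminist")] * 45
)

n = len(population)
p_teller = sum(1 for job, _ in population if job == "bank teller") / n
p_both = sum(1 for job, view in population
             if job == "bank teller" and view == "feminist") / n

print(f"P(bank teller)              = {p_teller:.2f}")  # 0.15
print(f"P(bank teller and feminist) = {p_both:.2f}")    # 0.05
assert p_both <= p_teller  # every feminist teller is also counted as a teller
```

Whatever counts are chosen, the feminist bank tellers are a subset of the bank tellers, so answer (b) can never be the more probable option.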

Alternatively, critics of Kahneman and Tversky, such as Gerd Gigerenzer, argue that heuristics should not lead us to conceive of human thinking as riddled with irrational cognitive biases, but rather to conceive of rationality as an adaptive tool that is not identical to the rules of formal logic or the probability calculus.[19] Nevertheless, experiments such as the "Linda problem" grew into the heuristics and biases research program, which spread beyond academic psychology into other disciplines including medicine and political science.

Types

Biases can be distinguished on a number of dimensions. For example, there are biases specific to groups (such as the risky shift) as well as biases at the individual level.

Some biases affect decision-making, where the desirability of options has to be considered (e.g., the sunk costs fallacy). Others, such as illusory correlation, affect judgment of how likely something is, or of whether one thing is the cause of another. A distinctive class of biases affects memory,[20] such as consistency bias (remembering one's past attitudes and behavior as more similar to one's present attitudes).

Some biases reflect a subject's motivation,[21] for example, the desire for a positive self-image leading to egocentric bias[22] and the avoidance of unpleasant cognitive dissonance. Other biases are due to the particular way the brain perceives, forms memories, and makes judgments. This distinction is sometimes described as "hot cognition" versus "cold cognition", as motivated reasoning can involve a state of arousal.

Among the "cold" biases,
  • some involve a decision or judgement being affected by irrelevant information (for example the framing effect where the same problem receives different responses depending on how it is described; or the distinction bias where choices presented together have different outcomes than those presented separately)
  • others give excessive weight to an unimportant but salient feature of the problem (e.g., anchoring)
The fact that some biases reflect motivation, in particular the motivation to have positive attitudes toward oneself,[22] accounts for the fact that many biases are self-serving or self-directed (e.g., illusion of asymmetric insight, self-serving bias, projection bias). There are also biases in how subjects evaluate in-groups or out-groups: evaluating in-groups as more diverse and "better" in many respects, even when those groups are arbitrarily defined (ingroup bias, outgroup homogeneity bias).

Some cognitive biases belong to the subgroup of attentional biases, which involve paying increased attention to certain stimuli. It has been shown, for example, that people addicted to alcohol and other drugs pay more attention to drug-related stimuli. Common psychological tests to measure these biases are the Stroop Task[23][24] and the Dot Probe Task.

The following is a list of the more commonly studied cognitive biases:
  • Fundamental Attribution Error (FAE): Also known as the correspondence bias (Baumeister & Bushman, 2010), the tendency for people to over-emphasize personality-based explanations for behaviours observed in others, while under-emphasizing the role and power of situational influences on the same behaviour. Jones and Harris' (1967)[25] classic study illustrates the FAE: despite being made aware that the target's speech direction (pro-Castro/anti-Castro) was assigned to the writer, participants ignored the situational pressures and attributed pro-Castro attitudes to the writer when the speech expressed such attitudes.
  • Confirmation bias: The tendency to search for or interpret information in a way that confirms one's preconceptions; individuals may also discredit information that does not support their views.[26] Confirmation bias is related to the concept of cognitive dissonance, in that individuals may reduce inconsistency by searching for information that re-confirms their views (Jermias, 2001, p. 146).[27]
  • Self-serving bias: The tendency to claim more responsibility for successes than failures. It may also manifest as a tendency to evaluate ambiguous information in a way beneficial to one's interests.
  • Belief bias: When one's evaluation of the logical strength of an argument is biased by belief in the truth or falsity of the conclusion.
  • Framing: Using a too-narrow approach and description of the situation or issue.
  • Hindsight bias: Sometimes called the "I-knew-it-all-along" effect, the inclination to see past events as having been predictable.

A 2012 Psychological Bulletin article suggests that at least eight seemingly unrelated biases can be produced by the same information-theoretic generative mechanism.[28] It shows that noisy deviations in the memory-based information processes that convert objective evidence (observations) into subjective estimates (decisions) can produce regressive conservatism, conservatism in belief revision (Bayesian conservatism), illusory correlations, illusory superiority (the better-than-average effect) and the worse-than-average effect, the subadditivity effect, exaggerated expectation, overconfidence, and the hard–easy effect.
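To see how mere noise can produce a directional bias, consider a minimal sketch (an illustrative toy model, not the cited article's actual formulation): an objective probability is converted to log-odds, perturbed by Gaussian "memory noise", and converted back. Averaged over many noisy recollections, extreme probabilities drift toward the middle, which is regressive conservatism.

```python
import math
import random

# Toy model (not the cited article's actual formulation): represent a
# probability in log-odds, add Gaussian "memory noise", convert back.
def noisy_estimate(p, noise_sd=1.0):
    log_odds = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-(log_odds + random.gauss(0, noise_sd))))

random.seed(0)
for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    mean = sum(noisy_estimate(p) for _ in range(100_000)) / 100_000
    print(f"objective p = {p:.2f} -> mean subjective estimate = {mean:.2f}")
# Extreme probabilities (0.05, 0.95) drift toward 0.50 on average, while
# 0.50 itself is unbiased: noise alone yields regressive conservatism.
```

The article's claim is that the same noisy evidence-to-estimate channel, applied to different judgment tasks, underlies the other biases listed above.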

Practical significance

Many social institutions rely on individuals to make rational judgments.

The securities regulation regime largely assumes that all investors act as perfectly rational persons. In truth, actual investors face cognitive limitations from biases, heuristics, and framing effects.

A fair jury trial, for example, requires that the jury ignore irrelevant features of the case, weigh the relevant features appropriately, consider different possibilities open-mindedly, and resist fallacies such as the appeal to emotion. The various biases demonstrated in these psychological experiments suggest that people will frequently fail to do all these things.[29] However, these failures are systematic and directional, and therefore predictable.[30]

Cognitive biases are also related to the persistence of superstition and to large social issues such as prejudice, and they hinder public acceptance of non-intuitive scientific knowledge.[31]

Reducing cognitive bias

Similar to Gigerenzer (1996),[32] Haselton et al. (2005) state that the content and direction of cognitive biases are not "arbitrary" (p. 730).[9] Moreover, cognitive biases can be controlled. Debiasing is a technique that aims to decrease biases by encouraging individuals to use controlled rather than automatic processing (Baumeister & Bushman, 2010, p. 155).[33] In relation to reducing the FAE, monetary incentives[34] and informing participants that they will be held accountable for their attributions[35] have been linked to increases in accurate attributions.

Cognitive bias modification (CBM) refers to the process of modifying cognitive biases in healthy people. Cognitive Bias Modification Therapy (CBMT) is a sub-group of therapies based on modifying cognitive processes, with or without accompanying medication and talk therapy, sometimes referred to as Applied Cognitive Processing Therapies (ACPT). CBMT is a growing area of evidence-based psychological (non-pharmaceutical) therapy in which cognitive processes are modified to relieve suffering[36][37] from serious depression,[38] anxiety,[39] and addiction.[40] CBMT techniques are technology-assisted therapies delivered via computer, with or without clinician support. CBM combines evidence and theory from the cognitive model of anxiety,[41] cognitive neuroscience,[42] and attentional models.[43]

Paradigm shift



From Wikipedia, the free encyclopedia

A paradigm shift (or revolutionary science) is, according to Thomas Kuhn, in his influential book The Structure of Scientific Revolutions (1962), a change in the basic assumptions, or paradigms, within the ruling theory of science. It is in contrast to his idea of normal science. According to Kuhn, "A paradigm is what members of a scientific community, and they alone, share" (The Essential Tension, 1977). Unlike a normal scientist, Kuhn held, "a student in the humanities has constantly before him a number of competing and incommensurable solutions to these problems, solutions that he must ultimately examine for himself" (The Structure of Scientific Revolutions).

Once a paradigm shift is complete, a scientist cannot, for example, reject the germ theory of disease to posit the possibility that miasma causes disease, or reject modern physics and optics to posit that aether carries light. In contrast, a critic in the humanities can choose to adopt an array of stances (e.g., Marxist criticism, Freudian criticism, Deconstruction, 19th-century-style literary criticism), which may be more or less fashionable during any given period but are all regarded as legitimate. Since the 1960s, the term has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events, even though Kuhn himself restricted the use of the term to the hard sciences. In this sense it can be compared to a structured form of Zeitgeist.

Kuhnian paradigm shifts


Kuhn used the duck-rabbit optical illusion to demonstrate the way in which a paradigm shift could cause one to see the same information in an entirely different way.

An epistemological paradigm shift was called a "scientific revolution" by epistemologist and historian of science Thomas Kuhn in his book The Structure of Scientific Revolutions.

A scientific revolution occurs, according to Kuhn, when scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has thereto been made. The paradigm, in Kuhn's view, is not simply the current theory, but the entire worldview in which it exists, and all of the implications that come with it. It is based on features of the landscape of knowledge that scientists can identify around them.

There are anomalies for all paradigms, Kuhn maintained, that are brushed away as acceptable levels of error, or simply ignored and not dealt with (a principal argument Kuhn uses to reject Karl Popper's model of falsifiability as the key force involved in scientific change). Rather, according to Kuhn, anomalies have various levels of significance to the practitioners of science at the time. To put it in the context of early 20th century physics, some scientists found the problems with calculating Mercury's perihelion more troubling than the Michelson-Morley experiment results, and some the other way around. Kuhn's model of scientific change differs here, and in many places, from that of the logical positivists in that it puts an enhanced emphasis on the individual humans involved as scientists, rather than abstracting science into a purely logical or philosophical venture.

When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis, according to Kuhn. During this crisis, new ideas, perhaps ones previously discarded, are tried. Eventually a new paradigm is formed, which gains its own new followers, and an intellectual "battle" takes place between the followers of the new paradigm and the hold-outs of the old paradigm. Again, for early 20th century physics, the transition between the Maxwellian electromagnetic worldview and the Einsteinian Relativistic worldview was neither instantaneous nor calm, and instead involved a protracted set of "attacks," both with empirical data as well as rhetorical or philosophical arguments, by both sides, with the Einsteinian theory winning out in the long run. Again, the weighing of evidence and importance of new data was fit through the human sieve: some scientists found the simplicity of Einstein's equations to be most compelling, while some found them more complicated than the notion of Maxwell's aether which they banished. Some found Eddington's photographs of light bending around the sun to be compelling, while some questioned their accuracy and meaning. Sometimes the convincing force is just time itself and the human toll it takes, Kuhn said, using a quote from Max Planck: "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."[1]

After a given discipline has changed from one paradigm to another, this is called, in Kuhn's terminology, a scientific revolution or a paradigm shift. It is often this final conclusion, the result of the long process, that is meant when the term paradigm shift is used colloquially: simply the (often radical) change of worldview, without reference to the specificities of Kuhn's historical argument.

Science and paradigm shift

A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism:[2] the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation[3] and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.

These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another — that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes — so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published a highly regarded essay in 1974, "On the Very Idea of a Conceptual Scheme" (Proceedings and Addresses of the American Philosophical Association, Vol. 47, (1973-1974), pp. 5–20) arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour (see for example John Hassard, Sociology and Organization Theory: Positivism, Paradigm and Postmodernity. Cambridge University Press, 1993, ISBN 0521350344.)

Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In 1900, Lord Kelvin famously told an assemblage of physicists at the British Association for the Advancement of Science, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement."[4] Five years later, Albert Einstein published his paper on special relativity, which challenged the very simple set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years.

In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift: the history and sociology of science are social sciences, in which people can still use earlier ideas to discuss the history of science.

Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.[citation needed]

Examples of paradigm shifts

Natural sciences

Some of the "classical cases" of Kuhnian paradigm shifts in science are:

Social sciences

In Kuhn's view, the existence of a single reigning paradigm is characteristic of the sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals."[5] Others have applied Kuhn's concept of paradigm shift to the social sciences.
  • The movement, known as the Cognitive revolution, away from Behaviourist approaches to psychological study and the acceptance of cognition as central to studying human behaviour.
  • The Keynesian Revolution is typically viewed as a major shift in macroeconomics.[6] According to John Kenneth Galbraith, Say's Law dominated economic thought prior to Keynes for over a century, and the shift to Keynesianism was difficult. Economists who contradicted the law, which implied that underemployment and underinvestment (coupled with oversaving) were virtually impossible, risked losing their careers.[7] In his magnum opus, Keynes cited one of his predecessors, John Atkinson Hobson,[8] who was repeatedly denied positions at universities for his heretical theory.
  • Later, the movement for Monetarism over Keynesianism marked a second divisive shift. Monetarists held that fiscal policy was not effective for stabilizing inflation, which was solely a monetary phenomenon, in contrast to the Keynesian view of the time that both fiscal and monetary policy were important. Keynesians later adopted much of the monetarists' view of the quantity theory of money and the shifting Phillips curve, theories they initially rejected.[9]

Marketing

In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication.[10] In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books[11][12] as abused and overused to the point of becoming meaningless.

Other uses

The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought-pattern — a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
  • M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift." In this respect, he focuses on the social circumstances which precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.[citation needed]
  • The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.[13]
  • Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement which gained great prominence in the years immediately following distribution of those images.[14][15]
  • Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology[16] and Theology for the Third Millennium: An Ecumenical View.[17]

Fringe science



From Wikipedia, the free encyclopedia

There are differing definitions of fringe science. Fringe science may be valid science which is not considered mainstream. Alternatively, it may be a questionable scientific approach to a field of study. In any case, it is an inquiry in an established field of study which departs significantly from the mainstream theory in that field.

Mainstream scientists typically regard fringe ideas as highly speculative or even as actually refuted.[1]

The term "fringe science" covers everything from novel hypotheses which can be tested by means of the scientific method to wild ad hoc hypotheses and New Age mumbo jumbo (mostly the latter). This has resulted in a tendency to dismiss all fringe science as the domain of pseudoscientists, hobbyists, and cranks.[2]

Other terms used for the specific areas of fringe science are pathological science, voodoo science, and cargo cult science. Junk science is a term typically used in the political arena to describe ideas considered to be dubious or fraudulent.

A concept that was once accepted by the mainstream scientific community may become fringe science because of a later evaluation of previous research. For example, focal infection theory, which held that focal infections of the tonsils or teeth are a primary cause of systemic disease, was once considered to be medical fact. It has since been dismissed because of lack of evidence.

Some theories that were once rejected as fringe science, but were eventually accepted as mainstream science, are:

Description

Fringe science is used to describe unusual theories and models of discovery. Those who develop such fringe science ideas may work within the scientific method, but their results are not accepted by the mainstream community. Usually the evidence provided by supporters of a fringe science is accepted only by a minority and rejected by most experts. Fringe science may be advocated by a scientist who has a degree of recognition by the larger scientific community (typically through the publication of peer-reviewed studies), but this is not always the case. While most fringe science views are ignored or rejected, the scientific community has, through careful use of the scientific method, including falsificationism, come to accept some ideas from fringe sciences.[9] One example is plate tectonics, an idea that originated as the fringe science of continental drift and was viewed negatively for decades.[10] It is noted that:
The confusion between science and pseudoscience, between honest scientific error and genuine scientific discovery, is not new, and it is a permanent feature of the scientific landscape [...] Acceptance of new science can come slowly.[11]
The phrase fringe science can be considered pejorative. For example, Lyell D. Henry, Jr. wrote, "'fringe science' [is] a term also suggesting kookiness."[12] Such characterization is perhaps inspired by the eccentric behavior of many researchers on the fringe of science (colloquially and with considerable historical precedent known as mad scientists).[13] The categorical boundary between fringe science and pseudoscience can be disputed. The connotations of fringe science are that the enterprise is still rational, but an unlikely avenue for future results. Fringe science may not be a part of the scientific consensus for a variety of reasons, including incomplete or contradictory evidence.[14]

Examples

Historical

Some historical ideas that are considered refuted by mainstream science include:
  • Wilhelm Reich's work with orgone, a physical energy he claimed to have discovered, contributed to his alienation from the psychiatric community and eventually to his jailing. At the time and continuing today, other scientists and skeptics disputed Reich's claims that he had scientific evidence for the existence of orgone. Nevertheless, dedicated amateurs and a few fringe researchers continue to believe that Reich was correct.[citation needed]
  • Focal infection theory as a primary cause of systemic disease rapidly became accepted by mainstream dentistry and medicine after World War I, largely on the basis of what later turned out to be fundamentally flawed studies that had provided evidence supporting the theory. As a result, millions of people were subjected to needless dental extractions and surgeries.[15] While mainstream research continues on certain aspects of focal infection theory, the original approach and science of FIT began falling out of favor in the 1930s and was relegated to the fringe of oral medicine by the late 1950s.
  • Clovis First theory: The idea that the Clovis was the first culture in North America was long regarded as mainstream until mounting evidence of pre-Clovis occupation of the Americas discredited it.[16][17][18]

Contemporary

Relatively recent fringe sciences include:
  • Aubrey de Grey, featured in a 2006 60 Minutes special report, is working on advanced studies in human longevity,[19] dubbed "Strategies for Engineered Negligible Senescence" (SENS). Many mainstream scientists[20] believe that his research, especially de Grey's view on the importance of nuclear (epi)mutations and his purported timeline for antiaging therapeutics, constitutes "fringe science".
    • Technology Review controversy: In an article in a 2006 issue of the magazine Technology Review (part of a larger series), it was written that "SENS [de Grey's hypothesis] is highly speculative. Many of its proposals have not been reproduced, nor could they be reproduced with today's scientific knowledge and technology. Echoing Myhrvold, we might charitably say that de Grey's proposals exist in a kind of antechamber of science, where they wait (possibly in vain) for independent verification. SENS does not compel the assent of many knowledgeable scientists; but neither is it demonstrably wrong".[21]
  • A nuclear fusion reaction called cold fusion occurring near room temperature and pressure was reported by chemists Martin Fleischmann and Stanley Pons in March 1989. Numerous research efforts at the time were unable to replicate these results.[22] Subsequently, a number of scientists with a variety of credentials have worked on the problem or participated in international conferences on cold fusion. In 2004, the United States Department of Energy decided to take another look at cold fusion to determine whether their policies towards the subject should be altered because of new experimental evidence, and commissioned a panel on cold fusion.
  • The theory of abiogenic petroleum origin holds that natural petroleum was formed from deep carbon deposits, perhaps dating to the formation of the Earth. The ubiquity of hydrocarbons in the solar system is taken as evidence that there may be a great deal more petroleum on Earth than commonly thought, and that petroleum may originate from carbon-bearing fluids that migrate upward from the mantle. Abiogenic hypotheses saw a revival in the last half of the twentieth century by Russian and Ukrainian scientists, and more interest has been generated in the West[citation needed] after the publication by Thomas Gold in 1999 of The Deep Hot Biosphere. Gold's version of the hypothesis is partly based on the existence of a biosphere composed of thermophile bacteria in the Earth's crust, which may explain the existence of certain biomarkers in extracted petroleum.

Responding to fringe science

Michael W. Friedlander suggests some guidelines for responding to fringe science, which he argues is a more difficult problem to handle, "at least procedurally",[23] than scientific misconduct. His suggested methods include impeccable accuracy, checking cited sources, not overstating orthodox science, thorough understanding of the Wegener continental drift example, examples of orthodox science investigating radical proposals, and prepared examples of errors from fringe scientists.[24]

Though there are examples of mainstream scientists supporting maverick ideas within their own discipline of expertise, fringe science theories and ideas are often advanced by individuals either without a traditional academic science background, or by researchers outside the mainstream discipline,[25] although the history of science shows that scientific progress is often marked by interdisciplinary and multicultural interaction.[26] Friedlander suggests that fringe science is necessary for mainstream science "not to atrophy", as scientists must evaluate the plausibility of each new fringe claim and certain fringe discoveries "will later graduate into the ranks of accepted" while others "will never receive confirmation".[27] The general public has difficulty distinguishing between "science and its imitators",[27] and in some cases a "yearning to believe or a generalized suspicion of experts is a very potent incentive to accepting pseudoscientific claims".[28]

Controversies

Towards the end of the 20th century, religiously inspired critics[who?], such as Answers in Genesis, began to cite fringe science theories with limited support. The goal was frequently to classify as "controversial" entire fields of scientific inquiry (notably paleo-anthropology, human sexuality, evolution, geology, and paleontology) that contradicted literal or fundamentalist interpretation of various sacred texts. Describing ongoing debate and research within these fields as evidence of fundamental weaknesses or flaws, these critics argued that "controversies" left open a window for the plausibility of divine intervention and intelligent design.[29][30][31] As Donald E. Simanek asserts, "Too often speculative and tentative hypotheses of cutting edge science are treated as if they were scientific truths, and so accepted by a public eager for answers," ignorant of the fact that "As science progresses from ignorance to understanding it must pass through a transitionary phase of confusion and uncertainty."[32] The media also play a role in the creation and propagation of the view that certain fields of science are "controversial". In "Optimising public understanding of science: A comparative perspective" by Jan Nolin et al., the authors claim, "From a media perspective it is evident that controversial science sells, not only because of its dramatic value but also since it is often connected to high-stake societal issues."[33]

Superseded scientific theories



From Wikipedia, the free encyclopedia

A superseded, or obsolete, scientific theory is a scientific theory that mainstream scientific consensus once commonly accepted, but that is no longer considered the most complete description of reality or is considered simply false. This label does not cover protoscientific or fringe science theories with limited support in the scientific community, nor does it cover theories that were never widely accepted. Some theories that were only supported under specific political authorities, such as Lysenkoism, may also be described as obsolete or superseded.

In some cases a theory or idea is found baseless and is simply discarded. For example, the phlogiston theory was entirely replaced by the quite different concept of energy and related laws. In other cases an existing theory is replaced by a new theory that retains significant elements of the earlier theory; in these cases the older theory is often still useful for many purposes, may be more easily understood than the complete theory, and leads to simpler calculations. An example is Newtonian physics, which differs from the currently accepted relativistic physics by a factor that is negligibly small at velocities much lower than that of light. Newtonian physics is so satisfactory for most purposes that it remains the more widely used, except at velocities that are a significant fraction of the speed of light, and the simpler Newtonian (rather than relativistic) mechanics is usually taught in schools. Another case is the theory that the Earth is approximately flat; while it has for centuries been known to be wrong over long distances, treating part of the Earth's surface as flat is usually sufficient for surveying and for maps covering areas that are not extremely large.
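To put a number on "negligibly small": relativistic corrections scale with the departure of the Lorentz factor γ = 1/√(1 − v²/c²) from 1. A quick calculation (the speeds chosen are illustrative) compares everyday and relativistic velocities:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma_minus_one(v):
    """Departure of the Lorentz factor from 1; Newtonian mechanics is
    accurate whenever this quantity is negligible."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2) - 1.0

for label, v in [("car (30 m/s)", 30.0),
                 ("airliner (250 m/s)", 250.0),
                 ("orbital velocity (7.8 km/s)", 7.8e3),
                 ("half light speed", 0.5 * C)]:
    print(f"{label:>28}: gamma - 1 = {gamma_minus_one(v):.3e}")
# ~5e-15 for a car and ~3e-10 at orbital velocity -- utterly negligible --
# versus ~1.5e-1 at half light speed, where Newtonian mechanics breaks down.
```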

The obsolete Geocentric model of the universe places the Earth at the centre.

Superseded theories

Biology

Chemistry

Physics

  • Democritus, the originator of atomic theory, held that everything is composed of atoms, which are indestructible
  • John Dalton's model of the atom, which held that atoms are indivisible and indestructible (superseded by nuclear physics) and that all atoms of a given element are identical in mass (superseded by discovery of atomic isotopes).[1]
  • Plum pudding model of the atom—assuming the protons and electrons were mixed together in a single mass
  • Rutherford model of the atom with an impenetrable nucleus orbited by electrons
  • Bohr model with quantized orbits
  • Electron cloud model following the development of quantum mechanics in 1925 and the eventual atomic orbital models derived from the quantum mechanical solution to the hydrogen atom

Astronomy and cosmology

Geography and climate

  • Flat Earth theory. On length scales much smaller than the radius of the Earth, a flat map projection gives a quite accurate and practically useful approximation to true distances and sizes, but departures from flatness become increasingly significant over larger distances (a rough estimate of the error appears after this list).
  • Terra Australis
  • Hollow Earth theory
  • The Open Polar Sea, an ice-free sea once supposed to surround the North Pole
  • Rain follows the plow – the theory that human settlement increases rainfall in arid regions (only true to the extent that crop fields evapotranspirate more than barren wilderness)
  • Island of California – the theory that California was not part of mainland North America but rather a large island
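Following up on the flat Earth entry above, the error introduced by a flat approximation can be estimated directly. The sketch below (a rough back-of-the-envelope that treats the Earth as a perfect sphere) compares the straight-line chord between two surface points with the great-circle arc between them:

```python
import math

R = 6_371_000.0  # mean Earth radius in metres (spherical approximation)

def flat_distance_error(d):
    """How much a straight chord understates a great-circle arc of length d."""
    theta = d / R                        # arc angle in radians
    chord = 2 * R * math.sin(theta / 2)  # straight-line distance
    return d - chord                     # ~ d**3 / (24 * R**2) for small d

for d_km in (1, 10, 100, 1000, 5000):
    print(f"{d_km:>5} km arc: flat approximation off by "
          f"{flat_distance_error(d_km * 1000):.3g} m")
# Roughly a micrometre at 1 km and a metre at 100 km -- immaterial for
# surveying and local maps -- but over 100 km of error at continental scales.
```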

Geology

Psychology

Medicine

Obsolete branches of enquiry

Theories now considered incomplete

Here are theories that are no longer considered the most complete representation of reality, but are still useful in particular domains or under certain conditions. For some theories a more complete model is known, but in practical use the coarser approximation provides good results with much less calculation.
