Thursday, July 24, 2025

Philosophy of science

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Philosophy_of_science

Philosophy of science is the branch of philosophy concerned with the foundations, methods, and implications of science. Amongst its central questions are the difference between science and non-science, the reliability of scientific theories, and the ultimate purpose and meaning of science as a human endeavour. Philosophy of science focuses on metaphysical, epistemic and semantic aspects of scientific practice, and overlaps with metaphysics, ontology, logic, and epistemology, for example, when it explores the relationship between science and the concept of truth. Philosophy of science is both a theoretical and empirical discipline, relying on philosophical theorising as well as meta-studies of scientific practice. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than the philosophy of science.

Many of the central problems concerned with the philosophy of science lack contemporary consensus, including whether science can infer truth about unobservable entities and whether inductive reasoning can be justified as yielding definite scientific knowledge. Philosophers of science also consider philosophical problems within particular sciences (such as biology, physics and social sciences such as economics and psychology). Some philosophers of science also use contemporary results in science to reach conclusions about philosophy itself.

While philosophical thought pertaining to science dates back at least to the time of Aristotle, the general philosophy of science emerged as a distinct discipline only in the 20th century following the logical positivist movement, which aimed to formulate criteria for ensuring all philosophical statements' meaningfulness and objectively assessing them. Karl Popper criticized logical positivism and helped establish a modern set of standards for scientific methodology. Thomas Kuhn's 1962 book The Structure of Scientific Revolutions was also formative, challenging the view of scientific progress as the steady, cumulative acquisition of knowledge based on a fixed method of systematic experimentation and instead arguing that any progress is relative to a "paradigm", the set of questions, concepts, and practices that define a scientific discipline in a particular historical period.

Subsequently, the coherentist approach to science, in which a theory is validated if it makes sense of observations as part of a coherent whole, became prominent due to W. V. Quine and others. Some thinkers, such as Stephen Jay Gould, seek to ground science in axiomatic assumptions, such as the uniformity of nature. A vocal minority of philosophers, Paul Feyerabend in particular, argue against the existence of the "scientific method" and hold that all approaches to science should be allowed, including explicitly supernatural ones. Another approach to thinking about science involves studying how knowledge is created from a sociological perspective, an approach represented by scholars like David Bloor and Barry Barnes. Finally, a tradition in continental philosophy approaches science from the perspective of a rigorous analysis of human experience.

Philosophies of the particular sciences range from questions about the nature of time raised by Einstein's general relativity, to the implications of economics for public policy. A central theme is whether the terms of one scientific theory can be intra- or intertheoretically reduced to the terms of another. Can chemistry be reduced to physics, or can sociology be reduced to individual psychology? The general questions of philosophy of science also arise with greater specificity in some particular sciences. For instance, the question of the validity of scientific reasoning is seen in a different guise in the foundations of statistics. The question of what counts as science and what should be excluded arises as a life-or-death matter in the philosophy of medicine. Additionally, the philosophies of biology, psychology, and the social sciences explore whether the scientific studies of human nature can achieve objectivity or are inevitably shaped by values and by social relations.

Introduction

Defining science

In formulating 'the problem of induction', David Hume devised one of the most pervasive puzzles in the philosophy of science.
Karl Popper in the 1980s. Popper is credited with formulating 'the demarcation problem', which considers the question of how we distinguish between science and pseudoscience.

Distinguishing between science and non-science is referred to as the demarcation problem. For example, should psychoanalysis, creation science, and historical materialism be considered pseudosciences? Karl Popper called this the central question in the philosophy of science. However, no unified account of the problem has won acceptance among philosophers, and some regard the problem as unsolvable or uninteresting. Martin Gardner has argued for the use of a Potter Stewart standard ("I know it when I see it") for recognizing pseudoscience.

Early attempts by the logical positivists grounded science in observation while non-science was non-observational and hence meaningless. Popper argued that the central property of science is falsifiability. That is, every genuinely scientific claim is capable of being proven false, at least in principle.

An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of it but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated.

Scientific explanation

A closely related question is what counts as a good scientific explanation. In addition to providing predictions about future events, society often takes scientific theories to provide explanations for events that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what it means to say a scientific theory has explanatory power.

One early and influential account of scientific explanation is the deductive-nomological model. It says that a successful scientific explanation must deduce the occurrence of the phenomena in question from a scientific law. This view has been subjected to substantial criticism, resulting in several widely acknowledged counterexamples to the theory. It is especially challenging to characterize what is meant by an explanation when the thing to be explained cannot be deduced from any law because it is a matter of chance, or otherwise cannot be perfectly predicted from what is known. Wesley Salmon developed a model in which a good scientific explanation must be statistically relevant to the outcome to be explained. Others have argued that the key to a good explanation is unifying disparate phenomena or providing a causal mechanism.
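
Schematically (a standard textbook rendering, not Hempel's own notation), a deductive-nomological explanation derives a description of the phenomenon from general laws together with particular antecedent conditions:

\[ \underbrace{L_1, \ldots, L_n}_{\text{general laws}},\; \underbrace{C_1, \ldots, C_k}_{\text{antecedent conditions}} \;\vdash\; \underbrace{E}_{\text{explanandum}} \]

Many of the well-known counterexamples fit this schema yet intuitively fail to explain, e.g. deducing a flagpole's height from the length of its shadow and the laws of optics.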

Justifying science

Although it is often taken for granted, it is not at all clear how one can infer the validity of a general statement from a number of specific instances or infer the truth of a theory from a series of successful tests. For example, a chicken observes that each morning the farmer comes and gives it food, for hundreds of days in a row. The chicken may therefore use inductive reasoning to infer that the farmer will bring food every morning. However, one morning, the farmer comes and kills the chicken. How is scientific reasoning more trustworthy than the chicken's reasoning?

One approach is to acknowledge that induction cannot achieve certainty, but observing more instances of a general statement can at least make the general statement more probable. So the chicken would be right to conclude from all those mornings that it is likely the farmer will come with food again the next morning, even if it cannot be certain. However, there remain difficult questions about the process of interpreting any given evidence into a probability that the general statement is true. One way out of these particular difficulties is to declare that all beliefs about scientific theories are subjective, or personal, and correct reasoning is merely about how evidence should change one's subjective beliefs over time.
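
To make the probabilistic reading concrete, here is a minimal sketch in Python (an illustration under assumed conditions, namely a uniform Beta(1, 1) prior and independent mornings, i.e. Laplace's rule of succession; none of these particulars come from the text above):

    def predictive_probability(fed_mornings, hungry_mornings=0):
        """Laplace's rule of succession: P(food tomorrow | record so far)."""
        alpha = 1 + fed_mornings      # prior pseudo-success plus observed fed mornings
        beta = 1 + hungry_mornings    # prior pseudo-failure plus observed hungry mornings
        return alpha / (alpha + beta)

    for days in (1, 10, 100, 1000):
        print(days, round(predictive_probability(days), 4))
    # 0.6667, 0.9167, 0.9902, 0.999: confidence climbs toward, but never
    # reaches, certainty -- and nothing in the record rules out slaughter day.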

Some argue that what scientists do is not inductive reasoning at all but rather abductive reasoning, or inference to the best explanation. In this account, science is not about generalizing specific instances but rather about hypothesizing explanations for what is observed. As discussed in the previous section, it is not always clear what is meant by the "best explanation". Occam's razor, which counsels choosing the simplest available explanation, thus plays an important role in some versions of this approach. To return to the example of the chicken, would it be simpler to suppose that the farmer cares about it and will continue taking care of it indefinitely or that the farmer is fattening it up for slaughter? Philosophers have tried to make this heuristic principle more precise regarding theoretical parsimony or other measures. Yet, although various measures of simplicity have been brought forward as potential candidates, it is generally accepted that there is no such thing as a theory-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Nicholas Maxwell has argued for some decades that unity rather than simplicity is the key non-empirical factor in influencing the choice of theory in science, persistent preference for unified theories in effect committing science to the acceptance of a metaphysical thesis concerning unity in nature. In order to improve this problematic thesis, it needs to be represented in the form of a hierarchy of theses, each thesis becoming more insubstantial as one goes up the hierarchy.
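
One way philosophers and statisticians have tried to make parsimony precise is to penalize a theory's free parameters. The sketch below (Python; the log-likelihoods and parameter counts are invented for illustration) computes the Bayesian information criterion, one candidate measure among the many mentioned above; note that the parameter count itself depends on how a theory is formulated, which is one face of the theory-dependence problem:

    import math

    def bic(log_likelihood, k_params, n_obs):
        # Lower is better: goodness of fit is rewarded, free parameters penalized.
        return k_params * math.log(n_obs) - 2 * log_likelihood

    # Two hypothetical rival models of the same 100 observations:
    simple_model = bic(log_likelihood=-120.0, k_params=2, n_obs=100)
    complex_model = bic(log_likelihood=-115.0, k_params=10, n_obs=100)
    print(round(simple_model, 1), round(complex_model, 1))  # 249.2 276.1 -- the simpler wins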

Observation inseparable from theory

Seen through a telescope, the Einstein cross seems to provide evidence for five different objects, but this observation is theory-laden. If we assume the theory of general relativity, the image only provides evidence for two objects.

When making observations, scientists look through telescopes, study images on electronic screens, record meter readings, and so on. Generally, on a basic level, they can agree on what they see, e.g., the thermometer shows 37.9 degrees C. But, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. For example, before Albert Einstein's general theory of relativity, observers would have likely interpreted an image of the Einstein cross as five different objects in space. In light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. Alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. Observations that cannot be separated from theoretical interpretation are said to be theory-laden.

All observation involves both perception and cognition. That is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. Therefore, observations are affected by one's underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. In this sense, it can be argued that all observation is theory-laden.

The purpose of science

Should science aim to determine ultimate truth, or are there questions that science cannot answer? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, scientific anti-realists argue that science does not aim (or at least does not succeed) at truth, especially truth about unobservables like electrons or other universes. Instrumentalists argue that scientific theories should only be evaluated on whether they are useful. In their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology.

Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of current theories. Antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. Antirealists attempt to explain the success of scientific theories without reference to truth. Some antirealists claim that scientific theories aim at being accurate only about observable objects and argue that their success is primarily judged by that criterion.

Real patterns

The notion of real patterns has been propounded, notably by philosopher Daniel C. Dennett, as an intermediate position between strong realism and eliminative materialism. This concept delves into the investigation of patterns observed in scientific phenomena to ascertain whether they signify underlying truths or are mere constructs of human interpretation. Dennett provides a unique ontological account concerning real patterns, examining the extent to which these recognized patterns have predictive utility and allow for efficient compression of information.

The discourse on real patterns extends beyond philosophical circles, finding relevance in various scientific domains. For example, in biology, inquiries into real patterns seek to elucidate the nature of biological explanations, exploring how recognized patterns contribute to a comprehensive understanding of biological phenomena. Similarly, in chemistry, debates around the reality of chemical bonds as real patterns continue.

Evaluation of real patterns also holds significance in broader scientific inquiries. Researchers, like Tyler Millhouse, propose criteria for evaluating the realness of a pattern, particularly in the context of universal patterns and the human propensity to perceive patterns, even where there might be none. This evaluation is pivotal in advancing research in diverse fields, from climate change to machine learning, where recognition and validation of real patterns in scientific models play a crucial role.

Values and science

Values intersect with science in different ways. There are epistemic values that mainly guide scientific research. The scientific enterprise is embedded in a particular culture and its values through individual practitioners. Values emerge from science, both as product and as process, and can be distributed among several cultures in society. Insofar as science is justified to the general public through its individual practitioners, it also plays the role of a mediator between the standards and policies of society and the individuals who participate in it.

Thomas Kuhn is credited with coining the term "paradigm shift" to describe the creation and evolution of scientific theories.

If it is unclear what counts as science, how the process of confirming theories works, and what the purpose of science is, there is considerable scope for values and other social influences to shape science. Indeed, values can play a role ranging from determining which research gets funded to influencing which theories achieve scientific consensus. For example, in the 19th century, cultural values held by scientists about race shaped research on evolution, and values concerning social class influenced debates on phrenology (considered scientific at the time). Feminist philosophers of science, sociologists of science, and others explore how social values affect science.

History

Pre-modern

The origins of philosophy of science trace back to Plato and Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also analyzed reasoning by analogy. The eleventh century Arab polymath Ibn al-Haytham (known in Latin as Alhazen) conducted his research in optics by way of controlled experimental testing and applied geometry, especially in his investigations into the images resulting from the reflection and refraction of light. Roger Bacon (1214–1294), an English thinker and experimenter heavily influenced by al-Haytham, is recognized by many to be the father of modern scientific method. His view that mathematics was essential to a correct understanding of natural philosophy is considered to have been 400 years ahead of its time.

Modern

Francis Bacon's statue at Gray's Inn, South Square, London
Theory of Science by Auguste Comte

Francis Bacon (no direct relation to Roger Bacon, who lived 300 years earlier) was a seminal figure in philosophy of science at the time of the Scientific Revolution. In his work Novum Organum (1620)—an allusion to Aristotle's Organon—Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Bacon's method relied on experimental histories to eliminate alternative theories. In 1637, René Descartes established a new framework for grounding scientific knowledge in his treatise, Discourse on Method, advocating the central role of reason as opposed to sensory experience. By contrast, in 1713, the 2nd edition of Isaac Newton's Philosophiae Naturalis Principia Mathematica argued that "... hypotheses ... have no place in experimental philosophy. In this philosophy[,] propositions are deduced from the phenomena and rendered general by induction." This passage influenced a "later generation of philosophically-inclined readers to pronounce a ban on causal hypotheses in natural philosophy". In particular, later in the 18th century, David Hume would famously articulate skepticism about the ability of science to determine causality and give a definitive formulation of the problem of induction, though both theses would be contested by the end of the 18th century by Immanuel Kant in his Critique of Pure Reason and Metaphysical Foundations of Natural Science. In the 19th century, Auguste Comte made a major contribution to the theory of science. The 19th-century writings of John Stuart Mill are also considered important in the formation of current conceptions of the scientific method, as well as anticipating later accounts of scientific explanation.

Logical positivism

Instrumentalism became popular among physicists around the turn of the 20th century, after which logical positivism defined the field for several decades. Logical positivism accepts only testable statements as meaningful, rejects metaphysical interpretations, and embraces verificationism (a set of theories of knowledge that combines logicism, empiricism, and linguistics to ground philosophy on a basis consistent with examples from the empirical sciences). Seeking to overhaul all of philosophy and convert it to a new scientific philosophy, the Berlin Circle and the Vienna Circle propounded logical positivism in the late 1920s.

Interpreting Ludwig Wittgenstein's early philosophy of language, logical positivists identified a verifiability principle or criterion of cognitive meaningfulness. From Bertrand Russell's logicism they sought reduction of mathematics to logic. They also embraced Russell's logical atomism, Ernst Mach's phenomenalism—whereby the mind knows only actual or potential sensory experience, which is the content of all sciences, whether physics or psychology—and Percy Bridgman's operationalism. Thereby, only the verifiable was scientific and cognitively meaningful, whereas the unverifiable was unscientific, cognitively meaningless "pseudostatements"—metaphysical, emotive, or such—not worthy of further review by philosophers, who were newly tasked to organize knowledge rather than develop new knowledge.

Logical positivism is commonly portrayed as taking the extreme position that scientific language should never refer to anything unobservable—even the seemingly core notions of causality, mechanism, and principles—but that is an exaggeration. Talk of such unobservables could be allowed as metaphorical—direct observations viewed in the abstract—or at worst metaphysical or emotional. Theoretical laws would be reduced to empirical laws, while theoretical terms would garner meaning from observational terms via correspondence rules. Mathematics in physics would reduce to symbolic logic via logicism, while rational reconstruction would convert ordinary language into standardized equivalents, all networked and united by a logical syntax. A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth.

In the late 1930s, logical positivists fled Germany and Austria for Britain and America. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, and Rudolf Carnap had sought to replace verification with mere confirmation. With World War II's close in 1945, logical positivism softened into logical empiricism, led largely in America by Carl Hempel, who expounded the covering law model of scientific explanation as a way of identifying the logical form of explanations without any reference to the suspect notion of "causation". The logical positivist movement became a major underpinning of analytic philosophy and dominated Anglosphere philosophy, including philosophy of science, while influencing the sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly assaulted. Nevertheless, it brought about the establishment of philosophy of science as a distinct subdiscipline of philosophy, with Carl Hempel playing a key role.

For Kuhn, the addition of epicycles in Ptolemaic astronomy was "normal science" within a paradigm, whereas the Copernican Revolution was a paradigm shift.

Thomas Kuhn

In the 1962 book The Structure of Scientific Revolutions, Thomas Kuhn argued that the process of observation and evaluation takes place within a "paradigm", which he describes as "universally recognized achievements that for a time provide model problems and solutions to a community of practitioners." A paradigm implicitly identifies the objects and relations under study and suggests what experiments, observations or theoretical improvements need to be carried out to produce a useful result. He characterized normal science as the process of observation and "puzzle solving" which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift.

Kuhn was a historian of science, and his ideas were inspired by the study of older paradigms that have since been discarded, such as Aristotelian mechanics or aether theory. Historians had often portrayed these as resting on "unscientific" methods or beliefs, but careful examination showed that they were no less "scientific" than modern paradigms: both were based on valid evidence, and both failed to answer every possible question.

A paradigm shift occurred when a significant number of observational anomalies arose in the old paradigm and efforts to resolve them within the paradigm were unsuccessful. A new paradigm was available that handled the anomalies with less difficulty and yet still covered (most of) the previous results. Over a period of time, often as long as a generation, more practitioners began working within the new paradigm and eventually the old paradigm was abandoned. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process.

Kuhn's position, however, is not one of relativism; he wrote "terms like 'subjective' and 'intuitive' cannot be applied to [paradigms]." Paradigms are grounded in objective, observable evidence, but our use of them is psychological and our acceptance of them is social.

Current approaches

Naturalism's axiomatic assumptions

According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, scientists must start with some assumptions as to the ultimate analysis of the facts with which science deals. These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts. These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize their systems and set the limits of their investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as the supernatural, i.e. anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit.

Some claim that naturalism is the implicit philosophy of working scientists, and that the following basic assumptions are needed to justify the scientific method:

  1. That there is an objective reality shared by all rational observers. "The basis for rationality is acceptance of an external objective reality." "Objective reality is clearly an essential thing if we are to develop a meaningful perspective of the world. Nevertheless, its very existence is assumed." "Our belief that objective reality exists is an assumption that it arises from a real world outside of ourselves. As infants we made this assumption unconsciously. People are happy to make this assumption, which adds meaning to our sensations and feelings, rather than live with solipsism." "Without this assumption, there would be only the thoughts and images in our own mind (which would be the only existing mind) and there would be no need of science, or anything else."
  2. That this objective reality is governed by natural laws;
    "Science, at least today, assumes that the universe obeys knowable principles that don't depend on time or place, nor on subjective parameters such as what we think, know or how we behave." Hugh Gauch argues that science presupposes that "the physical world is orderly and comprehensible."
  3. That reality can be discovered by means of systematic observation and experimentation.
    Stanley Sobottka said: "The assumption of external reality is necessary for science to function and to flourish. For the most part, science is the discovering and explaining of the external world." "Science attempts to produce knowledge that is as universal and objective as possible within the realm of human understanding."
  4. That Nature has uniformity of laws and most, if not all, things in nature must have at least a natural cause.
    Biologist Stephen Jay Gould referred to these two closely related propositions as the constancy of nature's laws and the operation of known processes. Simpson agrees that the axiom of uniformity of law, an unprovable postulate, is necessary in order for scientists to extrapolate inductive inference into the unobservable past in order to meaningfully study it. "The assumption of spatial and temporal invariance of natural laws is by no means unique to geology since it amounts to a warrant for inductive inference which, as Bacon showed nearly four hundred years ago, is the basic mode of reasoning in empirical science. Without assuming this spatial and temporal invariance, we have no basis for extrapolating from the known to the unknown and, therefore, no way of reaching general conclusions from a finite number of observations. (Since the assumption is itself vindicated by induction, it can in no way "prove" the validity of induction — an endeavor virtually abandoned after Hume demonstrated its futility two centuries ago)." Gould also notes that natural processes such as Lyell's "uniformity of process" are an assumption: "As such, it is another a priori assumption shared by all scientists and not a statement about the empirical world." According to R. Hooykaas: "The principle of uniformity is not a law, not a rule established after comparison of facts, but a principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there are an infinity of ways in which they could be supposed different."
  5. That experimental procedures will be done satisfactorily without any deliberate or unintentional mistakes that will influence the results.
  6. That experimenters won't be significantly biased by their presumptions.
  7. That random sampling is representative of the entire population.
    A simple random sample (SRS) is the most basic probabilistic option for drawing a sample from a population. The benefit of SRS is that every member of the population has an equal chance of selection, which is what underwrites statistically valid conclusions (see the sketch below).
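
A minimal Python sketch of simple random sampling (the population here is hypothetical):

    import random

    population = list(range(1, 10001))         # hypothetical population of 10,000 units
    sample = random.sample(population, k=100)  # SRS: each size-100 subset is equally likely
    print(sum(sample) / len(sample))           # sample mean estimates the population mean, 5000.5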

Coherentism

Jeremiah Horrocks makes the first observation of the transit of Venus in 1639, as imagined by the artist W. R. Lavender in 1903.

In contrast to the view that science rests on foundational assumptions, coherentism asserts that statements are justified by being a part of a coherent system. Or, rather, individual statements cannot be validated on their own: only coherent systems can be justified. A prediction of a transit of Venus is justified by its being coherent with broader beliefs about celestial mechanics and earlier observations. As explained above, observation is a cognitive act. That is, it relies on a pre-existing understanding, a systematic set of beliefs. An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. If the prediction fails and a transit is not observed, that is likely to occasion an adjustment in the system, a change in some auxiliary assumption, rather than a rejection of the theoretical system.

According to the Duhem–Quine thesis, after Pierre Duhem and W.V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton's Law of Gravitation in the solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led not to the rejection of Newton's Law but rather to the rejection of the hypothesis that the Solar System comprises only seven planets. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong. But there is a problem in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, or something else.
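
The underlying logic can be put schematically (a standard rendering of the thesis, not a quotation): if a theory T conjoined with auxiliary hypotheses A entails an observation O, then a failed observation refutes only the conjunction,

\[ (T \land A) \to O, \;\; \neg O \;\vdash\; \neg(T \land A) \;\equiv\; \neg T \lor \neg A \]

leaving open whether T or some member of A is at fault, as in the Uranus episode.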

One consequence of the Duhem–Quine thesis is that one can make any theory compatible with any empirical observation by the addition of a sufficient number of suitable ad hoc hypotheses. Karl Popper accepted this thesis, leading him to reject naïve falsification. Instead, he favored a "survival of the fittest" view in which the most falsifiable scientific theories are to be preferred.

Anything goes methodology

Paul Karl Feyerabend

Paul Feyerabend (1924–1994) argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. He argued that "the only principle that does not inhibit progress is: anything goes".

Feyerabend said that science started as a liberating movement, but that over time it had become increasingly dogmatic and rigid and had some oppressive features, and thus had become increasingly an ideology. Because of this, he said it was impossible to come up with an unambiguous way to distinguish science from religion, magic, or mythology. He saw the exclusive dominance of science as a means of directing society as authoritarian and ungrounded. Promulgation of this epistemological anarchism earned Feyerabend the title of "the worst enemy of science" from his detractors.

Sociology of scientific knowledge methodology

According to Kuhn, science is an inherently communal activity which can only be done as part of a community. For him, the fundamental difference between science and other disciplines is the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. For them, social factors play an important and direct role in scientific method, but they do not serve to differentiate science from other disciplines. On this account, science is socially constructed, though this does not necessarily imply the more radical notion that reality itself is a social construct.

Michel Foucault sought to analyze and uncover how disciplines within the social sciences developed and adopted their methodologies. In works like The Archaeology of Knowledge, he used the term "human sciences". The human sciences are not mainstream academic disciplines; rather, they form an interdisciplinary space for reflection on man, the subject of more mainstream scientific knowledge, now taken as an object, sitting between these more conventional areas and associating with disciplines such as anthropology, psychology, sociology, and even history. Rejecting the realist view of scientific inquiry, Foucault argued throughout his work that scientific discourse is not simply an objective study of phenomena, as both natural and social scientists like to believe, but is rather the product of systems of power relations struggling to construct scientific disciplines and knowledge within given societies. With the advance of scientific disciplines such as psychology and anthropology, the need to separate, categorize, normalize and institutionalize populations into constructed social identities became a staple of the sciences. Constructions of what was considered "normal" and "abnormal" stigmatized and ostracized groups of people, such as the mentally ill and sexual and gender minorities.

However, some (such as Quine) do maintain that scientific reality is a social construct:

Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer ... For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.

The public backlash of scientists against such views, particularly in the 1990s, became known as the science wars.

A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists – including David Bloor, Harry Collins, Bruno Latour, Ian Hacking and Anselm Strauss. Concepts and methods (such as rational choice, social choice or game theory) from economics have also been applied for understanding the efficiency of scientific communities in the production of knowledge. This interdisciplinary field has come to be known as science and technology studies. Here the approach to the philosophy of science is to study how scientific communities actually operate.

Continental philosophy

Philosophers in the continental philosophical tradition are not traditionally categorized as philosophers of science. However, they have much to say about science, some of which has anticipated themes in the analytical tradition. For example, in The Genealogy of Morals (1887) Friedrich Nietzsche advanced the thesis that the motive for the search for truth in sciences is a kind of ascetic ideal.

In general, continental philosophy views science from a world-historical perspective. Philosophers such as Pierre Duhem (1861–1916) and Gaston Bachelard (1884–1962) wrote their works with this world-historical approach to science, predating Kuhn's 1962 work by a generation or more. All of these approaches involve a historical and sociological turn to science, with a priority on lived experience (a kind of Husserlian "life-world"), rather than a progress-based or anti-historical approach as emphasised in the analytic tradition. One can trace this continental strand of thought through the phenomenology of Edmund Husserl (1859–1938), the late works of Merleau-Ponty (Nature: Course Notes from the Collège de France, 1956–1960), and the hermeneutics of Martin Heidegger (1889–1976).

The largest effect on the continental tradition with respect to science came from Martin Heidegger's critique of the theoretical attitude in general, which of course includes the scientific attitude. For this reason, the continental tradition has remained much more skeptical of the importance of science in human life and in philosophical inquiry. Nonetheless, there have been a number of important works: especially those of a Kuhnian precursor, Alexandre Koyré (1892–1964). Another important development was that of Michel Foucault's analysis of historical and scientific thought in The Order of Things (1966) and his study of power and corruption within the "science" of madness. Post-Heideggerian authors contributing to continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., Truth and Justification, 1998), Carl Friedrich von Weizsäcker (The Unity of Nature, 1980; German: Die Einheit der Natur (1971)), and Wolfgang Stegmüller (Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, 1973–1986).

Other topics

Reductionism

Analysis involves breaking an observation or theory down into simpler concepts in order to understand it. Reductionism can refer to one of several philosophical positions related to this approach. One type of reductionism suggests that phenomena are amenable to scientific explanation at lower levels of analysis and inquiry. Perhaps a historical event might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics. Daniel Dennett distinguishes legitimate reductionism from what he calls greedy reductionism, which denies real complexities and leaps too quickly to sweeping generalizations.

Social accountability

A broad issue affecting the neutrality of science concerns the areas which science chooses to explore—that is, what part of the world and of humankind are studied by science. Philip Kitcher in his Science, Truth, and Democracy argues that scientific studies that attempt to show one segment of the population as being less intelligent, less successful, or emotionally backward compared to others have a political feedback effect which further excludes such groups from access to science. Thus such studies undermine the broad consensus required for good science by excluding certain people, and so proving themselves in the end to be unscientific.

Philosophy of particular sciences

There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.

— Daniel Dennett, Darwin's Dangerous Idea, 1995

In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied by investigating foundational problems in particular sciences. They also examine the implications of particular sciences for broader philosophical questions. The late 20th and early 21st centuries have seen a rise in the number of practitioners of philosophy of a particular science.

Philosophy of statistics

The problem of induction discussed above is seen in another form in debates over the foundations of statistics. The standard approach to statistical hypothesis testing avoids claims about whether evidence supports a hypothesis or makes it more probable. Instead, the typical test yields a p-value, the probability of evidence at least as extreme as that observed, under the assumption that the null hypothesis is true. If the p-value is too low, the null hypothesis is rejected, in a way analogous to falsification. In contrast, Bayesian inference seeks to assign probabilities to hypotheses. Related topics in philosophy of statistics include probability interpretations, overfitting, and the difference between correlation and causation.
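
The contrast can be made concrete with a coin-flip sketch in Python (all numbers hypothetical):

    from math import comb

    n, heads = 100, 62                       # hypothetical data: 62 heads in 100 tosses
    # Frequentist: exact one-sided p-value under the null hypothesis of a fair coin.
    p_value = sum(comb(n, k) for k in range(heads, n + 1)) / 2 ** n
    print(f"p-value = {p_value:.4f}")        # ~0.01: low, so the null is rejected

    # Bayesian: posterior over the heads probability under a uniform Beta(1, 1) prior.
    posterior_mean = (1 + heads) / (2 + n)   # mean of the Beta(63, 39) posterior
    print(f"posterior mean = {posterior_mean:.3f}")   # ~0.618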

Philosophy of mathematics

Philosophy of mathematics is concerned with the philosophical foundations and implications of mathematics. The central questions are whether numbers, triangles, and other mathematical entities exist independently of the human mind and what is the nature of mathematical propositions. Is asking whether "1 + 1 = 2" is true fundamentally different from asking whether a ball is red? Was calculus invented or discovered? A related question is whether learning mathematics requires experience or reason alone. What does it mean to prove a mathematical theorem and how does one know whether a mathematical proof is correct? Philosophers of mathematics also aim to clarify the relationships between mathematics and logic, human capabilities such as intuition, and the material universe.
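
The question of what it is to check a proof has a concrete modern illustration in machine-verified mathematics. In the Lean proof assistant (a minimal sketch in Lean 4 syntax), even "1 + 1 = 2" is certified by the kernel rather than by human inspection:

    -- Lean 4: the kernel accepts `rfl` because both sides reduce to the same numeral.
    example : 1 + 1 = 2 := rfl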

Philosophy of physics

Philosophy of physics is the study of the fundamental, philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism. Also included are the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time).

Philosophy of chemistry

Philosophy of chemistry is the philosophical study of the methodology and content of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams. It includes research on general philosophy of science issues as applied to chemistry. For example, can all chemical phenomena be explained by quantum mechanics or is it not possible to reduce chemistry to physics? For another example, chemists have discussed the philosophy of how theories are confirmed in the context of confirming reaction mechanisms. Determining reaction mechanisms is difficult because they cannot be observed directly. Chemists can use a number of indirect measures as evidence to rule out certain mechanisms, but they are often unsure if the remaining mechanism is correct because there are many other possible mechanisms that they have not tested or even thought of. Philosophers have also sought to clarify the meaning of chemical concepts which do not refer to specific physical entities, such as chemical bonds.

Philosophy of astronomy

The philosophy of astronomy seeks to understand and analyze the methodologies and technologies used by experts in the discipline, focusing on how observations made about space and astrophysical phenomena can be studied. Given that astronomers rely on and use theories and formulas from other scientific disciplines, such as chemistry and physics, a main point of inquiry is how knowledge about the cosmos can be obtained, how facts about space can be scientifically analyzed and reconciled with other established knowledge, and how the Earth and the Solar System figure in humanity's view of its place in the universe.

Philosophy of Earth sciences

The philosophy of Earth science is concerned with how humans obtain and verify knowledge of the workings of the Earth system, including the atmosphere, hydrosphere, and geosphere (solid earth). Earth scientists' ways of knowing and habits of mind share important commonalities with other sciences, but also have distinctive attributes that emerge from the complex, heterogeneous, unique, long-lived, and non-manipulatable nature of the Earth system.

Philosophy of biology

Peter Godfrey-Smith was awarded the Lakatos Award for his 2009 book Darwinian Populations and Natural Selection, which discusses the philosophical foundations of the theory of evolution.

Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, Leibniz and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science began to pay increasing attention to developments in biology, from the rise of the modern synthesis in the 1930s and 1940s to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953 to more recent advances in genetic engineering. Other key ideas, such as the reduction of all life processes to biochemical reactions and the incorporation of psychology into a broader neuroscience, are also addressed. Research in current philosophy of biology includes investigation of the foundations of evolutionary theory (such as Peter Godfrey-Smith's work) and the role of viruses as persistent symbionts in host genomes. As a consequence, the evolution of genetic content order is seen as the result of competent genome editors, in contrast to former narratives in which error replication events (mutations) dominated.

Philosophy of medicine

A fragment of the Hippocratic Oath from the third century

Beyond medical ethics and bioethics, the philosophy of medicine is a branch of philosophy that includes the epistemology and ontology/metaphysics of medicine. Within the epistemology of medicine, evidence-based medicine (EBM) (or evidence-based practice (EBP)) has attracted attention, most notably the roles of randomisation, blinding and placebo controls. Related to these areas of investigation, ontologies of specific interest to the philosophy of medicine include Cartesian dualism, the monogenetic conception of disease and the conceptualization of 'placebos' and 'placebo effects'. There is also a growing interest in the metaphysics of medicine, particularly the idea of causation. Philosophers of medicine might not only be interested in how medical knowledge is generated, but also in the nature of such phenomena. Causation is of interest because the purpose of much medical research is to establish causal relationships, e.g. what causes disease, or what causes people to get better.

Philosophy of psychiatry

Philosophy of psychiatry explores philosophical questions relating to psychiatry and mental illness. The philosopher of science and medicine Dominic Murphy identifies three areas of exploration in the philosophy of psychiatry. The first concerns the examination of psychiatry as a science, using the tools of the philosophy of science more broadly. The second entails the examination of the concepts employed in discussion of mental illness, including the experience of mental illness, and the normative questions it raises. The third area concerns the links and discontinuities between the philosophy of mind and psychopathology.

Philosophy of psychology

Wilhelm Wundt (seated) with colleagues in his psychological laboratory, the first of its kind

Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these issues are epistemological concerns about the methodology of psychological investigation. For example, is the best method for studying psychology to focus only on the response of behavior to external stimuli or should psychologists focus on mental perception and thought processes? If the latter, an important question is how the internal experiences of others can be measured. Self-reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self-deception or selective memory may affect their responses. Then even in the case of accurate self-reports, how can responses be compared across individuals? Even if two individuals respond with the same answer on a Likert scale, they may be experiencing very different things.

Other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. For example, are humans rational creatures? Is there any sense in which they have free will, and how does that relate to the experience of making choices? Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology.

Philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. In particular, neurophilosophy has just recently become its own field with the works of Paul Churchland and Patricia Churchland. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism.

Philosophy of social science

The philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. Philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency.

The French philosopher Auguste Comte (1798–1857) established the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. The first three volumes of the Course dealt chiefly with the natural sciences already in existence (geoscience, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science: "sociologie". For Comte, the natural sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. Comte offers an evolutionary system proposing that society undergoes three phases in its quest for the truth according to a general 'law of three stages'. These are (1) the theological, (2) the metaphysical, and (3) the positive.

Comte's positivism established the initial philosophical foundations for formal sociology and social research. Durkheim, Marx, and Weber are more typically cited as the fathers of contemporary social science. In psychology, a positivistic approach has historically been favoured in behaviourism. Positivism has also been espoused by 'technocrats' who believe in the inevitability of social progress through science and technology.

The positivist perspective has been associated with 'scientism', the view that the methods of the natural sciences may be applied to all areas of investigation, be it philosophical, social scientific, or otherwise. Among most social scientists and historians, orthodox positivism has long since lost popular support. Today, practitioners of both social and physical sciences instead take into account the distorting effect of observer bias and structural limitations. This scepticism has been facilitated by a general weakening of deductivist accounts of science by philosophers such as Thomas Kuhn, and by new philosophical movements such as critical realism and neopragmatism. The philosopher-sociologist Jürgen Habermas has critiqued pure instrumental rationality as meaning that scientific thinking becomes something akin to ideology itself.

Philosophy of technology

The philosophy of technology is a sub-field of philosophy that studies the nature of technology. Specific research topics include study of the role of tacit and explicit knowledge in creating and using technology, the nature of functions in technological artifacts, the role of values in design, and ethics related to technology. Technology and engineering can both involve the application of scientific knowledge. The philosophy of engineering is an emerging sub-field of the broader philosophy of technology.

Principle of sufficient reason

The principle of sufficient reason states that everything must have a reason or a cause. The principle was articulated and made prominent by Gottfried Wilhelm Leibniz, with many antecedents, and was further used and developed by Arthur Schopenhauer and William Hamilton.

History

The modern formulation of the principle is usually ascribed to the early Enlightenment philosopher Gottfried Leibniz, who formulated it but was not its originator. The idea was conceived of and utilized by various philosophers who preceded him, including Anaximander, Parmenides, Archimedes, Plato, Aristotle, Cicero, Avicenna, Thomas Aquinas, and Baruch Spinoza. One often-cited antecedent is in Anselm of Canterbury: his phrase quia Deus nihil sine ratione facit (because God does nothing without reason) and the formulation of the ontological argument for the existence of God. A clearer connection is with the cosmological argument for the existence of God. The principle can be seen in both Aquinas and William of Ockham.

The post-Kantian philosopher Arthur Schopenhauer elaborated the principle, and used it as the foundation of his system. Some philosophers have associated the principle of sufficient reason with Ex nihilo nihil fit (Nothing comes from nothing). William Hamilton identified the laws of inference modus ponens with the "Law of Sufficient Reason, or of Reason and Consequent" and modus tollens with its contrapositive expression.

Formulation

The principle has a variety of expressions, all of which are perhaps best summarized by the following:

  • For every entity X, if X exists, then there is a sufficient explanation for why X exists.
  • For every event E, if E occurs, then there is a sufficient explanation for why E occurs.
  • For every proposition P, if P is true, then there is a sufficient explanation for why P is true.
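
The three readings share a single first-order shape (a sketch; "Explains" is deliberately left unanalyzed, since the principle itself does not say what a sufficient explanation consists in):

\[ \forall x\,(\mathrm{Exists}(x) \to \exists r\,\mathrm{Explains}(r, x)), \quad \forall e\,(\mathrm{Occurs}(e) \to \exists r\,\mathrm{Explains}(r, e)), \quad \forall p\,(\mathrm{True}(p) \to \exists r\,\mathrm{Explains}(r, p)) \]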

A sufficient explanation may be understood either in terms of reasons or causes, for like many philosophers of the period, Leibniz did not carefully distinguish between the two. The resulting principle is very different, however, depending on which interpretation is given (see Payne's summary of Schopenhauer's Fourfold Root).

It is an open question whether the principle of sufficient reason can be applied to axioms within a logical construction like a mathematical or a physical theory, because axioms are propositions accepted as having no justification possible within the system. The principle declares that all propositions considered to be true within a system should be deducible from the set of axioms at the base of the construction (i.e., that they ensue necessarily if we assume the system's axioms to be true). However, Gödel has shown that for every sufficiently expressive and consistent deductive system a proposition exists that can neither be proved nor disproved (see Gödel's incompleteness theorems).

Different views

Leibniz's view

Leibniz identified two kinds of truths, necessary and contingent, and claimed that all truths are based upon two principles: (1) non-contradiction and (2) sufficient reason. In the Monadology, he says,

Our reasonings are grounded upon two great principles, that of contradiction, in virtue of which we judge false that which involves a contradiction, and true that which is opposed or contradictory to the false; And that of sufficient reason, in virtue of which we hold that there can be no fact real or existing, no statement true, unless there be a sufficient reason, why it should be so and not otherwise, although these reasons usually cannot be known by us (paragraphs 31 and 32).

Necessary truths can be derived from the law of identity (and the principle of non-contradiction): "Necessary truths are those that can be demonstrated through an analysis of terms, so that in the end they become identities, just as in Algebra an equation expressing an identity ultimately results from the substitution of values [for variables]. That is, necessary truths depend upon the principle of contradiction." The sufficient reason for a necessary truth is that its negation is a contradiction.

Leibniz admitted contingent truths, that is, facts in the world that are not necessarily true, but that are nonetheless true. Even these contingent truths, according to Leibniz, can only exist on the basis of sufficient reasons. Since the sufficient reasons for contingent truths are largely unknown to humans, Leibniz made appeal to infinitary sufficient reasons, to which God uniquely has access:

In contingent truths, even though the predicate is in the subject, this can never be demonstrated, nor can a proposition ever be reduced to an equality or to an identity, but the resolution proceeds to infinity, God alone seeing, not the end of the resolution, of course, which does not exist, but the connection of the terms or the containment of the predicate in the subject, since he sees whatever is in the series.

Without this qualification, the principle can be seen as a description of a certain notion of a closed system, in which there is no 'outside' to provide unexplained events with causes. The principle also stands in tension with the paradox of Buridan's ass: the facts supposed in the paradox would present a counterexample to the claim that all contingent truths are determined by sufficient reasons, but on Leibniz's typical infinitary conception of the world the paradox's key premise must be rejected.

In consequence of this, the case also of Buridan's ass between two meadows, impelled equally towards both of them, is a fiction that cannot occur in the universe... For the universe cannot be halved by a plane drawn through the middle of the ass, which is cut vertically through its length, so that all is equal and alike on both sides... Neither the parts of the universe nor the viscera of the animal are alike nor are they evenly placed on both sides of this vertical plane. There will therefore always be many things in the ass and outside the ass, although they be not apparent to us, which will determine him to go on one side rather than the other. And although man is free, and the ass is not, nevertheless for the same reason it must be true that in man likewise the case of a perfect equipoise between two courses is impossible. (Theodicy, pg. 150)

Leibniz also used the principle of sufficient reason to refute the idea of absolute space:

I say then, that if space is an absolute being, there would be something for which it would be impossible there should be a sufficient reason. Which is against my axiom. And I prove it thus. Space is something absolutely uniform; and without the things placed in it, one point in space does not absolutely differ in any respect whatsoever from another point of space. Now from hence it follows, (supposing space to be something in itself, beside the order of bodies among themselves,) that 'tis impossible that there should be a reason why God, preserving the same situation of bodies among themselves, should have placed them in space after one particular manner, and not otherwise; why everything was not placed the quite contrary way, for instance, by changing East into West.

Hamilton's fourth law: "Infer nothing without ground or reason"

Here is how William Hamilton, circa 1837–1838, expressed his "fourth law" in his LECT. V. LOGIC. 60–61:

I now go on to the fourth law.

Par. XVII. Law of Sufficient Reason, or of Reason and Consequent:

XVII. The thinking of an object, as actually characterized by positive or by negative attributes, is not left to the caprice of Understanding – the faculty of thought; but that faculty must be necessitated to this or that determinate act of thinking by a knowledge of something different from, and independent of, the process of thinking itself. This condition of our understanding is expressed by the law, as it is called, of Sufficient Reason (principium Rationis Sufficientis); but it is more properly denominated the law of Reason and Consequent (principium Rationis et Consecutionis). That knowledge by which the mind is necessitated to affirm or posit something else, is called the logical reason, ground, or antecedent; that something else which the mind is necessitated to affirm or posit, is called the logical consequent; and the relation between the reason and consequent, is called the logical connection or consequence. This law is expressed in the formula – Infer nothing without a ground or reason.

Relations between Reason and Consequent: The relations between Reason and Consequent, when comprehended in a pure thought, are the following:

  1. When a reason is explicitly or implicitly given, then there must exist a consequent; and, vice versa, when a consequent is given, there must also exist a reason.
  2. Where there is no reason there can be no consequent; and, vice versa, where there is no consequent (either implicitly or explicitly) there can be no reason. That is, the concepts of reason and of consequent, as reciprocally relative, involve and suppose each other.
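
Taken together, Hamilton's two relations make reason and consequent reciprocal, and they connect with the identification, noted earlier, of the law with modus ponens and modus tollens. A schematic LaTeX rendering of my own (R for the reason, C for the consequent):

    R \leftrightarrow C
    \text{modus ponens: } \frac{R \rightarrow C \quad R}{C} \qquad
    \text{modus tollens: } \frac{R \rightarrow C \quad \neg C}{\neg R}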

The logical significance of this law: The logical significance of the law of Reason and Consequent lies in this, – That in virtue of it, thought is constituted into a series of acts all indissolubly connected; each necessarily inferring the other. Thus it is that the distinction and opposition of possible, actual and necessary matter, which has been introduced into Logic, is a doctrine wholly extraneous to this science.

Schopenhauer's Four Forms

According to Arthur Schopenhauer's On the Fourfold Root of the Principle of Sufficient Reason, there are four distinct forms of the principle.

First Form: The Principle of Sufficient Reason of Becoming (principium rationis sufficientis fiendi); appears as the law of causality in the understanding.

Second Form: The Principle of Sufficient Reason of Knowing (principium rationis sufficientis cognoscendi); asserts that if a judgment is to express a piece of knowledge, it must have a sufficient ground or reason, in which case it receives the predicate true.

Third Form: The Principle of Sufficient Reason of Being (principium rationis sufficientis essendi); the law whereby the parts of space and time determine one another as regards those relations. Example in arithmetic: Each number presupposes the preceding numbers as grounds or reasons of its being; "I can reach ten only by going through all the preceding numbers; and only by virtue of this insight into the ground of being, do I know that where there are ten, so are there eight, six, four."

"Now just as the subjective correlative to the first class of representations is the understanding, that to the second the faculty of reason, and that to the third pure sensibility, so is the subjective correlative to this fourth class found to be the inner sense, or generally self-consciousness."

Fourth Form: The Principle of Sufficient Reason of Acting (principium rationis sufficientis agendi); briefly known as the law of motivation. "Any judgment that does not follow its previously existing ground or reason" or any state that cannot be explained away as falling under the three previous headings "must be produced by an act of will which has a motive." As his proposition in 43 states, "Motivation is causality seen from within."

As a law of thought

The principle was one of the four recognised laws of thought that held a place in European pedagogy of logic and reasoning (and, to some extent, philosophy in general) in the 18th and 19th centuries. It was influential in the thinking of Leo Tolstoy, amongst others, who elevated it into the view that history could not be accepted as random.

A sufficient reason is sometimes described as the coincidence of every single thing that is needed for the occurrence of an effect (i.e. of the so-called necessary conditions).

Definitions of knowledge

From Wikipedia, the free encyclopedia

Definitions of knowledge aim to identify the essential features of knowledge. Closely related terms are conception of knowledge, theory of knowledge, and analysis of knowledge. Some general features of knowledge are widely accepted among philosophers, for example, that it involves cognitive success and epistemic contact with reality. Despite extensive study, disagreements about the nature of knowledge persist, in part because researchers use diverging methodologies, seek definitions for distinct purposes, and have differing intuitions about the standards of knowledge.

An often-discussed definition asserts that knowledge is justified true belief. Justification means that the belief fulfills certain norms like being based on good reasons or being the product of a reliable cognitive process. This approach seeks to distinguish knowledge from mere true beliefs that arise from superstition, lucky guesses, or flawed reasoning. Critics of the justified-true-belief view, like Edmund Gettier, have proposed counterexamples to show that some justified true beliefs do not amount to knowledge if the justification is not genuinely connected to the truth, a condition termed epistemic luck.

In response, some philosophers have expanded the justified-true-belief definition with additional criteria intended to avoid these counterexamples. Suggested criteria include that the known fact caused the belief, that the belief manifests a cognitive virtue, that the belief is not inferred from a falsehood, and that the justification cannot be undermined. However, not all philosophers agree that such modifications are successful. Some propose a radical reconceptualization or hold that knowledge is a unique state not definable as a combination of other states.

Most definitions seek to understand the features of propositional knowledge, which is theoretical knowledge of a fact that can be expressed through a declarative that-clause, such as "knowing that Dave is at home". Other definitions focus on practical knowledge and knowledge by acquaintance. Practical knowledge concerns the ability to do something, like knowing how to swim. Knowledge by acquaintance is a familiarity with something based on experiential contact, like knowing the taste of chocolate.

General characteristics and disagreements

Definitions of knowledge try to describe the essential features of knowledge. This includes clarifying the distinction between knowing something and not knowing it, for example, pointing out the difference between knowing that smoking causes cancer and not knowing this. Sometimes the expressions "conception of knowledge", "theory of knowledge", and "analysis of knowledge" are used as synonyms. Various general features of knowledge are widely accepted. For example, it can be understood as a form of cognitive success or epistemic contact with reality, and propositional knowledge may be characterized as "believing a true proposition in a good way". However, such descriptions are too vague to be very useful without further clarification of what "cognitive success" means, what type of success is involved, or what constitutes "good ways of believing".

The disagreements about the nature of knowledge are both numerous and deep. Some of these disagreements stem from the fact that there are different ways of defining a term, both in relation to the goal one intends to achieve and concerning the method used to achieve it. These difficulties are further exacerbated by the fact that the term "knowledge" has historically been used for a great range of diverse phenomena. These phenomena include theoretical know-that, as in knowing that Paris is in France, practical know-how, as in knowing how to swim, and knowledge by acquaintance, as in personally knowing a celebrity. It is not clear that there is one underlying essence to all of these forms. For this reason, most definitions restrict themselves either explicitly or implicitly to knowledge-that, also termed "propositional knowledge", which is seen as the most paradigmatic type of knowledge.

Even when restricted to propositional knowledge, the differences between the various definitions are usually substantial. For this reason, the choice of one's conception of knowledge matters for questions like whether a particular mental state constitutes knowledge, whether knowledge is fairly common or quite rare, and whether there is knowledge at all. The problem of the definition and analysis of knowledge has been a subject of intense discussion within epistemology, the branch of philosophy that studies knowledge, throughout the 20th and 21st centuries.

Goals

An important reason for these disagreements is that different theorists often have very different goals in mind when trying to define knowledge. Some definitions are based mainly on the practical concern of being able to find instances of knowledge. For such definitions to be successful, it is not required that they identify all and only its necessary features. In many cases, easily identifiable contingent features can even be more helpful for the search than precise but complicated formulas. On the theoretical side, on the other hand, there are so-called real definitions that aim to grasp the term's essence in order to understand its place on the conceptual map in relation to other concepts. Real definitions are preferable on the theoretical level since they are very precise. However, it is often very hard to find a real definition that avoids all counterexamples.

Real definitions usually presume that knowledge is a natural kind, like "human being" or "water" and unlike "candy" or "large plant". Natural kinds are clearly distinguishable on the scientific level from other phenomena. As a natural kind, knowledge may be understood as a specific type of mental state. In this regard, the term "analysis of knowledge" is used to indicate that one seeks the different components that together make up propositional knowledge, usually in the form of its essential features or as the conditions that are individually necessary and jointly sufficient. This may be understood in analogy to a chemist analyzing a sample to discover its chemical composition in the form of the elements involved in it. In most cases, the proposed features of knowledge apply to many different instances. However, the main difficulty for such a project is to avoid all counterexamples, i.e. there should be no instances that escape the analysis, not even in hypothetical thought experiments. By trying to avoid all possible counterexamples, the analysis of knowledge aims at arriving at a necessary truth about knowledge.

However, the assumption that knowledge is a natural kind that has precisely definable criteria is not generally accepted and some hold that the term "knowledge" refers to a merely conventional accomplishment that is artificially constituted and approved by society. In this regard, it may refer to a complex situation involving various external and internal aspects. This distinction is significant because if knowledge is not a natural kind then attempts to provide a real definition would be futile from the start even though definitions based merely on how the word is commonly used may still be successful. However, the term would not have much general scientific importance except for linguists and anthropologists studying how people use language and what they value. Such usage may differ radically from one culture to another. Many epistemologists have accepted, often implicitly, that knowledge has a real definition. But the inability to find an acceptable real definition has led some to understand knowledge in more conventionalist terms.

Methods

Besides these differences concerning the goals of defining knowledge, there are also important methodological differences regarding how one arrives at and justifies one's definition. One approach simply consists in looking at various paradigmatic cases of knowledge to determine what they all have in common. However, this approach is faced with the problem that it is not always clear whether knowledge is present in a particular case, even in paradigmatic cases. This leads to a form of circularity, known as the problem of the criterion: criteria of knowledge are needed to identify individual cases of knowledge and cases of knowledge are needed to learn what the criteria of knowledge are. Two approaches to this problem have been suggested: methodism and particularism. Methodists put their faith in their pre-existing intuitions or hypotheses about the nature of knowledge and use them to identify cases of knowledge. Particularists, on the other hand, hold that our judgments about particular cases are more reliable and use them to arrive at the general criteria. A closely related method, based more on the linguistic level, is to study how the word "knowledge" is used. However, there are numerous meanings ascribed to the term, many of which correspond to the different types of knowledge. This introduces the additional difficulty of first selecting the expressions belonging to the intended type before analyzing their usage.

Standards of knowledge

A further source of disagreement and difficulty in defining knowledge is posed by the fact that there are many different standards of knowledge. The term "standard of knowledge" refers to how high the requirements are for ascribing knowledge to someone. To claim that a belief amounts to knowledge is to attribute a special epistemic status to this belief. But exactly what status this is, i.e. what standard a true belief has to pass to amount to knowledge, may differ from context to context. While some theorists use very high standards, like infallibility or absence of cognitive luck, others use very low standards by claiming that mere true belief is sufficient for knowledge and that justification is not necessary. For example, according to some standards, having read somewhere that the Solar System has eight planets is a sufficient justification for knowing this fact. According to others, a deep astronomical understanding of the relevant measurements and the precise definition of "planet" is necessary. In the history of philosophy, various theorists have set an even higher standard and assumed that certainty or infallibility is necessary. This is, for example, the approach of René Descartes, who aimed to find absolutely certain or indubitable first principles to act as the foundation of all subsequent knowledge. However, this outlook is uncommon among contemporary philosophers. Contextualists have argued that the standards depend on the context in which the knowledge claim is made. For example, in a low-stakes situation, a person may know that the Solar System has eight planets, even though the same person lacks this knowledge in a high-stakes situation.

The question of the standards of knowledge is highly relevant to how common or rare knowledge is. According to the standards of everyday discourse, ordinary cases of perception and memory lead to knowledge. In this sense, even small children and animals possess knowledge. But according to a more rigorous conception, they do not possess knowledge since much higher standards need to be fulfilled. The standards of knowledge are also central to the question of whether skepticism, i.e. the thesis that we have no knowledge at all, is true. If very high standards are used, like infallibility, then skepticism becomes plausible. In this case, the skeptic only has to show that any putative knowledge state lacks absolute certainty, that while the actual belief is true, it could have been false. However, the more these standards are weakened to how the term is used in everyday language, the less plausible skepticism becomes.

Justified true belief

Many philosophers define knowledge as justified true belief (JTB). This definition characterizes knowledge in relation to three essential features: S knows that p if and only if (1) p is true, (2) S believes that p, and (3) this belief is justified. A version of this definition was considered and rejected by Socrates in Plato's Theaetetus. Today, there is wide, though not universal, agreement among analytic philosophers that the first two criteria are correct, i.e., that knowledge implies true belief. Most of the controversy concerns the role of justification: what it is, whether it is needed, and what additional requirements it has to fulfill.
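
The three conditions are often abbreviated as a biconditional in LaTeX-style notation (a standard schematic rendering; K, B, and J here abbreviate "knows", "believes", and "is justified in believing"):

    K(S, p) \;\leftrightarrow\; p \;\wedge\; B(S, p) \;\wedge\; J(S, p)

The Gettier cases discussed below are attempts to show that the right-hand side can hold without the left.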

Truth

There is wide agreement that knowledge implies truth. In this regard, one cannot know things that are not true even if the corresponding belief is justified and rational. As an example, nobody can know that Winston Churchill won the 1996 US Presidential election, since this was not the result of the election. This reflects the idea that knowledge is a relation through which a person stands in cognitive contact with reality. This contact implies that the known proposition is true.

Nonetheless, some theorists have also proposed that truth may not always be necessary for knowledge. In this regard, a justified belief that is widely held within a community may be seen as knowledge even if it is false. Another doubt is due to some cases in everyday discourse where the term is used to express a strong conviction. For example, a strong supporter of Hillary Clinton might claim that they "knew" she would win the 2016 US presidential election. But such examples have not convinced many theorists. Instead, this claim is probably better understood as an exaggeration than as an actual knowledge claim. Such doubts are minority opinions and most theorists accept that knowledge implies truth.

Belief

Knowledge is usually understood as a form of belief: to know something implies that one believes it. This means that the agent accepts the proposition in question. However, not all theorists agree with this. This rejection is often motivated by contrasts found in ordinary language suggesting that the two are mutually exclusive, as in "I do not believe that; I know it." Some see this difference in the strength of the agent's conviction by holding that belief is a weak affirmation while knowledge entails a strong conviction. However, the more common approach to such expressions is to understand them not literally but through paraphrases, for example, as "I do not merely believe that; I know it." This way, the expression is compatible with seeing knowledge as a form of belief. A more abstract counterargument defines "believing" as "thinking with assent" or as a "commitment to something being true" and goes on to show that this applies to knowledge as well. A different approach, sometimes termed "knowledge first", upholds the difference between belief and knowledge based on the idea that knowledge is unanalyzable and therefore cannot be understood in terms of the elements that compose it. But opponents of this view may simply reject it by denying that knowledge is unanalyzable. So despite the mentioned arguments, there is still wide agreement that knowledge is a form of belief.

A few epistemologists hold that true belief by itself is sufficient for knowledge. However, this view is not very popular and most theorists accept that merely true beliefs do not constitute knowledge. This is based on various counterexamples, in which a person holds a true belief in virtue of faulty reasoning or a lucky guess.

Justification

The third component of the JTB definition is justification. It is based on the idea that having a true belief is not sufficient for knowledge, that knowledge implies more than just being right about something. So beliefs based on dogmatic opinions, blind guesses, or erroneous reasoning do not constitute knowledge even if they are true. For example, if someone believes that Machu Picchu is in Peru because both expressions end with the letter u, this true belief does not constitute knowledge. In this regard, a central question in epistemology concerns the additional requirements for turning a true belief into knowledge. There are many suggestions and deep disagreements within the academic literature about what these additional requirements are. A common approach is to affirm that the additional requirement is justification. So true beliefs that are based on good justification constitute knowledge, as when the belief about Machu Picchu is based on the individual's vivid recent memory of traveling through Peru and visiting Machu Picchu there. This line of thought has led many theorists to the conclusion that knowledge is nothing but true belief that is justified.

However, it has been argued that some knowledge claims in everyday discourse do not require justification. For example, when a teacher is asked how many of his students knew that Vienna is the capital of Austria in their last geography test, he may just cite the number of correct responses given without concern for whether these responses were based on justified beliefs. Some theorists characterize this type of knowledge as "lightweight knowledge" in order to exclude it from their discussion of knowledge.

A further question in this regard is how strong the justification needs to be for a true belief to amount to knowledge. For example, when the agent has only weak evidence for a belief, it may be reasonable to hold that belief even though no knowledge is involved. Some theorists hold that the justification has to be certain or infallible. This means that the justification of the belief guarantees the belief's truth, similar to how in a deductive argument the truth of the premises ensures the truth of the conclusion. However, this view severely limits the extension of knowledge to very few beliefs, if any. Such a conception of justification threatens to lead to a full-blown skepticism denying that we know anything at all. The more common approach in the contemporary discourse is to allow fallible justification that makes the justified belief rationally convincing without ensuring its truth. This is similar to how ampliative arguments work, in contrast to deductive arguments. The problem for fallibilism is that the strength of justification comes in degrees: the evidence may make it somewhat likely, quite likely, or extremely likely that the belief is true. This poses the question of how strong the justification needs to be in the case of knowledge. The required degree may also depend on the context: knowledge claims in low-stakes situations, such as among drinking buddies, have lower standards than knowledge claims in high-stakes situations, such as among experts in the academic discourse.
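
One rough way to make this contrast precise is probabilistic (an illustrative sketch only; the threshold t is my assumption, not a value from the literature): infallibilism requires that the evidence e guarantee the truth of p, while fallibilism requires only that e make p sufficiently likely.

    \text{infallibilism: } P(p \mid e) = 1
    \text{fallibilism: } t \leq P(p \mid e) < 1 \text{ for some high threshold } t

On the fallibilist reading, the open question is where, and how context-sensitively, the threshold t should be set.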

Internalism and externalism

Besides the issue about the strength of justification, there is also the more general question about its nature. Theories of justification are often divided into internalism and externalism depending on whether only factors internal to the subject are responsible for justification. Commonly, an internalist conception is defended. This means that internal mental states of the subject justify beliefs. These states are usually understood as reasons or evidence possessed, like perceptual experiences, memories, rational intuition, or other justified beliefs.

One particular form of this position is evidentialism, which bases justification exclusively on the possession of evidence. It can be expressed by the claim that "Person S is justified in believing proposition p at time t if and only if S's evidence for p at t supports believing p". Some philosophers stipulate as an additional requirement to the possession of evidence that the belief is actually based on this evidence, i.e. that there is some kind of mental or causal link between the evidence and belief. This is often referred to as "doxastic justification". In contrast to this, having sufficient evidence for a true belief but coming to hold this belief based on superstition is a case of mere "propositional justification". Such a belief may not amount to knowledge even though the relevant evidence is possessed. A particularly strict version of internalism is access internalism. It holds that only states introspectively available to the subject's experience are relevant to justification. This means that deep unconscious states cannot act as justification. A closely related issue concerns the question of the internal structure of these states or how they are linked to each other. According to foundationalists, some mental states constitute basic reasons that can justify without being themselves in need of justification. Coherentists defend a more egalitarian position: what matters is not a privileged epistemic status of some special states but the relation to all other states. This means that a belief is justified if it fits into the person's full network of beliefs as a coherent part.

Philosophers have commonly espoused an internalist conception of justification. Various problems with internalism have led some contemporary philosophers to modify the internalist account of knowledge by using externalist conceptions of justification. Externalists include factors external to the person as well, such as the existence of a causal relation to the believed fact or to a reliable belief formation process. A prominent theory in this field is reliabilism, the theory that a true belief is justified if it was brought about by a reliable cognitive process that is likely to result in true beliefs. On this view, a true belief based on standard perceptual processes or good reasoning constitutes knowledge. But this is not the case if wishful thinking or emotional attachment is the cause.

However, not all externalists understand their theories as versions of the JTB account of knowledge. Some theorists defend an externalist conception of justification while others use a narrow notion of "justification" and understand externalism as implying that justification is not required for knowledge, for example, that the feature of being produced by a reliable process is not a form of justification but its surrogate. The same ambiguity is also found in the causal theory of knowledge.

In ancient philosophy

In Plato's Theaetetus, Socrates considers a number of theories as to what knowledge is, first excluding merely true belief as an adequate account. For example, an ill person with no medical training, but with a generally optimistic attitude, might believe that he will recover from his illness quickly. Nevertheless, even if this belief turned out to be true, the patient would not have known that he would get well since his belief lacked justification. The last account that Plato considers is that knowledge is true belief "with an account" that explains or defines it in some way. According to Edmund Gettier, the view that Plato is describing here is that knowledge is justified true belief. The truth of this view would entail that in order to know that a given proposition is true, one must not only believe the relevant true proposition, but must also have a good reason for doing so. One implication of this would be that no one would gain knowledge just by believing something that happened to be true.

Gettier problem and cognitive luck

A version of the JTB definition was already considered and rejected in Plato's Theaetetus. The definition came under severe criticism in the 20th century, mainly due to a series of counterexamples given by Edmund Gettier. This is commonly known as the Gettier problem and includes cases in which a justified belief is true because of lucky circumstances, i.e. where the person's reason for the belief is irrelevant to its truth. A well-known example involves a person driving along a country road lined with many fake barn facades. The driver, unaware of this, happens to stop in front of the only real barn. The idea of this case is that the driver has a justified true belief that the object in front of them is a barn even though this does not constitute knowledge. The reason is that it was just a lucky coincidence that they stopped here and not in front of one of the many fake barns, in which case they would not have been able to tell the difference either.

This and similar counterexamples aim to show that justification alone is not sufficient, i.e. that there are some justified true beliefs that do not amount to knowledge. A common explanation of such cases is based on cognitive or epistemic luck. The idea is that it is a lucky coincidence or a fortuitous accident that the justified belief is true. So the justification is in some sense faulty, not because it relies on weak evidence, but because the justification is not responsible for the belief's truth. Various theorists have responded to this problem by talking about warranted true belief instead. In this regard, warrant implies that the corresponding belief is not accepted on the basis of mere cognitive luck or accident. However, not everyone agrees that this and similar cases actually constitute counterexamples to the JTB definition: some have argued that, in these cases, the agent actually knows the fact in question, e.g. that the driver in the fake barn example knows that the object in front of them is a barn despite the luck involved. A similar defense is based on the idea that to insist on the absence of cognitive luck leads to a form of infallibilism about justification, i.e. that justification has to guarantee the belief's truth. However, most knowledge claims are not that strict and allow instead that the justification involved may be fallible.

The Gettier problem

An Euler diagram representing a version of the Justified True Belief definition of knowledge that is adapted to the Gettier problem. This problem gives us reason to think that not all justified true beliefs constitute knowledge.

Edmund Gettier is best known for his 1963 paper entitled "Is Justified True Belief Knowledge?", which called into question the common conception of knowledge as justified true belief. In just two and a half pages, Gettier argued that there are situations in which one's belief may be justified and true, yet fail to count as knowledge. That is, Gettier contended that while justified belief in a true proposition is necessary for that proposition to be known, it is not sufficient.

According to Gettier, there are certain circumstances in which one does not have knowledge, even when all of the above conditions are met. Gettier proposed two thought experiments, which have become known as Gettier cases, as counterexamples to the classical account of knowledge. One of the cases involves two men, Smith and Jones, who are awaiting the results of their applications for the same job. Each man has ten coins in his pocket. Smith has excellent reasons to believe that Jones will get the job (the head of the company told him); and furthermore, Smith knows that Jones has ten coins in his pocket (he recently counted them). From this Smith infers: "The man who will get the job has ten coins in his pocket." However, Smith is unaware that he also has ten coins in his own pocket. Furthermore, it turns out that Smith, not Jones, is going to get the job. While Smith has strong evidence to believe that Jones will get the job, he is wrong. Smith therefore has a justified true belief that the man who will get the job has ten coins in his pocket; however, according to Gettier, Smith does not know that the man who will get the job has ten coins in his pocket, because Smith's belief is "...true by virtue of the number of coins in Jones's pocket, while Smith does not know how many coins are in Smith's pocket, and bases his belief... on a count of the coins in Jones's pocket, whom he falsely believes to be the man who will get the job." These cases fail to be knowledge because the subject's belief is justified, but only happens to be true by virtue of luck. In other words, he made the correct choice (believing that the man who will get the job has ten coins in his pocket) for the wrong reasons. Gettier then goes on to offer a second similar case, providing the means by which the specifics of his examples can be generalized into a broader problem for defining knowledge in terms of justified true belief.

There have been various notable responses to the Gettier problem. Typically, they have involved substantial attempts to provide a new definition of knowledge that is not susceptible to Gettier-style objections, either by providing an additional fourth condition that justified true beliefs must meet to constitute knowledge, or proposing a completely new set of necessary and sufficient conditions for knowledge. While there have been far too many published responses for all of them to be mentioned, some of the most notable responses are discussed below.

Responses and alternative definitions

The problems with the JTB definition of knowledge have provoked diverse responses. Strictly speaking, most contemporary philosophers deny the JTB definition of knowledge, at least in its exact form. Edmund Gettier's counterexamples were very influential in shaping this contemporary outlook. They usually involve some form of cognitive luck whereby the justification is not responsible or relevant to the belief being true. Some responses stay within the standard definition and try to make smaller modifications to mitigate the problems, for example, concerning how justification is defined. Others see the problems as insurmountable and propose radical new conceptions of knowledge, many of which do not require justification at all. Between these two extremes, various epistemologists have settled for a moderate departure from the standard definition. They usually accept that it is a step in the right direction: justified true belief is necessary for knowledge. However, they deny that it is sufficient. This means that knowledge always implies justified true belief but that not every justified true belief constitutes knowledge. Instead, they propose an additional fourth criterion needed for sufficiency. The resulting definitions are sometimes referred to as JTB+X accounts of knowledge. A closely related approach is to replace justification with warrant, which is then defined as justification together with whatever else is needed to amount to knowledge.

The goal of introducing an additional criterion is to avoid counterexamples in the form of Gettier cases. Numerous suggestions for such a fourth feature have been made, for example, the requirement that the belief is not inferred from a falsehood. While alternative accounts are often successful at avoiding many specific cases, it has been argued that most of them fail to avoid all counterexamples because they leave open the possibility of cognitive luck. So while introducing an additional criterion may help exclude various known examples of cognitive luck, the resulting definition is often still susceptible to new cases. The only way to avoid this problem is to ensure that the additional criterion excludes cognitive luck. This is often understood in the sense that the presence of the feature has to entail the belief's truth. For if it is possible for a belief to have this feature without being true, then cases of cognitive luck remain possible in which a belief has this feature and happens to be true, but not because of this feature. One way to exclude such cases is to define knowledge as non-accidentally true belief. A similar approach introduces an anti-luck condition: the belief is not true merely by luck. But it is not clear how useful these definitions are unless a more precise account of "non-accidental" or "absence of luck" can be provided. This vagueness makes the application to non-obvious cases difficult. A closely related and more precise definition requires that the belief is safely formed, i.e. that the process responsible would not have produced the corresponding belief if it was not true. This means that, whatever the given situation is like, this process tracks the fact. Richard Kirkham suggests that our definition of knowledge requires that the evidence for the belief necessitates its truth.

Defeasibility theory

Defeasibility theories of knowledge introduce an additional condition based on defeasibility in order to avoid the different problems faced by the JTB accounts. They emphasize that, besides having a good reason for holding the belief, it is also necessary that there is no defeating evidence against it. This is usually understood in a very wide sense: a justified true belief does not amount to knowledge when there is a truth that would defeat the justification of the belief if the person knew about it. This wide sense is necessary to avoid Gettier cases of cognitive luck. So in the barn example above, it explains that the belief does not amount to knowledge because, if the person were aware of the prevalence of fake barns in this area, this awareness would act as a defeater of the belief that this one particular building is a real barn. In this way, the defeasibility theory can identify accidentally justified beliefs as unwarranted. One of its problems is that it excludes too many beliefs from knowledge. This concerns specifically misleading defeaters, i.e. truths that would give the agent the false impression that one of their reasons was defeated. According to Keith Lehrer, cases of cognitive luck can be avoided by requiring that the justification does not depend on any false statement. On his view, "S knows that p if and only if (i) it is true that p, (ii) S accepts that p, (iii) S is justified in accepting that p, and (iv) S is justified in accepting p in some way that does not depend on any false statement".

Reliabilism and causal theory

Reliabilistic and causal theories are forms of externalism. Some versions only modify the JTB definition of knowledge by reconceptualizing what justification means. Others constitute further departures by holding that justification is not necessary, that reliability or the right causal connections act as replacements of justification. According to reliabilism, a true belief constitutes knowledge if it was produced by a reliable process or method. Putative examples of reliable processes are regular perception under normal circumstances and the scientific method. Defenders of this approach affirm that reliability acts as a safeguard against lucky coincidence. Virtue reliabilism is a special form of reliabilism in which intellectual virtues, such as properly functioning cognitive faculties, are responsible for producing knowledge.

Reliabilists have struggled to give an explicit and plausible account of when a process is reliable. One approach defines it through a high success rate: a belief-forming process is reliable within a certain area if it produces a high ratio of true beliefs in this area. Another approach understands reliability in terms of how the process would fare in counterfactual scenarios. Arguments against both of these definitions have been presented. A further criticism is based on the claim that reliability is not sufficient in cases where the agent is not in possession of any reasons justifying the belief even though the responsible process is reliable.
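
The success-rate approach admits a simple quantitative gloss (my illustrative formulation; the threshold θ is an assumption, not a value from the literature):

    \mathrm{Rel}(M) \;=\; \frac{\#\{\text{true beliefs produced by } M\}}{\#\{\text{beliefs produced by } M\}} \;\geq\; \theta

The counterfactual approach replaces this actual-world ratio with how the method M would perform across relevant alternative scenarios.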

The causal theory of knowledge holds that the believed fact has to cause the true belief in the right way for the belief to amount to knowledge. For example, the belief that there is a bird in the tree may constitute knowledge if the bird and the tree caused the corresponding perception and belief. The causal connection helps to avoid some cases of cognitive luck since the belief is not accidental anymore. However, it does not avoid all of them, as can be seen in the fake barn example above, where the perception of the real barn caused the belief about the real barn even though it was a lucky coincidence. Another shortcoming of the causal theory is that various beliefs are knowledge even though a causal connection to the represented facts does not exist or may not be possible. This is the case for beliefs in mathematical propositions, like that "2 + 2 = 4", and in certain general propositions, like that "no elephant is smaller than a kitten".

Virtue-theoretic definition

Virtue-theoretic approaches try to avoid the problem of cognitive luck by seeing knowledge as a manifestation of intellectual virtues. On this view, virtues are properties of a person that aim at some good. In the case of intellectual virtues, the principal good is truth. In this regard, Linda Zagzebski defines knowledge as "cognitive contact with reality arising out of acts of intellectual virtue". A closely related approach understands intellectual virtues in analogy to the successful manifestation of skills. This is helpful to clarify how cognitive luck is avoided. For example, an archer may hit the bull's eye due to luck or because of their skill. Based on this line of thought, Ernest Sosa defines knowledge as a belief that "is true in a way manifesting, or attributable to, the believer's skill".

"No false premises" response

One of the earliest suggested replies to Gettier, and perhaps the most intuitive way to respond to the Gettier problem, is the "no false premises" response, sometimes also called the "no false lemmas" response. Most notably, this reply was defended by David Malet Armstrong in his 1973 book, Belief, Truth, and Knowledge. The basic form of the response is to assert that the person who holds the justified true belief (for instance, Smith in Gettier's first case) made the mistake of inferring a true belief (e.g. "The person who will get the job has ten coins in his pocket") from a false belief (e.g. "Jones will get the job"). Proponents of this response therefore propose that we add a fourth condition, so that the four conditions are jointly sufficient for knowledge, namely, "the justified true belief must not have been inferred from a false belief".

This reply to the Gettier problem is simple, direct, and appears to isolate what goes wrong in forming the relevant beliefs in Gettier cases. However, the general consensus is that it fails. This is because while the original formulation by Gettier includes a person who infers a true belief from a false belief, there are many alternate formulations in which this is not the case. Take, for instance, a case where an observer sees what appears to be a dog walking through a park and forms the belief "There is a dog in the park". In fact, it turns out that the observer is not looking at a dog at all, but rather a very lifelike robotic facsimile of a dog. However, unbeknownst to the observer, there is in fact a dog in the park, albeit one standing behind the robotic facsimile of a dog. Since the belief "There is a dog in the park" does not involve a faulty inference, but is instead formed as the result of misleading perceptual information, there is no inference made from a false premise. It therefore seems that while the observer does in fact have a true belief that her perceptual experience provides justification for holding, she does not actually know that there is a dog in the park. Instead, she just seems to have formed a "lucky" justified true belief.

Infallibilist response

One less common response to the Gettier problem is defended by Richard Kirkham, who has argued that the only definition of knowledge that could ever be immune to all counterexamples is the infallibilist definition. To qualify as an item of knowledge, goes the theory, a belief must not only be true and justified, the justification of the belief must necessitate its truth. In other words, the justification for the belief must be infallible.

While infallibilism is indeed an internally coherent response to the Gettier problem, it is incompatible with our everyday knowledge ascriptions. For instance, as the Cartesian skeptic will point out, all of my perceptual experiences are compatible with a skeptical scenario in which I am completely deceived about the existence of the external world, in which case most (if not all) of my beliefs would be false. The typical conclusion to draw from this is that it is possible to doubt most (if not all) of my everyday beliefs, meaning that if I am indeed justified in holding those beliefs, that justification is not infallible. For the justification to be infallible, my reasons for holding my everyday beliefs would need to completely exclude the possibility that those beliefs were false. Consequently, if a belief must be infallibly justified in order to constitute knowledge, then it must be the case that we are mistaken in most (if not all) instances in which we claim to have knowledge in everyday situations. While it is indeed possible to bite the bullet and accept this conclusion, most philosophers find it implausible to suggest that we know nothing or almost nothing, and therefore reject the infallibilist response as collapsing into radical skepticism.

Tracking condition

Robert Nozick has offered a definition of knowledge according to which S knows that P if and only if:

  • P is true;
  • S believes that P;
  • if P were false, S would not believe that P;
  • if P were true, S would believe that P.
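
Conditions three and four are subjunctive (counterfactual) conditionals. Using the standard box-arrow notation for "if it were the case that ..., it would be the case that ...", they are often rendered in LaTeX as:

    \neg P \;\Box\!\rightarrow\; \neg B(S, P) \qquad \text{(sensitivity)}
    P \;\Box\!\rightarrow\; B(S, P) \qquad \text{(adherence)}

"Sensitivity" and "adherence" are the labels commonly used for these two conditions in the literature.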

Nozick argues that the third of these conditions serves to address cases of the sort described by Gettier. Nozick further claims this condition addresses a case of the sort described by D.M. Armstrong: A father believes his daughter is innocent of committing a particular crime, both because of faith in his baby girl and (now) because he has seen presented in the courtroom a conclusive demonstration of his daughter's innocence. His belief via the method of the courtroom satisfies the four subjunctive conditions, but his faith-based belief does not. If his daughter were guilty, he would still believe her innocence, on the basis of faith in his daughter; this would violate the third condition.

The British philosopher Simon Blackburn has criticized this formulation by suggesting that we do not want to accept as knowledge beliefs which, while they "track the truth" (as Nozick's account requires), are not held for appropriate reasons. In addition to this, externalist accounts of knowledge, such as Nozick's, are often forced to reject closure in cases where it is intuitively valid.

An account similar to Nozick's has also been offered by Fred Dretske, although his view focuses more on relevant alternatives that might have obtained if things had turned out differently. Views of both the Nozick variety and the Dretske variety have faced serious problems suggested by Saul Kripke.

Knowledge-first response

Timothy Williamson has advanced a theory of knowledge according to which knowledge is not justified true belief plus some extra conditions, but primary. In his book Knowledge and its Limits, Williamson argues that the concept of knowledge cannot be broken down into a set of other concepts through analysis—instead, it is sui generis. Thus, according to Williamson, justification, truth, and belief are necessary but not sufficient for knowledge. Williamson is also known for being one of the only philosophers who take knowledge to be a mental state; most epistemologists assert that belief (as opposed to knowledge) is a mental state. As such, Williamson's claim has been seen to be highly counterintuitive.

Merely true belief

In his 1991 paper, "Knowledge is Merely True Belief", Crispin Sartwell argues that justification is an unnecessary criterion for knowledge. He argues that common counterexample cases of "lucky guesses" are not in fact beliefs at all, as "no belief stands in isolation... the claim that someone believes something entails that that person has some degree of serious commitment to the claim." He gives the example of a mathematician working on a problem who subconsciously, in a "flash of insight", sees the answer, but is unable to comprehensively justify his belief, and says that in such a case the mathematician still knows the answer, despite not being able to give a step-by-step explanation of how he got to it. He also argues that if beliefs require justification to constitute knowledge, then foundational beliefs can never be knowledge, and, as these are the beliefs upon which all our other beliefs depend for their justification, we can thus never have knowledge at all.

Nyaya philosophy

Nyaya is one of the six traditional schools of Indian philosophy, with a particular interest in epistemology. The Indian philosopher B.K. Matilal drew on the Navya-Nyāya fallibilist tradition to respond to the Gettier problem. Nyaya theory distinguishes between knowing p and knowing that one knows p; these are different events, with different causal conditions. The second level is a sort of implicit inference that usually follows immediately upon the episode of knowing p (knowledge simpliciter). The Gettier case is examined by referring to a view of Gangesha Upadhyaya (late 12th century), who takes any true belief to be knowledge; thus a true belief acquired through a wrong route may just be regarded as knowledge simpliciter on this view. The question of justification arises only at the second level, when one considers the knowledge-hood of the acquired belief. Initially there is a lack of uncertainty, so the acquired belief stands as a true belief. But at the very next moment, when the hearer is about to embark upon the venture of knowing whether he knows p, doubts may arise. "If, in some Gettier-like cases, I am wrong in my inference about the knowledge-hood of the given occurrent belief (for the evidence may be pseudo-evidence), then I am mistaken about the truth of my belief—and this is in accordance with Nyaya fallibilism: not all knowledge-claims can be sustained."

Other definitions

According to J. L. Austin, to know just means to be able to make correct assertions about the subject in question. On this pragmatic view, the internal mental states of the knower do not matter.

Philosopher Barry Allen also downplayed the role of mental states in knowledge and defined knowledge as "superlative artifactual performance", that is, exemplary performance with artifacts, including language but also technological objects like bridges, satellites, and diagrams. Allen criticized typical epistemology for its "propositional bias" (treating propositions as prototypical knowledge), its "analytic bias" (treating knowledge as prototypically mental or conceptual), and its "discursive bias" (treating knowledge as prototypically discursive). He considered knowledge to be too diverse to characterize in terms of necessary and sufficient conditions. He claimed not to be substituting knowledge-how for knowledge-that, but instead proposing a definition that is more general than both. For Allen, knowledge is "deeper than language, different from belief, more valuable than truth".

A different approach characterizes knowledge in relation to the role it plays, for example, regarding the reasons it provides or constitutes for doing or thinking something. In this sense, it can be understood as what entitles the agent to assert a fact, to use this fact as a premise when reasoning, or to act as a trustworthy informant concerning this fact. This definition has been adopted in some argumentation theory.

Paul Silva's "awareness first" epistemology posits that the common core of knowledge is awareness, providing a definition that accounts for both beliefless knowledge and knowledge grounded in belief.

Within anthropology, knowledge is often defined in a very broad sense as equivalent to understanding or culture. This includes the idea that knowledge consists in the affirmation of meaning contents and depends on a substrate, such as a brain. Knowledge characterizes social groups in the sense that different individuals belonging to the same social niche tend to be very similar concerning what they know and how they organize information. This topic is of specific interest to the subfield known as the anthropology of knowledge, which uses this and similar definitions to study how knowledge is reproduced and how it changes on the social level in different cultural contexts.

Non-propositional knowledge

Propositional knowledge, also termed factual knowledge or knowledge-that, is the most paradigmatic form of knowledge in analytic philosophy, and most definitions of knowledge in philosophy have this form in mind. It refers to the possession of certain information. The distinction from other types of knowledge is often drawn based on the differences between the linguistic formulations used to express them. It is termed knowledge-that since it can usually be expressed using a that-clause, as in "I know that Dave is at home". In everyday discourse, the term "knowledge" can also refer to various other phenomena as forms of non-propositional knowledge. Some theorists distinguish knowledge-wh from knowledge-that. Knowledge-wh is expressed using a wh-clause, such as knowing why smoking causes cancer or knowing who killed John F. Kennedy. However, the more common approach is to understand knowledge-wh as a type of knowledge-that, since the corresponding expressions can usually be paraphrased using a that-clause.

A clearer contrast is between knowledge-that and knowledge-how (know-how). Know-how is also referred to as practical knowledge or ability knowledge. It is expressed in formulations like "I know how to ride a bike." All forms of practical knowledge involve some type of competence, i.e., having the ability to do something. So to know how to play the guitar means to have the competence to play it, and to know the multiplication table is to be able to recite products of numbers. For this reason, know-how may be defined as having the corresponding competence, skills, or abilities. Some forms of know-how include knowledge-that as well, and some theorists even argue that practical and propositional knowledge are of the same type. However, propositional knowledge is usually ascribed only to humans, while practical knowledge is more common in the animal kingdom. For example, an ant knows how to walk but it presumably does not know that it is currently walking in someone's kitchen. The more common view is, therefore, to see knowledge-how and knowledge-that as two distinct types of knowledge.

Another often-discussed alternative type of knowledge is knowledge by acquaintance. It is defined as a direct familiarity with an individual, often with a person, and only arises if one has met this individual personally. In this regard, it constitutes a relation not to a proposition but to an object. Acquaintance implies that one has had a direct perceptual experience with the object of knowledge and is therefore familiar with it. Bertrand Russell contrasts it with knowledge by description, which refers to knowledge of things that the subject has not immediately experienced, such as learning through a documentary about a country one has not yet visited. Knowledge by acquaintance can be expressed using a direct object, such as, "I know Dave." It differs in this regard from knowledge-that since no that-clause is needed. One can know facts about an individual without direct acquaintance with that individual. For example, the reader may know that Napoleon was a French military leader without knowing Napoleon personally. There is controversy whether knowledge by acquaintance is a form of non-propositional knowledge. Some theorists deny this and contend that it is just a grammatically different way of expressing propositional knowledge.
