
Wednesday, July 23, 2025

Necessity and sufficiency

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Necessity_and_sufficiency

In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement: "If P then Q", Q is necessary for P, because the truth of Q is "necessarily" guaranteed by the truth of P. (Equivalently, it is impossible to have P without Q, or the falsity of Q ensures the falsity of P.) Similarly, P is sufficient for Q, because P being true always or "sufficiently" implies that Q is true, but P not being true does not always imply that Q is not true.

In general, a necessary condition is one (possibly one of several conditions) that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition.[3] The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false.

In ordinary English (also natural language) "necessary" and "sufficient" often indicate relations between conditions or states of affairs, not statements. For example, being round is a necessary condition for being a circle, but is not sufficient since ovals and ellipses are round but not circles – while being a circle is a sufficient condition for being round. Any conditional statement consists of at least one sufficient condition and at least one necessary condition.

In data analytics, necessity and sufficiency can refer to different causal logics, where necessary condition analysis and qualitative comparative analysis can be used as analytical techniques for examining necessity and sufficiency of conditions for a particular outcome of interest.

Definitions

In the conditional statement, "if S, then N", the expression represented by S is called the antecedent, and the expression represented by N is called the consequent. This conditional statement may be written in several equivalent ways, such as "N if S", "S only if N", "S implies N", "N is implied by S", S → N, S ⇒ N, and "N whenever S".

In the above situation of "N whenever S", N is said to be a necessary condition for S. In common language, this is equivalent to saying that if the conditional statement is a true statement, then the consequent N must be true—if S is to be true (see third column of "truth table" immediately below). In other words, the antecedent S cannot be true without N being true. For example, in order for someone to be called Socrates, it is necessary for that someone to be named. Similarly, in order for human beings to live, it is necessary that they have air.

One can also say S is a sufficient condition for N (refer again to the third column of the truth table immediately below). If the conditional statement is true, then if S is true, N must be true; whereas if the conditional statement is true and N is true, then S may be true or be false. In common terms, "the truth of S guarantees the truth of N". For example, carrying on from the previous example, one can say that knowing that someone is called Socrates is sufficient to know that someone has a name.

A necessary and sufficient condition requires that both of the implications S ⇒ N and N ⇒ S (the latter of which can also be written as S ⇐ N) hold. The first implication suggests that S is a sufficient condition for N, while the second implication suggests that S is a necessary condition for N. This is expressed as "S is necessary and sufficient for N", "S if and only if N", or S ⇔ N.

Truth table
S   N   S ⇒ N   N ⇒ S   S ⇔ N
T   T   T       T       T
T   F   F       T       F
F   T   T       F       F
F   F   T       T       T
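
The table can be checked mechanically. Below is a minimal Python sketch (Python and the layout are this post's own illustration, not part of the article), writing material implication "A implies B" as (not A) or B:

  # Print the truth table for S ⇒ N, N ⇒ S and S ⇔ N over all four combinations.
  print(f"{'S':<6} {'N':<6} {'S=>N':<6} {'N=>S':<6} {'S<=>N':<6}")
  for S in (True, False):
      for N in (True, False):
          s_implies_n = (not S) or N
          n_implies_s = (not N) or S
          s_iff_n = (S == N)
          print(f"{str(S):<6} {str(N):<6} {str(s_implies_n):<6} {str(n_implies_s):<6} {str(s_iff_n):<6}")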

Necessity

The sun being above the horizon is a necessary condition for direct sunlight; but it is not a sufficient condition, as something else may be casting a shadow, e.g., the moon in the case of an eclipse.

The assertion that Q is necessary for P is colloquially equivalent to "P cannot be true unless Q is true" or "if Q is false, then P is false". By contraposition, this is the same thing as "whenever P is true, so is Q".

The logical relation between P and Q is expressed as "if P, then Q" and denoted "P ⇒ Q" (P implies Q). It may also be expressed as any of "P only if Q", "Q, if P", "Q whenever P", and "Q when P". One often finds, in mathematical prose for instance, several necessary conditions that, taken together, constitute a sufficient condition (i.e., individually necessary and jointly sufficient), as shown in Example 5.

Example 1
For it to be true that "John is a bachelor", it is necessary that it be also true that he is
  1. unmarried,
  2. male,
  3. adult,
since to state "John is a bachelor" implies John has each of those three additional predicates.
Example 2
For the whole numbers greater than two, being odd is necessary to being prime, since two is the only whole number that is both even and prime.
Example 3
Consider thunder, the sound caused by lightning. One says that thunder is necessary for lightning, since lightning never occurs without thunder. Whenever there is lightning, there is thunder. The thunder does not cause the lightning (since lightning causes thunder), but because lightning always comes with thunder, we say that thunder is necessary for lightning. (That is, in its formal sense, necessity doesn't imply causality.)
Example 4
Being at least 30 years old is necessary for serving in the U.S. Senate. If you are under 30 years old, then it is impossible for you to be a senator. That is, if you are a senator, it follows that you must be at least 30 years old.
Example 5
In algebra, for some set S together with an operation ⋆ to form a group, it is necessary that ⋆ be associative. It is also necessary that S include a special element e such that for every x in S, it is the case that e ⋆ x and x ⋆ e both equal x. It is also necessary that for every x in S there exist a corresponding element x′, such that both x ⋆ x′ and x′ ⋆ x equal the special element e. None of these three necessary conditions by itself is sufficient, but the conjunction of the three is.
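
Example 5 can be verified mechanically for small finite sets. Here is a rough Python sketch (the set {0, 1, 2} and the two operations are illustrative choices made for this post) that tests the three necessary conditions and, implicitly, their conjunction:

  from itertools import product

  def is_group(elements, op):
      # Necessary condition 1: the operation is associative.
      associative = all(op(op(a, b), c) == op(a, op(b, c))
                        for a, b, c in product(elements, repeat=3))
      # Necessary condition 2: there is an identity element e.
      identities = [e for e in elements
                    if all(op(e, x) == x and op(x, e) == x for x in elements)]
      if not (associative and identities):
          return False
      e = identities[0]
      # Necessary condition 3: every element has an inverse with respect to e.
      return all(any(op(x, y) == e and op(y, x) == e for y in elements)
                 for x in elements)

  Z3 = [0, 1, 2]
  print(is_group(Z3, lambda a, b: (a + b) % 3))  # True: addition modulo 3 satisfies all three
  print(is_group(Z3, lambda a, b: (a * b) % 3))  # False: 0 has no multiplicative inverse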

Sufficiency

That a train runs on schedule is a sufficient condition for a traveller arriving on time (if one boards the train and it departs on time, then one will arrive on time); but it is not a necessary condition, since there are other ways to travel (if the train does not run to time, one could still arrive on time through other means of transport).

If P is sufficient for Q, then knowing P to be true is adequate grounds to conclude that Q is true; however, knowing P to be false gives no grounds to conclude that Q is false.

The logical relation is, as before, expressed as "if P, then Q" or "P ⇒ Q". This can also be expressed as "P only if Q", "P implies Q" or several other variants. It may be the case that several sufficient conditions, when taken together, constitute a single necessary condition (i.e., individually sufficient and jointly necessary), as illustrated in example 5.

Example 1
"John is a king" implies that John is male. So knowing that John is a king is sufficient to knowing that he is a male.
Example 2
A number's being divisible by 4 is sufficient (but not necessary) for it to be even, but being divisible by 2 is both sufficient and necessary for it to be even.
Example 3
An occurrence of thunder is a sufficient condition for the occurrence of lightning in the sense that hearing thunder, and unambiguously recognizing it as such, justifies concluding that there has been a lightning bolt.
Example 4
If the U.S. Congress passes a bill, the president's signing of the bill is sufficient to make it law. Note that the case whereby the president did not sign the bill, e.g. through exercising a presidential veto, does not mean that the bill has not become a law (for example, it could still have become a law through a congressional override).
Example 5
That the center of a playing card should be marked with a single large spade (♠) is sufficient for the card to be an ace. Three other sufficient conditions are that the center of the card be marked with a single diamond (♦), heart (♥), or club (♣). None of these conditions is necessary to the card's being an ace, but their disjunction is, since no card can be an ace without fulfilling at least (in fact, exactly) one of these conditions.
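
Example 2 above lends itself to a quick mechanical check. A small Python sketch (the test range is an arbitrary choice for this post):

  nums = range(-100, 101)
  # Sufficient: every multiple of 4 in the range is even.
  print(all(n % 2 == 0 for n in nums if n % 4 == 0))   # True
  # Not necessary: not every even number is a multiple of 4.
  print(all(n % 4 == 0 for n in nums if n % 2 == 0))   # False (2, 6, 10, ... are counterexamples)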

Relationship between necessity and sufficiency

In an Euler diagram of two overlapping sets A and B, being in the purple region (the overlap of A and B) is sufficient for being in A, but not necessary. Being in A is necessary for being in the purple region, but not sufficient. Being in A and being in B is necessary and sufficient for being in the purple region.

A condition can be either necessary or sufficient without being the other. For instance, being a mammal (N) is necessary but not sufficient to being human (S), and that a number is rational (S) is sufficient but not necessary to being a real number (N) (since there are real numbers that are not rational).

A condition can be both necessary and sufficient. For example, at present, "today is the Fourth of July" is a necessary and sufficient condition for "today is Independence Day in the United States". Similarly, a necessary and sufficient condition for invertibility of a matrix M is that M has a nonzero determinant.

Mathematically speaking, necessity and sufficiency are dual to one another. For any statements S and N, the assertion that "N is necessary for S" is equivalent to the assertion that "S is sufficient for N". Another facet of this duality is that, as illustrated above, conjunctions (using "and") of necessary conditions may achieve sufficiency, while disjunctions (using "or") of sufficient conditions may achieve necessity. For a third facet, identify every mathematical predicate N with the set T(N) of objects, events, or statements for which N holds true; then asserting the necessity of N for S is equivalent to claiming that T(N) is a superset of T(S), while asserting the sufficiency of S for N is equivalent to claiming that T(S) is a subset of T(N).
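
The set-theoretic facet of the duality can be illustrated concretely. A short Python sketch (divisibility by 6, 2 and 3 is an illustrative choice of predicates, not taken from the article), where a superset relation encodes necessity and a subset relation encodes sufficiency:

  universe = range(1, 1001)
  T_div6 = {n for n in universe if n % 6 == 0}
  T_div2 = {n for n in universe if n % 2 == 0}
  T_div3 = {n for n in universe if n % 3 == 0}

  # Each of "divisible by 2" and "divisible by 3" is necessary for "divisible by 6":
  print(T_div2 >= T_div6, T_div3 >= T_div6)    # True True  (T(N) is a superset of T(S))
  # The conjunction of the two necessary conditions is sufficient for "divisible by 6":
  print((T_div2 & T_div3) <= T_div6)           # True      (T(S) is a subset of T(N))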

Psychologically speaking, necessity and sufficiency are both key aspects of the classical view of concepts. Under the classical theory of concepts, the way human minds represent a category X gives rise to a set of individually necessary conditions that define X. Together, these individually necessary conditions are sufficient to be X. This contrasts with the probabilistic theory of concepts, which states that no defining feature is necessary or sufficient; rather, categories resemble a family tree structure.

Simultaneous necessity and sufficiency

To say that P is necessary and sufficient for Q is to say two things:

  1. that P is necessary for Q, Q ⇒ P, and that P is sufficient for Q, P ⇒ Q.
  2. equivalently, it may be understood to say that each of P and Q is necessary for the other, (P ⇒ Q) ∧ (Q ⇒ P), which can also be stated as each is sufficient for or implies the other.

One may summarize any, and thus all, of these cases by the statement "P if and only if Q", which is denoted by P ⇔ Q, whereas these cases tell us that P ⇔ Q is identical to (P ⇒ Q) ∧ (Q ⇒ P).

For example, in graph theory a graph G is called bipartite if it is possible to assign to each of its vertices the color black or white in such a way that every edge of G has one endpoint of each color. And for any graph to be bipartite, it is a necessary and sufficient condition that it contain no odd-length cycles. Thus, discovering whether a graph has any odd cycles tells one whether it is bipartite and conversely. A philosopher might characterize this state of affairs thus: "Although the concepts of bipartiteness and absence of odd cycles differ in intension, they have identical extension."
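
This equivalence is also easy to test algorithmically: a breadth-first 2-colouring succeeds exactly when the graph has no odd cycle. A minimal Python sketch (the two sample graphs are made up for this post):

  from collections import deque

  def is_bipartite(adj):
      # Attempt to 2-colour the graph; this succeeds iff it has no odd-length cycle.
      colour = {}
      for start in adj:
          if start in colour:
              continue
          colour[start] = 0
          queue = deque([start])
          while queue:
              u = queue.popleft()
              for v in adj[u]:
                  if v not in colour:
                      colour[v] = 1 - colour[u]
                      queue.append(v)
                  elif colour[v] == colour[u]:
                      return False      # two adjacent vertices share a colour: odd cycle found
      return True

  square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # even cycle: bipartite
  triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # odd cycle: not bipartite
  print(is_bipartite(square), is_bipartite(triangle))     # True False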

In mathematics, theorems are often stated in the form "P is true if and only if Q is true".

Because, as explained in the previous section, necessity of one for the other is equivalent to sufficiency of the other for the first one (e.g., P ⇐ Q is equivalent to Q ⇒ P), if P is necessary and sufficient for Q, then Q is necessary and sufficient for P. We can write P ⇔ Q and, equivalently, Q ⇔ P, and say that the statements "P is true if and only if Q is true" and "Q is true if and only if P is true" are equivalent.

Gettier problem

From Wikipedia, the free encyclopedia

The Gettier problem, in the field of epistemology, is a landmark philosophical problem concerning the understanding of descriptive knowledge. Attributed to American philosopher Edmund Gettier, Gettier-type counterexamples (called "Gettier-cases") challenge the long-held justified true belief (JTB) account of knowledge. The JTB account holds that knowledge is equivalent to justified true belief; if all three conditions (justification, truth, and belief) are met of a given claim, then there is knowledge of that claim. In his 1963 three-page paper titled "Is Justified True Belief Knowledge?", Gettier attempts to illustrate by means of two counterexamples that there are cases where individuals can have a justified, true belief regarding a claim but still fail to know it because the reasons for the belief, while justified, turn out to be false. Thus, Gettier claims to have shown that the JTB account is inadequate because it does not account for all of the necessary and sufficient conditions for knowledge.

The terms "Gettier problem", "Gettier case", or even the adjective "Gettiered", are sometimes used to describe any case in the field of epistemology that purports to repudiate the JTB account of knowledge.

Responses to Gettier's paper have been numerous. Some reject Gettier's examples as inadequate justification, while others seek to adjust the JTB account of knowledge and blunt the force of these counterexamples. Gettier problems have even found their way into sociological experiments in which researchers have studied intuitive responses to Gettier cases from people of varying demographics.

History

The question of what constitutes "knowledge" is as old as philosophy itself. Early instances are found in Plato's dialogues, notably Meno (97a–98b) and Theaetetus. Gettier himself was not actually the first to raise the problem named after him; its existence was acknowledged by both Alexius Meinong and Bertrand Russell, the latter of whom discussed the problem in his book Human Knowledge: Its Scope and Limits. In fact, the problem has been known since the Middle Ages, and both Indian philosopher Dharmottara and scholastic logician Peter of Mantua presented examples of it.

Dharmottara, in his commentary c. 770 AD on Dharmakirti's Ascertainment of Knowledge, gives the following two examples:

A fire has just been lit to roast some meat. The fire hasn't started sending up any smoke, but the smell of the meat has attracted a cloud of insects. From a distance, an observer sees the dark swarm above the horizon and mistakes it for smoke. "There's a fire burning at that spot," the distant observer says. Does the observer know that there is a fire burning in the distance?

A desert traveller is searching for water. He sees, in the valley ahead, a shimmering blue expanse. Unfortunately, it's a mirage. But fortunately, when he reaches the spot where there appeared to be water, there actually is water, hidden under a rock. Did the traveller know, as he stood on the hilltop hallucinating, that there was water ahead?

Various theories of knowledge, including some of the proposals that emerged in Western philosophy after Gettier in 1963, were debated by Indo-Tibetan epistemologists before and after Dharmottara. In particular, Gaṅgeśa in the 14th century advanced a detailed causal theory of knowledge.

Russell's case, called the stopped clock case, goes as follows: Alice sees a clock that reads two o'clock and believes that the time is two o'clock. It is, in fact, two o'clock. There's a problem, however: unknown to Alice, the clock she's looking at stopped twelve hours ago. Alice thus has an accidentally true, justified belief. Russell provides an answer of his own to the problem. Edmund Gettier's formulation of the problem was important as it coincided with the rise of the sort of philosophical naturalism promoted by W. V. O. Quine and others, and was used as a justification for a shift towards externalist theories of justification. John L. Pollock and Joseph Cruz have stated that the Gettier problem has "fundamentally altered the character of contemporary epistemology" and has become "a central problem of epistemology since it poses a clear barrier to analyzing knowledge".

Alvin Plantinga rejects the historical analysis:

According to the inherited lore of the epistemological tribe, the JTB [justified true belief] account enjoyed the status of epistemological orthodoxy until 1963, when it was shattered by Edmund Gettier... Of course, there is an interesting historical irony here: it isn't easy to find many really explicit statements of a JTB analysis of knowledge prior to Gettier. It is almost as if a distinguished critic created a tradition in the very act of destroying it.

Despite this, Plantinga does accept that some philosophers before Gettier have advanced a JTB account of knowledge, specifically C. I. Lewis and A. J. Ayer.

Knowledge as justified true belief (JTB)

The JTB account of knowledge is the claim that knowledge can be conceptually analyzed as justified true belief, which is to say that the meaning of sentences such as "Smith knows that it rained today" can be given with the following set of conditions, which are necessary and sufficient for knowledge to obtain:

A subject S knows that a proposition P is true if and only if:
  1. P is true, and
  2. S believes that P is true, and
  3. S is justified in believing that P is true

The JTB account was first credited to Plato, though Plato argued against this very account of knowledge in the Theaetetus (210a). This account of knowledge is what Gettier subjected to criticism.

Gettier's two original counterexamples

Gettier's paper used counterexamples to argue that there are cases of beliefs that are both true and justified—therefore satisfying all three conditions for knowledge on the JTB account—but that do not appear to be genuine cases of knowledge. Therefore, Gettier argued, his counterexamples show that the JTB account of knowledge is false, and thus that a different conceptual analysis is needed to correctly track what we mean by "knowledge".

Gettier's case is based on two counterexamples to the JTB analysis, both involving a fictional character named Smith. Each relies on two claims. Firstly, that justification is preserved by entailment, and secondly that this applies coherently to Smith's putative "belief". That is, that if Smith is justified in believing P, and Smith realizes that the truth of P entails the truth of Q, then Smith would also be justified in believing Q. Gettier calls these counterexamples "Case I" and "Case II":

Case I

Suppose that Smith and Jones have applied for a certain job. And suppose that Smith has strong evidence for the following conjunctive proposition: (d) Jones is the man who will get the job, and Jones has ten coins in his pocket.

Smith's evidence for (d) might be that the president of the company assured him that Jones would, in the end, be selected and that he, Smith, had counted the coins in Jones's pocket ten minutes ago. Proposition (d) entails: (e) The man who will get the job has ten coins in his pocket.

Let us suppose that Smith sees the entailment from (d) to (e), and accepts (e) on the grounds of (d), for which he has strong evidence. In this case, Smith is clearly justified in believing that (e) is true.

But imagine, further, that unknown to Smith, he himself, not Jones, will get the job. And, also, unknown to Smith, he himself has ten coins in his pocket. Proposition (e) is true, though proposition (d), from which Smith inferred (e), is false. In our example, then, all of the following are true: (i) (e) is true, (ii) Smith believes that (e) is true, and (iii) Smith is justified in believing that (e) is true. But it is equally clear that Smith does not know that (e) is true; for (e) is true in virtue of the number of coins in Smith's pocket, while Smith does not know how many coins are in his pocket, and bases his belief in (e) on a count of the coins in Jones's pocket, whom he falsely believes to be the man who will get the job.

Case II

Smith, it is claimed by the hidden interlocutor, has a justified belief that "Jones owns a Ford". Smith therefore (justifiably) concludes (by the rule of disjunction introduction) that "Jones owns a Ford, or Brown is in Barcelona", even though Smith has no information whatsoever about the location of Brown. In fact, Jones does not own a Ford, but by sheer coincidence, Brown really is in Barcelona. Again, Smith had a belief that was true and justified, but not knowledge.

False premises and generalized Gettier-style problems

In both of Gettier's actual examples (see also counterfactual conditional), the justified true belief came about, if Smith's purported claims are disputable, as the result of entailment (but see also material conditional) from justified false beliefs that "Jones will get the job" (in case I), and that "Jones owns a Ford" (in case II). This led some early responses to Gettier to conclude that the definition of knowledge could be easily adjusted, so that knowledge was justified true belief that does not depend on false premises. The interesting issue that arises is then of how to know which premises are in reality false or true when deriving a conclusion, because as in the Gettier cases, one sees that premises can be very reasonable to believe and be likely true, but unknown to the believer there are confounding factors and extra information that may have been missed while concluding something. The question that arises is therefore to what extent would one have to be able to go about attempting to "prove" all premises in the argument before solidifying a conclusion.

The generalized problem

In a 1966 scenario known as "The sheep in the field", Roderick Chisholm asks us to imagine that someone, X, is standing outside a field looking at something that looks like a sheep (although in fact, it is a dog disguised as a sheep). X believes there is a sheep in the field, and in fact, X is right because there is a sheep behind the hill in the middle of the field. Hence, X has a justified true belief that there is a sheep in the field.

Another scenario by Brian Skyrms is "The Pyromaniac", in which a struck match lights not for the reasons the pyromaniac imagines but because of some unknown "Q radiation".

A different perspective on the issue is given by Alvin Goldman in the "fake barns" scenario (crediting Carl Ginet with the example). In this one, a man is driving in the countryside, and sees what looks exactly like a barn. Accordingly, he thinks that he is seeing a barn. In fact, that is what he is doing. But what he does not know is that the neighborhood generally consists of many fake barns—barn facades designed to look exactly like real barns when viewed from the road. Since, if he had been looking at one of them, he would have been unable to tell the difference, his "knowledge" that he was looking at a barn would seem to be poorly founded.

Objections to the "no false premises" approach

The "no false premises" (or "no false lemmas") solution which was proposed early in the discussion has been criticized, as more general Gettier-style problems were then constructed or contrived in which the justified true belief is said to not seem to be the result of a chain of reasoning from a justified false belief. For example:

After arranging to meet with Mark for help with homework, Luke arrives at the appointed time and place. Walking into Mark's office Luke clearly sees Mark at his desk; Luke immediately forms the belief "Mark is in the room. He can help me with my logic homework". Luke is justified in his belief; he clearly sees Mark at his desk. In fact, it is not Mark that Luke saw, but rather a hologram, perfect in every respect, giving the appearance of Mark diligently grading papers at his desk. Nevertheless, Mark is in the room; he is crouched under his desk reading Frege. Luke's belief that Mark is in the room is true (he is in the room, under his desk) and justified (Mark's hologram is giving the appearance of Mark hard at work).

It is argued that it seems as though Luke does not "know" that Mark is in the room, even though it is claimed he has a justified true belief that Mark is in the room, but it is not nearly so clear that the perceptual belief that "Mark is in the room" was inferred from any premises at all, let alone any false ones, nor led to significant conclusions on its own; Luke did not seem to be reasoning about anything; "Mark is in the room" seems to have been part of what he seemed to see.

Constructing Gettier problems

The main idea behind Gettier's examples is that the justification for the belief is flawed or incorrect, but the belief turns out to be true by sheer luck. Linda Zagzebski shows that any analysis of knowledge in terms of true belief and some other element of justification that is independent from truth, will be liable to Gettier cases. She offers a formula for generating Gettier cases:

(1) start with a case of justified false belief;

(2) amend the example, making the element of justification strong enough for knowledge, but the belief false by sheer chance;

(3) amend the example again, adding another element of chance such that the belief is true, but which leaves the element of justification unchanged;

This will generate an example of a belief that is sufficiently justified (on some analysis of knowledge) to be knowledge, which is true, and which is intuitively not an example of knowledge. In other words, Gettier cases can be generated for any analysis of knowledge that involves a justification criterion and a truth criterion, which are highly correlated but have some degree of independence.

Responses to Gettier

The Gettier problem is formally a problem in first-order logic, but the introduction by Gettier of terms such as believes and knows moves the discussion into the field of epistemology. Here, the sound (true) arguments ascribed to Smith then need also to be valid (believed) and convincing (justified) if they are to issue in the real-world discussion about justified true belief.

Responses to Gettier problems have fallen into three categories:

  • Affirmations of the JTB account: This response affirms the JTB account of knowledge, but rejects Gettier cases. Typically, the proponent of this response rejects Gettier cases because, they say, Gettier cases involve insufficient levels of justification. Knowledge actually requires higher levels of justification than Gettier cases involve.
  • Fourth condition responses: This response accepts the problem raised by Gettier cases, and affirms that JTB is necessary (but not sufficient) for knowledge. A proper account of knowledge, according to this type of view, will contain at least a fourth condition (JTB + ?). With the fourth condition in place, Gettier counterexamples (and other similar counterexamples) will not work, and we will have an adequate set of criteria that are both necessary and sufficient for knowledge.
  • Justification replacement response: This response also accepts the problem raised by Gettier cases. However, instead of invoking a fourth condition, it seeks to replace justification itself with some other third condition (?TB) that will make counterexamples obsolete.

One response, therefore, is that in none of the above cases was the belief justified because it is impossible to justify anything that is not true. Conversely, the fact that a proposition turns out to be untrue is proof that it was not sufficiently justified in the first place. Under this interpretation, the JTB definition of knowledge survives. This shifts the problem to a definition of justification, rather than knowledge. Another view is that justification and non-justification are not in binary opposition. Instead, justification is a matter of degree, with an idea being more or less justified. This account of justification is supported by philosophers such as Paul Boghossian and Stephen Hicks. In common sense usage, an idea can not only be more justified or less justified but it can also be partially justified (Smith's boss told him X) and partially unjustified (Smith's boss is a liar). Gettier's cases involve propositions that were true, believed, but which had weak justification. In case 1, the premise that the testimony of Smith's boss is "strong evidence" is rejected. The case itself depends on the boss being either wrong or deceitful (Jones did not get the job) and therefore unreliable. In case 2, Smith again has accepted a questionable idea (Jones owns a Ford) with unspecified justification. Without justification, both cases do not undermine the JTB account of knowledge.

Other epistemologists accept Gettier's conclusion. Their responses to the Gettier problem, therefore, consist of trying to find alternative analyses of knowledge.

The fourth condition (JTB + G) approaches

The most common direction for this sort of response to take is what might be called a "JTB + G" analysis: that is, an analysis based on finding some fourth condition—a "no-Gettier-problem" condition—which, when added to the conditions of justification, truth, and belief, will yield a set of separately necessary and jointly sufficient conditions.

Goldman's causal theory

One such response is that of Alvin Goldman (1967), who suggested the addition of a causal condition: a subject's belief is justified, for Goldman, only if the truth of a belief has caused the subject to have that belief (in the appropriate way); and for a justified true belief to count as knowledge, the subject must also be able to "correctly reconstruct" (mentally) that causal chain. Goldman's analysis would rule out Gettier cases in that Smith's beliefs are not caused by the truths of those beliefs; it is merely accidental that Smith's beliefs in the Gettier cases happen to be true, or that the prediction made by Smith: "The winner of the job will have 10 coins", on the basis of his putative belief, (see also bundling) came true in this one case. This theory is challenged by the difficulty of giving a principled explanation of how an appropriate causal relationship differs from an inappropriate one (without the circular response of saying that the appropriate sort of causal relationship is the knowledge-producing one); or retreating to a position in which justified true belief is weakly defined as the consensus of learned opinion. The latter would be useful, but not as useful nor desirable as the unchanging definitions of scientific concepts such as momentum. Thus, adopting a causal response to the Gettier problem usually requires one to adopt (as Goldman gladly does) some form of reliabilism about justification.

Lehrer–Paxson's defeasibility condition

Keith Lehrer and Thomas Paxson (1969) proposed another response, by adding a defeasibility condition to the JTB analysis. On their account, knowledge is undefeated justified true belief—which is to say that a justified true belief counts as knowledge if and only if it is also the case that there is no further truth that, had the subject known it, would have defeated their present justification for the belief. (Thus, for example, Smith's justification for believing that the person who will get the job has ten coins in his pocket is his justified belief that Jones will get the job, combined with his justified belief that Jones has ten coins in his pocket. But if Smith had known the truth that Jones will not get the job, that would have defeated the justification for his belief.)

Pragmatism

Pragmatism was developed as a philosophical doctrine by C.S. Peirce and William James (1842–1910). In Peirce's view, the truth is nominally defined as a sign's correspondence to its object and pragmatically defined as the ideal final opinion to which sufficient investigation would lead sooner or later. James' epistemological model of truth was that which works in the way of belief, and a belief was true if in the long run it worked for all of us, and guided us expeditiously through our semihospitable world. Peirce argued that metaphysics could be cleaned up by a pragmatic approach.

Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object.

From a pragmatic viewpoint of the kind often ascribed to James, defining on a particular occasion whether a particular belief can rightly be said to be both true and justified is seen as no more than an exercise in pedantry, but being able to discern whether that belief led to fruitful outcomes is a fruitful enterprise. Peirce emphasized fallibilism, considered the assertion of absolute certainty a barrier to inquiry, and in 1901 defined truth as follows: "Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth." In other words, any unqualified assertion is likely to be at least a little wrong or, if right, still right for not entirely the right reasons. Therefore, one is more veracious by being Socratic, including recognition of one's own ignorance and knowing one may be proved wrong. This is the case, even though in practical matters one sometimes must act, if one is to act at all, with a decision and complete confidence.

Revisions of JTB approaches

The difficulties involved in producing a viable fourth condition have led to claims that attempting to repair the JTB account is a deficient strategy. For example, one might argue that what the Gettier problem shows is not the need for a fourth independent condition in addition to the original three, but rather that the attempt to build up an account of knowledge by conjoining a set of independent conditions was misguided from the outset. Those who have adopted this approach generally argue that epistemological terms like justification, evidence, certainty, etc. should be analyzed in terms of a primitive notion of knowledge, rather than vice versa. Knowledge is understood as factive, that is, as embodying a sort of epistemological "tie" between a truth and a belief. The JTB account is then criticized for trying to get and encapsulate the factivity of knowledge "on the cheap", as it were, or via a circular argument, by replacing an irreducible notion of factivity with the conjunction of some of the properties that accompany it (in particular, truth and justification). Of course, the introduction of irreducible primitives into a philosophical theory is always problematical (some would say a sign of desperation), and such anti-reductionist accounts are unlikely to please those who have other reasons to hold fast to the method behind JTB+G accounts.

Fred Dretske's conclusive reasons and Robert Nozick's truth-tracking

Fred Dretske developed an account of knowledge which he called "conclusive reasons", revived by Robert Nozick as what he called the subjunctive or truth-tracking account. Nozick's formulation posits that proposition p is an instance of knowledge when:

  1. p is true
  2. S believes that p
  3. if p were true, S would believe that p
  4. if p weren't true, S wouldn't believe that p

Nozick's definition is intended to preserve Goldman's intuition that Gettier cases should be ruled out by disacknowledging "accidentally" true justified beliefs, but without risking the potentially onerous consequences of building a causal requirement into the analysis. This tactic though, invites the riposte that Nozick's account merely hides the problem and does not solve it, for it leaves open the question of why Smith would not have had his belief if it had been false. The most promising answer seems to be that it is because Smith's belief was caused by the truth of what he believes; but that puts us back in the causalist camp. The third condition has come to be known as epistemological safety, while the fourth has come to be known as epistemological sensitivity.

Criticisms and counterexamples (notably the Grandma case) prompted a revision, which resulted in the alteration of (3) and (4) to limit them to the same method (i.e. vision):

  1. p is true
  2. S believes that p
  3. if p were true, S (using method M) would believe that p
  4. if p weren't true, S (using method M) wouldn't believe that p

Saul Kripke has pointed out that this view remains problematic and uses a counterexample called the Fake Barn Country example, which describes a certain locality containing a number of fake barns or facades of barns. In the midst of these fake barns is one real barn, which is painted red. None of the fake barns is painted red.

Jones is driving along the highway, looks up and happens to see the real barn, and so forms the belief:

  • I see a barn.

Though Jones has gotten lucky, he could have just as easily been deceived and not have known it. Therefore, it doesn't fulfill condition 4, for if Jones had seen a fake barn he wouldn't have had any idea it was a fake barn. So, even on the revised account, Jones does not know that he sees a barn.

However, Jones could look up and form the belief:

  • I see a red barn.

This meets all four conditions of Nozick’s account, and therefore Jones knows that he sees a red barn. Thus, Nozick is committed to the view that Jones knows that he sees a red barn, but does not know that he sees a barn. This violates the principle of epistemic closure, which states that one is always in a position to know the consequences of what one knows. Thus, since Jones knows that he sees a red barn, and it is a consequence of him seeing a red barn that he sees a barn, by epistemic closure he should be in a position to know that he sees a barn — but Nozick denies this. Adopting Nozick’s view therefore requires rejecting epistemic closure, which is often seen as an unacceptable cost.

Robert Fogelin's perspectival account

In the first chapter of his book Pyrrhonian Reflections on Knowledge and Justification, Robert Fogelin gives a diagnosis that leads to a dialogical solution to Gettier's problem. The problem always arises when the given justification has nothing to do with what really makes the proposition true. Now, he notes that in such cases there is always a mismatch between the information available to the person who makes the knowledge-claim of some proposition p and the information available to the evaluator of this knowledge-claim (even if the evaluator is the same person at a later time). A Gettierian counterexample arises when the justification given by the person who makes the knowledge-claim cannot be accepted by the knowledge evaluator because it does not fit with his wider informational setting. For instance, in the case of the fake barn the evaluator knows that a superficial inspection from someone who does not know the peculiar circumstances involved is not an acceptable justification for taking the proposition p (that it is a real barn) to be true.

Richard Kirkham's skepticism

Richard Kirkham has proposed that it is best to start with a definition of knowledge so strong that giving a counterexample to it is logically impossible. Whether it can be weakened without becoming subject to a counterexample should then be checked. He concludes that there will always be a counterexample to any definition of knowledge in which the believer's evidence does not logically necessitate the belief. Since in most cases the believer's evidence does not necessitate a belief, Kirkham embraces skepticism about knowledge; but he notes that a belief can still be rational even if it is not an item of knowledge.

Attempts to dissolve the problem

One might respond to Gettier by finding a way to avoid his conclusion(s) in the first place. However, it can hardly be argued that knowledge is justified true belief if there are cases that are justified true belief without being knowledge; thus, those who want to avoid Gettier's conclusions have to find some way to defuse Gettier's counterexamples. In order to do so, within the parameters of the particular counter-example or exemplar, they must then either accept that

  1. Gettier's cases are not really cases of justified true belief, or
  2. Gettier's cases really are cases of knowledge after all,

or demonstrate a case in which it is possible to circumvent surrender to the exemplar by eliminating any necessity for it to be considered that JTB apply in just those areas that Gettier has rendered obscure, without thereby lessening the force of JTB to apply in those cases where it actually is crucial. Then, though Gettier's cases stipulate that Smith has a certain belief and that his belief is true, it seems that in order to propose (1), one must argue that Gettier (or, that is, the writer responsible for the particular form of words on this present occasion known as case (1), and who makes assertions about Smith's "putative" beliefs), goes wrong because he has the wrong notion of justification. Such an argument often depends on an externalist account on which "justification" is understood in such a way that whether or not a belief is "justified" depends not just on the internal state of the believer, but also on how that internal state is related to the outside world. Externalist accounts typically are constructed such that Smith's putative beliefs in Case I and Case II are not really justified (even though it seems to Smith that they are), because his beliefs are not lined up with the world in the right way, or that it is possible to show that it is invalid to assert that "Smith" has any significant "particular" belief at all, in terms of JTB or otherwise. Such accounts, of course, face the same burden as causalist responses to Gettier: they have to explain what sort of relationship between the world and the believer counts as a justificatory relationship.

Those who accept (2) are by far in the minority in analytic philosophy; generally, those who are willing to accept it are those who have independent reasons to say that more things count as knowledge than the intuitions that led to the JTB account would acknowledge. Chief among these is the epistemic minimalist Crispin Sartwell, who holds that all true belief, including both Gettier's cases and lucky guesses, counts as knowledge.

For his part, Nolbert Briceño, a Venezuelan lawyer, wrote an article entitled "Refutation of the Gettier Problem", where he analyzes Edmund Gettier's reasoning as expressed in his article and claims to demonstrate the errors committed by the latter, thus defending the definition of knowledge given by Plato.

Experimental research

Some early work in the field of experimental philosophy suggested that traditional intuitions about Gettier cases might vary cross-culturally. However, subsequent studies have consistently failed to replicate these results, instead finding that participants from different cultures do share the traditional intuition. More recent studies have been providing evidence for the opposite hypothesis, that people from a variety of different cultures have similar intuitions in these cases.

Mathematical fallacy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Mathematical_fallacy

In mathematics, certain kinds of mistaken proof are often exhibited, and sometimes collected, as illustrations of a concept called mathematical fallacy. There is a distinction between a simple mistake and a mathematical fallacy in a proof, in that a mistake in a proof leads to an invalid proof while in the best-known examples of mathematical fallacies there is some element of concealment or deception in the presentation of the proof.

For example, the reason why validity fails may be attributed to a division by zero that is hidden by algebraic notation. There is a certain quality of the mathematical fallacy: as typically presented, it leads not only to an absurd result, but does so in a crafty or clever way. Therefore, these fallacies, for pedagogic reasons, usually take the form of spurious proofs of obvious contradictions. Although the proofs are flawed, the errors, usually by design, are comparatively subtle, or designed to show that certain steps are conditional, and are not applicable in the cases that are the exceptions to the rules.

The traditional way of presenting a mathematical fallacy is to give an invalid step of deduction mixed in with valid steps, so that the meaning of fallacy is here slightly different from the logical fallacy. The latter usually applies to a form of argument that does not comply with the valid inference rules of logic, whereas the problematic mathematical step is typically a correct rule applied with a tacit wrong assumption. Beyond pedagogy, the resolution of a fallacy can lead to deeper insights into a subject (e.g., the introduction of Pasch's axiom of Euclidean geometry, the five colour theorem of graph theory). Pseudaria, an ancient lost book of false proofs, is attributed to Euclid.

Mathematical fallacies exist in many branches of mathematics. In elementary algebra, typical examples may involve a step where division by zero is performed, where a root is incorrectly extracted or, more generally, where different values of a multiple valued function are equated. Well-known fallacies also exist in elementary Euclidean geometry and calculus.

Howlers

(Figure caption: anomalous cancellation in calculus.)

Examples exist of mathematically correct results derived by incorrect lines of reasoning. Such an argument, however true the conclusion appears to be, is mathematically invalid and is commonly known as a howler. The following is an example of a howler involving anomalous cancellation:

  16/64 = 1/4, obtained by "cancelling" the 6 in the numerator against the 6 in the denominator.

Here, although the conclusion 16/64 = 1/4 is correct, there is a fallacious, invalid cancellation in the middle step. Another classical example of a howler is proving the Cayley–Hamilton theorem by simply substituting the scalar variables of the characteristic polynomial with the matrix.

Bogus proofs, calculations, or derivations constructed to produce a correct result in spite of incorrect logic or operations were termed "howlers" by Edwin Maxwell. Outside the field of mathematics the term howler has various meanings, generally less specific.
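
The 16/64 fraction is not the only two-digit case. A short Python search (written for this post, not part of the article) finds every proper two-digit fraction in which striking out a shared digit happens to leave an equal fraction:

  from fractions import Fraction

  howlers = []
  for a in range(1, 10):
      for b in range(1, 10):
          for c in range(1, 10):
              num, den = 10 * a + b, 10 * b + c          # the fraction "ab"/"bc"
              if num < den and Fraction(num, den) == Fraction(a, c):
                  howlers.append(f"{num}/{den} = {a}/{c}")

  print(howlers)   # ['16/64 = 1/4', '19/95 = 1/5', '26/65 = 2/5', '49/98 = 4/8']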

Division by zero

The division-by-zero fallacy has many variants. The following example uses a disguised division by zero to "prove" that 2 = 1, but can be modified to prove that any number equals any other number.

  1. Let a and b be equal, nonzero quantities: a = b
  2. Multiply by a: a² = ab
  3. Subtract b²: a² − b² = ab − b²
  4. Factor both sides: the left factors as a difference of squares, the right is factored by extracting b from both terms: (a − b)(a + b) = b(a − b)
  5. Divide out (a − b): a + b = b
  6. Use the fact that a = b: b + b = b
  7. Combine like terms on the left: 2b = b
  8. Divide by the non-zero b: 2 = 1
Q.E.D.[6]

The fallacy is in line 5: the progression from line 4 to line 5 involves division by a − b, which is zero since a = b. Since division by zero is undefined, the argument is invalid.
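
Substituting any concrete value makes the hidden division visible; a two-line Python check (the value 7 is arbitrary):

  a = b = 7
  print(a**2 - b**2, a*b - b**2)   # 0 0: line 4 merely asserts 0 = 0
  print(a - b)                     # 0: the factor divided out in line 5 is zero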

Analysis

Mathematical analysis as the mathematical study of change and limits can lead to mathematical fallacies — if the properties of integrals and differentials are ignored. For instance, a naïve use of integration by parts can be used to give a false proof that 0 = 1. Letting u = 1/log x and dv = dx/x, so that du = −dx/(x (log x)²) and v = log x, integration by parts gives

  ∫ dx/(x log x) = (1/log x) · log x − ∫ log x · (−dx/(x (log x)²)) = 1 + ∫ dx/(x log x),

after which the antiderivatives may be cancelled yielding 0 = 1. The problem is that antiderivatives are only defined up to a constant and shifting them by 1 or indeed any number is allowed. The error really comes to light when we introduce arbitrary integration limits a and b.

Since the difference between two values of a constant function vanishes, the same definite integral appears on both sides of the equation.

Multivalued functions

Many functions do not have a unique inverse. For instance, while squaring a number gives a unique value, there are two possible square roots of a positive number. The square root is multivalued. One value can be chosen by convention as the principal value; in the case of the square root the non-negative value is the principal value, but there is no guarantee that the square root given as the principal value of the square of a number will be equal to the original number (e.g. the principal square root of the square of −2 is 2). This remains true for nth roots.
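
A quick numeric illustration of the point about principal values, in Python (the choice of −2 is arbitrary):

  import math

  x = -2
  print(math.sqrt(x ** 2))    # 2.0: the principal square root of (−2)² is 2, not −2
  print((x ** 4) ** 0.25)     # 2.0: likewise for the principal fourth root of 16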

Positive and negative roots

Care must be taken when taking the square root of both sides of an equality. Failing to do so results in a "proof" of 5 = 4.

Proof:

Start from −20 = −20
Write this as 25 − 45 = 16 − 36
Rewrite as 5² − 9·5 = 4² − 9·4
Add 81/4 on both sides: 5² − 9·5 + 81/4 = 4² − 9·4 + 81/4
These are perfect squares: (5 − 9/2)² = (4 − 9/2)²
Take the square root of both sides: 5 − 9/2 = 4 − 9/2
Add 9/2 on both sides: 5 = 4
Q.E.D.

The fallacy is in the line where the square root of both sides is taken: a² = b² only implies a = b if a and b have the same sign, which is not the case here. In this case, it implies that a = −b, so the equation should read 5 − 9/2 = −(4 − 9/2),

which, by adding 9/2 on both sides, correctly reduces to 5 = 5.
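
A three-line Python check of the offending step (numbers taken straight from the "proof"):

  print((5 - 9/2) ** 2 == (4 - 9/2) ** 2)   # True: the squares are equal
  print(5 - 9/2 == 4 - 9/2)                 # False: their square roots differ in sign
  print(5 - 9/2 == -(4 - 9/2))              # True: a = −b, as the corrected equation says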

Another example illustrating the danger of taking the square root of both sides of an equation involves the following fundamental identity

  cos²x = 1 − sin²x,

which holds as a consequence of the Pythagorean theorem. Then, by taking a square root,

  cos x = √(1 − sin²x).

Evaluating this when x = π, we get that

  −1 = √(1 − 0),

or

  −1 = 1,

which is incorrect.

The error in each of these examples fundamentally lies in the fact that any equation of the form

  x² = a²,

where a ≠ 0, has two solutions:

  x = a and x = −a,

and it is essential to check which of these solutions is relevant to the problem at hand. In the above fallacy, the square root that allowed the second equation to be deduced from the first is valid only when cos x is positive. In particular, when x is set to π, the second equation is rendered invalid.
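
The sign condition can be seen numerically. A small Python check (the sample points 0.5 and π are illustrative):

  import math

  for x in (0.5, math.pi):
      lhs = math.cos(x)
      rhs = math.sqrt(1 - math.sin(x) ** 2)   # always the non-negative root
      print(round(lhs, 6), round(rhs, 6))     # agree at 0.5; at π: -1.0 versus 1.0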

Square roots of negative numbers

Invalid proofs utilizing powers and roots are often of the following kind:

  1 = √1 = √((−1)(−1)) = √(−1) · √(−1) = i · i = −1.

The fallacy is that the rule √(xy) = √x · √y is generally valid only if at least one of x and y is non-negative (when dealing with real numbers), which is not the case here.

Alternatively, imaginary roots are obfuscated in the following:

  i = (−1)^(1/2) = ((−1)²)^(1/4) = 1^(1/4) = 1.

The error here lies in the incorrect usage of multiple-valued functions. (−1)^(1/2) has two values, i and −i, without a prior choice of branch, while 1^(1/4) only denotes the principal value 1. Similarly, ((−1)²)^(1/4) has four different values, 1, i, −1, and −i, of which only i is equal to the left side of the first equality.
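
Both pitfalls can be reproduced with Python's complex-number support, which always returns principal values (a small sketch):

  import cmath

  # √(xy) = √x · √y fails when both factors are negative:
  print(cmath.sqrt((-1) * (-1)))            # (1+0j)
  print(cmath.sqrt(-1) * cmath.sqrt(-1))    # (-1+0j): the two sides disagree

  # Principal values break the chain i = ((−1)²)^(1/4):
  print((-1) ** 0.5)                        # ≈ 1j, the principal square root of −1
  print(((-1) ** 2) ** 0.25)                # 1.0, just one of the four fourth roots of 1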

Complex exponents

When a number is raised to a complex power, the result is not uniquely defined (see Exponentiation § Failure of power and logarithm identities). If this property is not recognized, then errors such as the following can result:

  e^(2πi) = 1
  (e^(2πi))^i = 1^i
  e^(−2π) = 1

The error here is that the rule of multiplying exponents as when going to the third line does not apply unmodified with complex exponents, even if when putting both sides to the power i only the principal value is chosen. When treated as multivalued functions, both sides produce the same set of values, being {e^(2πn) : n an integer}.
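
Numerically, the principal values of the second line agree with each other, and neither equals the result of multiplying the exponents. A short Python check using cmath's principal branch:

  import cmath

  base = cmath.exp(2j * cmath.pi)        # e^(2πi), equal to 1 up to rounding error
  print(base ** 1j)                      # ≈ (1+0j): principal value of (e^(2πi))^i
  print(1 ** 1j)                         # (1+0j): principal value of 1^i
  print(cmath.exp(2j * cmath.pi * 1j))   # ≈ 0.00187 = e^(−2π), the fallacious third line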

Geometry

Many mathematical fallacies in geometry arise from using, in an additive equality involving oriented quantities (such as adding vectors along a given line or adding oriented angles in the plane), a valid identity that fixes only the absolute value of (one of) these quantities. This quantity is then incorporated into the equation with the wrong orientation, so as to produce an absurd conclusion. This wrong orientation is usually suggested implicitly by supplying an imprecise diagram of the situation, where relative positions of points or lines are chosen in a way that is actually impossible under the hypotheses of the argument, but non-obviously so.

In general, such a fallacy is easy to expose by drawing a precise picture of the situation, in which some relative positions will be different from those in the provided diagram. In order to avoid such fallacies, a correct geometric argument using addition or subtraction of distances or angles should always prove that quantities are being incorporated with their correct orientation.

Fallacy of the isosceles triangle

The fallacy of the isosceles triangle, from (Maxwell 1959, Chapter II, § 1), purports to show that every triangle is isosceles, meaning that two sides of the triangle are congruent. This fallacy was known to Lewis Carroll and may have been discovered by him. It was published in 1899.

Given a triangle △ABC, prove that AB = AC:

  1. Draw a line bisecting ∠A.
  2. Draw the perpendicular bisector of segment BC, which bisects BC at a point D.
  3. Let these two lines meet at a point O.
  4. Draw line OR perpendicular to AB, line OQ perpendicular to AC.
  5. Draw lines OB and OC.
  6. By AAS, △RAO ≅ △QAO (∠ORA = ∠OQA = 90°; ∠RAO = ∠QAO; AO = AO (common side)).
  7. By RHS, △ROB ≅ △QOC (∠BRO = ∠CQO = 90°; BO = OC (hypotenuse); RO = OQ (leg)).
  8. Thus, AR = AQ, RB = QC, and AB = AR + RB = AQ + QC = AC.

Q.E.D.

As a corollary, one can show that all triangles are equilateral, by showing that AB = BC and AC = BC in the same way.

The error in the proof is the assumption in the diagram that the point O is inside the triangle. In fact, O always lies on the circumcircle of the △ABC (except for isosceles and equilateral triangles where AO and OD coincide). Furthermore, it can be shown that, if AB is longer than AC, then R will lie within AB, while Q will lie outside of AC, and vice versa (in fact, any diagram drawn with sufficiently accurate instruments will verify the above two facts). Because of this, AB is still AR + RB, but AC is actually AQ − QC; and thus the lengths are not necessarily the same.
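
Both facts, that O lies on the circumcircle and that it falls outside the triangle, can be checked numerically for a concrete scalene triangle. A Python sketch (the coordinates are an arbitrary example chosen for this post):

  import math

  def unit(v):
      n = math.hypot(v[0], v[1])
      return (v[0] / n, v[1] / n)

  def intersect(p, d, q, e):
      # Intersection of the lines p + t*d and q + s*e (2x2 Cramer's rule).
      det = d[0] * (-e[1]) - d[1] * (-e[0])
      rx, ry = q[0] - p[0], q[1] - p[1]
      t = (rx * (-e[1]) - ry * (-e[0])) / det
      return (p[0] + t * d[0], p[1] + t * d[1])

  def side(p, q, r):
      # Sign tells which side of the line through p and q the point r lies on.
      return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

  A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)            # a scalene triangle
  D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)              # midpoint of BC
  perp_BC = (-(C[1] - B[1]), C[0] - B[0])                 # direction of BC's perpendicular bisector

  # O: intersection of the internal bisector of angle A with the perpendicular bisector of BC.
  uAB, uAC = unit((B[0] - A[0], B[1] - A[1])), unit((C[0] - A[0], C[1] - A[1]))
  O = intersect(A, (uAB[0] + uAC[0], uAB[1] + uAC[1]), D, perp_BC)

  # Circumcentre: intersection of the perpendicular bisectors of AB and BC.
  M_AB = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
  centre = intersect(M_AB, (-(B[1] - A[1]), B[0] - A[0]), D, perp_BC)
  R = math.dist(centre, A)

  print(math.isclose(math.dist(centre, O), R))   # True: O lies on the circumcircle
  print(side(B, C, A) * side(B, C, O) < 0)       # True: O and A are on opposite sides of BC, so O is outside the triangle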

Proof by induction

There exist several fallacious proofs by induction in which one of the components, basis case or inductive step, is incorrect. Intuitively, proofs by induction work by arguing that if a statement is true in one case, it is true in the next case, and hence by repeatedly applying this, it can be shown to be true for all cases. The following "proof" shows that all horses are the same colour.

  1. Let us say that any group of N horses is all of the same colour.
  2. If we remove a horse from the group, we have a group of N − 1 horses of the same colour. If we add another horse, we have another group of N horses. By our previous assumption, all the horses are of the same colour in this new group, since it is a group of N horses.
  3. Thus we have constructed two groups of N horses all of the same colour, with N − 1 horses in common. Since these two groups have some horses in common, the two groups must be of the same colour as each other.
  4. Therefore, combining all the horses used, we have a group of N + 1 horses of the same colour.
  5. Thus if any N horses are all the same colour, any N + 1 horses are the same colour.
  6. This is clearly true for N = 1 (i.e., one horse is a group where all the horses are the same colour). Thus, by induction, N horses are the same colour for any positive integer N, and so all horses are the same colour.

The fallacy in this proof arises in line 3. For N = 1, the two groups of horses have N − 1 = 0 horses in common, and thus are not necessarily the same colour as each other, so the group of N + 1 = 2 horses is not necessarily all of the same colour. The implication "every N horses are of the same colour, then N + 1 horses are of the same colour" works for any N > 1, but fails to be true when N = 1. The basis case is correct, but the induction step has a fundamental flaw.

Space_weather

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Space_wea...