
Wednesday, June 16, 2021

Consilience

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Consilience

In science and history, consilience (also convergence of evidence or concordance of evidence) is the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus.

The principle is based on the unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures the distance between the Great Pyramids of Giza by laser rangefinding, by satellite imaging, or with a meter stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.

The word consilience was originally coined as the phrase "consilience of inductions" by William Whewell (consilience refers to a "jumping together" of knowledge). The word comes from Latin com- "together" and -siliens "jumping" (as in resilience).

Description

Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not.

As a result, when several different methods agree, this is strong evidence that none of the methods is in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, by the law of large numbers; systematic errors will be detected by differences between the measurements (and will also tend to cancel out across methods, since each method's bias is unrelated to the others'). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion.
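
To make the error-cancellation argument concrete, here is a minimal simulation sketch (the numbers, biases, and method names are hypothetical, not from the article): three independent "methods" measure the same true distance, each with its own random noise and its own systematic bias.

    import random

    # Minimal sketch: three hypothetical independent methods measuring
    # the same true distance, each with its own noise and bias.
    TRUE_DISTANCE = 230.0  # hypothetical true value, in metres

    def measure(bias, noise, n):
        """Return n measurements with a fixed systematic bias and random noise."""
        return [TRUE_DISTANCE + bias + random.gauss(0, noise) for _ in range(n)]

    methods = {
        "laser":     measure(bias=0.02,  noise=0.05, n=1000),
        "satellite": measure(bias=-0.30, noise=0.50, n=1000),
        "stick":     measure(bias=0.10,  noise=0.20, n=1000),
    }

    for name, data in methods.items():
        mean = sum(data) / len(data)
        # Random noise averages away within each method (law of large
        # numbers); a method whose mean disagrees with the others
        # thereby reveals its systematic bias.
        print(f"{name:9s} mean = {mean:.3f}")

Within each method the random noise averages away; the remaining disagreement between the three means exposes the systematic biases, which is exactly the divergence consilience is designed to detect.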

When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years old (a conflict eventually resolved by the discovery of radioactivity, nuclear fusion, and quantum mechanics). A modern example is the ongoing attempt to reconcile quantum mechanics with general relativity.

Significance

Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields, because the techniques are usually very different.

For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.

Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.

Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.

Deviations

Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.

Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.

In history

Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar’s civil war occurred, and so forth.
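
The arithmetic here can be made concrete with illustrative (hypothetical) numbers: even if each of five independent sources is right only 70% of the time, the chance that all five are wrong at once is small, and the chance that they are all wrong in exactly the same way is smaller still.

    # Illustrative arithmetic with hypothetical reliabilities: five
    # independent sources, each correct 70% of the time, with errors
    # assumed uncorrelated.
    p_correct = 0.70

    p_one_wrong = 1 - p_correct           # any single source: 30%
    p_all_wrong = (1 - p_correct) ** 5    # all five wrong at once: ~0.24%
    # Agreement on a falsehood requires more than all being wrong: each
    # source must be wrong in exactly the same way, so ~0.24% is only
    # an upper bound on five independent sources sharing one error.

    print(f"one source wrong:               {p_one_wrong:.2%}")
    print(f"all five wrong (upper bound):   {p_all_wrong:.3%}")

By contrast, a single historian repeated five times offers no such multiplication: the probability of error stays at that one source's error rate.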

Consilience has also been discussed in reference to Holocaust denial.

"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole."

That is, individually the evidence may underdetermine the conclusion, but taken together the pieces overdetermine it. Put another way, to ask for one particular piece of evidence that by itself establishes the conclusion is to ask a flawed question.

Outside the sciences

In addition to the sciences, consilience can be important to the arts, ethics and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.

History of the concept

Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment.

Whewell’s definition was that:

The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.

More recent descriptions include:

"Where there is convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation."

"Proof is derived through a convergence of evidence from numerous lines of inquiry--multiple, independent inductions, all of which point to an unmistakable conclusion."

Edward O. Wilson

Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the author and biologist E.O. Wilson, as an attempt to bridge the culture gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959).

Wilson held that with the rise of the modern sciences, the sense of unity was gradually lost in the increasing fragmentation and specialization of knowledge over the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." An important point made by Wilson is that hereditary human nature and evolution itself profoundly affect the evolution of culture, in essence a sociobiological concept. Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.

A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.

Philosophy of mathematics

From Wikipedia, the free encyclopedia

The philosophy of mathematics is the branch of philosophy that studies the assumptions, foundations, and implications of mathematics. It aims to understand the nature and methods of mathematics, and find out the place of mathematics in people's lives. The logical and structural nature of mathematics itself makes this study both broad and unique among its philosophical counterparts.

History

The origin of mathematics is subject to argument and disagreement. Whether the birth of mathematics was a random happening or induced by necessity during the development of other subjects, such as physics, is still a matter of prolific debate.

Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand, while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy. Western philosophies of mathematics go as far back as Pythagoras, who described the theory "everything is mathematics" (mathematicism), Plato, who paraphrased Pythagoras, and studied the ontological status of mathematical objects, and Aristotle, who studied logic and issues related to infinity (actual versus potential).

Greek philosophy on mathematics was strongly influenced by their study of geometry. For example, at one time, the Greeks held the opinion that 1 (one) was not a number, but rather a unit of arbitrary length. A number was defined as a multitude. Therefore, 3, for example, represented a certain multitude of units, and was thus not "truly" a number. At another point, a similar argument was made that 2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one".

These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. Hippasus, a disciple of Pythagoras, showed that the diagonal of a unit square was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Frege and of Russell, but was brought into question by developments in the late 19th and early 20th centuries.

Contemporary philosophy

A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th-century philosophers continued to ask the questions mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterized by a predominant interest in formal logic, set theory (both naive set theory and axiomatic set theory), and foundational issues.

It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program.

At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of mathematical epistemology and ontology. Three schools, formalism, intuitionism, and logicism, emerged at this time, partly in response to the increasingly widespread worry that mathematics as it stood, and analysis in particular, did not live up to the standards of certainty and rigor that had been taken for granted. Each school addressed the issues that came to the fore at that time, either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge.

Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics. As the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom, proposition and proof, as well as the notion of a proposition being true of a mathematical object, were formalized, allowing them to be treated mathematically. The Zermelo–Fraenkel axioms for set theory were formulated, providing a conceptual framework in which much mathematical discourse would be interpreted. In mathematics, as in physics, new and unexpected ideas had arisen and significant changes were coming. With Gödel numbering, propositions could be interpreted as referring to themselves or other propositions, enabling inquiry into the consistency of mathematical theories. This reflective critique in which the theory under review "becomes itself the object of a mathematical study" led Hilbert to call such study metamathematics or proof theory.
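
As a toy illustration of Gödel numbering (a simplified scheme in the spirit of Gödel's prime-exponent encoding; the symbols and their codes are invented for this example), a formula becomes a single number from which the formula can be uniquely recovered by prime factorization, so statements about formulas become statements about numbers:

    # Toy Gödel numbering: encode a formula s1 s2 ... sn as
    # 2^code(s1) * 3^code(s2) * 5^code(s3) * ...
    # Unique prime factorization makes the encoding reversible.
    SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

    def primes():
        """Yield 2, 3, 5, ... by trial division (fine at toy sizes)."""
        n, found = 2, []
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def godel_number(formula):
        g = 1
        for sym, p in zip(formula, primes()):
            g *= p ** SYMBOLS[sym]
        return g

    print(godel_number("S0=0+S0"))  # one number encoding the whole formula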

At the middle of the century, a new mathematical theory was created by Samuel Eilenberg and Saunders Mac Lane, known as category theory, and it became a new contender for the natural language of mathematical thinking. As the 20th century progressed, however, philosophical opinions diverged as to just how well-founded were the questions about foundations that were raised at the century's beginning. Hilary Putnam summed up one common view of the situation in the last third of the century by saying:

When philosophy discovers something wrong with science, sometimes science has to be changed—Russell's paradox comes to mind, as does Berkeley's attack on the actual infinitesimal—but more often it is philosophy that has to be changed. I do not think that the difficulties that philosophy finds with classical mathematics today are genuine difficulties; and I think that the philosophical interpretations of mathematics that we are being offered on every hand are wrong, and that "philosophical interpretation" is just what mathematics doesn't need.

Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject. The schools are addressed separately in the next section, and their assumptions explained.

Major themes

Mathematical realism

Mathematical realism, like realism in general, holds that mathematical entities exist independently of the human mind. Thus humans do not invent mathematics, but rather discover it, and any other intelligent beings in the universe would presumably do the same. In this point of view, there is really one sort of mathematics that can be discovered; triangles, for example, are real entities, not the creations of the human mind.

Many working mathematicians have been mathematical realists; they see themselves as discoverers of naturally occurring objects. Examples include Paul Erdős and Kurt Gödel. Gödel believed in an objective mathematical reality that could be perceived in a manner analogous to sense perception. Certain principles (e.g., for any two objects, there is a collection of objects consisting of precisely those two objects) could be directly seen to be true, but the continuum hypothesis might prove undecidable just on the basis of such principles. Gödel suggested that quasi-empirical methodology could be used to provide sufficient evidence to be able to reasonably assume such a conjecture.

Within realism, there are distinctions depending on what sort of existence one takes mathematical entities to have, and how we know about them. Major forms of mathematical realism include Platonism.

Mathematical anti-realism

Mathematical anti-realism generally holds that mathematical statements have truth-values, but that they do not do so by corresponding to a special realm of immaterial or non-empirical entities. Major forms of mathematical anti-realism include formalism and fictionalism.

Contemporary schools of thought

Artistic

This view holds that mathematics is the aesthetic combination of assumptions, and therefore that mathematics is an art. Its best-known proponent is the British mathematician G. H. Hardy; the French mathematician Henri Poincaré held a metaphorically similar position. For Hardy, in his book A Mathematician's Apology, the definition of mathematics was more like the aesthetic combination of concepts.

Platonism

Mathematical Platonism is the form of realism that suggests that mathematical entities are abstract, have no spatiotemporal or causal properties, and are eternal and unchanging. This is often claimed to be the view most people have of numbers. The term Platonism is used because such a view is seen to parallel Plato's Theory of Forms and a "World of Ideas" (Greek: eidos (εἶδος)) described in Plato's allegory of the cave: the everyday world can only imperfectly approximate an unchanging, ultimate reality. The connection between Plato's cave and Platonism is meaningful, not merely superficial, because Plato's ideas were preceded and probably influenced by the hugely popular Pythagoreans of ancient Greece, who believed that the world was, quite literally, generated by numbers.

A major question considered in mathematical Platonism is: Precisely where and how do the mathematical entities exist, and how do we know about them? Is there a world, completely separate from our physical one, that is occupied by the mathematical entities? How can we gain access to this separate world and discover truths about the entities? One proposed answer is the Ultimate Ensemble, a theory that postulates that all structures that exist mathematically also exist physically in their own universe.

Kurt Gödel's Platonism postulates a special kind of mathematical intuition that lets us perceive mathematical objects directly. (This view bears resemblances to many things Husserl said about mathematics, and supports Kant's idea that mathematics is synthetic a priori.) Davis and Hersh have suggested in their 1999 book The Mathematical Experience that most mathematicians act as though they are Platonists, even though, if pressed to defend the position carefully, they may retreat to formalism.

Full-blooded Platonism is a modern variation of Platonism, which is in reaction to the fact that different sets of mathematical entities can be proven to exist depending on the axioms and inference rules employed (for instance, the law of the excluded middle, and the axiom of choice). It holds that all mathematical entities exist. They may be provable, even if they cannot all be derived from a single consistent set of axioms.

Set-theoretic realism (also set-theoretic Platonism), a position defended by Penelope Maddy, is the view that set theory is about a single universe of sets. This position (which is also known as naturalized Platonism because it is a naturalized version of mathematical Platonism) has been criticized by Mark Balaguer on the basis of Paul Benacerraf's epistemological problem. A similar view, termed Platonized naturalism, was later defended by the Stanford–Edmonton School: according to this view, a more traditional kind of Platonism is consistent with naturalism; the more traditional kind of Platonism they defend is distinguished by general principles that assert the existence of abstract objects.

Mathematicism

Max Tegmark's mathematical universe hypothesis (or mathematicism) goes further than Platonism in asserting that not only do all mathematical objects exist, but nothing else does. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world".

Logicism

Logicism is the thesis that mathematics is reducible to logic, and hence nothing but a part of logic. Logicists hold that mathematics can be known a priori, but suggest that our knowledge of mathematics is just part of our knowledge of logic in general, and is thus analytic, not requiring any special faculty of mathematical intuition. In this view, logic is the proper foundation of mathematics, and all mathematical statements are necessary logical truths.

Rudolf Carnap (1931) presents the logicist thesis in two parts (a toy illustration follows the list):

  1. The concepts of mathematics can be derived from logical concepts through explicit definitions.
  2. The theorems of mathematics can be derived from logical axioms through purely logical deduction.
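
As a loose illustration of the first part (using Church encodings in a modern programming language, not Carnap's own apparatus), numbers and addition can be defined purely as higher-order functions, with no numeric primitives assumed:

    # Church encodings: a numeral n is the function that applies f
    # n times. Nothing numeric is taken as primitive.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        """Convert a Church numeral to a Python int, for display only."""
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    print(to_int(add(two)(two)))  # 4: "2 + 2 = 4" falls out of the definitions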

Gottlob Frege was the founder of logicism. In his seminal Die Grundgesetze der Arithmetik (Basic Laws of Arithmetic) he built up arithmetic from a system of logic with a general principle of comprehension, which he called "Basic Law V" (for concepts F and G, the extension of F equals the extension of G if and only if for all objects a, Fa equals Ga), a principle that he took to be acceptable as part of logic.

Frege's construction was flawed. Russell discovered that Basic Law V is inconsistent (this is Russell's paradox). Frege abandoned his logicist program soon after this, but it was continued by Russell and Whitehead. They attributed the paradox to "vicious circularity" and built up what they called ramified type theory to deal with it. In this system, they were eventually able to build up much of modern mathematics but in an altered, and excessively complex form (for example, there were different natural numbers in each type, and there were infinitely many types). They also had to make several compromises in order to develop so much of mathematics, such as an "axiom of reducibility". Even Russell said that this axiom did not really belong to logic.

Modern logicists (like Bob Hale, Crispin Wright, and perhaps others) have returned to a program closer to Frege's. They have abandoned Basic Law V in favor of abstraction principles such as Hume's principle (the number of objects falling under the concept F equals the number of objects falling under the concept G if and only if the extension of F and the extension of G can be put into one-to-one correspondence). Frege required Basic Law V to be able to give an explicit definition of the numbers, but all the properties of numbers can be derived from Hume's principle. This would not have been enough for Frege because (to paraphrase him) it does not exclude the possibility that the number 3 is in fact Julius Caesar. In addition, many of the weakened principles that they have had to adopt to replace Basic Law V no longer seem so obviously analytic, and thus purely logical.
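
Hume's principle lends itself to a small sketch (hypothetical code, not drawn from any of these authors): two collections have the same number exactly when their members can be paired off one-to-one, and this can be checked without counting against any pre-existing number scale.

    def equinumerous(a, b):
        """True iff the items of a and b can be paired off one-to-one."""
        b = list(b)
        for item in a:
            if not b:
                return False   # an element of a has no partner
            b.pop()            # pair `item` with one element of b
        return not b           # equal number iff nothing is left over

    print(equinumerous(["F1", "F2", "F3"], ["G1", "G2", "G3"]))  # True
    print(equinumerous(["F1", "F2"], ["G1", "G2", "G3"]))        # False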

Formalism

Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules. For example, in the "game" of Euclidean geometry (which is seen as consisting of some strings called "axioms", and some "rules of inference" to generate new strings from given ones), one can prove that the Pythagorean theorem holds (that is, one can generate the string corresponding to the Pythagorean theorem). According to formalism, mathematical truths are not about numbers and sets and triangles and the like—in fact, they are not "about" anything at all.
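
A toy example of such a "game" (Hofstadter's MIU system, used here as a stand-in for Euclid's axioms): strings are derived from the axiom "MI" by purely mechanical rules, and a "theorem" is simply any string the rules can reach, no meaning required.

    # Hofstadter's MIU system: axiom "MI" plus four rewrite rules.
    def step(s):
        out = set()
        if s.endswith("I"):
            out.add(s + "U")                    # rule 1: xI -> xIU
        if s.startswith("M"):
            out.add(s + s[1:])                  # rule 2: Mx -> Mxx
        for i in range(len(s) - 2):
            if s[i:i+3] == "III":
                out.add(s[:i] + "U" + s[i+3:])  # rule 3: III -> U
        for i in range(len(s) - 1):
            if s[i:i+2] == "UU":
                out.add(s[:i] + s[i+2:])        # rule 4: UU -> (drop)
        return out

    theorems = {"MI"}                           # the single axiom
    for _ in range(4):                          # four rounds of derivation
        theorems |= {t for s in theorems for t in step(s)}
    print(sorted(theorems, key=len)[:10])       # some derivable "theorems"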

Another version of formalism is often known as deductivism. In deductivism, the Pythagorean theorem is not an absolute truth, but a relative one: if one assigns meaning to the strings in such a way that the rules of the game become true (i.e., true statements are assigned to the axioms and the rules of inference are truth-preserving), then one must accept the theorem, or, rather, the interpretation one has given it must be a true statement. The same is held to be true for all other mathematical statements. Thus, formalism need not mean that mathematics is nothing more than a meaningless symbolic game. It is usually hoped that there exists some interpretation in which the rules of the game hold. (Compare this position to structuralism.) But it does allow the working mathematician to continue in his or her work and leave such problems to the philosopher or scientist. Many formalists would say that in practice, the axiom systems to be studied will be suggested by the demands of science or other areas of mathematics.

A major early proponent of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent. Hilbert's goals of creating a system of mathematics that is both complete and consistent were seriously undermined by the second of Gödel's incompleteness theorems, which states that sufficiently expressive consistent axiom systems can never prove their own consistency. Since any such axiom system would contain the finitary arithmetic as a subsystem, Gödel's theorem implied that it would be impossible to prove the system's consistency relative to that (since it would then prove its own consistency, which Gödel had shown was impossible). Thus, in order to show that any axiomatic system of mathematics is in fact consistent, one needs to first assume the consistency of a system of mathematics that is in a sense stronger than the system to be proven consistent.

Hilbert was initially a deductivist, but, as may be clear from above, he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.

Other formalists, such as Rudolf Carnap, Alfred Tarski, and Haskell Curry, considered mathematics to be the investigation of formal axiom systems. Mathematical logicians study formal systems but are just as often realists as they are formalists.

Formalists are relatively tolerant of, and inviting to, new approaches to logic, non-standard number systems, new set theories, and so on. The more games we study, the better. However, in all three of these examples, motivation is drawn from existing mathematical or philosophical concerns. The "games" are usually not arbitrary.

The main critique of formalism is that the actual mathematical ideas that occupy mathematicians are far removed from the string manipulation games mentioned above. Formalism is thus silent on the question of which axiom systems ought to be studied, as none is more meaningful than another from a formalistic point of view.

Recently, some formalist mathematicians have proposed that all of our formal mathematical knowledge should be systematically encoded in computer-readable formats, so as to facilitate automated proof checking of mathematical proofs and the use of interactive theorem proving in the development of mathematical theories and computer software. Because of their close connection with computer science, this idea is also advocated by mathematical intuitionists and constructivists in the "computability" tradition—see QED project for a general overview.

Conventionalism

The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that Euclidean geometry should not be regarded as a priori truth. He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with human intuitions about the physical world.

Intuitionism

In mathematics, intuitionism is a program of methodological reform whose motto is that "there are no non-experienced mathematical truths" (L. E. J. Brouwer). From this springboard, intuitionists seek to reconstruct what they consider to be the corrigible portion of mathematics in accordance with Kantian concepts of being, becoming, intuition, and knowledge. Brouwer, the founder of the movement, held that mathematical objects arise from the a priori forms of the volitions that inform the perception of empirical objects.

A major force behind intuitionism was L. E. J. Brouwer, who rejected the usefulness of formalized logic of any sort for mathematics. His student Arend Heyting postulated an intuitionistic logic, different from the classical Aristotelian logic; this logic does not contain the law of the excluded middle and therefore frowns upon proofs by contradiction. The axiom of choice is also rejected in most intuitionistic set theories, though in some versions it is accepted.

In intuitionism, the term "explicit construction" is not cleanly defined, and that has led to criticisms. Attempts have been made to use the concepts of Turing machine or computable function to fill this gap, leading to the claim that only questions regarding the behavior of finite algorithms are meaningful and should be investigated in mathematics. This has led to the study of the computable numbers, first introduced by Alan Turing. Not surprisingly, then, this approach to mathematics is sometimes associated with theoretical computer science.
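
For example, the square root of 2 is a computable number: a finite algorithm yields as many of its digits as requested. A minimal sketch using only integer arithmetic (so no floating-point error is involved):

    import math

    def sqrt2_digits(n):
        """Return sqrt(2) truncated to n decimal digits, as a string."""
        scaled = math.isqrt(2 * 10 ** (2 * n))  # floor(sqrt(2) * 10^n)
        s = str(scaled)
        return s[0] + "." + s[1:]

    print(sqrt2_digits(30))  # 1.414213562373095048801688724209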

Constructivism

Like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical discourse. In this view, mathematics is an exercise of the human intuition, not a game played with meaningless symbols. Instead, it is about entities that we can create directly through mental activity. In addition, some adherents of these schools reject non-constructive proofs, such as a proof by contradiction. Important work was done by Errett Bishop, who managed to prove versions of the most important theorems in real analysis as constructive analysis in his 1967 Foundations of Constructive Analysis. 

Finitism

Finitism is an extreme form of constructivism, according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. In her book Philosophy of Set Theory, Mary Tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists.

The most famous proponent of finitism was Leopold Kronecker, who said:

God created the natural numbers, all else is the work of man.

Ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources. Another variant of finitism is Euclidean arithmetic, a system developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets. Mayberry's system is Aristotelian in general inspiration and, despite his strong rejection of any role for operationalism or feasibility in the foundations of mathematics, comes to somewhat similar conclusions, such as, for instance, that super-exponentiation is not a legitimate finitary function.
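
The growth of super-exponentiation can be made vivid with a few lines of illustrative code (this is not Mayberry's argument, just the arithmetic): its values outrun anything feasibly constructible almost immediately.

    def tetration(base, height):
        """base ^^ height: a tower of `height` copies of `base`."""
        result = 1
        for _ in range(height):
            result = base ** result
        return result

    print(tetration(2, 4))            # 2^2^2^2 = 65536: still tame
    print(len(str(tetration(2, 5))))  # 2^^5 already has 19729 digits
    # tetration(2, 6) has more digits than there are atoms in the
    # observable universe; no feasible process could write it down.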

Structuralism

Structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no intrinsic properties. For instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. Likewise all the other whole numbers are defined by their places in a structure, the number line. Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra.

Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology). The kind of existence mathematical objects have would clearly be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard.

The ante rem structuralism ("before the thing") has a similar ontology to Platonism. Structures are held to have a real but abstract and immaterial existence. As such, it faces the standard epistemological problem of explaining the interaction between such abstract structures and flesh-and-blood mathematicians.

The in re structuralism ("in the thing") is the equivalent of Aristotelian realism. Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures.

The post rem structuralism ("after the thing") is anti-realist about structures in a way that parallels nominalism. Like nominalism, the post rem approach denies the existence of abstract mathematical objects with properties other than their place in a relational structure. According to this view mathematical systems exist, and have structural features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely instrumental to talk of structures being "held in common" between systems: they in fact have no independent existence.

Embodied mind theories

Embodied mind theories hold that mathematical thought is a natural outgrowth of the human cognitive apparatus which finds itself in our physical universe. For example, the abstract concept of number springs from the experience of counting discrete objects. It is held that mathematics is not universal and does not exist in any real sense, other than in human brains. Humans construct, but do not discover, mathematics.

With this view, the physical universe can thus be seen as the ultimate foundation of mathematics: it guided the evolution of the brain and later determined which questions this brain would find worthy of investigation. However, the human mind has no special claim on reality or approaches to it built out of math. If such constructs as Euler's identity are true then they are true as a map of the human mind and cognition.

Embodied mind theorists thus explain the effectiveness of mathematics—mathematics was constructed by the brain in order to be effective in this universe.

The most accessible, famous, and infamous treatment of this perspective is Where Mathematics Comes From, by George Lakoff and Rafael E. Núñez. In addition, mathematician Keith Devlin has investigated similar concepts with his book The Math Instinct, as has neuroscientist Stanislas Dehaene with his book The Number Sense. For more on the philosophical ideas that inspired this perspective, see cognitive science of mathematics.

Aristotelian realism

Aristotelian realism holds that mathematics studies properties such as symmetry, continuity and order that can be literally realized in the physical world (or in any other world there might be). It contrasts with Platonism in holding that the objects of mathematics, such as numbers, do not exist in an "abstract" world but can be physically realized. For example, the number 4 is realized in the relation between a heap of parrots and the universal "being a parrot" that divides the heap into so many parrots. Aristotelian realism is defended by James Franklin and the Sydney School in the philosophy of mathematics and is close to the view of Penelope Maddy that when an egg carton is opened, a set of three eggs is perceived (that is, a mathematical entity realized in the physical world). A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in the physical world.

The Euclidean arithmetic developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets also falls into the Aristotelian realist tradition. Mayberry, following Euclid, considers numbers to be simply "definite multitudes of units" realized in nature—such as "the members of the London Symphony Orchestra" or "the trees in Birnam wood". Whether or not there are definite multitudes of units for which Euclid's Common Notion 5 (the whole is greater than the part) fails and which would consequently be reckoned as infinite is for Mayberry essentially a question about Nature and does not entail any transcendental suppositions.

Psychologism

Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts (or laws).

John Stuart Mill seems to have been an advocate of a type of logical psychologism, as were many 19th-century German logicians such as Sigwart and Erdmann as well as a number of psychologists, past and present: for example, Gustave Le Bon. Psychologism was famously criticized by Frege in his The Foundations of Arithmetic and in many of his works and essays, including his review of Husserl's Philosophy of Arithmetic. Edmund Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. The "Prolegomena" is considered a more concise, fair, and thorough refutation of psychologism than Frege's criticisms, and many today regard it as a memorable and decisive refutation of psychologism. Psychologism was also criticized by Charles Sanders Peirce and Maurice Merleau-Ponty.

Empiricism

Mathematical empiricism is a form of realism that denies that mathematics can be known a priori at all. It says that we discover mathematical facts by empirical research, just like facts in any of the other sciences. It is not one of the classical three positions advocated in the early 20th century, but primarily arose in the middle of the century. However, an important early proponent of a view like this was John Stuart Mill. Mill's view was widely criticized, because, according to critics, such as A.J. Ayer, it makes statements like "2 + 2 = 4" come out as uncertain, contingent truths, which we can only learn by observing instances of two pairs coming together and forming a quartet.

Contemporary mathematical empiricism, formulated by W. V. O. Quine and Hilary Putnam, is primarily supported by the indispensability argument: mathematics is indispensable to all empirical sciences, and if we want to believe in the reality of the phenomena described by the sciences, we ought also believe in the reality of those entities required for this description. That is, since physics needs to talk about electrons to say why light bulbs behave as they do, then electrons must exist. Since physics needs to talk about numbers in offering any of its explanations, then numbers must exist. In keeping with Quine and Putnam's overall philosophies, this is a naturalistic argument. It argues for the existence of mathematical entities as the best explanation for experience, thus stripping mathematics of being distinct from the other sciences.

Putnam strongly rejected the term "Platonist" as implying an over-specific ontology that was not necessary to mathematical practice in any real sense. He advocated a form of "pure realism" that rejected mystical notions of truth and accepted much quasi-empiricism in mathematics. This grew from the increasingly popular assertion in the late 20th century that no one foundation of mathematics could be ever proven to exist. It is also sometimes called "postmodernism in mathematics" although that term is considered overloaded by some and insulting by others. Quasi-empiricism argues that in doing their research, mathematicians test hypotheses as well as prove theorems. A mathematical argument can transmit falsity from the conclusion to the premises just as well as it can transmit truth from the premises to the conclusion. Putnam has argued that any theory of mathematical realism would include quasi-empirical methods. He proposed that an alien species doing mathematics might well rely on quasi-empirical methods primarily, being willing often to forgo rigorous and axiomatic proofs, and still be doing mathematics—at perhaps a somewhat greater risk of failure of their calculations. He gave a detailed argument for this in New Directions. Quasi-empiricism was also developed by Imre Lakatos.

The most important criticism of empirical views of mathematics is approximately the same as that raised against Mill. If mathematics is just as empirical as the other sciences, then this suggests that its results are just as fallible as theirs, and just as contingent. In Mill's case the empirical justification comes directly, while in Quine's case it comes indirectly, through the coherence of our scientific theory as a whole, i.e. consilience after E.O. Wilson. Quine suggests that mathematics seems completely certain because the role it plays in our web of belief is extraordinarily central, and that it would be extremely difficult for us to revise it, though not impossible.

For a philosophy of mathematics that attempts to overcome some of the shortcomings of Quine and Gödel's approaches by taking aspects of each see Penelope Maddy's Realism in Mathematics. Another example of a realist theory is the embodied mind theory.

Fictionalism

Mathematical fictionalism was brought to fame in 1980 when Hartry Field published Science Without Numbers, which rejected and in fact reversed Quine's indispensability argument. Where Quine suggested that mathematics was indispensable for our best scientific theories, and therefore should be accepted as a body of truths talking about independently existing entities, Field suggested that mathematics was dispensable, and therefore should be considered as a body of falsehoods not talking about anything real. He did this by giving a complete axiomatization of Newtonian mechanics with no reference to numbers or functions at all. He started with the "betweenness" of Hilbert's axioms to characterize space without coordinatizing it, and then added extra relations between points to do the work formerly done by vector fields. Hilbert's geometry is mathematical, because it talks about abstract points, but in Field's theory, these points are the concrete points of physical space, so no special mathematical objects at all are needed.

Having shown how to do science without using numbers, Field proceeded to rehabilitate mathematics as a kind of useful fiction. He showed that mathematical physics is a conservative extension of his non-mathematical physics (that is, every physical fact provable in mathematical physics is already provable from Field's system), so that mathematics is a reliable process whose physical applications are all true, even though its own statements are false. Thus, when doing mathematics, we can see ourselves as telling a sort of story, talking as if numbers existed. For Field, a statement like "2 + 2 = 4" is just as fictitious as "Sherlock Holmes lived at 221B Baker Street"—but both are true according to the relevant fictions.

By this account, there are no metaphysical or epistemological problems special to mathematics. The only worries left are the general worries about non-mathematical physics, and about fiction in general. Field's approach has been very influential, but is widely rejected. This is in part because of the requirement of strong fragments of second-order logic to carry out his reduction, and because the statement of conservativity seems to require quantification over abstract models or deductions.

Social constructivism

Social constructivism sees mathematics primarily as a social construct, as a product of culture, subject to correction and change. Like the other sciences, mathematics is viewed as an empirical endeavor whose results are constantly evaluated and may be discarded. However, while on an empiricist view the evaluation is some sort of comparison with "reality", social constructivists emphasize that the direction of mathematical research is dictated by the fashions of the social group performing it or by the needs of the society financing it. However, although such external forces may change the direction of some mathematical research, there are strong internal constraints—the mathematical traditions, methods, problems, meanings and values into which mathematicians are enculturated—that work to conserve the historically-defined discipline.

This runs counter to the traditional beliefs of working mathematicians, that mathematics is somehow pure or objective. But social constructivists argue that mathematics is in fact grounded by much uncertainty: as mathematical practice evolves, the status of previous mathematics is cast into doubt, and is corrected to the degree it is required or desired by the current mathematical community. This can be seen in the development of analysis from reexamination of the calculus of Leibniz and Newton. They argue further that finished mathematics is often accorded too much status, and folk mathematics not enough, due to an overemphasis on axiomatic proof and peer review as practices.

The social nature of mathematics is highlighted in its subcultures. Major discoveries can be made in one branch of mathematics and be relevant to another, yet the relationship goes undiscovered for lack of social contact between mathematicians. Social constructivists argue that each speciality forms its own epistemic community and often has great difficulty communicating, or motivating the investigation of, unifying conjectures that might relate different areas of mathematics. Social constructivists see the process of "doing mathematics" as actually creating the meaning, while social realists see a deficiency either in the human capacity to abstract, or in human cognitive bias, or in mathematicians' collective intelligence, as preventing the comprehension of a real universe of mathematical objects. Social constructivists sometimes reject the search for foundations of mathematics as bound to fail, as pointless or even meaningless.

Contributions to this school have been made by Imre Lakatos and Thomas Tymoczko, although it is not clear that either would endorse the title. More recently Paul Ernest has explicitly formulated a social constructivist philosophy of mathematics. Some consider the work of Paul Erdős as a whole to have advanced this view (although he personally rejected it) because of his uniquely broad collaborations, which prompted others to see and study "mathematics as a social activity", e.g., via the Erdős number. Reuben Hersh has also promoted the social view of mathematics, calling it a "humanistic" approach, similar to but not quite the same as that associated with Alvin White; one of Hersh's co-authors, Philip J. Davis, has expressed sympathy for the social view as well.

Beyond the traditional schools

Unreasonable effectiveness

Rather than focus on narrow debates about the true nature of mathematical truth, or even on practices unique to mathematicians such as the proof, a growing movement from the 1960s to the 1990s began to question the idea of seeking foundations or finding any one right answer to why mathematics works. The starting point for this was Eugene Wigner's famous 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", in which he argued that the happy coincidence of mathematics and physics being so well matched seemed to be unreasonable and hard to explain.

Popper's two senses of number statements

Realist and constructivist theories are normally taken to be contraries. However, Karl Popper argued that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true. In the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two propositions: one of which can be explained on constructivist lines; the other on realist lines.

Philosophy of language

Innovations in the philosophy of language during the 20th century renewed interest in whether mathematics is, as is often said, the language of science. Although some mathematicians and philosophers would accept the statement "mathematics is a language", linguists believe that the implications of such a statement must be considered. For example, the tools of linguistics are not generally applied to the symbol systems of mathematics, that is, mathematics is studied in a markedly different way from other languages. If mathematics is a language, it is a different type of language from natural languages. Indeed, because of the need for clarity and specificity, the language of mathematics is far more constrained than natural languages studied by linguists. However, the methods developed by Frege and Tarski for the study of mathematical language have been extended greatly by Tarski's student Richard Montague and other linguists working in formal semantics to show that the distinction between mathematical language and natural language may not be as great as it seems.

Mohan Ganesalingam has analysed mathematical language using tools from formal linguistics. Ganesalingam notes that some features of natural language are not necessary when analysing mathematical language (such as tense), but many of the same analytical tools can be used (such as context-free grammars). One important difference is that mathematical objects have clearly defined types, which can be explicitly defined in a text: "Effectively, we are allowed to introduce a word in one part of a sentence, and declare its part of speech in another; and this operation has no analogue in natural language."
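
As a toy illustration of the formal-linguistics point (a grammar invented for this example, far simpler than Ganesalingam's analysis), a context-free grammar can recognize a small fragment of mathematical language:

    import re

    # Toy grammar:  eq -> expr "=" expr;  expr -> term ("+" term)*;
    #               term -> NUMBER | IDENT
    TOKEN = re.compile(r"\d+|[a-z]+|[+=]")

    def parse_eq(text):
        toks, pos = TOKEN.findall(text), 0

        def term():
            nonlocal pos
            if pos < len(toks) and re.fullmatch(r"\d+|[a-z]+", toks[pos]):
                pos += 1
                return True
            return False

        def expr():
            nonlocal pos
            if not term():
                return False
            while pos < len(toks) and toks[pos] == "+":
                pos += 1
                if not term():
                    return False
            return True

        if not (expr() and pos < len(toks) and toks[pos] == "="):
            return False
        pos += 1
        return expr() and pos == len(toks)

    print(parse_eq("x + 1 = 2"))  # True: derivable from the grammar
    print(parse_eq("x + = 2"))    # False: no derivation exists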

Arguments

Indispensability argument for realism

This argument, associated with Willard Quine and Hilary Putnam, is considered by Stephen Yablo to be one of the most challenging arguments in favor of the acceptance of the existence of abstract mathematical entities, such as numbers and sets. The form of the argument is as follows.

  1. One must have ontological commitments to all entities that are indispensable to the best scientific theories, and to those entities only (commonly referred to as "all and only").
  2. Mathematical entities are indispensable to the best scientific theories. Therefore,
  3. One must have ontological commitments to mathematical entities.

The justification for the first premise is the most controversial. Both Putnam and Quine invoke naturalism to justify the exclusion of all non-scientific entities, and hence to defend the "only" part of "all and only". The assertion that "all" entities postulated in scientific theories, including numbers, should be accepted as real is justified by confirmation holism. Since theories are not confirmed in a piecemeal fashion, but as a whole, there is no justification for excluding any of the entities referred to in well-confirmed theories. This puts the nominalist who wishes to exclude the existence of sets and non-Euclidean geometry, but to include the existence of quarks and other undetectable entities of physics, for example, in a difficult position.

Epistemic argument against realism

The anti-realist "epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field. Platonism posits that mathematical objects are abstract entities. By general agreement, abstract entities cannot interact causally with concrete, physical entities ("the truth-values of our mathematical assertions depend on facts involving Platonic entities that reside in a realm outside of space-time"). Whilst our knowledge of concrete, physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of how mathematicians come to have knowledge of abstract objects. Another way of making the point is that if the Platonic world were to disappear, it would make no difference to the ability of mathematicians to generate proofs, etc., which is already fully accountable in terms of physical processes in their brains.

Field developed his views into fictionalism. Benacerraf also developed the philosophy of mathematical structuralism, according to which there are no mathematical objects. Nonetheless, some versions of structuralism are compatible with some versions of realism.

The argument hinges on the idea that a satisfactory naturalistic account of thought processes in terms of brain processes can be given for mathematical reasoning along with everything else. One line of defense is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm. A modern form of this argument is given by Sir Roger Penrose.

Another line of defense is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal, and not analogous to perception. This argument is developed by Jerrold Katz in his 2000 book Realistic Rationalism.

A more radical defense is denial of physical reality, i.e. the mathematical universe hypothesis. In that case, a mathematician's knowledge of mathematics is one mathematical object making contact with another.

Aesthetics

Many practicing mathematicians have been drawn to their subject because of a sense of beauty they perceive in it. One sometimes hears the sentiment that mathematicians would like to leave philosophy to the philosophers and get back to mathematics—where, presumably, the beauty lies.

In his work on the divine proportion, H.E. Huntley relates the feeling of reading and understanding someone else's proof of a theorem of mathematics to that of a viewer of a masterpiece of art—the reader of a proof has a similar sense of exhilaration at understanding as the original author of the proof, much as, he argues, the viewer of a masterpiece has a sense of exhilaration similar to the original painter or sculptor. Indeed, one can study mathematical and scientific writings as literature.

Philip J. Davis and Reuben Hersh have commented that the sense of mathematical beauty is universal amongst practicing mathematicians. By way of example, they provide two proofs of the irrationality of the square root of 2. The first is the traditional proof by contradiction, ascribed to Euclid; the second is a more direct proof involving the fundamental theorem of arithmetic that, they argue, gets to the heart of the issue. Davis and Hersh argue that mathematicians find the second proof more aesthetically appealing because it gets closer to the nature of the problem.
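
A sketch of the second style of proof (the presentation in Davis and Hersh may differ in detail):

    % Direct proof via unique factorization, sketched in LaTeX.
    Suppose $\sqrt{2} = a/b$ for positive integers $a, b$. Then
    \[
      a^2 = 2b^2 .
    \]
    By the fundamental theorem of arithmetic, the prime $2$ occurs an
    even number of times in $a^2$ (twice its count in $a$) but an odd
    number of times in $2b^2$ (twice its count in $b$, plus one).
    No integer has both an even and an odd count of the factor $2$,
    so no such fraction $a/b$ exists.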

Paul Erdős was well known for his notion of a hypothetical "Book" containing the most elegant or beautiful mathematical proofs. There is not universal agreement that a result has one "most elegant" proof; Gregory Chaitin has argued against this idea.

Philosophers have sometimes criticized mathematicians' sense of beauty or elegance as being, at best, vaguely stated. By the same token, however, philosophers of mathematics have sought to characterize what makes one proof more desirable than another when both are logically sound.

Another aspect of aesthetics concerning mathematics is mathematicians' views towards the possible uses of mathematics for purposes deemed unethical or inappropriate. The best-known exposition of this view occurs in G. H. Hardy's book A Mathematician's Apology, in which Hardy argues that pure mathematics is superior in beauty to applied mathematics precisely because it cannot be used for war and similar ends.

Equality (mathematics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Equality_...