
Wednesday, June 16, 2021

The Unreasonable Effectiveness of Mathematics in the Natural Sciences

From Wikipedia, the free encyclopedia

"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is a 1960 article by the physicist Eugene Wigner. In the paper, Wigner observes that a physical theory's mathematical structure often points the way to further advances in that theory and even to empirical predictions.

The miracle of mathematics in the natural sciences

Wigner begins his paper with the belief, common among those familiar with mathematics, that mathematical concepts have applicability far beyond the context in which they were originally developed. Based on his experience, he writes, "it is important to point out that the mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena". He then invokes the fundamental law of gravitation as an example. Originally used to model freely falling bodies on the surface of the earth, this law was extended on the basis of what Wigner terms "very scanty observations" to describe the motion of the planets, where it "has proved accurate beyond all reasonable expectations".

Another oft-cited example is Maxwell's equations, derived to model the elementary electrical and magnetic phenomena known as of the mid-19th century. The equations also describe radio waves, discovered by David Edward Hughes in 1879, around the time of James Clerk Maxwell's death. Wigner sums up his argument by saying that "the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it". He concludes his paper with the same question with which he began:

The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.

The deep connection between science and mathematics

Wigner's work provided a fresh insight into both physics and the philosophy of mathematics, and has been fairly often cited in the academic literature on the philosophy of physics and of mathematics. Wigner speculated on the relationship between the philosophy of science and the foundations of mathematics as follows:

It is difficult to avoid the impression that a miracle confronts us here, quite comparable in its striking nature to the miracle that the human mind can string a thousand arguments together without getting itself into contradictions, or to the two miracles of laws of nature and of the human mind's capacity to divine them.

Later, Hilary Putnam (1975) explained these "two miracles" as necessary consequences of a realist (but not Platonist) view of the philosophy of mathematics. But in a passage discussing cognitive bias Wigner cautiously labeled as "not reliable", he went further:

The writer is convinced that it is useful, in epistemological discussions, to abandon the idealization that the level of human intelligence has a singular position on an absolute scale. In some cases it may even be useful to consider the attainment which is possible at the level of the intelligence of some other species.

Whether humans checking the results of humans can be considered an objective basis for observation of the known (to humans) universe is an interesting question, one followed up in both cosmology and the philosophy of mathematics.

Wigner also laid out the challenge of a cognitive approach to integrating the sciences:

A much more difficult and confusing situation would arise if we could, some day, establish a theory of the phenomena of consciousness, or of biology, which would be as coherent and convincing as our present theories of the inanimate world.

He further proposed that arguments could be found that might

put a heavy strain on our faith in our theories and on our belief in the reality of the concepts which we form. It would give us a deep sense of frustration in our search for what I called 'the ultimate truth'. The reason that such a situation is conceivable is that, fundamentally, we do not know why our theories work so well. Hence, their accuracy may not prove their truth and consistency. Indeed, it is this writer's belief that something rather akin to the situation which was described above exists if the present laws of heredity and of physics are confronted.

Responses to Wigner's original paper

Wigner's original paper has provoked and inspired many responses across a wide range of disciplines. These include Richard Hamming in computer science, Arthur Lesk in molecular biology, Peter Norvig in data mining, Max Tegmark in physics, Ivor Grattan-Guinness in mathematics and Vela Velupillai in economics.

Richard Hamming

Richard Hamming, an applied mathematician and a founder of computer science, reflected on and extended Wigner's Unreasonable Effectiveness in 1980, mulling over four "partial explanations" for it. Hamming concluded that the four explanations he gave were unsatisfactory. They were:

1. Humans see what they look for. The belief that science is experimentally grounded is only partially true. Rather, our intellectual apparatus is such that much of what we see comes from the glasses we put on. Eddington went so far as to claim that a sufficiently wise mind could deduce all of physics, illustrating his point with the following joke: "Some men went fishing in the sea with a net, and upon examining what they caught they concluded that there was a minimum size to the fish in the sea."

Hamming gives four examples of nontrivial physical phenomena he believes arose from the mathematical tools employed and not from the intrinsic properties of physical reality.

  • Hamming proposes that Galileo discovered the law of falling bodies not by experimenting, but by simple, though careful, thinking. Hamming imagines Galileo as having engaged in the following thought experiment (the experiment, which Hamming calls "scholastic reasoning", is described in Galileo's book On Motion):

Suppose that a falling body broke into two pieces. Of course the two pieces would immediately slow down to their appropriate speeds. But suppose further that one piece happened to touch the other one. Would they now be one piece and both speed up? Suppose I tie the two pieces together. How tightly must I do it to make them one piece? A light string? A rope? Glue? When are two pieces one?

There is simply no way a falling body can "answer" such hypothetical "questions." Hence Galileo would have concluded that "falling bodies need not know anything if they all fall with the same velocity, unless interfered with by another force." After coming up with this argument, Hamming found a related discussion in Pólya (1963: 83-85). Hamming's account does not reveal an awareness of the 20th century scholarly debate over just what Galileo did.

2. Humans create and select the mathematics that fit a situation. The mathematics at hand does not always work. For example, when mere scalars proved awkward for understanding forces, first vectors, then tensors, were invented.

3. Mathematics addresses only a part of human experience. Much of human experience does not fall under science or mathematics but under the philosophy of value, including ethics, aesthetics, and political philosophy. To assert that the world can be explained via mathematics amounts to an act of faith.

4. Evolution has primed humans to think mathematically. The earliest lifeforms must have contained the seeds of the human ability to create and follow long chains of close reasoning.

Max Tegmark

A different response, advocated by physicist Max Tegmark, is that physics is so successfully described by mathematics because the physical world is completely mathematical, isomorphic to a mathematical structure, and that we are simply uncovering this bit by bit. The same interpretation had been advanced some years previously by Peter Atkins. In this interpretation, the various approximations that constitute our current physics theories are successful because simple mathematical structures can provide good approximations of certain aspects of more complex mathematical structures. In other words, our successful theories are not mathematics approximating physics, but mathematics approximating mathematics. Most of Tegmark's propositions are highly speculative, and some are far-out by strict scientific standards. They raise one basic question: can one make precise sense of a notion of isomorphism (rather than a hand-waving "correspondence") between the universe – the concrete world of "stuff" and events – on the one hand, and mathematical structures as they are understood by mathematicians, within mathematics, on the other? Unless – or, optimistically, until – this is achieved, the often-heard proposition that "the world/universe is mathematical" may be nothing but a category mistake.

Ivor Grattan-Guinness

Ivor Grattan-Guinness found the effectiveness in question eminently reasonable and explicable in terms of concepts such as analogy, generalisation and metaphor.

Related quotations

[W]ir auch, gleich als ob es ein glücklicher unsre Absicht begünstigender Zufall wäre, erfreuet (eigentlich eines Bedürfnisses entledigt) werden, wenn wir eine solche systematische Einheit unter bloß empirischen Gesetzen antreffen. [We rejoice (actually we are relieved of a need) when, just as if it were a lucky chance favoring our aim, we do find such systematic unity among merely empirical laws.]

— Immanuel Kant

The most incomprehensible thing about the universe is that it is comprehensible.

— Albert Einstein

How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? [...] In my opinion the answer to this question is, briefly, this: As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

— Albert Einstein

Physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover.

— Bertrand Russell

There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology.

— Israel Gelfand

Sciences reach a point where they become mathematized ... the central issues in the field become sufficiently understood that they can be thought about mathematically ... [by the early 1990s] biology was no longer the science of things that smelled funny in refrigerators (my view from undergraduate days in the 1960s) ... The field was undergoing a revolution and was rapidly acquiring the depth and power previously associated exclusively with the physical sciences. Biology was now the study of information stored in DNA — strings of four letters: A, T, G, and C ... and the transformations that information undergoes in the cell. There was mathematics here!

— Leonard Adleman, a theoretical computer scientist who pioneered the field of DNA computing 

We should stop acting as if our goal is to author extremely elegant theories, and instead embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data.

— Alon Halevy, Peter Norvig, and Fernando Pereira, "The Unreasonable Effectiveness of Data"

 

Numerical cognition

From Wikipedia, the free encyclopedia

Numerical cognition is a subdiscipline of cognitive science that studies the cognitive, developmental and neural bases of numbers and mathematics. As with many cognitive science endeavors, this is a highly interdisciplinary topic, and includes researchers in cognitive psychology, developmental psychology, neuroscience and cognitive linguistics. This discipline, although it may interact with questions in the philosophy of mathematics, is primarily concerned with empirical questions.

Topics included in the domain of numerical cognition include:

  • How do non-human animals process numerosity?
  • How do infants acquire an understanding of numbers (and how much is inborn)?
  • How do humans associate linguistic symbols with numerical quantities?
  • How do these capacities underlie our ability to perform complex calculations?
  • What are the neural bases of these abilities, both in humans and in non-humans?
  • What metaphorical capacities and processes allow us to extend our numerical understanding into complex domains such as the concept of infinity, the infinitesimal or the concept of the limit in calculus?
  • Heuristics in numerical cognition

Comparative studies

A variety of research has demonstrated that non-human animals, including rats, lions, and various species of primates, have an approximate sense of number (referred to as "numerosity") (for a review, see Dehaene 1997). For example, when a rat is trained to press a bar 8 or 16 times to receive a food reward, the number of bar presses will approximate a Gaussian or Normal distribution with a peak around 8 or 16 bar presses. When rats are hungrier, they press the bar more rapidly; showing that the peak number of presses is the same for well-fed and hungry rats makes it possible to disentangle elapsed time from number of presses. In addition, in a few species the parallel individuation system has been shown, for example in the case of guppies, which successfully discriminated between 1 and 4 other individuals.
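
A minimal way to picture this approximate number sense is "scalar variability": the animal's internal estimate of a target count is roughly Gaussian, with a standard deviation proportional to the target. The sketch below is purely illustrative and is not a reconstruction of any particular experiment; the Weber fraction of 0.2 and the trial counts are arbitrary assumptions.

    # Illustrative simulation of scalar variability in numerosity estimation.
    # Assumption: internal estimates are Gaussian with SD = weber_fraction * target.
    import random
    from collections import Counter

    def simulate_presses(target, weber_fraction=0.2, trials=10_000):
        counts = Counter()
        for _ in range(trials):
            estimate = random.gauss(target, weber_fraction * target)
            counts[max(1, round(estimate))] += 1
        return counts

    for target in (8, 16):
        counts = simulate_presses(target)
        peak = max(counts, key=counts.get)
        print(f"target {target}: modal press count = {peak}")

    # The spread of counts around the peak widens with the target, which is
    # the ratio-dependent (Weber's law) signature seen in the animal data.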

Similarly, researchers have set up hidden speakers in the African savannah to test natural (untrained) behavior in lions (McComb, Packer & Pusey 1994). These speakers can play a number of lion calls, from 1 to 5. If a single lioness hears, for example, three calls from unknown lions, she will leave, while if she is with four of her sisters, they will go and explore. This suggests that not only can lions tell when they are "outnumbered" but that they can do this on the basis of signals from different sensory modalities, suggesting that numerosity is a multisensory concept.

Developmental studies

Developmental psychology studies have shown that human infants, like non-human animals, have an approximate sense of number. For example, in one study, infants were repeatedly presented with arrays of (in one block) 16 dots. Careful controls were in place to eliminate information from "non-numerical" parameters such as total surface area, luminance, circumference, and so on. After the infants had been presented with many displays containing 16 items, they habituated, or stopped looking as long at the display. Infants were then presented with a display containing 8 items, and they looked longer at the novel display.

Because of the numerous controls that were in place to rule out non-numerical factors, the experimenters infer that six-month-old infants are sensitive to differences between 8 and 16. Subsequent experiments, using similar methodologies, showed that 6-month-old infants can discriminate numbers differing by a 2:1 ratio (8 vs. 16 or 16 vs. 32) but not by a 3:2 ratio (8 vs. 12 or 16 vs. 24). However, 10-month-old infants succeed at both the 2:1 and the 3:2 ratios, suggesting an increased sensitivity to numerosity differences with age (for a review of this literature see Feigenson, Dehaene & Spelke 2004).
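
One standard way to summarize this ratio dependence (a textbook formulation of Weber's law, not a formula taken from the studies above) is that two numerosities $n_1 < n_2$ are reliably discriminated only when their ratio exceeds a threshold set by the observer's Weber fraction $w$:

\[
\frac{n_2}{n_1} \gtrsim 1 + w, \qquad \text{equivalently} \qquad \frac{n_2 - n_1}{n_1} \gtrsim w .
\]

On this crude reading, a 6-month-old with $w \approx 1$ passes 8 vs. 16 (ratio 2) but fails 8 vs. 12 (ratio 1.5), while a 10-month-old with a smaller $w$ passes both.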

In another series of studies, Karen Wynn showed that infants as young as five months are able to do very simple additions (e.g., 1 + 1 = 2) and subtractions (3 - 1 = 2). To demonstrate this, Wynn used a "violation of expectation" paradigm, in which infants were shown (for example) one Mickey Mouse doll going behind a screen, followed by another. If, when the screen was lowered, infants were presented with only one Mickey (the "impossible event") they looked longer than if they were shown two Mickeys (the "possible" event). Further studies by Karen Wynn and Koleen McCrink found that although infants' ability to compute exact outcomes only holds over small numbers, infants can compute approximate outcomes of larger addition and subtraction events (e.g., "5+5" and "10-5" events).

There is debate about how much these infant systems actually contain in terms of number concepts, harkening to the classic nature versus nurture debate. Gelman & Gallistel 1978 suggested that a child innately has the concept of natural number, and only has to map this onto the words used in her language. Carey 2004, Carey 2009 disagreed, saying that these systems can only encode large numbers in an approximate way, where language-based natural numbers can be exact. Without language, only numbers 1 to 4 are believed to have an exact representation, through the parallel individuation system. One promising approach is to see if cultures that lack number words can deal with natural numbers. The results so far are mixed (e.g., Pica et al. 2004; Butterworth & Reeve 2008; Butterworth, Reeve & Lloyd 2008).

Neuroimaging and neurophysiological studies

Human neuroimaging studies have demonstrated that regions of the parietal lobe, including the intraparietal sulcus (IPS) and the inferior parietal lobule (IPL), are activated when subjects are asked to perform calculation tasks. Based on both human neuroimaging and neuropsychology, Stanislas Dehaene and colleagues have suggested that these two parietal structures play complementary roles. The IPS is thought to house the circuitry that is fundamentally involved in numerical estimation (Piazza et al. 2004), number comparison (Pinel et al. 2001; Pinel et al. 2004) and on-line calculation, or quantity processing (often tested with subtraction), while the IPL is thought to be involved in rote memorization, such as multiplication (see Dehaene 1997). Thus, a patient with a lesion to the IPL may be able to subtract, but not multiply, and vice versa for a patient with a lesion to the IPS. In addition to these parietal regions, regions of the frontal lobe are also active in calculation tasks. These activations overlap with regions involved in language processing, such as Broca's area, and regions involved in working memory and attention. Additionally, the inferotemporal cortex is implicated in processing the numerical shapes and symbols necessary for calculations with Arabic digits. More recent research has highlighted the networks involved in multiplication and subtraction tasks. Multiplication is often learned through rote memorization and verbal repetition, and neuroimaging studies have shown that it uses a left-lateralized network of the inferior frontal cortex and the superior-middle temporal gyri in addition to the IPL and IPS. Subtraction is taught more through quantity manipulation and strategy use, and relies more on the right IPS and the posterior parietal lobule.

Single-unit neurophysiology in monkeys has also found neurons in the frontal cortex and in the intraparietal sulcus that respond to numbers. Andreas Nieder (Nieder 2005; Nieder, Freedman & Miller 2002; Nieder & Miller 2004) trained monkeys to perform a "delayed match-to-sample" task. For example, a monkey might be presented with a field of four dots and required to keep that number in memory after the display is taken away. Then, after a delay period of several seconds, a second display is presented. If the number on the second display matches that from the first, the monkey has to release a lever. If it is different, the monkey has to hold the lever. Neural activity recorded during the delay period showed that neurons in the intraparietal sulcus and the frontal cortex had a "preferred numerosity", exactly as predicted by behavioral studies. That is, a certain neuron might fire strongly for four, but less strongly for three or five, and even less for two or six. Thus, we say that these neurons were "tuned" for specific quantities. Note that these neuronal responses followed Weber's law, as has been demonstrated for other sensory dimensions, and are consistent with the ratio dependence observed for non-human animals' and infants' numerical behavior (Nieder & Miller 2003).
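
A hedged sketch of what such "tuned" responses look like: model each neuron's firing rate as a Gaussian bump over numerosity, centered on its preferred number. Placing the bumps on a logarithmic axis is one common modelling choice assumed here; the preferred numbers, tuning width, and peak rate below are illustrative values, not figures fitted to recorded data.

    # Toy model of numerosity-tuned neurons: Gaussian tuning on a log scale.
    import math

    def firing_rate(n, preferred, sigma=0.25, max_rate=50.0):
        """Mean firing rate of a neuron tuned to `preferred` for stimulus n."""
        d = math.log(n) - math.log(preferred)
        return max_rate * math.exp(-d * d / (2 * sigma * sigma))

    for preferred in (2, 4, 6):
        rates = [round(firing_rate(n, preferred)) for n in range(1, 9)]
        print(f"preferred {preferred}: rates for n = 1..8 -> {rates}")

    # Each neuron fires most for its preferred number and less for neighbors;
    # on the log axis the curves for 4 and 6 overlap more than those for 2
    # and 4, reproducing the Weber-law pattern described above.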

It is important to note that while primates have remarkably similar brains to humans, there are differences in function, ability, and sophistication. They make good preliminary test subjects, though they cannot capture the smaller differences that result from the two species' separate evolutionary paths and environments. However, in the realm of number, they share many similarities. As in monkeys, neurons selectively tuned to number have been identified in the bilateral intraparietal sulci and prefrontal cortex in humans. Piazza and colleagues investigated this using fMRI, presenting participants with sets of dots for which they had to make either same-different judgments or larger-smaller judgments. The sets of dots used base numbers of 16 and 32 dots with ratios of 1.25, 1.5, and 2. Deviant numbers, larger or smaller than the base numbers, were included in some trials. Participants displayed activation patterns similar to those Nieder found in monkeys. The intraparietal sulcus and the prefrontal cortex, both implicated in number, communicate in approximating number, and it was found in both species that the parietal neurons of the IPS had short firing latencies, whereas the frontal neurons had longer firing latencies. This supports the notion that number is first processed in the IPS and, if needed, is then transferred to the associated frontal neurons in the prefrontal cortex for further numerical operations. Human tuning curves for approximate magnitude were Gaussian, as in monkeys, indicating a similarly structured mechanism in both species: classic Gaussian curves around the base numerosities of 16 and 32, together with habituation. The results followed Weber's law, with accuracy decreasing as the ratio between numbers became smaller. This supports the findings Nieder made in macaque monkeys and shows definitive evidence for a logarithmic scale of approximate number in humans.

With a mechanism for approximating non-symbolic number established in both humans and primates, further investigation is needed to determine whether this mechanism is innate and present in children, which would suggest an inborn ability to process numerical stimuli much as humans are born ready to process language. Cantlon and colleagues set out to investigate this in healthy, normally developing 4-year-old children in parallel with adults. A task similar to Piazza's was used in this experiment, without the judgment tasks. Dot arrays of varying size and number were used, with 16 and 32 as the base numerosities. In each block, 232 stimuli were presented, with 20 deviant numerosities at a 2.0 ratio, both larger and smaller. For example, out of the 232 trials with 16 dots presented in varying size and spacing, 10 trials had 8 dots and 10 trials had 32 dots, making up the 20 deviant stimuli. The same applied to the blocks with 32 as the base numerosity. To ensure the adults and children were attending to the stimuli, the researchers inserted three fixation points throughout each trial, at which the participant had to move a joystick to continue. Their findings indicated that the adults had significant activation of the IPS when viewing the deviant number stimuli, in line with the findings described above. In the 4-year-olds, they found significant activation of the IPS to the deviant number stimuli, resembling the activation found in adults. There were some differences: adults displayed more robust bilateral activation, whereas the 4-year-olds primarily showed activation in the right IPS and activated 112 fewer voxels than the adults. This suggests that at age 4, children already have an established mechanism of IPS neurons tuned for processing non-symbolic numerosities. Other studies have gone deeper into this mechanism in children and found that children also represent approximate numbers on a logarithmic scale, in line with the claims Piazza made for adults.

A study by Izard and colleagues investigated abstract number representations in infants using a different paradigm from the previous researchers' because of the infants' developmental stage. They examined abstract number with both auditory and visual stimuli in a looking-time paradigm. The sets used were 4 vs. 12, 8 vs. 16, and 4 vs. 8. The auditory stimuli consisted of a set number of tones at different frequencies, with some deviant trials in which the tones were shorter but more numerous, or longer but less numerous, to control for duration as a potential confound. After two minutes of familiarization with the auditory stimuli, the visual stimuli were presented as a congruent or incongruent array of colorful dots with facial features, which remained on the screen until the infant looked away. They found that infants looked longer at the stimuli that matched the auditory tones, suggesting that the system for approximating non-symbolic number, even across modalities, is present in infancy. What is important to note across these three human studies of non-symbolic numerosity is that the capacity is present in infancy and develops over the lifetime. The honing of approximation and number-sense abilities, indicated by improving Weber fractions over time, and the recruitment of the left IPS for computation and enumeration, lend support to the claim that a non-symbolic number-processing mechanism exists in the human brain.

Relations between number and other cognitive processes

There is evidence that numerical cognition is intimately related to other aspects of thought – particularly spatial cognition. One line of evidence comes from studies performed on number-form synaesthetes. Such individuals report that numbers are mentally represented with a particular spatial layout; others experience numbers as perceivable objects that can be visually manipulated to facilitate calculation. Behavioral studies further reinforce the connection between numerical and spatial cognition. For instance, participants respond quicker to larger numbers if they are responding on the right side of space, and quicker to smaller numbers when on the left—the so-called "Spatial-Numerical Association of Response Codes" or SNARC effect. This effect varies across culture and context, however, and some research has even begun to question whether the SNARC reflects an inherent number-space association, instead invoking strategic problem solving or a more general cognitive mechanism like conceptual metaphor. Moreover, neuroimaging studies reveal that the association between number and space also shows up in brain activity. Regions of the parietal cortex, for instance, show shared activation for both spatial and numerical processing. These various lines of research suggest a strong, but flexible, connection between numerical and spatial cognition.

Modification of the usual decimal representation was advocated by John Colson. The sense of complementation, missing in the usual decimal system, is expressed by signed-digit representation.
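
As a concrete illustration of what a signed-digit representation is, the sketch below rewrites an ordinary decimal integer using digits from -4 to 5, so that "complementary" negative digits carry part of the value. The particular digit range is an arbitrary choice for illustration; it is not meant to reproduce Colson's own scheme.

    # Convert a non-negative integer to a base-10 signed-digit representation
    # with digits in the range -4..5 (an illustrative convention, not Colson's).
    def signed_digits(n):
        digits = []
        while n:
            d = n % 10
            if d > 5:          # replace a large digit by its negative complement
                d -= 10        # ...and let the borrow carry into the next place
            digits.append(d)
            n = (n - d) // 10
        return digits[::-1] or [0]

    print(signed_digits(2999))  # [3, 0, 0, -1], i.e. 3000 - 1
    print(signed_digits(47))    # [5, -3],       i.e. 50 - 3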

Heuristics in numerical cognition

Several consumer psychologists have also studied the heuristics that people use in numerical cognition. For example, Thomas and Morwitz (2009) reviewed several studies showing that the three heuristics that manifest in many everyday judgments and decisions – anchoring, representativeness, and availability – also influence numerical cognition. They identify the manifestations of these heuristics in numerical cognition as: the left-digit anchoring effect, the precision effect, and the ease of computation effect respectively. The left-digit effect refers to the observation that people tend to incorrectly judge the difference between $4.00 and $2.99 to be larger than that between $4.01 and $3.00 because of anchoring on left-most digits. The precision effect reflects the influence of the representativeness of digit patterns on magnitude judgments. Larger magnitudes are usually rounded and therefore have many zeros, whereas smaller magnitudes are usually expressed as precise numbers; so relying on the representativeness of digit patterns can make people incorrectly judge a price of $391,534 to be more attractive than a price of $390,000. The ease of computation effect shows that magnitude judgments are based not only on the output of a mental computation, but also on its experienced ease or difficulty. Usually it is easier to compare two dissimilar magnitudes than two similar magnitudes; overuse of this heuristic can make people incorrectly judge the difference to be larger for pairs with easier computations, e.g. $5.00 minus $4.00, than for pairs with difficult computations, e.g. $4.97 minus $3.96. 

Ethnolinguistic variance

The numeracy of indigenous peoples is studied to identify universal aspects of numerical cognition in humans. Notable examples include the Pirahã people who have no words for specific numbers and the Munduruku people who only have number words up to five. Pirahã adults are unable to mark an exact number of tallies for a pile of nuts containing fewer than ten items. Anthropologist Napoleon Chagnon spent several decades studying the Yanomami in the field. He concluded that they have no need for counting in their everyday lives. Their hunters keep track of individual arrows with the same mental faculties that they use to recognize their family members. There are no known hunter-gatherer cultures that have a counting system in their language. The mental and lingual capabilities for numeracy are tied to the development of agriculture and with it large numbers of indistinguishable items.

Consilience

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Consilience

In science and history, consilience (also convergence of evidence or concordance of evidence) is the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus.

The principle is based on the unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures distances within the Giza pyramid complex by laser rangefinding, by satellite imaging, or with a meter stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.

The word consilience was originally coined as the phrase "consilience of inductions" by William Whewell (consilience refers to a "jumping together" of knowledge). The word comes from Latin com- "together" and -siliens "jumping" (as in resilience).

Description

Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not.

As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements (and will also tend to cancel out since the direction of the error will still be random). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion.
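
A small numerical sketch of this argument (purely illustrative; the true value, noise levels, and bias are arbitrary assumptions): three "methods" measure the same quantity with independent random errors, and one of them also carries a hidden systematic bias. Averaging shrinks the random scatter, while the biased method stands out as a disagreement rather than silently corrupting the consensus.

    # Illustrative Monte Carlo: independent methods measuring the same quantity.
    import random, statistics

    TRUE_VALUE = 100.0
    random.seed(0)

    def measure(noise, bias=0.0, repeats=50):
        return [TRUE_VALUE + bias + random.gauss(0, noise) for _ in range(repeats)]

    laser     = measure(noise=0.5)
    satellite = measure(noise=2.0)
    tape      = measure(noise=1.0, bias=4.0)   # hidden systematic error

    for name, data in [("laser", laser), ("satellite", satellite), ("tape", tape)]:
        print(f"{name:9s} mean = {statistics.mean(data):6.2f}")

    # The two unbiased methods converge on ~100 as repeats grow (their random
    # errors cancel in the average); the biased method disagrees with both,
    # exposing its systematic error instead of hiding it.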

When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years (resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics); or current attempts to resolve theoretical differences between quantum mechanics and general relativity.

Significance

Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields, because the techniques are usually very different.

For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.

Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.

Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.

Deviations

Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.

Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.

In history

Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar’s civil war occurred, and so forth.
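
A rough back-of-the-envelope version of this intuition (the 80% reliability figure is an assumption, and the calculation treats the historians' errors as fully independent): if each of the five sources is wrong about the event with probability $q = 0.2$, the chance that all five are wrong together is

\[
q^{5} = 0.2^{5} = 0.00032,
\]

whereas five repetitions by a single historian still fail with probability $q = 0.2$, because the repetitions share one underlying source of error.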

Consilience has also been discussed in reference to Holocaust denial.

"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole."

That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.

Outside the sciences

In addition to the sciences, consilience can be important to the arts, ethics and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.

History of the concept

Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment.

Whewell’s definition was that:

The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.

More recent descriptions include:

"Where there is convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation."

"Proof is derived through a convergence of evidence from numerous lines of inquiry--multiple, independent inductions, all of which point to an unmistakable conclusion."

Edward O. Wilson

Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the author and biologist E.O. Wilson, as an attempt to bridge the culture gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959).

Wilson held that with the rise of the modern sciences, the sense of unity gradually was lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." An important point made by Wilson is that hereditary human nature and evolution itself profoundly affect the evolution of culture, in essence a sociobiological concept. Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.

A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.

Philosophy of mathematics

From Wikipedia, the free encyclopedia

The philosophy of mathematics is the branch of philosophy that studies the assumptions, foundations, and implications of mathematics. It aims to understand the nature and methods of mathematics, and find out the place of mathematics in people's lives. The logical and structural nature of mathematics itself makes this study both broad and unique among its philosophical counterparts.

History

The origin of mathematics is subject to argument and disagreement. Whether the birth of mathematics was a random happening or induced by necessity during the development of other subjects, like physics, is still a matter of debate.

Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand, while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy. Western philosophies of mathematics go as far back as Pythagoras, who described the theory "everything is mathematics" (mathematicism), Plato, who paraphrased Pythagoras, and studied the ontological status of mathematical objects, and Aristotle, who studied logic and issues related to infinity (actual versus potential).

Greek philosophy on mathematics was strongly influenced by their study of geometry. For example, at one time, the Greeks held the opinion that 1 (one) was not a number, but rather a unit of arbitrary length. A number was defined as a multitude. Therefore, 3, for example, represented a certain multitude of units, and was thus not "truly" a number. At another point, a similar argument was made that 2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one".

These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. Hippasus, a disciple of Pythagoras, showed that the diagonal of a unit square was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Frege and of Russell, but was brought into question by developments in the late 19th and early 20th centuries.
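
The argument attributed to Hippasus can be given in a few lines; this is the standard parity proof, included for completeness rather than as a historical reconstruction. Suppose $\sqrt{2} = p/q$ with $p$ and $q$ integers sharing no common factor. Squaring gives

\[
p^{2} = 2q^{2},
\]

so $p^{2}$ is even and hence $p$ is even; write $p = 2k$. Then $4k^{2} = 2q^{2}$, so $q^{2} = 2k^{2}$ is even and $q$ is even as well, contradicting the assumption that $p$ and $q$ have no common factor. No such ratio exists, so the diagonal and the side of the unit square are incommensurable.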

Contemporary philosophy

A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th-century philosophers continued to ask the questions mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterized by a predominant interest in formal logic, set theory (both naive set theory and axiomatic set theory), and foundational issues.

It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program.

At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of mathematical epistemology and ontology. Three schools, formalism, intuitionism, and logicism, emerged at this time, partly in response to the increasingly widespread worry that mathematics as it stood, and analysis in particular, did not live up to the standards of certainty and rigor that had been taken for granted. Each school addressed the issues that came to the fore at that time, either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge.

Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics. As the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom, proposition and proof, as well as the notion of a proposition being true of a mathematical object (see Assignment), were formalized, allowing them to be treated mathematically. The Zermelo–Fraenkel axioms for set theory were formulated which provided a conceptual framework in which much mathematical discourse would be interpreted. In mathematics, as in physics, new and unexpected ideas had arisen and significant changes were coming. With Gödel numbering, propositions could be interpreted as referring to themselves or other propositions, enabling inquiry into the consistency of mathematical theories. This reflective critique in which the theory under review "becomes itself the object of a mathematical study" led Hilbert to call such study metamathematics or proof theory.

At the middle of the century, a new mathematical theory was created by Samuel Eilenberg and Saunders Mac Lane, known as category theory, and it became a new contender for the natural language of mathematical thinking. As the 20th century progressed, however, philosophical opinions diverged as to just how well-founded were the questions about foundations that were raised at the century's beginning. Hilary Putnam summed up one common view of the situation in the last third of the century by saying:

When philosophy discovers something wrong with science, sometimes science has to be changed—Russell's paradox comes to mind, as does Berkeley's attack on the actual infinitesimal—but more often it is philosophy that has to be changed. I do not think that the difficulties that philosophy finds with classical mathematics today are genuine difficulties; and I think that the philosophical interpretations of mathematics that we are being offered on every hand are wrong, and that "philosophical interpretation" is just what mathematics doesn't need.

Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject. The schools are addressed separately in the next section, and their assumptions explained.

Major themes

Mathematical realism

Mathematical realism, like realism in general, holds that mathematical entities exist independently of the human mind. Thus humans do not invent mathematics, but rather discover it, and any other intelligent beings in the universe would presumably do the same. In this point of view, there is really one sort of mathematics that can be discovered; triangles, for example, are real entities, not the creations of the human mind.

Many working mathematicians have been mathematical realists; they see themselves as discoverers of naturally occurring objects. Examples include Paul Erdős and Kurt Gödel. Gödel believed in an objective mathematical reality that could be perceived in a manner analogous to sense perception. Certain principles (e.g., for any two objects, there is a collection of objects consisting of precisely those two objects) could be directly seen to be true, but the continuum hypothesis might prove undecidable just on the basis of such principles. Gödel suggested that quasi-empirical methodology could be used to provide sufficient evidence to be able to reasonably assume such a conjecture.

Within realism, there are distinctions depending on what sort of existence one takes mathematical entities to have, and how we know about them. Major forms of mathematical realism include Platonism.

Mathematical anti-realism

Mathematical anti-realism generally holds that mathematical statements have truth-values, but that they do not do so by corresponding to a special realm of immaterial or non-empirical entities. Major forms of mathematical anti-realism include formalism and fictionalism.

Contemporary schools of thought

Artistic

This view claims that mathematics is the aesthetic combination of assumptions and, further, that mathematics is an art. Famous mathematicians who made this claim include the British G. H. Hardy and, metaphorically, the French Henri Poincaré. For Hardy, in his book A Mathematician's Apology, the definition of mathematics was more like the aesthetic combination of concepts.

Platonism

Mathematical Platonism is the form of realism that suggests that mathematical entities are abstract, have no spatiotemporal or causal properties, and are eternal and unchanging. This is often claimed to be the view most people have of numbers. The term Platonism is used because such a view is seen to parallel Plato's Theory of Forms and a "World of Ideas" (Greek: eidos (εἶδος)) described in Plato's allegory of the cave: the everyday world can only imperfectly approximate an unchanging, ultimate reality. Both Plato's cave and Platonism have meaningful, not just superficial connections, because Plato's ideas were preceded and probably influenced by the hugely popular Pythagoreans of ancient Greece, who believed that the world was, quite literally, generated by numbers.

A major question considered in mathematical Platonism is: Precisely where and how do the mathematical entities exist, and how do we know about them? Is there a world, completely separate from our physical one, that is occupied by the mathematical entities? How can we gain access to this separate world and discover truths about the entities? One proposed answer is the Ultimate Ensemble, a theory that postulates that all structures that exist mathematically also exist physically in their own universe.

Kurt Gödel's Platonism postulates a special kind of mathematical intuition that lets us perceive mathematical objects directly. (This view bears resemblances to many things Husserl said about mathematics, and supports Kant's idea that mathematics is synthetic a priori.) Davis and Hersh have suggested in their 1999 book The Mathematical Experience that most mathematicians act as though they are Platonists, even though, if pressed to defend the position carefully, they may retreat to formalism.

Full-blooded Platonism is a modern variation of Platonism, which is in reaction to the fact that different sets of mathematical entities can be proven to exist depending on the axioms and inference rules employed (for instance, the law of the excluded middle, and the axiom of choice). It holds that all mathematical entities exist. They may be provable, even if they cannot all be derived from a single consistent set of axioms.

Set-theoretic realism (also set-theoretic Platonism), a position defended by Penelope Maddy, is the view that set theory is about a single universe of sets. This position (which is also known as naturalized Platonism because it is a naturalized version of mathematical Platonism) has been criticized by Mark Balaguer on the basis of Paul Benacerraf's epistemological problem. A similar view, termed Platonized naturalism, was later defended by the Stanford–Edmonton School: according to this view, a more traditional kind of Platonism is consistent with naturalism; the more traditional kind of Platonism they defend is distinguished by general principles that assert the existence of abstract objects.

Mathematicism

Max Tegmark's mathematical universe hypothesis (or mathematicism) goes further than Platonism in asserting that not only do all mathematical objects exist, but nothing else does. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world".

Logicism

Logicism is the thesis that mathematics is reducible to logic, and hence nothing but a part of logic. Logicists hold that mathematics can be known a priori, but suggest that our knowledge of mathematics is just part of our knowledge of logic in general, and is thus analytic, not requiring any special faculty of mathematical intuition. In this view, logic is the proper foundation of mathematics, and all mathematical statements are necessary logical truths.

Rudolf Carnap (1931) presents the logicist thesis in two parts:

  1. The concepts of mathematics can be derived from logical concepts through explicit definitions.
  2. The theorems of mathematics can be derived from logical axioms through purely logical deduction.

Gottlob Frege was the founder of logicism. In his seminal Die Grundgesetze der Arithmetik (Basic Laws of Arithmetic) he built up arithmetic from a system of logic with a general principle of comprehension, which he called "Basic Law V" (for concepts F and G, the extension of F equals the extension of G if and only if for all objects a, Fa if and only if Ga), a principle that he took to be acceptable as part of logic.

Frege's construction was flawed. Russell discovered that Basic Law V is inconsistent (this is Russell's paradox). Frege abandoned his logicist program soon after this, but it was continued by Russell and Whitehead. They attributed the paradox to "vicious circularity" and built up what they called ramified type theory to deal with it. In this system, they were eventually able to build up much of modern mathematics but in an altered, and excessively complex form (for example, there were different natural numbers in each type, and there were infinitely many types). They also had to make several compromises in order to develop so much of mathematics, such as an "axiom of reducibility". Even Russell said that this axiom did not really belong to logic.
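
The inconsistency can be exhibited in a few lines. Basic Law V lets one form, for any concept, the extension of the objects falling under it, which amounts to an unrestricted comprehension principle; the sketch below runs the argument in that simplified set-theoretic form rather than in Frege's own notation. Unrestricted comprehension yields a set

\[
R = \{\, x : x \notin x \,\},
\]

and asking whether $R$ belongs to itself gives $R \in R \iff R \notin R$, a contradiction. Type theory blocks the construction by making "$x \in x$" ill-formed: a collection may only contain items of lower type than itself.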

Modern logicists (like Bob Hale, Crispin Wright, and perhaps others) have returned to a program closer to Frege's. They have abandoned Basic Law V in favor of abstraction principles such as Hume's principle (the number of objects falling under the concept F equals the number of objects falling under the concept G if and only if the extension of F and the extension of G can be put into one-to-one correspondence). Frege required Basic Law V to be able to give an explicit definition of the numbers, but all the properties of numbers can be derived from Hume's principle. This would not have been enough for Frege because (to paraphrase him) it does not exclude the possibility that the number 3 is in fact Julius Caesar. In addition, many of the weakened principles that they have had to adopt to replace Basic Law V no longer seem so obviously analytic, and thus purely logical.

Formalism

Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules. For example, in the "game" of Euclidean geometry (which is seen as consisting of some strings called "axioms", and some "rules of inference" to generate new strings from given ones), one can prove that the Pythagorean theorem holds (that is, one can generate the string corresponding to the Pythagorean theorem). According to formalism, mathematical truths are not about numbers and sets and triangles and the like—in fact, they are not "about" anything at all.
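
The "game" picture can be made concrete with a toy formal system. The sketch below is purely illustrative and is not drawn from the article: it uses Hofstadter's well-known MIU system (axiom "MI" plus four rewrite rules) as a stand-in, and simply enumerates the strings derivable in a few steps, which on a strictly formalist reading is all that "proving theorems" amounts to.

    # A toy formal "game": Hofstadter's MIU system (illustrative stand-in, not from the article).
    # Axiom: "MI".  Rules: (1) a string ending in I may have U appended; (2) Mx may become Mxx;
    # (3) any "III" may be replaced by "U"; (4) any "UU" may be deleted.
    # A "theorem" is any string reachable from the axiom by applying rules.

    def successors(s):
        out = set()
        if s.endswith("I"):                       # rule 1
            out.add(s + "U")
        if s.startswith("M"):                     # rule 2
            out.add("M" + s[1:] * 2)
        for i in range(len(s) - 2):               # rule 3
            if s[i:i + 3] == "III":
                out.add(s[:i] + "U" + s[i + 3:])
        for i in range(len(s) - 1):               # rule 4
            if s[i:i + 2] == "UU":
                out.add(s[:i] + s[i + 2:])
        return out

    def theorems(depth):
        """Strings derivable from "MI" in at most `depth` rule applications."""
        known = {"MI"}
        frontier = {"MI"}
        for _ in range(depth):
            frontier = {t for s in frontier for t in successors(s)} - known
            known |= frontier
        return known

    print(sorted(theorems(3), key=len))

Whether a given string (say "MU") is a theorem is, on this view, a purely combinatorial question about the rules, not a question about what the symbols mean.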

Another version of formalism is often known as deductivism. In deductivism, the Pythagorean theorem is not an absolute truth, but a relative one: if one assigns meaning to the strings in such a way that the rules of the game become true (i.e., true statements are assigned to the axioms and the rules of inference are truth-preserving), then one must accept the theorem, or, rather, the interpretation one has given it must be a true statement. The same is held to be true for all other mathematical statements. Thus, formalism need not mean that mathematics is nothing more than a meaningless symbolic game. It is usually hoped that there exists some interpretation in which the rules of the game hold. (Compare this position to structuralism.) But it does allow the working mathematician to continue in his or her work and leave such problems to the philosopher or scientist. Many formalists would say that in practice, the axiom systems to be studied will be suggested by the demands of science or other areas of mathematics.

A major early proponent of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent. Hilbert's goals of creating a system of mathematics that is both complete and consistent were seriously undermined by the second of Gödel's incompleteness theorems, which states that sufficiently expressive consistent axiom systems can never prove their own consistency. Since any such axiom system would contain the finitary arithmetic as a subsystem, Gödel's theorem implied that it would be impossible to prove the system's consistency relative to that (since it would then prove its own consistency, which Gödel had shown was impossible). Thus, in order to show that any axiomatic system of mathematics is in fact consistent, one needs to first assume the consistency of a system of mathematics that is in a sense stronger than the system to be proven consistent.

Hilbert was initially a deductivist, but, as may be clear from above, he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.

Other formalists, such as Rudolf Carnap, Alfred Tarski, and Haskell Curry, considered mathematics to be the investigation of formal axiom systems. Mathematical logicians study formal systems but are just as often realists as they are formalists.

Formalists are relatively tolerant of and open to new approaches to logic, non-standard number systems, new set theories, etc. The more games we study, the better. However, in all three of these examples, motivation is drawn from existing mathematical or philosophical concerns. The "games" are usually not arbitrary.

The main critique of formalism is that the actual mathematical ideas that occupy mathematicians are far removed from the string manipulation games mentioned above. Formalism is thus silent on the question of which axiom systems ought to be studied, as none is more meaningful than another from a formalistic point of view.

Recently, some formalist mathematicians have proposed that all of our formal mathematical knowledge should be systematically encoded in computer-readable formats, so as to facilitate automated checking of mathematical proofs and the use of interactive theorem proving in the development of mathematical theories and computer software. Because of their close connection with computer science, this idea is also advocated by mathematical intuitionists and constructivists in the "computability" tradition—see QED project for a general overview.

Conventionalism

The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that Euclidean geometry should not be regarded as a priori truth. He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with human intuitions about the physical world.

Intuitionism

In mathematics, intuitionism is a program of methodological reform whose motto is that "there are no non-experienced mathematical truths" (L. E. J. Brouwer). From this springboard, intuitionists seek to reconstruct what they consider to be the corrigible portion of mathematics in accordance with Kantian concepts of being, becoming, intuition, and knowledge. Brouwer, the founder of the movement, held that mathematical objects arise from the a priori forms of the volitions that inform the perception of empirical objects.

A major force behind intuitionism was L. E. J. Brouwer, who rejected the usefulness of formalized logic of any sort for mathematics. His student Arend Heyting postulated an intuitionistic logic, different from the classical Aristotelian logic; this logic does not contain the law of the excluded middle and therefore frowns upon proofs by contradiction. The axiom of choice is also rejected in most intuitionistic set theories, though in some versions it is accepted.
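
The difference can be illustrated with a short proof-assistant example (a minimal Lean 4 sketch, assuming only core Lean; it is not drawn from the article). The law of the excluded middle, p ∨ ¬p, is not provable without a classical axiom, but its double negation has a direct constructive proof; this is why intuitionistic logic blocks arguments that conclude p merely by refuting ¬p.

    -- Lean 4: a constructive proof of ¬¬(p ∨ ¬p).
    -- Note that ¬¬(p ∨ ¬p) unfolds to ((p ∨ ¬p) → False) → False.
    theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
      fun h => h (Or.inr (fun hp => h (Or.inl hp)))
    -- Removing the double negation to obtain p ∨ ¬p itself requires Classical.em,
    -- which intuitionistic logic does not assume.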

In intuitionism, the term "explicit construction" is not cleanly defined, and that has led to criticisms. Attempts have been made to use the concepts of Turing machine or computable function to fill this gap, leading to the claim that only questions regarding the behavior of finite algorithms are meaningful and should be investigated in mathematics. This has led to the study of the computable numbers, first introduced by Alan Turing. Not surprisingly, then, this approach to mathematics is sometimes associated with theoretical computer science.
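
One common way to make this precise is to represent a computable real number as a program that produces arbitrarily good rational approximations. The sketch below illustrates that general idea (it is not Turing's own formulation): it approximates the square root of 2 to within 2^-n by bisection over exact rationals.

    # A computable real as an approximation procedure: given n, return a rational q
    # with |q - sqrt(2)| <= 2**-n.  Bisection over exact fractions avoids rounding error.
    from fractions import Fraction

    def sqrt2_approx(n):
        lo, hi = Fraction(1), Fraction(2)         # invariant: lo <= sqrt(2) <= hi
        while hi - lo > Fraction(1, 2 ** n):
            mid = (lo + hi) / 2
            if mid * mid <= 2:
                lo = mid
            else:
                hi = mid
        return lo

    print(float(sqrt2_approx(30)))                # about 1.4142135623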

Constructivism

Like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical discourse. In this view, mathematics is not a game played with meaningless symbols but an exercise of human intuition, concerned with entities that we can create directly through mental activity. In addition, some adherents of these schools reject non-constructive proofs, such as proof by contradiction. Important work was done by Errett Bishop, who managed to prove versions of the most important theorems in real analysis as constructive analysis in his 1967 Foundations of Constructive Analysis.

Finitism

Finitism is an extreme form of constructivism, according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. In her book Philosophy of Set Theory, Mary Tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists.

The most famous proponent of finitism was Leopold Kronecker, who said:

God created the natural numbers, all else is the work of man.

Ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources. Another variant of finitism is Euclidean arithmetic, a system developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets. Mayberry's system is Aristotelian in general inspiration and, despite his strong rejection of any role for operationalism or feasibility in the foundations of mathematics, comes to somewhat similar conclusions, such as, for instance, that super-exponentiation is not a legitimate finitary function.

Structuralism

Structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no intrinsic properties. For instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. Likewise all the other whole numbers are defined by their places in a structure, the number line. Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra.

Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology). The kind of existence mathematical objects have would clearly be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard.

Ante rem structuralism ("before the thing") has an ontology similar to that of Platonism. Structures are held to have a real but abstract and immaterial existence. As such, it faces the standard epistemological problem of explaining the interaction between such abstract structures and flesh-and-blood mathematicians.

In re structuralism ("in the thing") is the equivalent of Aristotelian realism. Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues: some perfectly legitimate structures might accidentally happen not to exist, and a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures.

Post rem structuralism ("after the thing") is anti-realist about structures in a way that parallels nominalism. Like nominalism, the post rem approach denies the existence of abstract mathematical objects with properties other than their place in a relational structure. According to this view, mathematical systems exist and have structural features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely instrumental to talk of structures being "held in common" between systems: they in fact have no independent existence.

Embodied mind theories

Embodied mind theories hold that mathematical thought is a natural outgrowth of the human cognitive apparatus which finds itself in our physical universe. For example, the abstract concept of number springs from the experience of counting discrete objects. It is held that mathematics is not universal and does not exist in any real sense, other than in human brains. Humans construct, but do not discover, mathematics.

On this view, the physical universe can be seen as the ultimate foundation of mathematics: it guided the evolution of the brain and later determined which questions this brain would find worthy of investigation. However, the human mind has no special claim on reality or on approaches to it built out of mathematics. If such constructs as Euler's identity are true, then they are true as a map of the human mind and cognition.

Embodied mind theorists thus explain the effectiveness of mathematics—mathematics was constructed by the brain in order to be effective in this universe.

The most accessible, famous, and infamous treatment of this perspective is Where Mathematics Comes From, by George Lakoff and Rafael E. Núñez. In addition, mathematician Keith Devlin has investigated similar concepts with his book The Math Instinct, as has neuroscientist Stanislas Dehaene with his book The Number Sense. For more on the philosophical ideas that inspired this perspective, see cognitive science of mathematics.

Aristotelian realism

Aristotelian realism holds that mathematics studies properties such as symmetry, continuity and order that can be literally realized in the physical world (or in any other world there might be). It contrasts with Platonism in holding that the objects of mathematics, such as numbers, do not exist in an "abstract" world but can be physically realized. For example, the number 4 is realized in the relation between a heap of parrots and the universal "being a parrot" that divides the heap into so many parrots. Aristotelian realism is defended by James Franklin and the Sydney School in the philosophy of mathematics and is close to the view of Penelope Maddy that when an egg carton is opened, a set of three eggs is perceived (that is, a mathematical entity realized in the physical world). A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in the physical world.

The Euclidean arithmetic developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets also falls into the Aristotelian realist tradition. Mayberry, following Euclid, considers numbers to be simply "definite multitudes of units" realized in nature—such as "the members of the London Symphony Orchestra" or "the trees in Birnam wood". Whether or not there are definite multitudes of units for which Euclid's Common Notion 5 (the whole is greater than the part) fails and which would consequently be reckoned as infinite is for Mayberry essentially a question about Nature and does not entail any transcendental suppositions.

Psychologism

Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts (or laws).

John Stuart Mill seems to have been an advocate of a type of logical psychologism, as were many 19th-century German logicians such as Sigwart and Erdmann, as well as a number of psychologists, past and present: for example, Gustave Le Bon. Psychologism was famously criticized by Frege in his The Foundations of Arithmetic and in many of his other works and essays, including his review of Husserl's Philosophy of Arithmetic. Edmund Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. The "Prolegomena" is widely considered a more concise, fair, and thorough refutation of psychologism than Frege's criticisms, and many today regard it as a memorable and decisive blow to the position. Psychologism was also criticized by Charles Sanders Peirce and Maurice Merleau-Ponty.

Empiricism

Mathematical empiricism is a form of realism that denies that mathematics can be known a priori at all. It says that we discover mathematical facts by empirical research, just like facts in any of the other sciences. It is not one of the classical three positions advocated in the early 20th century, but primarily arose in the middle of the century. However, an important early proponent of a view like this was John Stuart Mill. Mill's view was widely criticized because, according to critics such as A. J. Ayer, it makes statements like "2 + 2 = 4" come out as uncertain, contingent truths, which we can only learn by observing instances of two pairs coming together and forming a quartet.

Contemporary mathematical empiricism, formulated by W. V. O. Quine and Hilary Putnam, is primarily supported by the indispensability argument: mathematics is indispensable to all empirical sciences, and if we want to believe in the reality of the phenomena described by the sciences, we ought also to believe in the reality of those entities required for this description. That is, since physics needs to talk about electrons to say why light bulbs behave as they do, then electrons must exist. Since physics needs to talk about numbers in offering any of its explanations, then numbers must exist. In keeping with Quine and Putnam's overall philosophies, this is a naturalistic argument. It argues for the existence of mathematical entities as the best explanation for experience, thus stripping mathematics of its status as something distinct from the other sciences.

Putnam strongly rejected the term "Platonist" as implying an over-specific ontology that was not necessary to mathematical practice in any real sense. He advocated a form of "pure realism" that rejected mystical notions of truth and accepted much quasi-empiricism in mathematics. This grew from the increasingly popular assertion in the late 20th century that no one foundation of mathematics could ever be proven to exist. It is also sometimes called "postmodernism in mathematics", although that term is considered overloaded by some and insulting by others. Quasi-empiricism argues that in doing their research, mathematicians test hypotheses as well as prove theorems. A mathematical argument can transmit falsity from the conclusion to the premises just as well as it can transmit truth from the premises to the conclusion. Putnam has argued that any theory of mathematical realism would include quasi-empirical methods. He proposed that an alien species doing mathematics might well rely primarily on quasi-empirical methods, often being willing to forgo rigorous and axiomatic proofs, and still be doing mathematics, at perhaps a somewhat greater risk of failure in their calculations. He gave a detailed argument for this in New Directions. Quasi-empiricism was also developed by Imre Lakatos.

The most important criticism of empirical views of mathematics is approximately the same as that raised against Mill. If mathematics is just as empirical as the other sciences, then this suggests that its results are just as fallible as theirs, and just as contingent. In Mill's case the empirical justification comes directly, while in Quine's case it comes indirectly, through the coherence of our scientific theory as a whole, i.e. consilience after E.O. Wilson. Quine suggests that mathematics seems completely certain because the role it plays in our web of belief is extraordinarily central, and that it would be extremely difficult for us to revise it, though not impossible.

For a philosophy of mathematics that attempts to overcome some of the shortcomings of Quine and Gödel's approaches by taking aspects of each see Penelope Maddy's Realism in Mathematics. Another example of a realist theory is the embodied mind theory.

Fictionalism

Mathematical fictionalism was brought to fame in 1980 when Hartry Field published Science Without Numbers, which rejected and in fact reversed Quine's indispensability argument. Where Quine suggested that mathematics was indispensable for our best scientific theories, and therefore should be accepted as a body of truths talking about independently existing entities, Field suggested that mathematics was dispensable, and therefore should be considered as a body of falsehoods not talking about anything real. He did this by giving a complete axiomatization of Newtonian mechanics with no reference to numbers or functions at all. He started with the "betweenness" of Hilbert's axioms to characterize space without coordinatizing it, and then added extra relations between points to do the work formerly done by vector fields. Hilbert's geometry is mathematical, because it talks about abstract points, but in Field's theory, these points are the concrete points of physical space, so no special mathematical objects at all are needed.

Having shown how to do science without using numbers, Field proceeded to rehabilitate mathematics as a kind of useful fiction. He showed that mathematical physics is a conservative extension of his non-mathematical physics (that is, every physical fact provable in mathematical physics is already provable from Field's system), so that mathematics is a reliable process whose physical applications are all true, even though its own statements are false. Thus, when doing mathematics, we can see ourselves as telling a sort of story, talking as if numbers existed. For Field, a statement like "2 + 2 = 4" is just as fictitious as "Sherlock Holmes lived at 221B Baker Street"—but both are true according to the relevant fictions.

By this account, there are no metaphysical or epistemological problems special to mathematics. The only worries left are the general worries about non-mathematical physics and about fiction in general. Field's approach has been very influential, but is widely rejected. This is in part because his reduction requires strong fragments of second-order logic, and because the statement of conservativity seems to require quantification over abstract models or deductions.

Social constructivism

Social constructivism sees mathematics primarily as a social construct, as a product of culture, subject to correction and change. Like the other sciences, mathematics is viewed as an empirical endeavor whose results are constantly evaluated and may be discarded. However, while on an empiricist view the evaluation is some sort of comparison with "reality", social constructivists emphasize that the direction of mathematical research is dictated by the fashions of the social group performing it or by the needs of the society financing it. However, although such external forces may change the direction of some mathematical research, there are strong internal constraints—the mathematical traditions, methods, problems, meanings and values into which mathematicians are enculturated—that work to conserve the historically-defined discipline.

This runs counter to the traditional beliefs of working mathematicians that mathematics is somehow pure or objective. But social constructivists argue that mathematics is in fact grounded in much uncertainty: as mathematical practice evolves, the status of previous mathematics is cast into doubt and is corrected to the degree that the current mathematical community requires or desires. This can be seen in the development of analysis from the reexamination of the calculus of Leibniz and Newton. They argue further that finished mathematics is often accorded too much status, and folk mathematics not enough, due to an overemphasis on axiomatic proof and peer review as practices.

The social nature of mathematics is highlighted in its subcultures. Major discoveries can be made in one branch of mathematics and be relevant to another, yet the relationship goes undiscovered for lack of social contact between mathematicians. Social constructivists argue that each speciality forms its own epistemic community and often has great difficulty communicating or motivating the investigation of unifying conjectures that might relate different areas of mathematics. Social constructivists see the process of "doing mathematics" as actually creating the meaning, while social realists see a deficiency either of human capacity to abstractify, or of humans' cognitive bias, or of mathematicians' collective intelligence as preventing the comprehension of a real universe of mathematical objects. Social constructivists sometimes reject the search for foundations of mathematics as bound to fail, as pointless or even meaningless.

Contributions to this school have been made by Imre Lakatos and Thomas Tymoczko, although it is not clear that either would endorse the title. More recently Paul Ernest has explicitly formulated a social constructivist philosophy of mathematics. Some consider the work of Paul Erdős as a whole to have advanced this view (although he personally rejected it) because of his uniquely broad collaborations, which prompted others to see and study "mathematics as a social activity", e.g., via the Erdős number. Reuben Hersh has also promoted the social view of mathematics, calling it a "humanistic" approach, similar to but not quite the same as that associated with Alvin White; one of Hersh's co-authors, Philip J. Davis, has expressed sympathy for the social view as well.

Beyond the traditional schools

Unreasonable effectiveness

Rather than focus on narrow debates about the true nature of mathematical truth, or even on practices unique to mathematicians such as the proof, a growing movement from the 1960s to the 1990s began to question the idea of seeking foundations or finding any one right answer to why mathematics works. The starting point for this was Eugene Wigner's famous 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", in which he argued that the happy coincidence of mathematics and physics being so well matched seemed to be unreasonable and hard to explain.

Popper's two senses of number statements

Realist and constructivist theories are normally taken to be contraries. However, Karl Popper argued that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true. In the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two propositions: one of which can be explained on constructivist lines; the other on realist lines.

Philosophy of language

Innovations in the philosophy of language during the 20th century renewed interest in whether mathematics is, as is often said, the language of science. Although some mathematicians and philosophers would accept the statement "mathematics is a language", linguists believe that the implications of such a statement must be considered. For example, the tools of linguistics are not generally applied to the symbol systems of mathematics, that is, mathematics is studied in a markedly different way from other languages. If mathematics is a language, it is a different type of language from natural languages. Indeed, because of the need for clarity and specificity, the language of mathematics is far more constrained than natural languages studied by linguists. However, the methods developed by Frege and Tarski for the study of mathematical language have been extended greatly by Tarski's student Richard Montague and other linguists working in formal semantics to show that the distinction between mathematical language and natural language may not be as great as it seems.

Mohan Ganesalingam has analysed mathematical language using tools from formal linguistics. Ganesalingam notes that some features of natural language are not necessary when analysing mathematical language (such as tense), but many of the same analytical tools can be used (such as context-free grammars). One important difference is that mathematical objects have clearly defined types, which can be explicitly defined in a text: "Effectively, we are allowed to introduce a word in one part of a sentence, and declare its part of speech in another; and this operation has no analogue in natural language."
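
For instance (a toy fragment invented for illustration, not Ganesalingam's grammar), a context-free grammar for simple equational statements might be written:

    \[ S \to T = T, \qquad T \to T + T \mid T \cdot T \mid v, \qquad v \to x \mid y \mid z \mid 0 \mid 1 \]

A string such as x + y = z is then "grammatical" exactly when it can be derived from S.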

Arguments

Indispensability argument for realism

This argument, associated with Willard Quine and Hilary Putnam, is considered by Stephen Yablo to be one of the most challenging arguments in favor of the acceptance of the existence of abstract mathematical entities, such as numbers and sets. The form of the argument is as follows.

  1. One must have ontological commitments to all entities that are indispensable to the best scientific theories, and to those entities only (commonly referred to as "all and only").
  2. Mathematical entities are indispensable to the best scientific theories. Therefore,
  3. One must have ontological commitments to mathematical entities.

The justification for the first premise is the most controversial. Both Putnam and Quine invoke naturalism to justify the exclusion of all non-scientific entities, and hence to defend the "only" part of "all and only". The assertion that "all" entities postulated in scientific theories, including numbers, should be accepted as real is justified by confirmation holism. Since theories are not confirmed in a piecemeal fashion, but as a whole, there is no justification for excluding any of the entities referred to in well-confirmed theories. This puts the nominalist who wishes to exclude the existence of sets and non-Euclidean geometry, but to include the existence of quarks and other undetectable entities of physics, for example, in a difficult position.

Epistemic argument against realism

The anti-realist "epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field. Platonism posits that mathematical objects are abstract entities. By general agreement, abstract entities cannot interact causally with concrete, physical entities ("the truth-values of our mathematical assertions depend on facts involving Platonic entities that reside in a realm outside of space-time"). Whilst our knowledge of concrete, physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of how mathematicians come to have knowledge of abstract objects. Another way of making the point is that if the Platonic world were to disappear, it would make no difference to the ability of mathematicians to generate proofs, etc., which is already fully accountable in terms of physical processes in their brains.

Field developed his views into fictionalism. Benacerraf also developed the philosophy of mathematical structuralism, according to which there are no mathematical objects. Nonetheless, some versions of structuralism are compatible with some versions of realism.

The argument hinges on the idea that a satisfactory naturalistic account of thought processes in terms of brain processes can be given for mathematical reasoning along with everything else. One line of defense is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm. A modern form of this argument is given by Sir Roger Penrose.

Another line of defense is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal, and not analogous to perception. This argument is developed by Jerrold Katz in his 2000 book Realistic Rationalism.

A more radical defense is denial of physical reality, i.e. the mathematical universe hypothesis. In that case, a mathematician's knowledge of mathematics is one mathematical object making contact with another.

Aesthetics

Many practicing mathematicians have been drawn to their subject because of a sense of beauty they perceive in it. One sometimes hears the sentiment that mathematicians would like to leave philosophy to the philosophers and get back to mathematics—where, presumably, the beauty lies.

In his work on the divine proportion, H.E. Huntley relates the feeling of reading and understanding someone else's proof of a theorem of mathematics to that of a viewer of a masterpiece of art—the reader of a proof has a similar sense of exhilaration at understanding as the original author of the proof, much as, he argues, the viewer of a masterpiece has a sense of exhilaration similar to the original painter or sculptor. Indeed, one can study mathematical and scientific writings as literature.

Philip J. Davis and Reuben Hersh have commented that the sense of mathematical beauty is universal amongst practicing mathematicians. By way of example, they provide two proofs of the irrationality of √2. The first is the traditional proof by contradiction, ascribed to Euclid; the second is a more direct proof involving the fundamental theorem of arithmetic that, they argue, gets to the heart of the issue. Davis and Hersh argue that mathematicians find the second proof more aesthetically appealing because it gets closer to the nature of the problem.
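
The second style of proof can be sketched as follows (a standard rendering of the argument from unique factorization; Davis and Hersh's own wording may differ). If

    \[ \sqrt{2} = \frac{a}{b} \quad\text{for positive integers } a, b, \qquad\text{then}\qquad a^2 = 2b^2, \]

but by the fundamental theorem of arithmetic the prime 2 occurs to an even power in a^2 and to an odd power in 2b^2, so no such integers can exist.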

Paul Erdős was well known for his notion of a hypothetical "Book" containing the most elegant or beautiful mathematical proofs. There is not universal agreement that a result has one "most elegant" proof; Gregory Chaitin has argued against this idea.

Philosophers have sometimes criticized mathematicians' sense of beauty or elegance as being, at best, vaguely stated. By the same token, however, philosophers of mathematics have sought to characterize what makes one proof more desirable than another when both are logically sound.

Another aspect of aesthetics concerning mathematics is mathematicians' views towards the possible uses of mathematics for purposes deemed unethical or inappropriate. The best-known exposition of this view occurs in G. H. Hardy's book A Mathematician's Apology, in which Hardy argues that pure mathematics is superior in beauty to applied mathematics precisely because it cannot be used for war and similar ends.

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...