Wednesday, June 16, 2021

Foundations of mathematics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Foundations_of_mathematics

Foundations of mathematics is the study of the philosophical, logical, and/or algorithmic basis of mathematics or, in a broader sense, the mathematical investigation of what underlies the philosophical theories concerning the nature of mathematics. In this latter sense, the distinction between foundations of mathematics and philosophy of mathematics turns out to be quite vague. Foundations of mathematics can be conceived as the study of the basic mathematical concepts (set, function, geometrical figure, number, etc.) and of how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics (formulas, theories and their models giving a meaning to formulas, definitions, proofs, algorithms, etc.), also called metamathematical concepts, with an eye to the philosophical aspects and the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges.

The study of the foundations of mathematics as a whole does not aim to contain the foundations of every mathematical topic. Generally, the foundations of a field of study refer to a more-or-less systematic analysis of its most basic or fundamental concepts, its conceptual unity, and its natural ordering or hierarchy of concepts, which may help to connect it with the rest of human knowledge. The development, emergence, and clarification of the foundations can come late in the history of a field, and might not be viewed by everyone as its most interesting part.

Mathematics always played a special role in scientific thought, serving since ancient times as a model of truth and rigor for rational inquiry, and giving tools or even a foundation for other sciences (especially physics). Mathematics' many developments towards higher abstractions in the 19th century brought new challenges and paradoxes, urging for a deeper and more systematic examination of the nature and criteria of mathematical truth, as well as a unification of the diverse branches of mathematics into a coherent whole.

The systematic search for the foundations of mathematics started at the end of the 19th century and formed a new mathematical discipline called mathematical logic, which later had strong links to theoretical computer science. It went through a series of crises with paradoxical results, until the discoveries stabilized during the 20th century as a large and coherent body of mathematical knowledge with several aspects or components (set theory, model theory, proof theory, etc.), whose detailed properties and possible variants are still an active research field. Its high level of technical sophistication inspired many philosophers to conjecture that it can serve as a model or pattern for the foundations of other sciences.

Historical context

Ancient Greek mathematics

While the practice of mathematics had previously developed in other civilizations, special interest in its theoretical and foundational aspects was clearly evident in the work of the Ancient Greeks.

Early Greek philosophers disputed which is more basic, arithmetic or geometry. Zeno of Elea (c. 490 – c. 430 BC) produced four paradoxes that seem to show the impossibility of change. The Pythagorean school of mathematics originally insisted that only natural and rational numbers exist. The discovery of the irrationality of √2, the ratio of the diagonal of a square to its side (around the 5th century BC), was a shock to them which they only reluctantly accepted. The discrepancy between rationals and reals was finally resolved by Eudoxus of Cnidus (408–355 BC), a student of Plato, who reduced the comparison of two irrational ratios to comparisons of multiples of the magnitudes involved. His method anticipated the Dedekind cut in the modern definition of real numbers by Richard Dedekind (1831–1916).

In the Posterior Analytics, Aristotle (384–322 BC) laid down the axiomatic method for organizing a field of knowledge logically by means of primitive concepts, axioms, postulates, definitions, and theorems. Aristotle took a majority of his examples for this from arithmetic and from geometry. This method reached its high point with Euclid's Elements (300 BC), a treatise on mathematics structured with very high standards of rigor: Euclid justifies each proposition by a demonstration in the form of chains of syllogisms (though they do not always conform strictly to Aristotelian templates). Aristotle's syllogistic logic, together with the axiomatic method exemplified by Euclid's Elements, are recognized as scientific achievements of ancient Greece.

Platonism as a traditional philosophy of mathematics

Starting from the end of the 19th century, a Platonist view of mathematics became common among practicing mathematicians.

The concepts or, as Platonists would have it, the objects of mathematics are abstract and remote from everyday perceptual experience: geometrical figures are conceived as idealities to be distinguished from effective drawings and shapes of objects, and numbers are not confused with the counting of concrete objects. Their existence and nature present special philosophical challenges: How do mathematical objects differ from their concrete representation? Are they located in their representation, or in our minds, or somewhere else? How can we know them?

The ancient Greek philosophers took such questions very seriously. Indeed, many of their general philosophical discussions were carried on with extensive reference to geometry and arithmetic. Plato (424/423 BC – 348/347 BC) insisted that mathematical objects, like other platonic Ideas (forms or essences), must be perfectly abstract and have a separate, non-material kind of existence, in a world of mathematical objects independent of humans. He believed that the truths about these objects also exist independently of the human mind, but are discovered by humans. In the Meno, Plato's teacher Socrates asserts that it is possible to come to know this truth by a process akin to memory retrieval.

Above the gateway to Plato's academy appeared a famous inscription: "Let no one who is ignorant of geometry enter here". In this way Plato indicated his high opinion of geometry. He regarded geometry as "the first essential in the training of philosophers", because of its abstract character.

This philosophy of Platonist mathematical realism is shared by many mathematicians. It can be argued that Platonism somehow comes as a necessary assumption underlying any mathematical work.

In this view, the laws of nature and the laws of mathematics have a similar status, and the effectiveness of mathematics in describing nature ceases to be unreasonable. Not our axioms, but the very real world of mathematical objects, forms the foundation.

Aristotle dissected and rejected this view in his Metaphysics. These questions provide much fuel for philosophical analysis and debate.

Middle Ages and Renaissance

For over 2,000 years, Euclid's Elements stood as a perfectly solid foundation for mathematics, as its methodology of rational exploration guided mathematicians, philosophers, and scientists well into the 19th century.

The Middle Ages saw a dispute over the ontological status of the universals (platonic Ideas): realism asserted their existence independently of perception; conceptualism asserted their existence within the mind only; nominalism denied both, seeing universals only as names of collections of individual objects (following older speculations that they are words, "logoi").

René Descartes published La Géométrie (1637), aimed at reducing geometry to algebra by means of coordinate systems, giving algebra a more foundational role (while the Greeks embedded arithmetic into geometry by identifying whole numbers with evenly spaced points on a line). Descartes' book became famous after 1649 and paved the way to infinitesimal calculus.

Isaac Newton (1642–1727) in England and Leibniz (1646–1716) in Germany independently developed the infinitesimal calculus, based on heuristic methods that were highly effective but sorely lacking in rigorous justification. Leibniz even went on to explicitly describe infinitesimals as actual infinitely small numbers (close to zero). Leibniz also worked on formal logic, but most of his writings on it remained unpublished until 1903.

The Protestant philosopher George Berkeley (1685–1753), in his campaign against the religious implications of Newtonian mechanics, wrote a pamphlet on the lack of rational justifications of infinitesimal calculus: "They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?"

Then mathematics developed very rapidly and successfully in physical applications, but with little attention to logical foundations.

19th century

In the 19th century, mathematics became increasingly abstract. Concerns about logical gaps and inconsistencies in different fields led to the development of axiomatic systems.

Real analysis

Cauchy (1789–1857) started the project of formulating and proving the theorems of infinitesimal calculus in a rigorous manner, rejecting the heuristic principle of the generality of algebra exploited by earlier authors. In his 1821 work Cours d'Analyse he defined infinitely small quantities in terms of decreasing sequences that converge to 0, which he then used to define continuity. But he did not formalize his notion of convergence.

The modern (ε, δ)-definition of limit and continuous functions was first developed by Bolzano in 1817, but remained relatively unknown. It gives a rigorous foundation of infinitesimal calculus based on the set of real numbers, arguably resolving the Zeno paradoxes and Berkeley's arguments.
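
In modern notation (a standard textbook formulation, not Bolzano's original wording), the definition reads

  \lim_{x \to a} f(x) = L \quad\Longleftrightarrow\quad \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x :\; 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon,

and f is continuous at a when this limit exists and equals f(a). All talk of "infinitely small" quantities is thereby replaced by quantified statements about ordinary real numbers.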

Mathematicians such as Karl Weierstrass (1815–1897) discovered pathological functions such as continuous, nowhere-differentiable functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, to axiomatize analysis using properties of the natural numbers.

In 1858, Dedekind proposed a definition of the real numbers as cuts of rational numbers. This reduction of real numbers and continuous functions to rational numbers, and thus to natural numbers, was later integrated by Cantor into his set theory, and axiomatized in terms of second-order arithmetic by Hilbert and Bernays.
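
In a standard modern formulation (a sketch of the idea rather than Dedekind's original wording), a real number is identified with a cut: a set A of rationals satisfying

  \emptyset \neq A \subsetneq \mathbb{Q}, \qquad (q \in A \text{ and } p < q) \implies p \in A, \qquad A \text{ has no greatest element}.

For example, \sqrt{2} corresponds to the cut \{\, q \in \mathbb{Q} : q \le 0 \text{ or } q^2 < 2 \,\}.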

Group theory

For the first time, the limits of mathematics were explored. Niels Henrik Abel (1802–1829), a Norwegian, and Évariste Galois (1811–1832), a Frenchman, investigated the solutions of various polynomial equations, and proved that there is no general algebraic solution to equations of degree greater than four (the Abel–Ruffini theorem). With these concepts, Pierre Wantzel (1837) proved that straightedge and compass alone cannot trisect an arbitrary angle nor double a cube. In 1882, Lindemann, building on the work of Hermite, showed that a straightedge-and-compass quadrature of the circle (construction of a square equal in area to a given circle) is also impossible, by proving that π is a transcendental number. Mathematicians had attempted to solve all of these problems in vain since the time of the ancient Greeks.

Abel's and Galois's works opened the way for the development of group theory (which would later be used to study symmetry in physics and other fields) and of abstract algebra. The concept of a vector space emerged gradually, from Möbius's conception of barycentric coordinates in 1827 to Peano's modern definition of vector spaces and linear maps in 1888. Geometry was no longer limited to three dimensions. These concepts did not generalize numbers but combined notions of functions and sets which were not yet formalized, breaking away from familiar mathematical objects.

Non-Euclidean geometries

After many failed attempts to derive the parallel postulate from other axioms, the study of the still hypothetical hyperbolic geometry by Johann Heinrich Lambert (1728–1777) led him to introduce the hyperbolic functions and compute the area of a hyperbolic triangle (where the sum of angles is less than 180°). Then the Russian mathematician Nikolai Lobachevsky (1792–1856) established in 1826 (and published in 1829) the coherence of this geometry (thus the independence of the parallel postulate), in parallel with the Hungarian mathematician János Bolyai (1802–1860) in 1832, and with Gauss. Later in the 19th century, the German mathematician Bernhard Riemann developed elliptic geometry, another non-Euclidean geometry where no parallel can be found and the sum of angles in a triangle is more than 180°. It was proved consistent by defining point to mean a pair of antipodal points on a fixed sphere and line to mean a great circle on the sphere. At that time, the main method for proving the consistency of a set of axioms was to provide a model for it.

Projective geometry

One of the traps in a deductive system is circular reasoning, a problem that seemed to befall projective geometry until it was resolved by Karl von Staudt. As explained by Russian historians:

In the mid-nineteenth century there was an acrimonious controversy between the proponents of synthetic and analytic methods in projective geometry, the two sides accusing each other of mixing projective and metric concepts. Indeed the basic concept that is applied in the synthetic presentation of projective geometry, the cross-ratio of four points of a line, was introduced through consideration of the lengths of intervals.

The purely geometric approach of von Staudt was based on the complete quadrilateral to express the relation of projective harmonic conjugates. He then created a means of expressing the familiar numeric properties with his Algebra of Throws. English-language versions of this process of deducing the properties of a field can be found either in the book by Oswald Veblen and John Young, Projective Geometry (1938), or more recently in John Stillwell's Four Pillars of Geometry (2005). Stillwell writes on page 120:

... projective geometry is simpler than algebra in a certain sense, because we use only five geometric axioms to derive the nine field axioms.

The algebra of throws is commonly seen as a feature of cross-ratios since students ordinarily rely upon numbers without worrying about their basis. However, cross-ratio calculations use metric features of geometry, features not admitted by purists. For instance, in 1961 Coxeter wrote Introduction to Geometry without mention of the cross-ratio.

Boolean algebra and logic

Attempts at a formal treatment of mathematics had started with Leibniz and Lambert (1728–1777), and continued with works by algebraists such as George Peacock (1791–1858). Systematic mathematical treatments of logic came with the British mathematician George Boole (1847), who devised an algebra that soon evolved into what is now called Boolean algebra, in which the only numbers are 0 and 1 and logical combinations (conjunction, disjunction, implication and negation) are operations similar to the addition and multiplication of integers. Additionally, De Morgan published his laws in 1847. Logic thus became a branch of mathematics. Boolean algebra is the starting point of mathematical logic and has important applications in computer science.
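
As a small illustration of that analogy (a minimal sketch in Python, in modern notation rather than Boole's own), the logical operations on the two truth values can be written as arithmetic on 0 and 1:

  # Truth values are the integers 0 (false) and 1 (true).
  def conj(x, y):     # conjunction behaves like multiplication
      return x * y

  def disj(x, y):     # disjunction: add, then subtract the double-counted overlap
      return x + y - x * y

  def neg(x):         # negation is the complement with respect to 1
      return 1 - x

  def implies(x, y):  # "x implies y" is equivalent to (not x) or y
      return disj(neg(x), y)

  # One of De Morgan's laws, checked over all truth values:
  assert all(neg(conj(x, y)) == disj(neg(x), neg(y))
             for x in (0, 1) for y in (0, 1))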

Charles Sanders Peirce built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885.

The German mathematician Gottlob Frege (1848–1925) presented an independent development of logic with quantifiers in his Begriffsschrift (formula language) published in 1879, a work generally considered as marking a turning point in the history of logic. He exposed deficiencies in Aristotle's logic, and pointed out the three properties expected of a mathematical theory:

  1. Consistency: impossibility of proving contradictory statements.
  2. Completeness: any statement is either provable or refutable (i.e. its negation is provable).
  3. Decidability: there is a decision procedure to test any statement in the theory.

He then showed in Grundgesetze der Arithmetik (Basic Laws of Arithmetic) how arithmetic could be formalised in his new logic.

Frege's work was popularized by Bertrand Russell near the turn of the century. But Frege's two-dimensional notation did not catch on. Popular notations were (x) for universal and (∃x) for existential quantifiers, coming from Giuseppe Peano and William Ernest Johnson, until the ∀ symbol was introduced by Gerhard Gentzen in 1935 and became canonical in the 1960s.

From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century.

Peano arithmetic

The formalization of arithmetic (the theory of natural numbers) as an axiomatic theory started with Peirce in 1881 and continued with Richard Dedekind and Giuseppe Peano in 1888. This was still a second-order axiomatization (expressing induction in terms of arbitrary subsets, thus with an implicit use of set theory), as the issues involved in expressing theories in first-order logic were not yet understood. In Dedekind's work, this approach appears as completely characterizing the natural numbers and providing recursive definitions of addition and multiplication from the successor function and mathematical induction.
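
A minimal sketch of those recursive definitions in Python (an illustration of the idea only, with ordinary integers standing in for the successor representation; the names succ, add and mul are ours, not Peano's notation):

  # 0 and the successor operation generate all natural numbers.
  def succ(n):
      return n + 1

  def add(a, b):
      # a + 0 = a,  and  a + succ(b) = succ(a + b)
      return a if b == 0 else succ(add(a, b - 1))

  def mul(a, b):
      # a * 0 = 0,  and  a * succ(b) = (a * b) + a
      return 0 if b == 0 else add(mul(a, b - 1), a)

  assert add(2, 3) == 5 and mul(2, 3) == 6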

Foundational crisis

The foundational crisis of mathematics (in German Grundlagenkrise der Mathematik) was the early 20th century's term for the search for proper foundations of mathematics.

Several schools of the philosophy of mathematics ran into difficulties one after the other in the 20th century, as the assumption that mathematics had any foundation that could be consistently stated within mathematics itself was heavily challenged by the discovery of various paradoxes (such as Russell's paradox).

The name "paradox" should not be confused with contradiction. A contradiction in a formal theory is a formal proof of an absurdity inside the theory (such as 2 + 2 = 5), showing that this theory is inconsistent and must be rejected. But a paradox may be either a surprising but true result in a given formal theory, or an informal argument leading to a contradiction, so that a candidate theory, if it is to be formalized, must disallow at least one of its steps; in this case the problem is to find a satisfying theory without contradiction. Both meanings may apply if the formalized version of the argument forms the proof of a surprising truth. For instance, Russell's paradox may be expressed as "there is no set of all sets" (except in some marginal axiomatic set theories).

Various schools of thought opposed each other. The leading school was that of the formalist approach, of which David Hilbert was the foremost proponent, culminating in what is known as Hilbert's program, which sought to ground mathematics on a small basis of a logical system proved sound by metamathematical finitistic means. The main opponent was the intuitionist school, led by L. E. J. Brouwer, which resolutely discarded formalism as a meaningless game with symbols. The fight was acrimonious. In 1920 Hilbert succeeded in having Brouwer, whom he considered a threat to mathematics, removed from the editorial board of Mathematische Annalen, the leading mathematical journal of the time.

Philosophical views

At the beginning of the 20th century, three schools of philosophy of mathematics opposed each other: Formalism, Intuitionism and Logicism. The Second Conference on the Epistemology of the Exact Sciences held in Königsberg in 1930 gave space to these three schools.

Formalism

It has been claimed that formalists, such as David Hilbert (1862–1943), hold that mathematics is only a language and a series of games. Indeed, he used the words "formula game" in his 1927 response to L. E. J. Brouwer's criticisms:

And to what extent has the formula game thus made possible been successful? This formula game enables us to express the entire thought-content of the science of mathematics in a uniform manner and develop it in such a way that, at the same time, the interconnections between the individual propositions and facts become clear ... The formula game that Brouwer so deprecates has, besides its mathematical value, an important general philosophical significance. For this formula game is carried out according to certain definite rules, in which the technique of our thinking is expressed. These rules form a closed system that can be discovered and definitively stated.

Thus Hilbert is insisting that mathematics is not an arbitrary game with arbitrary rules; rather it must agree with how our thinking, and then our speaking and writing, proceeds.

We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise.

The foundational philosophy of formalism, as exemplified by David Hilbert, is a response to the paradoxes of set theory, and is based on formal logic. Virtually all mathematical theorems today can be formulated as theorems of set theory. The truth of a mathematical statement, in this view, is represented by the fact that the statement can be derived from the axioms of set theory using the rules of formal logic.

The use of formalism alone does not explain several issues: why we should use the axioms we do and not some others, why we should employ the logical rules we do and not some others, why "true" mathematical statements (e.g., the laws of arithmetic) appear to be true, and so on. Hermann Weyl would ask these very questions of Hilbert:

What "truth" or objectivity can be ascribed to this theoretic construction of the world, which presses far beyond the given, is a profound philosophical problem. It is closely connected with the further question: what impels us to take as a basis precisely the particular axiom system developed by Hilbert? Consistency is indeed a necessary but not a sufficient condition. For the time being we probably cannot answer this question ...

In some cases these questions may be sufficiently answered through the study of formal theories, in disciplines such as reverse mathematics and computational complexity theory. As noted by Weyl, formal logical systems also run the risk of inconsistency; in Peano arithmetic, this arguably has already been settled with several proofs of consistency, but there is debate over whether or not they are sufficiently finitary to be meaningful. Gödel's second incompleteness theorem establishes that logical systems of arithmetic can never contain a valid proof of their own consistency. What Hilbert wanted to do was prove a logical system S was consistent, based on principles P that only made up a small part of S. But Gödel proved that the principles P could not even prove P to be consistent, let alone S.
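
In modern notation, the obstacle can be stated roughly as follows (a standard, simplified formulation of Gödel's second incompleteness theorem):

  \text{If } T \text{ is a consistent, recursively axiomatizable theory containing elementary arithmetic, then } T \nvdash \mathrm{Con}(T).

Since every proof carried out in the weaker part P is also a proof in S, it follows that P cannot prove Con(S) either.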

Intuitionism

Intuitionists, such as L. E. J. Brouwer (1882–1966), hold that mathematics is a creation of the human mind. Numbers, like fairy tale characters, are merely mental entities, which would not exist if there were never any human minds to think about them.

The foundational philosophy of intuitionism or constructivism, as exemplified in the extreme by Brouwer and Stephen Kleene, requires proofs to be "constructive" in nature – the existence of an object must be demonstrated rather than inferred from a demonstration of the impossibility of its non-existence. For example, as a consequence of this the form of proof known as reductio ad absurdum is suspect.

Some modern theories in the philosophy of mathematics deny the existence of foundations in the original sense. Some theories tend to focus on mathematical practice, and aim to describe and analyze the actual working of mathematicians as a social group. Others try to create a cognitive science of mathematics, focusing on human cognition as the origin of the reliability of mathematics when applied to the real world. These theories would propose to find foundations only in human thought, not in any objective outside construct. The matter remains controversial.

Logicism

Logicism is a school of thought, and research programme, in the philosophy of mathematics, based on the thesis that mathematics is an extension of logic, or that some or all mathematics may be derived in a suitable formal system whose axioms and rules of inference are 'logical' in nature. Bertrand Russell and Alfred North Whitehead championed this theory, initiated by Gottlob Frege and influenced by Richard Dedekind.

Set-theoretic Platonism

Many researchers in axiomatic set theory have subscribed to what is known as set-theoretic Platonism, exemplified by Kurt Gödel.

Several set theorists followed this approach and actively searched for axioms that may be considered as true for heuristic reasons and that would decide the continuum hypothesis. Many large cardinal axioms were studied, but the hypothesis always remained independent from them and it is now considered unlikely that CH can be resolved by a new large cardinal axiom. Other types of axioms were considered, but none of them has reached consensus on the continuum hypothesis yet. Recent work by Hamkins proposes a more flexible alternative: a set-theoretic multiverse allowing free passage between set-theoretic universes that satisfy the continuum hypothesis and other universes that do not.

Indispensability argument for realism

This argument by Willard Quine and Hilary Putnam says (in Putnam's shorter words),

... quantification over mathematical entities is indispensable for science ...; therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question.

However, Putnam was not a Platonist.

Rough-and-ready realism

Few mathematicians are typically concerned, on a daily working basis, with logicism, formalism or any other philosophical position. Instead, their primary concern is that the mathematical enterprise as a whole always remains productive. Typically, they see this as ensured by remaining open-minded, practical and busy, and as potentially threatened by becoming overly ideological, fanatically reductionistic, or lazy.

Such a view has also been expressed by some well-known physicists.

For example, the Physics Nobel Prize laureate Richard Feynman said

People say to me, "Are you looking for the ultimate laws of physics?" No, I'm not ... If it turns out there is a simple ultimate law which explains everything, so be it – that would be very nice to discover. If it turns out it's like an onion with millions of layers ... then that's the way it is. But either way there's Nature and she's going to come out the way She is. So therefore when we go to investigate we shouldn't predecide what it is we're looking for only to find out more about it.

And Steven Weinberg:

The insights of philosophers have occasionally benefited physicists, but generally in a negative fashion – by protecting them from the preconceptions of other philosophers. ... without some guidance from our preconceptions one could do nothing at all. It is just that philosophical principles have not generally provided us with the right preconceptions.

Weinberg believed that any undecidability in mathematics, such as the continuum hypothesis, could be potentially resolved despite the incompleteness theorem, by finding suitable further axioms to add to set theory.

Philosophical consequences of Gödel's completeness theorem

Gödel's completeness theorem establishes an equivalence in first-order logic between the formal provability of a formula and its truth in all possible models. Precisely, for any consistent first-order theory it gives an "explicit construction" of a model of the theory; this model will be countable if the language of the theory is countable. However, this "explicit construction" is not algorithmic. It is based on an iterative process of completion of the theory, where each step of the iteration consists in adding a formula to the axioms if it keeps the theory consistent; but this consistency question is only semi-decidable (an algorithm is available to find any contradiction, but if there is none the consistency can remain unprovable).
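
Stated compactly (a standard modern formulation):

  T \vdash \varphi \;\Longleftrightarrow\; T \models \varphi, \qquad\text{equivalently,}\qquad T \text{ is consistent} \;\Longleftrightarrow\; T \text{ has a model}.

The model produced by the completion process is built from the syntax of the theory itself (a Henkin-style term model), which is why it is countable whenever the language is countable.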

This can be seen as giving a sort of justification to the Platonist view that the objects of our mathematical theories are real. More precisely, it shows that the mere assumption of the existence of the set of natural numbers as a totality (an actual infinity) suffices to imply the existence of a model (a world of objects) of any consistent theory. However, several difficulties remain:

  • For any consistent theory this usually does not give just one world of objects, but an infinity of possible worlds that the theory might equally describe, with a possible diversity of truths between them.
  • In the case of set theory, none of the models obtained by this construction resemble the intended model, as they are countable while set theory intends to describe uncountable infinities. Similar remarks can be made in many other cases. For example, with theories that include arithmetic, such constructions generally give models that include non-standard numbers, unless the construction method was specifically designed to avoid them.
  • As it gives models to all consistent theories without distinction, it gives no reason to accept or reject any axiom as long as the theory remains consistent, but regards all consistent axiomatic theories as referring to equally existing worlds. It gives no indication on which axiomatic system should be preferred as a foundation of mathematics.
  • As claims of consistency are usually unprovable, they remain a matter of belief or non-rigorous kinds of justifications. Hence the existence of models as given by the completeness theorem needs in fact two philosophical assumptions: the actual infinity of natural numbers and the consistency of the theory.

Another consequence of the completeness theorem is that it justifies the conception of infinitesimals as actual infinitely small nonzero quantities, based on the existence of non-standard models as equally legitimate to standard ones. This idea was formalized by Abraham Robinson into the theory of nonstandard analysis.
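
A sketch of the standard compactness argument behind this (a textbook reasoning step, not Robinson's full construction): add to the first-order theory of the real numbers a new constant ε together with the axioms

  0 < \varepsilon, \qquad \varepsilon < 1, \qquad \varepsilon < \tfrac{1}{2}, \qquad \varepsilon < \tfrac{1}{3}, \;\dots

Every finite subset of these axioms is satisfied in the ordinary reals (choose ε small enough), so by compactness (a consequence of the completeness theorem) the whole set is consistent, and any model of it contains a positive infinitesimal.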

More paradoxes

The following lists some notable results in metamathematics. Zermelo–Fraenkel set theory is the most widely studied axiomatization of set theory. It is abbreviated ZFC when it includes the axiom of choice and ZF when the axiom of choice is excluded.

  • 1920: Thoralf Skolem corrected Leopold Löwenheim's proof of what is now called the downward Löwenheim–Skolem theorem, leading to Skolem's paradox discussed in 1922, namely the existence of countable models of ZF, making infinite cardinalities a relative property.
  • 1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo set theory with urelements.
  • 1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. They showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system (such as is necessary to axiomatize the elementary theory of arithmetic on the (infinite) set of natural numbers), a statement that formally expresses its own unprovability, which Gödel then proved equivalent to the claim of consistency of the theory; so that (assuming consistency) the system is not powerful enough to prove its own consistency, let alone that a simpler system could do the job. It thus became clear that the notion of mathematical truth cannot be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly which axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a 'weaker' system than the system whose consistency it was supposed to prove).
  • 1936: Alfred Tarski proved his truth undefinability theorem.
  • 1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist (see the sketch after this list).
  • 1938: Gödel proved the consistency of the axiom of choice and of the generalized continuum hypothesis.
  • 1936–1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable as given by the completeness theorem).
  • 1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable.
  • 1963: Paul Cohen showed that the Continuum Hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.
  • 1964: Inspired by the fundamental randomness in physics, Gregory Chaitin started publishing results on algorithmic information theory (measuring incompleteness and randomness in mathematics).
  • 1966: Paul Cohen showed that the axiom of choice is unprovable in ZF even without urelements.
  • 1970: Hilbert's tenth problem is proven unsolvable: there is no recursive solution to decide whether a Diophantine equation (multivariable polynomial equation) has a solution in integers.
  • 1971: Suslin's problem is proven to be independent from ZFC.
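
For the 1936 halting-problem entry above, the core of Turing's argument can be sketched in Python (an informal illustration; would_halt is a hypothetical decider, not a real function, and the point is precisely that no such function can exist):

  def would_halt(program, argument):
      # Hypothetical total decider: returns True exactly when
      # program(argument) eventually terminates.  No correct
      # implementation can exist; this stub only makes the sketch load.
      raise NotImplementedError

  def paradox(program):
      # Diagonal construction: do the opposite of the prediction
      # made about running a program on its own text.
      if would_halt(program, program):
          while True:       # loop forever if termination is predicted
              pass
      return                # halt if non-termination is predicted

  # paradox(paradox) would halt exactly when would_halt(paradox, paradox)
  # returns False, that is, exactly when it does not halt; this is a
  # contradiction, so no general halting decider exists.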

Toward resolution of the crisis

Starting in 1935, the Bourbaki group of French mathematicians began publishing a series of books to formalize many areas of mathematics on the new foundation of set theory.

The intuitionistic school did not attract many adherents, and it was not until Bishop's work in 1967 that constructive mathematics was placed on a sounder footing.

One may consider that Hilbert's program has been partially completed, so that the crisis is essentially resolved, if we satisfy ourselves with lower requirements than Hilbert's original ambitions. His ambitions were expressed at a time when nothing was clear: it was not even clear whether mathematics could have a rigorous foundation at all.

There are many possible variants of set theory, which differ in consistency strength, where stronger versions (postulating higher types of infinities) contain formal proofs of the consistency of weaker versions, but none contains a formal proof of its own consistency. Thus the only thing we don't have is a formal proof of consistency of whatever version of set theory we may prefer, such as ZF.

In practice, most mathematicians either do not work from axiomatic systems, or if they do, do not doubt the consistency of ZFC, generally their preferred axiomatic system. In most of mathematics as it is practiced, the incompleteness and paradoxes of the underlying formal theories never played a role anyway, and in those branches in which they do or whose formalization attempts would run the risk of forming inconsistent theories (such as logic and category theory), they may be treated carefully.

The development of category theory in the middle of the 20th century showed the usefulness of set theories guaranteeing the existence of larger classes than does ZFC, such as Von Neumann–Bernays–Gödel set theory or Tarski–Grothendieck set theory, albeit that in very many cases the use of large cardinal axioms or Grothendieck universes is formally eliminable.

One goal of the reverse mathematics program is to identify whether there are areas of "core mathematics" in which foundational issues may again provoke a crisis.

The Unreasonable Effectiveness of Mathematics in the Natural Sciences

From Wikipedia, the free encyclopedia

"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is a 1960 article by the physicist Eugene Wigner. In the paper, Wigner observes that a physical theory's mathematical structure often points the way to further advances in that theory and even to empirical predictions.

The miracle of mathematics in the natural sciences

Wigner begins his paper with the belief, common among those familiar with mathematics, that mathematical concepts have applicability far beyond the context in which they were originally developed. Based on his experience, he writes, "it is important to point out that the mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena". He then invokes the fundamental law of gravitation as an example. Originally used to model freely falling bodies on the surface of the earth, this law was extended on the basis of what Wigner terms "very scanty observations" to describe the motion of the planets, where it "has proved accurate beyond all reasonable expectations".

Another oft-cited example is Maxwell's equations, derived to model the elementary electrical and magnetic phenomena known as of the mid-19th century. The equations also describe radio waves, discovered by David Edward Hughes in 1879, around the time of James Clerk Maxwell's death. Wigner sums up his argument by saying that "the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it". He concludes his paper with the same question with which he began:
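
The prediction of electromagnetic waves illustrates the point concretely: in vacuum, Maxwell's equations combine into a wave equation (a standard derivation, shown here only in outline),

  \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \text{m/s},

so equations written to summarize laboratory electricity and magnetism already contained travelling waves moving at the measured speed of light, before radio waves had been observed.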

The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.

The deep connection between science and mathematics

Wigner's work provided a fresh insight into both physics and the philosophy of mathematics, and has been fairly often cited in the academic literature on the philosophy of physics and of mathematics. Wigner speculated on the relationship between the philosophy of science and the foundations of mathematics as follows:

It is difficult to avoid the impression that a miracle confronts us here, quite comparable in its striking nature to the miracle that the human mind can string a thousand arguments together without getting itself into contradictions, or to the two miracles of laws of nature and of the human mind's capacity to divine them.

Later, Hilary Putnam (1975) explained these "two miracles" as necessary consequences of a realist (but not Platonist) view of the philosophy of mathematics. But in a passage discussing cognitive bias that Wigner cautiously labeled as "not reliable", he went further:

The writer is convinced that it is useful, in epistemological discussions, to abandon the idealization that the level of human intelligence has a singular position on an absolute scale. In some cases it may even be useful to consider the attainment which is possible at the level of the intelligence of some other species.

Whether humans checking the results of humans can be considered an objective basis for observation of the known (to humans) universe is an interesting question, one followed up in both cosmology and the philosophy of mathematics.

Wigner also laid out the challenge of a cognitive approach to integrating the sciences:

A much more difficult and confusing situation would arise if we could, some day, establish a theory of the phenomena of consciousness, or of biology, which would be as coherent and convincing as our present theories of the inanimate world.

He further proposed that arguments could be found that might

put a heavy strain on our faith in our theories and on our belief in the reality of the concepts which we form. It would give us a deep sense of frustration in our search for what I called 'the ultimate truth'. The reason that such a situation is conceivable is that, fundamentally, we do not know why our theories work so well. Hence, their accuracy may not prove their truth and consistency. Indeed, it is this writer's belief that something rather akin to the situation which was described above exists if the present laws of heredity and of physics are confronted.

Responses to Wigner's original paper

Wigner's original paper has provoked and inspired many responses across a wide range of disciplines. These include Richard Hamming in computer science, Arthur Lesk in molecular biology, Peter Norvig in data mining, Max Tegmark in physics, Ivor Grattan-Guinness in mathematics and Vela Velupillai in economics.

Richard Hamming

Richard Hamming, an applied mathematician and a founder of computer science, reflected on and extended Wigner's Unreasonable Effectiveness in 1980, mulling over four "partial explanations" for it. Hamming concluded that the four explanations he gave were unsatisfactory. They were:

1. Humans see what they look for. The belief that science is experimentally grounded is only partially true. Rather, our intellectual apparatus is such that much of what we see comes from the glasses we put on. Eddington went so far as to claim that a sufficiently wise mind could deduce all of physics, illustrating his point with the following joke: "Some men went fishing in the sea with a net, and upon examining what they caught they concluded that there was a minimum size to the fish in the sea."

Hamming gives four examples of nontrivial physical phenomena he believes arose from the mathematical tools employed and not from the intrinsic properties of physical reality.

  • Hamming proposes that Galileo discovered the law of falling bodies not by experimenting, but by simple, though careful, thinking. Hamming imagines Galileo as having engaged in the following thought experiment (which Hamming calls "scholastic reasoning"; the experiment is described in Galileo's book On Motion):

Suppose that a falling body broke into two pieces. Of course the two pieces would immediately slow down to their appropriate speeds. But suppose further that one piece happened to touch the other one. Would they now be one piece and both speed up? Suppose I tie the two pieces together. How tightly must I do it to make them one piece? A light string? A rope? Glue? When are two pieces one?

There is simply no way a falling body can "answer" such hypothetical "questions." Hence Galileo would have concluded that "falling bodies need not know anything if they all fall with the same velocity, unless interfered with by another force." After coming up with this argument, Hamming found a related discussion in Pólya (1963: 83-85). Hamming's account does not reveal an awareness of the 20th century scholarly debate over just what Galileo did.

2. Humans create and select the mathematics that fit a situation. The mathematics at hand does not always work. For example, when mere scalars proved awkward for understanding forces, first vectors, then tensors, were invented.

3. Mathematics addresses only a part of human experience. Much of human experience does not fall under science or mathematics but under the philosophy of value, including ethics, aesthetics, and political philosophy. To assert that the world can be explained via mathematics amounts to an act of faith.

4. Evolution has primed humans to think mathematically. The earliest lifeforms must have contained the seeds of the human ability to create and follow long chains of close reasoning.

Max Tegmark

A different response, advocated by physicist Max Tegmark, is that physics is so successfully described by mathematics because the physical world is completely mathematical, isomorphic to a mathematical structure, and that we are simply uncovering this bit by bit. The same interpretation had been advanced some years previously by Peter Atkins. In this interpretation, the various approximations that constitute our current physics theories are successful because simple mathematical structures can provide good approximations of certain aspects of more complex mathematical structures. In other words, our successful theories are not mathematics approximating physics, but mathematics approximating mathematics. Most of Tegmark's propositions are highly speculative, and some of them are even far-fetched by strict scientific standards; they raise one basic question: can one make precise sense of a notion of isomorphism (rather than a hand-waving "correspondence") between the universe – the concrete world of "stuff" and events – on the one hand, and mathematical structures as they are understood by mathematicians, within mathematics, on the other? Unless – or optimistically, until – this is achieved, the often-heard proposition that 'the world/universe is mathematical' may be nothing but a category mistake.

Ivor Grattan-Guinness

Ivor Grattan-Guinness found the effectiveness in question eminently reasonable and explicable in terms of concepts such as analogy, generalisation and metaphor.

Related quotations

We rejoice (actually we are relieved of a need) when, just as if it were a lucky chance favoring our aim, we do find such systematic unity among merely empirical laws. [Original German: "[W]ir auch, gleich als ob es ein glücklicher unsre Absicht begünstigender Zufall wäre, erfreuet (eigentlich eines Bedürfnisses entledigt) werden, wenn wir eine solche systematische Einheit unter bloß empirischen Gesetzen antreffen."]

— Immanuel Kant

The most incomprehensible thing about the universe is that it is comprehensible.

— Albert Einstein

How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? [...] In my opinion the answer to this question is, briefly, this: As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

— Albert Einstein

Physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover.

— Bertrand Russell

There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology.

— Israel Gelfand

Sciences reach a point where they become mathematized ... the central issues in the field become sufficiently understood that they can be thought about mathematically ... [by the early 1990s] biology was no longer the science of things that smelled funny in refrigerators (my view from undergraduate days in the 1960s) ... The field was undergoing a revolution and was rapidly acquiring the depth and power previously associated exclusively with the physical sciences. Biology was now the study of information stored in DNA — strings of four letters: A, T, G, and C ... and the transformations that information undergoes in the cell. There was mathematics here!

— Leonard Adleman, a theoretical computer scientist who pioneered the field of DNA computing 

We should stop acting as if our goal is to author extremely elegant theories, and instead embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data.

— Alon Halevy, Peter Norvig and Fernando Pereira

 

Numerical cognition

From Wikipedia, the free encyclopedia

Numerical cognition is a subdiscipline of cognitive science that studies the cognitive, developmental and neural bases of numbers and mathematics. As with many cognitive science endeavors, this is a highly interdisciplinary topic, and includes researchers in cognitive psychology, developmental psychology, neuroscience and cognitive linguistics. This discipline, although it may interact with questions in the philosophy of mathematics, is primarily concerned with empirical questions.

Topics included in the domain of numerical cognition include:

  • How do non-human animals process numerosity?
  • How do infants acquire an understanding of numbers (and how much is inborn)?
  • How do humans associate linguistic symbols with numerical quantities?
  • How do these capacities underlie our ability to perform complex calculations?
  • What are the neural bases of these abilities, both in humans and in non-humans?
  • What metaphorical capacities and processes allow us to extend our numerical understanding into complex domains such as the concept of infinity, the infinitesimal or the concept of the limit in calculus?
  • Heuristics in numerical cognition

Comparative studies

A variety of research has demonstrated that non-human animals, including rats, lions and various species of primates, have an approximate sense of number (referred to as "numerosity") (for a review, see Dehaene 1997). For example, when a rat is trained to press a bar 8 or 16 times to receive a food reward, the number of bar presses will approximate a Gaussian or normal distribution with a peak around 8 or 16 bar presses. When rats are hungrier, their bar-pressing behavior is more rapid, so by showing that the peak number of bar presses is the same for both well-fed and hungry rats, it is possible to disentangle time and number of bar presses. In addition, in a few species the parallel individuation system has been shown, for example in the case of guppies, which successfully discriminated between 1 and 4 other individuals.

Similarly, researchers have set up hidden speakers in the African savannah to test natural (untrained) behavior in lions (McComb, Packer & Pusey 1994). These speakers can play a number of lion calls, from 1 to 5. If a single lioness hears, for example, three calls from unknown lions, she will leave, while if she is with four of her sisters, they will go and explore. This suggests that not only can lions tell when they are "outnumbered" but that they can do this on the basis of signals from different sensory modalities, suggesting that numerosity is a multisensory concept.

Developmental studies

Developmental psychology studies have shown that human infants, like non-human animals, have an approximate sense of number. For example, in one study, infants were repeatedly presented with arrays of (in one block) 16 dots. Careful controls were in place to eliminate information from "non-numerical" parameters such as total surface area, luminance, circumference, and so on. After the infants had been presented with many displays containing 16 items, they habituated, or stopped looking as long at the display. Infants were then presented with a display containing 8 items, and they looked longer at the novel display.

Because of the numerous controls that were in place to rule out non-numerical factors, the experimenters infer that six-month-old infants are sensitive to differences between 8 and 16. Subsequent experiments, using similar methodologies, showed that 6-month-old infants can discriminate numbers differing by a 2:1 ratio (8 vs. 16 or 16 vs. 32) but not by a 3:2 ratio (8 vs. 12 or 16 vs. 24). However, 10-month-old infants succeed both at the 2:1 and the 3:2 ratio, suggesting an increased sensitivity to numerosity differences with age (for a review of this literature see Feigenson, Dehaene & Spelke 2004).
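
This ratio dependence is often summarized by a Weber fraction (a standard way of describing such data, not a claim from the studies themselves): discrimination of two numerosities n_1 < n_2 succeeds roughly when

  \frac{n_2 - n_1}{n_1} > w.

On this description, 6-month-olds succeed when the proportional difference is 1 (e.g., 8 vs. 16) but not when it is 0.5 (e.g., 8 vs. 12), while 10-month-olds succeed at 0.5 as well, i.e. the critical fraction w decreases with age.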

In another series of studies, Karen Wynn showed that infants as young as five months are able to do very simple additions (e.g., 1 + 1 = 2) and subtractions (3 - 1 = 2). To demonstrate this, Wynn used a "violation of expectation" paradigm, in which infants were shown (for example) one Mickey Mouse doll going behind a screen, followed by another. If, when the screen was lowered, infants were presented with only one Mickey (the "impossible event") they looked longer than if they were shown two Mickeys (the "possible" event). Further studies by Karen Wynn and Koleen McCrink found that although infants' ability to compute exact outcomes only holds over small numbers, infants can compute approximate outcomes of larger addition and subtraction events (e.g., "5+5" and "10-5" events).

There is debate about how much these infant systems actually contain in terms of number concepts, harking back to the classic nature versus nurture debate. Gelman & Gallistel 1978 suggested that a child innately has the concept of natural number, and only has to map this onto the words used in her language. Carey 2004, Carey 2009 disagreed, saying that these systems can only encode large numbers in an approximate way, whereas language-based natural numbers can be exact. Without language, only the numbers 1 to 4 are believed to have an exact representation, through the parallel individuation system. One promising approach is to see whether cultures that lack number words can deal with natural numbers. The results so far are mixed (e.g., Pica et al. 2004; Butterworth & Reeve 2008; Butterworth, Reeve & Lloyd 2008).

Neuroimaging and neurophysiological studies

Human neuroimaging studies have demonstrated that regions of the parietal lobe, including the intraparietal sulcus (IPS) and the inferior parietal lobule (IPL), are activated when subjects are asked to perform calculation tasks. Based on both human neuroimaging and neuropsychology, Stanislas Dehaene and colleagues have suggested that these two parietal structures play complementary roles. The IPS is thought to house the circuitry that is fundamentally involved in numerical estimation (Piazza et al. 2004), number comparison (Pinel et al. 2001; Pinel et al. 2004) and on-line calculation, or quantity processing (often tested with subtraction), while the IPL is thought to be involved in rote memorization, such as multiplication (see Dehaene 1997). Thus, a patient with a lesion to the IPL may be able to subtract but not multiply, and vice versa for a patient with a lesion to the IPS. In addition to these parietal regions, regions of the frontal lobe are also active in calculation tasks. These activations overlap with regions involved in language processing, such as Broca's area, and regions involved in working memory and attention. Additionally, the inferotemporal cortex is implicated in processing numerical shapes and symbols, necessary for calculations with Arabic digits. More recent research has highlighted the networks involved in multiplication and subtraction tasks. Multiplication is often learned through rote memorization and verbal repetition, and neuroimaging studies have shown that it uses a left-lateralized network of the inferior frontal cortex and the superior-middle temporal gyri, in addition to the IPL and IPS. Subtraction is taught more through quantity manipulation and strategy use, and relies more on the right IPS and the posterior parietal lobule.

Single-unit neurophysiology in monkeys has also found neurons in the frontal cortex and in the intraparietal sulcus that respond to numbers. Andreas Nieder (Nieder 2005; Nieder, Freedman & Miller 2002; Nieder & Miller 2004) trained monkeys to perform a "delayed match-to-sample" task. For example, a monkey might be presented with a field of four dots, and is required to keep that in memory after the display is taken away. Then, after a delay period of several seconds, a second display is presented. If the number on the second display matches that from the first, the monkey has to release a lever. If it is different, the monkey has to hold the lever. Neural activity recorded during the delay period showed that neurons in the intraparietal sulcus and the frontal cortex had a "preferred numerosity", exactly as predicted by behavioral studies. That is, a certain neuron might fire strongly for four, but less strongly for three or five, and even less for two or six. Thus, we say that these neurons were "tuned" for specific quantities. Note that these neuronal responses followed Weber's law, as has been demonstrated for other sensory dimensions, and is consistent with the ratio dependence observed for non-human animals' and infants' numerical behavior (Nieder & Miller 2003).

It is important to note that while primates have brains remarkably similar to those of humans, there are differences in function, ability, and sophistication. They make good preliminary test subjects, but small differences resulting from different evolutionary tracks and environments must be kept in mind. However, in the realm of number, they share many similarities. As in monkeys, neurons selectively tuned to number were identified in the bilateral intraparietal sulci and prefrontal cortex in humans. Piazza and colleagues investigated this using fMRI, presenting participants with sets of dots where they either had to make same-different judgments or larger-smaller judgments. The sets of dots consisted of base numbers of 16 and 32 dots with ratios of 1.25, 1.5, and 2. Deviant numbers were included in some trials in larger or smaller amounts than the base numbers. Participants displayed activation patterns similar to those Nieder found in the monkeys. The intraparietal sulcus and the prefrontal cortex, also implicated in number, communicate in approximating number, and it was found in both species that the parietal neurons of the IPS had short firing latencies, whereas the frontal neurons had longer firing latencies. This supports the notion that number is first processed in the IPS and, if needed, is then transferred to the associated frontal neurons in the prefrontal cortex for further numerations and applications. Humans displayed Gaussian tuning curves for approximate magnitude, aligning with the monkeys and indicating a similarly structured mechanism in both species, with classic Gaussian curves relative to the increasingly deviant numbers from 16 and 32, as well as habituation. The results followed Weber's law, with accuracy decreasing as the ratio between numbers became smaller. This supports the findings made by Nieder in macaque monkeys and shows definitive evidence for a logarithmic scale of approximate number in humans.
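
A minimal computational sketch of the tuning-curve picture described above (an illustrative model consistent with the log-Gaussian, Weber's-law account, not the authors' analysis code; the parameter sigma is a made-up tuning width):

  import math

  def tuning(n, preferred, sigma=0.25):
      # Response of a numerosity-tuned neuron: a Gaussian on a log scale,
      # peaking at the neuron's preferred numerosity.
      return math.exp(-(math.log(n) - math.log(preferred)) ** 2 / (2 * sigma ** 2))

  # A neuron tuned to 4 responds most to 4, less to 3 or 5, less still to 2 or 6.
  print([round(tuning(n, preferred=4), 2) for n in (2, 3, 4, 5, 6)])

  # Equal ratios give equal responses: 8 vs. 16 is as discriminable as 16 vs. 32,
  # the signature of Weber's law (ratio dependence).
  print(round(tuning(16, preferred=8), 3), round(tuning(32, preferred=16), 3))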

With an established mechanism for approximating non-symbolic number in both humans and primates, further investigation is needed to determine whether this mechanism is innate and present in children, which would suggest an inborn ability to process numerical stimuli much as humans are born ready to process language. Cantlon and colleagues set out to investigate this in healthy, normally developing 4-year-old children, in parallel with adults. A task similar to Piazza's was used in this experiment, without the judgment tasks. Dot arrays of varying size and number were used, with 16 and 32 as the base numerosities. In each block, 232 stimuli were presented, including 20 deviant numerosities at a 2.0 ratio, both larger and smaller. For example, in a block with 16 as the base numerosity, most of the 232 trials presented 16 dots in varying sizes and spacings, but 10 trials had 8 dots and 10 trials had 32 dots, making up the 20 deviant stimuli; the same applied to the blocks with 32 as the base numerosity. To ensure that the adults and children were attending to the stimuli, three fixation points were placed throughout the trial, at which the participant had to move a joystick to continue. Their findings indicated that the adults had significant activation of the IPS when viewing the deviant number stimuli, in line with the findings described above. The 4-year-olds likewise showed significant IPS activation to the deviant number stimuli, resembling the activation found in adults. There were some differences in the activations: adults displayed more robust bilateral activation, whereas the 4-year-olds primarily showed activation in the right IPS and activated 112 fewer voxels than the adults. This suggests that by age 4, children have an established mechanism of neurons in the IPS tuned for processing non-symbolic numerosities. Other studies have probed this mechanism further in children and found that children also represent approximate numbers on a logarithmic scale, aligning with the claims Piazza made for adults.

A study by Izard and colleagues investigated abstract number representations in infants, using a different paradigm from the previous researchers because of the infants' developmental stage. For infants, they examined abstract number with both auditory and visual stimuli in a looking-time paradigm. The sets used were 4 vs. 12, 8 vs. 16, and 4 vs. 8. The auditory stimuli consisted of a set number of tones at different frequencies, with some deviant trials in which the tones were shorter but more numerous, or longer but less numerous, to control for duration as a potential confound. After the auditory stimuli were presented during 2 minutes of familiarization, the visual stimuli were presented as a congruent or incongruent array of colorful dots with facial features, which remained on the screen until the infant looked away. They found that infants looked longer at the stimuli that matched the auditory tones, suggesting that the system for approximating non-symbolic number, even across modalities, is present in infancy. What is important to note across these three human studies of nonsymbolic numerosity is that the capacity is present in infancy and develops over the lifetime. The honing of approximation and number-sense abilities, indicated by improving Weber fractions over time, together with increasing use of the left IPS to provide greater capacity for computation and enumeration, lends support to the claim of a nonsymbolic number-processing mechanism in the human brain.

Relations between number and other cognitive processes

There is evidence that numerical cognition is intimately related to other aspects of thought – particularly spatial cognition. One line of evidence comes from studies performed on number-form synaesthetes. Such individuals report that numbers are mentally represented with a particular spatial layout; others experience numbers as perceivable objects that can be visually manipulated to facilitate calculation. Behavioral studies further reinforce the connection between numerical and spatial cognition. For instance, participants respond more quickly to larger numbers when responding on the right side of space, and more quickly to smaller numbers when responding on the left – the so-called "Spatial-Numerical Association of Response Codes", or SNARC, effect. This effect varies across culture and context, however, and some research has even begun to question whether the SNARC reflects an inherent number-space association, instead invoking strategic problem solving or a more general cognitive mechanism like conceptual metaphor. Moreover, neuroimaging studies reveal that the association between number and space also shows up in brain activity. Regions of the parietal cortex, for instance, show shared activation for both spatial and numerical processing. These various lines of research suggest a strong but flexible connection between numerical and spatial cognition.
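
A common way of quantifying the SNARC effect in such behavioral studies is to compute, for each number, the difference between right-hand and left-hand mean reaction times (dRT) and regress it on magnitude; a negative slope indicates faster right-hand responses to larger numbers. The sketch below uses made-up dRT values purely for illustration:

from statistics import linear_regression  # Python 3.10+

# Hypothetical dRT values (ms): right-hand RT minus left-hand RT per digit.
digits = [1, 2, 3, 4, 6, 7, 8, 9]
drt = [35, 28, 20, 12, -10, -18, -25, -33]

slope, intercept = linear_regression(digits, drt)
print(f"SNARC slope: {slope:.1f} ms per unit of magnitude")  # negative slope -> SNARC-like pattern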

Modification of the usual decimal representation was advocated by John Colson. The sense of complementation, missing in the usual decimal system, is expressed by signed-digit representation.
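
A brief sketch may help make the signed-digit idea concrete (the exact digit range below, -5 to 4, is an illustrative choice and is not specified in the text). In such a system every integer gets digits that can be negative, so a number like 2999 is written directly as 3·10³ − 1, building the sense of complementation into the notation itself:

def to_signed_digits(n):
    """Digits of n, least-significant first, in a signed-digit decimal
    system; non-negative numbers use digits in -5..4, and negative
    numbers are handled by negating every digit of |n|."""
    if n < 0:
        return [-d for d in to_signed_digits(-n)]
    if n == 0:
        return [0]
    digits = []
    while n:
        r = n % 10
        if r >= 5:            # fold 5..9 down to -5..-1 and carry one ten
            r -= 10
        digits.append(r)
        n = (n - r) // 10
    return digits

# 2999 -> [-1, 0, 0, 3], i.e. 3*1000 - 1, and 7 -> [-3, 1], i.e. 10 - 3.
for value in (7, 2999, -486):
    digits = to_signed_digits(value)
    assert sum(d * 10 ** i for i, d in enumerate(digits)) == value
    print(value, digits)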

Heuristics in numerical cognition

Several consumer psychologists have also studied the heuristics that people use in numerical cognition. For example, Thomas and Morwitz (2009) reviewed several studies showing that the three heuristics that manifest in many everyday judgments and decisions – anchoring, representativeness, and availability – also influence numerical cognition. They identify the manifestations of these heuristics in numerical cognition as the left-digit anchoring effect, the precision effect, and the ease-of-computation effect, respectively. The left-digit effect refers to the observation that people tend to incorrectly judge the difference between $4.00 and $2.99 to be larger than that between $4.01 and $3.00 because of anchoring on the left-most digits. The precision effect reflects the influence of the representativeness of digit patterns on magnitude judgments: larger magnitudes are usually rounded and therefore have many zeros, whereas smaller magnitudes are usually expressed as precise numbers, so relying on the representativeness of digit patterns can make people incorrectly judge a price of $391,534 to be more attractive than a price of $390,000. The ease-of-computation effect shows that magnitude judgments are based not only on the output of a mental computation but also on its experienced ease or difficulty. It is usually easier to compare two dissimilar magnitudes than two similar ones; overuse of this heuristic can make people incorrectly judge the difference to be larger for pairs with easier computations, e.g. $5.00 minus $4.00, than for pairs with more difficult computations, e.g. $4.97 minus $3.96.
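
The arithmetic behind these examples is easy to verify; the snippet below simply prints the true differences, showing that the two left-digit pairs differ by exactly the same amount and that the "hard" pair actually has the slightly larger difference:

pairs = {
    "left-digit, 99-ending pair": (4.00, 2.99),
    "left-digit, round pair": (4.01, 3.00),
    "easy computation": (5.00, 4.00),
    "hard computation": (4.97, 3.96),
}
for label, (a, b) in pairs.items():
    print(f"{label}: {a:.2f} - {b:.2f} = {a - b:.2f}")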

Ethnolinguistic variance

The numeracy of indigenous peoples is studied to identify universal aspects of numerical cognition in humans. Notable examples include the Pirahã people, who have no words for specific numbers, and the Munduruku people, who have number words only up to five. Pirahã adults are unable to mark an exact number of tallies for a pile of nuts containing fewer than ten items. Anthropologist Napoleon Chagnon spent several decades studying the Yanomami in the field. He concluded that they have no need for counting in their everyday lives. Their hunters keep track of individual arrows with the same mental faculties that they use to recognize their family members. There are no known hunter-gatherer cultures that have a counting system in their language. The mental and lingual capabilities for numeracy are tied to the development of agriculture and, with it, the handling of large numbers of indistinguishable items.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...