
Saturday, January 29, 2022

Ordinal arithmetic

From Wikipedia, the free encyclopedia

In the mathematical field of set theory, ordinal arithmetic describes the three usual operations on ordinal numbers: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the result of the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. In addition to these usual ordinal operations, there are also the "natural" arithmetic of ordinals and the nimber operations.

Addition

The union of two disjoint well-ordered sets S and T can be well-ordered. The order-type of that union is the ordinal that results from adding the order-types of S and T. If two well-ordered sets are not already disjoint, then they can be replaced by order-isomorphic disjoint sets, e.g. replace S by {0} × S and T by {1} × T. This way, the well-ordered set S is written "to the left" of the well-ordered set T, meaning one defines an order on S ∪ T in which every element of S is smaller than every element of T. The sets S and T themselves keep the ordering they already have. This addition of the order-types is associative and generalizes the addition of natural numbers.

The definition of addition can also be given inductively (the following induction is on β):

  • α + 0 = α,
  • α + (β + 1) = (α + β) + 1 (here, "+ 1" denotes the successor of an ordinal),
  • and if β is a limit ordinal then α + β is the limit of α + δ for all δ < β.

The first transfinite ordinal is ω, the set of all natural numbers. For example, the ordinal ω + ω is obtained by two copies of the natural numbers ordered in the usual fashion and the second copy completely to the right of the first. Writing 0' < 1' < 2' < ... for the second copy, ω + ω looks like

0 < 1 < 2 < 3 < ... < 0' < 1' < 2' < ...

This is different from ω because in ω only 0 does not have a direct predecessor while in ω + ω the two elements 0 and 0' do not have direct predecessors. As another example, here are 3 + ω and ω + 3:

0 < 1 < 2 < 0' < 1' < 2' < ...
0 < 1 < 2 < ... < 0' < 1' < 2'

After relabeling, the former just looks like ω itself, i.e. 3 + ω = ω, while the latter does not: ω + 3 is not equal to ω since ω + 3 has a largest element (namely, 2') and ω does not (even if ω and ω + 3 are equipotent, they are not isomorphic). Hence, this addition is not commutative. In fact it is quite rare for α+β to be equal to β+α: this happens if and only if α=γm, β=γn for some ordinal γ and natural numbers m and n. From this it follows that "α commutes with β" is an equivalence relation on the class of nonzero ordinals, and all the equivalence classes are countably infinite. However, addition is still associative; one can see for example that (ω + 4) + ω = ω + (4 + ω) = ω + ω.
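
As an illustration (a minimal sketch, not part of the original article; the names are ours), ordinals below ω², i.e. those of the form ω·m + n, can be stored as pairs (m, n), and the absorption behaviour just described becomes a two-case function:

    def add(x, y):
        """Ordinal addition on omega*m + n, stored as (m, n)."""
        m1, n1 = x
        m2, n2 = y
        if m2 == 0:
            return (m1, n1 + n2)        # adding a finite ordinal extends the finite tail
        return (m1 + m2, n2)            # an infinite summand absorbs the finite tail

    omega = (1, 0)
    assert add((0, 3), omega) == omega                  # 3 + omega = omega
    assert add(omega, (0, 3)) == (1, 3)                 # omega + 3 is not omega
    assert add(add(omega, (0, 4)), omega) == (2, 0)     # (omega + 4) + omega = omega*2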

Addition is strictly increasing and continuous in the right argument:

α < β ⇒ γ + α < γ + β,

but the analogous relation does not hold for the left argument; instead we only have:

α < β ⇒ α + γ ≤ β + γ.

Ordinal addition is left-cancellative: if α + β = α + γ, then β = γ. Furthermore, one can define left subtraction for ordinals β ≤ α: there is a unique γ such that α = β + γ. On the other hand, right cancellation does not work:

3 + ω = 4 + ω = ω,

but

3 ≠ 4.

Nor does right subtraction, even when β ≤ α: for example, there does not exist any γ such that γ + 42 = ω.

If the ordinals less than α are closed under addition and contain 0 then α is occasionally called a γ-number (see additively indecomposable ordinal). These are exactly the ordinals of the form ω^β.

Multiplication

The Cartesian product, S×T, of two well-ordered sets S and T can be well-ordered by a variant of lexicographical order that puts the least significant position first. Effectively, each element of T is replaced by a disjoint copy of S. The order-type of the Cartesian product is the ordinal that results from multiplying the order-types of S and T. Again, this operation is associative and generalizes the multiplication of natural numbers.

Here is ω·2:

0₀ < 1₀ < 2₀ < 3₀ < ... < 0₁ < 1₁ < 2₁ < 3₁ < ...,

which has the same order type as ω + ω. In contrast, 2·ω looks like this:

0₀ < 1₀ < 0₁ < 1₁ < 0₂ < 1₂ < 0₃ < 1₃ < ...

and after relabeling, this looks just like ω. Thus, ω·2 = ω+ω ≠ ω = 2·ω, showing that multiplication of ordinals is not commutative. More generally, a natural number greater than 1 never commutes with any infinite ordinal, and two infinite ordinals α, β commute if and only if α^m = β^n for some positive natural numbers m and n. The relation "α commutes with β" is an equivalence relation on the ordinals greater than 1, and all equivalence classes are countably infinite.

Distributivity partially holds for ordinal arithmetic: α(β+γ) = αβ+αγ. However, the other distributive law (β+γ)α = βα+γα is not generally true: (1+1)·ω = 2·ω = ω while 1·ω+1·ω = ω+ω, which is different. Therefore, the ordinal numbers form a left near-semiring, but do not form a ring.

The definition of multiplication can also be given inductively (the following induction is on β):

  • α·0 = 0,
  • α·(β+1) = (α·β)+α,
  • and if β is a limit ordinal then α·β is the limit of the α·δ for δ < β.

The main properties of the product are:

  • α·0 = 0·α = 0.
  • One (1) is a multiplicative identity: α·1 = 1·α = α.
  • Multiplication is associative: (α·β)·γ = α·(β·γ).
  • Multiplication is strictly increasing and continuous in the right argument: (α < β and γ > 0) ⇒ γ·α < γ·β.
  • Multiplication is not strictly increasing in the left argument, for example, 1 < 2 but 1·ω = 2·ω = ω. However, it is (non-strictly) increasing, i.e. α ≤ β ⇒ α·γ ≤ β·γ.
  • There is a left cancellation law: If α > 0 and α·β = α·γ, then β = γ.
  • Right cancellation does not work, e.g. 1·ω = 2·ω = ω, but 1 and 2 are different.
  • α·β = 0 ⇒ α = 0 or β = 0.
  • Distributive law on the left: α·(β+γ) = α·β+α·γ
  • No distributive law on the right: e.g. (ω+1)·2 = ω+1+ω+1 = ω+ω+1 = ω·2+1, which is not ω·2+2.
  • Left division with remainder: for all α and β, if β > 0, then there are unique γ and δ such that α = β·γ+δ and δ < β. (This does not however mean the ordinals are a Euclidean domain, since they are not even a ring, and the Euclidean "norm" is ordinal-valued.)
  • Right division does not work: there is no α such that α·ω ≤ ω^ω ≤ (α+1)·ω.

A δ-number (see Multiplicatively indecomposable) is an ordinal greater than 1 such that α·β = β whenever 0 < α < β. These consist of the ordinal 2 and the ordinals of the form ω^(ω^β).

Exponentiation

The definition of ordinal exponentiation for finite exponents is straightforward. If the exponent is a finite number, the power is the result of iterated multiplication. For instance, ω² = ω·ω using the operation of ordinal multiplication. Note that ω·ω can be defined using the set of functions from 2 = {0,1} to ω = {0,1,2,...}, ordered lexicographically with the least significant position first:

(0,0) < (1,0) < (2,0) < (3,0) < ... < (0,1) < (1,1) < (2,1) < (3,1) < ... < (0,2) < (1,2) < (2,2) < ...

Here for brevity, we have replaced the function {(0,k), (1,m)} by the ordered pair (k, m).

Similarly, for any finite exponent n, ωⁿ can be defined using the set of functions from n (the domain) to the natural numbers (the codomain). These functions can be abbreviated as n-tuples of natural numbers.

But for infinite exponents, the definition may not be obvious. A limit ordinal, such as ω^ω, is the supremum of all smaller ordinals. It might seem natural to define ω^ω using the set of all infinite sequences of natural numbers. However, we find that any absolutely defined ordering on this set is not well-ordered. To deal with this issue we can use the variant lexicographical ordering again. We restrict the set to sequences that are nonzero for only a finite number of arguments. This is naturally motivated as the limit of the finite powers of the base (similar to the concept of coproduct in algebra). This can also be thought of as the infinite union of the sets ωⁿ for n < ω.

Each of those sequences (c₀, c₁, ..., cₙ₋₁, 0, 0, ...) corresponds to an ordinal less than ω^ω, namely ω^(n−1)·cₙ₋₁ + ... + ω·c₁ + c₀, and ω^ω is the supremum of all those smaller ordinals.

The lexicographical order on this set is a well ordering that resembles the ordering of natural numbers written in decimal notation, except with digit positions reversed, and with arbitrary natural numbers instead of just the digits 0–9:

(0,0,0,...) < (1,0,0,0,...) < (2,0,0,0,...) < ... <
(0,1,0,0,0,...) < (1,1,0,0,0,...) < (2,1,0,0,0,...) < ... <
(0,2,0,0,0,...) < (1,2,0,0,0,...) < (2,2,0,0,0,...)
< ... <
(0,0,1,0,0,0,...) < (1,0,1,0,0,0,...) < (2,0,1,0,0,0,...)
< ...
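
This well-ordering can be made concrete with a short sketch (illustrative, not part of the article): represent a finite-support sequence as a tuple of natural numbers with the least significant position first, and let the most significant difference decide the comparison.

    def compare(a, b):
        """Return -1, 0 or 1, comparing two finite-support sequences of naturals
        (least significant position first) under the ordering displayed above."""
        n = max(len(a), len(b))
        a = tuple(a) + (0,) * (n - len(a))
        b = tuple(b) + (0,) * (n - len(b))
        for i in reversed(range(n)):        # scan from the most significant position
            if a[i] != b[i]:
                return -1 if a[i] < b[i] else 1
        return 0

    assert compare((2, 1), (0, 2)) == -1    # omega + 2  <  omega*2
    assert compare((0, 0, 1), (5, 9)) == 1  # omega^2  >  omega*9 + 5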

In general, any ordinal α can be raised to the power of another ordinal β in the same way to get α^β.

It is easiest to explain this using Von Neumann's definition of an ordinal as the set of all smaller ordinals. Then, to construct a set of order type α^β consider all functions from β to α such that only a finite number of elements of the domain β map to a nonzero element of α (essentially, we consider the functions with finite support). The order is lexicographic with the least significant position first. We find

  • 1^ω = 1,
  • 2^ω = ω,
  • 2^(ω+1) = ω·2 = ω + ω.

The definition of exponentiation can also be given inductively (the following induction is on β, the exponent):

  • α^0 = 1,
  • α^(β+1) = (α^β)·α, and
  • if β is a limit ordinal, then α^β is the limit of the α^δ for all δ < β.
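
For instance, the values listed above can be read off from this recursion: 2^ω is the limit of 2^δ for δ < ω, that is, the limit of 1, 2, 4, 8, ..., which is ω; then 2^(ω+1) = 2^ω·2 = ω·2 = ω + ω; and 1^ω is the limit of the constant sequence 1^n = 1, which is 1.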

Properties of ordinal exponentiation:

  • α^0 = 1.
  • If 0 < α, then 0^α = 0.
  • 1^α = 1.
  • α^1 = α.
  • α^β·α^γ = α^(β + γ).
  • (α^β)^γ = α^(β·γ).
  • There are α, β, and γ for which (α·β)^γ ≠ α^γ·β^γ. For instance, (ω·2)² = ω·2·ω·2 = ω²·2 ≠ ω²·4.
  • Ordinal exponentiation is strictly increasing and continuous in the right argument: If γ > 1 and α < β, then γ^α < γ^β.
  • If α < β, then α^γ ≤ β^γ. Note, for instance, that 2 < 3 and yet 2^ω = 3^ω = ω.
  • If α > 1 and α^β = α^γ, then β = γ. If α = 1 or α = 0 this is not the case.
  • For all α and β, if β > 1 and α > 0, then there exist unique γ, δ, and ρ such that α = β^γ·δ + ρ with 0 < δ < β and ρ < β^γ.

While the same notation is used for ordinal exponentiation and cardinal exponentiation, ordinal exponentiation is quite different from cardinal exponentiation. For example, with ordinal exponentiation 2^ω = ω, but for ℵ₀ (aleph naught, the cardinality of ω), 2^ℵ₀ > ℵ₀. Here, 2^ℵ₀ is the cardinality of the set of all functions from the set of all natural numbers to a set with two elements. (This is the cardinality of the power set of the set of all natural numbers and is equal to 𝔠, the cardinality of the continuum.) To avoid confusing ordinal exponentiation with cardinal exponentiation, one can use symbols for ordinals (e.g. ω) in the former and symbols for cardinals (e.g. ℵ₀) in the latter.

Jacobsthal showed that the only solutions of α^β = β^α with α ≤ β are given by α = β, or α = 2 and β = 4, or α is any limit ordinal and β = εα where ε is an ε-number larger than α.

Cantor normal form

Every ordinal number α can be uniquely written as ω^(β₁)·c₁ + ω^(β₂)·c₂ + ... + ω^(βₖ)·cₖ, where k is a natural number, c₁, c₂, ..., cₖ are positive integers, and β₁ > β₂ > ... > βₖ ≥ 0 are ordinal numbers. The degenerate case α = 0 occurs when k = 0 and there are no βs nor cs. This decomposition of α is called the Cantor normal form of α, and can be considered the base-ω positional numeral system. The highest exponent β₁ is called the degree of α, and satisfies β₁ ≤ α. The equality β₁ = α applies if and only if α = ω^α. In that case Cantor normal form does not express the ordinal in terms of smaller ones; this can happen as explained below.

A minor variation of Cantor normal form, which is usually slightly easier to work with, is to set all the numbers cᵢ equal to 1 and allow the exponents to be equal. In other words, every ordinal number α can be uniquely written as ω^(β₁) + ω^(β₂) + ... + ω^(βₖ), where k is a natural number, and β₁ ≥ β₂ ≥ ... ≥ βₖ are ordinal numbers.

Another variation of the Cantor normal form is the "base δ expansion", where ω is replaced by any ordinal δ > 1, and the numbers cᵢ are positive ordinals less than δ.

The Cantor normal form allows us to uniquely express—and order—the ordinals α that are built from the natural numbers by a finite number of arithmetical operations of addition, multiplication and exponentiation base ω: in other words, assuming β₁ < α in the Cantor normal form, we can also express the exponents βᵢ in Cantor normal form, and making the same assumption for the βᵢ as for α and so on recursively, we get a system of notation for these ordinals (for example,

ω^(ω^(ω⁷·6 + ω + 42)·1729 + ω⁹ + 88)·3 + ω^(ω^ω)·5 + 65537

denotes an ordinal).

The ordinal ε₀ (epsilon nought) is the set of ordinal values α of the finite-length arithmetical expressions of Cantor normal form that are hereditarily non-trivial, where non-trivial means β₁ < α when 0 < α. It is the smallest ordinal that does not have a finite arithmetical expression in terms of ω, and the smallest ordinal such that ε₀ = ω^(ε₀), i.e. in Cantor normal form the exponent is not smaller than the ordinal itself. It is the limit of the sequence

0, 1 = ω^0, ω = ω^1, ω^ω, ω^(ω^ω), ...

The ordinal ε₀ is important for various reasons in arithmetic (essentially because it measures the proof-theoretic strength of first-order Peano arithmetic: that is, Peano's axioms can show transfinite induction up to any ordinal less than ε₀ but not up to ε₀ itself).

The Cantor normal form also allows us to compute sums and products of ordinals: to compute the sum, for example, one need merely know (see the properties listed in § Addition and § Multiplication) that

ω^β·c + ω^(β′)·c′ = ω^(β′)·c′

if β′ > β (if β′ = β one can apply the distributive law on the left and rewrite this as ω^β·(c + c′), and if β′ < β the expression is already in Cantor normal form); and to compute products, the essential facts are that when α = ω^(β₁)·c₁ + ... + ω^(βₖ)·cₖ is in Cantor normal form and β′ > 0, then

α·ω^(β′) = ω^(β₁ + β′)

and

α·n = ω^(β₁)·c₁·n + ω^(β₂)·c₂ + ... + ω^(βₖ)·cₖ

if n is a non-zero natural number.

To compare two ordinals written in Cantor normal form, first compare β₁, then c₁, then β₂, then c₂, and so on. At the first difference, the ordinal that has the larger component is the larger ordinal. If they are the same until one terminates before the other, then the one that terminates first is smaller.
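
To make the comparison, sum and product rules above concrete, here is a minimal Python sketch (not part of the original article; all names are illustrative). It represents an ordinal below ε₀ as a tuple of (exponent, coefficient) pairs in decreasing order of exponent, with each exponent in the same representation and the empty tuple standing for 0.

    def cmp_ord(a, b):
        """Compare Cantor normal forms: beta_1, then c_1, then beta_2, and so on."""
        for (e1, c1), (e2, c2) in zip(a, b):
            c = cmp_ord(e1, e2)
            if c != 0:
                return c
            if c1 != c2:
                return -1 if c1 < c2 else 1
        if len(a) != len(b):               # the one that terminates first is smaller
            return -1 if len(a) < len(b) else 1
        return 0

    def add_ord(a, b):
        """omega^beta*c + omega^beta'*c' = omega^beta'*c' whenever beta < beta'."""
        if not b:
            return a
        e, c = b[0]
        i = len(a)
        while i > 0 and cmp_ord(a[i - 1][0], e) < 0:   # terms of a that get absorbed
            i -= 1
        if i > 0 and cmp_ord(a[i - 1][0], e) == 0:     # like terms: add coefficients
            return a[:i - 1] + ((e, a[i - 1][1] + c),) + b[1:]
        return a[:i] + b

    def mul_ord(a, b):
        """Uses alpha*omega^b = omega^(beta_1 + b) for b > 0, and
        alpha*n = omega^(beta_1)*(c_1*n) + rest, then left distributivity."""
        if not a or not b:
            return ()
        e1, c1 = a[0]
        result = ()
        for e, c in b:
            if e == ():                                # finite term of b
                result = add_ord(result, ((e1, c1 * c),) + a[1:])
            else:                                      # term omega^e * c of b
                result = add_ord(result, ((add_ord(e1, e), c),))
        return result

    def nat(n):                                        # the natural number n
        return (((), n),) if n else ()

    omega = ((nat(1), 1),)

    assert add_ord(nat(3), omega) == omega                     # 3 + omega = omega
    assert add_ord(omega, nat(3)) == ((nat(1), 1), ((), 3))    # omega + 3
    assert mul_ord(omega, nat(2)) == ((nat(1), 2),)            # omega*2 = omega + omega
    assert mul_ord(nat(2), omega) == omega                     # 2*omega = omega

The last two assertions reproduce the non-commutativity of multiplication noted in § Multiplication; replacing nat(2) by add_ord(nat(1), nat(1)) likewise reproduces the failure of right distributivity, since (1+1)·ω = ω while 1·ω + 1·ω = ω·2.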

Factorization into primes

Ernst Jacobsthal showed that the ordinals satisfy a form of the unique factorization theorem: every nonzero ordinal can be written as a product of a finite number of prime ordinals. This factorization into prime ordinals is in general not unique, but there is a "minimal" factorization into primes that is unique up to changing the order of finite prime factors (Sierpiński 1958).

A prime ordinal is an ordinal greater than 1 that cannot be written as a product of two smaller ordinals. Some of the first primes are 2, 3, 5, ..., ω, ω+1, ω²+1, ω³+1, ..., ω^ω, ω^ω+1, ω^(ω+1)+1, ... There are three sorts of prime ordinals:

  • The finite primes 2, 3, 5, ...
  • The ordinals of the form ω^(ω^α) for any ordinal α. These are the prime ordinals that are limits, and are the delta numbers.
  • The ordinals of the form ω^α + 1 for any ordinal α > 0. These are the infinite successor primes, and are the successors of gamma numbers, the additively indecomposable ordinals.

Factorization into primes is not unique: for example, 2×3 = 3×2, 2×ω = ω, (ω+1)×ω = ω×ω and ω×ω^ω = ω^ω. However, there is a unique factorization into primes satisfying the following additional conditions:

  • Every limit prime occurs before every successor prime
  • If two consecutive primes of the prime factorization are both limits or both finite, then the second one is at most the first one.

This prime factorization can easily be read off using the Cantor normal form as follows:

  • First write the ordinal as a product αβ where α is the smallest power of ω in the Cantor normal form and β is a successor.
  • If α = ω^γ, then writing γ in Cantor normal form gives an expansion of α as a product of limit primes.
  • Now look at the Cantor normal form of β. If β = ω^λ·m + ω^μ·n + smaller terms, then β = (ω^μ·n + smaller terms)·(ω^(λ−μ) + 1)·m is a product of a smaller ordinal, a prime, and an integer m. Repeating this and factorizing the integers into primes gives the prime factorization of β.

So the factorization of the Cantor normal form ordinal

ω^(α₁)·n₁ + ... + ω^(αₖ)·nₖ (with α₁ > ... > αₖ)

into a minimal product of infinite primes and integers is

ω^(ω^(β₁)) ⋯ ω^(ω^(βₘ)) · nₖ · (ω^(αₖ₋₁ − αₖ) + 1) · nₖ₋₁ ⋯ (ω^(α₁ − α₂) + 1) · n₁,

where each nᵢ should be replaced by its factorization into a non-increasing sequence of finite primes and

αₖ = ω^(β₁) + ... + ω^(βₘ) with β₁ ≥ ... ≥ βₘ.
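
As a concrete illustration (not from the article), take the ordinal ω²·6 + ω·3 + 5, so that k = 3, (α₁, α₂, α₃) = (2, 1, 0) and (n₁, n₂, n₃) = (6, 3, 5); since αₖ = 0 there are no limit prime factors, and the recipe gives

ω²·6 + ω·3 + 5 = 5·(ω + 1)·3·(ω + 1)·3·2,

with the final 6 replaced by its non-increasing factorization 3·2. Multiplying back out confirms it: 5·(ω + 1) = ω + 5, then (ω + 5)·3 = ω·3 + 5, then (ω·3 + 5)·(ω + 1) = ω² + ω·3 + 5, and finally (ω² + ω·3 + 5)·3·2 = ω²·6 + ω·3 + 5.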

Large countable ordinals

As discussed above, the Cantor normal form of ordinals below ε₀ can be expressed in an alphabet containing only the function symbols for addition, multiplication and exponentiation, as well as constant symbols for each natural number and for ω. We can do away with the infinitely many numerals by using just the constant symbol 0 and the operation of successor, S (for example, the integer 4 may be expressed as S(S(S(S(0))))). This describes an ordinal notation: a system for naming ordinals over a finite alphabet. This particular system of ordinal notation is called the collection of arithmetical ordinal expressions, and can express all ordinals below ε₀, but cannot express ε₀. There are other ordinal notations capable of capturing ordinals well past ε₀, but because there are only countably many strings over any finite alphabet, for any given ordinal notation there will be ordinals below ω₁ (the first uncountable ordinal) that are not expressible. Such ordinals are known as large countable ordinals.

The operations of addition, multiplication and exponentiation are all examples of primitive recursive ordinal functions, and more general primitive recursive ordinal functions can be used to describe larger ordinals.

Natural operations

The natural sum and natural product operations on ordinals were defined in 1906 by Gerhard Hessenberg, and are sometimes called the Hessenberg sum (or product) (Sierpinski 1958). These are the same as the addition and multiplication (restricted to ordinals) of John Conway's field of surreal numbers. They have the advantage that they are associative and commutative, and natural product distributes over natural sum. The cost of making these operations commutative is that they lose the continuity in the right argument, which is a property of the ordinary sum and product. The natural sum of α and β is often denoted by α ⊕ β or α # β, and the natural product by α ⊗ β or α ⨳ β.

The natural operations come up in the theory of well partial orders; given two well partial orders S and T, of types (maximum linearizations) o(S) and o(T), the type of the disjoint union is o(S)⊕o(T), while the type of the direct product is o(S)⊗o(T). One may take this relation as a definition of the natural operations by choosing S and T to be ordinals α and β; so α ⊕ β is the maximum order type of a total order extending the disjoint union (as a partial order) of α and β, while α ⊗ β is the maximum order type of a total order extending the direct product (as a partial order) of α and β. A useful application of this is when α and β are both subsets of some larger total order; then their union has order type at most α ⊕ β. If they are both subsets of some ordered abelian group, then their sum has order type at most α ⊗ β.

We can also define the natural sum of α and β inductively (by simultaneous induction on α and β) as the smallest ordinal greater than the natural sum of α and γ for all γ < β and of γ and β for all γ < α. There is also an inductive definition of the natural product (by mutual induction), but it is somewhat tedious to write down and we shall not do so (see the article on surreal numbers for the definition in that context, which, however, uses surreal subtraction, something that obviously cannot be defined on ordinals).

The natural sum is associative and commutative. It is always greater than or equal to the usual sum, but it may be strictly greater. For example, the natural sum of ω and 1 is ω + 1 (the usual sum), but this is also the natural sum of 1 and ω, whereas the usual sum 1 + ω = ω is strictly smaller. The natural product is associative and commutative and distributes over the natural sum. The natural product is always greater than or equal to the usual product, but it may be strictly greater. For example, the natural product of ω and 2 is ω·2 (the usual product), but this is also the natural product of 2 and ω, whereas the usual product 2·ω = ω is strictly smaller.

Yet another way to define the natural sum and product of two ordinals α and β is to use the Cantor normal form: one can find a sequence of ordinals γ₁ > ... > γₙ and two sequences (k₁, ..., kₙ) and (j₁, ..., jₙ) of natural numbers (including zero, but satisfying kᵢ + jᵢ > 0 for all i) such that

α = ω^(γ₁)·k₁ + ... + ω^(γₙ)·kₙ and β = ω^(γ₁)·j₁ + ... + ω^(γₙ)·jₙ,

and define

α ⊕ β = ω^(γ₁)·(k₁ + j₁) + ... + ω^(γₙ)·(kₙ + jₙ).

Under natural addition, the ordinals can be identified with the elements of the free commutative monoid generated by the gamma numbers ω^α. Under natural addition and multiplication, the ordinals can be identified with the elements of the free commutative semiring generated by the delta numbers ω^(ω^α). The ordinals do not have unique factorization into primes under the natural product. While the full polynomial ring does have unique factorization, the subset of polynomials with non-negative coefficients does not: for example, if x is any delta number, then

x⁵ + x⁴ + x³ + x² + x + 1 = (x + 1)(x⁴ + x² + 1) = (x² + x + 1)(x³ + 1)

has two incompatible expressions as a natural product of polynomials with non-negative coefficients that cannot be decomposed further.
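
As a small check of the Cantor normal form description of the natural sum given above, here is a minimal sketch (not from the article; restricted to ordinals below ω^ω so that exponents are plain natural numbers, and with illustrative names):

    def natural_sum(a, b):
        """Natural (Hessenberg) sum of ordinals below omega^omega, each given as an
        {exponent: coefficient} dict for its Cantor normal form: like terms are
        collected and nothing is absorbed, unlike ordinary ordinal addition."""
        out = dict(a)
        for e, c in b.items():
            out[e] = out.get(e, 0) + c
        return out

    one, omega = {0: 1}, {1: 1}
    assert natural_sum(one, omega) == {1: 1, 0: 1}   # natural sum is omega + 1, not omega
    assert natural_sum(omega, one) == {1: 1, 0: 1}   # and it is commutative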

Nimber arithmetic

There are arithmetic operations on ordinals by virtue of the one-to-one correspondence between ordinals and nimbers. Three common operations on nimbers are nimber addition, nimber multiplication, and minimum excludance (mex). Nimber addition is a generalization of the bitwise exclusive or operation on natural numbers. The mex of a set of ordinals is the smallest ordinal not present in the set.

Temporal finitism

From Wikipedia, the free encyclopedia

Temporal finitism is the doctrine that time is finite in the past. The philosophy of Aristotle, expressed in such works as his Physics, held that although space was finite, with only void existing beyond the outermost sphere of the heavens, time was infinite. This caused problems for mediaeval Islamic, Jewish, and Christian philosophers who, primarily creationist, were unable to reconcile the Aristotelian conception of the eternal with the Genesis creation narrative.

Medieval background

In contrast to ancient Greek philosophers who believed that the universe had an infinite past with no beginning, medieval philosophers and theologians developed the concept of the universe having a finite past with a beginning. This view was inspired by the creation myth shared by the three Abrahamic religions: Judaism, Christianity and Islam.

Prior to Maimonides, it was held that it was possible to prove, philosophically, creation theory. The Kalam cosmological argument held that creation was provable, for example. Maimonides himself held that neither creation nor Aristotle's infinite time were provable, or at least that no proof was available. (According to scholars of his work, he didn't make a formal distinction between unprovability and the simple absence of proof.) Thomas Aquinas was influenced by this belief, and held in his Summa Theologica that neither hypothesis was demonstrable. Some of Maimonides' Jewish successors, including Gersonides and Crescas, conversely held that the question was decidable, philosophically.

John Philoponus was probably the first to use the argument that infinite time is impossible in order to establish temporal finitism. He was followed by many others including St. Bonaventure.

Philoponus' arguments for temporal finitism were severalfold. His Contra Aristotelem has been lost, and is chiefly known through the citations used by Simplicius of Cilicia in his commentaries on Aristotle's Physics and De Caelo. Philoponus' refutation of Aristotle extended to six books, the first five addressing De Caelo and the sixth addressing Physics, and, from comments on Philoponus made by Simplicius, it can be deduced to have been quite lengthy.

A full exposition of Philoponus' several arguments, as reported by Simplicius, can be found in Sorabji.

One such argument was based upon Aristotle's own theorem that there were not multiple infinities, and ran as follows: If time were infinite, then as the universe continued in existence for another hour, the infinity of its age since creation at the end of that hour must be one hour greater than the infinity of its age since creation at the start of that hour. But since Aristotle holds that such treatments of infinity are impossible and ridiculous, the world cannot have existed for infinite time.

The most sophisticated medieval arguments against an infinite past were later developed by the early Muslim philosopher, Al-Kindi (Alkindus); the Jewish philosopher, Saadia Gaon (Saadia ben Joseph); and the Muslim theologian, Al-Ghazali (Algazel). They developed two logical arguments against an infinite past, the first being the "argument from the impossibility of the existence of an actual infinite", which states:

"An actual infinite cannot exist."
"An infinite temporal regress of events is an actual infinite."
"Thus an infinite temporal regress of events cannot exist."

This argument depends on the (unproved) assertion that an actual infinite cannot exist; and that an infinite past implies an infinite succession of "events", a word not clearly defined. The second argument, the "argument from the impossibility of completing an actual infinite by successive addition", states:

"An actual infinite cannot be completed by successive addition."
"The temporal series of past events has been completed by successive addition."
"Thus the temporal series of past events cannot be an actual infinite."

The first statement states, correctly, that a finite number cannot be made into an infinite one by the finite addition of more finite numbers. The second skirts around this; the analogous idea in mathematics, that the (infinite) sequence of negative integers "..., −3, −2, −1" may be extended by appending zero, then one, and so forth, is perfectly valid.

Both arguments were adopted by later Christian philosophers and theologians, and the second argument in particular became more famous after it was adopted by Immanuel Kant in his thesis of the first antinomy concerning time.

Modern revival

Immanuel Kant's argument for temporal finitism, at least in one direction, from his First Antinomy, runs as follows:

If we assume that the world has no beginning in time, then up to every given moment an eternity has elapsed, and there has passed away in that world an infinite series of successive states of things. Now the infinity of a series consists in the fact that it can never be completed through successive synthesis. It thus follows that it is impossible for an infinite world-series to have passed away, and that a beginning of the world is therefore a necessary condition of the world's existence.

— Immanuel Kant, First Antinomy, of Space and Time

Modern mathematics generally incorporates infinity. For most purposes it is simply used as convenient; when considered more carefully it is incorporated, or not, according to whether the axiom of infinity is included. This is the mathematical concept of infinity; while this may provide useful analogies or ways of thinking about the physical world, it says nothing directly about the physical world. Georg Cantor recognized two different kinds of infinity. The first, used in calculus, he called the variable finite, or potential infinite, represented by the sign ∞ (known as the lemniscate); the second he called the actual infinite, or the "true infinite." His notion of transfinite arithmetic became the standard system for working with infinity within set theory. David Hilbert thought that the role of the actual infinite was relegated only to the abstract realm of mathematics. "The infinite is nowhere to be found in reality. It neither exists in nature nor provides a legitimate basis for rational thought... The role that remains for the infinite to play is solely that of an idea." Philosopher William Lane Craig argues that if the past were infinitely long, it would entail the existence of actual infinites in reality.

Craig and Sinclair also argue that an actual infinite cannot be formed by successive addition. Quite independent of the absurdities arising from an actual infinite number of past events, the formation of an actual infinite has its own problems. For any finite number n, n+1 equals a finite number. An actual infinity has no immediate predecessor.

The Tristram Shandy paradox is an attempt to illustrate the absurdity of an infinite past. Imagine Tristram Shandy, an immortal man who writes his biography so slowly that for every day that he lives, it takes him a year to record that day. Suppose that Shandy had always existed. Since there is a one-to-one correspondence between the number of past days and the number of past years on an infinite past, one could reason that Shandy could write his entire autobiography. From another perspective, Shandy would only get farther and farther behind, and given a past eternity, would be infinitely far behind.

Craig asks us to suppose that we met a man who claims to have been counting down from infinity and is now just finishing. We could ask why he did not finish counting yesterday or the day before, since eternity would have been over by then. In fact, for any day in the past, if the man would have finished his countdown by day n, he would also have finished it by day n−1. It follows that the man could not have finished his countdown at any point in the finite past, since he would have already been done.

Input from physicists

In 1984 physicist Paul Davies deduced a finite-time origin of the universe in a quite different way, from physical grounds: "the universe will eventually die, wallowing, as it were, in its own entropy. This is known among physicists as the 'heat death' of the universe... The universe cannot have existed for ever, otherwise it would have reached its equilibrium end state an infinite time ago. Conclusion: the universe did not always exist."

More recently though physicists have proposed various ideas for how the universe could have existed for an infinite time, such as eternal inflation. But in 2012, Alexander Vilenkin and Audrey Mithani of Tufts University wrote a paper claiming that in any such scenario past time could not have been infinite. It could however have been "before any nameable time", according to Leonard Susskind.

Critical reception

Kant's argument for finitism has been widely discussed; for instance, Jonathan Bennett points out that Kant's argument is not a sound logical proof: his assertion that "Now the infinity of a series consists in the fact that it can never be completed through successive synthesis. It thus follows that it is impossible for an infinite world-series to have passed away" assumes that the universe was created at a beginning and then progressed from there, which seems to assume the conclusion. A universe that simply existed and had not been created, or a universe that was created as an infinite progression, for instance, would still be possible. Bennett quotes Strawson:

"A temporal process both completed and infinite in duration appears to be impossible only on the assumption that it has a beginning. If ... it is urged that we cannot conceive of a process of surveying which does not have a beginning, then we must inquire with what relevance and by what right the notion of surveying is introduced into the discussion at all."

Some of the criticism of William Lane Craig's argument for temporal finitism has been discussed and expanded on by Stephen Puryear.

In this, he writes Craig's argument as:

  1. If the universe did not have a beginning, then the past would consist in an infinite temporal sequence of events.
  2. An infinite temporal sequence of past events would be actually and not merely potentially infinite.
  3. It is impossible for a sequence formed by successive addition to be actually infinite.
  4. The temporal sequence of past events was formed by successive addition.
  5. Therefore, the universe had a beginning.

Puryear points out that Aristotle and Aquinas had an opposing view to point 2, but that the most contentious is point 3. Puryear says that many philosophers have disagreed with point 3, and adds his own objection:

"Consider the fact that things move from one point in space to another. In so doing, the moving object passes through an actual infinity of intervening points. Hence, motion involves traversing an actual infinite ... Accordingly, the finitist of this stripe must be mistaken. Similarly, whenever some period of time elapses, an actual infinite has been traversed, namely, the actual infinity of instants that make up that period of time."

Puryear then points out that Craig has defended his position by saying that time might or must be naturally divided and so there is not an actual infinity of instants between two times. Puryear then goes on to argue that if Craig is willing to turn an infinity of points into a finite number of divisions, then points 1, 2 and 4 are not true.

An article by Louis J. Swingrover makes a number of points relating to the idea that Craig's "absurdities" are not contradictions in themselves: they are all either mathematically consistent (like Hilbert's hotel or the man counting down to today), or do not lead to inescapable conclusions. He argues that if one makes the assumption that any mathematically coherent model is metaphysically possible, then it can be shown that an infinite temporal chain is metaphysically possible, since one can show that there exist mathematically coherent models of an infinite progression of times. He also says that Craig might be making a cardinality error similar to assuming that because an infinitely extended temporal series would contain an infinite number of times, then it would have to contain the number "infinity".

Quentin Smith attacks "their supposition that an infinite series of past events must contain some events separated from the present event by an infinite number of intermediate events, and consequently that from one of these infinitely distant past events the present could never have been reached".

Smith asserts that Craig and Wiltrow are making a cardinality error by confusing an unending sequence with a sequence whose members must be separated by an infinity: none of the integers is separated from any other integer by an infinite number of integers, so why assert that an infinite series of times must contain a time infinitely far back in the past?

Smith then says that Craig uses false presuppositions when he makes statements about infinite collections (in particular the ones relating to Hilbert's Hotel and infinite sets being equivalent to proper subsets of them), often based on Craig finding things "unbelievable", when they are actually mathematically correct. He also points out that the Tristram Shandy paradox is mathematically coherent, but some of Craig's conclusions about when the biography would be finished are incorrect.

Ellery Eells expands on this last point by showing that the Tristram Shandy paradox is internally consistent and fully compatible with an infinite universe.

Graham Oppy, embroiled in debate with Oderberg, points out that the Tristram Shandy story has been used in many versions. For it to be useful to the temporal finitism side, a version must be found that is logically consistent and not compatible with an infinite universe. To see this, note that the argument runs as follows:

  1. If an infinite past is possible, then the Tristram Shandy story must be possible
  2. The Tristram Shandy story leads to contradiction.
  3. Therefore, an infinite past is not possible.

The problem for the finitist is that point 1 is not necessarily true. If a version of the Tristram Shandy story is internally inconsistent, for instance, then the infinitist could just assert that an infinite past is possible, but that particular Tristram Shandy is not because it's not internally consistent. Oppy then lists the different versions of the Tristram Shandy story that have been put forward and shows that they are all either internally inconsistent or they don't lead to contradiction.

Constructivism (philosophy of mathematics)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics)

In the philosophy of mathematics, constructivism asserts that it is necessary to find (or "construct") a specific example of a mathematical object in order to prove that an example exists. Contrastingly, in classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. Such a proof by contradiction might be called non-constructive, and a constructivist might reject it. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.

There are many forms of constructivism. These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as CZF and the study of topos theory.

Constructivism is often identified with intuitionism, although intuitionism is only one constructivist program. Intuitionism maintains that the foundations of mathematics lie in the individual mathematician's intuition, thereby making mathematics into an intrinsically subjective activity. Other forms of constructivism are not based on this viewpoint of intuition, and are compatible with an objective viewpoint on mathematics.

Constructive mathematics

Much constructive mathematics uses intuitionistic logic, which is essentially classical logic without the law of the excluded middle. This law states that, for any proposition, either that proposition is true or its negation is. This is not to say that the law of the excluded middle is denied entirely; special cases of the law will be provable. It is just that the general law is not assumed as an axiom. The law of non-contradiction (which states that contradictory statements cannot both at the same time be true) is still valid.

For instance, in Heyting arithmetic, one can prove that for any proposition p that does not contain quantifiers, ∀x, y, z, ...: (p ∨ ¬p) is a theorem (where x, y, z ... are the free variables in the proposition p). In this sense, propositions restricted to the finite are still regarded as being either true or false, as they are in classical mathematics, but this bivalence does not extend to propositions that refer to infinite collections.

In fact, L.E.J. Brouwer, founder of the intuitionist school, viewed the law of the excluded middle as abstracted from finite experience, and then applied to the infinite without justification. For instance, Goldbach's conjecture is the assertion that every even number (greater than 2) is the sum of two prime numbers. It is possible to test for any particular even number whether or not it is the sum of two primes (for instance by exhaustive search), so any one of them is either the sum of two primes or it is not. And so far, every one thus tested has in fact been the sum of two primes.

But there is no known proof that all of them are so, nor any known proof that not all of them are so. Thus to Brouwer, we are not justified in asserting "either Goldbach's conjecture is true, or it is not." And while the conjecture may one day be solved, the argument applies to similar unsolved problems; to Brouwer, the law of the excluded middle was tantamount to assuming that every mathematical problem has a solution.

With the omission of the law of the excluded middle as an axiom, the remaining logical system has an existence property that classical logic does not have: whenever ∃x P(x) is proven constructively, then in fact P(a) is proven constructively for (at least) one particular a, often called a witness. Thus the proof of the existence of a mathematical object is tied to the possibility of its construction.

Example from real analysis

In classical real analysis, one way to define a real number is as an equivalence class of Cauchy sequences of rational numbers.

In constructive mathematics, one way to construct a real number is as a function ƒ that takes a positive integer n and outputs a rational ƒ(n), together with a function g that takes a positive integer n and outputs a positive integer g(n) such that

for all n and for all i, j ≥ g(n), |ƒ(i) − ƒ(j)| ≤ 1/n,

so that as n increases, the values of ƒ(n) get closer and closer together. We can use ƒ and g together to compute as close a rational approximation as we like to the real number they represent.

Under this definition, a simple representation of the real number e is:

ƒ(n) = 1/0! + 1/1! + ... + 1/n!,  g(n) = n.

This definition corresponds to the classical definition using Cauchy sequences, except with a constructive twist: for a classical Cauchy sequence, it is required that, for any given distance, there exists (in a classical sense) a member in the sequence after which all members are closer together than that distance. In the constructive version, it is required that, for any given distance, it is possible to actually specify a point in the sequence where this happens (this required specification is often called the modulus of convergence). In fact, the standard constructive interpretation of the mathematical statement

∀n: ∃m: ∀i, j ≥ m: |ƒ(i) − ƒ(j)| ≤ 1/n

is precisely the existence of the function computing the modulus of convergence. Thus the difference between the two definitions of real numbers can be thought of as the difference in the interpretation of the statement "for all... there exists..."
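
A minimal Python sketch of this pair-of-functions representation, using the representation of e given above (the names f_e, g_e and approximate are illustrative, not from the article):

    from fractions import Fraction
    from math import factorial

    def f_e(n):
        """Rational approximation of e: 1/0! + 1/1! + ... + 1/n!."""
        return sum(Fraction(1, factorial(i)) for i in range(n + 1))

    def g_e(n):
        """Modulus of convergence: |f_e(i) - f_e(j)| <= 1/n whenever i, j >= n."""
        return n

    def approximate(f, g, n):
        """A rational within 1/n of the real number represented by (f, g)."""
        return f(g(n))

    print(float(approximate(f_e, g_e, 1000)))   # 2.718281828...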

This then opens the question as to what sort of function from a countable set to a countable set, such as f and g above, can actually be constructed. Different versions of constructivism diverge on this point. Constructions can be defined as broadly as free choice sequences, which is the intuitionistic view, or as narrowly as algorithms (or more technically, the computable functions), or even left unspecified. If, for instance, the algorithmic view is taken, then the reals as constructed here are essentially what classically would be called the computable numbers.

Cardinality

To take the algorithmic interpretation above would seem at odds with classical notions of cardinality. By enumerating algorithms, we can show classically that the computable numbers are countable. And yet Cantor's diagonal argument shows that real numbers have higher cardinality. Furthermore, the diagonal argument seems perfectly constructive. To identify the real numbers with the computable numbers would then be a contradiction.

And in fact, Cantor's diagonal argument is constructive, in the sense that given a bijection between the real numbers and natural numbers, one constructs a real number that doesn't fit, and thereby proves a contradiction. We can indeed enumerate algorithms to construct a function T, about which we initially assume that it is a function from the natural numbers onto the reals. But, to each algorithm, there may or may not correspond a real number, as the algorithm may fail to satisfy the constraints, or even be non-terminating (T is a partial function), so this fails to produce the required bijection. In short, one who takes the view that real numbers are (individually) effectively computable interprets Cantor's result as showing that the real numbers (collectively) are not recursively enumerable.

Still, one might expect that since T is a partial function from the natural numbers onto the real numbers, that therefore the real numbers are no more than countable. And, since every natural number can be trivially represented as a real number, therefore the real numbers are no less than countable. They are, therefore exactly countable. However this reasoning is not constructive, as it still does not construct the required bijection. The classical theorem proving the existence of a bijection in such circumstances, namely the Cantor–Bernstein–Schroeder theorem, is non-constructive. It has recently been shown that the Cantor–Bernstein–Schroeder theorem implies the law of the excluded middle, hence there can be no constructive proof of the theorem.

Axiom of choice

The status of the axiom of choice in constructive mathematics is complicated by the different approaches of different constructivist programs. One trivial meaning of "constructive", used informally by mathematicians, is "provable in ZF set theory without the axiom of choice." However, proponents of more limited forms of constructive mathematics would assert that ZF itself is not a constructive system.

In intuitionistic theories of type theory (especially higher-type arithmetic), many forms of the axiom of choice are permitted. For example, the axiom AC11 can be paraphrased to say that for any relation R on the set of real numbers, if you have proved that for each real number x there is a real number y such that R(x,y) holds, then there is actually a function F such that R(x,F(x)) holds for all real numbers. Similar choice principles are accepted for all finite types. The motivation for accepting these seemingly nonconstructive principles is the intuitionistic understanding of the proof that "for each real number x there is a real number y such that R(x,y) holds". According to the BHK interpretation, this proof itself is essentially the function F that is desired. The choice principles that intuitionists accept do not imply the law of the excluded middle.

However, in certain axiom systems for constructive set theory, the axiom of choice does imply the law of the excluded middle (in the presence of other axioms), as shown by the Diaconescu-Goodman-Myhill theorem. Some constructive set theories include weaker forms of the axiom of choice, such as the axiom of dependent choice in Myhill's set theory.

Measure theory

Classical measure theory is fundamentally non-constructive, since the classical definition of Lebesgue measure does not describe any way to compute the measure of a set or the integral of a function. In fact, if one thinks of a function just as a rule that "inputs a real number and outputs a real number" then there cannot be any algorithm to compute the integral of a function, since any algorithm would only be able to call finitely many values of the function at a time, and finitely many values are not enough to compute the integral to any nontrivial accuracy. The solution to this conundrum, carried out first in Bishop's 1967 book, is to consider only functions that are written as the pointwise limit of continuous functions (with known modulus of continuity), with information about the rate of convergence. An advantage of constructivizing measure theory is that if one can prove that a set is constructively of full measure, then there is an algorithm for finding a point in that set (again see Bishop's book). For example, this approach can be used to construct a real number that is normal to every base.

The place of constructivism in mathematics

Traditionally, some mathematicians have been suspicious, if not antagonistic, towards mathematical constructivism, largely because of limitations they believed it to pose for constructive analysis. These views were forcefully expressed by David Hilbert in 1928, when he wrote in Grundlagen der Mathematik, "Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists".

Errett Bishop, in his 1967 work Foundations of Constructive Analysis, worked to dispel these fears by developing a great deal of traditional analysis in a constructive framework.

Even though most mathematicians do not accept the constructivist's thesis that only mathematics done based on constructive methods is sound, constructive methods are increasingly of interest on non-ideological grounds. For example, constructive proofs in analysis may ensure witness extraction, in such a way that working within the constraints of the constructive methods may make finding witnesses to theories easier than using classical methods. Applications for constructive mathematics have also been found in typed lambda calculi, topos theory and categorical logic, which are notable subjects in foundational mathematics and computer science. In algebra, for such entities as topoi and Hopf algebras, the structure supports an internal language that is a constructive theory; working within the constraints of that language is often more intuitive and flexible than working externally by such means as reasoning about the set of possible concrete algebras and their homomorphisms.

Physicist Lee Smolin writes in Three Roads to Quantum Gravity that topos theory is "the right form of logic for cosmology" (page 30) and "In its first forms it was called 'intuitionistic logic'" (page 31). "In this kind of logic, the statements an observer can make about the universe are divided into at least three groups: those that we can judge to be true, those that we can judge to be false and those whose truth we cannot decide upon at the present time" (page 28).


Reproductive rights

From Wikipedia, the free encyclo...