
Saturday, June 23, 2018

Infinitesimal

From Wikipedia, the free encyclopedia

Infinitesimals (ε) and infinites (ω) on the hyperreal number line (ε = 1/ω)

In mathematics, infinitesimals are quantities too small to be measured by any available means. The insight in exploiting infinitesimals was that such entities could still retain certain specific properties, such as angle or slope, even though they were quantitatively small.[1] The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinite-th" item in a sequence. Infinitesimals are a basic ingredient in the procedures of infinitesimal calculus as developed by Leibniz, including the law of continuity and the transcendental law of homogeneity. In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in size—or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective, "infinitesimal" means "extremely small". To give it a meaning, an infinitesimal usually must be compared to another infinitesimal in the same context (as in a derivative). Infinitely many infinitesimals are summed to produce an integral.

The concept of infinitesimals was originally introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz.[2] Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids.[3] In his formal published treatises, Archimedes solved the same problem using the method of exhaustion. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular the calculation of the area of a circle by representing it as an infinite-sided polygon. Simon Stevin's work on decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles treated geometrical figures as composed of entities of codimension 1. John Wallis's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. He exploited an infinitesimal denoted 1/∞ in area calculations.

The use of infinitesimals by Leibniz relied upon heuristic principles, such as the law of continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the transcendental law of homogeneity that specifies procedures for replacing expressions involving inassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange. Augustin-Louis Cauchy exploited infinitesimals both in defining continuity in his Cours d'Analyse, and in defining an early form of a Dirac delta function. As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Émile Borel and Thoralf Skolem. Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed non-standard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy Łoś in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality.

Vladimir Arnold wrote in 1990:
Nowadays, when teaching analysis, it is not very popular to talk about infinitesimal quantities. Consequently present-day students are not fully in command of this language. Nevertheless, it is still necessary to have command of it.[4]

History of the infinitesimal

The notion of infinitely small quantities was discussed by the Eleatic School. The Greek mathematician Archimedes (c. 287 BC – c. 212 BC), in The Method of Mechanical Theorems, was the first to propose a logically rigorous definition of infinitesimals.[5] His Archimedean property defines a number x as infinite if it satisfies the conditions |x| > 1, |x| > 1 + 1, |x| > 1 + 1 + 1, ..., and as infinitesimal if x ≠ 0 and a similar set of conditions holds for |x| and the reciprocals of the positive integers: |x| < 1/2, |x| < 1/3, |x| < 1/4, and so on. A number system is said to be Archimedean if it contains no infinite or infinitesimal members.

The English mathematician John Wallis introduced the expression 1/∞ in his 1655 book Treatise on the Conic Sections. The symbol, which denotes the reciprocal, or inverse, of ∞, is the symbolic representation of the mathematical concept of an infinitesimal. In the same treatise Wallis also discusses the relationship between this symbolic representation of the infinitesimal 1/∞ and the concept of infinity, for which he introduced the symbol ∞. The concept suggests a thought experiment of adding an infinite number of parallelograms of infinitesimal width to form a finite area; this was the predecessor to the modern method of integration used in integral calculus. The conceptual origins of the infinitesimal 1/∞ can be traced as far back as the Greek philosopher Zeno of Elea, whose dichotomy paradox was the first mathematical concept to consider the relationship between a finite interval and an interval approaching the size of an infinitesimal.

Infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632.[6]

Prior to the invention of calculus mathematicians were able to calculate tangent lines using Pierre de Fermat's method of adequality and René Descartes' method of normals. There is debate among scholars as to whether the method was infinitesimal or algebraic in nature. When Newton and Leibniz invented the calculus, they made use of infinitesimals, Newton's fluxions and Leibniz' differential. The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst.[7] Mathematicians, scientists, and engineers continued to use infinitesimals to produce correct results. In the second half of the nineteenth century, the calculus was reformulated by Augustin-Louis Cauchy, Bernard Bolzano, Karl Weierstrass, Cantor, Dedekind, and others using the (ε, δ)-definition of limit and set theory. While the followers of Cantor, Dedekind, and Weierstrass sought to rid analysis of infinitesimals, and their philosophical allies like Bertrand Russell and Rudolf Carnap declared that infinitesimals are pseudoconcepts, Hermann Cohen and his Marburg school of neo-Kantianism sought to develop a working logic of infinitesimals.[8] The mathematical study of systems containing infinitesimals continued through the work of Levi-Civita, Giuseppe Veronese, Paul du Bois-Reymond, and others, throughout the late nineteenth and the twentieth centuries, as documented by Philip Ehrlich (2006). In the 20th century, it was found that infinitesimals could serve as a basis for calculus and analysis; see hyperreal number.

First-order properties

In extending the real numbers to include infinite and infinitesimal quantities, one typically wishes to be as conservative as possible by not changing any of their elementary properties. This guarantees that as many familiar results as possible are still available. Typically elementary means that there is no quantification over sets, but only over elements. This limitation allows statements of the form "for any number x..." For example, the axiom that states "for any number x, x + 0 = x" would still apply. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." However, statements of the form "for any set S of numbers ..." may not carry over. Logic with this limitation on quantification is referred to as first-order logic.

The resulting extended number system cannot agree with the reals on all properties that can be expressed by quantification over sets, because the goal is to construct a non-Archimedean system, and the Archimedean principle can be expressed by quantification over sets. One can conservatively extend any theory including the reals, including set theory, to include infinitesimals, just by adding a countably infinite list of axioms asserting that some fixed new number is smaller than 1/2, than 1/3, than 1/4, and so on. Similarly, the completeness property cannot be expected to carry over, because the reals are the unique complete ordered field up to isomorphism.

We can distinguish three levels at which a non-Archimedean number system could have first-order properties compatible with those of the reals:
  1. An ordered field obeys all the usual axioms of the real number system that can be stated in first-order logic. For example, the commutativity axiom x + y = y + x holds.
  2. A real closed field has all the first-order properties of the real number system, regardless of whether they are usually taken as axiomatic, for statements involving the basic ordered-field relations +, ×, and ≤. This is a stronger condition than obeying the ordered-field axioms. More specifically, one includes additional first-order properties, such as the existence of a root for every odd-degree polynomial. For example, every number must have a cube root.
  3. The system could have all the first-order properties of the real number system for statements involving any relations (regardless of whether those relations can be expressed using +, ×, and ≤). For example, there would have to be a sine function that is well defined for infinite inputs; the same is true for every real function.
Systems in category 1, at the weak end of the spectrum, are relatively easy to construct, but do not allow a full treatment of classical analysis using infinitesimals in the spirit of Newton and Leibniz. For example, the transcendental functions are defined in terms of infinite limiting processes, and therefore there is typically no way to define them in first-order logic. Increasing the analytic strength of the system by passing to categories 2 and 3, we find that the flavor of the treatment tends to become less constructive, and it becomes more difficult to say anything concrete about the hierarchical structure of infinities and infinitesimals.
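The odd-degree-root property required of category 2 above can be illustrated numerically: an odd-degree real polynomial takes both signs far enough out, so the intermediate value theorem guarantees a real root, which bisection finds. A minimal Python sketch (the particular polynomial is an arbitrary example, not from the source):

```python
def p(x):
    # arbitrary example of an odd-degree polynomial: p(x) = x^5 - 3x + 1
    return x**5 - 3*x + 1

# An odd-degree polynomial with positive leading coefficient is negative for
# large negative x and positive for large positive x, so it crosses zero;
# bisection pins the crossing down to machine precision.
lo, hi = -10.0, 10.0
assert p(lo) < 0 < p(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root, p(root))   # p(root) is numerically ~0
```

The same argument applied to x³ − c shows that every number in a real closed field has a cube root.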

Number systems that include infinitesimals

Formal series

Laurent series

An example from category 1 above is the field of Laurent series with a finite number of negative-power terms. For example, the Laurent series consisting only of the constant term 1 is identified with the real number 1, and the series with only the linear term x is thought of as the simplest infinitesimal, from which the other infinitesimals are constructed. Dictionary ordering is used, which is equivalent to considering higher powers of x as negligible compared to lower powers. David O. Tall[9] refers to this system as the super-reals, not to be confused with the superreal number system of Dales and Woodin. Since a Taylor series evaluated with a Laurent series as its argument is still a Laurent series, the system can be used to do calculus on transcendental functions if they are analytic. These infinitesimals have different first-order properties than the reals because, for example, the basic infinitesimal x does not have a square root.
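The dictionary ordering just described can be made concrete with a toy representation of such series as maps from exponents to coefficients (all names here are illustrative, not a standard library): a series is positive exactly when its lowest-order nonzero coefficient is positive, which makes x a positive number smaller than every positive real.

```python
from fractions import Fraction

def leading(series):
    """Lowest exponent carrying a nonzero coefficient (series: dict exp -> coeff)."""
    exps = [e for e, c in series.items() if c != 0]
    return min(exps) if exps else None

def is_positive(series):
    # dictionary order: lower powers of x dominate higher ones
    e = leading(series)
    return e is not None and series[e] > 0

def sub(a, b):
    out = dict(a)
    for e, c in b.items():
        out[e] = out.get(e, 0) - c
    return out

def less(a, b):
    return is_positive(sub(b, a))

one = {0: 1}   # the real number 1
x = {1: 1}     # the basic infinitesimal
# x is positive, yet smaller than every positive real, e.g. 10^-6:
assert is_positive(x) and less(x, {0: Fraction(1, 10**6)})
# its reciprocal x^-1 (one negative-power term) exceeds every real:
assert less({0: 10**9}, {-1: 1})
```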

The Levi-Civita field

The Levi-Civita field is similar to the Laurent series, but is algebraically closed. For example, the basic infinitesimal x has a square root. This field is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented in floating point.[10]

Transseries

The field of transseries is larger than the Levi-Civita field.[11] An example of a transseries is:
e^(√(ln ln x)) + ln ln x + Σ_{j=0}^{∞} e^x x^(−j),
where for purposes of ordering x is considered infinite.

Surreal numbers

Conway's surreal numbers fall into category 2. They are a system designed to be as rich as possible in different sizes of numbers, but not necessarily for convenience in doing analysis. Certain transcendental functions can be carried over to the surreals, including logarithms and exponentials, but most, e.g., the sine function, cannot[citation needed]. The existence of any particular surreal number, even one that has a direct counterpart in the reals, is not known a priori, and must be proved.[clarification needed]

Hyperreals

The most widespread technique for handling infinitesimals is the hyperreals, developed by Abraham Robinson in the 1960s. They fall into category 3 above, having been designed that way so that all of classical analysis can be carried over from the reals. This property of being able to carry over all relations in a natural way is known as the transfer principle, proved by Jerzy Łoś in 1955. For example, the transcendental function sin has a natural counterpart *sin that takes a hyperreal input and gives a hyperreal output, and similarly the set of natural numbers ℕ has a natural counterpart *ℕ, which contains both finite and infinite integers. A proposition such as ∀n ∈ ℕ, sin(nπ) = 0 carries over to the hyperreals as ∀n ∈ *ℕ, *sin(nπ) = 0.

Superreals

The superreal number system of Dales and Woodin is a generalization of the hyperreals. It is different from the super-real system defined by David Tall.

Dual numbers

In linear algebra, the dual numbers extend the reals by adjoining one infinitesimal, the new element ε with the property ε² = 0 (that is, ε is nilpotent). Every dual number has the form z = a + bε with a and b uniquely determined real numbers.

One application of dual numbers is automatic differentiation. This application can be generalized to polynomials in n variables, using the exterior algebra of an n-dimensional vector space.
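The nilpotency ε² = 0 is exactly what makes forward-mode automatic differentiation work: evaluating a polynomial at a + ε carries the derivative along in the ε-coefficient. A minimal sketch (class and function names are illustrative):

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

y = f(Dual(5.0, 1.0))   # seed the derivative: d(x)/dx = 1
assert y.a == 86.0      # f(5)  = 75 + 10 + 1
assert y.b == 32.0      # f'(5) = 30 + 2
```

Because ε² vanishes, no truncation error is introduced: the ε-coefficient is the exact derivative, not a finite-difference approximation.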

Smooth infinitesimal analysis

Synthetic differential geometry or smooth infinitesimal analysis have roots in category theory. This approach departs from the classical logic used in conventional mathematics by denying the general applicability of the law of excluded middle – i.e., not (a ≠ b) does not have to mean a = b. A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x² = 0 is true, but x = 0 need not be true at the same time. Since the background logic is intuitionistic logic, it is not immediately clear how to classify this system with regard to classes 1, 2, and 3. Intuitionistic analogues of these classes would have to be developed first.

Infinitesimal delta functions

Cauchy used an infinitesimal α to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function δ_α satisfying ∫ F(x) δ_α(x) dx = F(0), in a number of articles in 1827; see Laugwitz (1989). Cauchy defined an infinitesimal in 1821 (Cours d'Analyse) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.

Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals.

Logical properties

The method of constructing infinitesimals of the kind used in nonstandard analysis depends on the model and which collection of axioms are used. We consider here systems where infinitesimals can be shown to exist.

In 1936 Maltsev proved the compactness theorem. This theorem is fundamental for the existence of infinitesimals, as it proves that it is possible to formalise them. A consequence of this theorem is that if there is a number system in which it is true that for any positive integer n there is a positive number x such that 0 < x < 1/n, then there exists an extension of that number system in which it is true that there exists a positive number x such that for any positive integer n we have 0 < x < 1/n. The possibility of switching "for any" and "there exists" is crucial. The first statement is true in the real numbers as given in ZFC set theory: for any positive integer n it is possible to find a real number between 1/n and zero, but this real number depends on n. Here, one chooses n first, then one finds the corresponding x. In the second expression, the statement says that there is an x (at least one), chosen first, which is between 0 and 1/n for any n. In this case x is infinitesimal. This is not true in the real numbers (ℝ) given by ZFC. Nonetheless, the theorem proves that there is a model (a number system) in which this is true. The question is: what is this model? What are its properties? Is there only one such model?
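The quantifier switch at the heart of this argument can be written out explicitly:

```latex
% True in the reals: the witness x may depend on n.
\forall n \in \mathbb{N}^{+} \;\; \exists x \;\; (0 < x < 1/n)

% False in the reals, but true in a suitable extension given by
% compactness: one x works for every n, i.e. x is infinitesimal.
\exists x \;\; \forall n \in \mathbb{N}^{+} \;\; (0 < x < 1/n)
```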

There are in fact many ways to construct such a one-dimensional linearly ordered set of numbers, but fundamentally, there are two different approaches:
  1. Extend the number system so that it contains more numbers than the real numbers.
  2. Extend the axioms (or extend the language) so that the distinction between the infinitesimals and non-infinitesimals can be made in the real numbers themselves.
In 1960, Abraham Robinson provided an answer following the first approach. The extended set is called the hyperreals and contains numbers less in absolute value than any positive real number. The method may be considered relatively complex but it does prove that infinitesimals exist in the universe of ZFC set theory. The real numbers are called standard numbers and the new non-real hyperreals are called nonstandard.

In 1977 Edward Nelson provided an answer following the second approach. The extended axioms are IST, which stands either for Internal set theory or for the initials of the three extra axioms: Idealization, Standardization, Transfer. In this system we consider that the language is extended in such a way that we can express facts about infinitesimals. The real numbers are either standard or nonstandard. An infinitesimal is a nonstandard real number that is less, in absolute value, than any positive standard real number.

In 2006 Karel Hrbacek developed an extension of Nelson's approach in which the real numbers are stratified in (infinitely) many levels; i.e., in the coarsest level there are no infinitesimals nor unlimited numbers. Infinitesimals are in a finer level and there are also infinitesimals with respect to this new level and so on.

Infinitesimals in teaching

Calculus textbooks based on infinitesimals include the classic Calculus Made Easy by Silvanus P. Thompson (bearing the motto "What one fool can do another can"[12]) and the German text Mathematik für Mittlere Technische Fachschulen der Maschinenindustrie by R. Neuendorff.[13] Pioneering works based on Abraham Robinson's infinitesimals include texts by Stroyan (dating from 1972) and Howard Jerome Keisler (Elementary Calculus: An Infinitesimal Approach). Students easily relate to the intuitive notion of an infinitesimal difference 1 − "0.999...", where "0.999..." differs from its standard meaning as the real number 1, and is reinterpreted as an infinite terminating extended decimal that is strictly less than 1.[14][15]

Another elementary calculus text that uses the theory of infinitesimals as developed by Robinson is Infinitesimal Calculus by Henle and Kleinberg, originally published in 1979.[16] The authors introduce the language of first order logic, and demonstrate the construction of a first order model of the hyperreal numbers. The text provides an introduction to the basics of integral and differential calculus in one dimension, including sequences and series of functions. In an Appendix, they also treat the extension of their model to the hyperhyperreals, and demonstrate some applications for the extended model.

Functions tending to zero

In a related but somewhat different sense, which evolved from the original definition of "infinitesimal" as an infinitely small quantity, the term has also been used to refer to a function tending to zero. More precisely, Loomis and Sternberg's Advanced Calculus defines the function class of infinitesimals, 𝕴, as a subset of functions f : V → W between normed vector spaces by

𝕴(V, W) = {f : V → W | f(0) = 0, (∀ε > 0)(∃δ > 0) such that ‖ξ‖ < δ ⟹ ‖f(ξ)‖ < ε},

as well as two related classes 𝔒 and 𝔬 (see Big-O notation) by

𝔒(V, W) = {f : V → W | f(0) = 0, (∃r > 0, c > 0) such that ‖ξ‖ < r ⟹ ‖f(ξ)‖ ≤ c‖ξ‖}, and

𝔬(V, W) = {f : V → W | f(0) = 0, lim_{‖ξ‖→0} ‖f(ξ)‖/‖ξ‖ = 0}.[17]

The set inclusions 𝔬(V, W) ⊊ 𝔒(V, W) ⊊ 𝕴(V, W) generally hold. That the inclusions are proper is demonstrated by the real-valued functions of a real variable f : x ↦ |x|^(1/2), g : x ↦ x, and h : x ↦ x²:

f, g, h ∈ 𝕴(ℝ, ℝ),  g, h ∈ 𝔒(ℝ, ℝ),  h ∈ 𝔬(ℝ, ℝ),  but
f, g ∉ 𝔬(ℝ, ℝ) and f ∉ 𝔒(ℝ, ℝ).

As an application of these definitions, a mapping F : V → W between normed vector spaces is defined to be differentiable at α ∈ V if there is a T ∈ Hom(V, W) [i.e., a bounded linear map V → W] such that

[F(α + ξ) − F(α)] − T(ξ) ∈ 𝔬(V, W)

in a neighborhood of α. If such a map exists, it is unique; this map is called the differential and is denoted dF_α,[18] coinciding with the traditional notation for the classical (though logically flawed) notion of a differential as an infinitely small "piece" of F. This definition represents a generalization of the usual definition of differentiability for vector-valued functions of (open subsets of) Euclidean spaces.
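The properness of the inclusions can be checked numerically: as ‖ξ‖ → 0, the ratio ‖f(ξ)‖/‖ξ‖ blows up for f(x) = |x|^(1/2), stays constant for g(x) = x, and tends to zero for h(x) = x². A quick Python sketch:

```python
xs = [10.0 ** -k for k in range(1, 8)]   # x -> 0

def ratios(fn):
    return [fn(x) / x for x in xs]

r_f = ratios(lambda x: abs(x) ** 0.5)    # f: infinitesimal, but not big-O
r_g = ratios(lambda x: x)                # g: big-O, but not little-o
r_h = ratios(lambda x: x ** 2)           # h: little-o

assert r_f[-1] > 1000                    # |f(x)|/|x| = x^(-1/2) is unbounded
assert all(r == 1.0 for r in r_g)        # |g(x)|/|x| is constant
assert r_h[-1] < 1e-6                    # |h(x)|/|x| = x tends to 0
```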

Array of random variables

Let (Ω, 𝓕, ℙ) be a probability space and let n ∈ ℕ. An array {X_{n,k} : Ω → ℝ | 1 ≤ k ≤ k_n} of random variables is called infinitesimal if for every ε > 0 we have:[19]

max_{1 ≤ k ≤ k_n} ℙ{ω ∈ Ω : |X_{n,k}(ω)| ≥ ε} → 0 as n → ∞.
The notion of an infinitesimal array is essential in some central limit theorems, and it is easily seen by monotonicity of the expectation operator that any array satisfying Lindeberg's condition is infinitesimal; the notion thus plays an important role in Lindeberg's central limit theorem (a generalization of the classical central limit theorem).

Supertask

In philosophy, a supertask is a countably infinite sequence of operations that occur sequentially within a finite interval of time.[1] Supertasks are called "hypertasks" when the number of operations becomes uncountably infinite. A hypertask that includes one operation for each ordinal number is called an "ultratask".[2] The term supertask was coined by the philosopher James F. Thomson, who devised Thomson's lamp. The term hypertask derives from Clark and Read in their paper of that name.[3]

History

Zeno

Motion

The origin of the interest in supertasks is normally attributed to Zeno of Elea. Zeno claimed that motion was impossible. He argued as follows: suppose our burgeoning "mover", Achilles say, wishes to move from A to B. To achieve this he must traverse half the distance from A to B. To get from the midpoint of AB to B Achilles must traverse half this distance, and so on and so forth. However many times he performs one of these "traversing" tasks there is another one left for him to do before he arrives at B. Thus it follows, according to Zeno, that motion (travelling a non-zero distance in finite time) is a supertask. Zeno further argues that supertasks are not possible (how can this sequence be completed if for each traversing there is another one to come?). It follows that motion is impossible.

Zeno's argument takes the following form:
  1. Motion is a supertask, because the completion of motion over any set distance involves an infinite number of steps
  2. Supertasks are impossible
  3. Therefore, motion is impossible
Most subsequent philosophers reject Zeno's bold conclusion in favor of common sense. Instead they turn his argument on its head (assuming it's valid) and take it as a proof by contradiction where the possibility of motion is taken for granted. They accept the possibility of motion and apply modus tollens (contrapositive) to Zeno's argument to reach the conclusion that either motion is not a supertask or not all supertasks are impossible.

Achilles and the tortoise

Zeno himself also discusses the notion of what he calls "Achilles and the tortoise". Suppose that Achilles is the fastest runner, and moves at a speed of 1 m/s. Achilles chases a tortoise, an animal renowned for being slow, that moves at 0.1 m/s. However, the tortoise starts 0.9 metres ahead. Common sense seems to decree that Achilles will catch up with the tortoise after exactly 1 second, but Zeno argues that this is not the case. He instead suggests that Achilles must inevitably come up to the point where the tortoise has started from, but by the time he has accomplished this, the tortoise will already have moved on to another point. This continues, and every time Achilles reaches the mark where the tortoise was, the tortoise will have reached a new point that Achilles will have to catch up with; while it begins with 0.9 metres, it becomes an additional 0.09 metres, then 0.009 metres, and so on, infinitely. While these distances will grow very small, they will remain finite, while Achilles' chasing of the tortoise will become an unending supertask. Much commentary has been made on this particular paradox; many assert that it finds a loophole in common sense.[4]
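The reason common sense gets the right answer is that the infinitely many catch-up stages sum to a finite time: the stage durations form the geometric series 0.9 + 0.09 + 0.009 + … = 0.9/(1 − 0.1) = 1 second. A short numerical check:

```python
achilles_speed = 1.0    # m/s
tortoise_speed = 0.1    # m/s
gap = 0.9               # initial head start in metres

t = 0.0
for _ in range(60):                  # 60 stages suffice at double precision
    stage = gap / achilles_speed     # time to reach the tortoise's old position
    t += stage
    gap = tortoise_speed * stage     # new gap opened meanwhile: 10x smaller
# the partial sums of the geometric series approach exactly 1 second
assert abs(t - 1.0) < 1e-12
print(t)
```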

Thomson

James F. Thomson believed that motion was not a supertask, and he emphatically denied that supertasks are possible. The proof Thomson offered to the latter claim involves what has probably become the most famous example of a supertask since Zeno. Thomson's lamp may either be on or off. At time t = 0 the lamp is off, at time t = 1/2 it is on, at time t = 3/4 (= 1/2 + 1/4) it is off, t = 7/8 (= 1/2 + 1/4 + 1/8) it is on, etc. The natural question arises: at t = 1 is the lamp on or off? There does not seem to be any non-arbitrary way to decide this question. Thomson goes further and claims this is a contradiction. He says that the lamp cannot be on for there was never a point when it was on where it was not immediately switched off again. And similarly he claims it cannot be off for there was never a point when it was off where it was not immediately switched on again. By Thomson's reasoning the lamp is neither on nor off, yet by stipulation it must be either on or off – this is a contradiction. Thomson thus believes that supertasks are impossible.
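The switching schedule can be tabulated exactly with rational arithmetic; note that every switching time is strictly below t = 1, so the schedule itself never assigns a state at t = 1. A small sketch:

```python
from fractions import Fraction

state = False            # lamp off at t = 0
t = Fraction(0)
step = Fraction(1, 2)
history = []
for _ in range(10):      # first ten switches: t = 1/2, 3/4, 7/8, ...
    t += step
    step /= 2
    state = not state
    history.append((t, state))

# switching times approach 1 but never reach it: after ten switches
# t = 1023/1024, and every recorded time is strictly less than 1
assert history[-1][0] == Fraction(1023, 1024)
assert all(time < 1 for time, _ in history)
```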

Benacerraf

Paul Benacerraf believes that supertasks are at least logically possible despite Thomson's apparent contradiction. Benacerraf agrees with Thomson insofar as the experiment he outlined does not determine the state of the lamp at t = 1. However, he disagrees with Thomson that a contradiction can be derived from this, since the state of the lamp at t = 1 need not be logically determined by the preceding states. Logical implication does not bar the lamp from being on, off, or vanishing completely to be replaced by a horse-drawn pumpkin. There are possible worlds in which Thomson's lamp finishes on, and worlds in which it finishes off, not to mention countless others where weird and wonderful things happen at t = 1. The seeming arbitrariness arises from the fact that Thomson's experiment does not contain enough information to determine the state of the lamp at t = 1, rather like the way nothing can be found in Shakespeare's play to determine whether Hamlet was right- or left-handed. So what about the contradiction? Benacerraf showed that Thomson had committed a mistake. When he claimed that the lamp could not be on because it was never on without being turned off again, this applied only to instants of time strictly less than 1. It does not apply to 1, because 1 does not appear in the sequence {0, 1/2, 3/4, 7/8, …}, whereas Thomson's experiment only specified the state of the lamp for times in this sequence.

Modern literature

Most of the modern literature comes from the descendants of Benacerraf, those who tacitly accept the possibility of supertasks. Philosophers who reject their possibility tend not to reject them on grounds such as Thomson's but because they have qualms with the notion of infinity itself. Of course there are exceptions. For example, McLaughlin claims that Thomson's lamp is inconsistent if it is analyzed with internal set theory, a variant of real analysis.

Philosophy of mathematics

If supertasks are possible, then the truth or falsehood of unknown propositions of number theory, such as Goldbach's conjecture, or even undecidable propositions could be determined in a finite amount of time by a brute-force search of the set of all natural numbers. This would, however, be in contradiction with the Church-Turing thesis. Some have argued this poses a problem for intuitionism, since the intuitionist must distinguish between things that cannot in fact be proven (because they are too long or complicated; for example Boolos's "Curious Inference"[5]) but nonetheless are considered "provable", and those which are provable by infinite brute force in the above sense.

Physical possibility

Some have claimed Thomson's lamp is physically impossible since it must have parts moving at speeds faster than the speed of light (e.g., the lamp switch). Adolf Grünbaum suggests that the lamp could have a strip of wire which, when lifted, disrupts the circuit and turns off the lamp; this strip could then be lifted by a smaller distance each time the lamp is to be turned off, maintaining a constant velocity. However, such a design would ultimately fail, as eventually the distance between the contacts would be so small as to allow electrons to jump the gap, preventing the circuit from being broken at all.

Other physically possible supertasks have been suggested. In one proposal, one person (or entity) counts upward from 1, taking an infinite amount of time, while another person observes this from a frame of reference where this occurs in a finite space of time. For the counter, this is not a supertask, but for the observer, it is. (This could theoretically occur due to time dilation, for example if the observer were falling into a black hole while observing a counter whose position is fixed relative to the singularity.)

Davies in his paper "Building Infinite Machines" concocted a device which he claims is physically possible up to infinite divisibility. It involves a machine which creates an exact replica of itself at half the size and twice the speed. Still, for either a human or any device to perceive or act upon the state of the lamp, some measurement has to be done; for example, the light from the lamp would have to reach an eye or a sensor. Any such measurement will take a fixed frame of time, no matter how small, and therefore at some point measurement of the state will be impossible. Since the state at t = 1 cannot be determined even in principle, it is not meaningful to speak of the lamp being either on or off.

Gustavo E. Romero in the paper 'The collapse of supertasks'[6] maintains that any attempt to carry out a supertask will result in the formation of a black hole, making supertasks physically impossible.

Super Turing machines

The impact of supertasks on theoretical computer science has triggered some new work, for example Hamkins and Lewis's "Infinite Time Turing Machines".

Prominent supertasks

Ross-Littlewood paradox

Suppose there is a jar capable of containing infinitely many marbles and an infinite collection of marbles labelled 1, 2, 3, and so on. At time t = 0, marbles 1 through 10 are placed in the jar and marble 1 is taken out. At t = 0.5, marbles 11 through 20 are placed in the jar and marble 2 is taken out; at t = 0.75, marbles 21 through 30 are put in the jar and marble 3 is taken out; and in general at time t = 1 − 0.5^n, marbles 10n + 1 through 10n + 10 are placed in the jar and marble n + 1 is taken out. How many marbles are in the jar at time t = 1?

One argument states that there should be infinitely many marbles in the jar, because at each step before t = 1 the number of marbles increases from the previous step and does so without bound. A second argument, however, shows that the jar is empty. Consider the following argument: if the jar is non-empty, then there must be a marble in the jar. Say that marble is labeled with the number n. But at time t = 1 − 0.5^(n − 1), the nth marble was taken out, so marble n cannot be in the jar. This is a contradiction, so the jar must be empty. The Ross-Littlewood paradox is that here we have two seemingly sound arguments with completely opposite conclusions.
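Neither limit claim can be settled by computation, but the finite stages that drive both arguments are easy to simulate. This Python sketch (the function name `jar_after` is ours) shows the jar growing without bound while any fixed marble is eventually removed:

```python
def jar_after(steps: int) -> set[int]:
    """Simulate the first `steps` steps: at step k (0-indexed),
    add marbles 10k+1 .. 10k+10, then remove marble k+1."""
    jar = set()
    for k in range(steps):
        jar.update(range(10 * k + 1, 10 * k + 11))
        jar.discard(k + 1)
    return jar

jar = jar_after(100)
print(len(jar))      # 900: net growth of 9 marbles per step, unbounded
print(1 in jar)      # False: marble 1 left at step 0 ...
print(min(jar))      # 101: ... and every marble up to 100 is already gone
```

Every finite stage is consistent with both arguments; the disagreement is entirely about the limit at t = 1, which no finite simulation reaches.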

Further complications are introduced by the following variant. Suppose that we follow the same process as above, but instead of taking out marble 1 at t = 0, one takes out marble 2. And, at t = 0.5 one takes out marble 3, at t = 0.75 marble 4, etc. Then, one can use the same logic from above to show that while at t = 1, marble 1 is still in the jar, no other marbles can be left in the jar. Similarly, one can construct scenarios where in the end, 2 marbles are left, or 17 or, of course, infinitely many. But again this is paradoxical: given that in all these variations the same number of marbles are added or taken out at each step of the way, how can the end result differ?
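The variant can be simulated the same way. In this sketch (the hypothetical helper `jar_variant` simply shifts the removal index by one), marble 1 survives every finite stage even though the per-step counts of added and removed marbles are identical to the original:

```python
def jar_variant(steps: int) -> set[int]:
    """Same process, but at step k remove marble k+2 instead of k+1,
    so marble 1 is never scheduled for removal."""
    jar = set()
    for k in range(steps):
        jar.update(range(10 * k + 1, 10 * k + 11))
        jar.discard(k + 2)
    return jar

jar = jar_variant(100)
print(1 in jar)    # True: marble 1 is present at every finite stage
print(len(jar))    # 900: exactly the same count as the original process
```

At every finite stage the two processes contain the same number of marbles; only the identities of the survivors differ, which is precisely what makes the divergent "end results" puzzling.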

Some simply bite the bullet and say that, apparently, the end result does depend on which marbles are taken out at each instant. However, one immediate problem with that view is that the thought experiment can be run with none of the marbles labeled at all, so that all the above variations are simply different ways of describing the same process; it seems unreasonable to say that the end result of the one actual process depends on the way we describe what happens.

Moreover, Allis and Koetsier offer the following variation on this thought experiment: at t = 0, marbles 1 to 9 are placed in the jar, but instead of taking a marble out they scribble a 0 after the 1 on the label of the first marble, so that it is now labeled "10". At t = 0.5, marbles 11 to 19 are placed in the jar, and instead of taking out marble 2, a 0 is scribbled on its label, marking it as 20. The process is repeated ad infinitum. Notice that the contents of the jar at each step of this process are, label for label, the same as in the original experiment, and indeed the paradox remains: since at every step more marbles were added, there must be infinitely many marbles left at the end; yet since the label n was overwritten at t = 1 − 0.5^(n − 1) and never written again, no marble labeled n can be left at the end. In this experiment, however, no marbles are ever taken out, so any talk of the end result 'depending' on which marbles are removed along the way is ruled out.

A stripped-down variation that goes straight to the heart of the matter runs as follows: at t = 0, there is one marble in the jar with the number 0 scribbled on it. At t = 0.5, the number 0 on the marble is replaced with the number 1; at t = 0.75, the number is changed to 2, and so on. Now, no marbles are ever added to or removed from the jar, so at t = 1 there should still be exactly that one marble in the jar. However, since the number on the marble was always replaced by some other number, the marble should bear some number n at the end, and that is impossible, because we know precisely when each number was replaced and that it was never written again afterwards. In other words, we can also reason that no marble can be left at the end of this process, which is quite a paradox.

Of course, it would be wise to heed Benacerraf's observation that the states of the jar before t = 1 do not logically determine the state at t = 1. Thus neither Ross's argument nor Allis and Koetsier's argument for the state of the jar at t = 1 proceeds by logical means alone; some extra premise must be introduced in order to say anything about the state of the jar at t = 1. Allis and Koetsier believe such an extra premise is supplied by the physical law that the marbles have continuous space-time paths: from the fact that, for each n, marble n is outside the jar for t < 1, it follows by continuity that it is still outside the jar at t = 1. Thus the contradiction, and the paradox, remains.

One obvious solution to all these conundrums is to say that supertasks are impossible. If supertasks are impossible, then the very assumption that these scenarios have some kind of 'end result' is mistaken, which blocks all of the further reasoning leading to the contradictions.

Benardete’s paradox

There has been considerable interest in J. A. Benardete’s “Paradox of the Gods”:[7]
A man walks a mile from a point α. But there is an infinity of gods each of whom, unknown to the others, intends to obstruct him. One of them will raise a barrier to stop his further advance if he reaches the half-mile point, a second if he reaches the quarter-mile point, a third if he goes one-eighth of a mile, and so on ad infinitum. So he cannot even get started, because however short a distance he travels he will already have been stopped by a barrier. But in that case no barrier will rise, so that there is nothing to stop him setting off. He has been forced to stay where he is by the mere unfulfilled intentions of the gods.[8]
— M. Clark, Paradoxes from A to Z

Laraudogoitia’s supertask

This supertask, proposed by J. P. Laraudogoitia, is an example of indeterminism in Newtonian mechanics. The supertask consists of an infinite collection of stationary point particles, each of mass m, placed along a line AB of length a meters at positions B, AB / 2, AB / 4, AB / 8, and so on. The first particle, at B, is accelerated to a velocity of 1 meter per second towards A. According to the laws of Newtonian mechanics, when the first particle collides with the second, it comes to rest and the second particle inherits its velocity of 1 m/s. This process continues through an infinite sequence of collisions, and since every moving particle travels at 1 m/s, all the collisions are finished within a finite time (1 second when a = 1). However, no particle emerges at A, since there is no last particle in the sequence. It follows that all the particles are then at rest, contradicting conservation of energy. Now, the laws of Newtonian mechanics are time-reversal invariant; that is, if we reverse the direction of time, all the laws remain the same. If time is reversed in this supertask, we have a system of stationary point masses along the segment from A to AB / 2 that, at random, spontaneously start colliding with each other, resulting in a particle moving away from B at a velocity of 1 m/s. Alper and Bridger have questioned the reasoning in this supertask, invoking the distinction between actual and potential infinity.
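The timing claim rests on a geometric series: at unit speed, the k-th moving particle closes a gap of a/2^(k+1) meters before its collision, so the collision times sum to a seconds. A small sketch of this arithmetic (the helper name `collision_times` is ours, not Laraudogoitia's):

```python
def collision_times(a: float, n: int) -> list[float]:
    """Times of the first n collisions for particles at distances
    a, a/2, a/4, ... from A, the first moving toward A at 1 m/s.
    The k-th moving particle travels a gap of a/2**(k+1) meters."""
    times, t = [], 0.0
    for k in range(n):
        t += a / 2 ** (k + 1)  # time to close the gap at 1 m/s
        times.append(t)
    return times

ts = collision_times(1.0, 20)
print(ts[:3])   # [0.5, 0.75, 0.875]
print(ts[-1])   # partial sums approach 1.0: all collisions end within 1 s
```

The series 1/2 + 1/4 + 1/8 + … converges to 1, which is why infinitely many collisions fit into a finite interval even though no single collision is the last.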

Davies' super-machine

Proposed by E. B. Davies,[9] this is a machine that can, in the space of half an hour, create an exact replica of itself that is half its size and capable of twice its replication speed. This replica will in turn create an even faster version of itself with the same specifications, resulting in a supertask that finishes after an hour. If, additionally, the machines create a communication link between parent and child machine that yields successively faster bandwidth and the machines are capable of simple arithmetic, the machines can be used to perform brute-force proofs of unknown conjectures. However, Davies also points out that – due to fundamental properties of the real universe such as quantum mechanics, thermal noise and information theory – his machine can't actually be built.
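The one-hour total is again a geometric series: the first build takes 30 minutes and each subsequent machine, being twice as fast, builds its replica in half its parent's time, so the partial sums approach 60 minutes. A minimal sketch (the helper `total_time` is illustrative, not from Davies' paper):

```python
def total_time(generations: int, first: float = 30.0) -> float:
    """Total minutes elapsed once `generations` machines have each
    finished a build, every build taking half as long as its parent's
    (the first build takes 30 minutes)."""
    return sum(first / 2 ** k for k in range(generations))

print(total_time(1))    # 30.0
print(total_time(10))   # 59.94140625
# The partial sums approach 60 minutes: the supertask finishes in one hour.
```

Since 30 + 15 + 7.5 + … = 60, the whole infinite cascade of replications completes within the hour, which is what makes the device a supertask.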
