
Saturday, June 23, 2018

Differential equation

From Wikipedia, the free encyclopedia


Visualization of heat transfer in a pump casing, created by solving the heat equation. Heat is being generated internally in the casing and being cooled at the boundary, providing a steady state temperature distribution.

A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.

If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.

History

Differential equations first came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his 1671 work "Methodus fluxionum et Serierum Infinitarum",[1] Isaac Newton listed three kinds of differential equations:
\frac{dy}{dx} = f(x), \qquad \frac{dy}{dx} = f(x, y), \qquad x_{1}\frac{\partial y}{\partial x_{1}} + x_{2}\frac{\partial y}{\partial x_{2}} = y
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.

Jacob Bernoulli proposed the Bernoulli differential equation in 1695.[2] This is an ordinary differential equation of the form
y' + P(x)y = Q(x)y^{n}
for which the following year Leibniz obtained solutions by simplifying it.[3]

Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange.[4][5][6][7] In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.[8]

The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.

Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.

In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat),[9] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now taught to every student of mathematical physics.

Example

For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.

In some cases, this differential equation (called an equation of motion) may be solved explicitly.

An example of modelling a real world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the acceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
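Concretely, the model just described can be written and solved in closed form (a standard formulation added for illustration; the drag constant per unit mass k is an assumed parameter):

\frac{dv}{dt} = g - kv, \qquad v(0) = 0 \quad\Longrightarrow\quad v(t) = \frac{g}{k}\left(1 - e^{-kt}\right),

so the velocity rises from zero toward the terminal velocity g/k, and substituting v(t) back into the equation verifies the solution.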

Types

Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is: Ordinary/Partial, Linear/Non-linear, and Homogeneous/Inhomogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.

Ordinary differential equations

An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.

Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and, in many cases, one may express their solutions in terms of integrals.

Most ODEs that are encountered in physics are linear, and, therefore, most special functions may be defined as solutions of linear differential equations.

As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.
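As a minimal sketch of such a numerical solution (in Python, using SciPy's solve_ivp; the constants are illustrative assumptions, and the equation is the falling-ball model from the Example section):

# Numerically approximate dv/dt = g - k*v, v(0) = 0.
import numpy as np
from scipy.integrate import solve_ivp

g, k = 9.81, 0.5  # gravity (m/s^2) and an assumed drag coefficient (1/s)

def dv_dt(t, v):
    # Right-hand side of the ODE.
    return g - k * v

sol = solve_ivp(dv_dt, (0.0, 20.0), [0.0], t_eval=np.linspace(0.0, 20.0, 5))
print(sol.t)       # sample times
print(sol.y[0])    # approximate velocities at those times
print(g / k)       # terminal velocity, for comparison

The computed values approach g/k, matching the closed-form solution given earlier.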

Partial differential equations

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.

PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

Non-linear differential equations

A non-linear differential equation is one that is not linear in the unknown function and its derivatives; products or powers of the unknown function and its derivatives may appear. There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs, are hard problems, and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[10]

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.

Equation order

Differential equations are described by their order, determined by the term with the highest derivatives. An equation containing only first derivatives is a first-order differential equation, an equation containing the second derivative is a second-order differential equation, and so on.[11][12] Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin film equation, which is a fourth order partial differential equation.

Examples

In the first group of examples, let u be an unknown function of x, and let c and ω be known constants. Note that both ordinary and partial differential equations are broadly classified as linear and nonlinear.
  • Inhomogeneous first-order linear constant coefficient ordinary differential equation:
\frac{du}{dx} = cu + x^{2}.
  • Homogeneous second-order linear ordinary differential equation:
\frac{d^{2}u}{dx^{2}} - x\frac{du}{dx} + u = 0.
  • Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator (its general solution is worked out just after this list):
\frac{d^{2}u}{dx^{2}} + \omega^{2}u = 0.
  • Inhomogeneous first-order nonlinear ordinary differential equation:
\frac{du}{dx} = u^{2} + 4.
  • Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L:
L\frac{d^{2}u}{dx^{2}} + g\sin u = 0.
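For the harmonic oscillator equation above, the general solution is a standard fact, recorded here for illustration:

u(x) = A\cos(\omega x) + B\sin(\omega x), \qquad A, B \in \mathbb{R},

since differentiating twice gives \frac{d^{2}u}{dx^{2}} = -\omega^{2}u for every choice of the constants A and B.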
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
  • Homogeneous first-order linear partial differential equation:
\frac{\partial u}{\partial t} + t\frac{\partial u}{\partial x} = 0.
  • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:
\frac{\partial^{2}u}{\partial x^{2}} + \frac{\partial^{2}u}{\partial y^{2}} = 0.
  • Third-order nonlinear partial differential equation, the Korteweg–de Vries equation:
\frac{\partial u}{\partial t} = 6u\frac{\partial u}{\partial x} - \frac{\partial^{3}u}{\partial x^{3}}.

Existence of solutions

Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all is also a notable subject of interest.

For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point (a, b) in the xy-plane, define some rectangular region Z = [l, m] × [n, p] such that (a, b) is in the interior of Z. If we are given a differential equation dy/dx = g(x, y) and the condition that y = b when x = a, then there is locally a solution to this problem provided g(x, y) is continuous on Z. This solution exists on some interval with its center at a. The solution need not be unique; uniqueness requires a stronger hypothesis on g, such as a Lipschitz condition in y (see the Picard–Lindelöf theorem). (See Ordinary differential equation for other results.)
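A standard example of this non-uniqueness, added here for illustration (elementary ODE theory rather than part of the original article): the initial value problem

\frac{dy}{dx} = y^{2/3}, \qquad y(0) = 0,

is solved for x \geq 0 both by y(x) \equiv 0 and by y(x) = x^{3}/27, since \frac{d}{dx}\left(\frac{x^{3}}{27}\right) = \frac{x^{2}}{9} = \left(\frac{x^{3}}{27}\right)^{2/3}; here g(x, y) = y^{2/3} is continuous but not Lipschitz in y at y = 0.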

However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:
f_{n}(x)\frac{d^{n}y}{dx^{n}} + \cdots + f_{1}(x)\frac{dy}{dx} + f_{0}(x)y = g(x)
such that
y(x_{0}) = y_{0}, \quad y'(x_{0}) = y'_{0}, \quad y''(x_{0}) = y''_{0}, \ \ldots
For any nonzero f_{n}(x), if f_{0}, f_{1}, \ldots, f_{n} and g are all continuous on some interval containing x_{0}, then a solution y exists and is unique.[13]

Related concepts

Connection to difference equations

The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.
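For instance, the Euler method replaces a differential equation by exactly such a difference equation. A minimal sketch (in Python; the test equation dy/dx = y with y(0) = 1, whose exact solution is e^x, is my choice for illustration):

import math

def euler(h, x_end):
    # Difference-equation approximation y_{n+1} = y_n + h * f(x_n, y_n),
    # here with f(x, y) = y and y(0) = 1.
    n = round(x_end / h)   # number of steps
    y = 1.0
    for _ in range(n):
        y += h * y
    return y

for h in (0.1, 0.01, 0.001):
    print(h, euler(h, 1.0))   # approaches math.e as h shrinks
print("exact:", math.e)

As the step size h decreases, the solution of the difference equation converges to the solution of the differential equation.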

Applications

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modelling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Many differential equations used to model real-life problems are not directly solvable, i.e. they do not have closed-form solutions. Instead, solutions can be approximated using numerical methods.

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

Physics

Classical mechanics

So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.

Electrodynamics

Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862.

General relativity

The Einstein field equations (EFE; also known as "Einstein's equations") are a set of ten partial differential equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy.[14] First published by Einstein in 1915[15] as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).[16]

Quantum mechanics

In quantum mechanics, the analogue of Newton's law is Schrödinger's equation (a partial differential equation) for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").[17]

Biology

Predator-prey equations

The Lotka–Volterra equations, also known as the predator–prey equations, are a pair of first-order non-linear differential equations frequently used to describe the population dynamics of two species that interact, one as a predator and the other as prey.
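A minimal numerical sketch of the Lotka–Volterra system (in Python; the parameter values and initial populations below are illustrative assumptions, not from the article):

import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.1, 1.5, 0.075   # assumed growth and interaction rates

def lotka_volterra(t, z):
    x, y = z                          # x: prey, y: predators
    return [a * x - b * x * y,        # dx/dt
            d * x * y - c * y]        # dy/dt

sol = solve_ivp(lotka_volterra, (0.0, 30.0), [10.0, 5.0],
                t_eval=np.linspace(0.0, 30.0, 7))
print(sol.y[0])   # prey population at the sampled times
print(sol.y[1])   # predator population at the sampled times

The two populations oscillate out of phase, the qualitative behavior for which these equations are known.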

Chemistry

The rate law or rate equation for a chemical reaction is a differential equation that links the reaction rate with concentrations or pressures of reactants and constant parameters (normally rate coefficients and partial reaction orders).[18] To determine the rate equation for a particular system one combines the reaction rate with a mass balance for the system.[19] In addition, a range of differential equations are present in the study of thermodynamics and quantum mechanics.

Economics

In economics, differential equations are used to model the behavior of complex systems such as markets; a prominent example is the Black–Scholes equation mentioned above.

Infinitesimal

From Wikipedia, the free encyclopedia

Infinitesimals (ε) and infinites (ω) on the hyperreal number line (ε = 1/ω)

In mathematics, infinitesimals are things so small that there is no way to measure them. The insight with exploiting infinitesimals was that entities could still retain certain specific properties, such as angle or slope, even though these entities were quantitatively small.[1] The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinite-th" item in a sequence. Infinitesimals are a basic ingredient in the procedures of infinitesimal calculus as developed by Leibniz, including the law of continuity and the transcendental law of homogeneity. In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in size—or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective, "infinitesimal" means "extremely small". To give it a meaning, it usually must be compared to another infinitesimal object in the same context (as in a derivative). Infinitely many infinitesimals are summed to produce an integral.

The concept of infinitesimals was originally introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz.[2] Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids.[3] In his formal published treatises, Archimedes solved the same problem using the method of exhaustion. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular the calculation of the area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin's work on decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles treated geometrical figures as composed of entities of codimension 1. John Wallis's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. He exploited an infinitesimal denoted 1/∞ in area calculations.

The use of infinitesimals by Leibniz relied upon heuristic principles, such as the law of continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the transcendental law of homogeneity that specifies procedures for replacing expressions involving inassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange. Augustin-Louis Cauchy exploited infinitesimals both in defining continuity in his Cours d'Analyse, and in defining an early form of a Dirac delta function. As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Émile Borel and Thoralf Skolem. Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed non-standard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy Łoś in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality.

Vladimir Arnold wrote in 1990:
Nowadays, when teaching analysis, it is not very popular to talk about infinitesimal quantities. Consequently present-day students are not fully in command of this language. Nevertheless, it is still necessary to have command of it.[4]

History of the infinitesimal

The notion of infinitely small quantities was discussed by the Eleatic School. The Greek mathematician Archimedes (c.287 BC–c.212 BC), in The Method of Mechanical Theorems, was the first to propose a logically rigorous definition of infinitesimals.[5] His Archimedean property defines a number x as infinite if it satisfies the conditions |x|>1, |x|>1+1, |x|>1+1+1, ..., and infinitesimal if x≠0 and a similar set of conditions holds for x and the reciprocals of the positive integers. A number system is said to be Archimedean if it contains no infinite or infinitesimal members.

The English mathematician John Wallis introduced the expression 1/∞ in his 1655 book Treatise on the Conic Sections. The symbol, which denotes the reciprocal, or inverse, of ∞, is the symbolic representation of the mathematical concept of an infinitesimal. In his Treatise on the Conic Sections Wallis also discusses the relationship between the symbolic representation of the infinitesimal 1/∞ that he introduced and the concept of infinity, for which he introduced the symbol ∞. The concept suggests a thought experiment of adding an infinite number of parallelograms of infinitesimal width to form a finite area. This concept was the predecessor to the modern method of integration used in integral calculus. The conceptual origins of the infinitesimal 1/∞ can be traced as far back as the Greek philosopher Zeno of Elea, whose dichotomy paradox was the first mathematical concept to consider the relationship between a finite interval and an interval approaching the size of an infinitesimal.

Infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632.[6]

Prior to the invention of calculus mathematicians were able to calculate tangent lines using Pierre de Fermat's method of adequality and René Descartes' method of normals. There is debate among scholars as to whether the method was infinitesimal or algebraic in nature. When Newton and Leibniz invented the calculus, they made use of infinitesimals, Newton's fluxions and Leibniz' differential. The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst.[7] Mathematicians, scientists, and engineers continued to use infinitesimals to produce correct results. In the second half of the nineteenth century, the calculus was reformulated by Augustin-Louis Cauchy, Bernard Bolzano, Karl Weierstrass, Cantor, Dedekind, and others using the (ε, δ)-definition of limit and set theory. While the followers of Cantor, Dedekind, and Weierstrass sought to rid analysis of infinitesimals, and their philosophical allies like Bertrand Russell and Rudolf Carnap declared that infinitesimals are pseudoconcepts, Hermann Cohen and his Marburg school of neo-Kantianism sought to develop a working logic of infinitesimals.[8] The mathematical study of systems containing infinitesimals continued through the work of Levi-Civita, Giuseppe Veronese, Paul du Bois-Reymond, and others, throughout the late nineteenth and the twentieth centuries, as documented by Philip Ehrlich (2006). In the 20th century, it was found that infinitesimals could serve as a basis for calculus and analysis; see hyperreal number.

First-order properties

In extending the real numbers to include infinite and infinitesimal quantities, one typically wishes to be as conservative as possible by not changing any of their elementary properties. This guarantees that as many familiar results as possible are still available. Typically elementary means that there is no quantification over sets, but only over elements. This limitation allows statements of the form "for any number x..." For example, the axiom that states "for any number x, x + 0 = x" would still apply. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." However, statements of the form "for any set S of numbers ..." may not carry over. Logic with this limitation on quantification is referred to as first-order logic.

The resulting extended number system cannot agree with the reals on all properties that can be expressed by quantification over sets, because the goal is to construct a non-Archimedean system, and the Archimedean principle can be expressed by quantification over sets. One can conservatively extend any theory including reals, including set theory, to include infinitesimals, just by adding a countably infinite list of axioms that assert that a number is smaller than 1/2, 1/3, 1/4 and so on. Similarly, the completeness property cannot be expected to carry over, because the reals are the unique complete ordered field up to isomorphism.

We can distinguish three levels at which a nonarchimedean number system could have first-order properties compatible with those of the reals:
  1. An ordered field obeys all the usual axioms of the real number system that can be stated in first-order logic. For example, the commutativity axiom x + y = y + x holds.
  2. A real closed field has all the first-order properties of the real number system, regardless of whether they are usually taken as axiomatic, for statements involving the basic ordered-field relations +, ×, and ≤. This is a stronger condition than obeying the ordered-field axioms. More specifically, one includes additional first-order properties, such as the existence of a root for every odd-degree polynomial. For example, every number must have a cube root.
  3. The system could have all the first-order properties of the real number system for statements involving any relations (regardless of whether those relations can be expressed using +, ×, and ≤). For example, there would have to be a sine function that is well defined for infinite inputs; the same is true for every real function.
Systems in category 1, at the weak end of the spectrum, are relatively easy to construct, but do not allow a full treatment of classical analysis using infinitesimals in the spirit of Newton and Leibniz. For example, the transcendental functions are defined in terms of infinite limiting processes, and therefore there is typically no way to define them in first-order logic. Increasing the analytic strength of the system by passing to categories 2 and 3, we find that the flavor of the treatment tends to become less constructive, and it becomes more difficult to say anything concrete about the hierarchical structure of infinities and infinitesimals.

Number systems that include infinitesimals

Formal series

Laurent series

An example from category 1 above is the field of Laurent series with a finite number of negative-power terms. For example, the Laurent series consisting only of the constant term 1 is identified with the real number 1, and the series with only the linear term x is thought of as the simplest infinitesimal, from which the other infinitesimals are constructed. Dictionary ordering is used, which is equivalent to considering higher powers of x as negligible compared to lower powers. David O. Tall[9] refers to this system as the super-reals, not to be confused with the superreal number system of Dales and Woodin. Since a Taylor series evaluated with a Laurent series as its argument is still a Laurent series, the system can be used to do calculus on transcendental functions if they are analytic. These infinitesimals have different first-order properties than the reals because, for example, the basic infinitesimal x does not have a square root.
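For instance, an illustrative computation under the ordering just described:

0 < \cdots < x^{3} < x^{2} < x < 1 < x^{-1} < x^{-2} < \cdots, \qquad \frac{1}{1 - x} = 1 + x + x^{2} + \cdots,

so x^{-1} is an infinite element of the field, and the geometric series on the right is the multiplicative inverse of the Laurent series 1 - x.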

The Levi-Civita field

The Levi-Civita field is similar to the Laurent series, but is algebraically closed. For example, the basic infinitesimal x has a square root. This field is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented in floating point.[10]

Transseries

The field of transseries is larger than the Levi-Civita field.[11] An example of a transseries is:
e^{\sqrt{\ln\ln x}} + \ln\ln x + \sum_{j=0}^{\infty} e^{x} x^{-j},
where for purposes of ordering x is considered infinite.

Surreal numbers

Conway's surreal numbers fall into category 2. They are a system designed to be as rich as possible in different sizes of numbers, but not necessarily for convenience in doing analysis. Certain transcendental functions can be carried over to the surreals, including logarithms and exponentials, but most, e.g., the sine function, cannot. The existence of any particular surreal number, even one that has a direct counterpart in the reals, is not known a priori, and must be proved.

Hyperreals

The most widespread technique for handling infinitesimals is the hyperreals, developed by Abraham Robinson in the 1960s. They fall into category 3 above, having been designed that way so all of classical analysis can be carried over from the reals. This property of being able to carry over all relations in a natural way is known as the transfer principle, proved by Jerzy Łoś in 1955. For example, the transcendental function sin has a natural counterpart *sin that takes a hyperreal input and gives a hyperreal output, and similarly the set of natural numbers \mathbb{N} has a natural counterpart {}^{*}\mathbb{N}, which contains both finite and infinite integers. A proposition such as \forall n \in \mathbb{N},\ \sin n\pi = 0 carries over to the hyperreals as \forall n \in {}^{*}\mathbb{N},\ {}^{*}\!\sin n\pi = 0.

Superreals

The superreal number system of Dales and Woodin is a generalization of the hyperreals. It is different from the super-real system defined by David Tall.

Dual numbers

In linear algebra, the dual numbers extend the reals by adjoining one infinitesimal, the new element ε with the property ε² = 0 (that is, ε is nilpotent). Every dual number has the form z = a + bε with a and b uniquely determined real numbers.

One application of dual numbers is automatic differentiation, as sketched below. This application can be generalized to polynomials in n variables, using the exterior algebra of an n-dimensional vector space.
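A minimal sketch of this idea (in Python; the class and function names are my own, not a standard library API): arithmetic on a + bε with ε² = 0 automatically carries the derivative along in the ε-coefficient.

class Dual:
    """Dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def derivative(f, x):
    # Evaluate f at x + 1*eps; the eps-coefficient of the result is f'(x).
    return f(Dual(x, 1.0)).b

print(derivative(lambda x: x * x * x + 2 * x, 2.0))  # 3*2**2 + 2 = 14.0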

Smooth infinitesimal analysis

Synthetic differential geometry or smooth infinitesimal analysis have roots in category theory. This approach departs from the classical logic used in conventional mathematics by denying the general applicability of the law of excluded middle; i.e., not (a ≠ b) does not have to mean a = b. A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x² = 0 is true, but x = 0 need not be true at the same time. Since the background logic is intuitionistic logic, it is not immediately clear how to classify this system with regard to classes 1, 2, and 3. Intuitionistic analogues of these classes would have to be developed first.

Infinitesimal delta functions

Cauchy used an infinitesimal \alpha to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function \delta_{\alpha} satisfying \int F(x)\,\delta_{\alpha}(x)\,dx = F(0), in a number of articles in 1827; see Laugwitz (1989). Cauchy defined an infinitesimal in 1821 (Cours d'Analyse) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.

Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals.

Logical properties

The method of constructing infinitesimals of the kind used in nonstandard analysis depends on the model and which collection of axioms are used. We consider here systems where infinitesimals can be shown to exist.

In 1936 Maltsev proved the compactness theorem. This theorem is fundamental for the existence of infinitesimals as it proves that it is possible to formalise them. A consequence of this theorem is that if there is a number system in which it is true that for any positive integer n there is a positive number x such that 0 < x < 1/n, then there exists an extension of that number system in which it is true that there exists a positive number x such that for any positive integer n we have 0 < x < 1/n. The ability to switch "for any" and "there exists" is crucial. The first statement is true in the real numbers as given in ZFC set theory: for any positive integer n it is possible to find a real number between 1/n and zero, but this real number depends on n. Here, one chooses n first, then one finds the corresponding x. In the second expression, the statement says that there is an x (at least one), chosen first, which is between 0 and 1/n for any n. In this case x is infinitesimal. This is not true in the real numbers (R) given by ZFC. Nonetheless, the theorem proves that there is a model (a number system) in which this is true. The question is: what is this model? What are its properties? Is there only one such model?

There are in fact many ways to construct such a one-dimensional linearly ordered set of numbers, but fundamentally, there are two different approaches:
1) Extend the number system so that it contains more numbers than the real numbers.
2) Extend the axioms (or extend the language) so that the distinction between the infinitesimals and non-infinitesimals can be made in the real numbers themselves.
In 1960, Abraham Robinson provided an answer following the first approach. The extended set is called the hyperreals and contains numbers less in absolute value than any positive real number. The method may be considered relatively complex but it does prove that infinitesimals exist in the universe of ZFC set theory. The real numbers are called standard numbers and the new non-real hyperreals are called nonstandard.

In 1977 Edward Nelson provided an answer following the second approach. The extended axioms are IST, which stands either for Internal set theory or for the initials of the three extra axioms: Idealization, Standardization, Transfer. In this system we consider that the language is extended in such a way that we can express facts about infinitesimals. The real numbers are either standard or nonstandard. An infinitesimal is a nonstandard real number that is less, in absolute value, than any positive standard real number.

In 2006 Karel Hrbacek developed an extension of Nelson's approach in which the real numbers are stratified in (infinitely) many levels; i.e., in the coarsest level there are neither infinitesimals nor unlimited numbers. Infinitesimals are at a finer level, and there are also infinitesimals with respect to this new level, and so on.

Infinitesimals in teaching

Calculus textbooks based on infinitesimals include the classic Calculus Made Easy by Silvanus P. Thompson (bearing the motto "What one fool can do another can"[12]) and the German text Mathematik für Mittlere Technische Fachschulen der Maschinenindustrie by R. Neuendorff.[13] Pioneering works based on Abraham Robinson's infinitesimals include texts by Stroyan (dating from 1972) and Howard Jerome Keisler (Elementary Calculus: An Infinitesimal Approach). Students easily relate to the intuitive notion of an infinitesimal difference 1 − "0.999...", where "0.999..." differs from its standard meaning as the real number 1, and is reinterpreted as an infinite terminating extended decimal that is strictly less than 1.[14][15]

Another elementary calculus text that uses the theory of infinitesimals as developed by Robinson is Infinitesimal Calculus by Henle and Kleinberg, originally published in 1979.[16] The authors introduce the language of first order logic, and demonstrate the construction of a first order model of the hyperreal numbers. The text provides an introduction to the basics of integral and differential calculus in one dimension, including sequences and series of functions. In an Appendix, they also treat the extension of their model to the hyperhyperreals, and demonstrate some applications for the extended model.

Functions tending to zero

In a related but somewhat different sense, which evolved from the original definition of "infinitesimal" as an infinitely small quantity, the term has also been used to refer to a function tending to zero. More precisely, Loomis and Sternberg's Advanced Calculus defines the function class of infinitesimals, \mathfrak{I}, as a subset of functions f\colon V \to W between normed vector spaces by
\mathfrak{I}(V,W) = \{ f\colon V \to W \mid f(0) = 0,\ (\forall \epsilon > 0)(\exists \delta > 0)\ \|\xi\| < \delta \implies \|f(\xi)\| < \epsilon \},
as well as two related classes \mathfrak{O}, \mathfrak{o} (see Big-O notation) by
\mathfrak{O}(V,W) = \{ f\colon V \to W \mid f(0) = 0,\ (\exists r > 0, c > 0)\ \|\xi\| < r \implies \|f(\xi)\| \leq c\|\xi\| \}, and
\mathfrak{o}(V,W) = \{ f\colon V \to W \mid f(0) = 0,\ \lim_{\|\xi\| \to 0} \|f(\xi)\|/\|\xi\| = 0 \}.[17]
The set inclusions \mathfrak{o}(V,W) \subsetneq \mathfrak{O}(V,W) \subsetneq \mathfrak{I}(V,W) generally hold. That the inclusions are proper is demonstrated by the real-valued functions of a real variable f\colon x \mapsto |x|^{1/2}, g\colon x \mapsto x, and h\colon x \mapsto x^{2}:
f, g, h \in \mathfrak{I}(\mathbb{R},\mathbb{R}), \quad g, h \in \mathfrak{O}(\mathbb{R},\mathbb{R}), \quad h \in \mathfrak{o}(\mathbb{R},\mathbb{R}), but
f, g \notin \mathfrak{o}(\mathbb{R},\mathbb{R}) and f \notin \mathfrak{O}(\mathbb{R},\mathbb{R}).
As an application of these definitions, a mapping F\colon V \to W between normed vector spaces is defined to be differentiable at \alpha \in V if there is a T \in \mathrm{Hom}(V,W) [i.e., a bounded linear map V \to W] such that
[F(\alpha + \xi) - F(\alpha)] - T(\xi) \in \mathfrak{o}(V,W)
in a neighborhood of \alpha. If such a map exists, it is unique; this map is called the differential and is denoted by dF_{\alpha},[18] coinciding with the traditional notation for the classical (though logically flawed) notion of a differential as an infinitely small "piece" of F. This definition represents a generalization of the usual definition of differentiability for vector-valued functions of (open subsets of) Euclidean spaces.
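As a concrete check of this definition (an illustrative computation, not from the text), take F\colon \mathbb{R} \to \mathbb{R} with F(x) = x^{2}:

F(\alpha + \xi) - F(\alpha) = 2\alpha\xi + \xi^{2}, \qquad [F(\alpha + \xi) - F(\alpha)] - 2\alpha\xi = \xi^{2} \in \mathfrak{o}(\mathbb{R},\mathbb{R}),

so the differential is the linear map dF_{\alpha}(\xi) = 2\alpha\xi, recovering the classical derivative F'(\alpha) = 2\alpha.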

Array of random variables

Let (\Omega, \mathcal{F}, \mathbb{P}) be a probability space and let n \in \mathbb{N}. An array \{X_{n,k}\colon \Omega \to \mathbb{R} \mid 1 \leq k \leq k_{n}\} of random variables is called infinitesimal if for every \epsilon > 0 we have:[19]
\max_{1 \leq k \leq k_{n}} \mathbb{P}\{\omega \in \Omega \mid |X_{n,k}(\omega)| \geq \epsilon\} \to 0 \text{ as } n \to \infty.
The notion of infinitesimal array is essential in some central limit theorems and it is easily seen by monotonicity of the expectation operator that any array satisfying Lindeberg's condition is infinitesimal, thus playing an important role in Lindeberg's Central Limit Theorem (a generalization of the central limit theorem).
