
Limit (mathematics)

From Wikipedia, the free encyclopedia

In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value. Limits are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals.

The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory.

In formulas, a limit of a function is usually written as

    lim_{x → c} f(x) = L

(although a few authors use "Lt" instead of "lim") and is read as "the limit of f of x as x approaches c equals L". The fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→), as in

    f(x) → L as x → c,

which reads "f of x tends to L as x tends to c".

History

Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment."

The modern definition of a limit goes back to Bernard Bolzano who, in 1817, developed the basics of the epsilon-delta technique to define continuous functions. However, his work remained unknown to other mathematicians until thirty years after his death.

Augustin-Louis Cauchy in 1821, followed by Karl Weierstrass, formalized the definition of the limit of a function which became known as the (ε, δ)-definition of limit.

The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908.

Types of limits

In sequences

Real numbers

The expression 0.999... should be interpreted as the limit of the sequence 0.9, 0.99, 0.999, ... and so on. This sequence can be rigorously shown to have the limit 1, and therefore this expression is meaningfully interpreted as having the value 1.

Formally, suppose a1, a2, … is a sequence of real numbers. When the limit of the sequence exists, the real number L is the limit of this sequence if and only if for every real number ε > 0, there exists a natural number N such that for all n > N, we have |an − L| < ε. The notation

    lim_{n → ∞} an = L

is often used, and is read as

"the limit of an as n approaches infinity equals L"

The formal definition intuitively means that eventually, all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L.
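
As a quick numerical sketch (an illustration added here, not part of the original article), the ε-N definition can be checked for the sequence an = 1/n, whose limit is 0: for each ε one can exhibit a concrete N, here N = ⌈1/ε⌉.

```python
import math

def find_N(epsilon):
    # For a_n = 1/n and L = 0, any N >= 1/epsilon witnesses the definition,
    # since n > N implies |1/n - 0| < epsilon.
    return math.ceil(1 / epsilon)

for eps in (0.1, 0.01, 0.001):
    N = find_N(eps)
    # spot-check the definition on a window of indices beyond N
    assert all(abs(1 / n - 0.0) < eps for n in range(N + 1, N + 1000))
```

Smaller ε simply forces a larger N; the sequence still eventually stays within every tolerance.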

Not every sequence has a limit. If it does, then it is called convergent, and if it does not, then it is divergent. One can show that a convergent sequence has only one limit.

The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n approaches infinity of a sequence {an} is simply the limit at infinity of a function a(n), defined on the natural numbers. On the other hand, if X is the domain of a function f(x) and if the limit as n approaches infinity of f(xn) is L for every arbitrary sequence of points {xn} in X − {x0} which converges to x0, then the limit of the function f(x) as x approaches x0 is L. One such sequence would be {x0 + 1/n}.

Infinity as a limit

There is also a notion of having a limit "at infinity", as opposed to at some finite L. A sequence {an} is said to "tend to infinity" if, for each real number M > 0, known as the bound, there exists an integer N such that for each n > N,

    |an| > M.

That is, for every possible bound, the magnitude of the sequence eventually exceeds the bound. This is often written lim_{n → ∞} an = ∞ or simply an → ∞. Such sequences are also called unbounded.

It is possible for a sequence to be divergent, but not tend to infinity. Such sequences are called oscillatory. An example of an oscillatory sequence is an = (−1)^n.

For the real numbers, there are corresponding notions of tending to positive infinity and negative infinity, by removing the modulus sign from the above definition:

    an > M

defines tending to positive infinity, while

    an < −M

defines tending to negative infinity.

Sequences which do not tend to infinity are called bounded. Sequences which do not tend to positive infinity are called bounded above, while those which do not tend to negative infinity are bounded below.
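
The distinction between unbounded and oscillatory sequences can be illustrated numerically; the following finite-horizon check is a sketch added here, not a statement from the article.

```python
def tends_to_infinity(seq, M, horizon=10_000):
    # Finite-horizon check: is there an N (within the horizon) beyond which
    # |a_n| always exceeds the bound M?
    for N in range(horizon - 1):
        if all(abs(seq(n)) > M for n in range(N + 1, horizon)):
            return True
    return False

square = lambda n: n * n          # tends to infinity: eventually exceeds any bound
oscillate = lambda n: (-1) ** n   # divergent but bounded: oscillatory

assert tends_to_infinity(square, M=1_000_000)
assert not tends_to_infinity(oscillate, M=2)
```

A true proof must handle every bound M and all n > N; the finite horizon here only demonstrates the definition's shape.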

Metric space

The discussion of sequences above is for sequences of real numbers. The notion of limits can be defined for sequences valued in more abstract spaces, such as metric spaces. If M is a metric space with distance function d, and {an} is a sequence in M, then the limit (when it exists) of the sequence is an element a ∈ M such that, given ε > 0, there exists an N such that for each n > N, the inequality

    d(an, a) < ε

is satisfied.

An equivalent statement is that an → a if the sequence of real numbers d(an, a) → 0.

Example: ℝn

An important example is the space of n-dimensional real vectors, with elements x = (x1, …, xn) where each of the xi is real. An example of a suitable distance function is the Euclidean distance, defined by

    d(x, y) = ‖x − y‖ = √( Σi (xi − yi)² ).

The sequence of points {xn} converges to x if the limit exists and ‖xn − x‖ → 0.
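
As a small sketch (the example sequence is chosen here for illustration), a sequence of points in the plane converging to (0, 2) under the Euclidean distance:

```python
import math

def euclidean(x, y):
    # Euclidean distance between two vectors of the same dimension
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

limit = (0.0, 2.0)
xs = [(1 / n, 2 + 1 / n**2) for n in range(1, 1001)]
dists = [euclidean(x, limit) for x in xs]

assert dists[-1] < 0.01                               # distances shrink toward 0
assert all(a >= b for a, b in zip(dists, dists[1:]))  # monotonically, for this sequence
```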

Topological space

In some sense the most abstract spaces in which limits can be defined are topological spaces. If X is a topological space with topology τ, and {an} is a sequence in X, then the limit (when it exists) of the sequence is a point a ∈ X such that, given an (open) neighborhood U of a, there exists an N such that for every n > N,

    an ∈ U

is satisfied. In this case, the limit (if it exists) may not be unique. However, it must be unique if X is a Hausdorff space.

Function space

This section deals with the idea of limits of sequences of functions, not to be confused with the idea of limits of functions, discussed below.

The field of functional analysis partly seeks to identify useful notions of convergence on function spaces. For example, consider the space of functions from a generic set E to ℝ. Given a sequence of functions {fn} such that each is a function fn : E → ℝ, suppose that there exists a function f such that for each x ∈ E,

    fn(x) → f(x) as n → ∞.

Then the sequence {fn} is said to converge pointwise to f. However, such sequences can exhibit unexpected behavior. For example, it is possible to construct a sequence of continuous functions which has a discontinuous pointwise limit.

Another notion of convergence is uniform convergence. The uniform distance between two functions f, g is the maximum difference between the two functions as the argument x is varied. That is,

    d(f, g) = max_{x ∈ E} |f(x) − g(x)|.

Then the sequence {fn} is said to uniformly converge or have a uniform limit f if fn → f with respect to this distance. The uniform limit has "nicer" properties than the pointwise limit. For example, the uniform limit of a sequence of continuous functions is continuous.
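
The classic family fn(x) = xⁿ on [0, 1] illustrates the gap between the two notions; the grid-based sup-distance below is a numerical sketch added for illustration, not an exact supremum.

```python
def f_n(n, x):
    # the classic family f_n(x) = x^n on [0, 1]
    return x ** n

pointwise_limit = lambda x: 1.0 if x == 1.0 else 0.0   # discontinuous at x = 1

def uniform_distance(n, grid):
    # grid approximation of d(f_n, f) = max over x of |f_n(x) - f(x)|
    return max(abs(f_n(n, x) - pointwise_limit(x)) for x in grid)

grid = [i / 1000 for i in range(1001)]

assert f_n(200, 0.5) < 1e-9              # pointwise: x^n -> 0 for each fixed x < 1
assert uniform_distance(50, grid) > 0.5  # but the sup-distance stays large for every n
```

Each continuous fn converges pointwise to a discontinuous function, so the convergence cannot be uniform.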

Many different notions of convergence can be defined on function spaces. This is sometimes dependent on the regularity of the space. Prominent examples of function spaces with some notion of convergence are Lp spaces and Sobolev spaces.

In functions

A function f(x) for which the limit at infinity is L. For any arbitrary distance ε, there must be a value S such that the function stays within L ± ε for all x > S.

Suppose f is a real-valued function and c is a real number. Intuitively speaking, the expression

    lim_{x → c} f(x) = L

means that f(x) can be made to be as close to L as desired, by making x sufficiently close to c. In that case, the above equation can be read as "the limit of f of x, as x approaches c, is L".

Formally, the definition of the "limit of f(x) as x approaches c" is given as follows. The limit is a real number L so that, given an arbitrary real number ε > 0 (thought of as the "error"), there is a δ > 0 such that, for any x satisfying 0 < |x − c| < δ, it holds that |f(x) − L| < ε. This is known as the (ε, δ)-definition of limit.

The inequality 0 < |x − c| is used to exclude c from the set of points under consideration, but some authors do not include this in their definition of limits, replacing 0 < |x − c| < δ with simply |x − c| < δ. This replacement is equivalent to additionally requiring that f be continuous at c.

It can be proven that there is an equivalent definition which makes manifest the connection between limits of sequences and limits of functions. The equivalent definition is given as follows. First observe that for every sequence {xn} in the domain of f, there is an associated sequence {f(xn)}, the image of the sequence under f. The limit is a real number L so that, for all sequences xn → c, the associated sequence f(xn) → L.

One-sided limit

It is possible to define the notion of having a "left-handed" limit ("from below"), and a notion of a "right-handed" limit ("from above"). These need not agree. An example is given by the positive indicator function, f : ℝ → ℝ, defined such that f(x) = 0 if x < 0, and f(x) = 1 if x ≥ 0. At x = 0, the function has a "left-handed limit" of 0, a "right-handed limit" of 1, and its limit does not exist. Symbolically, this can be stated as, for this example, lim_{x → 0⁻} f(x) = 0 and lim_{x → 0⁺} f(x) = 1, and from this it can be deduced that lim_{x → 0} f(x) doesn't exist, because lim_{x → 0⁻} f(x) ≠ lim_{x → 0⁺} f(x).
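
The disagreement of the two one-sided limits can be seen numerically; a brief sketch:

```python
def f(x):
    # positive indicator: 0 for x < 0, 1 for x >= 0
    return 0.0 if x < 0 else 1.0

left = [f(-1 / 10 ** k) for k in range(1, 8)]    # approach 0 from below
right = [f(1 / 10 ** k) for k in range(1, 8)]    # approach 0 from above

assert all(v == 0.0 for v in left)    # left-handed limit is 0
assert all(v == 1.0 for v in right)   # right-handed limit is 1
# since the one-sided limits disagree, the two-sided limit does not exist
```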

Infinity in limits of functions

It is possible to define the notion of "tending to infinity" in the domain of f,

    lim_{x → +∞} f(x) = L.

In this expression, the infinity is considered to be signed: either +∞ or −∞. The "limit of f as x tends to positive infinity" is defined as follows. It is a real number L such that, given any real ε > 0, there exists an M > 0 so that if x > M, then |f(x) − L| < ε. Equivalently, for any sequence xn → +∞, we have f(xn) → L.

It is also possible to define the notion of "tending to infinity" in the value of f,

    lim_{x → c} f(x) = ∞.

The definition is given as follows. Given any real number M > 0, there is a δ > 0 so that for 0 < |x − c| < δ, the absolute value of the function satisfies |f(x)| > M. Equivalently, for any sequence xn → c, the sequence f(xn) → ∞.

Nonstandard analysis

In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence (an) can be expressed as the standard part of the value aH of the natural extension of the sequence at an infinite hypernatural index n = H. Thus,

    lim_{n → ∞} an = st(aH).

Here, the standard part function "st" rounds off each finite hyperreal number to the nearest real number (the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal a = [an] represented in the ultrapower construction by a Cauchy sequence (an), is simply the limit of that sequence:

    st([an]) = lim_{n → ∞} an.

In this sense, taking the limit and taking the standard part are equivalent procedures.

Limit sets

Limit set of a sequence

Let {an} be a sequence in a topological space X. For concreteness, X can be thought of as ℝ, but the definitions hold more generally. The limit set is the set of points such that if there is a convergent subsequence {ank} with ank → a, then a belongs to the limit set. In this context, such an a is sometimes called a limit point.

A use of this notion is to characterize the "long-term behavior" of oscillatory sequences. For example, consider the sequence an = (−1)^n. Starting from n = 1, the first few terms of this sequence are −1, +1, −1, +1, …. It can be checked that it is oscillatory, so it has no limit, but has limit points {−1, +1}.

Limit set of a trajectory

This notion is used in dynamical systems, to study limits of trajectories. Defining a trajectory to be a function γ : ℝ → X, the point γ(t) is thought of as the "position" of the trajectory at "time" t. The limit set of a trajectory is defined as follows. To any sequence of increasing times {tn}, there is an associated sequence of positions {xn} = {γ(tn)}. If x is a limit point of the sequence {xn} for some sequence of increasing times, then x belongs to the limit set of the trajectory.

Technically, this is the ω-limit set. The corresponding limit set for sequences of decreasing times is called the α-limit set.

An illustrative example is the circle trajectory γ(t) = (cos(t), sin(t)). This has no unique limit, but for each θ ∈ ℝ, the point (cos(θ), sin(θ)) is a limit point, given by the sequence of times tn = θ + 2πn. But the limit points need not be attained on the trajectory. The trajectory γ(t) = (t/(1 + t))·(cos(t), sin(t)) also has the unit circle as its limit set.

Uses

Limits are used to define a number of important concepts in analysis.

Series

A particular expression of interest which is formalized as the limit of a sequence is the sum of an infinite series. These are "infinite sums" of real numbers, generally written as

    Σ_{n=1}^{∞} an.

This is defined through limits as follows: given a sequence of real numbers {an}, the sequence of partial sums is defined by

    sn = Σ_{i=1}^{n} ai.

If the limit of the sequence {sn} exists, the value of the expression Σ_{n=1}^{∞} an is defined to be the limit. Otherwise, the series is said to be divergent.

A classic example is the Basel problem, where an = 1/n². Then

    Σ_{n=1}^{∞} 1/n² = π²/6.
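
Computing partial sums makes the definition concrete; a numerical sketch added for illustration:

```python
import math

def partial_sum(N):
    # s_N = 1/1^2 + 1/2^2 + ... + 1/N^2
    return sum(1 / n**2 for n in range(1, N + 1))

target = math.pi ** 2 / 6

assert abs(partial_sum(10_000) - target) < 1e-3   # partial sums approach pi^2/6
assert abs(partial_sum(100) - target) > abs(partial_sum(10_000) - target)
```

The tail of this series is roughly 1/N, so convergence is slow but plainly visible.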

However, while for sequences there is essentially a unique notion of convergence, for series there are different notions of convergence. This is due to the fact that the expression Σ_{n=1}^{∞} an does not discriminate between different orderings of the sequence {an}, while the convergence properties of the sequence of partial sums can depend on the ordering of the sequence.

A series which converges for all orderings is called unconditionally convergent. It can be proven to be equivalent to absolute convergence. This is defined as follows. A series is absolutely convergent if Σ_{n=1}^{∞} |an| is well defined. Furthermore, all possible orderings give the same value.

Otherwise, the series is conditionally convergent. A surprising result for conditionally convergent series is the Riemann series theorem: depending on the ordering, the partial sums can be made to converge to any real number, as well as to +∞ or −∞.
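
The rearrangement idea can be sketched with the alternating harmonic series 1 − 1/2 + 1/3 − 1/4 + …, which sums to ln 2 ≈ 0.693 in its usual order; the greedy strategy below (a sketch of the standard proof idea, with an arbitrarily chosen target of 1.5) steers its partial sums elsewhere.

```python
def rearranged_sum(target, steps=100_000):
    # Greedy rearrangement of 1 - 1/2 + 1/3 - 1/4 + ...:
    # while below `target`, add the next unused positive term 1/odd;
    # otherwise subtract the next unused negative term 1/even.
    next_odd, next_even = 1, 2
    total = 0.0
    for _ in range(steps):
        if total < target:
            total += 1.0 / next_odd
            next_odd += 2
        else:
            total -= 1.0 / next_even
            next_even += 2
    return total

assert abs(rearranged_sum(1.5) - 1.5) < 0.01   # steered to 1.5 instead of ln 2
```

Because the positive and negative parts each diverge while individual terms shrink to 0, this greedy scheme can hit any prescribed target.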

Power series

A useful application of the theory of sums of series is for power series. These are sums of series of the form

    f(z) = Σ_{n=0}^{∞} cn zⁿ.

Often z is thought of as a complex number, and a suitable notion of convergence of complex sequences is needed. The set of values of z for which the series converges is (apart from possible boundary points) a disk, whose radius is known as the radius of convergence.

Continuity of a function at a point

The definition of continuity at a point is given through limits.

The above definition of a limit is true even if f(c) ≠ L. Indeed, the function f need not even be defined at c. However, if f(c) is defined and is equal to L, then the function is said to be continuous at the point c.

Equivalently, the function is continuous at c if f(x) → f(c) as x → c, or, in terms of sequences, whenever xn → c, then f(xn) → f(c).

An example of a limit where f is not defined at c is given below.

Consider the function

    f(x) = (x² − 1)/(x − 1).

Then f(1) is not defined (see Indeterminate form), yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2:[12]

    x      0.9     0.99    0.999   1.0         1.001   1.01    1.1
    f(x)   1.900   1.990   1.999   undefined   2.001   2.010   2.100

Thus, f(x) can be made arbitrarily close to the limit of 2 just by making x sufficiently close to 1.

In other words,

    lim_{x → 1} (x² − 1)/(x − 1) = 2.

This can also be calculated algebraically, as (x² − 1)/(x − 1) = ((x + 1)(x − 1))/(x − 1) = x + 1 for all real numbers x ≠ 1.

Now, since x + 1 is continuous in x at 1, we can plug in 1 for x, leading to the equation

    lim_{x → 1} (x² − 1)/(x − 1) = 1 + 1 = 2.
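
The table of sample values can be reproduced numerically; a brief sketch:

```python
def f(x):
    # undefined at x = 1 (the expression gives 0/0 there); equals x + 1 elsewhere
    return (x**2 - 1) / (x - 1)

samples = [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]
values = [round(f(x), 3) for x in samples]

assert values == [1.9, 1.99, 1.999, 2.001, 2.01, 2.1]
assert abs(f(1 + 1e-9) - 2) < 1e-6   # arbitrarily close to 2 near x = 1
```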

In addition to limits at finite values, functions can also have limits at infinity. For example, consider the function

    f(x) = (2x − 1)/x,

where:

  • f(100) = 1.9900
  • f(1000) = 1.9990
  • f(10000) = 1.9999

As x becomes extremely large, the value of f(x) approaches 2, and the value of f(x) can be made as close to 2 as one could wish by making x sufficiently large. So in this case, the limit of f(x) as x approaches infinity is 2, or in mathematical notation,

    lim_{x → ∞} (2x − 1)/x = 2.

Continuous functions

An important class of functions when considering limits are continuous functions. These are precisely those functions which preserve limits, in the sense that if f is a continuous function, then whenever an → a in the domain of f, the limit f(an) exists and furthermore equals f(a).

In the most general setting of topological spaces, a short proof is given below:

Let f : X → Y be a continuous function between topological spaces X and Y. By definition, for each open set V in Y, the preimage f⁻¹(V) is open in X.

Now suppose an → a is a sequence with limit a in X. Then f(an) is a sequence in Y, and f(a) is some point.

Choose a neighborhood V of f(a). Then f⁻¹(V) is an open set (by continuity of f) which in particular contains a, and therefore f⁻¹(V) is a neighborhood of a. By the convergence of an to a, there exists an N such that for n > N, we have an ∈ f⁻¹(V).

Then applying f to both sides gives that, for the same N, for each n > N we have f(an) ∈ V. Originally V was an arbitrary neighborhood of f(a), so f(an) → f(a). This concludes the proof.

In real analysis, for the more concrete case of real-valued functions defined on a subset E ⊂ ℝ, that is, f : E → ℝ, a continuous function may also be defined as a function which is continuous at every point of its domain.

Limit points

In topology, limits are used to define limit points of a subset of a topological space, which in turn give a useful characterization of closed sets.

In a topological space X, consider a subset S. A point a is called a limit point of S if there is a sequence in S \ {a} such that an → a.

The reason why an is required to lie in S \ {a} rather than just S is illustrated by the following example. Take X = ℝ and S = {0}. Then 0 ∈ S, and therefore 0 is the limit of the constant sequence 0, 0, …. But 0 is not a limit point of S.

A closed set, which is defined to be the complement of an open set, is equivalently any set which contains all its limit points.

Derivative

The derivative is defined formally as a limit. In the scope of real analysis, the derivative is first defined for real functions f defined on a subset E ⊂ ℝ. The derivative at x ∈ E is defined as follows. If the limit of

    (f(x + h) − f(x))/h

as h → 0 exists, then the derivative at x is this limit.

Equivalently, it is the limit as y → x of

    (f(y) − f(x))/(y − x).

If the derivative exists, it is commonly denoted by f′(x).
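
The limit of difference quotients can be watched numerically; a sketch using f(x) = x², whose derivative at 3 is 6:

```python
def difference_quotient(f, x, h):
    # (f(x + h) - f(x)) / h: the limit of this as h -> 0 is the derivative
    return (f(x + h) - f(x)) / h

square = lambda t: t ** 2   # derivative at x is 2x

quotients = [difference_quotient(square, 3.0, 10.0 ** -k) for k in range(1, 7)]
errors = [abs(q - 6.0) for q in quotients]

assert errors[-1] < 1e-4                               # approaching f'(3) = 6
assert all(a > b for a, b in zip(errors, errors[1:]))  # error shrinks with h
```

(For very small h, floating-point cancellation eventually dominates; the range of h above stops before that.)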

Properties

Sequences of real numbers

For sequences of real numbers, a number of properties can be proven. Suppose {an} and {bn} are two sequences converging to a and b respectively.

  • Sum of limits is equal to limit of sum:

    an + bn → a + b.

  • Product of limits is equal to limit of product:

    an · bn → a · b.

  • Inverse of limit is equal to limit of inverse (as long as a ≠ 0):

    1/an → 1/a.

Equivalently, the function f(x) = 1/x is continuous about nonzero x.

Cauchy sequences

A property of convergent sequences of real numbers is that they are Cauchy sequences. The definition of a Cauchy sequence is that for every real number ε > 0, there is an N such that whenever m, n > N,

    |am − an| < ε.

Informally, for any arbitrarily small error ε, it is possible to find an interval of diameter ε such that eventually the sequence is contained within the interval.

Cauchy sequences are closely related to convergent sequences. In fact, for sequences of real numbers they are equivalent: any Cauchy sequence is convergent.

In general metric spaces, it continues to hold that convergent sequences are also Cauchy. But the converse is not true: not every Cauchy sequence is convergent in a general metric space. A classic counterexample is the rational numbers, ℚ, with the usual distance. The sequence of decimal approximations to √2, truncated at the nth decimal place, is a Cauchy sequence, but does not converge in ℚ.
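
The counterexample can be sketched with exact rational arithmetic (the truncation helper below is an illustration added here):

```python
from fractions import Fraction
import math

def sqrt2_truncated(n):
    # the decimal expansion of sqrt(2), truncated at the nth place, as a rational
    scaled = math.isqrt(2 * 10 ** (2 * n))   # floor(sqrt(2) * 10^n)
    return Fraction(scaled, 10 ** n)

a = [sqrt2_truncated(n) for n in range(1, 12)]

# Cauchy: successive terms get arbitrarily close to one another ...
assert abs(a[10] - a[9]) < Fraction(1, 10 ** 9)
# ... but no rational can be the limit, since no term squares to exactly 2
assert all(x ** 2 != 2 for x in a)
```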

A metric space in which every Cauchy sequence is also convergent, that is, Cauchy sequences are equivalent to convergent sequences, is known as a complete metric space.

One reason Cauchy sequences can be "easier to work with" than convergent sequences is that they are a property of the sequence {an} alone, while convergent sequences require not just the sequence {an} but also the limit of the sequence L.

Order of convergence

Beyond whether or not a sequence converges to a limit L, it is possible to describe how fast the sequence converges to the limit. One way to quantify this is using the order of convergence of a sequence.

A formal definition of order of convergence can be stated as follows. Suppose {an} is a sequence of real numbers which is convergent with limit L, and furthermore an ≠ L for all n. If positive constants λ and α exist such that

    lim_{n → ∞} |a_{n+1} − L| / |an − L|^α = λ,

then {an} is said to converge to L with order of convergence α. The constant λ is known as the asymptotic error constant.
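
A standard illustration (a sketch chosen here, not from the article) is Newton's method for √2, which converges with order α = 2: the ratio of each error to the square of the previous error settles near a constant.

```python
import math

# Newton's iteration a_{n+1} = (a_n + 2/a_n)/2 converges to sqrt(2) with order 2.
L = math.sqrt(2)
a = [3.0]
for _ in range(5):
    a.append((a[-1] + 2 / a[-1]) / 2)

errors = [abs(x - L) for x in a]
# with alpha = 2, |a_{n+1} - L| / |a_n - L|^2 should level off near the
# asymptotic error constant, here lambda = 1/(2*sqrt(2))
ratios = [errors[n + 1] / errors[n] ** 2 for n in range(1, 4)]

assert max(ratios) / min(ratios) < 1.5        # ratio is roughly constant
assert abs(ratios[-1] - 1 / (2 * L)) < 0.02   # close to the predicted constant
```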

Order of convergence is used, for example, in the field of numerical analysis, in error analysis.

Computability

Limits can be difficult to compute. There exist limit expressions whose modulus of convergence is undecidable. In recursion theory, the limit lemma proves that it is possible to encode undecidable problems using limits.

There are several theorems or tests that indicate whether the limit exists. These are known as convergence tests. Examples include the ratio test and the squeeze theorem. However, they may not tell how to compute the limit.

Object lifetime

From Wikipedia, the free encyclopedia

In object-oriented programming (OOP), the object lifetime (or life cycle) of an object is the time between an object's creation and its destruction. Rules for object lifetime vary significantly between languages, in some cases between implementations of a given language, and lifetime of a particular object may vary from one run of the program to another.

In some cases, object lifetime coincides with variable lifetime of a variable with that object as value (both for static variables and automatic variables), but in general, object lifetime is not tied to the lifetime of any one variable. In many cases – and by default in many object-oriented languages, particularly those that use garbage collection (GC) – objects are allocated on the heap, and object lifetime is not determined by the lifetime of a given variable: the value of a variable holding an object actually corresponds to a reference to the object, not the object itself, and destruction of the variable just destroys the reference, not the underlying object.

Overview

While the basic idea of object lifetime is simple – an object is created, used, then destroyed – details vary substantially between languages, and within implementations of a given language, and are intimately tied to how memory management is implemented. Further, many fine distinctions are drawn between the steps, and between language-level concepts and implementation-level concepts. Terminology is relatively standard, but which steps correspond to a given term varies significantly between languages.

Terms generally come in antonym pairs, one for a creation concept, one for the corresponding destruction concept, like initialize/finalize or constructor/destructor. The creation/destruction pair is also known as initiation/termination, among other terms. The terms allocation and deallocation or freeing are also used, by analogy with memory management, though object creation and destruction can involve significantly more than simply memory allocation and deallocation, and allocation/deallocation are more properly considered steps in creation and destruction, respectively.

Determinism

A major distinction is whether an object's lifetime is deterministic or non-deterministic. This varies by language, and within language varies with the memory allocation of an object; object lifetime may be distinct from variable lifetime.

Objects with static memory allocation, notably objects stored in static variables, and class or module objects (if classes or modules are themselves objects, and statically allocated), have a subtle non-determinism in many languages: while their lifetime appears to coincide with the run time of the program, the order of creation and destruction – which static object is created first, which second, etc. – is generally nondeterministic.

For objects with automatic memory allocation or dynamic memory allocation, object creation generally happens deterministically, either explicitly when an object is explicitly created (such as via new in C++ or Java), or implicitly at the start of variable lifetime, particularly when the scope of an automatic variable is entered, such as at declaration. Object destruction varies, however – in some languages, notably C++, automatic and dynamic objects are destroyed at deterministic times, such as scope exit, explicit destruction (via manual memory management), or reference count reaching zero; while in other languages, such as C#, Java, and Python, these objects are destroyed at non-deterministic times, depending on the garbage collector, and object resurrection may occur during destruction, extending the lifetime.
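
As a concrete sketch (the behavior shown is specific to CPython's reference counting; other implementations may finalize later), the following logs when a heap object bound to an automatic variable is finalized:

```python
log = []

class Tracked:
    # logs creation and finalization so object lifetime can be observed
    def __init__(self, name):
        self.name = name
        log.append(f"created {name}")

    def __del__(self):
        # finalizer: *when* this runs is implementation-dependent; CPython's
        # reference counting usually runs it as soon as the last reference dies
        log.append(f"finalized {self.name}")

def scope():
    t = Tracked("a")   # heap object, bound to an automatic variable
    # returning drops the last reference; it is the reference disappearing,
    # not scope exit per se, that lets the object be destroyed
    return None

scope()
assert log == ["created a", "finalized a"]   # prompt under CPython refcounting
```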

In garbage-collected languages, objects are generally dynamically allocated (on the heap) even if they are initially bound to an automatic variable, unlike automatic variables with primitive values, which are typically automatically allocated (on the stack or in a register). This allows the object to be returned from a function ("escape") without being destroyed. However, in some cases a compiler optimization is possible, namely performing escape analysis and proving that escape is not possible, and thus the object can be allocated on the stack; this is significant in Java. In this case object destruction will occur promptly – possibly even during the variable's lifetime (before the end of its scope), if it is unreachable.

A complex case is the use of an object pool, where objects may be created ahead of time or reused, and thus apparent creation and destruction may not correspond to actual creation and destruction of an object, only (re)initialization for creation and finalization for destruction. In this case both creation and destruction may be nondeterministic.

Steps

Object creation can be broken down into two operations: memory allocation and initialization, where initialization both includes assigning values to object fields and possibly running arbitrary other code. These are implementation-level concepts, roughly analogous to the distinction between declaration and definition of a variable, though these latter are language-level distinctions. For an object that is tied to a variable, declaration may be compiled to memory allocation (reserving space for the object), and definition to initialization (assigning values), but declarations may also be for compiler use only (such as name resolution), not directly corresponding to compiled code.

Analogously, object destruction can be broken down into two operations, in the opposite order: finalization and memory deallocation. These do not have analogous language-level concepts for variables: variable lifetime ends implicitly (for automatic variables, on stack unwind; for static variables, on program termination), and at this time (or later, depending on implementation) memory is deallocated, but no finalization is done in general. However, when an object's lifetime is tied to a variable's lifetime, the end of the variable's lifetime causes finalization of the object; this is a standard paradigm in C++.

Together these yield four implementation-level steps:

allocation, initialization, finalization, deallocation

These steps may be done automatically by the language runtime, interpreter, or virtual machine, or may be manually specified by the programmer in a subroutine, concretely via methods – the frequency of this varies significantly between steps and languages. Initialization is very commonly programmer-specified in class-based languages, while in strict prototype-based languages initialization is automatically done by copying. Finalization is also very common in languages with deterministic destruction, notably C++, but much less common in garbage-collected languages. Allocation is more rarely specified, and deallocation generally cannot be specified.
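
In Python, some of these steps are visible at the language level: `__new__` performs (and can customize) allocation, `__init__` performs initialization, and deallocation is left to the runtime. A small sketch of the ordering:

```python
steps = []

class Point:
    def __new__(cls, *args):
        steps.append("allocate")     # allocation: obtain the raw instance
        return super().__new__(cls)

    def __init__(self, x, y):
        steps.append("initialize")   # initialization: assign the fields
        self.x, self.y = x, y

p = Point(1, 2)
assert steps == ["allocate", "initialize"]   # allocation precedes initialization
assert (p.x, p.y) == (1, 2)
```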

Status during creation and destruction

An important subtlety is the status of an object during creation or destruction, and handling cases where errors occur or exceptions are raised, such as if creation or destruction fail. Strictly speaking, an object's lifetime begins when allocation completes and ends when deallocation starts. Thus during initialization and finalization an object is alive, but may not be in a consistent state – ensuring class invariants is a key part of initialization – and the period from when initialization completes to when finalization starts is when the object is both alive and expected to be in a consistent state.

If creation or destruction fail, error reporting (often by raising an exception) can be complicated: the object or related objects may be in an inconsistent state, and in the case of destruction – which generally happens implicitly, and thus in an unspecified environment – it may be difficult to handle errors. The opposite issue – incoming exceptions, not outgoing exceptions – is whether creation or destruction should behave differently if they occur during exception handling, when different behavior may be desired.

Another subtlety is when creation and destruction happen for static variables, whose lifespan coincides with the run time of the program – do creation and destruction happen during regular program execution, or in special phases before and after regular execution – and how objects are destroyed at program termination, when the program may not be in a usual or consistent state. This is particularly an issue for garbage-collected languages, as they may have a lot of garbage at program termination.

Class-based programming

In class-based programming, object creation is also known as instantiation (creating an instance of a class), and creation and destruction can be controlled via methods known as a constructor and destructor, or an initializer and finalizer. Creation and destruction are thus also known as construction and destruction, and when these methods are called an object is said to be constructed or destructed (not "destroyed") – respectively, initialized or finalized when those methods are called.

The relationship between these methods can be complicated, and a language may have both constructors and initializers (like Python), or both destructors and finalizers (like C++/CLI), or the terms "destructor" and "finalizer" may refer to language-level construct versus implementation (as in C# versus CLI).

A key distinction is that constructors are class methods, as there is no object (class instance) available until the object is created, but the other methods (destructors, initializers, and finalizers) are instance methods, as an object has been created. Further, constructors and initializers may take arguments, while destructors and finalizers generally do not, as they are usually called implicitly.

In common usage, a constructor is a method called explicitly by user code to create an object, while "destructor" is the subroutine called (usually implicitly, but sometimes explicitly) on object destruction in languages with deterministic object lifetimes – the archetype is C++ – and "finalizer" is the subroutine called implicitly by the garbage collector on object destruction in languages with non-deterministic object lifetime – the archetype is Java.

The steps during finalization vary significantly depending on memory management: in manual memory management (as in C++, or manual reference counting), references need to be explicitly destroyed by the programmer (references cleared, reference counts decremented); in automatic reference counting, this also happens during finalization, but is automated (as in Python, when it occurs after programmer-specified finalizers have been called); and in tracing garbage collection this is not necessary. Thus in automatic reference counting, programmer-specified finalizers are often short or absent, but significant work may still be done, while in tracing garbage collectors finalization is often unnecessary.

Resource management

In languages where objects have deterministic lifetimes, object lifetime may be used for piggybacking resource management: this is called the Resource Acquisition Is Initialization (RAII) idiom: resources are acquired during initialization, and released during finalization. In languages where objects have non-deterministic lifetimes, notably due to garbage collection, the management of memory is generally kept separate from management of other resources.

Object creation

In the typical case, the process is as follows:

  • calculate the size of an object – the size is mostly the same as that of the class but can vary. When the object in question is not derived from a class, but from a prototype instead, the size of an object is usually that of the internal data structure (a hash for instance) that holds its slots.
  • allocation – allocating memory space for the object, plus room for later growth if that is possible to know in advance
  • binding methods – this is usually either left to the class of the object, or is resolved at dispatch time, but nevertheless it is possible that some object models bind methods at creation time.
  • calling an initializing code (namely, constructor) of superclass
  • calling an initializing code of class being created
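The allocation and initialization steps above can be sketched in Python, where `__new__` performs allocation and `__init__` runs the initialization code, superclass first (the class names are illustrative):

```python
class Base:
    def __init__(self):
        self.base_ready = True      # superclass initialization

class Derived(Base):
    def __new__(cls):
        # Allocation step: __new__ creates the raw instance; Python computes
        # the required size and allocates the underlying structure itself.
        return super().__new__(cls)

    def __init__(self):
        super().__init__()          # superclass initializer runs first
        self.derived_ready = True   # then this class's own initialization

d = Derived()
```

After construction, `d` carries state set by both initializers, in superclass-before-subclass order.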

These tasks can be completed all at once, but are sometimes left unfinished, and the order in which they run can vary, which can cause strange behaviors. For example, in multiple inheritance, which initialization code should be called first is a difficult question to answer. However, superclass constructors should be called before subclass constructors.
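Python, for example, answers the multiple-inheritance ordering question with a method resolution order (MRO); a sketch with illustrative classes:

```python
order = []

class A:
    def __init__(self):
        super().__init__()
        order.append("A")

class B:
    def __init__(self):
        super().__init__()
        order.append("B")

class C(A, B):
    def __init__(self):
        super().__init__()   # delegates along the MRO: C -> A -> B -> object
        order.append("C")

C()
# Each class appends only after its super() call returns, so superclass
# initialization completes before subclass initialization: B, then A, then C.
```

The linearized MRO gives every class a single, well-defined place in the initialization order, even with diamond-shaped inheritance.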

Creating each object as an element of an array is a complex problem in itself. Some languages (e.g. C++) leave this to programmers.

Handling exceptions in the midst of object creation is particularly problematic, because the implementation of exception throwing usually relies on valid object states. For instance, there is no way to allocate space for an exception object if the allocation of another object has already failed due to a lack of free memory. Because of this, implementations of OO languages should provide mechanisms that allow exceptions to be raised even when resources are in short supply, and programmers or the type system should ensure that their code is exception-safe. Propagating an exception is more likely to free resources than to allocate them. Nevertheless, in object-oriented programming, object construction may fail, because constructing an object must establish the class invariants, which are often not valid for every combination of constructor arguments. Thus, constructors can raise exceptions.
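For instance, a constructor can refuse to produce an object that would violate its class invariant (the `Fraction` class here is illustrative):

```python
class Fraction:
    def __init__(self, numerator: int, denominator: int):
        # The constructor establishes the class invariant: denominator != 0.
        if denominator == 0:
            raise ValueError("denominator must be nonzero")
        self.numerator = numerator
        self.denominator = denominator

half = Fraction(1, 2)       # invariant holds: construction succeeds
try:
    Fraction(1, 0)          # invariant would be violated: no object is created
except ValueError as err:
    failure = str(err)
```

When the exception propagates out of `__init__`, the caller never receives a reference, so no object exists in an invalid state.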

The abstract factory pattern is a way to decouple a particular implementation of an object from code for the creation of such an object.
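A minimal sketch of the pattern in Python (all class names are illustrative):

```python
from abc import ABC, abstractmethod

class Button(ABC):
    @abstractmethod
    def label(self) -> str: ...

class DarkButton(Button):
    def label(self) -> str:
        return "dark button"

class LightButton(Button):
    def label(self) -> str:
        return "light button"

class WidgetFactory(ABC):
    # Client code depends only on this interface, never on concrete classes.
    @abstractmethod
    def make_button(self) -> Button: ...

class DarkFactory(WidgetFactory):
    def make_button(self) -> Button:
        return DarkButton()

def build_ui(factory: WidgetFactory) -> str:
    # The client creates objects without naming their concrete types.
    return factory.make_button().label()

result = build_ui(DarkFactory())
```

Swapping in a different factory changes which concrete objects are created without touching `build_ui`.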

Creation methods

The way to create objects varies across languages. In some class-based languages, a special method known as a constructor is responsible for validating the state of an object. Just like ordinary methods, constructors can be overloaded so that an object can be created with different sets of attributes specified. The constructor is also the only place to set the state of immutable objects. A copy constructor is a constructor that takes a (single) parameter of an existing object of the same type as the constructor's class, and returns a copy of the object passed as a parameter.

Other programming languages, such as Objective-C, have class methods, which can include constructor-type methods, but are not restricted to merely instantiating objects.

C++ and Java have been criticized for not providing named constructors—a constructor must always have the same name as the class. This can be problematic if the programmer wants to provide two constructors with the same argument types, e.g., to create a point object either from Cartesian coordinates or from polar coordinates, both of which would be represented by two floating point numbers. Objective-C circumvents this problem, in that the programmer can create a Point class with initialization methods such as +newPointWithX:andY: and +newPointWithR:andTheta:. In C++, something similar can be done using static member functions.
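In Python, the same effect is commonly achieved with class methods acting as named constructors; a sketch with illustrative method names:

```python
import math

class Point:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    # "Named constructors": both take two floats, but the names disambiguate.
    @classmethod
    def from_cartesian(cls, x: float, y: float) -> "Point":
        return cls(x, y)

    @classmethod
    def from_polar(cls, r: float, theta: float) -> "Point":
        return cls(r * math.cos(theta), r * math.sin(theta))

p = Point.from_cartesian(3.0, 4.0)
q = Point.from_polar(5.0, 0.0)   # r=5 at angle 0 lands at (5.0, 0.0)
```

This mirrors the static-member-function workaround available in C++.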

A constructor can also refer to a function which is used to create a value of a tagged union, particularly in functional languages.
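In a functional language, each variant name of a tagged union is such a constructor; a rough Python analog uses one class per variant of a union type (the shape classes are illustrative):

```python
from dataclasses import dataclass
from typing import Union

# Each dataclass acts as the "constructor" for one variant of the tagged union.
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    w: float
    h: float

Shape = Union[Circle, Rect]

def area(s: Shape) -> float:
    # Case analysis on the variant tag, analogous to pattern matching.
    if isinstance(s, Circle):
        return 3.14159 * s.radius ** 2
    return s.w * s.h

rect_area = area(Rect(2.0, 3.0))
```

Calling `Rect(2.0, 3.0)` constructs a value of the `Rect` variant, just as applying a data constructor would in ML or Haskell.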

Object destruction

It is generally the case that after an object is used, it is removed from memory to make room for other programs or objects to take that object's place. However, if there is sufficient memory or a program has a short run time, object destruction may not occur, memory simply being deallocated at process termination. In some cases object destruction simply consists of deallocating the memory, particularly in garbage-collected languages, or if the "object" is actually a plain old data structure. In other cases some work is performed prior to deallocation, particularly destroying member objects (in manual memory management), or deleting references from the object to other objects to decrement reference counts (in reference counting). This may be automatic, or a special destruction method may be called on the object.

In class-based languages with deterministic object lifetime, notably C++, a destructor is a method called when an instance of a class is deleted, before the memory is deallocated. In C++, destructors differ from constructors in various ways: they cannot be overloaded, must have no arguments, need not maintain class invariants, and can cause program termination if they throw exceptions.

In garbage-collected languages, objects may be destroyed when they can no longer be reached by the running code. In class-based GCed languages, the analog of destructors are finalizers, which are called before an object is garbage-collected. These differ in running at an unpredictable time and in an unpredictable order, since garbage collection is unpredictable, and are significantly less used and less complex than C++ destructors. Examples of such languages include Java, Python, and Ruby.

Destroying an object will cause any references to the object to become invalid, and in manual memory management any existing references become dangling references. In garbage collection (both tracing garbage collection and reference counting), objects are only destroyed when there are no references to them; however, finalization may create new references to the object, and to prevent dangling references, the object is then resurrected so those references remain valid.
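Object resurrection can be sketched in CPython, where a finalizer that stores `self` creates a fresh reference to the dying object (the `Phoenix` class is illustrative):

```python
import gc

resurrected = []

class Phoenix:
    def __del__(self):
        # The finalizer creates a new reference to the dying object,
        # resurrecting it instead of letting it be destroyed.
        resurrected.append(self)

p = Phoenix()
del p          # last ordinary reference dropped; the finalizer runs
gc.collect()
# The object survived: the reference created during finalization keeps it
# alive, and (since PEP 442) its finalizer will not run a second time.
```

The object remains fully usable through the new reference, which is exactly why finalizers complicate the destruction model.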

Examples

C++

class Foo {
 public:
  // These are the prototype declarations of the constructors.
  Foo(int x);
  Foo(int x, int y);    // Overloaded Constructor.
  Foo(const Foo &old);  // Copy Constructor.
  ~Foo();               // Destructor.
};

Foo::Foo(int x) {
  // This is the implementation of
  // the one-argument constructor.
}

Foo::Foo(int x, int y) {
  // This is the implementation of
  // the two-argument constructor.
}

Foo::Foo(const Foo &old) {
  // This is the implementation of
  // the copy constructor.
}

Foo::~Foo() {
  // This is the implementation of the destructor.
}

int main() {
  Foo foo(14);       // Call first constructor.
  Foo foo2(12, 16);  // Call overloaded constructor.
  Foo foo3(foo);     // Call the copy constructor.

  // Destructors are called here automatically,
  // in reverse order of construction.
}

Java

class Foo
{
    public Foo(int x)
    {
        // This is the implementation of
        // the one-argument constructor
    }

    public Foo(int x, int y)
    {
        // This is the implementation of
        // the two-argument constructor
    }

    public Foo(Foo old)
    {
        // This is the implementation of
        // the copy constructor
    }

    public static void main(String[] args)
    {
        Foo foo = new Foo(14); // call first constructor
        Foo foo2 = new Foo(12, 16); // call overloaded constructor
        Foo foo3 = new Foo(foo); // call the copy constructor
        // garbage collection happens under the covers, and objects are destroyed
    }
}

C#

namespace ObjectLifeTime;

class Foo
{
    public Foo()
    {
        // This is the implementation of
        // default constructor.
    }

    public Foo(int x)
    {
        // This is the implementation of
        // the one-argument constructor.
    }
    public Foo(int x, int y)
    {
        // This is the implementation of
        // the two-argument constructor.
    }

    ~Foo()
    {
        // This is the implementation of
        // the finalizer (destructor syntax).
    }
 
    public Foo(Foo old)
    {
        // This is the implementation of
        // the copy constructor.
    }
 
    public static void Main(string[] args)
    {
        var defaultfoo = new Foo(); // Call default constructor
        var foo = new Foo(14); // Call first constructor
        var foo2 = new Foo(12, 16); // Call overloaded constructor
        var foo3 = new Foo(foo); // Call the copy constructor
    }
}

Objective-C

#import <objc/Object.h>
#include <math.h>

@interface Point : Object
{
   double x;
   double y;
}

//These are the class methods; we have declared two constructors
+ (Point *) newWithX: (double) x_val andY: (double) y_val;
+ (Point *) newWithR: (double) r_val andTheta: (double) theta_val;

//Instance methods
- (Point *) setFirstCoord: (double) new_val;
- (Point *) setSecondCoord: (double) new_val;

/* Since Point is a subclass of the generic Object
 * class, we already inherit generic allocation and
 * initialization methods, +alloc and -init. Our specific
 * constructors can be built from these inherited methods.
 */
@end
 
@implementation Point

- (Point *) setFirstCoord: (double) new_val
{
   x = new_val;
   return self;  //return self so that calls can be chained
}

- (Point *) setSecondCoord: (double) new_val
{
   y = new_val;
   return self;  //return self so that calls can be chained
}

+ (Point *) newWithX: (double) x_val andY: (double) y_val
{
   //Concisely written class method to automatically allocate and 
   //perform specific initialization.
   return [[[Point alloc] setFirstCoord:x_val] setSecondCoord:y_val]; 
}

+ (Point *) newWithR: (double) r_val andTheta: (double) theta_val
{
   //Convert the polar coordinates to Cartesian ones,
   //then reuse the constructor above.
   return [Point newWithX:r_val * cos(theta_val) andY:r_val * sin(theta_val)];
}

@end

int
main(void)
{
   //Constructs two points, p and q.
   Point *p = [Point newWithX:4.0 andY:5.0];
   Point *q = [Point newWithR:1.0 andTheta:2.28];

   //...program text....
   
   //We're finished with p, say, so free it.
   //If p had allocated more memory for itself, we might need to
   //override Object's free method in order to recursively
   //free p's memory. That is not the case here, so we can just
   [p free];

   //...more text...

   [q free];

   return 0;
}

Object Pascal

Related languages: Delphi, Free Pascal, Mac Pascal.

program Example;

type

  DimensionEnum =
    (
      deUnassigned,
      de2D,
      de3D,
      de4D
    );

  PointClass = class
  private
    Dimension: DimensionEnum;

  public
    X: Integer;
    Y: Integer;
    Z: Integer;
    T: Integer;

  public
    (* prototype of constructors *)

    constructor Create(); overload;
    constructor Create(AX, AY: Integer); overload;
    constructor Create(AX, AY, AZ: Integer); overload;
    constructor Create(AX, AY, AZ, ATime: Integer); overload;
    constructor CreateCopy(APoint: PointClass);

    (* prototype of destructors *)

    destructor Destroy; override;
  end;

constructor PointClass.Create();
begin
  // implementation of a generic, non argument constructor
  Self.Dimension := deUnassigned;
end;

constructor PointClass.Create(AX, AY: Integer);
begin
  // implementation of a two-argument constructor
  Self.X := AX;
  Self.Y := AY;

  Self.Dimension := de2D;
end;

constructor PointClass.Create(AX, AY, AZ: Integer);
begin
  // implementation of a three-argument constructor
  Self.X := AX;
  Self.Y := AY;
  Self.Z := AZ;

  Self.Dimension := de3D;
end;

constructor PointClass.Create(AX, AY, AZ, ATime: Integer);
begin
  // implementation of a four-argument constructor
  Self.X := AX;
  Self.Y := AY;
  Self.Z := AZ;
  Self.T := ATime;

  Self.Dimension := de4D;
end;

constructor PointClass.CreateCopy(APoint: PointClass);
begin
  // implementation of a "copy" constructor
  Self.X := APoint.X;
  Self.Y := APoint.Y;
  Self.Z := APoint.Z;
  Self.T := APoint.T;

  Self.Dimension := APoint.Dimension;
end;

destructor PointClass.Destroy;
begin
  // implementation of a generic, no-argument destructor
  Self.Dimension := deUnAssigned;
end;

var
  P: PointClass;

begin (* of program *)
  (* objects of class types are always allocated dynamically *)
  P := PointClass.Create(5, 7);

  (* do something with "P" *)

  (* Free calls the destructor and releases the memory *)
  P.Free;
end.  (* of program *)

Python

class Socket:
    def __init__(self, remote_host: str) -> None:
        ...  # connect to the remote host

    def send(self, data: str) -> None:
        ...  # send data

    def recv(self) -> str:
        ...  # receive data

    def close(self) -> None:
        ...  # close the socket

    def __del__(self) -> None:
        # __del__ is called when the object's reference count reaches zero
        self.close()

def f():
    socket = Socket("example.com")
    socket.send("test")
    return socket.recv()

The socket will be closed once all references to it are lost, after the "f" function runs and returns. In CPython, the reference count reaches zero at that point and __del__ runs immediately; other implementations may only close it at the next garbage-collection cycle.
