
Saturday, July 7, 2018

Quantum eraser experiment

From Wikipedia, the free encyclopedia

In quantum mechanics, the quantum eraser experiment is an interferometer experiment that demonstrates several fundamental aspects of quantum mechanics, including quantum entanglement and complementarity.

The double-slit quantum eraser experiment described in this article has three stages:[1]
  1. First, the experimenter reproduces the interference pattern of Young's double-slit experiment by shining photons at the double-slit interferometer and checking for an interference pattern at the detection screen.
  2. Next, the experimenter marks through which slit each photon went and demonstrates that thereafter the interference pattern is destroyed. This stage indicates that it is the existence of the "which-path" information that causes the destruction of the interference pattern.
  3. Third, the "which-path" information is "erased," whereupon the interference pattern is recovered. (Rather than removing or reversing any changes introduced into the photon or its path, these experiments typically produce another change that obscures the markings earlier produced.)
A key result is that it does not matter whether the eraser procedure is done before or after the photons arrive at the detection screen.[1][2]

Quantum erasure technology can be used to increase the resolution of advanced microscopes.[3]

Introduction

The quantum eraser experiment described in this article is a variation of Thomas Young's classic double-slit experiment. It establishes that when action is taken to determine which slit a photon has passed through, the photon cannot interfere with itself. When a stream of photons is marked in this way, then the interference fringes characteristic of the Young experiment will not be seen. The experiment described in this article is capable of creating situations in which a photon that has been "marked" to reveal through which slit it has passed can later be "unmarked." A photon that has been "marked" cannot interfere with itself and will not produce fringe patterns, but a photon that has been "marked" and then "unmarked" can thereafter interfere with itself and will cooperate in producing the fringes characteristic of Young's experiment.[1]

This experiment involves an apparatus with two main sections. After two entangled photons are created, each is directed into its own section of the apparatus. Anything done to learn the path of the entangled partner of the photon being examined in the double-slit part of the apparatus will influence the second photon, and vice versa. The advantage of manipulating the entangled partners of the photons in the double-slit part of the experimental apparatus is that experimenters can destroy or restore the interference pattern in the latter without changing anything in that part of the apparatus. Experimenters do so by manipulating the entangled photon, and they can do so before or after its partner has passed through the slits and other elements of the experimental apparatus between the photon emitter and the detection screen. So, under conditions where the double-slit part of the experiment has been set up to prevent the appearance of interference phenomena (because there is definitive "which path" information present), the quantum eraser can be used to effectively erase that information. In doing so, the experimenter restores interference without altering the double-slit part of the experimental apparatus.[1]

A variation of this experiment, delayed choice quantum eraser, allows the decision whether to measure or destroy the "which path" information to be delayed until after the entangled particle partner (the one going through the slits) has either interfered with itself or not.[4] In delayed-choice experiments, quantum effects can mimic an influence of future actions on past events. However, the temporal order of measurement actions is not relevant.[5]

The experiment

Figure 1. Crossed polarizations prevent interference fringes

First, a photon is shot through a specialized nonlinear optical device: a beta barium borate (BBO) crystal. This crystal converts the single photon into two entangled photons of lower frequency, a process known as spontaneous parametric down-conversion (SPDC). These entangled photons follow separate paths. One photon goes directly to a detector, while the second photon passes through the double-slit mask to a second detector. Both detectors are connected to a coincidence circuit, ensuring that only entangled photon pairs are counted. A stepper motor moves the second detector to scan across the target area, producing an intensity map. This configuration yields the familiar interference pattern.
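To make this first stage concrete, the short sketch below evaluates the ideal two-slit fringe pattern that such a coincidence scan would trace out, assuming equal illumination of both slits and the usual far-field (small-angle) approximation; the wavelength, slit separation and screen distance are illustrative values only, not the parameters of any particular experiment.

```python
import numpy as np

# Illustrative far-field model of the stage-1 coincidence scan (all values assumed):
# two slits separated by a, screen at distance d, photon wavelength lam.
lam = 702e-9      # wavelength of the down-converted photons (m), illustrative
a   = 0.2e-3      # slit separation (m), illustrative
d   = 1.0         # slit-to-screen distance (m), illustrative

x = np.linspace(-5e-3, 5e-3, 1001)          # detector positions scanned by the stepper motor
phase = 2 * np.pi * a * x / (lam * d)       # path-length phase difference between the slits
intensity = np.cos(phase / 2) ** 2          # ideal two-slit fringe pattern (unit peak)

# Each intensity value is proportional to the coincidence count rate expected
# at that detector position; printing a few samples stands in for the scan.
for xi, Ii in zip(x[::200], intensity[::200]):
    print(f"x = {xi:+.4e} m   relative counts = {Ii:.3f}")
```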

Figure 2. Introduction of polarizer in upper path restores interference fringes below

Next, a circular polarizer is placed in front of each slit in the double-slit mask, producing clockwise circular polarization in light passing through one slit, and counter-clockwise circular polarization in the other slit (see Figure 1). This polarization is measured at the detector, thus "marking" the photons and destroying the interference pattern (see Fresnel–Arago laws).

Finally, a linear polarizer is introduced in the path of the first photon of the entangled pair, giving this photon a diagonal polarization (see Figure 2). Entanglement ensures a complementary diagonal polarization in its partner, which passes through the double-slit mask. This alters the effect of the circular polarizers: each will produce a mix of clockwise and counter-clockwise polarized light. Thus the second detector can no longer determine which path was taken, and the interference fringes are restored.

A double slit with rotating polarizers can also be accounted for by considering the light to be a classical wave.[6] However, this experiment uses entangled photons, whose correlations cannot be reproduced by classical mechanics.

Non-locality

A very common misunderstanding about this experiment is that it may be used to communicate information instantaneously between two detectors. Imagine Alice at the first detector measuring either linear or circular polarization and instantaneously affecting the result of Bob's interference measurement. One could even conceive a situation in which Alice would switch from a circular polarizer to a linear polarizer on her detector long after Bob made his measurement: Bob's pattern would suddenly change from interference fringes to a smear, but in the past. Following this train of thought leads to an abundance of time paradoxes, like Bob measuring one pattern and telling Alice to switch her detector in the future to the polarizer that would cause the opposite pattern. So something must be wrong.

The flaw in this reasoning is that Bob always measures a smear, never an interference pattern, no matter what Alice does. Non-locality manifests itself in a subtler way. How? Let's say that the BBO crystal produces the following state:

\[ |\circlearrowright\rangle_{A}|\circlearrowleft\rangle_{B}+|\circlearrowleft\rangle_{A}|\circlearrowright\rangle_{B} \]

(Alice's photon has clockwise polarization and Bob's photon has anti-clockwise polarization) or (Bob's photon has clockwise polarization and Alice's photon has anti-clockwise polarization)

If Alice measures the circular polarization of her photon (say, with a circular polarizer in front of her detector), then every time she finds anti-clockwise polarization, Bob's corresponding photon is sure to have clockwise polarization:

\[ |\circlearrowleft\rangle_{A}|\circlearrowright\rangle_{B} \]

Since Bob has placed opposite circular polarization filters on the two slits, we know that these clockwise-polarized photons can only have passed through (let's say) the first slit. From that slit, they hit the screen according to the wave function:

\[ f_{1}(x)=\frac{1}{\sqrt{2\pi\sqrt{d^{2}+(x+a/2)^{2}}}}\,\exp\!\left[i\,\frac{2\pi}{\lambda}\sqrt{d^{2}+(x+a/2)^{2}}\right], \]

where a is the spacing between the slits, d is the distance from the slits to the screen, and x is the distance from the middle of the screen. The intensity of light on the screen (the photon count rate) will be proportional to the square of the amplitude of this wave, in other words

\[ I_{1}(x)\propto \frac{1}{d^{2}+(x+a/2)^{2}}. \]

Likewise, when Alice finds clockwise polarization, Bob receives an anti-clockwise polarized photon, which can only pass through the second slit and arrives at the screen with the wave function

\[ f_{2}(x)=\frac{1}{\sqrt{2\pi\sqrt{d^{2}+(x-a/2)^{2}}}}\,\exp\!\left[i\,\frac{2\pi}{\lambda}\sqrt{d^{2}+(x-a/2)^{2}}\right]. \]

Notice that the only difference is the sign of a/2, because the photon was emitted from the other slit. The pattern on the screen is another smear, but shifted by a relative to the first. Now, this point is important: if Alice never tells him directly, Bob never knows which polarization Alice measured, since both outcomes are produced in equal amounts by the crystal. So what Bob actually sees on his screen is the sum of the two intensities:

Figure 3. The two single-slit distributions and their sum when Alice measures her photons' polarization with a circular polarizer.
 
\[ I(x)=I_{1}(x)+I_{2}(x)\propto \frac{1}{d^{2}+(x+a/2)^{2}}+\frac{1}{d^{2}+(x-a/2)^{2}}. \]

The results of this experiment are summarized in Fig. 3. Bob can only distinguish the two peaks in his data after he has had access to Alice's results: for the set of photons where Alice measured anti-clockwise polarization, Bob's subset of photons is distributed according to I_1(x), and for the set of photons where Alice measured clockwise polarization, Bob's subset of photons is distributed according to I_2(x).
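As a numerical illustration of Fig. 3, the sketch below evaluates I_1(x), I_2(x) and their sum directly from the expressions above; the values of a and d are arbitrary and the overall normalization is ignored.

```python
import numpy as np

# Illustrative parameters (arbitrary units): slit separation a, slit-to-screen distance d.
a, d = 1.0, 5.0
x = np.linspace(-10, 10, 2001)

# Single-slit intensities from the wave functions above:
# I_1(x) ∝ 1 / (d^2 + (x + a/2)^2),  I_2(x) ∝ 1 / (d^2 + (x - a/2)^2)
I1 = 1.0 / (d**2 + (x + a / 2) ** 2)
I2 = 1.0 / (d**2 + (x - a / 2) ** 2)
I_total = I1 + I2   # what Bob sees without access to Alice's results: a structureless smear

print("peak of I1 near x =", x[np.argmax(I1)])        # close to -a/2
print("peak of I2 near x =", x[np.argmax(I2)])        # close to +a/2
print("single broad maximum of the sum at x =", x[np.argmax(I_total)])
```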

Next, let Alice use a linear polarizer instead of a circular one. The first thing to do is write down the system's wave-function in terms of linear polarization states:

\[
|\circlearrowright\rangle_{A}|\circlearrowleft\rangle_{B}+|\circlearrowleft\rangle_{A}|\circlearrowright\rangle_{B}
=\frac{1}{\sqrt{2}}\left(|H\rangle_{A}+i|V\rangle_{A}\right)|\circlearrowleft\rangle_{B}+\frac{1}{\sqrt{2}}\left(|H\rangle_{A}-i|V\rangle_{A}\right)|\circlearrowright\rangle_{B}
=\frac{1}{\sqrt{2}}\left(|\circlearrowright\rangle_{B}+|\circlearrowleft\rangle_{B}\right)|H\rangle_{A}+\frac{i}{\sqrt{2}}\left(|\circlearrowright\rangle_{B}-|\circlearrowleft\rangle_{B}\right)|V\rangle_{A}
\]

So say Alice measures a horizontally polarized photon. Then the wave function of Bob's photon is a superposition of clockwise and anti-clockwise polarizations, which means it can indeed pass through both slits! After travelling to the screen, the wave amplitude is

\[ h(x)=\frac{1}{\sqrt{2}}\left[f_{1}(x)+f_{2}(x)\right], \]

and the intensity is

\[ I_{H}(x)\propto |f_{1}(x)|^{2}+|f_{2}(x)|^{2}+f_{1}(x)f_{2}(x)^{*}+f_{1}(x)^{*}f_{2}(x)=|f_{1}(x)|^{2}+|f_{2}(x)|^{2}+2\,|f_{1}(x)|\,|f_{2}(x)|\cos\phi_{12}, \]

where φ_12 is the phase difference between the two wave functions at position x on the screen. The pattern is now indeed an interference pattern! Likewise, if Alice detects a vertically polarized photon, then the wave amplitude of Bob's photon is

Figure 4. The two interference patterns and their sum when Alice measures her photons' polarization with a linear polarizer.
 
\[ v(x)=\frac{i}{\sqrt{2}}\left[f_{1}(x)-f_{2}(x)\right], \]
and

\[ I_{V}(x)\propto |f_{1}(x)|^{2}+|f_{2}(x)|^{2}-f_{1}(x)f_{2}(x)^{*}-f_{1}(x)^{*}f_{2}(x)=|f_{1}(x)|^{2}+|f_{2}(x)|^{2}-2\,|f_{1}(x)|\,|f_{2}(x)|\cos\phi_{12}, \]

and once again an interference pattern appears, but with its fringes shifted by half a period because of the 180° phase difference between the two slit amplitudes. So can this be used by Alice to send a message to Bob, encoding her messages in changes between the two types of patterns? No! Remember that, as before, if Bob is not told which polarization Alice measured, then all he sees is the sum of both patterns. The result is therefore

\[ I(x)=I_{H}(x)+I_{V}(x)\propto I_{1}(x)+I_{2}(x), \]

which is again a smear. The results are shown in Fig. 4.
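The no-signalling point can be checked numerically: using the wave functions f_1 and f_2 defined above (with the wavenumber 2π/λ and illustrative parameter values), the sketch below builds the conditional patterns I_H and I_V and confirms that their sum is the same structureless smear as I_1 + I_2.

```python
import numpy as np

# Illustrative parameters (arbitrary units).
a, d, lam = 1.0, 5.0, 0.05
k = 2 * np.pi / lam
x = np.linspace(-10, 10, 4001)

r1 = np.sqrt(d**2 + (x + a / 2) ** 2)                # path length from slit 1
r2 = np.sqrt(d**2 + (x - a / 2) ** 2)                # path length from slit 2
f1 = np.exp(1j * k * r1) / np.sqrt(2 * np.pi * r1)   # wave from slit 1
f2 = np.exp(1j * k * r2) / np.sqrt(2 * np.pi * r2)   # wave from slit 2

I_H = 0.5 * np.abs(f1 + f2) ** 2    # fringes (Alice found horizontal polarization)
I_V = 0.5 * np.abs(f1 - f2) ** 2    # anti-fringes (Alice found vertical polarization)
I1, I2 = np.abs(f1) ** 2, np.abs(f2) ** 2

# The two conditional patterns oscillate, but their sum is the same smooth smear
# as I_1 + I_2: Bob cannot tell from his screen alone which basis Alice used.
print("max |(I_H + I_V) - (I1 + I2)| =", np.max(np.abs((I_H + I_V) - (I1 + I2))))
```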

So what's so odd about this experiment? The correlations change according to which experiment was conducted by Alice. Despite the fact that the total pattern is the same, the two subsets of outcomes give radically different correlations: if Alice measured a linear polarization, the total smear is subdivided into two interference patterns, whereas if Alice measured a circular polarization, the pattern is the sum of two bell-shaped single-slit distributions. How could Bob's photon know that it could land in the forbidden stripes of the interference pattern when Alice was measuring a circular polarization, but not when Alice was measuring a linear one? This can only be orchestrated by a global dynamic of the system as a whole; it cannot be carried locally by each photon on its own. This experiment demonstrates the phenomenon of microscopic non-locality.

Ontology

From Wikipedia, the free encyclopedia

Parmenides was among the first to propose an ontological characterization of the fundamental nature of reality.

Ontology is the philosophical study of the nature of being, becoming, existence, or reality, as well as the basic categories of being and their relations.[1] Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology often deals with questions concerning what entities exist or may be said to exist and how such entities may be grouped, related within a hierarchy, and subdivided according to similarities and differences. A very simple definition of ontology is that it is the examination of what is meant by 'being'.

Etymology

The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation.[2][3]

While the etymology is Greek, the oldest extant record of the word itself, the New Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).

The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in a work by Gideon Harvey (1636/7–1702): Archelogia philosophica nova; or, New principles of Philosophy. Containing Philosophy in general, Metaphysicks or Ontology, Dynamilogy or a Discourse of Power, Religio Philosophi or Natural Theology, Physicks or Natural philosophy, London, Thomson, 1663. The word was first used in its Latin form by philosophers based on the Latin roots, which themselves are based on the Greek.

Leibniz is the only one of the great philosophers of the 17th century to have used the term ontology.[4]

Overview

Some philosophers, notably in the traditions of the Platonic school, contend that all nouns (including abstract nouns) refer to existent entities.[citation needed] Other philosophers contend that nouns do not always name entities, but that some provide a kind of shorthand for reference to a collection either of objects or of events. In this latter view, mind, instead of referring to an entity, refers to a collection of mental events experienced by a person; society refers to a collection of persons with some shared characteristics, and geometry refers to a collection of specific kinds of intellectual activities.  Between these poles of realism and nominalism stand a variety of other positions.

Some fundamental questions

Principal questions of ontology include:
  • "What can be said to exist?"
  • "What is a thing?"[6]
  • "Into what categories, if any, can we sort existing things?"
  • "What are the meanings of being?"
  • "What are the various modes of being of entities?"
Various philosophers have provided different answers to these questions. One common approach involves dividing the extant subjects and predicates into groups called categories. Such lists of categories differ widely from one another, and it is through the co-ordination of different categorical schemes that ontology relates to such fields as library science and artificial intelligence. Such an understanding of ontological categories, however, is merely taxonomic, classificatory. Aristotle's categories are the ways in which a being may be addressed simply as a being, such as:[7]
  • what it is (its 'whatness', quiddity, haecceity or essence)
  • how it is (its 'howness' or qualitativeness)
  • how much it is (quantitativeness)
  • where it is, its relatedness to other beings
Further examples of ontological questions include:[citation needed]
  • What is existence, i.e. what does it mean for a being to be?
  • Is existence a property?
  • Is existence a genus or general class that is simply divided up by specific differences?
  • Which entities, if any, are fundamental?
  • Are all entities objects?
  • How do the properties of an object relate to the object itself?
  • Do physical properties actually exist?
  • What features are the essential, as opposed to merely accidental, attributes of a given object?
  • How many levels of existence or ontological levels are there? And what constitutes a "level"?
  • What is a physical object?
  • Can one give an account of what it means to say that a physical object exists?
  • Can one give an account of what it means to say that a non-physical entity exists?
  • What constitutes the identity of an object?
  • When does an object go out of existence, as opposed to merely changing?
  • Do beings exist other than in the modes of objectivity and subjectivity, i.e. is the subject/object split of modern philosophy inevitable?

Concepts

Essential ontological dichotomies include universals and particulars, substance and accident, abstract and concrete entities, essence and existence, determinism and indeterminism, and monism and dualism.

Types

Philosophers can classify ontologies in various ways, using criteria such as the degree of abstraction and field of application:[8]
  1. Upper ontology: concepts supporting development of an ontology, meta-ontology
  2. Domain ontology: concepts relevant to a particular topic or area of interest, for example, to information technology or to computer languages, or to particular branches of science
  3. Interface ontology: concepts relevant to the juncture of two disciplines
  4. Process ontology: inputs, outputs, constraints, sequencing information, involved in business or engineering processes

History

Origins

Ontology was referred to as Tattva Mimamsa by ancient Indian philosophers going back as early as the Vedas.[citation needed] Ontology is an aspect of the Samkhya school of philosophy from the first millennium BCE.[9] The concept of Guna, which describes the three properties (sattva, rajas and tamas) present in differing proportions in all existing things, is a notable concept of this school.

Parmenides and monism

Parmenides was among the first in the Greek tradition to propose an ontological characterization of the fundamental nature of existence. In his prologue, or proem, he describes two views of existence; initially that nothing comes from nothing, and therefore existence is eternal. Consequently, our opinions about truth must often be false and deceitful. Most of Western philosophy, including the fundamental concept of falsifiability, has emerged from this view. This posits that existence is what may be conceived of by thought, created, or possessed. Hence, there may be neither void nor vacuum; and true reality neither may come into being nor vanish from existence. Rather, the entirety of creation is eternal, uniform, and immutable, though not infinite (he characterized its shape as that of a perfect sphere). Parmenides thus posits that change, as perceived in everyday experience, is illusory. Everything that may be apprehended is but one part of a single entity. This idea somewhat anticipates the modern concept of an ultimate grand unification theory that finally describes all of existence in terms of one inter-related sub-atomic reality which applies to everything.

Ontological pluralism

The opposite of Eleatic monism is the pluralistic conception of Being. In the 5th century BC, Anaxagoras and Leucippus replaced[10] the reality of Being (unique and unchanging) with that of Becoming, and therefore with a more fundamental and elementary ontic plurality. This thesis originated in the Hellenic world, stated in two different ways by Anaxagoras and by Leucippus. The first theory dealt with "seeds" (which Aristotle referred to as "homeomeries") of the various substances. The second was the atomistic theory,[11] which dealt with reality as based on the vacuum, the atoms and their intrinsic movement in it.

The materialist atomism proposed by Leucippus was indeterminist, but Democritus then developed it in a deterministic way. It was later (4th century BC) that the original atomism was taken up again as indeterministic by Epicurus. He affirmed that reality is composed of an infinity of indivisible, unchangeable corpuscles or atoms (atomon, lit. 'uncuttable'), but he added weight as a characteristic of atoms, whereas for Leucippus they were characterized only by a "figure", an "order" and a "position" in the cosmos.[12] Moving intrinsically in the vacuum, the atoms moreover create the whole, producing the diverse flux of being. Their movement is influenced by the parenklisis (Lucretius calls it clinamen), which is determined by chance. These ideas foreshadowed the understanding of traditional physics that held until the inner structure of atoms was discovered in the 20th century.[13]

Plato

Plato developed this distinction between true reality and illusion, in arguing that what is real are eternal and unchanging Forms or Ideas (a precursor to universals), of which things experienced in sensation are at best merely copies, and real only in so far as they copy ('partake of') such Forms. In general, Plato presumes that all nouns (e.g., 'Beauty') refer to real entities, whether sensible bodies or insensible Forms. Hence, in The Sophist Plato argues that Being is a Form in which all existent things participate and which they have in common (though it is unclear whether 'Being' is intended in the sense of existence, copula, or identity); and argues, against Parmenides, that Forms must exist not only of Being, but also of Negation and of non-Being (or Difference).

In his Categories, Aristotle identifies ten possible kinds of things that may be the subject or the predicate of a proposition. For Aristotle there are four different ontological dimensions:
  1. according to the various categories or ways of addressing a being as such
  2. according to its truth or falsity (e.g. fake gold, counterfeit money)
  3. whether it exists in and of itself or simply 'comes along' by accident
  4. according to its potency, movement (energy) or finished presence (Metaphysics Book Theta).
According to Avicenna, and in an interpretation of Greek Aristotelian and Platonist ontological doctrines in medieval metaphysics, being is either necessary, contingent qua possible, or impossible. Necessary being is that which cannot but be, since its non-being entails a contradiction. Contingent qua possible being is that for which it is neither necessary nor impossible to be or not to be. It is ontologically neutral, and is brought from potential existence into actual existence by way of a cause that is external to its essence. Its being is borrowed, unlike the necessary existent, which is self-subsisting and for which it is impossible not to be. As for the impossible, it necessarily does not exist, and the affirmation of its being is a contradiction.[14]

Other ontological topics

Ontological formations

The concept of 'ontological formations' refers to formations of social relations understood as dominant ways of living. Temporal, spatial, corporeal, epistemological and performative relations are taken to be central to understanding a dominant formation. That is, a particular ontological formation is based on how ontological categories of time, space, embodiment, knowing and performing are lived, objectively and subjectively. Different ontological formations include the customary (including the tribal), the traditional, the modern and the postmodern. The concept was first introduced in Paul James's Globalism, Nationalism, Tribalism,[15] and has been developed together with a series of writers including Damian Grenfell and Manfred Steger.

In the engaged theory approach, ontological formations are seen as layered and intersecting rather than singular formations. They are 'formations of being'. This approach avoids the usual problems of a Great Divide being posited between the modern and the pre-modern. From a philosophical distinction concerning different formations of being, the concept then provides a way of translating into practical understandings concerning how humans might design cities and communities that live creatively across different ontological formations, for example cities that are not completely dominated by modern valences of spatial configuration. Here the work of Tony Fry is important.[16]

Ontological and epistemological certainty

René Descartes, with je pense donc je suis or cogito ergo sum or "I think, therefore I am", argued that "the self" is something that we can know exists with epistemological certainty. Descartes argued further that this knowledge could lead to a proof of the certainty of the existence of God, using the ontological argument that had been formulated first by Anselm of Canterbury.

Certainty about the existence of "the self" and "the other", however, came under increasing criticism in the 20th century. Sociological theorists, most notably George Herbert Mead and Erving Goffman, saw the Cartesian Other as a "Generalized Other", the imaginary audience that individuals use when thinking about the self. According to Mead, "we do not assume there is a self to begin with. Self is not presupposed as a stuff out of which the world arises. Rather, the self arises in the world".[17][18] The Cartesian Other was also used by Sigmund Freud, who saw the superego as an abstract regulatory force, and Émile Durkheim who viewed this as a psychologically manifested entity which represented God in society at large.

Body and environment, questioning the meaning of being

Schools of subjectivism, objectivism and relativism existed at various times in the 20th century, and the postmodernists and body philosophers tried to reframe all these questions in terms of bodies taking some specific action in an environment. This relied to a great degree on insights derived from scientific research into animals taking instinctive action in natural and artificial settings—as studied by biology, ecology,[19] and cognitive science.

The processes by which bodies related to environments became of great concern, and the idea of being itself became difficult to really define. What did people mean when they said "A is B", "A must be B", "A was B"...? Some linguists advocated dropping the verb "to be" from the English language, leaving "E Prime", supposedly less prone to bad abstractions. Others, mostly philosophers, tried to dig into the word and its usage. Martin Heidegger distinguished human being as existence from the being of things in the world. Heidegger proposes that our way of being human and the way the world is for us are cast historically through a fundamental ontological questioning. These fundamental ontological categories provide the basis for communication in an age: a horizon of unspoken and seemingly unquestionable background meanings, such as human beings understood unquestioningly as subjects and other entities understood unquestioningly as objects. Because these basic ontological meanings both generate and are regenerated in everyday interactions, the locus of our way of being in a historical epoch is the communicative event of language in use.[17] For Heidegger, however, communication in the first place is not among human beings, but language itself shapes up in response to questioning (the inexhaustible meaning of) being.[20] Even the focus of traditional ontology on the 'whatness' or quidditas of beings in their substantial, standing presence can be shifted to pose the question of the 'whoness' of human being itself.[21]

Ontology and language

Some philosophers suggest that the question of "What is?" is (at least in part) an issue of usage rather than a question about facts.[22] This perspective is conveyed by an analogy made by Donald Davidson: Suppose a person refers to a 'cup' as a 'chair' and makes some comments pertinent to a cup, but uses the word 'chair' consistently throughout instead of 'cup'. One might readily catch on that this person simply calls a 'cup' a 'chair' and the oddity is explained.[23] Analogously, if we find people asserting 'there are' such-and-such, and we do not ourselves think that 'such-and-such' exist, we might conclude that these people are not nuts (Davidson calls this assumption 'charity'), they simply use 'there are' differently than we do. The question of What is? is at least partially a topic in the philosophy of language, and is not entirely about ontology itself.[24] This viewpoint has been expressed by Eli Hirsch.[25][26]

Hirsch interprets Hilary Putnam as asserting that different concepts of "the existence of something" can be correct.[26] This position does not contradict the view that some things do exist, but points out that different 'languages' will have different rules about assigning this property.[26][27] How to determine the 'fitness' of a 'language' to the world then becomes a subject for investigation.

Common to all Indo-European copula languages is the double use of the verb "to be" in both stating that entity X exists ("X is.") as well as stating that X has a property ("X is P"). It is sometimes argued that a third use is also distinct, stating that X is a member of a class ("X is a C"). In other language families these roles may have completely different verbs and are less likely to be confused with one another. For example they might say something like "the car has redness" rather than "the car is red". Hence any discussion of "being" in Indo-European language philosophy may need to make distinctions between these senses.[citation needed]

Ontology and human geography

In human geography there are two types of ontology: small "o" ontology, which accounts for the practical orientation, describing the functions of being part of a group, and which is thought to oversimplify and ignore key activities; and big "O" ontology, which systematically, logically, and rationally describes the essential characteristics and universal traits. This concept relates closely to Plato's view that the human mind can only perceive a bigger world if it continues to live within the confines of its "cave". However, in spite of the differences, ontology relies on symbolic agreements among members. That said, ontology is crucial for axiomatic language frameworks.[28]

Reality and actuality

According to A.N. Whitehead, for ontology, it is useful to distinguish the terms 'reality' and 'actuality'. In this view, an 'actual entity' has a philosophical status of fundamental ontological priority, while a 'real entity' is one which may be actual, or may derive its reality from its logical relation to some actual entity or entities. For example, an occasion in the life of Socrates is an actual entity. But Socrates' being a man does not make 'man' an actual entity, because it refers indeterminately to many actual entities, such as several occasions in the life of Socrates, and also to several occasions in the lives of Alcibiades, and of others. But the notion of man is real; it derives its reality from its reference to those many actual occasions, each of which is an actual entity. An actual occasion is a concrete entity, while terms such as 'man' are abstractions from many concrete relevant entities.

According to Whitehead, an actual entity must earn its philosophical status of fundamental ontological priority by satisfying several philosophical criteria, as follows.
  • There is no going behind an actual entity, to find something more fundamental in fact or in efficacy. This criterion is to be regarded as expressing an axiom, or postulated distinguished doctrine.
  • An actual entity must be completely determinate in the sense that there may be no confusion about its identity that would allow it to be confounded with another actual entity. In this sense an actual entity is completely concrete, with no potential to be something other than itself. It is what it is. It is a source of potentiality for the creation of other actual entities, of which it may be said to be a part cause. Likewise it is the concretion or realization of potentialities of other actual entities which are its partial causes.
  • Causation between actual entities is essential to their actuality. Consequently, for Whitehead, each actual entity has its distinct and definite extension in physical Minkowski space, and so is uniquely identifiable. A description in Minkowski space supports descriptions in time and space for particular observers.
  • It is part of the aim of the philosophy of such an ontology as Whitehead's that the actual entities should be all alike, qua actual entities; they should all satisfy a single definite set of well stated ontological criteria of actuality.
Whitehead proposed that his notion of an occasion of experience satisfies the criteria for its status as the philosophically preferred definition of an actual entity. From a purely logical point of view, each occasion of experience has in full measure the characters of both objective and subjective reality. Subjectivity and objectivity refer to different aspects of an occasion of experience, and in no way do they exclude each other.[29]

Examples of other philosophical proposals or candidates as actual entities, in this view, are Aristotle's 'substances', Leibniz' monads, Descartes' 'res verae', and the more modern 'states of affairs'. Aristotle's substances, such as Socrates, have behind them as more fundamental the 'primary substances', and in this sense do not satisfy Whitehead's criteria. Whitehead is not happy with Leibniz' monads as actual entities because they are "windowless" and do not cause each other. 'States of affairs' are often not closely defined, often without specific mention of extension in physical Minkowski space; they are therefore not necessarily processes of becoming, but may be as their name suggests, simply static states in some sense. States of affairs are contingent on particulars, and therefore have something behind them.[30] One summary of the Whiteheadian actual entity is that it is a process of becoming. Another summary, referring to its causal linkage to other actual entities, is that it is "all window", in contrast with Leibniz' windowless monads.

This view allows philosophical entities other than actual entities to really exist, but not as fundamentally and primarily factual or causally efficacious; they have existence as abstractions, with reality only derived from their reference to actual entities. A Whiteheadian actual entity has a unique and completely definite place and time. Whiteheadian abstractions are not so tightly defined in time and place, and in the extreme, some are timeless and placeless, or 'eternal' entities. All abstractions have logical or conceptual rather than efficacious existence; their lack of definite time does not make them unreal if they refer to actual entities. Whitehead calls this 'the ontological principle'.

Microcosmic ontology

There is an established and long philosophical history of the concept of atoms as microscopic physical objects. They are far too small to be visible to the naked eye. It was only as recently as the nineteenth century that precise estimates of the sizes of putative physical atoms began to become plausible. Almost direct empirical observation of atomic effects was due to the theoretical investigation of Brownian motion by Albert Einstein in the very early twentieth century. But even then, the real existence of atoms was debated by some. Such debate might be labeled 'microcosmic ontology'. Here the word 'microcosm' is used to indicate a physical world of small entities, such as for example atoms.

Subatomic particles are usually considered to be much smaller than atoms. Their real or actual existence may be very difficult to demonstrate empirically.[31] A distinction is sometimes drawn between actual and virtual subatomic particles. Reasonably, one may ask, in what sense, if any, do virtual particles exist as physical entities? For atomic and subatomic particles, difficult questions arise, such as do they possess a precise position, or a precise momentum? A question that continues to be controversial is 'to what kind of physical thing, if any, does the quantum mechanical wave function refer?'.[6]

Ontological argument

The first ontological argument in the Western Christian tradition[32] was proposed by Anselm of Canterbury in his 1078 work Proslogion. Anselm defined God as "that than which nothing greater can be thought", and argued that this being must exist in the mind, even in the mind of the person who denies the existence of God. He suggested that, if the greatest possible being exists in the mind, it must also exist in reality. If it only exists in the mind, then an even greater being must be possible—one which exists both in the mind and in reality. Therefore, this greatest possible being must exist in reality. Seventeenth century French philosopher René Descartes deployed a similar argument. Descartes published several variations of his argument, each of which centred on the idea that God's existence is immediately inferable from a "clear and distinct" idea of a supremely perfect being. In the early eighteenth century, Gottfried Leibniz augmented Descartes' ideas in an attempt to prove that a "supremely perfect" being is a coherent concept. A more recent ontological argument came from Kurt Gödel, who proposed a formal argument for God's existence. Norman Malcolm revived the ontological argument in 1960 when he located a second, stronger ontological argument in Anselm's work; Alvin Plantinga challenged this argument and proposed an alternative, based on modal logic. Attempts have also been made to validate Anselm's proof using an automated theorem prover. Other arguments have been categorised as ontological, including those made by Islamic philosophers Mulla Sadra and Allama Tabatabai.

Turing completeness

From Wikipedia, the free encyclopedia

In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any Turing machine. The concept is named after English mathematician and computer scientist Alan Turing. A classic example is lambda calculus.

A closely related concept is that of Turing equivalence – two computers P and Q are called equivalent if P can simulate Q and Q can simulate P. The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and by extension the computational aspects of any possible real-world computer.[NB 1]

To show that something is Turing complete, it is enough to show that it can be used to simulate some Turing complete system. For example, an imperative language is Turing complete if it has conditional branching (e.g., "if" and "goto" statements, or a "branch if zero" instruction; see one instruction set computer) and the ability to change an arbitrary amount of memory (e.g., the ability to maintain an arbitrary number of data items). Of course, no physical system can have infinite memory; but if the limitation of finite memory is ignored, most programming languages are otherwise Turing complete.
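As a concrete illustration of that criterion, the sketch below (Python is used here purely for illustration) interprets Brainfuck, a famously tiny imperative language that is Turing complete given an unbounded tape, precisely because it offers those two ingredients: a conditional loop and arbitrarily many memory cells. The interpreter and the demo program, which simply prints "Hi", are illustrative.

```python
def run(program: str, data: str = "") -> str:
    """Interpret a Brainfuck program: an unbounded tape of cells plus a
    conditional loop is enough for Turing completeness (up to the usual
    finite-memory caveat)."""
    tape, ptr, out, inp = {}, 0, [], list(data)
    # Pre-match brackets so '[' and ']' can branch conditionally.
    stack, jump = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    pc = 0
    while pc < len(program):
        c = program[pc]
        if c == ">":   ptr += 1                                  # move the data pointer
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape.get(ptr, 0) + 1) % 256  # increment current cell
        elif c == "-": tape[ptr] = (tape.get(ptr, 0) - 1) % 256
        elif c == ".": out.append(chr(tape.get(ptr, 0)))         # output current cell
        elif c == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif c == "[" and tape.get(ptr, 0) == 0: pc = jump[pc]   # conditional branch forward
        elif c == "]" and tape.get(ptr, 0) != 0: pc = jump[pc]   # conditional branch back
        pc += 1
    return "".join(out)

# Demo: a loop builds 72 ('H') in the second cell, then 33 increments reach 105 ('i').
prog = "++++++++[>+++++++++<-]>." + "+" * 33 + "."
print(run(prog))   # prints "Hi"
```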

Non-mathematical usage

In colloquial usage, the terms "Turing complete" or "Turing equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language.

Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (the "tape" corresponding to their memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, a universal computer is defined as a device with a Turing complete instruction set, infinite memory, and infinite available time.

Formal definitions

In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language):
Turing completeness
A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful). Alternatively, such a system is one that can simulate a universal Turing machine.
Turing equivalence
A Turing-complete system is called Turing equivalent if every function it can compute is also Turing computable; i.e., it computes precisely the same class of functions as do Turing machines. Alternatively, a Turing-equivalent system is one that can simulate, and be simulated by, a universal Turing machine. (All known Turing-complete systems are Turing equivalent, which adds support to the Church–Turing thesis.)
(Computational) universality
A system is called universal with respect to a class of systems if it can compute every function computable by systems in that class (or can simulate each of those systems). Typically, the term universality is tacitly used with respect to a Turing-complete class of systems. The term "weakly universal" is sometimes used to distinguish a system (e.g. a cellular automaton) whose universality is achieved only by modifying the standard definition of Turing machine so as to include input streams with infinitely many 1s.

History

Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics – that a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.

Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do better. From the 1830s until the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but they could not perform a conditional branch and therefore were not Turing complete.

In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions which compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction which could be performed by a machine. Soon, it became clear that a small set of deduction rules are enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem.

The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. This theorem showed that axiom systems were limited when reasoning about the computation which deduces their theorems. Church and Turing independently demonstrated that Hilbert's Entscheidungsproblem (decision problem) was unsolvable,[1] thus identifying the computational core of the incompleteness theorem. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions, which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique.

In 1941 Konrad Zuse completed the Z3 computer, the first working Turing-complete machine; this was the first digital computer in the modern sense.[2]

Computability theory

Computability theory characterizes problems as having, or not having, computational solutions. The first result of computability theory is that there exist problems for which it is impossible to predict what a (Turing-complete) system will do over an arbitrarily long time.

The classic example is the halting problem: create an algorithm that takes as input (a) a program in some Turing-complete language and (b) some data to be fed to that program, and that determines whether the program, operating on the input, will eventually stop or will continue forever. It is trivial to create an algorithm that can do this for some inputs, but impossible to do so in general. For any non-trivial characteristic of the program's eventual output, it is impossible to determine in general whether this characteristic will hold.
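The standard diagonal argument behind this impossibility can be phrased as a short program: given any candidate halting tester (here just an ordinary Python function standing in for a hypothetical oracle), one can construct a program on which the tester must give the wrong answer. The function names below are illustrative.

```python
def make_counterexample(halts):
    """Given any claimed halting oracle `halts(program)` (a callable returning
    True/False), build a program that defeats it: if the oracle says the
    program halts, it loops forever; if the oracle says it loops, it halts."""
    def contrarian():
        if halts(contrarian):
            while True:        # oracle said "halts" -> run forever
                pass
        return "halted"        # oracle said "loops forever" -> halt immediately
    return contrarian

# Try a (necessarily wrong) candidate oracle that claims everything halts.
always_yes = lambda program: True
g = make_counterexample(always_yes)
print("oracle's verdict on g:", always_yes(g))   # True, i.e. "g halts"
# Actually running g() would loop forever, contradicting that verdict -- so no
# total, always-correct `halts` can exist.  (We deliberately do not call g().)
```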

This impossibility poses problems when analyzing real-world computer programs. For example, one cannot write a tool that entirely protects programmers from writing infinite loops, or protects users from supplying input that would cause infinite loops.

One can instead limit a program to executing only for a fixed period of time (a timeout), or limit the power of flow-control instructions (for example, providing only loops that iterate over the items of an existing array). However, another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping abilities (i.e., any language that guarantees that every program will eventually halt). So any such language is not Turing complete. For example, a language in which programs are guaranteed to halt cannot compute the computable function produced by Cantor's diagonal argument applied to all computable functions in that language.
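A sketch of that diagonal argument: given an enumeration of the programs of a language in which every program halts, the function that runs the n-th program on input n and adds one is computable and total, yet differs from every program in the enumeration. The short list below is a toy stand-in for such an enumeration; the names are illustrative.

```python
# A toy stand-in for "all one-argument programs of some total (always-halting)
# language": every function here halts on every natural-number input.
total_programs = [
    lambda n: 0,           # the zero function
    lambda n: n,           # identity
    lambda n: n * n,       # squaring
    lambda n: 2 ** n,      # exponentiation
]

def diagonal(n: int) -> int:
    """Cantor-style diagonal function: differs from the n-th program at input n."""
    return total_programs[n](n) + 1

for i in range(len(total_programs)):
    print(f"program {i} at input {i}: {total_programs[i](i)}   diagonal({i}) = {diagonal(i)}")
# diagonal() is clearly computable and total, yet it disagrees with every listed
# program at some input, so it cannot itself appear in the enumeration.
```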

Turing oracles

A computer with access to an infinite tape of data may be more powerful than a Turing machine: for instance, the tape might contain the solution to the halting problem, or some other Turing-undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot.

Digital physics

All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident, because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically.

Examples

The computational systems (algebras, calculi) that are discussed as Turing-complete systems are those intended for studying theoretical computer science. They are intended to be as simple as possible, so that it is easier to understand the limits of computation. Examples include Turing machines themselves, the lambda calculus, μ-recursive functions, combinatory logic, and Post canonical systems.
Most programming languages, conventional and unconventional, are also Turing-complete; this includes essentially all general-purpose procedural, object-oriented, functional, and multi-paradigm languages in wide use.
Rewrite systems are also Turing-complete.

Turing completeness is an abstract statement of ability, rather than a prescription of specific language features used to implement that ability. The features used to achieve Turing completeness can be quite different; Fortran systems would use loop constructs or possibly even goto statements to achieve repetition, while Haskell and Prolog, lacking looping almost entirely, would use recursion. Most programming languages describe computations on von Neumann architectures, which have memory (RAM and registers) and a control unit. These two elements make this architecture Turing-complete. Even pure functional languages are Turing-complete.[4][NB 2]
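As a small illustration of that point, the two definitions below compute the same function, once with an explicit loop (the Fortran-style idiom) and once with recursion (the Haskell/Prolog-style idiom); Python is used here purely for illustration.

```python
def factorial_loop(n: int) -> int:
    """Repetition via an explicit loop (the loop-construct idiom)."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_rec(n: int) -> int:
    """The same computation via recursion (the recursion idiom)."""
    return 1 if n <= 1 else n * factorial_rec(n - 1)

assert factorial_loop(10) == factorial_rec(10) == 3628800
```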

Turing completeness in declarative SQL is implemented through recursive common table expressions.[5] Unsurprisingly, procedural extensions to SQL (PL/SQL, etc.) are also Turing complete. This illustrates one reason why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied, and the sooner its lack of completeness is perceived as a drawback, encouraging its extension until it is Turing complete.
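A minimal sketch of the recursive-CTE mechanism, run here through SQLite via Python's built-in sqlite3 module (SQLite supports WITH RECURSIVE); the query computes a few Fibonacci numbers declaratively. The table and column names are illustrative.

```python
import sqlite3

# Recursive common table expressions let declarative SQL express unbounded
# iteration; here one computes Fibonacci numbers entirely inside SQLite.
query = """
WITH RECURSIVE fib(n, a, b) AS (
    SELECT 1, 0, 1
    UNION ALL
    SELECT n + 1, b, a + b FROM fib WHERE n < 10
)
SELECT n, a FROM fib;
"""

with sqlite3.connect(":memory:") as conn:
    for n, value in conn.execute(query):
        print(f"fib({n}) = {value}")
```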

The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems is based on their ability to represent most typical computer programs while detecting more errors.

Rule 110 and Conway's Game of Life, both cellular automata, are Turing complete.
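The update rule of Rule 110 is simple enough to state in a few lines; the sketch below applies it to a finite row of cells with fixed zero boundaries. (The Turing-completeness construction itself, due to Matthew Cook, is far more involved and is not reproduced here.)

```python
RULE = 110  # bit i of 110 gives the new state for neighborhood value i

def rule110_step(cells):
    """One synchronous update of a row of 0/1 cells (fixed zero boundary)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# Start with a single live cell on the right and print a few generations.
row = [0] * 31 + [1]
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)
```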

Unintentional Turing completeness

Some games and other software are Turing-complete by accident; examples have been found among video games, card games, and zero-person games (simulations).

Non-Turing-complete languages

Many computational languages exist that are not Turing complete. One such example is the set of regular languages, which are generated by regular expressions and which are recognized by finite automata. A more powerful but still not Turing-complete extension of finite automata is the category of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compiling. Further examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions.[citation needed]
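To make the power gap concrete, the sketch below recognizes a regular language with a regular expression (equivalent to a finite automaton) and then recognizes balanced parentheses, a context-free language that needs the unbounded counting of a pushdown automaton; the examples are illustrative.

```python
import re

# A regular language: signed decimal integer literals, recognizable by a finite automaton.
integer = re.compile(r"[+-]?[0-9]+$")
print(bool(integer.match("-42")), bool(integer.match("4x2")))   # True False

def balanced(s: str) -> bool:
    """Balanced parentheses need unbounded counting (a pushdown automaton /
    context-free grammar); no finite automaton can recognize this language."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(balanced("(()())"), balanced("(()"))   # True False
```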

In total functional programming languages, such as Charity and Epigram, all functions are total and must terminate. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types. The LOOP language is designed so that it computes only the functions that are primitive recursive. All of these compute proper subsets of the total computable functions, since the full set of total computable functions is not computably enumerable. Also, since all functions in these languages are total, algorithms for recursively enumerable sets cannot be written in these languages, in contrast with Turing machines.
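The flavour of such languages can be imitated by restricting oneself to bounded loops: in the sketch below, every loop's trip count is fixed before the loop starts and there is no while-loop or general recursion, so every call terminates. This is the LOOP-language style in which the primitive recursive functions are built up; the Python rendering is an illustration, not the LOOP language itself.

```python
# LOOP-style programming: only bounded 'for' loops whose trip counts are fixed
# before entry; every such program terminates, and exactly the primitive
# recursive functions can be computed this way.

def add(x: int, y: int) -> int:
    for _ in range(y):        # y is fixed before the loop starts
        x += 1                # repeated successor
    return x

def mul(x: int, y: int) -> int:
    result = 0
    for _ in range(y):
        result = add(result, x)   # repeated addition
    return result

print(add(3, 4), mul(3, 4))   # 7 12
```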

Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not.

Data languages

The notion of Turing-completeness does not apply to languages such as XML, HTML, JSON, YAML and S-expressions, because they are typically used to represent structured data, not describe computation. These are sometimes referred to as markup languages, or more properly as "container languages" or "data description languages".[citation needed] However, Rule 110, a Turing-complete cellular automaton, has been successfully implemented in CSS 3, thus proving, to some extent, its Turing completeness.[11]

Marriage in Islam

From Wikipedia, the free encyclopedia ...