
Saturday, July 7, 2018

Ontology

From Wikipedia, the free encyclopedia

Figure caption: Parmenides was among the first to propose an ontological characterization of the fundamental nature of reality.

Ontology is the philosophical study of the nature of being, becoming, existence, or reality, as well as the basic categories of being and their relations.[1] Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology often deals with questions concerning what entities exist or may be said to exist and how such entities may be grouped, related within a hierarchy, and subdivided according to similarities and differences. A very simple definition of ontology is that it is the examination of what is meant by 'being'.

Etymology

The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation.[2][3]

While the etymology is Greek, the oldest extant record of the word itself, the New Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).

The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in a work by Gideon Harvey (1636/7–1702): Archelogia philosophica nova; or, New principles of Philosophy. Containing Philosophy in general, Metaphysicks or Ontology, Dynamilogy or a Discourse of Power, Religio Philosophi or Natural Theology, Physicks or Natural philosophy, London, Thomson, 1663. The word was first used in its Latin form by philosophers based on the Latin roots, which themselves are based on the Greek.

Leibniz is the only one of the great philosophers of the 17th century to have used the term ontology.[4]

Overview

Some philosophers, notably in the traditions of the Platonic school, contend that all nouns (including abstract nouns) refer to existent entities.[citation needed] Other philosophers contend that nouns do not always name entities, but that some provide a kind of shorthand for reference to a collection either of objects or of events. In this latter view, mind, instead of referring to an entity, refers to a collection of mental events experienced by a person; society refers to a collection of persons with some shared characteristics, and geometry refers to a collection of specific kinds of intellectual activities.  Between these poles of realism and nominalism stand a variety of other positions.

Some fundamental questions

Principal questions of ontology include:
  • "What can be said to exist?"
  • "What is a thing?"[6]
  • "Into what categories, if any, can we sort existing things?"
  • "What are the meanings of being?"
  • "What are the various modes of being of entities?"
Various philosophers have provided different answers to these questions. One common approach involves dividing the extant subjects and predicates into groups called categories. Such lists of categories differ widely from one another, and it is through the co-ordination of different categorical schemes that ontology relates to such fields as library science and artificial intelligence. Such an understanding of ontological categories, however, is merely taxonomic, classificatory. Aristotle's categories are the ways in which a being may be addressed simply as a being, such as:[7]
  • what it is (its 'whatness', quiddity, haecceity or essence)
  • how it is (its 'howness' or qualitativeness)
  • how much it is (quantitativeness)
  • where it is, its relatedness to other beings
Further examples of ontological questions include:[citation needed]
  • What is existence, i.e. what does it mean for a being to be?
  • Is existence a property?
  • Is existence a genus or general class that is simply divided up by specific differences?
  • Which entities, if any, are fundamental?
  • Are all entities objects?
  • How do the properties of an object relate to the object itself?
  • Do physical properties actually exist?
  • What features are the essential, as opposed to merely accidental, attributes of a given object?
  • How many levels of existence or ontological levels are there? And what constitutes a "level"?
  • What is a physical object?
  • Can one give an account of what it means to say that a physical object exists?
  • Can one give an account of what it means to say that a non-physical entity exists?
  • What constitutes the identity of an object?
  • When does an object go out of existence, as opposed to merely changing?
  • Do beings exist other than in the modes of objectivity and subjectivity, i.e. is the subject/object split of modern philosophy inevitable?

Concepts

Essential ontological dichotomies include:

Types

Philosophers can classify ontologies in various ways, using criteria such as the degree of abstraction and field of application:[8]
  1. Upper ontology: concepts supporting development of an ontology, meta-ontology
  2. Domain ontology: concepts relevant to a particular topic or area of interest, for example, to information technology or to computer languages, or to particular branches of science
  3. Interface ontology: concepts relevant to the juncture of two disciplines
  4. Process ontology: inputs, outputs, constraints, sequencing information, involved in business or engineering processes

History

Origins

Ontology was referred to as Tattva Mimamsa by ancient Indian philosophers going back as early as the Vedas.[citation needed] Ontology is an aspect of the Samkhya school of philosophy from the first millennium BCE.[9] The concept of Guna, which describes the three properties (sattva, rajas and tamas) present in differing proportions in all existing things, is a notable concept of this school.

Parmenides and monism

Parmenides was among the first in the Greek tradition to propose an ontological characterization of the fundamental nature of existence. In his prologue, or proem, he describes two views of existence: initially, that nothing comes from nothing, and therefore existence is eternal. Consequently, our opinions about truth must often be false and deceitful. Most of Western philosophy, including the fundamental concept of falsifiability, has emerged from this view. This posits that existence is what may be conceived of by thought, created, or possessed. Hence, there may be neither void nor vacuum; and true reality neither may come into being nor vanish from existence. Rather, the entirety of creation is eternal, uniform, and immutable, though not infinite (he characterized its shape as that of a perfect sphere). Parmenides thus posits that change, as perceived in everyday experience, is illusory. Everything that may be apprehended is but one part of a single entity. This idea somewhat anticipates the modern concept of an ultimate grand unification theory that finally describes all of existence in terms of one inter-related sub-atomic reality which applies to everything.

Ontological pluralism

The opposite of Eleatic monism is the pluralistic conception of Being. In the 5th century BC, Anaxagoras and Leucippus replaced[10] the reality of Being (unique and unchanging) with that of Becoming, and therefore with a more fundamental and elementary ontic plurality. This thesis originated in the Hellenic world, stated in two different ways by Anaxagoras and by Leucippus. The first theory dealt with "seeds" (which Aristotle referred to as "homeomeries") of the various substances. The second was the atomistic theory,[11] which dealt with reality as based on the vacuum, the atoms and their intrinsic movement in it.

The materialist atomism proposed by Leucippus was indeterminist, but was then developed by Democritus in a deterministic way. It was later (4th century BC) that the original atomism was again taken as indeterministic by Epicurus. He affirmed that reality is composed of an infinity of indivisible, unchangeable corpuscles or atoms (atomon, lit. 'uncuttable'), but he added weight as a characteristic of atoms, whereas for Leucippus they were characterized only by a "figure", an "order" and a "position" in the cosmos.[12] Through their intrinsic movement in the vacuum, the atoms moreover create the whole and produce the diverse flux of being. Their movement is influenced by the parenklisis (which Lucretius calls the clinamen), and that is determined by chance. These ideas foreshadowed our understanding of traditional physics until the nature of atoms was discovered in the 20th century.[13]

Plato

Plato developed this distinction between true reality and illusion, in arguing that what is real are eternal and unchanging Forms or Ideas (a precursor to universals), of which things experienced in sensation are at best merely copies, and real only in so far as they copy ('partake of') such Forms. In general, Plato presumes that all nouns (e.g., 'Beauty') refer to real entities, whether sensible bodies or insensible Forms. Hence, in The Sophist Plato argues that Being is a Form in which all existent things participate and which they have in common (though it is unclear whether 'Being' is intended in the sense of existence, copula, or identity); and argues, against Parmenides, that Forms must exist not only of Being, but also of Negation and of non-Being (or Difference).

Aristotle

In his Categories, Aristotle identifies ten possible kinds of things that may be the subject or the predicate of a proposition. For Aristotle there are four different ontological dimensions:
  1. according to the various categories or ways of addressing a being as such
  2. according to its truth or falsity (e.g. fake gold, counterfeit money)
  3. whether it exists in and of itself or simply 'comes along' by accident
  4. according to its potency, movement (energy) or finished presence (Metaphysics Book Theta).
According to Avicenna, and in an interpretation of Greek Aristotelian and Platonist ontological doctrines in medieval metaphysics, being is either necessary, contingent qua possible, or impossible. Necessary being is that which cannot but be, since its non-being entails a contradiction. For contingent qua possible being, it is neither necessary nor impossible that it be or not be. It is ontologically neutral, and is brought from potential existing into actual existence by way of a cause that is external to its essence. Its being is borrowed, unlike that of the necessary existent, which is self-subsisting and for which non-being is impossible. As for the impossible, it necessarily does not exist, and the affirmation of its being is a contradiction.[14]

Other ontological topics

Ontological formations

The concept of 'ontological formations' refers to formations of social relations understood as dominant ways of living. Temporal, spatial, corporeal, epistemological and performative relations are taken to be central to understanding a dominant formation. That is, a particular ontological formation is based on how ontological categories of time, space, embodiment, knowing and performing are lived, objectively and subjectively. Different ontological formations include the customary (including the tribal), the traditional, the modern and the postmodern. The concept was first introduced in Paul James' Globalism, Nationalism, Tribalism,[15] and developed together with a series of writers including Damian Grenfell and Manfred Steger.

In the engaged theory approach, ontological formations are seen as layered and intersecting rather than singular formations. They are 'formations of being'. This approach avoids the usual problems of a Great Divide being posited between the modern and the pre-modern. From a philosophical distinction concerning different formations of being, the concept then provides a way of translating into practical understandings concerning how humans might design cities and communities that live creatively across different ontological formations, for example cities that are not completely dominated by modern valences of spatial configuration. Here the work of Tony Fry is important.[16]

Ontological and epistemological certainty

René Descartes, with je pense donc je suis or cogito ergo sum or "I think, therefore I am", argued that "the self" is something that we can know exists with epistemological certainty. Descartes argued further that this knowledge could lead to a proof of the certainty of the existence of God, using the ontological argument that had been formulated first by Anselm of Canterbury.

Certainty about the existence of "the self" and "the other", however, came under increasing criticism in the 20th century. Sociological theorists, most notably George Herbert Mead and Erving Goffman, saw the Cartesian Other as a "Generalized Other", the imaginary audience that individuals use when thinking about the self. According to Mead, "we do not assume there is a self to begin with. Self is not presupposed as a stuff out of which the world arises. Rather, the self arises in the world".[17][18] The Cartesian Other was also used by Sigmund Freud, who saw the superego as an abstract regulatory force, and by Émile Durkheim, who viewed this as a psychologically manifested entity which represented God in society at large.

Body and environment, questioning the meaning of being

Schools of subjectivism, objectivism and relativism existed at various times in the 20th century, and the postmodernists and body philosophers tried to reframe all these questions in terms of bodies taking some specific action in an environment. This relied to a great degree on insights derived from scientific research into animals taking instinctive action in natural and artificial settings—as studied by biology, ecology,[19] and cognitive science.

The processes by which bodies related to environments became of great concern, and the idea of being itself became difficult to really define. What did people mean when they said "A is B", "A must be B", "A was B"...? Some linguists advocated dropping the verb "to be" from the English language, leaving "E Prime", supposedly less prone to bad abstractions. Others, mostly philosophers, tried to dig into the word and its usage. Martin Heidegger distinguished human being as existence from the being of things in the world. Heidegger proposes that our way of being human and the way the world is for us are cast historically through a fundamental ontological questioning. These fundamental ontological categories provide the basis for communication in an age: a horizon of unspoken and seemingly unquestionable background meanings, such as human beings understood unquestioningly as subjects and other entities understood unquestioningly as objects. Because these basic ontological meanings both generate and are regenerated in everyday interactions, the locus of our way of being in a historical epoch is the communicative event of language in use.[17] For Heidegger, however, communication in the first place is not among human beings, but language itself shapes up in response to questioning (the inexhaustible meaning of) being.[20] Even the focus of traditional ontology on the 'whatness' or quidditas of beings in their substantial, standing presence can be shifted to pose the question of the 'whoness' of human being itself.[21]

Ontology and language

Some philosophers suggest that the question of "What is?" is (at least in part) an issue of usage rather than a question about facts.[22] This perspective is conveyed by an analogy made by Donald Davidson: Suppose a person refers to a 'cup' as a 'chair' and makes some comments pertinent to a cup, but uses the word 'chair' consistently throughout instead of 'cup'. One might readily catch on that this person simply calls a 'cup' a 'chair' and the oddity is explained.[23] Analogously, if we find people asserting 'there are' such-and-such, and we do not ourselves think that 'such-and-such' exist, we might conclude that these people are not nuts (Davidson calls this assumption 'charity'), they simply use 'there are' differently than we do. The question of What is? is at least partially a topic in the philosophy of language, and is not entirely about ontology itself.[24] This viewpoint has been expressed by Eli Hirsch.[25][26]

Hirsch interprets Hilary Putnam as asserting that different concepts of "the existence of something" can be correct.[26] This position does not contradict the view that some things do exist, but points out that different 'languages' will have different rules about assigning this property.[26][27] How to determine the 'fitness' of a 'language' to the world then becomes a subject for investigation.

Common to all Indo-European copula languages is the double use of the verb "to be": both stating that entity X exists ("X is.") and stating that X has a property ("X is P"). It is sometimes argued that a third use is also distinct, stating that X is a member of a class ("X is a C"). In other language families these roles may be filled by completely different verbs and are less likely to be confused with one another. For example, speakers might say something like "the car has redness" rather than "the car is red". Hence any discussion of "being" in Indo-European language philosophy may need to make distinctions between these senses.[citation needed]

Ontology and human geography

In human geography there are two types of ontology: small-"o" ontology, which accounts for the practical orientation, describing the functions of being part of a group, and is thought to oversimplify and ignore key activities; and big-"O" Ontology, which systematically, logically, and rationally describes the essential characteristics and universal traits. This concept relates closely to Plato's view that the human mind can only perceive a bigger world if it continues to live within the confines of its "cave". However, in spite of the differences, ontology relies on symbolic agreements among members. That said, ontology is crucial for axiomatic language frameworks.[28]

Reality and actuality

According to A.N. Whitehead, for ontology, it is useful to distinguish the terms 'reality' and 'actuality'. In this view, an 'actual entity' has a philosophical status of fundamental ontological priority, while a 'real entity' is one which may be actual, or may derive its reality from its logical relation to some actual entity or entities. For example, an occasion in the life of Socrates is an actual entity. But Socrates' being a man does not make 'man' an actual entity, because it refers indeterminately to many actual entities, such as several occasions in the life of Socrates, and also to several occasions in the lives of Alcibiades, and of others. But the notion of man is real; it derives its reality from its reference to those many actual occasions, each of which is an actual entity. An actual occasion is a concrete entity, while terms such as 'man' are abstractions from many concrete relevant entities.

According to Whitehead, an actual entity must earn its philosophical status of fundamental ontological priority by satisfying several philosophical criteria, as follows.
  • There is no going behind an actual entity, to find something more fundamental in fact or in efficacy. This criterion is to be regarded as expressing an axiom, or postulated distinguished doctrine.
  • An actual entity must be completely determinate in the sense that there may be no confusion about its identity that would allow it to be confounded with another actual entity. In this sense an actual entity is completely concrete, with no potential to be something other than itself. It is what it is. It is a source of potentiality for the creation of other actual entities, of which it may be said to be a part cause. Likewise it is the concretion or realization of potentialities of other actual entities which are its partial causes.
  • Causation between actual entities is essential to their actuality. Consequently, for Whitehead, each actual entity has its distinct and definite extension in physical Minkowski space, and so is uniquely identifiable. A description in Minkowski space supports descriptions in time and space for particular observers.
  • It is part of the aim of the philosophy of such an ontology as Whitehead's that the actual entities should be all alike, qua actual entities; they should all satisfy a single definite set of well stated ontological criteria of actuality.
Whitehead proposed that his notion of an occasion of experience satisfies the criteria for its status as the philosophically preferred definition of an actual entity. From a purely logical point of view, each occasion of experience has in full measure the characters of both objective and subjective reality. Subjectivity and objectivity refer to different aspects of an occasion of experience, and in no way do they exclude each other.[29]

Examples of other philosophical proposals or candidates as actual entities, in this view, are Aristotle's 'substances', Leibniz' monads, and Descartes' res verae, and the more modern 'states of affairs'. Aristotle's substances, such as Socrates, have behind them as more fundamental the 'primary substances', and in this sense do not satisfy Whitehead's criteria. Whitehead is not happy with Leibniz' monads as actual entities because they are "windowless" and do not cause each other. 'States of affairs' are often not closely defined, often without specific mention of extension in physical Minkowski space; they are therefore not necessarily processes of becoming, but may be, as their name suggests, simply static states in some sense. States of affairs are contingent on particulars, and therefore have something behind them.[30] One summary of the Whiteheadian actual entity is that it is a process of becoming. Another summary, referring to its causal linkage to other actual entities, is that it is "all window", in contrast with Leibniz' windowless monads.

This view allows philosophical entities other than actual entities to really exist, but not as fundamentally and primarily factual or causally efficacious; they have existence as abstractions, with reality only derived from their reference to actual entities. A Whiteheadian actual entity has a unique and completely definite place and time. Whiteheadian abstractions are not so tightly defined in time and place, and in the extreme, some are timeless and placeless, or 'eternal' entities. All abstractions have logical or conceptual rather than efficacious existence; their lack of definite time does not make them unreal if they refer to actual entities. Whitehead calls this 'the ontological principle'.

Microcosmic ontology

There is an established and long philosophical history of the concept of atoms as microscopic physical objects. They are far too small to be visible to the naked eye. It was as recently as the nineteenth century that precise estimates of the sizes of putative physical atoms began to become plausible. Almost direct empirical observation of atomic effects was due to the theoretical investigation of Brownian motion by Albert Einstein in the very early twentieth century. But even then, the real existence of atoms was debated by some. Such debate might be labeled 'microcosmic ontology'. Here the word 'microcosm' is used to indicate a physical world of small entities, such as, for example, atoms.

Subatomic particles are usually considered to be much smaller than atoms. Their real or actual existence may be very difficult to demonstrate empirically.[31] A distinction is sometimes drawn between actual and virtual subatomic particles. Reasonably, one may ask, in what sense, if any, do virtual particles exist as physical entities? For atomic and subatomic particles, difficult questions arise, such as do they possess a precise position, or a precise momentum? A question that continues to be controversial is 'to what kind of physical thing, if any, does the quantum mechanical wave function refer?'.[6]

Ontological argument

The first ontological argument in the Western Christian tradition[32] was proposed by Anselm of Canterbury in his 1078 work Proslogion. Anselm defined God as "that than which nothing greater can be thought", and argued that this being must exist in the mind, even in the mind of the person who denies the existence of God. He suggested that, if the greatest possible being exists in the mind, it must also exist in reality. If it only exists in the mind, then an even greater being must be possible—one which exists both in the mind and in reality. Therefore, this greatest possible being must exist in reality. Seventeenth century French philosopher René Descartes deployed a similar argument. Descartes published several variations of his argument, each of which centred on the idea that God's existence is immediately inferable from a "clear and distinct" idea of a supremely perfect being. In the early eighteenth century, Gottfried Leibniz augmented Descartes' ideas in an attempt to prove that a "supremely perfect" being is a coherent concept. A more recent ontological argument came from Kurt Gödel, who proposed a formal argument for God's existence. Norman Malcolm revived the ontological argument in 1960 when he located a second, stronger ontological argument in Anselm's work; Alvin Plantinga challenged this argument and proposed an alternative, based on modal logic. Attempts have also been made to validate Anselm's proof using an automated theorem prover. Other arguments have been categorised as ontological, including those made by Islamic philosophers Mulla Sadra and Allama Tabatabai.

Turing completeness

From Wikipedia, the free encyclopedia

In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any Turing machine. The concept is named after English mathematician and computer scientist Alan Turing. A classic example is lambda calculus.

A closely related concept is that of Turing equivalence – two computers P and Q are called equivalent if P can simulate Q and Q can simulate P. The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and by extension the computational aspects of any possible real-world computer.[NB 1]

To show that something is Turing complete, it is enough to show that it can be used to simulate some Turing complete system. For example, an imperative language is Turing complete if it has conditional branching (e.g., "if" and "goto" statements, or a "branch if zero" instruction; see one instruction set computer) and the ability to change an arbitrary amount of memory (e.g., the ability to maintain an arbitrary number of data items). Of course, no physical system can have infinite memory; but if the limitation of finite memory is ignored, most programming languages are otherwise Turing complete.
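
As a rough sketch of that claim (illustrative code, not from the article), the toy interpreter below offers only increment, decrement and a "branch if zero" instruction over an unbounded set of registers; conditional branching plus unbounded memory is already enough, in principle, to express arbitrary computation:

    # Minimal counter-machine sketch: INC/DEC registers, JZ (branch if zero), HALT.
    # Registers are unbounded in number and value, standing in for "arbitrary memory".
    from collections import defaultdict

    def run(program, registers=None):
        regs = defaultdict(int, registers or {})
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "INC":
                regs[args[0]] += 1
            elif op == "DEC":
                regs[args[0]] -= 1
            elif op == "JZ":              # the conditional branch
                if regs[args[0]] == 0:
                    pc = args[1]
                    continue
            elif op == "HALT":
                break
            pc += 1
        return dict(regs)

    # Add register 'a' into register 'b': while a != 0 { a -= 1; b += 1 }
    add = [
        ("JZ", "a", 4),   # 0: if a == 0, jump to HALT
        ("DEC", "a"),     # 1
        ("INC", "b"),     # 2
        ("JZ", "z", 0),   # 3: unconditional jump (register 'z' stays 0)
        ("HALT",),        # 4
    ]
    print(run(add, {"a": 3, "b": 4}))     # register 'b' ends up as 7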

Non-mathematical usage

In colloquial usage, the terms "Turing complete" or "Turing equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language.

Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (the "tape" corresponding to their memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, a universal computer is defined as a device with a Turing complete instruction set, infinite memory, and infinite available time.

Formal definitions

In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language):
Turing completeness
A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful). Alternatively, such a system is one that can simulate a universal Turing machine.
Turing equivalence
A Turing-complete system is called Turing equivalent if every function it can compute is also Turing computable; i.e., it computes precisely the same class of functions as do Turing machines. Alternatively, a Turing-equivalent system is one that can simulate, and be simulated by, a universal Turing machine. (All known Turing-complete systems are Turing equivalent, which adds support to the Church–Turing thesis.)
(Computational) universality
A system is called universal with respect to a class of systems if it can compute every function computable by systems in that class (or can simulate each of those systems). Typically, the term universality is tacitly used with respect to a Turing-complete class of systems. The term "weakly universal" is sometimes used to distinguish a system (e.g. a cellular automaton) whose universality is achieved only by modifying the standard definition of Turing machine so as to include input streams with infinitely many 1s.

History

Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics – that a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.

Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do better. From the 1830s until the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but they could not perform a conditional branch and therefore were not Turing complete.

In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions which compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction which could be performed by a machine. Soon, it became clear that a small set of deduction rules are enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem.

The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. This theorem showed that axiom systems were limited when reasoning about the computation which deduces their theorems. Church and Turing independently demonstrated that Hilbert's Entscheidungsproblem (decision problem) was unsolvable,[1] thus identifying the computational core of the incompleteness theorem. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions, which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique.

In 1941 Konrad Zuse completed the Z3 computer, the first working Turing-complete machine; this was the first digital computer in the modern sense.[2]

Computability theory

Computability theory characterizes problems as having, or not having, computational solutions. The first result of computability theory is that there exist problems for which it is impossible to predict what a (Turing-complete) system will do over an arbitrarily long time.

The classic example is the halting problem: create an algorithm which takes as input (a) a program in some Turing-complete language, and (b) some data to be fed to that program; and which determines whether the program, operating on the input, will eventually stop or will continue forever. It is trivial to create an algorithm that can do this for some inputs, but impossible to do this in general. For any characteristic of the program's eventual output, it is impossible to determine whether this characteristic will hold.
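
The standard impossibility argument can be sketched in a few lines of Python; the function halts below is hypothetical (no correct, total implementation can exist), and the self-referential paradox program shows why:

    def halts(program_source: str, data: str) -> bool:
        """Hypothetical oracle: True iff the program halts on the given data."""
        raise NotImplementedError("assumed for the sake of argument; cannot exist")

    def paradox(source: str) -> None:
        # Do the opposite of whatever the oracle predicts about this program.
        if halts(source, source):
            while True:       # predicted to halt -> loop forever
                pass
        # predicted to loop forever -> return immediately

    # Running paradox on its own source text forces halts() to be wrong
    # either way, so no such total, correct halts() can be written.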

This impossibility poses problems when analyzing real-world computer programs. For example, one cannot write a tool that entirely protects programmers from writing infinite loops, or protects users from supplying input that would cause infinite loops.

One can instead limit a program to executing only for a fixed period of time (timeout), or limit the power of flow-control instructions (for example, providing only loops that iterate over the items of an existing array). However, another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping abilities (i.e., any language that guarantees that every program will eventually halt). So any such language is not Turing complete. For example, a language in which programs are guaranteed to complete and halt cannot compute the computable function produced by Cantor's diagonal argument on all computable functions in that language.

Turing oracles

A computer with access to an infinite tape of data may be more powerful than a Turing machine: for instance, the tape might contain the solution to the halting problem, or some other Turing-undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot.

Digital physics

All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident, because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically.

Examples

The computational systems (algebras, calculi) that are discussed as Turing complete systems are those intended for studying theoretical computer science. They are intended to be as simple as possible, so that it would be easier to understand the limits of computation. Here are a few:
Most programming languages, conventional and unconventional, are Turing-complete. This includes:
Rewrite systems are also Turing-complete.

Turing completeness is an abstract statement of ability, rather than a prescription of specific language features used to implement that ability. The features used to achieve Turing completeness can be quite different; Fortran systems would use loop constructs or possibly even goto statements to achieve repetition; Haskell and Prolog, lacking looping almost entirely, would use recursion. Most programming languages describe computations on von Neumann architectures, which have memory (RAM and registers) and a control unit. These two elements make this architecture Turing-complete. Even pure functional languages are Turing-complete.[4][NB 2]
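
As a small illustration in one language (Python here, purely for concreteness), the same repetition can be achieved either with a loop construct or with recursion:

    # The same computation, once with an explicit loop and once with recursion.
    def factorial_loop(n: int) -> int:
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    def factorial_rec(n: int) -> int:
        return 1 if n <= 1 else n * factorial_rec(n - 1)

    assert factorial_loop(10) == factorial_rec(10) == 3628800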

Turing completeness in declarative SQL is implemented through recursive common table expressions.[5] Unsurprisingly, procedural extensions to SQL (PL/SQL, etc.) are also Turing complete. This illustrates one reason why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied and the sooner its lack of completeness becomes perceived as a drawback, encouraging its extension until it is Turing complete.
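
A hedged sketch of the recursive-CTE mechanism, run through SQLite from Python (SQLite is only a convenient stand-in; the table and column names are illustrative):

    # WITH RECURSIVE lets a query refer to its own partial results,
    # which is the feature that gives declarative SQL unbounded iteration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    rows = conn.execute("""
        WITH RECURSIVE fib(n, a, b) AS (
            SELECT 1, 0, 1
            UNION ALL
            SELECT n + 1, b, a + b FROM fib WHERE n < 10
        )
        SELECT n, a FROM fib
    """).fetchall()
    print(rows)   # [(1, 0), (2, 1), (3, 1), (4, 2), ...]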

The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems is based in their ability to represent most typical computer programs while detecting more errors.
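
To give a flavour of what untyped lambda terms can express, here is a loose sketch using Python lambdas as stand-ins for lambda-calculus terms (Church numerals and addition; the encoding is standard, the Python rendering is mine):

    # Church numerals: the number n is "apply a function n times".
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        # Interpret a Church numeral by counting function applications.
        return n(lambda k: k + 1)(0)

    two = succ(succ(zero))
    three = succ(two)
    print(to_int(add(two)(three)))   # 5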

Rule 110 and Conway's Game of Life, both cellular automata, are Turing complete.
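
For concreteness, a minimal sketch of Rule 110 as an elementary cellular automaton follows; the fixed boundaries and initial row are illustrative choices, not part of the cited universality constructions:

    # Rule 110: the new state of a cell is bit (left*4 + center*2 + right)
    # of the number 110, read over the three-cell neighbourhood.
    RULE = 110

    def step(cells):
        out = []
        for i in range(len(cells)):
            left = cells[i - 1] if i > 0 else 0
            center = cells[i]
            right = cells[i + 1] if i < len(cells) - 1 else 0
            neighborhood = (left << 2) | (center << 1) | right
            out.append((RULE >> neighborhood) & 1)
        return out

    row = [0] * 31 + [1]              # a single live cell at the right edge
    for _ in range(16):
        print("".join(".#"[c] for c in row))
        row = step(row)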

Unintentional Turing completeness

Some games and other software are Turing-complete by accident.
Video games:
Card games:
Zero-person games (simulations):

Non-Turing-complete languages

Many computational languages exist that are not Turing complete. One such example is the set of regular languages, which are generated by regular expressions and which are recognized by finite automata. A more powerful but still not Turing-complete extension of finite automata is the category of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compiling. Further examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions.[citation needed]
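
A small illustrative contrast (my own sketch): a regular language is easily handled by a finite automaton, here via a regular expression, while balanced parentheses already need the unbounded counting of a pushdown automaton:

    import re

    # Regular language: identifiers; a finite automaton (compiled regex) suffices.
    identifier = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\Z")
    print(bool(identifier.match("parse_tree2")))      # True

    # Balanced parentheses are not regular: recognising them needs a counter
    # (in effect a one-symbol stack), i.e. pushdown-automaton power.
    def balanced(s: str) -> bool:
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    print(balanced("(()())"), balanced("(()"))        # True False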

In total functional programming languages, such as Charity and Epigram, all functions are total and must terminate. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types. The LOOP language is designed so that it computes only the functions that are primitive recursive. All of these compute proper subsets of the total computable functions, since the full set of total computable functions is not computably enumerable. Also, since all functions in these languages are total, algorithms for recursively enumerable sets cannot be written in these languages, in contrast with Turing machines.
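
A rough sketch of the LOOP-style restriction, written here in Python but using only loops whose bounds are fixed before the loop begins, so every program terminates:

    # Addition and multiplication built from bounded loops only, in the spirit
    # of the LOOP language; such functions are primitive recursive.
    def add(x: int, y: int) -> int:
        for _ in range(y):        # the bound y is fixed before the loop runs
            x += 1
        return x

    def mul(x: int, y: int) -> int:
        result = 0
        for _ in range(y):
            result = add(result, x)
        return result

    print(mul(6, 7))   # 42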

Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not.

Data languages

The notion of Turing-completeness does not apply to languages such as XML, HTML, JSON, YAML and S-expressions, because they are typically used to represent structured data, not describe computation. These are sometimes referred to as markup languages, or more properly as "container languages" or "data description languages".[citation needed] However, Rule 110, a Turing-complete cellular automaton, has been successfully implemented in CSS 3, thus demonstrating, to some extent, the Turing completeness of CSS.[11]

Friday, July 6, 2018

Programming paradigm

From Wikipedia, the free encyclopedia

Programming paradigms are a way to classify programming languages based on their features. Languages can be classified into multiple paradigms.

Some paradigms are concerned mainly with implications for the execution model of the language, such as allowing side effects, or whether the sequence of operations is defined by the execution model. Other paradigms are concerned mainly with the way that code is organized, such as grouping a code into units along with the state that is modified by the code. Yet others are concerned mainly with the style of syntax and grammar.

Common programming paradigms include:
  • imperative which allows side effects,
  • declarative which does not state the order in which operations execute,
    • functional which disallows side effects,
    • logic which has a particular style of execution model coupled to a particular style of syntax and grammar, and
  • symbolic programming which has a particular style of syntax and grammar.[1][2][3]
For example, languages that fall into the imperative paradigm have two main features: they state the order in which operations occur, with constructs that explicitly control that order, and they allow side effects, in which state can be modified at one point in time, within one unit of code, and then later read at a different point in time inside a different unit of code. The communication between the units of code is not explicit. Meanwhile, in object-oriented programming, code is organized into objects that contain state that is only modified by the code that is part of the object. Most object-oriented languages are also imperative languages. In contrast, languages that fit the declarative paradigm do not state the order in which to execute operations. Instead, they supply a number of operations that are available in the system, along with the conditions under which each is allowed to execute. The implementation of the language's execution model tracks which operations are free to execute and chooses the order on its own. More at Comparison of multi-paradigm programming languages.
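
As a small illustration in a single language (Python, chosen only for brevity), the imperative version below fixes the order of operations and mutates an accumulator, while the second version merely describes the desired result:

    numbers = [3, 1, 4, 1, 5, 9, 2, 6]

    # Imperative: explicit sequencing and a mutated accumulator (a side effect).
    squares_of_evens = []
    for n in numbers:
        if n % 2 == 0:
            squares_of_evens.append(n * n)

    # Declarative-flavoured: state what the result is, not how to update state.
    squares_of_evens_2 = [n * n for n in numbers if n % 2 == 0]

    assert squares_of_evens == squares_of_evens_2 == [16, 4, 36]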

Overview

Figure: Overview of the various programming paradigms according to Peter Van Roy.[4]:5[5]

Just as software engineering (as a process) is defined by differing methodologies, so the programming languages (as models of computation) are defined by differing paradigms. Some languages are designed to support one paradigm (Smalltalk supports object-oriented programming, Haskell supports functional programming), while other programming languages support multiple paradigms (such as Object Pascal, C++, Java, C#, Scala, Visual Basic, Common Lisp, Scheme, Perl, PHP, Python, Ruby, Oz, and F#). For example, programs written in C++, Object Pascal or PHP can be purely procedural, purely object-oriented, or can contain elements of both or other paradigms. Software designers and programmers decide how to use those paradigm elements.

In object-oriented programming, programs are treated as a set of interacting objects. In functional programming, programs are treated as a sequence of stateless function evaluations. When programming computers or systems with many processors, in process-oriented programming, programs are treated as sets of concurrent processes acting on logically shared data structures.

Many programming paradigms are as well known for the techniques they forbid as for those they enable. For instance, pure functional programming disallows use of side-effects, while structured programming disallows use of the goto statement. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to earlier styles.[6] Yet, avoiding certain techniques can make it easier to understand program behavior, and to prove theorems about program correctness.

Programming paradigms can also be compared with programming models which allow invoking an execution model by using only an API. Programming models can also be classified into paradigms, based on features of the execution model.

For parallel computing, using a programming model instead of a language is common. The reason is that details of the parallel hardware leak into the abstractions used to program the hardware. This causes the programmer to have to map patterns in the algorithm onto patterns in the execution model (which have been inserted due to leakage of hardware into the abstraction). As a consequence, no one parallel programming language maps well to all computation problems. It is thus more convenient to use a base sequential language and insert API calls to parallel execution models, via a programming model. Such parallel programming models can be classified according to abstractions that reflect the hardware, such as shared memory, distributed memory with message passing, notions of place visible in the code, and so forth. These can be considered flavors of programming paradigm that apply to only parallel languages and programming models.

Criticism

Some programming language researchers criticise the notion of paradigms as a classification of programming languages, e.g. Harper [7], and Krishnamurthi.[8] They argue that many programming languages cannot be strictly classified into one paradigm, but rather include features from several paradigms. See Comparison of multi-paradigm programming languages.

History

Different approaches to programming have developed over time, being identified as such either at the time or retrospectively. An early approach consciously identified as such is structured programming, advocated since the mid 1960s. The concept of a "programming paradigm" as such dates at least to 1978, in the Turing Award lecture of Robert W. Floyd, entitled The Paradigms of Programming, which cites the notion of paradigm as used by Thomas Kuhn in his The Structure of Scientific Revolutions (1962).[9]

Machine code

The lowest-level programming paradigms are machine code, which directly represents the instructions (the contents of program memory) as a sequence of numbers, and assembly language where the machine instructions are represented by mnemonics and memory addresses can be given symbolic labels. These are sometimes called first- and second-generation languages.

In the 1960s, assembly languages were developed to support library COPY and quite sophisticated conditional macro generation and preprocessing abilities, CALL to subroutines, external variables and common sections (globals), enabling significant code re-use and isolation from hardware specifics via use of logical operators such as READ/WRITE/GET/PUT. Assembly was, and still is, used for time-critical systems and often in embedded systems as it gives the most direct control of what the machine does.

Procedural languages

The next advance was the development of procedural languages. These third-generation languages (the first described as high-level languages) use vocabulary related to the problem being solved. For example,
  • COmmon Business Oriented Language (COBOL) – uses terms like file, move and copy.
  • FORmula TRANslation (FORTRAN) – using mathematical language terminology, it was developed mainly for scientific and engineering problems.
  • ALGOrithmic Language (ALGOL) – focused on being an appropriate language to define algorithms, while using mathematical language terminology and targeting scientific and engineering problems just like FORTRAN.
  • Programming Language One (PL/I) – a hybrid commercial-scientific general purpose language supporting pointers.
  • Beginner's All-purpose Symbolic Instruction Code (BASIC) – it was developed to enable more people to write programs.
  • C – a general-purpose programming language, initially developed by Dennis Ritchie between 1969 and 1973 at AT&T Bell Labs.
All these languages follow the procedural paradigm. That is, they describe, step by step, exactly the procedure that should, according to the particular programmer at least, be followed to solve a specific problem. The efficacy and efficiency of any such solution are both therefore entirely subjective and highly dependent on that programmer's experience, inventiveness, and ability.

Object-oriented programming

Following the widespread use of procedural languages, object-oriented programming (OOP) languages were created, such as Simula, Smalltalk, C++, C#, Eiffel, PHP, and Java. In these languages, data and methods to manipulate it are kept as one unit called an object. With perfect encapsulation, one of the distinguishing features of OOP, the only way that another object or user would be able to access the data is via the object's methods. Thus, the inner workings of an object may be changed without affecting any code that uses the object. There is still some controversy raised by Alexander Stepanov, Richard Stallman[10] and other programmers, concerning the efficacy of the OOP paradigm versus the procedural paradigm. The need for every object to have associative methods leads some skeptics to associate OOP with software bloat; an attempt to resolve this dilemma came through polymorphism.
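
A minimal sketch of the encapsulation idea in Python (the class and names are purely illustrative): the state lives inside the object and callers reach it only through the object's methods, so the internal representation can change without breaking callers:

    class Counter:
        def __init__(self) -> None:
            self._count = 0        # internal state, accessed only via methods

        def increment(self) -> None:
            self._count += 1

        def value(self) -> int:
            return self._count

    c = Counter()
    c.increment()
    c.increment()
    print(c.value())   # 2
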
Because object-oriented programming is considered a paradigm, not a language, it is possible to create even an object-oriented assembler language. High Level Assembly (HLA) is an example of this that fully supports advanced data types and object-oriented assembly language programming – despite its early origins. Thus, differing programming paradigms can be seen rather like motivational memes of their advocates, rather than necessarily representing progress from one level to the next[citation needed]. Precise comparisons of the efficacy of competing paradigms are frequently made more difficult because of new and differing terminology applied to similar entities and processes together with numerous implementation distinctions across languages.

Further paradigms

Literate programming, as a form of imperative programming, structures programs as a human-centered web, as in a hypertext essay: documentation is integral to the program, and the program is structured following the logic of prose exposition, rather than compiler convenience.

Independent of the imperative branch, declarative programming paradigms were developed. In these languages, the computer is told what the problem is, not how to solve the problem – the program is structured as a set of properties to find in the expected result, not as a procedure to follow. Given a database or a set of rules, the computer tries to find a solution matching all the desired properties. An archetype of a declarative language is the fourth generation language SQL, and the family of functional languages and logic programming.

Functional programming is a subset of declarative programming. Programs written using this paradigm use functions, blocks of code intended to behave like mathematical functions. Functional languages discourage changes in the value of variables through assignment, making a great deal of use of recursion instead.
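
A brief sketch of that style in Python (illustrative only): values are not reassigned, repetition is done by recursion, and functions are passed around as ordinary values:

    from functools import reduce

    def length(xs):
        # Recursion instead of a loop with a mutating counter.
        return 0 if not xs else 1 + length(xs[1:])

    def compose(f, g):
        return lambda x: f(g(x))

    double_then_negate = compose(lambda x: -x, lambda x: 2 * x)
    print(length([10, 20, 30]), double_then_negate(5))       # 3 -10
    print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0))   # 10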

The logic programming paradigm views computation as automated reasoning over a body of knowledge. Facts about the problem domain are expressed as logic formulae, and programs are executed by applying inference rules over them until an answer to the problem is found, or the set of formulae is proved inconsistent.
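
A toy sketch of that view (facts and a single inference rule, applied until nothing new is derivable; all names are made up for illustration):

    # Forward chaining: derive grandparent facts from parent facts.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def rule_grandparent(fs):
        derived = set()
        parents = [f for f in fs if f[0] == "parent"]
        for (_, a, b) in parents:
            for (_, c, d) in parents:
                if b == c:
                    derived.add(("grandparent", a, d))
        return derived

    while True:
        new = rule_grandparent(facts) - facts
        if not new:            # fixed point: no new facts can be inferred
            break
        facts |= new

    print(("grandparent", "alice", "carol") in facts)   # True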

Symbolic programming is a paradigm that describes programs able to manipulate formulas and program components as data.[3] Programs can thus effectively modify themselves, and appear to "learn", making them suited for applications such as artificial intelligence, expert systems, natural-language processing and computer games. Languages that support this paradigm include Lisp and Prolog.[11]
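
A hedged sketch of the idea in Python: the formula is ordinary data (nested tuples) that the program inspects and rewrites, here by symbolic differentiation:

    def diff(expr, var):
        if expr == var:
            return 1
        if not isinstance(expr, tuple):        # any other atom is a constant
            return 0
        op, a, b = expr
        if op == "+":
            return ("+", diff(a, var), diff(b, var))
        if op == "*":                          # product rule
            return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
        raise ValueError(f"unknown operator {op!r}")

    # d/dx of (x * x + 3), kept as a symbolic expression:
    print(diff(("+", ("*", "x", "x"), 3), "x"))
    # ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0)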

Support for multiple paradigms

Most programming languages support more than one programming paradigm to allow programmers to use the most suitable programming style and associated language constructs for a given job.[12]

Quantum programming

From Wikipedia, the free encyclopedia

Quantum programming is the process of assembling sequences of instructions, called quantum programs, that are capable of running on a quantum computer. Quantum programming languages help express quantum algorithms using high-level constructs.[1]

Quantum instruction sets

Quantum instruction sets are used to turn higher level algorithms into physical instructions that can be executed on quantum processors. Sometimes these instructions are specific to a given hardware platform, e.g. ion traps or superconducting qubits.

Quil

Quil is an instruction set architecture for quantum computing that first introduced a shared quantum/classical memory model. It was introduced by Robert Smith, Michael Curtis, and William Zeng in A Practical Quantum Instruction Set Architecture.[2] Many quantum algorithms (including quantum teleportation, quantum error correction, simulation,[3][4] and optimization algorithms[5]) require a shared memory architecture.
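
To hint at what that shared quantum/classical memory looks like, here is the text of a small Quil-style program held in a Python string; the syntax is my recollection of published Quil and should be checked against the specification rather than taken as authoritative:

    # A Bell-pair program: classical memory "ro" is declared alongside the
    # quantum gates, and measurement results are written into it.
    bell_pair_quil = "\n".join([
        "DECLARE ro BIT[2]",
        "H 0",
        "CNOT 0 1",
        "MEASURE 0 ro[0]",
        "MEASURE 1 ro[1]",
    ])
    print(bell_pair_quil)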

OpenQASM

OpenQASM[6] is the intermediate representation introduced by IBM for use with their Quantum Experience.

Quantum programming languages

There are two main groups of quantum programming languages: imperative quantum programming languages and functional quantum programming languages.

Imperative languages

The most prominent representatives of the imperative languages are QCL,[7] LanQ[8] and Q|SI>.[9]

QCL

Quantum Computation Language (QCL) is one of the first implemented quantum programming languages.[10] The most important feature of QCL is the support for user-defined operators and functions. Its syntax resembles the syntax of the C programming language and its classical data types are similar to primitive data types in C. One can combine classical code and quantum code in the same program.

Quantum pseudocode

Quantum pseudocode, proposed by E. Knill, is the first formalized language for the description of quantum algorithms. It was introduced together with, and is tightly connected to, a model of quantum machine called the Quantum Random Access Machine (QRAM).

Q|SI>

Q|SI>[9] is a platform embedded in the .NET language supporting quantum programming in a quantum extension of the while-language.[11] This platform includes a compiler of the quantum while-language[12] and a chain of tools for the simulation of quantum computation, optimisation of quantum circuits, termination analysis of quantum programs,[13] and verification of quantum programs.[14][15]

Q language

Q Language is the second implemented imperative quantum programming language.[16] Q Language was implemented as an extension of the C++ programming language. It provides classes for basic quantum operations like QHadamard, QFourier, QNot, and QSwap, which are derived from the base class Qop. New operators can be defined using the C++ class mechanism.

Quantum memory is represented by class Qreg.
 
    Qreg x1;       // 1-qubit quantum register with initial value 0
    Qreg x2(2, 0); // 2-qubit quantum register with initial value 0

The computation process is executed using a provided simulator. Noisy environments can be simulated using parameters of the simulator.

qGCL

Quantum Guarded Command Language (qGCL) was defined by P. Zuliani in his PhD thesis. It is based on Guarded Command Language created by Edsger Dijkstra.

It can be described as a language for the specification of quantum programs.

QMASM

Quantum Macro Assembler (QMASM) is a low-level language specific to quantum annealers such as the D-Wave.[17]

Functional languages

Efforts are underway to develop functional programming languages for quantum computing. Functional programming languages are well-suited for reasoning about programs. Examples include Selinger's QPL,[18] and the Haskell-like language QML by Altenkirch and Grattage.[19][20] Higher-order quantum programming languages, based on lambda calculus, have been proposed by van Tonder,[21] Selinger and Valiron[22] and by Arrighi and Dowek.[23]

QFC and QPL

QFC and QPL are two closely related quantum programming languages defined by Peter Selinger.
They differ only in their syntax: QFC uses a flow chart syntax, whereas QPL uses a textual syntax. These languages have classical control flow but can operate on quantum or classical data. Selinger gives a denotational semantics for these languages in a category of superoperators.

QML

QML is a Haskell-like quantum programming language by Altenkirch and Grattage.[19] Unlike Selinger's QPL, this language takes duplication, rather than discarding, of quantum information as a primitive operation. Duplication in this context is understood to be the operation that maps |φ⟩ to |φ⟩ ⊗ |φ⟩, and is not to be confused with the impossible operation of cloning; the authors claim it is akin to how sharing is modeled in classical languages. QML also introduces both classical and quantum control operators, whereas most other languages rely on classical control.

An operational semantics for QML is given in terms of quantum circuits, while a denotational semantics is presented in terms of superoperators, and these are shown to agree. Both the operational and denotational semantics have been implemented (classically) in Haskell.[24]

LIQUi|>

LIQUi|> (pronounced 'liquid') is a quantum simulation extension on the F# programming language.[25] It is currently being developed by the Quantum Architectures and Computation Group (QuArC)[26] part of the StationQ efforts at Microsoft Research. LIQUi|> seeks to allow theorists to experiment with quantum algorithm design before physical quantum computers are available for use.[27]

It includes a programming language, optimization and scheduling algorithms, and quantum simulators. LIQUi|> can be used to translate a quantum algorithm written in the form of a high-level program into the low-level machine instructions for a quantum device.[28]

Quantum lambda calculi

Quantum lambda calculi are extensions of the classical lambda calculus introduced by Alonzo Church and Stephen Cole Kleene in the 1930s. The purpose of quantum lambda calculi is to extend quantum programming languages with a theory of higher-order functions.

The first attempt to define a quantum lambda calculus was made by Philip Maymin in 1996.[29] His lambda-q calculus is powerful enough to express any quantum computation. However, this language can efficiently solve NP-complete problems, and therefore appears to be strictly stronger than the standard quantum computational models (such as the quantum Turing machine or the quantum circuit model). Therefore, Maymin's lambda-q calculus is probably not implementable on a physical device.

In 2003, André van Tonder defined an extension of the lambda calculus suitable for proving correctness of quantum programs. He also provided an implementation in the Scheme programming language.[30]

In 2004, Selinger and Valiron defined a strongly typed lambda calculus for quantum computation with a type system based on linear logic.

Quipper

Quipper was published in 2013.[31] It is implemented as an embedded language, using Haskell as the host language.[32] For this reason, quantum programs written in Quipper are written in Haskell using provided libraries. For example, the following code implements preparation of a superposition:

    import Quipper
    
    spos :: Bool -> Circ Qubit
    spos b = do q <- qinit b
                r <- hadamard q
                return r

Multi-Paradigm languages

Q#

Q# is a quantum programming language from Microsoft.

Quantum platforms

Quantum operating systems include t|ket> by Cambridge Quantum Computing, Rigetti Forest, and an OS currently under development by Microsoft.

Lie point symmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_point_symmetry     ...