Wednesday, January 16, 2019

Indeterminacy (philosophy)

From Wikipedia, the free encyclopedia

Indeterminacy, in philosophy, can refer both to common scientific and mathematical concepts of uncertainty and their implications and to another kind of indeterminacy deriving from the nature of definition or meaning. It is related to deconstructionism and to Nietzsche's criticism of the Kantian noumenon.

Indeterminacy in philosophy

Introduction

The problem of indeterminacy arises when one observes the eventual circularity of virtually every possible definition. It is easy to find loops of definition in any dictionary, because this seems to be the only way that certain concepts, and generally very important ones such as that of existence, can be defined in the English language. A definition is a collection of other words, and in any finite dictionary if one continues to follow the trail of words in search of the precise meaning of any given term, one will inevitably encounter this linguistic indeterminacy. 
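The circularity described here can be made concrete: model a dictionary as a directed graph from each term to the words in its definition, and any chain of lookups in a finite lexicon must eventually revisit a term. A minimal sketch, using an invented toy lexicon rather than a real dictionary:

```python
# Toy illustration: a (fictional) mini-lexicon mapping each term to the
# words used in its definition. Following definitional links must
# eventually loop, since the lexicon is finite.
toy_lexicon = {
    "existence": ["state", "being"],
    "state": ["condition"],
    "condition": ["state"],
    "being": ["existence"],
}

def find_cycle(lexicon, start):
    """Depth-first walk of definition links; return the first loop found."""
    path = []

    def dfs(term):
        if term in path:                      # term defined via itself
            return path[path.index(term):] + [term]
        for word in lexicon.get(term, []):
            path.append(term)
            cycle = dfs(word)
            if cycle:
                return cycle
            path.pop()
        return None

    return dfs(start)

print(find_cycle(toy_lexicon, "existence"))
```

Any start term in such a toy lexicon yields a definitional loop; enlarging the lexicon only postpones, and never removes, the circularity.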

Philosophers and scientists generally try to eliminate indeterminate terms from their arguments, since any indeterminate thing is unquantifiable and untestable; similarly, any hypothesis which consists of a statement of the properties of something unquantifiable or indefinable cannot be falsified and thus cannot be said to be supported by evidence that does not falsify it. This is related to Popper's discussions of falsifiability in his works on the scientific method. The quantifiability of data collected during an experiment is central to the scientific method, since reliable conclusions can only be drawn from replicable experiments, and since in order to establish observer agreement scientists must be able to quantify experimental evidence.

Kant and hazards of positing the "thing in itself"

Immanuel Kant unwittingly proposed one answer to this problem in his Critique of Pure Reason by stating that there must "exist" a "thing in itself" – a thing which is the cause of phenomena, but not a phenomenon itself. But, so to speak, "approximations" of "things in themselves" crop up in many models of empirical phenomena. Singularities in physics, such as gravitational singularities, certain aspects of which (e.g., their unquantifiability) can seem almost to mirror various "aspects" of the proposed "thing in itself", are generally eliminated (or attempts are made at eliminating them) in newer, more precise models of the universe.

Definitions of various psychiatric disorders stem, according to philosophers who draw on the work of Michel Foucault, from a belief that something unobservable and indescribable is fundamentally "wrong" with the mind of whoever suffers from such a disorder. Proponents of Foucault's treatment of the concept of insanity would assert that one need only try to quantify various characteristics of such disorders as presented in today's Diagnostic and Statistical Manual (e.g., delusion, one of the diagnostic criteria which a patient must exhibit to be considered schizophrenic) in order to discover that the field of study known as abnormal psychology relies upon indeterminate concepts in defining virtually every "mental disorder" it describes. The quality that makes a belief a delusion is indeterminate to the extent that it is unquantifiable; arguments that delusion is determined by popular sentiment (i.e., "almost no-one believes that he or she is made of cheese, and thus that belief is a delusion") would lead to the conclusion that, for example, Alfred Wegener's assertion of continental drift was a delusion, since it was dismissed for decades after it was made.

Nietzsche and the indeterminacy of the "thing in itself"

Relevant criticism of Kant's original formulation of the "thing in itself" can be found in the works of Friedrich Wilhelm Nietzsche, who argued against what he held to be the indeterminate nature of such concepts as the Platonic idea, the subject, the Kantian noumenon, the opposition of "appearance" to "reality", etc. Nietzsche concisely argued against Kant's noumenon in his On Truth and Lies in a Nonmoral Sense as follows:
The 'thing in itself' (which is precisely what the pure truth, apart from any of its consequences, would be) is likewise something quite incomprehensible to the creator of language and something not in the least worth striving for.
In his Beyond Good and Evil, Nietzsche argues against the "misleading significance of words" and its production of a "thing in itself":
I would repeat it, however, a hundred times, that 'immediate certainty,' as well as 'absolute knowledge' and the 'thing in itself,' involve a CONTRADICTIO IN ADJECTO; we really ought to free ourselves from the misleading significance of words!
Furthermore, Nietzsche argued against such singularities as the atom in the scientific models of his day in The Will to Power:
For all its detachment and freedom from emotion, our science is still the dupe of linguistic habits; it has never got rid of those changelings called 'subjects.' The atom is one such changeling, another is the Kantian 'thing-in-itself.'

Approximation versus equality

The concept of something that is unapproachable but always further-approximable has led to a rejection by philosophers like Nietzsche of the concept of exact equality in general in favor of that of approximate similarity:
Every word instantly becomes a concept precisely insofar as it is not supposed to serve as a reminder of the unique and entirely individual original experience to which it owes its origin; but rather, a word becomes a concept insofar as it simultaneously has to fit countless more or less similar cases – which means, purely and simply, cases which are never equal and thus altogether unequal.
What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding. Truths are illusions which we have forgotten are illusions – they are metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins.
If one states an equation between two things, one states, in effect, that they are the same thing. It can be argued that this cannot possibly be true, since one will then consider the properties which the two sides of the equation share – that which makes them "equal" – but one also can, and does, consider them as two separate concepts. Even in a mathematical statement as simple as "x=x", one encounters fundamental differences between the two "x"es under consideration: firstly, that there are two distinct "x"es, in that they neither occupy the same space on this page nor in one's own mind. There would otherwise be only one "x". Secondly, that if two things were absolutely equal in every possible respect, then there would necessarily be no reason to consider their equality. Nothing could lead anyone to consider the possibility or impossibility of their equality if there were no properties not shared between "them", since there would necessarily be no relationship between them whatsoever. Thirdly, and most importantly, if two things were equal in every possible respect they would necessarily not be two things, but the very same thing, since there would be no difference to separate them. 

In examples as odd as this, the differences between two approximately equal things may be very small indeed, and it is certainly true that they are quite irrelevant to most discussions. Acceptance of the reflexive property illustrated above has led to useful mathematical discoveries which have influenced the life of anyone reading this article on a computer. But in an examination of the possibility of the determinacy of any possible concept, differences like this are supremely relevant since that quality which could possibly make two separate things "equal" seems to be indeterminate.

The indeterminacy of the pharmakon in Derrida's Plato's Pharmacy

Indeterminacy was discussed in one of Jacques Derrida's early works, Plato's Pharmacy (1969), a reading of Plato's Phaedrus and Phaedo. Plato writes of a fictionalized conversation between Socrates and a student, in which Socrates tries to convince the student that writing is inferior to speech. Socrates uses the Egyptian myth of Thoth's creation of writing to illustrate his point. As the story goes, Thoth presents his invention to the god-king of Upper Egypt for judgment. Upon its presentation, Thoth offers script as a pharmakon for the Egyptian people. The Greek word pharmakon poses a quandary for translators: it means both a remedy and a poison. In proffering the pharmakon, Thoth presents it in its true, double aspect: both a harm and a benefit. The god-king, however, refuses the invention. Through various reasonings, he determines the pharmakon of writing to be a bad thing for the Egyptian people. The pharmakon, the undecidable, has been returned decided. The problem, as Derrida reasons, is this: since the word pharmakon, in the original Greek, means both a remedy and a poison, it cannot be determined as fully remedy or fully poison. Amon rejected writing as fully poison in Socrates' retelling of the tale, thus shutting out the other possibilities.

Foucault and the indeterminacy of insanity

The philosopher Michel Foucault wrote about the existence of such problems of precise definition in the very concept of insanity itself – a very rough approximation of his argument can be found in the late social commentator and journalist Hunter S. Thompson's book, Kingdom of Fear:
The only difference between the Sane and the Insane, is IN and yet within this world, the Sane have the power to have the Insane locked up.
Another summary of Foucault's original argument concerning the indeterminacy of the concept of insanity in his Madness and Civilization can be found in the following excerpt from the Literature, Arts, and Medicine Database:
Central to this is the notion of confinement as a meaningful exercise. Foucault's history explains how the mad came first to be confined; how they became identified as confined due to moral and economic factors that determined those who ought to be confined; how they became perceived as dangerous through their confinement, partly by way of atavistic identification with the lepers whose place they had come to occupy; how they were 'liberated' by Pinel and Tuke, but in their liberation remained confined, both physically in asylums and in the designation of being mad; and how this confinement subsequently became enacted in the figure of the psychiatrist, whose practice is 'a certain moral tactic contemporary with the end of the eighteenth century, preserved in the rites of the asylum life, and overlaid by the myths of positivism.' Science and medicine, notably, come in at the later stages, as practices 'elaborated once this division' between the mad and the sane has been made (ix).
In The Archaeology of Knowledge, Foucault addresses indeterminacy directly by discussing the origin of the meaning of concepts:
Foucault directs his analysis toward the 'statement', the basic unit of discourse that he believes has been ignored up to this point. 'Statement' is the English translation from French énoncé (that which is enunciated or expressed), which has a peculiar meaning for Foucault. 'Énoncé' for Foucault means that which makes propositions, utterances, or speech acts meaningful. In this understanding, statements themselves are not propositions, utterances, or speech acts. Rather, statements create a network of rules establishing what is meaningful, and it is these rules that are the preconditions for propositions, utterances, or speech acts to have meaning. Statements are also 'events'. Depending on whether or not it complies with the rules of meaning, a grammatically correct sentence may still lack meaning; inversely, a grammatically incorrect sentence may still be meaningful. Statements depend on the conditions in which they emerge and exist within a field of discourse. It is huge collections of statements, called discursive formations, toward which Foucault aims his analysis.

Rather than looking for a deeper meaning underneath discourse or looking for the source of meaning in some transcendental subject, Foucault analyzes the conditions of existence for meaning. In order to show the principles of meaning production in various discursive formations he details how truth claims emerge during various epochs on the basis of what was actually said and written during these periods of time.
The difference described by Foucault between the sane and the insane does have observable and very real effects on millions of people daily and can be characterized in terms of those effects, but it can also serve to illustrate a particular effect of the indeterminacy of definition: i.e., that insofar as the general public tends not to characterize or define insanity in very precise terms, it tends, according to Foucault, unnecessarily and arbitrarily to confine some of its members on an irrational basis. The less-precisely such states as "insanity" and "criminality" are defined in a society, the more likely that society is to fail to continue over time to describe the same behaviors as characteristic of those states (or, alternately, to characterize such states in terms of the same behaviors).

Indeterminacy in discourse analysis

Steve Hoenisch asserts in his article Interpretation and Indeterminacy in Discourse Analysis that "[T]he exact meaning of a speaker's utterance in a contextualized exchange is often indeterminate. Within the context of the analysis of the teacher-pupil exchange, I will argue for the superiority of interactional linguistics over speech act theory because it reduces the indeterminacy and yields a more principled interpretation..."

Indeterminacy and consciousness

Richard Dawkins, the man who coined the term meme in the 1970s, described the concept of faith in his documentary, Root of All Evil?, as "the process of non-thinking". In the documentary, he used Bertrand Russell's analogy between a teapot orbiting the sun (something too small to be revealed even by the most powerful telescopes) and the object of one's faith (in this particular case, God) to explain that a highly indeterminate idea can self-replicate freely: "Everybody in the society had faith in the teapot. Stories of the teapot had been handed down for generations as part of the tradition of society. There are holy books about the teapot."

In Darwin's Dangerous Idea, Daniel Dennett argues against the existence of determinate meaning (in this case, of the subjective experience of vision for frogs) via an explanation of their indeterminacy in the chapter entitled The Evolution of Meanings, in the section The Quest for Real Meanings:
Unless there were 'meaningless' or 'indeterminate' variation in the triggering conditions of the various frogs' eyes, there could be no raw material [...] for selection for a new purpose to act upon. The indeterminacy that Fodor (and others) see as a flaw [...] is actually a prediction for such evolution [of "purpose"]. The idea that there must be something determinate that the frog's eye really means – some possibly unknowable proposition in froggish that expresses exactly what the frog's eye is telling the frog's brain – is just essentialism applied to meaning (or function). Meaning, like function on which it so directly depends, is not something determinate at its birth...
Dennett argues, controversially, against qualia in Consciousness Explained. Qualia are attacked from several directions at once: he maintains that they do not exist (or that they are too ill-defined to play any role in science, or that they are really something else, i.e. behavioral dispositions), and that they cannot simultaneously have all the properties attributed to them by philosophers – incorrigibility, ineffability, privacy, direct accessibility, and so on. The multiple drafts theory is leveraged to show that facts about qualia are not definite. Critics object that one's own qualia are subjectively quite clear and distinct to oneself.

The self-replicating nature of memes is a partial explanation of the recurrence of indeterminacies in language and thought. The wide influences of Platonism and Kantianism in Western philosophy can arguably be partially attributed to the indeterminacies of some of their most fundamental concepts (namely, the Idea and the Noumenon, respectively). 

For a given meme to exhibit replication and heritability – that is, for it to be able to make an imperfect copy of itself which is more likely to share any given trait with its "parent" meme than with some random member of the general "population" of memes – it must in some way be mutable, since memetic replication occurs by means of human conceptual imitation rather than via the discrete molecular processes that govern genetic replication. (If a statement were to generate copies of itself that didn't meaningfully differ from it, that process of copying would more accurately be described as "duplication" than as "replication", and it would be incorrect to term these statements "memes"; the same would be true if the "child" statements did not noticeably inherit a substantial proportion of their traits from their "parent" statements.) In other words, if a meme is defined roughly (and somewhat arbitrarily) as a statement (or as a collection of statements, like Foucault's "discursive formations") that inherits some, but not all, of its properties (or elements of its definition) from its "parent" memes and which self-replicates, then indeterminacy of definition could be seen as advantageous to memetic replication, since an absolute rigidity of definition would preclude memetic adaptation.
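The distinction drawn above between replication (imperfect copying with heritability) and mere duplication can be illustrated with a toy simulation; the trait alphabet, string length, and mutation rate below are arbitrary choices, not drawn from the text. A "child" copy that mutates slightly still shares far more traits with its "parent" than with a random member of the population:

```python
import random

random.seed(0)
TRAITS = "abcd"          # arbitrary trait alphabet
LENGTH = 20              # traits per "meme" (arbitrary)

def replicate(parent, mutation_rate=0.1):
    """Imperfect copy: each trait survives with high probability.
    With mutation_rate = 0 this would be mere duplication."""
    return "".join(c if random.random() > mutation_rate
                   else random.choice(TRAITS) for c in parent)

def shared(x, y):
    """Number of traits two memes have in common, position by position."""
    return sum(a == b for a, b in zip(x, y))

population = ["".join(random.choice(TRAITS) for _ in range(LENGTH))
              for _ in range(50)]
parent = population[0]
child = replicate(parent)

print("traits shared with parent:", shared(child, parent))
print("average shared with a random meme:",
      sum(shared(child, m) for m in population) / len(population))
```

The child inherits most, but not all, of its parent's traits, which is exactly the combination of heritability and mutability the paragraph above requires of a meme.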

It is important to note that indeterminacy in linguistics can arguably partially be defeated by the fact that languages are always changing. However, what the entire language and its collected changes continue to reflect is sometimes still considered to be indeterminate.

Criticism

Persons of faith argue that faith "is the basis of all knowledge". The Wikipedia article on faith states that "one must assume, believe, or have faith in the credibility of a person, place, thing, or idea in order to have a basis for knowledge." In this way the object of one's faith is similar to Kant's noumenon.

This would seem to attempt to make direct use of the indeterminacy of the object of one's faith as evidential support of its existence: if the object of one's faith were to be proven to exist (i.e., if it were no longer of indeterminate definition, or if it were no longer unquantifiable, etc.), then faith in that object would no longer be necessary; arguments from authority such as those mentioned above wouldn't either; all that would be needed to prove its existence would be scientific evidence. Thus, if faith is to be considered as a reliable basis for knowledge, persons of faith would seem, in effect, to assert that indeterminacy is not only necessary, but good (see Nassim Taleb).

Indeterminacy in new physical theories

Science generally attempts to eliminate vague definitions, causally inert entities, and indeterminate properties, via further observation, experimentation, characterization, and explanation. Occam's razor tends to eliminate causally inert entities from functioning models of quantifiable phenomena, but some quantitative models, such as quantum mechanics, actually imply certain indeterminacies, such as the relative indeterminacy of quantum particles' positions to the precision with which their momenta can be measured (and vice versa).
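The quantum-mechanical indeterminacy just mentioned is quantitative: Heisenberg's relation bounds the product of position and momentum uncertainties from below by ħ/2. A small back-of-the-envelope calculation, using standard physical constants and an electron confined to an atom-sized region:

```python
# Heisenberg's uncertainty relation: delta_x * delta_p >= hbar / 2.
# Given a position uncertainty, compute the minimum compatible
# momentum uncertainty. Constants are the standard CODATA values.
HBAR = 1.054571817e-34          # reduced Planck constant, J*s
M_ELECTRON = 9.1093837015e-31   # electron mass, kg

def min_momentum_uncertainty(delta_x):
    """Smallest delta_p allowed for a given delta_x, in kg*m/s."""
    return HBAR / (2 * delta_x)

# An electron confined to an atom-sized region (~1e-10 m):
dp = min_momentum_uncertainty(1e-10)
print(f"minimum delta_p: {dp:.3e} kg*m/s")
print(f"implied velocity spread: {dp / M_ELECTRON:.3e} m/s")
```

The implied velocity spread of several hundred kilometers per second shows that this indeterminacy is not a defect of instruments but a feature of the model itself.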

One ardent supporter of the possibility of a final unifying theory (and thus, arguably, of the possibility of the end of some current indeterminacies) in physics, Steven Weinberg, stated in an interview with PBS that:
Sometimes [...] people say that surely there's no final theory because, after all, every time we've made a step toward unification or toward simplification we always find more and more complexity there. That just means we haven't found it yet. Physicists never thought they had the final theory.
The Wikipedia article on the possibility of such a "theory of everything" notes that 
Other possibilities which may frustrate the explanatory capacity of a TOE may include sensitivity to the boundary conditions of the universe, or the existence of mathematical chaos in its solutions, making its predictions precise, but useless.
Chaos theory argues that precise prediction of the behavior of complex systems becomes impossible because small errors in the measurement of initial conditions grow exponentially, so no observer can gather data precise enough for long-range prediction.
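A standard illustration of this sensitivity is the logistic map in its chaotic regime: two trajectories whose starting points differ by one part in ten billion soon diverge completely. A minimal sketch (the parameter r = 4 and the starting value are conventional illustrative choices):

```python
# Logistic map x -> r*x*(1 - x) with r = 4 (chaotic regime).
# Two trajectories differing by 1e-10 at the start decorrelate
# completely within a few dozen steps.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)

for n in (0, 10, 30, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.3e}")
```

The divergence here is deterministic: the rule is exact, yet any finite measurement error dooms long-range prediction.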

As yet, it seems entirely possible that there shall never be any "final theory" of all phenomena, and that, rather, explanations may instead breed more and more complex and exact explanations of the new phenomena uncovered by current experimentation. In this argument, the "indeterminacy" or "thing in itself" is the "final explanation" that will never be reached; this can be compared to the concept of the limit in calculus, in that quantities may approach, but never reach, a given limit in certain situations.

Criticism

Proponents of a deterministic universe have criticized various applications of the concept of indeterminacy in the sciences; for instance, Einstein once stated that "God does not play dice" in a succinct (but now unpopular) argument against the theory of quantum indeterminacy, which states that the actions of particles of extremely low mass or energy are unpredictable because an observer's interaction with them changes either their positions or momenta. (The "dice" in Einstein's metaphor refer to the probabilities that these particles will behave in particular ways, which is how quantum mechanics addressed the problem.)

At first it might seem that a criticism could be made from a biological standpoint in that an indeterminate idea would seem not to be beneficial to the species that holds it. A strong counterargument, however, is that not all traits exhibited by living organisms will be seen in the long term as evolutionarily advantageous, given that extinctions occur regularly and that phenotypic traits have often died out altogether – in other words, an indeterminate meme may in the long run prove either advantageous or harmful to the species that produced it; humans are, as yet, the only species known to make use of such concepts. It might also be argued that conceptual vagueness is an inevitability, given the limited capacity of the human nervous system. We just do not have enough neurons to maintain separate concepts for "dog with 1,000,000 hairs", "dog with 1,000,001 hairs" and so on. But conceptual vagueness is not metaphysical indeterminacy.

Synonymous concepts in philosophy

Uncertainty and indeterminacy are words for essentially the same concept in quantum mechanics. Unquantifiability and undefinability (or indefinability) can also sometimes be synonymous with indeterminacy. In science, indeterminacy can sometimes be interchangeable with unprovability or unpredictability. Also, anything entirely unobservable can be said to be indeterminate in that it cannot be precisely characterized.

Revision theory

From Wikipedia, the free encyclopedia

Revision theory is a subfield of philosophical logic. It consists of a general theory of definitions, including (but not limited to) circular and interdependent concepts. A circular definition is one in which the concept being defined occurs in the statement defining it—for example, defining a G as being blue and to the left of a G. Revision theory provides formal semantics for defined expressions, and formal proof systems study the logic of circular expressions.

Definitions are important in philosophy and logic. Although circular definitions have been regarded as logically incorrect or incoherent, revision theory demonstrates that they are meaningful and can be studied with mathematical and philosophical logic. It has been used to provide circular analyses of philosophical and logical concepts.

History

Revision theory is a generalization of the revision theories of truth developed by Anil Gupta, Hans Herzberger, and Nuel Belnap. In the revision theories of Gupta and Herzberger, revision is supposed to reflect intuitive evaluations of sentences that use the truth predicate. Some sentences are stable in their evaluations, such as the truth-teller sentence,
The truth-teller is true.
Assuming the truth-teller is true, it is true, and assuming that it is false, it is false. Neither status will change. On the other hand, some sentences oscillate, such as the liar,
The liar sentence is not true.
On the assumption that the liar is true, one can show that it is false, and on the assumption that it is false, one can show that it is true. This instability is reflected in revision sequences for the liar.
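The contrast between the stable truth-teller and the oscillating liar can be sketched as a pair of revision rules on truth values – a deliberately simplified, single-sentence caricature of the semantics the article develops later:

```python
# One-sentence caricature of Gupta-Herzberger revision of truth values:
# the next value of each sentence is computed from its current one.
def revise_truth_teller(v):
    """'This sentence is true': keeps whatever value it is assumed to have."""
    return v

def revise_liar(v):
    """'This sentence is not true': flips on every revision."""
    return not v

def sequence(rule, start, steps):
    vals = [start]
    for _ in range(steps):
        vals.append(rule(vals[-1]))
    return vals

print(sequence(revise_truth_teller, True, 4))   # stable
print(sequence(revise_liar, True, 4))           # oscillates
```

The truth-teller is stable under either starting hypothesis, while the liar never settles – the instability that revision sequences are designed to exhibit.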
The generalization to circular definitions was developed by Gupta, in collaboration with Belnap. Their book, The Revision Theory of Truth, presents an in-depth development of the theory of circular definitions, as well as an overview and critical discussion of philosophical views on truth and the relation between truth and definition.

Philosophical background

The philosophical background of revision theory is developed by Gupta and Belnap. Other philosophers, such as Aladdin Yaqūb, have developed philosophical interpretations of revision theory in the context of theories of truth, but not in the general context of circular definitions.
Gupta and Belnap maintain that circular concepts are meaningful and logically acceptable. Circular definitions are formally tractable, as demonstrated by the formal semantics of revision theory. As Gupta and Belnap put it, "the moral we draw from the paradoxes is that the domain of the meaningful is more extensive than it appears to be, that certain seemingly meaningless concepts are in fact meaningful."
The meaning of a circular predicate is not an extension, as is often assigned to non-circular predicates. Its meaning, rather, is a rule of revision that determines how to generate a new hypothetical extension given an initial one. These new extensions are at least as good as the originals, in the sense that, given one extension, the new extension contains exactly the things that satisfy the definiens for a particular circular predicate. In general, there is no unique extension on which revision will settle.
Revision theory offers an alternative to the standard theory of definitions. The standard theory maintains that good definitions have two features. First, defined symbols can always be eliminated, replaced by what defines them. Second, definitions should be conservative in the sense that adding a definition should not result in new consequences in the original language. Revision theory rejects the first but maintains the second, as demonstrated for both of the strong senses of validity presented below.
The logician Alfred Tarski presented two criteria for evaluating definitions as analyses of concepts: formal correctness and material adequacy. The criterion of formal correctness states that in a definition, the definiendum must not occur in the definiens. The criterion of material adequacy says that the definition must be faithful to the concept being analyzed. Gupta and Belnap recommend siding with material adequacy in cases in which the two criteria conflict. To determine whether a circular definition provides a good analysis of a concept requires evaluating the material adequacy of the definition. Some circular definitions will be good analyses, while some will not. Either way, formal correctness, in Tarski’s sense, will be violated.

Semantics for circular predicates

The central semantic idea of revision theory is that a definition, such as that of being a G, provides a rule of revision that tells one what the new extension for the definiendum should be, given a hypothetical extension of the definiendum and information concerning the undefined expressions. Repeated application of a rule of revision generates sequences of hypotheses, which can be used to define logics of circular concepts. In work on revision theory, it is common to use the symbol =Df to indicate a definition, with the left-hand side being the definiendum and the right-hand side the definiens. The example
Being a G is defined as being both blue and to the left of a G
can then be written as
Gx =Df x is blue ∧ ∃y(Gy ∧ x is to the left of y).
Given a hypothesis about the extension of G, one can obtain a new extension for G by appealing to the meaning of the undefined expressions in the definition, namely blue and to the left of.
We begin with a ground language, L, that is interpreted via a classical ground model M = ⟨D, I⟩, which is a pair of a domain D and an interpretation function I. Suppose that the set of definitions 𝒟 is the following,
G₁(x̄) =Df A₁(x̄)
⋮
Gₙ(x̄) =Df Aₙ(x̄)
where each definiens Aᵢ is a formula that may contain any of the definienda G₁, …, Gₙ, including Gᵢ itself. It is required that in the definitions, only the displayed variables, x̄, are free in the definientia, the formulas Aᵢ. The language L is expanded with these new predicates, G₁, …, Gₙ, to form L⁺. When the set 𝒟 contains few defined predicates, it is common to use the notation A(x̄, G) to emphasize that the definiens A may contain the definiendum G.
A hypothesis h is a function from the definienda of 𝒟 to sets of tuples from the domain D. The model M+h is just like the model M except that M+h interprets each definiendum G according to the following biconditional, the left-hand side of which is read as "G(d̄) is true in M+h":
G(d̄) is true in M+h   ⇔   d̄ ∈ h(G).
The set of definitions 𝒟 yields a rule of revision, or revision operator, δ. Revision operators obey the following equivalence for each definiendum, G, in 𝒟:
d̄ ∈ δ(h)(G)   ⇔   A_G(d̄) is true in M+h.
A tuple d̄ will satisfy a definiendum G after revision just in case it satisfies the definiens for G, namely A_G, prior to revision. This is to say that the tuples that satisfy A_G according to a hypothesis will be exactly those that satisfy G according to the revision of that hypothesis.
The classical connectives are evaluated in the usual, recursive way in M+h. Only the evaluation of a defined predicate appeals to the hypotheses.
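Over a finite domain, a revision operator of this kind is easy to simulate. The sketch below represents a hypothesis as a set (the extension of a single definiendum G) and the definiens as a Python predicate of an element and the current hypothesis; the three objects and the ground interpretations of blue and left-of are invented for illustration:

```python
# A sketch of a revision operator for one circular predicate G over a
# finite domain. A hypothesis is a set (G's extension); the definiens
# is a Python predicate of an element and the current hypothesis.
def revise(domain, definiens, hypothesis):
    """delta(h): the elements that satisfy the definiens under h."""
    return {d for d in domain if definiens(d, hypothesis)}

# Ground model (invented for illustration): three objects in a row,
# all blue, with o1 left of o2 left of o3.
domain = {"o1", "o2", "o3"}
blue = {"o1", "o2", "o3"}
left_of = {("o1", "o2"), ("o2", "o3"), ("o1", "o3")}

# Gx =Df x is blue and x is to the left of some G.
def definiens(x, h):
    return x in blue and any((x, y) in left_of for y in h)

h = set(domain)   # hypothesize that everything is a G
for stage in range(4):
    print(f"stage {stage}: {sorted(h)}")
    h = revise(domain, definiens, h)
```

In this run the hypothesized extension shrinks at each stage (the rightmost object can never be to the left of a G) until it stabilizes at the empty set.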

Sequences

Revision sequences are sequences of hypotheses satisfying extra conditions. We will focus here on sequences that are ω-long, since transfinite revision sequences require the additional specification of what to do at limit stages.
Let S be a sequence of hypotheses, and let Sₙ be the n-th hypothesis in S. An ω-long sequence S of hypotheses is a revision sequence just in case for all n ∈ ω,
Sₙ₊₁ = δ(Sₙ).
Recursively define iteration as
  • δ⁰(h) = h, and
  • δⁿ⁺¹(h) = δ(δⁿ(h)).
The ω-long revision sequence starting from h can be written as follows.
h, δ(h), δ²(h), δ³(h), …
One sense of validity, S₀ validity, can be defined as follows. A sentence A is S₀ valid in M on 𝒟 iff there exists an n ∈ ω such that for all m ≥ n and for all hypotheses h, A is true in M+δᵐ(h). A sentence A is S₀ valid on 𝒟 just in case it is S₀ valid in all ground models M.
Validity in S₀ can be recast in terms of stability in ω-long sequences. A sentence A is stably true in a revision sequence S just in case there is an n such that for all m ≥ n, A is true in M+Sₘ. A sentence A is stably false in a revision sequence S just in case there is an n such that for all m ≥ n, A is false in M+Sₘ. In these terms, a sentence A is S₀ valid in M on 𝒟 just in case A is stably true in all ω-long revision sequences on M.
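Because a finite domain admits only finitely many hypotheses, every ω-long revision sequence over it is eventually periodic, so stability can be decided by iterating until a hypothesis repeats. A sketch, using a liar-like definition Gx =Df ¬Gx as the test case:

```python
# Over a finite domain there are only finitely many hypotheses, so
# iterating the revision operator must eventually repeat a hypothesis;
# from then on the sequence cycles, and stability can be read off it.
def revise(domain, definiens, h):
    return frozenset(d for d in domain if definiens(d, h))

def eventual_cycle(domain, definiens, h0):
    """Iterate revision from h0 until a hypothesis repeats;
    return the cycle of hypotheses that recurs forever after."""
    seen, seq = {}, []
    h = frozenset(h0)
    while h not in seen:
        seen[h] = len(seq)
        seq.append(h)
        h = revise(domain, definiens, h)
    return seq[seen[h]:]

def stably_in(d, cycle):
    """d is stably in G's extension iff it belongs to every
    hypothesis of the recurring cycle."""
    return all(d in h for h in cycle)

domain = {"a"}
def definiens(x, h):
    return x not in h          # liar-like: Gx =Df not Gx

cycle = eventual_cycle(domain, definiens, set())
print([sorted(h) for h in cycle])   # the extension flips forever
print(stably_in("a", cycle))
```

Here the cycle alternates between the empty extension and the full one, so Ga is neither stably true nor stably false, mirroring the liar sentence.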

Examples

For the first example, let 𝒟 be
Gx =Df (x = a ∧ ¬Gx) ∨ (x = b ∧ Gx).
Let the domain of the ground model be {a, b}, and let I(a) = a and I(b) = b. There are then four possible hypotheses for the extension of G: ∅, {a}, {b}, {a, b}. The first few steps of the revision sequences starting from those hypotheses are illustrated by the following table.
Sample revision for Gx =Df (x = a ∧ ¬Gx) ∨ (x = b ∧ Gx)
  stage 0   stage 1   stage 2   stage 3
  ∅         {a}       ∅         {a}
  {a}       ∅         {a}       ∅
  {b}       {a, b}    {b}       {a, b}
  {a, b}    {b}       {a, b}    {b}

As can be seen in the table, a goes in and out of the extension of G. It never stabilizes. On the other hand, b either stays in or stays out. It is stable, but whether it is stably in or stably out depends on the initial hypothesis.
Next, let D be
Hx =Df Hx ∨ ¬Hx.
As shown in the following table, all hypotheses for the ground model of the previous example are revised to the set {a, b}.
Sample revision for Hx =Df Hx ∨ ¬Hx
stage 0    stage 1    stage 2    stage 3
∅          {a, b}     {a, b}     {a, b}
{a}        {a, b}     {a, b}     {a, b}
{b}        {a, b}     {a, b}     {a, b}
{a, b}     {a, b}     {a, b}     {a, b}

For a slightly more complex revision pattern, let L contain < and a numeral n for each natural number n, and let the ground model be N, whose domain is the natural numbers, with interpretation I such that I(n) = n for each numeral and such that < is the usual ordering on natural numbers. Let D be
Jx =Df ∀y(y < x → Jy).
Let the initial hypothesis be h0 = ∅. In this case, the sequence of extensions builds up stage by stage: ∅, {0}, {0, 1}, {0, 1, 2}, …
Although, for every n, the sentence Jn is valid in S0 in N, the universally quantified sentence ∀x Jx is not valid in S0 in N.
Suppose the initial hypothesis contains 0, 2, and all the odd numbers. After one revision, the extension of J will be {0, 1, 2, 3, 4}. Subsequent revisions will build up the extension as in the previous example. More generally, if the extension of J is not all of the natural numbers, then one revision will cut the extension of J down to a possibly empty initial segment of the natural numbers, and subsequent revisions will build it back up.
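The build-up, and the cut-down-then-rebuild behavior, can be simulated on an initial segment of the natural numbers. The truncation of the domain to 0–19 below is our simplification, since the real model is infinite:

```python
N = 20  # finite stand-in for the natural numbers

def delta(h):
    """Revise Jx =Df 'for all y < x, Jy': n enters iff every m < n was in h."""
    return frozenset(n for n in range(N) if all(m in h for m in range(n)))

h = frozenset()          # initial hypothesis: empty extension
stages = []
for _ in range(5):
    stages.append(sorted(h))
    h = delta(h)
print(stages)            # [[], [0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]

# The hypothesis containing 0, 2, and the odd numbers is cut down in one step.
h0 = frozenset({0, 2} | {n for n in range(N) if n % 2 == 1})
print(sorted(delta(h0)))  # [0, 1, 2, 3, 4]
```

The second revision onward then adds one number per stage, exactly the rebuilding described above.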

Proof system

There is a Fitch-style natural deduction proof system, C0, for circular definitions. The system uses indexed formulas, A^i, where i can be any integer. One can think of the indices as representing relative position in a revision sequence. The premises and conclusions of the rules for the classical connectives all have the same index. For example, here are the conjunction and negation introduction rules.
| A^i
| B^i
| (A ∧ B)^i    ∧In

| |__ A^i
| |   B^j
| |   ¬B^j
| ¬A^i    ¬In

For each definition, G(x1, …, xn) =Df A(x1, …, xn), in D, there is a pair of rules.
| A(t1, …, tn)^i
| G(t1, …, tn)^{i+1}    DfIn

| G(t1, …, tn)^{i+1}
| A(t1, …, tn)^i    DfElim

In these rules, it is assumed that t1, …, tn are free for x1, …, xn in A.
Finally, for formulas B of the ground language L, there is one more rule, the index shift rule.
| B^i
| B^j    IS

In this rule, i and j can be any distinct indices. This rule reflects the fact that formulas from the ground language do not change their interpretation throughout the revision process.
The system C0 is sound and complete with respect to S0 validity, meaning a sentence is valid in S0 just in case it is derivable in C0.
Recently, Riccardo Bruni has developed a Hilbert-style axiom system and a sequent system that are both sound and complete with respect to S0.

Transfinite revision

For some definitions, S0 validity is not strong enough. For example, with the definition of J above, even though every number is eventually stably in the extension of J, the universally quantified sentence ∀x Jx is not valid. The reason is that for any given sentence to be valid in S0, it must stabilize to true after finitely many revisions, however the initial hypothesis is chosen. On the other hand, ∀x Jx needs infinitely many revisions to become true, unless the initial hypothesis already assigns all the natural numbers to the extension of J.
Natural strengthenings of S0 validity, and alternatives to it, use transfinitely long revision sequences. Let On be the class of all ordinals. The following definitions focus on sequences of hypotheses that are On-long.
Suppose S is an On-long sequence of hypotheses. A tuple d is stably in the extension of a defined predicate G at a limit ordinal λ in S just in case there is a β < λ such that for all γ with β ≤ γ < λ, d ∈ S_γ(G). Similarly, a tuple d is stably out of the extension of G at a limit ordinal λ just in case there is a stage β < λ such that for all γ with β ≤ γ < λ, d ∉ S_γ(G). Otherwise, d is unstable at λ in S. Informally, a tuple is stably in an extension at a limit just in case there is a stage after which the tuple is in the extension up until the limit, and a tuple is stably out just in case there is a stage after which it remains out going to the limit stage.
A hypothesis h coheres with S at a limit ordinal λ iff for all tuples d, if d is stably in [stably out of] the extension of G at λ in S, then d is in [out of] h(G).
An On-long sequence S of hypotheses is a revision sequence iff for all α,
  • if α = β + 1, then S_α = δ(S_β), and
  • if α is a limit, then S_α coheres with S at α.
Just as with the ω-long sequences, the successor stages of the sequence are generated by the revision operator. At limit stages, however, the only constraint is that the limit hypothesis cohere with what came before. The unstable elements are set according to a limit rule, the details of which are left open by the definition of revision sequence.
Limit rules can be categorized into two classes, constant and non-constant, depending on whether they do different things at different limit stages. A constant limit rule does the same thing to unstable elements at each limit. One particular constant limit rule, the Herzberger rule, excludes all unstable elements from extensions. According to another constant rule, the Gupta rule, unstable elements are included in extensions just in case they were in the initial hypothesis S_0. Non-constant limit rules vary the treatment of unstable elements at limits.
Two senses of validity can be defined using On-long sequences. The first, S* validity, is defined in terms of stability. A sentence B is valid in S* in M on D iff for all On-long revision sequences S, there is a stage α such that B is stably true in S after stage α. A sentence B is valid in S* on D just in case for all classical ground models M, B is valid in S* in M on D.
The second sense of validity, S# validity, uses near stability rather than stability. A sentence B is nearly stably true in a sequence S iff there is an α such that for all β ≥ α, there is a natural number n such that for all m ≥ n, M + S_{β+m} ⊨ B. A sentence B is nearly stably false in a sequence S iff there is an α such that for all β ≥ α, there is a natural number n such that for all m ≥ n, M + S_{β+m} ⊭ B. A nearly stable sentence may have finitely long periods of instability following limits, after which it settles down until the next limit.
A sentence B is valid in S# in M on D iff for all On-long revision sequences S, there is a stage α such that B is nearly stably true in S after stage α. A sentence B is valid in S# on D just in case it is valid in S# in all ground models.
If a sentence is valid in S*, then it is valid in S#, but not conversely: an appropriately chosen definition supplies a sentence that is not valid in S* in a model but is valid in S# in that model.
An attraction of S# validity is that it generates a simpler logic than S*. The proof system C0 is sound for S#, but it is not, in general, complete. In light of the completeness of C0 with respect to S0, if a sentence is valid in S0, then it is valid in S#, but the converse does not hold in general. Validity in S0 and validity in S* are, in general, incomparable. Consequently, C0 is not sound for S*.

Finite definitions

While S# validity outstrips S0 validity in general, there is a special case in which the two coincide: finite definitions. Loosely speaking, a definition is finite if all revision sequences stop producing new hypotheses after a finite number of revisions. To put it more precisely, define a hypothesis h as reflexive just in case there is an n > 0 such that δ^n(h) = h. A definition D is finite iff for all models M and for all hypotheses h, there is a natural number m such that δ^m(h) is reflexive. Gupta showed that if D is finite, then S# validity and S0 validity coincide.
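Over a finite domain, reflexivity can be checked by brute force: follow each hypothesis under the revision operator until a hypothesis repeats. The helper names below are our own, not standard terminology:

```python
from itertools import chain, combinations

domain = {"a", "b"}

def delta(h):
    # Gx =Df (x = a ∧ ¬Gx) ∨ (x = b ∧ Gx), as in the first example above
    return frozenset(d for d in domain
                     if (d == "a" and d not in h) or (d == "b" and d in h))

def steps_to_reflexive(h):
    """Number of revisions before reaching a reflexive hypothesis, i.e. one
    with delta^n(h) = h for some n >= 1 (a hypothesis lying on a cycle)."""
    seen = {}
    k = 0
    while h not in seen:
        seen[h] = k
        h, k = delta(h), k + 1
    return seen[h]   # index of the first hypothesis on the cycle

all_hyps = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(domain), r) for r in range(len(domain) + 1))]
print(max(steps_to_reflexive(h) for h in all_hyps))  # 0: every hypothesis here is already on a cycle
```

Since the domain is finite, every hypothesis eventually revisits an earlier one, so the definition is finite; in this example each hypothesis already lies on a two-step cycle.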
There is no known syntactic characterization of the set of finite definitions, and finite definitions are not closed under standard logical operations, such as conjunction and disjunction. Maricarmen Martinez has identified some syntactic features under which the set of finite definitions is closed. She has shown that if the language contains only unary predicates apart from identity and contains no function symbols, and the definienda of D are all unary, then D is finite.
While many standard logical operations do not preserve finiteness, it is preserved by the operation of self-composition. For a definition Gx =Df A(x), where A may contain G, define self-composition recursively as follows.
  • A^1 = A, and
  • A^{n+1} = A(A^n/G).
The latter says that A^{n+1} is obtained by replacing all instances of G in A with A^n. If D is a finite definition and D^n is the result of replacing each definiens in D with its n-fold self-composition, then D^n is a finite definition as well.
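Representing a definiens as a function of an element and a predicate, self-composition can be sketched as follows (the function compose is our illustrative encoding of A^n, not notation from the literature):

```python
def compose(A, n):
    """n-fold self-composition: A^1 = A, and A^(n+1) is A with the
    definiendum G replaced by A^n."""
    if n == 1:
        return A
    inner = compose(A, n - 1)
    return lambda d, G: A(d, lambda e: inner(e, G))

# Definiens of Gx =Df (x = a ∧ ¬Gx) ∨ (x = b ∧ Gx), with G passed as a callable
A = lambda d, G: (d == "a" and not G(d)) or (d == "b" and G(d))
empty = lambda d: False   # the empty hypothesis

# Evaluating A^n relative to a hypothesis mirrors n revision steps from it:
print(compose(A, 1)("a", empty))   # True  (a is in delta(empty))
print(compose(A, 2)("a", empty))   # False (a is not in delta^2(empty))
```

This mirrors the fact that satisfying A^n relative to a hypothesis corresponds to satisfying the definiendum after n revisions starting from that hypothesis.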

Notable formal features

Revision theory distinguishes material equivalence from definitional equivalence, =Df. The sets of definitions use the latter. In general, definitional equivalence is not the same as material equivalence. Given a definition
Gx =Df A(x),
its material counterpart,
∀x (Gx ≡ A(x)),
will not, in general, be valid. The definition
Gx =Df ¬Gx
illustrates the invalidity. Its definiens and definiendum will not have the same truth value after any revision, so the material biconditional ∀x (Gx ≡ ¬Gx) will not be valid. For some definitions, the material counterparts of the defining clauses are valid. For example, if the definientia of D contain only symbols from the ground language, then the material counterparts will be valid.
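The failure of the material counterpart for Gx =Df ¬Gx can be seen concretely. A minimal Python sketch over a one-element domain (our toy encoding):

```python
domain = {"a"}

def delta(h):
    """Revision for Gx =Df ¬Gx: x enters the new extension exactly when
    it was out of the old one."""
    return frozenset(d for d in domain if d not in h)

h = frozenset()
for _ in range(4):
    h = delta(h)
    # The material biconditional Gx ≡ ¬Gx, checked at this stage: the two
    # sides are each other's negations, so they can never agree.
    biconditional_holds = all((d in h) == (d not in h) for d in domain)
    print(biconditional_holds)   # False at every stage
```

The extension flip-flops between ∅ and {a}, and at no stage does the definiendum agree materially with its definiens.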
The definitions given above are for the classical scheme. The definitions can be adjusted to work with any semantic scheme. This includes three-valued schemes, such as Strong Kleene, with exclusion negation, whose truth table is the following.
Exclusion negation
A    ¬A
t    f
n    t
f    t
Notably, many approaches to truth, such as Saul Kripke's Strong Kleene theory, cannot be used with exclusion negation in the language, since the fixed-point construction requires a monotonic scheme and exclusion negation is not monotonic.
Revision theory, while in some respects similar to the theory of inductive definitions, differs in several ways. Most importantly, revision need not be monotonic, which is to say that extensions at later stages need not be supersets of extensions at earlier stages, as illustrated by the first example above. Relatedly, revision theory does not postulate any restrictions on the syntactic form of definitions. Inductive definitions require their definientia to be positive, in the sense that definienda can only appear in definientia under an even number of negations. (This assumes that negation, conjunction, disjunction, and the universal quantifier are the primitive logical connectives, and the remaining classical connectives are simply defined symbols.) The definition
Gx =Df ¬Gx
is acceptable in revision theory, although not in the theory of inductive definitions.
Inductive definitions are semantically interpreted via fixed points: hypotheses h for which δ(h) = h. In general, revision sequences will not reach fixed points. If the definientia of D are all positive, then revision sequences will reach fixed points, as long as the initial hypothesis has the feature that h(G) ⊆ δ(h)(G) for each definiendum G. In particular, given such a D, if the initial hypothesis assigns the empty extension to all definienda, then the revision sequence will reach the minimal fixed point.
The sets of valid sentences on some definitions can be highly complex, in particular Π¹₂. This was shown by Philip Kremer and Aldo Antonelli. There is, consequently, no sound and complete effective proof system for S* validity.

Truth

The most famous application of revision theory is to the theory of truth, as developed in Gupta and Belnap (1993), for example. The circular definition of truth is the set of all the Tarski biconditionals, 'A' is true iff A, where 'iff' is understood as definitional equivalence, =Df, rather than material equivalence. Each Tarski biconditional provides a partial definition of the concept of truth. The concept of truth is circular because some Tarski biconditionals use an ineliminable instance of 'is true' in their definiens. For example, suppose that b is the name of a truth-teller sentence, 'b is true'. This sentence has as its Tarski biconditional: 'b is true' is true iff b is true. The truth predicate on the right cannot be eliminated. This example depends on there being a truth-teller in the language. This and other examples show that truth, defined by the Tarski biconditionals, is a circular concept.
Some languages, such as the language of arithmetic, will have vicious self-reference. The liar and other pathological sentences are guaranteed to be in the language with truth. Other languages with truth can be defined that lack vicious self-reference. In such a language, any revision sequence for truth is bound to reach a fixed point, a hypothesis h with δ(h) = h, so the truth predicate behaves like a non-circular predicate. The result is that, in such languages, truth has a stable extension that is defined over all sentences of the language. This is in contrast to many other theories of truth, for example the minimal Strong Kleene and minimal supervaluational theories. The extension and anti-extension of the truth predicate in these theories will not exhaust the set of sentences of the language.
The difference between =Df and ≡ is important when considering revision theories of truth. Part of the difference comes across in the semantical laws, which are the following equivalences, where T is a truth predicate and the corner quotes supply a name of the enclosed sentence.
  • T⌜¬A⌝ ≡ ¬T⌜A⌝
  • T⌜A ∧ B⌝ ≡ T⌜A⌝ ∧ T⌜B⌝
  • T⌜A ∨ B⌝ ≡ T⌜A⌝ ∨ T⌜B⌝
  • T⌜∀x Ax⌝ ≡ ∀x T⌜Ax⌝
These are all valid in S#, although the last is valid only when the domain is countable and every element is named. In S*, however, none are valid. One can see why the negation law fails by considering the liar, λ, which is equivalent to ¬T⌜λ⌝. The liar and all finite iterations of the truth predicate applied to it are unstable, so one can set T⌜λ⌝ and T⌜¬λ⌝ to have the same truth value at some limits, which results in T⌜¬λ⌝ and ¬T⌜λ⌝ having different truth values. This is corrected after revision, but the negation law will not be stably true. It is a consequence of a theorem of Vann McGee that the revision theory of truth in S# is ω-inconsistent. The theory in S* is not ω-inconsistent.
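At successor stages the liar simply alternates, which is what makes it unstable. This two-valued toy (our encoding, with a single Boolean hypothesis standing for the verdict on T⌜λ⌝) shows the flip-flop:

```python
def revise(T_lam):
    """The liar says 'lam is not true', so the revised verdict on T(lam)
    is the negation of the hypothesized one."""
    return not T_lam

verdicts, T_lam = [], False
for _ in range(6):
    verdicts.append(T_lam)
    T_lam = revise(T_lam)
print(verdicts)   # [False, True, False, True, False, True]
```

Since no verdict ever persists for two stages, T⌜λ⌝ is unstable in every revision sequence; at transfinite limits its value may be set either way, which is where the negation law can fail.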
There is an axiomatic theory of truth that is related to the S# theory in the language of arithmetic with truth. The Friedman–Sheard theory (FS) is obtained by adding to the usual axioms of Peano arithmetic
  • the axiom that truth for atomic sentences of arithmetic coincides with arithmetical truth,
  • the semantical laws,
  • the induction axioms with the truth predicate, and
  • the two rules
    • if ⊢ A, then ⊢ T⌜A⌝, and
    • if ⊢ T⌜A⌝, then ⊢ A.
By McGee's theorem, this theory is ω-inconsistent. FS does not, however, have as theorems any false purely arithmetical sentences. FS has as a theorem global reflection for Peano arithmetic,
∀x ((Sent(x) ∧ BewPA(x)) → T(x)),
where BewPA is a provability predicate for Peano arithmetic and Sent is a predicate true of all and only sentences of the language with truth. Consequently, it is a theorem of FS that Peano arithmetic is consistent.
FS is a subtheory of the S# theory of truth for arithmetic, the set of sentences valid in S#. A standard way to show that FS is consistent is to use an ω-long revision sequence. There has been some work done on axiomatizing the S* theory of truth for arithmetic.

Other applications

Revision theory has been used to study circular concepts apart from truth and to provide alternative analyses of concepts, such as rationality.
A non-well-founded set theory is a set theory that postulates the existence of a non-well-founded set, which is a set that has an infinite descending chain along the membership relation,
… ∈ x2 ∈ x1 ∈ x0.
Antonelli has used revision theory to construct models of non-well-founded set theory. One example is a set theory that postulates a set whose sole member is itself, a = {a}.
Infinite-time Turing machines are models of computation that permit computations to go on for infinitely many steps. They generalize standard Turing machines used in the theory of computability. Benedikt Löwe has shown that there are close connections between computations of infinite-time Turing machines and revision processes.
Rational choice in game theory has been analyzed as a circular concept. André Chapuis has argued that the reasoning agents use in rational choice exhibits an interdependence characteristic of circular concepts.
Revision theory can be adapted to model other sorts of phenomena. For example, vagueness has been analyzed in revision-theoretic terms by Conrad Asmus. To model a vague predicate on this approach, one specifies pairs of similar objects and which objects are non-borderline cases, and so are unrevisable. The borderline objects change their status with respect to a predicate depending on the status of the objects to which they are similar.
Revision theory has been used by Gupta to explicate the logical contribution of experience to one's beliefs. According to this view, the contribution of experience is represented by a rule of revision that takes as input an agent's view, or concepts and beliefs, and yields as output perceptual judgments. These judgments can be used to update the agent's view.
