Sunday, January 30, 2022

Philosophical logic

From Wikipedia, the free encyclopedia

Understood in a narrow sense, philosophical logic is the area of philosophy that studies the application of logical methods to philosophical problems, often in the form of extended logical systems like modal logic. Some theorists conceive philosophical logic in a wider sense as the study of the scope and nature of logic in general. In this sense, philosophical logic can be seen as identical to the philosophy of logic, which includes additional topics like how to define logic or a discussion of the fundamental concepts of logic. The current article treats philosophical logic in the narrow sense, in which it forms one field of inquiry within the philosophy of logic.

An important issue for philosophical logic is the question of how to classify the great variety of non-classical logical systems, many of which are of rather recent origin. One form of classification often found in the literature is to distinguish between extended logics and deviant logics. Logic itself can be defined as the study of valid inference. Classical logic is the dominant form of logic and articulates rules of inference in accordance with logical intuitions shared by many, like the law of excluded middle, the double negation elimination, and the bivalence of truth.

Extended logics are logical systems that are based on classical logic and its rules of inference but extend it to new fields by introducing new logical symbols and the corresponding rules of inference governing these symbols. In the case of alethic modal logic, these new symbols are used to express not just what is true simpliciter, but also what is possibly or necessarily true. It is often combined with possible worlds semantics, which holds that a proposition is possibly true if it is true in some possible world while it is necessarily true if it is true in all possible worlds. Deontic logic pertains to ethics and provides a formal treatment of ethical notions, such as obligation and permission. Temporal logic formalizes temporal relations between propositions. This includes ideas like whether something is true at some time or all the time and whether it is true in the future or in the past. Epistemic logic belongs to epistemology. It can be used to express not just what is the case but also what someone believes or knows to be the case. Its rules of inference articulate what follows from the fact that someone has these kinds of mental states. Higher-order logics do not directly apply classical logic to certain new sub-fields within philosophy but generalize it by allowing quantification not just over individuals but also over predicates.

Deviant logics, in contrast to these forms of extended logics, reject some of the fundamental principles of classical logic and are often seen as its rivals. Intuitionistic logic is based on the idea that truth depends on verification through a proof. This leads it to reject certain rules of inference found in classical logic that are not compatible with this assumption. Free logic modifies classical logic in order to avoid existential presuppositions associated with the use of possibly empty singular terms, like names and definite descriptions. Many-valued logics allow additional truth values besides true and false. They thereby reject the principle of bivalence of truth. Paraconsistent logics are logical systems able to deal with contradictions. They do so by avoiding the principle of explosion found in classical logic. Relevance logic is a prominent form of paraconsistent logic. It rejects the purely truth-functional interpretation of the material conditional by introducing the additional requirement of relevance: for the conditional to be true, its antecedent has to be relevant to its consequent.

Definition and related fields

The term "philosophical logic" is used by different theorists in slightly different ways. When understood in a narrow sense, as discussed in this article, philosophical logic is the area of philosophy that studies the application of logical methods to philosophical problems. This usually happens in the form of developing new logical systems to either extend classical logic to new areas or to modify it to include certain logical intuitions not properly addressed by classical logic. In this sense, philosophical logic studies various forms of non-classical logics, like modal logic and deontic logic. This way, various fundamental philosophical concepts, like possibility, necessity, obligation, permission, and time, are treated in a logically precise manner by formally expressing the inferential roles they play in relation to each other. Some theorists understand philosophical logic in a wider sense as the study of the scope and nature of logic in general. On this view, it investigates various philosophical problems raised by logic, including the fundamental concepts of logic. In this wider sense, it can be understood as identical to the philosophy of logic, where these topics are discussed. The current article discusses only the narrow conception of philosophical logic. In this sense, it forms one area of the philosophy of logic.

Central to philosophical logic is an understanding of what logic is and what role philosophical logics play in it. Logic can be defined as the study of valid inferences. An inference is the step of reasoning in which one moves from the premises to a conclusion. Often the term "argument" is used instead. An inference is valid if it is impossible for the premises to be true and the conclusion to be false. In this sense, the truth of the premises ensures the truth of the conclusion. This can be expressed in terms of rules of inference: an inference is valid if its structure, i.e. the way its premises and its conclusion are formed, follows a rule of inference. Different systems of logic provide different accounts of when an inference is valid. This means that they use different rules of inference. The traditionally dominant approach to validity is called classical logic. But philosophical logic is concerned with non-classical logic: it studies alternative systems of inference. The motivations for doing so can roughly be divided into two categories. For some, classical logic is too narrow: it leaves out many philosophically interesting issues. This can be solved by extending classical logic with additional symbols to give a logically strict treatment of further areas. Others see some flaw with classical logic itself and try to give a rival account of inference. This usually leads to the development of deviant logics, each of which modifies the fundamental principles behind classical logic in order to rectify their alleged flaws.

Classification of logics

Modern developments in the area of logic have resulted in a great proliferation of logical systems. This stands in stark contrast to the historical dominance of Aristotelian logic, which was treated as the one canon of logic for over two thousand years. Treatises on modern logic often treat these different systems as a list of separate topics without providing a clear classification of them. However, one classification frequently mentioned in the academic literature is due to Susan Haack and distinguishes between classical logic, extended logics, and deviant logics. This classification is based on the idea that classical logic, i.e. propositional logic and first-order logic, formalizes some of the most common logical intuitions. In this sense, it constitutes a basic account of the axioms governing valid inference. Extended logics accept this basic account and extend it to additional areas. This usually happens by adding new vocabulary, for example, to express necessity, obligation, or time. These new symbols are then integrated into the logical mechanism by specifying which new rules of inference apply to them, like that possibility follows from necessity. Deviant logics, on the other hand, reject some of the basic assumptions of classical logic. In this sense, they are not mere extensions of it but are often formulated as rival systems that offer a different account of the laws of logic.

Expressed in a more technical language, the distinction between extended and deviant logics is sometimes drawn in a slightly different manner. On this view, a logic is an extension of classical logic if two conditions are fulfilled: (1) all well-formed formulas of classical logic are also well-formed formulas in it and (2) all valid inferences in classical logic are also valid inferences in it. For a deviant logic, on the other hand, (a) its class of well-formed formulas coincides with that of classical logic, while (b) some valid inferences in classical logic are not valid inferences in it. The term quasi-deviant logic is used if (i) it introduces new vocabulary but all well-formed formulas of classical logic are also well-formed formulas in it and (ii) even when it is restricted to inferences using only the vocabulary of classical logic, some valid inferences in classical logic are not valid inferences in it. The term "deviant logic" is often used in a sense that includes quasi-deviant logics as well.

A philosophical problem raised by this plurality of logics concerns the question of whether there can be more than one true logic. Some theorists favor a local approach in which different types of logic are applied to different areas. Early intuitionists, for example, saw intuitionistic logic as the correct logic for mathematics but allowed classical logic in other fields. But others, like Michael Dummett, prefer a global approach by holding that intuitionistic logic should replace classical logic in every area. Monism is the thesis that there is only one true logic. This can be understood in different ways, for example, that only one of all the suggested logical systems is correct or that the correct logical system is yet to be found as a system underlying and unifying all the different logics. Pluralists, on the other hand, hold that a variety of different logical systems can all be correct at the same time.

A closely related problem concerns the question of whether all of these formal systems actually constitute logical systems. This is especially relevant for deviant logics that stray very far from the common logical intuitions associated with classical logic. In this sense, it has been argued, for example, that fuzzy logic is a logic only in name but should be considered a non-logical formal system instead since the idea of degrees of truth is too far removed from the most fundamental logical intuitions. So not everyone agrees that all the formal systems discussed in this article actually constitute logics, when understood in a strict sense.

Classical logic

Classical logic is the dominant form of logic used in most fields. The term refers primarily to propositional logic and first-order logic. Classical logic is not an independent topic within philosophical logic. But a good familiarity with it is still required since many of the logical systems of direct concern to philosophical logic can be understood either as extensions of classical logic, which accept its fundamental principles and build on top of it, or as modifications of it, rejecting some of its core assumptions. Classical logic was initially created in order to analyze mathematical arguments and was applied to various other fields only afterward. For this reason, it neglects many topics of philosophical importance not relevant to mathematics, like the difference between necessity and possibility, between obligation and permission, or between past, present, and future. These and similar topics are given a logical treatment in the different philosophical logics extending classical logic. Classical logic by itself is only concerned with a few basic concepts and the role these concepts play in making valid inferences. The concepts pertaining to propositional logic include propositional connectives, like "and", "or", and "if-then". Characteristic of the classical approach to these connectives is that they follow certain laws, like the law of excluded middle, the double negation elimination, the principle of explosion, and the bivalence of truth. This sets classical logic apart from various deviant logics, which deny one or several of these principles.

In first-order logic, the propositions themselves are made up of subpropositional parts, like predicates, singular terms, and quantifiers. Singular terms refer to objects and predicates express properties of objects and relations between them. Quantifiers constitute a formal treatment of notions like "for some" and "for all". They can be used to express whether predicates have an extension at all or whether their extension includes the whole domain. Quantification is only allowed over individual terms but not over predicates, in contrast to higher-order logics.

Extended logics

Alethic modal

Alethic modal logic has been very influential in logic and philosophy. It provides a logical formalism to express what is possibly or necessarily true. It constitutes an extension of first-order logic, which by itself is only able to express what is true simpliciter. This extension happens by introducing two new symbols: "◇" for possibility and "□" for necessity. These symbols are used to modify propositions. For example, if "W" stands for the proposition "Socrates is wise", then "◇W" expresses the proposition "it is possible that Socrates is wise". In order to integrate these symbols into the logical formalism, various axioms are added to the existing axioms of first-order logic. They govern the logical behavior of these symbols by determining how the validity of an inference depends on the fact that the symbol is found in it. They usually include the idea that if a proposition is necessary then its negation is impossible, i.e. that "□A" is equivalent to "¬◇¬A". Another such principle is that if something is necessary, then it must also be possible. This means that "◇A" follows from "□A". There is disagreement about exactly which axioms govern modal logic. The different forms of modal logic are often presented as a nested hierarchy of systems in which the most fundamental systems, like system K, include only the most fundamental axioms while other systems, like the popular system S5, build on top of it by including additional axioms. In this sense, system K is an extension of first-order logic while system S5 is an extension of system K. Important discussions within philosophical logic concern the question of which system of modal logic is correct. It is usually advantageous to have the strongest system possible in order to be able to draw many different inferences. But this brings with it the problem that some of these additional inferences may contradict basic modal intuitions in specific cases. This usually motivates the choice of a more basic system of axioms.

Possible worlds semantics is a very influential formal semantics in modal logic that brings with it system S5.[27][28][30] A formal semantics of a language characterizes the conditions under which the sentences of this language are true or false. Formal semantics play a central role in the model-theoretic conception of validity. They are able to provide clear criteria for when an inference is valid or not: an inference is valid if and only if it is truth-preserving, i.e. if whenever its premises are true then its conclusion is also true. Whether they are true or false is specified by the formal semantics. Possible worlds semantics specifies the truth conditions of sentences expressed in modal logic in terms of possible worlds. A possible world is a complete and consistent way things could have been. On this view, a sentence modified by the ◇-operator is true if it is true in at least one possible world, while a sentence modified by the □-operator is true if it is true in all possible worlds. So the sentence "◇W" (it is possible that Socrates is wise) is true since there is at least one world where Socrates is wise. But "□W" (it is necessary that Socrates is wise) is false since Socrates is not wise in every possible world. Possible worlds semantics has been criticized as a formal semantics of modal logic since it seems to be circular. The reason for this is that possible worlds are themselves defined in modal terms, i.e. as ways things could have been. In this way, it uses modal expressions to determine the truth of sentences containing modal expressions.
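
These evaluation rules are straightforward to mechanize. The following minimal Python sketch (the worlds, accessibility relation, and valuation are hypothetical toy data, not part of the source) checks the Socrates example: "◇W" comes out true and "□W" false because one world lacks W.

```python
# Minimal sketch of possible worlds semantics (hypothetical toy model).
# A model is a set of worlds, an accessibility relation, and a valuation
# saying which atomic propositions hold at which world.

access = {  # which worlds each world can "see"; S5 uses a universal relation
    "w1": {"w1", "w2", "w3"},
    "w2": {"w1", "w2", "w3"},
    "w3": {"w1", "w2", "w3"},
}
valuation = {  # "W" = "Socrates is wise", true in w1 and w2 but not w3
    "w1": {"W"},
    "w2": {"W"},
    "w3": set(),
}

def possibly(prop, world):
    """Diamond: true at `world` iff prop holds in some accessible world."""
    return any(prop in valuation[w] for w in access[world])

def necessarily(prop, world):
    """Box: true at `world` iff prop holds in every accessible world."""
    return all(prop in valuation[w] for w in access[world])

print(possibly("W", "w1"))     # True: Socrates is wise in some world
print(necessarily("W", "w1"))  # False: there is a world (w3) where he is not
```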

Deontic

Deontic logic extends classical logic to the field of ethics. Of central importance in ethics are the concepts of obligation and permission, i.e. which actions the agent has to do or is allowed to do. Deontic logic usually expresses these ideas with the operators "O" and "P". So if "J" stands for the proposition "Ramirez goes jogging", then "OJ" means that Ramirez has the obligation to go jogging and "PJ" means that Ramirez has the permission to go jogging.

Deontic logic is closely related to alethic modal logic in that the axioms governing the logical behavior of their operators are identical. This means that obligation and permission behave in regards to valid inference just like necessity and possibility do. For this reason, sometimes even the same symbols are used as operators. Just as in alethic modal logic, there is a discussion in philosophical logic concerning which is the right system of axioms for expressing the common intuitions governing deontic inferences. But the arguments and counterexamples here are slightly different since the meanings of these operators differ. For example, a common intuition in ethics is that if the agent has the obligation to do something then they automatically also have the permission to do it. This can be expressed formally through the axiom schema "OA → PA". Another question of interest to philosophical logic concerns the relation between alethic modal logic and deontic logic. An often discussed principle in this respect is that ought implies can. This means that the agent can only have the obligation to do something if it is possible for the agent to do it. Expressed formally: "OA → ◇A".
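
On the usual Kripke-style reading, "O" and "P" behave like "□" and "◇" evaluated over deontically ideal alternatives, and the schema "OA → PA" holds whenever every world has at least one ideal alternative (a serial relation). The sketch below illustrates this under those assumptions; the model itself is hypothetical.

```python
# Sketch: deontic operators as box/diamond over "deontically ideal" worlds.
# Axiom D (if A is obligatory then A is permitted) holds when the relation
# is serial, i.e. every world can see at least one ideal alternative.

ideal = {"w1": {"w2"}, "w2": {"w2"}}      # hypothetical serial relation
valuation = {"w1": set(), "w2": {"J"}}    # "J" = "Ramirez goes jogging"

def obligatory(prop, world):   # O: true in all ideal alternatives
    return all(prop in valuation[w] for w in ideal[world])

def permitted(prop, world):    # P: true in some ideal alternative
    return any(prop in valuation[w] for w in ideal[world])

# Check the axiom schema OA -> PA at every world of this model.
for w in ideal:
    assert (not obligatory("J", w)) or permitted("J", w)
print(obligatory("J", "w1"), permitted("J", "w1"))  # True True
```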

Temporal

Temporal logic, or tense logic, uses logical mechanisms to express temporal relations. In its most simple form, it contains one operator to express that something happened at one time and another to express that something is happening all the time. These two operators behave in the same way as the operators for possibility and necessity in alethic modal logic. Since the difference between past and future is of central importance to human affairs, these operators are often modified to take this difference into account. Arthur Prior's tense logic, for example, realizes this idea using four such operators: "P" (it was the case that...), "F" (it will be the case that...), "H" (it has always been the case that...), and "G" (it will always be the case that...). So to express that it will always be rainy in London, one could use "GR", where "R" stands for "it is rainy in London". Various axioms are used to govern which inferences are valid depending on the operators appearing in them. According to them, for example, one can deduce "FR" (it will be rainy in London at some time) from "GR". In more complicated forms of temporal logic, binary operators linking two propositions are also defined, for example, to express that something happens until something else happens.

Temporal modal logic can be translated into classical first-order logic by treating time in the form of a singular term and increasing the arity of one's predicates by one. For example, the tense-logic sentence "Dark ∧ P(Light) ∧ F(Light)" (it is dark, it was light, and it will be light again) can be translated into pure first-order logic as "Dark(t₀) ∧ ∃t₁(t₁ < t₀ ∧ Light(t₁)) ∧ ∃t₂(t₀ < t₂ ∧ Light(t₂))". While similar approaches are often seen in physics, logicians usually prefer an autonomous treatment of time in terms of operators. This is also closer to natural languages, which mostly use grammar, e.g. by conjugating verbs, to express the pastness or futurity of events.
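
A minimal sketch of this translation in Python (the timeline and the facts about light are hypothetical): the tense operators become quantification over explicit time points.

```python
# Sketch of translating tense operators into first-order logic: time points
# become explicit arguments, and the operators quantify over them.

times = range(0, 10)
light = {t: t != 5 for t in times}   # light(t); it is dark exactly at t = 5

def was(pred, now):          # P: pred held at some earlier time
    return any(pred[t] for t in times if t < now)

def will(pred, now):         # F: pred will hold at some later time
    return any(pred[t] for t in times if t > now)

# "It is dark, it was light, and it will be light again", evaluated at t = 5:
now = 5
print((not light[now]) and was(light, now) and will(light, now))  # True
```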

Epistemic

Epistemic logic is a form of modal logic applied to the field of epistemology. It aims to capture the logic of knowledge and belief. The modal operators expressing knowledge and belief are usually expressed through the symbols "K" and "B". So if "W" stands for the proposition "Socrates is wise", then "KW" expresses the proposition "the agent knows that Socrates is wise" and "BW" expresses the proposition "the agent believes that Socrates is wise". Axioms governing these operators are then formulated to express various epistemic principles. For example, the axiom schema "KA → A" expresses that whenever something is known, then it is true. This reflects the idea that one can only know what is true, otherwise it is not knowledge but another mental state. Another epistemic intuition about knowledge concerns the fact that when the agent knows something, they also know that they know it. This can be expressed by the axiom schema "KA → KKA". An additional principle linking knowledge and belief states that knowledge implies belief, i.e. "KA → BA". Dynamic epistemic logic is a distinct form of epistemic logic that focuses on situations in which changes in belief and knowledge happen.
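
In possible worlds terms, "KA → A" (knowledge is factive) corresponds to a reflexive accessibility relation: every world counts itself as an epistemic possibility. A minimal sketch under that standard correspondence, with a hypothetical model:

```python
# Sketch: the axiom schema "KA -> A" holds when the accessibility relation
# is reflexive, i.e. every world is among its own epistemic alternatives.

access = {"w1": {"w1", "w2"}, "w2": {"w2"}}   # reflexive: w in access[w]
valuation = {"w1": {"W"}, "w2": {"W"}}        # "W" = "Socrates is wise"

def knows(prop, world):
    """K prop: true iff prop holds in every epistemically possible world."""
    return all(prop in valuation[w] for w in access[world])

# Factivity check: whenever K(W) holds at a world, W itself holds there.
for w in access:
    assert (not knows("W", w)) or ("W" in valuation[w])
print(knows("W", "w1"))  # True: W holds in every world w1 considers possible
```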

Higher-order

Higher-order logics extend first-order logic by including new forms of quantification. In first-order logic, quantification is restricted to singular terms. It can be used to talk about whether a predicate has an extension at all or whether its extension includes the whole domain. This way, propositions like "∃x(Apple(x) ∧ Sweet(x))" (there are some apples that are sweet) can be expressed. In higher-order logics, quantification is allowed not just over individual terms but also over predicates. This way, it is possible to express, for example, whether certain individuals share some or all of their predicates, as in "∃Q(Q(mary) ∧ Q(john))" (there are some qualities that Mary and John share). Because of these changes, higher-order logics have more expressive power than first-order logic. This can be helpful for mathematics in various ways since different mathematical theories have a much simpler expression in higher-order logic than in first-order logic. For example, Peano arithmetic and Zermelo-Fraenkel set theory need an infinite number of axioms to be expressed in first-order logic. But they can be expressed in second-order logic with only a few axioms.
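
Over a finite domain, second-order quantification can be simulated by ranging over all possible predicate extensions, i.e. all subsets of the domain. A Python sketch of the Mary-and-John example (the domain is hypothetical):

```python
# Sketch: simulating second-order quantification by ranging over predicate
# extensions (subsets of the domain).

from itertools import chain, combinations

domain = ["mary", "john", "bob"]

def all_predicates(dom):
    """Every possible extension of a one-place predicate over the domain."""
    return chain.from_iterable(combinations(dom, r) for r in range(len(dom) + 1))

# "There are some qualities that Mary and John share":
# exists Q such that Q(mary) and Q(john).
shared = any("mary" in ext and "john" in ext for ext in all_predicates(domain))
print(shared)  # True, e.g. the extension {mary, john}
```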

But despite this advantage, first-order logic is still much more widely used than higher-order logic. One reason for this is that higher-order logic is incomplete. This means that, for theories formulated in higher-order logic, it is not possible to prove every true sentence pertaining to the theory in question. Another disadvantage is connected to the additional ontological commitments of higher-order logics. It is often held that the usage of the existential quantifier brings with it an ontological commitment to the entities over which this quantifier ranges. In first-order logic, this concerns only individuals, which is usually seen as an unproblematic ontological commitment. In higher-order logic, quantification concerns also properties and relations. This is often interpreted as meaning that higher-order logic brings with it a form of Platonism, i.e. the view that universal properties and relations exist in addition to individuals.

Deviant logics

Intuitionistic

Intuitionistic logic is a more restricted version of classical logic. It is more restricted in the sense that certain rules of inference used in classical logic do not constitute valid inferences in it. This concerns specifically the law of excluded middle and the double negation elimination. The law of excluded middle states that for every sentence, either it or its negation is true. Expressed formally: "A ∨ ¬A". The law of double negation elimination states that if a sentence is not not true, then it is true, i.e. "¬¬A → A". Due to these restrictions, many proofs are more complicated and some proofs otherwise accepted become impossible.

These modifications of classical logic are motivated by the idea that truth depends on verification through a proof. This has been interpreted in the sense that "true" means "verifiable". It was originally only applied to the area of mathematics but has since then been used in other areas as well. On this interpretation, the law of excluded middle would involve the assumption that every mathematical problem has a solution in the form of a proof. In this sense, the intuitionistic rejection of the law of excluded middle is motivated by the rejection of this assumption. This position can also be expressed by stating that there are no unexperienced or verification-transcendent truths. In this sense, intuitionistic logic is motivated by a form of metaphysical idealism. Applied to mathematics, it states that mathematical objects exist only to the extent that they are constructed in the mind.
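
One standard way to see both failures at once is the three-element Heyting algebra over {0, 1/2, 1}, a common countermodel semantics for intuitionistic propositional logic (its use here is purely illustrative): for an "undecided" proposition with value 1/2, neither "A ∨ ¬A" nor "¬¬A → A" receives the top value.

```python
# Sketch: the three-element Heyting algebra {0, 1/2, 1}, a standard
# countermodel semantics for intuitionistic propositional logic.
# 1 means "proved", 0 "refuted", 1/2 "undecided".

from fractions import Fraction

HALF = Fraction(1, 2)

def implies(a, b):
    return 1 if a <= b else b    # Heyting implication

def neg(a):
    return implies(a, 0)         # negation defined as a -> 0

def disj(a, b):
    return max(a, b)

a = HALF                          # an "undecided" proposition
print(disj(a, neg(a)))            # 1/2, not 1: excluded middle fails
print(implies(neg(neg(a)), a))    # 1/2: double negation elimination fails
```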

Free

Free logic rejects some of the existential presuppositions found in classical logic. In classical logic, every singular term has to denote an object in the domain of quantification. This is usually understood as an ontological commitment to the existence of the named entity. But many names are used in everyday discourse that do not refer to existing entities, like "Santa Claus" or "Pegasus". This threatens to preclude such areas of discourse from a strict logical treatment. Free logic avoids these problems by allowing formulas with non-denoting singular terms. This applies to proper names as well as definite descriptions and functional expressions. Quantifiers, on the other hand, are treated in the usual way as ranging over the domain. This allows for expressions like "¬∃x(x = santa)" (Santa Claus does not exist) to be true even though they are self-contradictory in classical logic. It also brings with it the consequence that certain valid forms of inference found in classical logic are not valid in free logic. For example, one may infer from "B(santa)" (Santa Claus has a beard) that "∃xB(x)" (something has a beard) in classical logic but not in free logic. In free logic, an existence-predicate is often used to indicate whether a singular term denotes an object in the domain or not. But the usage of existence-predicates is controversial. They are often opposed based on the idea that having existence is required if any predicates should apply to the object at all. In this sense, existence cannot itself be a predicate.

Karel Lambert, who coined the term "free logic", has suggested that free logic can be understood as a generalization of classical predicate logic just as predicate logic is a generalization of Aristotelian logic. On this view, classical predicate logic introduces predicates with an empty extension while free logic introduces singular terms for non-existing things.

An important problem for free logic consists in how to determine the truth value of expressions containing empty singular terms, i.e. in formulating a formal semantics for free logic. Formal semantics of classical logic can define the truth of their expressions in terms of their denotation. But this option cannot be applied to all expressions in free logic since not all of them have a denotation. Three general approaches to this issue are often discussed in the literature: negative semantics, positive semantics, and neutral semantics. Negative semantics holds that all atomic formulas containing empty terms are false. On this view, the expression "B(santa)" is false. Positive semantics allows that at least some expressions with empty terms are true. This usually includes identity statements, like "santa = santa". Some versions introduce a second, outer domain for non-existing objects, which is then used to determine the corresponding truth values. Neutral semantics, on the other hand, holds that atomic formulas containing empty terms are neither true nor false. This is often understood as a three-valued logic, i.e. a third truth value besides true and false is introduced for these cases.
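
A minimal sketch of the negative approach (the terms and the predicate are hypothetical): any atomic formula whose term fails to denote is evaluated as false, and an existence predicate simply tests for denotation.

```python
# Sketch of negative semantics for free logic: an atomic formula containing
# an empty (non-denoting) singular term is uniformly false.

denotation = {"obama": "obama", "santa": None}   # "santa" does not denote
bearded = {"santa"}                              # intended extension of "has a beard"

def atomic(pred_ext, term):
    """Negative semantics: false whenever the term fails to denote."""
    if denotation[term] is None:
        return False
    return term in pred_ext

def exists(term):
    """The existence predicate: true iff the term denotes."""
    return denotation[term] is not None

print(atomic(bearded, "santa"))   # False, despite the intended extension
print(not exists("santa"))        # True: "Santa Claus does not exist"
```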

Many-valued

Many-valued logics are logics that allow for more than two truth values. They reject one of the core assumptions of classical logic: the principle of the bivalence of truth. The simplest versions of many-valued logics are three-valued logics: they contain a third truth value. In Stephen Cole Kleene's three-valued logic, for example, this third truth value is "undefined". According to Nuel Belnap's four-valued logic, there are four possible truth values: "true", "false", "neither true nor false", and "both true and false". This can be interpreted, for example, as indicating the information one has concerning whether a state obtains: information that it does obtain, information that it does not obtain, no information, and conflicting information. One of the most extreme forms of many-valued logic is fuzzy logic. It allows truth to come in any degree between 0 and 1: 0 corresponds to completely false, 1 corresponds to completely true, and the values in between correspond to truth in some degree, e.g. as a little true or very true. It is often used to deal with vague expressions in natural language. For example, saying that "Petr is young" fits better (i.e. is "more true") if "Petr" refers to a three-year-old than if it refers to a 23-year-old. Many-valued logics with a finite number of truth values can define their logical connectives using truth tables, just like classical logic. The difference is that these truth tables are more complex since more possible inputs and outputs have to be considered. In Kleene's three-valued logic, for example, the inputs "true" and "undefined" for the conjunction operator "∧" result in the output "undefined". The inputs "false" and "undefined", on the other hand, result in "false".
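
A minimal sketch of the two conjunction cases just mentioned, using None for "undefined" (this encoding is a choice made for illustration):

```python
# Sketch of (strong) Kleene three-valued conjunction and negation, with the
# third truth value "undefined" represented as None.

def k3_and(a, b):
    if a is False or b is False:
        return False            # a definite False dominates
    if a is None or b is None:
        return None             # otherwise undefinedness spreads
    return True

def k3_not(a):
    return None if a is None else not a

print(k3_and(True, None))    # None: "true and undefined" is undefined
print(k3_and(False, None))   # False: "false and undefined" is false
```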

Paraconsistent

Paraconsistent logics are logical systems that can deal with contradictions without leading to all-out absurdity. They achieve this by avoiding the principle of explosion found in classical logic. According to the principle of explosion, anything follows from a contradiction. This is the case because of two rules of inference, which are valid in classical logic: disjunction introduction and disjunctive syllogism. According to the disjunction introduction, any proposition can be introduced in the form of a disjunction when paired with a true proposition. So since it is true that "the sun is bigger than the moon", it is possible to infer that "the sun is bigger than the moon or Spain is controlled by space-rabbits". According to the disjunctive syllogism, one can infer that one of these disjuncts is true if the other is false. So if the logical system also contains the negation of this proposition, i.e. that "the sun is not bigger than the moon", then it is possible to infer any proposition from this system, like the proposition that "Spain is controlled by space-rabbits". Paraconsistent logics avoid this by using different rules of inference that make inferences in accordance with the principle of explosion invalid.
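
Priest's "Logic of Paradox" (LP) is one well-known paraconsistent system and makes the failure of explosion concrete: with a third value "b" ("both true and false") that counts as designated, "A ∧ ¬A" can hold without every proposition holding. A sketch:

```python
# Sketch: Priest's Logic of Paradox (LP), a simple paraconsistent logic.
# Truth values: "t", "b" (both true and false), "f"; "t" and "b" are
# designated, i.e. they count as true for purposes of validity.

order = {"f": 0, "b": 1, "t": 2}

def neg(a):
    return {"t": "f", "b": "b", "f": "t"}[a]

def conj(a, b):
    return min(a, b, key=order.get)

def designated(a):
    return a in ("t", "b")

A, B = "b", "f"   # A is both true and false; B is plainly false
contradiction = conj(A, neg(A))
print(designated(contradiction))  # True: the contradiction A and not-A holds
print(designated(B))              # False: yet B does not follow, no explosion
```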

An important motivation for using paraconsistent logics is dialetheism, i.e. the belief that contradictions are not just introduced into theories due to mistakes but that reality itself is contradictory and contradictions within theories are needed to accurately reflect reality. Without paraconsistent logics, dialetheism would be hopeless since everything would be both true and false. Paraconsistent logics make it possible to keep contradictions local, without exploding the whole system. But even with this adjustment, dialetheism is still highly contested. Another motivation for paraconsistent logic is to provide a logic for discussions and group beliefs where the group as a whole may have inconsistent beliefs if its different members are in disagreement.

Relevance

Relevance logic is one type of paraconsistent logic. As such, it also avoids the principle of explosion even though this is usually not the main motivation behind relevance logic. Instead, it is usually formulated with the goal of avoiding certain unintuitive applications of the material conditional found in classical logic. Classical logic defines the material conditional in purely truth-functional terms, i.e. "p → q" is false if "p" is true and "q" is false, but otherwise true in every case. According to this formal definition, it does not matter whether "p" and "q" are relevant to each other in any way. For example, the material conditional "if all lemons are red then there is a sandstorm inside the Sydney Opera House" is true even though the two propositions are not relevant to each other.

The fact that this usage of material conditionals is highly unintuitive is also reflected in informal logic, which categorizes such inferences as fallacies of relevance. Relevance logic tries to avoid these cases by requiring that for a true material conditional, its antecedent has to be relevant to the consequent. A difficulty for this requirement is that relevance usually belongs to the content of the propositions while logic only deals with formal aspects. This problem is partially addressed by the so-called variable sharing principle. It states that antecedent and consequent have to share a propositional variable. This would be the case, for example, in "(p ∧ q) → q" but not in "p → (q ∨ ¬q)". A closely related concern of relevance logic is that inferences should follow the same requirement of relevance, i.e. that it is a necessary requirement of valid inferences that their premises are relevant to their conclusion.
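
The variable sharing principle is a purely syntactic test, so it is easy to sketch: represent the antecedent and consequent by the sets of propositional variables they contain and check for overlap (the helper below is illustrative, not a standard implementation).

```python
# Sketch: checking the variable sharing principle for a conditional given
# as sets of the propositional variables occurring in each side.

def shares_variable(antecedent_vars, consequent_vars):
    """Necessary condition for relevance: some shared propositional variable."""
    return bool(antecedent_vars & consequent_vars)

print(shares_variable({"p", "q"}, {"q"}))   # True:  (p and q) -> q
print(shares_variable({"p"}, {"q"}))        # False: p -> (q or not q) fails it
```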

Principle of indifference

From Wikipedia, the free encyclopedia
 
The principle of indifference (also called principle of insufficient reason) is a rule for assigning epistemic probabilities. The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence (or 'degrees of belief') equally among all the possible outcomes under consideration.

In Bayesian probability, this is the simplest non-informative prior. The principle of indifference is meaningless under the frequency interpretation of probability, in which probabilities are relative frequencies rather than degrees of belief in uncertain propositions, conditional upon state information.

Examples

The textbook examples for the application of the principle of indifference are coins, dice, and cards.

In a macroscopic system, at least, it must be assumed that the physical laws that govern the system are not known well enough to predict the outcome. As observed some centuries ago by John Arbuthnot (in the preface of Of the Laws of Chance, 1692),

It is impossible for a Die, with such determin'd force and direction, not to fall on such determin'd side, only I don't know the force and direction which makes it fall on such determin'd side, and therefore I call it Chance, which is nothing but the want of art....

Given enough time and resources, there is no fundamental reason to suppose that suitably precise measurements could not be made, which would enable the prediction of the outcome of coins, dice, and cards with high accuracy: Persi Diaconis's work with coin-flipping machines is a practical example of this.

Coins

A symmetric coin has two sides, arbitrarily labeled heads (many coins have the head of a person portrayed on one side) and tails. Assuming that the coin must land on one side or the other, the outcomes of a coin toss are mutually exclusive, exhaustive, and interchangeable. According to the principle of indifference, we assign each of the possible outcomes a probability of 1/2.

It is implicit in this analysis that the forces acting on the coin are not known with any precision. If the momentum imparted to the coin as it is launched were known with sufficient accuracy, the flight of the coin could be predicted according to the laws of mechanics. Thus the uncertainty in the outcome of a coin toss is derived (for the most part) from the uncertainty with respect to initial conditions. This point is discussed at greater length in the article on coin flipping.

Dice

A symmetric die has n faces, arbitrarily labeled from 1 to n. An ordinary cubical die has n = 6 faces, although a symmetric die with different numbers of faces can be constructed; see Dice. We assume that the die will land with one face or another upward, and there are no other possible outcomes. Applying the principle of indifference, we assign each of the possible outcomes a probability of 1/n. As with coins, it is assumed that the initial conditions of throwing the dice are not known with enough precision to predict the outcome according to the laws of mechanics. Dice are typically thrown so as to bounce on a table or other surface(s). This interaction makes prediction of the outcome much more difficult.

The assumption of symmetry is crucial here. Suppose that we are asked to bet for or against the outcome "6". We might reason that there are two relevant outcomes here, "6" or "not 6", and that these are mutually exclusive and exhaustive. This suggests assigning the probability 1/2 to each of the two outcomes. But these two outcomes are not symmetric: the die is symmetric over its six faces, not over this two-way partition, so the principle properly applies to the faces, giving "6" the probability 1/6 and "not 6" the probability 5/6.

Cards

A standard deck contains 52 cards, each given a unique label in an arbitrary fashion, i.e. arbitrarily ordered. We draw a card from the deck; applying the principle of indifference, we assign each of the possible outcomes a probability of 1/52.

This example, more than the others, shows the difficulty of actually applying the principle of indifference in real situations. What we really mean by the phrase "arbitrarily ordered" is simply that we don't have any information that would lead us to favor a particular card. In actual practice, this is rarely the case: a new deck of cards is certainly not in arbitrary order, and neither is a deck immediately after a hand of cards. In practice, we therefore shuffle the cards; this does not destroy the information we have, but instead (hopefully) renders our information practically unusable, although it is still usable in principle. In fact, some expert blackjack players can track aces through the deck; for them, the condition for applying the principle of indifference is not satisfied.

Application to continuous variables

Applying the principle of indifference incorrectly can easily lead to nonsensical results, especially in the case of multivariate, continuous variables. A typical case of misuse is the following example:

  • Suppose there is a cube hidden in a box. A label on the box says the cube has a side length between 3 and 5 cm.
  • We don't know the actual side length, but we might assume that all values are equally likely and simply pick the mid-value of 4 cm.
  • The information on the label allows us to calculate that the surface area of the cube is between 54 and 150 cm². We don't know the actual surface area, but we might assume that all values are equally likely and simply pick the mid-value of 102 cm².
  • The information on the label allows us to calculate that the volume of the cube is between 27 and 125 cm³. We don't know the actual volume, but we might assume that all values are equally likely and simply pick the mid-value of 76 cm³.
  • However, we have now reached the impossible conclusion that the cube has a side length of 4 cm, a surface area of 102 cm², and a volume of 76 cm³!

In this example, mutually contradictory estimates of the length, surface area, and volume of the cube arise because we have assumed three mutually contradictory distributions for these parameters: a uniform distribution for any one of the variables implies a non-uniform distribution for the other two. In general, the principle of indifference does not indicate which variable (e.g. in this case, length, surface area, or volume) is to have a uniform epistemic probability distribution.
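
The arithmetic of the example is easy to check directly (a sketch; the numbers come from the label's 3 to 5 cm range):

```python
# Sketch reproducing the cube example: taking the midpoint of each interval
# gives mutually inconsistent "estimates".

side = (3 + 5) / 2                 # mid-value of the side length: 4.0 cm
area = (6 * 3**2 + 6 * 5**2) / 2   # mid-value of the surface area: 102.0 cm^2
volume = (3**3 + 5**3) / 2         # mid-value of the volume: 76.0 cm^3

print(6 * side**2, area)   # 96.0 vs 102.0: the area estimates disagree
print(side**3, volume)     # 64.0 vs 76.0: the volume estimates disagree
```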

Another classic example of this kind of misuse is the Bertrand paradox. Edwin T. Jaynes introduced the principle of transformation groups, which can yield an epistemic probability distribution for this problem. This generalises the principle of indifference, by saying that one is indifferent between equivalent problems rather than indifferent between propositions. This still reduces to the ordinary principle of indifference when one considers a permutation of the labels as generating equivalent problems (i.e. using the permutation transformation group). To apply this to the above box example, we have three random variables related by geometric equations. If we have no reason to favour one trio of values over another, then our prior probabilities must be related by the rule for changing variables in continuous distributions. Let L be the length, and V be the volume. Then we must have

f_L(L) = f_V(V) · dV/dL = f_V(L³) · 3L²,

where f_L and f_V are the probability density functions (pdfs) of the stated variables. This equation has the general solution f_L(L) = K/L, where K is a normalization constant determined by the range of L, in this case equal to

K = (∫₃⁵ dL/L)⁻¹ = 1/ln(5/3).

To put this "to the test", we ask for the probability that the length is less than 4. This has probability

P(L < 4) = ∫₃⁴ dL / (L ln(5/3)) = ln(4/3) / ln(5/3) ≈ 0.56.

For the volume, this should be equal to the probability that the volume is less than 4³ = 64. The pdf of the volume is

f_V(V) = f_L(V^(1/3)) · (1/3) V^(-2/3) = 1 / (3V ln(5/3)).

And the probability that the volume is less than 64 is then

P(V < 64) = ∫₂₇⁶⁴ dV / (3V ln(5/3)) = ln(64/27) / (3 ln(5/3)) = ln(4/3) / ln(5/3) ≈ 0.56.

Thus we have achieved invariance with respect to volume and length. One can also show the same invariance with respect to the surface area being less than 6(4²) = 96. However, note that this probability assignment is not necessarily a "correct" one: the exact distribution of lengths, volumes, or surface areas will depend on how the "experiment" is conducted.
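
The invariance can also be verified numerically (a sketch; the prior f_L(L) = 1/(L ln(5/3)) on [3, 5] is the one derived above):

```python
# Sketch verifying the invariance: under the prior f_L(L) = 1/(L ln(5/3))
# on [3, 5], the probabilities that the side is below 4, the volume below
# 64, and the surface area below 96 coincide.

import math

ln = math.log
p_length = ln(4 / 3) / ln(5 / 3)            # P(L < 4)
p_volume = ln(64 / 27) / (3 * ln(5 / 3))    # P(V < 64), with V = L^3
p_area = ln(96 / 54) / (2 * ln(5 / 3))      # P(S < 96), with S = 6 L^2

print(round(p_length, 6), round(p_volume, 6), round(p_area, 6))  # all ~0.5632
```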

The fundamental hypothesis of statistical physics, that any two microstates of a system with the same total energy are equally probable at equilibrium, is in a sense an example of the principle of indifference. However, when the microstates are described by continuous variables (such as positions and momenta), an additional physical basis is needed in order to explain under which parameterization the probability density will be uniform. Liouville's theorem justifies the use of canonically conjugate variables, such as positions and their conjugate momenta.

The wine/water paradox presents the same dilemma for linked variables: it is unclear which of them should receive the uniform distribution.

History

The original writers on probability, primarily Jacob Bernoulli and Pierre Simon Laplace, considered the principle of indifference to be intuitively obvious and did not even bother to give it a name. Laplace wrote:

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.

These earlier writers, Laplace in particular, naively generalized the principle of indifference to the case of continuous parameters, giving the so-called "uniform prior probability distribution", a function that is constant over all real numbers. He used this function to express a complete lack of knowledge as to the value of a parameter. According to Stigler (page 135), Laplace's assumption of uniform prior probabilities was not a metaphysical assumption. It was an implicit assumption made for the ease of analysis.

The principle of insufficient reason was its first name, given to it by later writers, possibly as a play on Leibniz's principle of sufficient reason. These later writers (George Boole, John Venn, and others) objected to the use of the uniform prior for two reasons. The first reason is that the constant function is not normalizable, and thus is not a proper probability distribution. The second reason is its inapplicability to continuous variables, as described above. (However, these paradoxical issues can be resolved. In the first case, a constant, or any more general finite polynomial, is normalizable within any finite range: the range [0,1] is all that matters here. Alternatively, the function may be modified to be zero outside that range, as with a continuous uniform distribution. In the second case, there is no ambiguity provided the problem is "well-posed", so that no unwarranted assumptions can be made, or have to be made, thereby fixing the appropriate prior probability density function or prior moment generating function (with variables fixed appropriately) to be used for the probability itself. See the Bertrand paradox (probability) for an analogous case.)

The "principle of insufficient reason" was renamed the "principle of indifference" by the economist John Maynard Keynes (1921), who was careful to note that it applies only when there is no knowledge indicating unequal probabilities.

Attempts to put the notion on firmer philosophical ground have generally begun with the concept of equipossibility and progressed from it to equiprobability.

The principle of indifference can be given a deeper logical justification by noting that equivalent states of knowledge should be assigned equivalent epistemic probabilities. This argument was propounded by E.T. Jaynes: it leads to two generalizations, namely the principle of transformation groups as in the Jeffreys prior, and the principle of maximum entropy.

More generally, one speaks of uninformative priors.
