Thursday, November 30, 2023

Slippery slope

From Wikipedia, the free encyclopedia
[1895 cartoon: two wedding couples before a robed officiant, each trailed by a same-sex couple in similar attire.]
This 1895 cartoon makes a slippery-slope argument about how weddings would look in 2001 if women got the right to vote.

A slippery slope argument (SSA), in logic, critical thinking, political rhetoric, and caselaw, is an argument in which a party asserts that a relatively small first step leads to a chain of related events culminating in some significant (usually negative) effect. The core of the slippery slope argument is that a specific decision under debate is likely to result in unintended consequences. The strength of such an argument depends on whether the small step really is likely to lead to the effect. This is quantified in terms of what is known as the warrant (in this case, a demonstration of the process that leads to the significant effect). This type of argument is sometimes used as a form of fearmongering, in which the probable consequences of a given action are exaggerated in an attempt to scare the audience.

The fallacious sense of "slippery slope" is often used synonymously with continuum fallacy, in that it ignores the possibility of middle ground and assumes a discrete transition from category A to category B. In this sense, it constitutes an informal fallacy. Other idioms for the slippery slope fallacy are the thin end/edge of the wedge, the camel's nose in the tent, or If You Give a Mouse a Cookie.

Slopes, arguments, and fallacies

Some writers distinguish between a slippery slope event and a slippery slope argument. A slippery slope event can be represented by a series of conditional statements, namely:

if p then q; if q then r; if r then … z.

The idea is that through a series of intermediate steps p will imply z. Some writers point out that strict necessity isn't required: it can still be characterized as a slippery slope if at each stage the next step is merely plausible. This matters because with strict implication p will imply z, but if each step holds only with, say, 90% probability, then the more steps there are, the less likely it becomes that p will cause z.
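The compounding described above can be made concrete. The sketch below (in Python, with the 90% step probability taken from the example, and steps assumed independent) shows how quickly the end-to-end likelihood decays as the chain lengthens:

```python
# Probability that p ultimately leads to z when each of n steps in the
# chain holds only with the given probability (independent steps assumed).
def chain_probability(step_probability: float, steps: int) -> float:
    return step_probability ** steps

# With strict implication the chain never weakens; with merely 90%-likely
# steps the end-to-end probability falls quickly as the chain grows.
for n in (1, 3, 5, 10):
    print(n, round(chain_probability(0.9, n), 3))
```

A ten-step chain of individually plausible steps ends up less likely than not, which is the point made in the paragraph above.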

A slippery slope argument is typically a negative argument where there is an attempt to discourage someone from taking a course of action because if they do it will lead to some unacceptable conclusion. Some writers point out that an argument with the same structure might be used in a positive way in which someone is encouraged to take the first step because it leads to a desirable conclusion.

If someone is accused of using a slippery slope argument, it is being suggested that they are guilty of fallacious reasoning: while they claim that p implies z, for whatever reason this is not the case. In logic and critical thinking textbooks, slippery slopes and slippery slope arguments are normally discussed as a form of fallacy, although there may be an acknowledgement that non-fallacious forms of the argument can also exist.

Types of argument

Different writers have classified slippery slope arguments in different and often contradictory ways, but there are two basic types of argument that have been described as slippery slope arguments. One type has been called the causal slippery slope, and the distinguishing feature of this type is that the various steps leading from p to z are events with each event being the cause of the next in the sequence. The second type might be called the judgmental slippery slope with the idea being that the 'slope' does not consist of a series of events but is such that, for whatever reason, if a person makes one particular judgment they will rationally have to make another and so on. The judgmental type may be further sub-divided into conceptual slippery slopes and decisional slippery slopes.

Conceptual slippery slopes, which Trudy Govier calls the fallacy of slippery assimilation, are closely related to the sorites paradox. For example, in the context of talking about slippery slopes, Merrilee Salmon can say, "The slippery slope is an ancient form of reasoning. According to van Fraassen (The Scientific Image), the argument is found in Sextus Empiricus that incest is not immoral, on the grounds that 'touching your mother's big toe with your little finger is not immoral, and all the rest differs only by degree.'"

Decisional slippery slopes are similar to conceptual slippery slopes in that they rely on there being a continuum with no clear dividing lines such that if you decide to accept one position or course of action then there will, either now or in the future, be no rational grounds for not accepting the next position or course of action in the sequence.

The difficulty in classifying slippery slope arguments is that there is no clear consensus in the literature as to how terminology should be used. It has been said that whilst these two fallacies "have a relationship which may justify treating them together", they are also distinct, and "the fact that they share a name is unfortunate". Some writers treat them side by side but emphasize how they differ. Some writers use the term slippery slope to refer to one kind of argument but not the other, but don't agree on which one, whilst others use the term to refer to both. So, for example,

  • Christopher Tindale gives a definition that only fits the causal type. He says, "Slippery Slope reasoning is a type of negative reasoning from consequences, distinguished by the presence of a causal chain leading from the proposed action to the negative outcome."
  • Merrilee Salmon describes the fallacy as a failure to recognise that meaningful distinctions can be drawn and even casts the "domino theory" in that light.
  • Douglas N. Walton says that an essential feature of slippery slopes is a "loss of control" and this only fits with the decisional type of slippery slope. He says that, "The domino argument has a sequence of events in which each one in the sequence causes the next one to happen in such a manner that once the first event occurs it will lead to the next event, and so forth, until the last event in the sequence finally occurs…(and)…is clearly different from the slippery slope argument, but can be seen as a part of it, and closely related to it."

Metaphor and its alternatives

The metaphor of the "slippery slope" dates back at least to Cicero's essay Laelius de Amicitia (XII.41). The title character Gaius Laelius Sapiens uses the metaphor to describe the decline of the Republic upon the impending election of Gaius Gracchus: "Affairs soon move on, for they glide readily down the path of ruin when once they have taken a start."

Thin end of a wedge

Walton suggests Alfred Sidgwick should be credited as the first writer on informal logic to describe what would today be called a slippery slope argument.

"We must not do this or that, it is often said, because if we did we should be logically bound to do something else which is plainly absurd or wrong. If we once begin to take a certain course there is no knowing where we shall be able to stop within any show of consistency; there would be no reason for stopping anywhere in particular, and we should be led on, step by step into action or opinions that we all agree to call undesirable or untrue."

Sidgwick says this is "popularly known as the objection to a thin end of a wedge" but might be classified now as a decisional slippery slope. However, the wedge metaphor also captures the idea that the unpleasant end result is a wider application of a principle associated with the initial decision. This is often a feature of decisional slippery slopes, owing to their incremental nature, but may be absent from causal slippery slopes.

Domino fallacy

T. Edward Damer, in his book Attacking Faulty Reasoning, describes what others might call a causal slippery slope but says,

"While this image may be insightful for understanding the character of the fallacy, it represents a misunderstanding of the nature of the causal relations between events. Every causal claim requires a separate argument. Hence, any "slipping" to be found is only in the clumsy thinking of the arguer, who has failed to provide sufficient evidence that one causally explained event can serve as an explanation for another event or for a series of events."

Instead, Damer prefers to call it the domino fallacy. Howard Kahane suggests that the domino variation of the fallacy has gone out of fashion because it was tied to the domino theory used to justify United States involvement in the war in Vietnam, and although the U.S. lost that war, "it is primarily communist dominoes that have fallen".

Dam burst

Frank Saliger notes that "in the German-speaking world the dramatic image of the dam burst seems to predominate, in English speaking circles talk is more of the slippery slope argument" and that "in German writing dam burst and slippery slope arguments are treated as broadly synonymous. In particular the structural analyses of slippery slope arguments derived from English writing are largely transferred directly to the dam burst argument."

In exploring the differences between the two metaphors he comments that in the dam burst the initial action is clearly in the foreground and there is a rapid movement towards the resulting events whereas in the slippery slope metaphor the downward slide has at least equal prominence to the initial action and it "conveys the impression of a slower 'step-by-step' process where the decision maker as participant slides inexorably downwards under the weight of its own successive (erroneous) decisions." Despite these differences Saliger continues to treat the two metaphors as being synonymous. Walton argues that although the two are comparable "the metaphor of the dam bursting carries with it no essential element of a sequence of steps from an initial action through a gray zone with its accompanying loss of control eventuated in the ultimate outcome of the ruinous disaster. For these reasons, it seems best to propose drawing a distinction between dam burst arguments and slippery slope arguments."

Other metaphors

Eric Lode notes that "commentators have used numerous different metaphors to refer to arguments that have this rough form. For example, people have called such arguments "wedge" or "thin edge of the wedge", "camel's nose" or "camel's nose in the tent", "parade of horrors" or "parade of horribles", "domino", "Boiling Frog" and "this could snowball" arguments. All of these metaphors suggest that allowing one practice or policy could lead us to allow a series of other practices or policies." Bruce Waller says it is lawyers who often call it the "parade of horribles" argument while politicians seem to favor "the camel's nose is in the tent".

Defining features of slippery slope arguments

Given the disagreement over what constitutes a genuine slippery slope argument, it is to be expected that there are differences in the way they are defined. Lode says that "although all SSAs share certain features, they are a family of related arguments rather than a class of arguments whose members all share the same form."

Various writers have attempted to produce a general taxonomy of these different kinds of slippery slope. Other writers have given a general definition that will encompass the diversity of slippery slope arguments. Eugene Volokh says, "I think the most useful definition of a slippery slope is one that covers all situations where decision A, which you might find appealing, ends up materially increasing the probability that others will bring about decision B, which you oppose."

Those who hold that slippery slopes are causal generally give a simple definition, provide some appropriate examples and perhaps add some discussion as to the difficulty of determining whether the argument is reasonable or fallacious. Most of the more detailed analysis of slippery slopes has been done by those who hold that genuine slippery slopes are of the decisional kind.

Lode, having claimed that SSAs are not a single class of arguments whose members all share the same form, nevertheless goes on to suggest the following common features.

  1. The series of intervening and gradual steps
  2. The idea that the slope lacks a non-arbitrary stopping place
  3. The idea that the practice under consideration is, in itself, unobjectionable

Rizzo and Whitman identify slightly different features. They say, "Although there is no paradigm case of the slippery slope argument, there are characteristic features of all such arguments. The key components of slippery slope arguments are three:

  1. An initial, seemingly acceptable argument and decision;
  2. A "danger case"—a later argument and decision that are clearly unacceptable;
  3. A "process" or "mechanism" by which accepting the initial argument and making the initial decision raise the likelihood of accepting the later argument and making the later decision."

Walton notes that these three features will be common to all slippery slopes but objects that there needs to be more clarity on the nature of the 'mechanism' and a way of distinguishing between slippery slope arguments and arguments from negative consequences.

Corner et al. say that a slippery slope has "four distinct components:

  1. An initial proposal (A).
  2. An undesirable outcome (C).
  3. The belief that allowing (A) will lead to a re-evaluation of (C) in the future.
  4. The rejection of (A) based on this belief.

The alleged danger lurking on the slippery slope is the fear that a presently unacceptable proposal (C) will (by any number of psychological processes—see, e.g., Volokh 2003) in the future be re-evaluated as acceptable."

Walton adds the requirement that there must be a loss of control. He says, there are four basic components, "One is a first step, an action or policy being considered. A second is a sequence in which this action leads to other actions. A third is a so-called gray zone or area of indeterminacy along the sequence where the agent loses control. The fourth is the catastrophic outcome at the very end of the sequence. The idea is that as soon as the agent in question takes the first step he will be impelled forward through the sequence, losing control so that in the end he will reach the catastrophic outcome. Not all of these components are typically made explicit..."

Truth table

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Truth_table

A truth table is a mathematical table used in logic—specifically in connection with Boolean algebra, boolean functions, and propositional calculus—which sets out the functional values of logical expressions on each of their functional arguments, that is, for each combination of values taken by their logical variables. In particular, truth tables can be used to show whether a propositional expression is true for all legitimate input values, that is, logically valid.

A truth table has one column for each input variable (for example, A and B), and one final column showing all of the possible results of the logical operation that the table represents (for example, A XOR B). Each row of the truth table contains one possible configuration of the input variables (for instance, A=true, B=false), and the result of the operation for those values.

A truth table is a structured representation that presents all possible combinations of truth values for the input variables of a Boolean function and their corresponding output values. A function f from A to F is a special relation: a subset of A×F, which simply means that f can be listed as a set of input-output pairs. For Boolean functions, the output belongs to a binary set, i.e. F = {0, 1}. For an n-ary Boolean function, the inputs come from a domain that is itself a Cartesian product of binary sets corresponding to the input Boolean variables. For example, for a binary function f(A, B), the domain of f is A×B, which can be listed as: A×B = {(A = 0, B = 0), (A = 0, B = 1), (A = 1, B = 0), (A = 1, B = 1)}. Each element in the domain represents a combination of input values for the variables A and B.

These combinations can be paired with the output of the function for each combination, forming the set of input-output pairs as a special relation that is a subset of A×F. For a relation to be a function, each element of the domain must be mapped to one and only one member of the codomain. Thus, the function f itself can be listed as: f = {((0, 0), f0), ((0, 1), f1), ((1, 0), f2), ((1, 1), f3)}, where f0, f1, f2, and f3 are each Boolean values (0 or 1) in the codomain {0, 1}, the outputs corresponding to the respective members of the domain. Rather than the list (set) given above, the truth table presents these input-output pairs in a tabular format, in which each row corresponds to a member of the domain paired with its output value, 0 or 1. Of course, for Boolean functions we do not have to list all the members of the domain with their images in the codomain; we can simply list the mappings that map to "1", because all the others must automatically map to "0" (this leads to the idea of minterms).
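As an illustration, the listing of a Boolean function as input-output pairs, and the recovery of its minterms, can be sketched in Python (the choice of XOR as the example function is ours, for illustration only):

```python
from itertools import product

# A binary Boolean function, here XOR as an illustrative example.
def f(a: int, b: int) -> int:
    return a ^ b

# The domain is the Cartesian product {0,1} x {0,1}; the function is the
# set of input-output pairs, i.e. the rows of its truth table.
table = {(a, b): f(a, b) for a, b in product((0, 1), repeat=2)}

# Listing only the inputs mapped to 1 recovers the minterms.
minterms = [inputs for inputs, out in table.items() if out == 1]

print(table)     # all four input-output pairs
print(minterms)  # [(0, 1), (1, 0)] for XOR
```

Every other function of two variables differs only in which outputs appear in `table`, which is why the sixteen binary operations below can be enumerated exhaustively.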

Ludwig Wittgenstein is generally credited with inventing and popularizing the truth table in his Tractatus Logico-Philosophicus, which was completed in 1918 and published in 1921. Such a system was also independently proposed in 1921 by Emil Leon Post.

Nullary operations

There are 2 nullary operations:

  • Always true
  • Never true, unary falsum

Logical true

The output value is always true, because this operator has zero operands and therefore no input values.

p T
T T
F T

Logical false

The output value is never true: that is, always false, because this operator has zero operands and therefore no input values.

p F
T F
F F

Unary operations

There are 2 unary operations:

  • Unary identity
  • Unary negation

Logical identity

Logical identity is an operation on one logical value p, for which the output value remains p.

The truth table for the logical identity operator is as follows:

p p
T T
F F

Logical negation

Logical negation is an operation on one logical value, typically the value of a proposition, that produces a value of true if its operand is false and a value of false if its operand is true.

The truth table for NOT p (also written as ¬p, Np, Fpq, or ~p) is as follows:

p ¬p
T F
F T

Binary operations

There are 16 possible truth functions of two binary variables; each operator has its own name.

Truth table

Here is an extended truth table giving definitions of all sixteen possible truth functions of two Boolean variables p and q:

p q    F0  NOR1  ↚2  ¬p3  ↛4  ¬q5  XOR6  NAND7  AND8  XNOR9  q10  →11  p12  ←13  OR14  T15
T T    F   F     F   F    F   F    F     F      T     T      T    T    T    T    T     T
T F    F   F     F   F    T   T    T     T      F     F      F    F    T    T    T     T
F T    F   F     T   T    F   F    T     T      F     F      T    T    F    F    T     T
F F    F   T     F   T    F   T    F     T      F     T      F    T    F    T    F     T

Com    ✓   ✓     –   –    –   –    ✓     ✓      ✓     ✓      –    –    –    –    ✓     ✓
Assoc  ✓   –     –   –    –   –    ✓     –      ✓     ✓      ✓    –    ✓    –    ✓     ✓
Adj    F0  NOR1  ↛4  ¬q5  ↚2  ¬p3  XOR6  NAND7  AND8  XNOR9  p12  ←13  q10  →11  OR14  T15
Neg    T15 OR14  ←13 p12  →11 q10  XNOR9 AND8   NAND7 XOR6   ¬q5  ↛4   ¬p3  ↚2   NOR1  F0
Dual   T15 NAND7 →11 ¬p3  ←13 ¬q5  XNOR9 NOR1   OR14  XOR6   q10  ↚2   p12  ↛4   AND8  F0
L id   –   –     F   –    –   –    F     –      T     T      T,F  T    –    –    F     –
R id   –   –     –   –    F   –    F     –      T     T      –    –    T,F  T    F     –

Here ↛ is material nonimplication (NIMPLY), ↚ is converse nonimplication, → is material implication (IMPLY), and ← is converse implication; "–" marks an empty cell.

where

T = true.
F = false.
The superscripts 0 to 15 are the numbers resulting from reading the four truth values as a binary number with F = 0 and T = 1.
The Com row indicates whether an operator, op, is commutative - P op Q = Q op P.
The Assoc row indicates whether an operator, op, is associative - (P op Q) op R = P op (Q op R).
The Adj row shows the operator op2 such that P op Q = Q op2 P.
The Neg row shows the operator op2 such that P op Q = ¬(P op2 Q).
The Dual row shows the dual operation obtained by interchanging T with F, and AND with OR.
The L id row shows the operator's left identities if it has any - values I such that I op Q = Q.
The R id row shows the operator's right identities if it has any - values I such that P op I = P.
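The superscript numbering can be checked mechanically: reading an operator's output column for the rows (T,T), (T,F), (F,T), (F,F) as a 4-bit binary number with T = 1 recovers its index. A small Python sketch:

```python
# Order of input rows, matching the table: (p, q) from (T, T) down to (F, F).
ROWS = [(True, True), (True, False), (False, True), (False, False)]

def index_of(op) -> int:
    """Read an operator's output column as a 4-bit binary number, (T,T) row first."""
    bits = [op(p, q) for p, q in ROWS]
    return sum(bit << (3 - i) for i, bit in enumerate(bits))

# Spot-check a few named operators against their superscripts.
print(index_of(lambda p, q: p and q))        # AND   -> 8
print(index_of(lambda p, q: p or q))         # OR    -> 14
print(index_of(lambda p, q: not (p or q)))   # NOR   -> 1
print(index_of(lambda p, q: (not p) or q))   # IMPLY -> 11
```

The same scheme generates all sixteen operators: operator number n outputs, on row i, the bit of n with weight 2^(3−i).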

Wittgenstein table

In proposition 5.101 of the Tractatus Logico-Philosophicus, Wittgenstein listed the table above as follows:


Truth values | Operator | Polish | Operation name | Tractatus
0 (F F F F)(p, q) | false | Opq | Contradiction | p and not p; and q and not q
1 (F F F T)(p, q) | NOR, p ↓ q | Xpq | Logical NOR | neither p nor q
2 (F F T F)(p, q) | p ↚ q | Mpq | Converse nonimplication | q and not p
3 (F F T T)(p, q) | ¬p, ~p | Np, Fpq | Negation | not p
4 (F T F F)(p, q) | p ↛ q | Lpq | Material nonimplication | p and not q
5 (F T F T)(p, q) | ¬q, ~q | Nq, Gpq | Negation | not q
6 (F T T F)(p, q) | XOR, p ⊕ q | Jpq | Exclusive disjunction | p or q, but not both
7 (F T T T)(p, q) | NAND, p ↑ q | Dpq | Logical NAND | not both p and q
8 (T F F F)(p, q) | AND, p ∧ q | Kpq | Logical conjunction | p and q
9 (T F F T)(p, q) | XNOR, p iff q | Epq | Logical biconditional | if p then q; and if q then p
10 (T F T F)(p, q) | q | Hpq | Projection function | q
11 (T F T T)(p, q) | p → q, if p then q | Cpq | Material implication | if p then q
12 (T T F F)(p, q) | p | Ipq | Projection function | p
13 (T T F T)(p, q) | p ← q, if q then p | Bpq | Converse implication | if q then p
14 (T T T F)(p, q) | OR, p ∨ q | Apq | Logical disjunction | p or q
15 (T T T T)(p, q) | true | Vpq | Tautology | if p then p; and if q then q

The truth table represented by each row is obtained by appending the sequence given in the Truth values column to the table

p T T F F
q T F T F

For example, the table

p T T F F
q T F T F
11 T F T T

represents the truth table for Material implication.

Logical operators can also be visualized using Venn diagrams.

Logical conjunction (AND)

Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if both of its operands are true.

The truth table for p AND q (also written as p ∧ q, Kpq, p & q, or p · q) is as follows:

p q p ∧ q
T T T
T F F
F T F
F F F

In ordinary language terms, if both p and q are true, then the conjunction p ∧ q is true. For all other assignments of logical values to p and to q, the conjunction p ∧ q is false.

It can also be said that if p, then p ∧ q is q; otherwise p ∧ q is p.

Logical disjunction (OR)

Logical disjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if at least one of its operands is true.

The truth table for p OR q (also written as p ∨ q, Apq, p || q, or p + q) is as follows:

p q p ∨ q
T T T
T F T
F T T
F F F

Stated in English, if p, then p ∨ q is p; otherwise p ∨ q is q.
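The two conditional restatements ("if p, then p ∧ q is q, otherwise p" for conjunction, and "if p, then p ∨ q is p, otherwise q" for disjunction) can be checked over all four rows; incidentally, Python's own and/or operators behave in exactly this value-returning way:

```python
def conj(p: bool, q: bool) -> bool:
    # "if p, then p AND q is q; otherwise it is p"
    return q if p else p

def disj(p: bool, q: bool) -> bool:
    # "if p, then p OR q is p; otherwise it is q"
    return p if p else q

# Both conditional definitions reproduce the AND and OR truth tables.
for p in (True, False):
    for q in (True, False):
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
print("both conditional definitions match the truth tables")
```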

Logical implication

Logical implication and the material conditional are both associated with an operation on two logical values, typically the values of two propositions, which produces a value of false if the first operand is true and the second operand is false, and a value of true otherwise.

The truth table associated with the logical implication p implies q (symbolized as p ⇒ q, or more rarely Cpq) is as follows:

p q p ⇒ q
T T T
T F F
F T T
F F T

The truth table associated with the material conditional if p then q (symbolized as p → q) is as follows:

p q p → q
T T T
T F F
F T T
F F T

It may also be useful to note that p ⇒ q and p → q are equivalent to ¬p ∨ q.
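That equivalence is easy to confirm by exhausting the four rows, defining the conditional straight from its truth table:

```python
rows = [(True, True), (True, False), (False, True), (False, False)]

# Material conditional defined directly from its truth table:
# false exactly when p is true and q is false.
def implies(p: bool, q: bool) -> bool:
    return not (p and not q)

# It matches ¬p ∨ q on all four rows, confirming the equivalence.
for p, q in rows:
    assert implies(p, q) == ((not p) or q)
print("p -> q is equivalent to (not p) or q")
```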

Logical equality

Logical equality (also known as biconditional or exclusive nor) is an operation on two logical values, typically the values of two propositions, that produces a value of true if both operands are false or both operands are true.

The truth table for p XNOR q (also written as p ↔ q, Epq, p = q, or p ≡ q) is as follows:

p q p ↔ q
T T T
T F F
F T F
F F T

So p EQ q is true if p and q have the same truth value (both true or both false), and false if they have different truth values.

Exclusive disjunction

Exclusive disjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if one but not both of its operands is true.

The truth table for p XOR q (also written as Jpq, or p ⊕ q) is as follows:

p q p ⊕ q
T T F
T F T
F T T
F F F

For two propositions, XOR can also be written as (p ∧ ¬q) ∨ (¬p ∧ q).

Logical NAND

The logical NAND is an operation on two logical values, typically the values of two propositions, that produces a value of false if both of its operands are true. In other words, it produces a value of true if at least one of its operands is false.

The truth table for p NAND q (also written as p ↑ q, Dpq, or p | q) is as follows:

p q p ↑ q
T T F
T F T
F T T
F F T

It is frequently useful to express a logical operation as a compound operation, that is, as an operation that is built up or composed from other operations. Many such compositions are possible, depending on the operations that are taken as basic or "primitive" and the operations that are taken as composite or "derivative".

In the case of logical NAND, it is clearly expressible as a compound of NOT and AND.

The negation of a conjunction: ¬(p ∧ q), and the disjunction of negations: (¬p) ∨ (¬q) can be tabulated as follows:

p q p ∧ q ¬(p ∧ q) ¬p ¬q (¬p) ∨ (¬q)
T T T F F F F
T F F T F T T
F T F T T F T
F F F T T T T

Logical NOR

The logical NOR is an operation on two logical values, typically the values of two propositions, that produces a value of true if both of its operands are false. In other words, it produces a value of false if at least one of its operands is true. ↓ is also known as the Peirce arrow after its inventor, Charles Sanders Peirce, and is a Sole sufficient operator.

The truth table for p NOR q (also written as p ↓ q, or Xpq) is as follows:

p q p ↓ q
T T F
T F F
F T F
F F T

The negation of a disjunction ¬(p ∨ q), and the conjunction of negations (¬p) ∧ (¬q) can be tabulated as follows:

p q p ∨ q ¬(p ∨ q) ¬p ¬q (¬p) ∧ (¬q)
T T T F F F F
T F T F F T F
F T T F T F F
F F F T T T T

Inspection of the tabular derivations for NAND and NOR, under each assignment of logical values to the functional arguments p and q, produces the identical patterns of functional values for ¬(p ∧ q) as for (¬p) ∨ (¬q), and for ¬(p ∨ q) as for (¬p) ∧ (¬q). Thus the first and second expressions in each pair are logically equivalent, and may be substituted for each other in all contexts that pertain solely to their logical values.

This equivalence is one of De Morgan's laws.
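Both De Morgan equivalences can be confirmed with the same column-comparison method, here sketched in Python:

```python
from itertools import product

# Check both De Morgan laws by comparing columns over all four assignments.
for p, q in product((True, False), repeat=2):
    assert (not (p and q)) == ((not p) or (not q))  # NAND form
    assert (not (p or q)) == ((not p) and (not q))  # NOR form
print("De Morgan's laws hold on all four rows")
```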

Size of truth tables

If there are n input variables then there are 2^n possible combinations of their truth values. A given function may produce true or false for each combination, so the number of different functions of n variables is the double exponential 2^(2^n).

n 2^n 2^(2^n)
0 1 2
1 2 4
2 4 16
3 8 256
4 16 65,536
5 32 4,294,967,296 ≈ 4.3×10^9
6 64 18,446,744,073,709,551,616 ≈ 1.8×10^19
7 128 340,282,366,920,938,463,463,374,607,431,768,211,456 ≈ 3.4×10^38
8 256 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 ≈ 1.2×10^77

Truth tables for functions of three or more variables are rarely given.
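The growth shown in the table is easy to reproduce; since Python integers are arbitrary-precision, even n = 8 is computed exactly:

```python
# Number of input rows (2^n) and of distinct n-ary Boolean functions (2^(2^n)).
for n in range(9):
    rows = 2 ** n
    functions = 2 ** rows
    print(n, rows, functions)
```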

Applications

Truth tables can be used to prove many other logical equivalences. For example, consider the following truth table:

p q ¬p ¬p ∨ q p → q
T T F T T
T F F F F
F T T T T
F F T T T

This demonstrates the fact that p → q is logically equivalent to ¬p ∨ q.

Truth table for most commonly used logical operators

Here is a truth table that gives definitions of the 7 most commonly used out of the 16 possible truth functions of two Boolean variables P and Q:

P Q | P ∧ Q | P ∨ Q | P ⊕ Q | P XNOR Q | P → Q | P ← Q | P ↔ Q
T T |   T   |   T   |   F   |    T     |   T   |   T   |   T
T F |   F   |   T   |   T   |    F     |   F   |   T   |   F
F T |   F   |   T   |   T   |    F     |   T   |   F   |   F
F F |   F   |   F   |   F   |    T     |   T   |   T   |   T

AND (conjunction) | OR (disjunction) | XOR (exclusive or) | XNOR (exclusive nor) | conditional "if-then" | conditional "then-if" | biconditional "if-and-only-if"

where T means true and F means false

Condensed truth tables for binary operators

For binary operators, a condensed form of truth table is also used, where the row headings and the column headings specify the operands and the table cells specify the result. For example, Boolean logic uses this condensed truth table notation:


F T
F F F
T F T

F T
F F T
T T T

This notation is useful especially if the operations are commutative, although one can additionally specify that the rows are the first operand and the columns are the second operand. This condensed notation is particularly useful in discussing multi-valued extensions of logic, as it significantly cuts down on the combinatoric explosion of the number of rows otherwise needed. It also provides a quickly recognizable characteristic "shape" of the distribution of values in the table, which can assist the reader in grasping the rules more quickly.

Truth tables in digital logic

Truth tables are also used to specify the function of hardware look-up tables (LUTs) in digital logic circuitry. For an n-input LUT, the truth table will have 2^n values (or rows in the above tabular format), completely specifying a boolean function for the LUT. By representing each boolean value as a bit in a binary number, truth table values can be efficiently encoded as integer values in electronic design automation (EDA) software. For example, a 32-bit integer can encode the truth table for a LUT with up to 5 inputs.

When using an integer representation of a truth table, the output value of the LUT can be obtained by calculating a bit index k based on the input values of the LUT, in which case the LUT's output value is the kth bit of the integer. For example, to evaluate the output value of a LUT given an array of n boolean input values, the bit index of the truth table's output value can be computed as follows: if the ith input is true, let Vi = 1, else let Vi = 0. Then the kth bit of the binary representation of the truth table is the LUT's output value, where k = V0·2^0 + V1·2^1 + … + Vn−1·2^(n−1).
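A minimal sketch of this indexing scheme (illustrative only, not any particular EDA tool's API; the 2-input AND encoding 0b1000 is our example, placing the true output at index k = 3):

```python
def lut_output(truth_table: int, inputs: list[bool]) -> bool:
    # Compute the bit index k: input i contributes 2^i when true.
    k = sum(1 << i for i, bit in enumerate(inputs) if bit)
    # The LUT's output is the k-th bit of the integer-encoded truth table.
    return bool((truth_table >> k) & 1)

# Example: a 2-input AND gate. In index order k = 0..3 the rows are
# (F,F)->F, (T,F)->F, (F,T)->F, (T,T)->T, so only bit 3 is set: 0b1000.
AND_TT = 0b1000
print(lut_output(AND_TT, [True, True]))   # True
print(lut_output(AND_TT, [True, False]))  # False
```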

Truth tables are a simple and straightforward way to encode boolean functions; however, given the exponential growth in size as the number of inputs increases, they are not suitable for functions with a large number of inputs. Other representations which are more memory efficient are text equations and binary decision diagrams.

Applications of truth tables in digital electronics

In digital electronics and computer science (fields of applied logic engineering and mathematics), truth tables can be used to reduce basic boolean operations to simple correlations of inputs to outputs, without the use of logic gates or code. For example, a binary addition can be represented with the truth table:

Binary addition
A B | C R
T T | T F
T F | F T
F T | F T
F F | F F

where A is the first operand, B is the second operand, C is the carry digit, and R is the result.

This truth table is read left to right:

  • Value pair (A,B) equals value pair (C,R).
  • Or for this example, A plus B equals result R, with the carry C.

Note that this table does not describe the logic operations necessary to implement this operation, rather it simply specifies the function of inputs to output values.

With respect to the result, this example may be arithmetically viewed as modulo 2 binary addition, and as logically equivalent to the exclusive-or (exclusive disjunction) binary logic operation.

In this case it can be used for only very simple inputs and outputs, such as 1s and 0s. However, if the number of types of values one can have on the inputs increases, the size of the truth table will increase.

For instance, in an addition operation, one needs two operands, A and B. Each can have one of two values, zero or one. The number of combinations of these two values is 2×2, or four. So the result is four possible outputs of C and R. If one were to use base 3, the size would increase to 3×3, or nine possible outputs.

The first "addition" example above is called a half-adder. A full-adder is when the carry from the previous operation is provided as input to the next adder. Thus, a truth table of eight rows would be needed to describe a full adder's logic:

A B C* | C R
0 0 0  | 0 0
0 1 0  | 0 1
1 0 0  | 0 1
1 1 0  | 1 0
0 0 1  | 0 1
0 1 1  | 1 0
1 0 1  | 1 0
1 1 1  | 1 1

Same as previous, but:
C* = Carry from previous adder
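Rather than writing the eight rows by hand, the full-adder table can be generated from integer arithmetic (a sketch; 1 stands for T and 0 for F, matching the table above):

```python
from itertools import product

# For each combination of A, B and the incoming carry C*, the integer sum
# a + b + c_in splits into the outgoing carry C and the result bit R.
for a, b, c_in in product((0, 1), repeat=3):
    c_out, r = divmod(a + b + c_in, 2)
    print(a, b, c_in, "|", c_out, r)
```

The row 1 1 1 | 1 1, for instance, records that 1 + 1 + 1 = 3 = binary 11: carry 1, result 1.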

History

Irving Anellis's research shows that C.S. Peirce appears to be the earliest logician (in 1883) to devise a truth table matrix.

From the summary of Anellis's paper:

In 1997, John Shosky discovered, on the verso of a page of the typed transcript of Bertrand Russell's 1912 lecture on "The Philosophy of Logical Atomism", truth table matrices. The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883–84, in connection with the composition of Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885, includes an example of an indirect truth table for the conditional.

Lie point symmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_point_symmetry     ...