Desktop publishing (DTP) is the creation of documents using page layout software on a personal ("desktop") computer.
It was first used almost exclusively for print publications, but now it
also assists in the creation of various forms of online content. Desktop publishing software can generate layouts and produce typographic-quality text and images comparable to traditional typography and printing. Desktop publishing is also the main reference for digital typography. This technology allows individuals, businesses, and other organizations to self-publish a wide variety of content, from menus to magazines to books, without the expense of commercial printing.
Desktop publishing often requires the use of a personal computer and WYSIWYG page layout software to create documents for either large-scale publishing or small-scale local multifunction peripheral output and distribution - although non-WYSIWYG systems such as TeX and LaTeX are also used, especially in scientific publishing. Desktop publishing methods provide more control over design, layout, and typography than word processing.
However, word processing software has evolved to include most, if not
all, capabilities previously available only with professional printing
or desktop publishing.
Desktop publishing was first developed at Xerox PARC in the 1970s.
A contradictory claim states that desktop publishing began in 1983 with
a program developed by James Davise at a community newspaper in
Philadelphia. The program Type Processor One ran on a PC using a graphics card for a WYSIWYG display and was offered commercially by Best Info in 1984. Desktop typesetting with only limited page makeup facilities arrived in 1978–1979 with the introduction of TeX, and was extended in 1985 with the introduction of LaTeX.
The desktop publishing market took off in 1985 with the introduction in January of the Apple LaserWriter printer. This momentum was kept up with the addition of PageMaker software from Aldus,
which rapidly became the standard software application for desktop
publishing. With its advanced layout features, PageMaker immediately
relegated word processors like Microsoft Word to the composition and editing of purely textual documents. The term "desktop publishing" is attributed to Aldus founder Paul Brainerd,
who sought a marketing catchphrase to describe the small size and
relative affordability of this suite of products, in contrast to the
expensive commercial phototypesetting equipment of the day.
Before the advent of desktop publishing, the only option
available to most people for producing typed documents (as opposed to
handwritten documents) was a typewriter,
which offered only a handful of typefaces (usually fixed-width) and one
or two font sizes. Indeed, one popular desktop publishing book was
titled The Mac is Not a Typewriter, and it had to actually explain how a Mac could do so much more than a typewriter. The ability to create WYSIWYG page layouts on screen and then print pages containing text and graphical elements at crisp 300 dpi
resolution was revolutionary for both the typesetting industry and the
personal computer industry at the time; newspapers and other print
publications made the move to DTP-based programs from older layout
systems such as Atex and other programs in the early 1980s.
Desktop publishing was still in its embryonic stage in the early
1980s. Users of the PageMaker-LaserWriter-Macintosh 512K system endured
frequent software crashes, cramped display on the Mac's tiny 512 x 342 1-bit monochrome screen, the inability to control letter spacing, kerning, and other typographic features,
and the discrepancies between screen display and printed output.
However, it was a revolutionary combination at the time, and was
received with considerable acclaim.
Behind-the-scenes, technologies developed by Adobe Systems
set the foundation for professional desktop publishing applications.
The LaserWriter and LaserWriter Plus printers included high-quality,
scalable Adobe PostScript fonts built into their ROM. The LaserWriter's
PostScript capability allowed publication
designers to proof files on a local printer, then print the same file at
DTP service bureaus using optical resolution 600+ ppi PostScript printers such as those from Linotronic.
Later, the Macintosh II
was released, which was considerably more suitable for desktop
publishing due to its greater expandability, support for large color multi-monitor displays, and its SCSI
storage interface (which allowed fast high-capacity hard drives to be
attached to the system). Macintosh-based systems continued to dominate
the market into 1986, when the GEM-based Ventura Publisher was introduced for MS-DOS
computers. PageMaker's pasteboard metaphor closely simulated the
process of creating layouts manually, but Ventura Publisher automated
the layout process through its use of tags and style sheets
and automatically generated indices and other body matter. This made it
particularly suitable for the creation of manuals and other long-format
documents.
Desktop publishing moved into the home market in 1986 with Professional Page for the Amiga, Publishing Partner (now PageStream) for the Atari ST, GST's Timeworks Publisher on the PC and Atari ST, and Calamus for the Atari TT030. Software was published even for 8-bit computers like the Apple II and Commodore 64: Home Publisher, The Newsroom, and geoPublish.
During its early years, desktop publishing acquired a bad reputation as
a result of untrained users who created poorly organized,
unprofessional-looking "ransom note effect" layouts; similar criticism was leveled again against early World Wide Web
publishers a decade later. However, some desktop publishers who
mastered the programs were able to achieve highly professional results.
Desktop publishing skills were considered of primary importance in
career advancement in the 1980s, but increased accessibility to more
user-friendly DTP software has made DTP a secondary skill to art direction, graphic design, multimedia development, marketing communications, and administrative careers.
DTP skill levels range from what may be learned in a couple of hours
(e.g., how to place clip art in a word processor) to what is typically
acquired through a college education. DTP skills range from technical
skills such as prepress production and programming to creative skills
such as communication design and graphic image development.
As of 2014, Apple computers remain dominant in publishing, even as the most popular software has changed from QuarkXPress – an estimated 95% market share in the 1990s – to Adobe InDesign. As an Ars Technica
writer put it: "I've heard about Windows-based publishing environments,
but I've never actually seen one in my 20+ years in design and
publishing".
Terminology
There are two types of pages in desktop publishing: digital pages, and virtual paper pages to be printed on physical paper pages. All computerized documents are technically digital and are limited in size only by computer memory or computer data storage space. Virtual paper pages will ultimately be printed, and will therefore require paper parameters coinciding with standard physical paper sizes
such as A4, letter, and legal. Alternatively, the virtual
paper page may require a custom size for later trimming. Some desktop
publishing programs allow custom sizes designated for large format
printing used in posters, billboards and trade show displays. A virtual page for printing has a predesignated size of virtual printing material and can be viewed on a monitor in WYSIWYG format. Each page for printing has trim sizes (edge of paper) and a printable area if bleed printing is not possible, as is the case with most desktop printers. A web page
is an example of a digital page that is not constrained by virtual
paper parameters. Most digital pages may be dynamically re-sized,
causing either the content to scale in size with the page or the content to re-flow.
Master pages are templates used to automatically copy or link
elements and graphic design styles to some or all the pages of a
multipage document. Linked elements can be modified without having to
change each instance of an element on pages that use the same element.
Master pages can also be used to apply graphic design styles to
automatic page numbering. Cascading Style Sheets can provide the same global formatting functions for web pages that master pages provide for virtual paper pages. Page layout
is the process by which elements are laid out on the page in an orderly,
aesthetic, and precise manner. The main types of components to be laid out on a
page include text, linked images
(that can only be modified as an external source), and embedded images
(that may be modified with the layout application software). Some
embedded images are rendered in the application software, while others can be placed from an external source image file. Text may be keyed into the layout, placed, or – with database publishing applications – linked to an external source of text which allows multiple editors to develop a document at the same time.
Graphic design styles such as color, transparency and filters may also be applied to layout elements. Typography styles may be applied to text automatically with style sheets.
Some layout programs include style sheets for images in addition to
text. Graphic styles for images may include border shapes, colors,
transparency, filters, and a parameter designating the way text flows
around the object (also known as "wraparound" or "runaround").
Comparisons
With word processing
While
desktop publishing software still provides extensive features necessary
for print publishing, modern word processors now have publishing
capabilities beyond those of many older DTP applications, blurring the
line between word processing and desktop publishing.
In the early 1980s, the graphical user interface
was still in its embryonic stage and DTP software was in a class of its
own when compared to the leading word processing applications of the
time. Programs such as WordPerfect and WordStar
were still mainly text-based and offered little in the way of page
layout, other than perhaps margins and line spacing. On the other hand,
word processing software was necessary for features like indexing and
spell checking – features that are common in many applications today. As
computers and operating systems became more powerful, versatile, and
user-friendly in the 2010s, vendors have sought to provide users with a
single application that can meet almost all their publication needs.
With other digital layout software
In earlier modern-day usage, DTP usually does not include digital tools such as TeX or troff, though both can easily be used on a modern desktop system, and are standard with many Unix-like operating systems and are readily available for other systems. The key difference between digital typesetting software and DTP software is that DTP software is generally interactive and "What you see [onscreen] is what you get" (WYSIWYG) in design, while other digital typesetting software, such as TeX, LaTeX and other variants, tends to operate in "batch mode", requiring the user to enter the processing program's markup language (e.g., LaTeX)
without immediate visualization of the finished product. This kind of
workflow is less user-friendly than WYSIWYG, but more suitable for
conference proceedings and scholarly articles as well as corporate
newsletters or other applications where consistent, automated layout is
important.
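To make the batch-mode workflow concrete, here is a minimal, illustrative LaTeX source sketch: the author types only markup and sees the finished layout only after the file is compiled, which is what yields the consistent, automated results described above.

```latex
% minimal.tex -- an illustrative batch-mode source file.
% The author writes markup only; the formatted page appears after
% compilation (e.g. with: pdflatex minimal.tex).
\documentclass{article}
\title{A Minimal Example}
\author{A. Author}
\begin{document}
\maketitle
\section{Introduction}
The same source always yields the same typeset output, which suits
proceedings, scholarly articles, and newsletters that need a
consistent, automated layout.
\end{document}
```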
In the 2010s, interactive front-end components of TeX, such as TeXworks and LyX, have produced "what you see is what you mean" (WYSIWYM) hybrids of DTP and batch processing.[13] These hybrids are focused more on the semantics
than traditional DTP is. Furthermore, with the advent of TeX editors,
the line between desktop publishing and markup-based typesetting is
becoming increasingly narrow; GNU TeXmacs, for example, is a program
that sets itself apart from the TeX world and is developing in the
direction of WYSIWYG markup-based typesetting.
On a different note, there is a slight overlap between desktop publishing and what is known as hypermedia publishing (e.g. web design, kiosk, CD-ROM). Many graphical HTML editors such as Microsoft FrontPage and Adobe Dreamweaver
use a layout engine similar to that of a DTP program. However, many web
designers still prefer to write HTML without the assistance of a
WYSIWYG editor, for greater control and ability to fine-tune the
appearance and functionality. Another reason that some Web designers
write in HTML is that WYSIWYG editors often result in excessive lines of
code, leading to code bloat that can make the pages hard to troubleshoot.
Deductive reasoning is the mental process of drawing deductive inferences. An inference is deductively valid if its conclusion follows logically from its premises, i.e. it is impossible for the premises to be true and the conclusion to be false.
For example, the inference from the premises "all men are mortal" and "Socrates is a man" to the conclusion "Socrates is mortal" is deductively valid. An argument is sound if it is valid
and all its premises are true. Some theorists define deduction in terms
of the intentions of the author: they have to intend for the premises
to offer deductive support to the conclusion. With the help of this
modification, it is possible to distinguish valid from invalid deductive
reasoning: it is invalid if the author's belief about the deductive
support is false, but even invalid deductive reasoning is a form of
deductive reasoning.
Psychology is interested in deductive reasoning as a psychological process, i.e. how people actually draw inferences. Logic, on the other hand, focuses on the deductive relation of logical consequence between the premises and the conclusion or how people should draw inferences. There are different ways of conceptualizing this relation. According to the semantic approach, an argument is deductively valid if and only if there is no possible interpretation of this argument where its premises are true and its conclusion is false. The syntactic
approach, on the other hand, holds that an argument is deductively
valid if and only if its conclusion can be deduced from its premises
using a valid rule of inference. A rule of inference is a schema of drawing a conclusion from a set of premises based only on their logical form.
There are various rules of inference, like the modus ponens and the modus tollens. Invalid deductive arguments, which do not follow a rule of inference, are called formal fallacies.
Rules of inference are definitory rules and contrast to strategic
rules, which specify what inferences one needs to draw in order to
arrive at an intended conclusion. Deductive reasoning contrasts with
non-deductive or ampliative reasoning. For ampliative arguments, like inductive or abductive arguments,
the premises offer weaker support to their conclusion: they make it
more likely but they do not guarantee its truth. They make up for this
drawback by being able to provide genuinely new information not already
found in the premises, unlike deductive arguments.
Cognitive psychology
investigates the mental processes responsible for deductive reasoning.
One of its topics concerns the factors determining whether people draw
valid or invalid deductive inferences. One factor is the form of the
argument: for example, people are more successful for arguments of the
form modus ponens than for modus tollens. Another is the content of the
arguments: people are more likely to believe that an argument is valid
if the claim made in its conclusion is plausible. A general finding is
that people tend to perform better for realistic and concrete cases than
for abstract cases. Psychological theories of deductive reasoning aim
to explain these findings by providing an account of the underlying
psychological processes. Mental logic theories hold that deductive reasoning is a language-like process that happens through the manipulation of representations using rules of inference. Mental model theories,
on the other hand, claim that deductive reasoning involves models of
possible states of the world without the medium of language or rules of
inference. According to dual-process theories of reasoning, there are two qualitatively different cognitive systems responsible for reasoning.
The problem of deductive reasoning is relevant to various fields and issues. Epistemology tries to understand how justification is transferred from the belief in the premises to the belief in the conclusion in the process of deductive reasoning. Probability logic
studies how the probability of the premises of an inference affects the
probability of its conclusion. The controversial thesis of deductivism denies that there are other correct forms of inference besides deduction. Natural deduction is a type of proof system based on simple and self-evident rules of inference. In philosophy,
the geometrical method is a way of philosophizing that starts from a
small set of self-evident axioms and tries to build a comprehensive
logical system using deductive reasoning.
Definition
Deductive reasoning is the psychological process of drawing deductive inferences. An inference is a set of premises together with a conclusion. This psychological process starts from the premises and reasons to a conclusion based on and supported by these premises. If the reasoning was done correctly, it results in a valid deduction: the truth of the premises ensures the truth of the conclusion. For example, in the syllogistic
argument "all frogs are reptiles; no cats are reptiles; therefore, no
cats are frogs" the conclusion is true because its two premises are
true. But even arguments with wrong premises can be deductively valid if
they obey this principle, as in "all frogs are mammals; no cats are
mammals; therefore, no cats are frogs". If the premises of a valid argument are true, then it is called a sound argument.
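Both syllogisms share the same first-order form, sketched below with illustrative predicate letters (F for the first category, R for the middle category, C for the second category); this notation is an aid added here, not part of the quoted examples.

```latex
% Shared logical form of the two syllogisms above:
\[
  \forall x\,(F(x) \rightarrow R(x)),\quad
  \neg\exists x\,(C(x) \wedge R(x))
  \;\vdash\;
  \neg\exists x\,(C(x) \wedge F(x))
\]
```

Validity depends only on this form; soundness additionally requires that the premises are in fact true, as in the frog example but not in the mammal example.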
The relation between the premises and the conclusion of a deductive argument is usually referred to as "logical consequence". According to Alfred Tarski, logical consequence has three essential features: it is necessary, formal, and knowable a priori. It is necessary in the sense that the premises of valid deductive
arguments necessitate the conclusion: it is impossible for the premises
to be true and the conclusion to be false, independent of any other
circumstances.
Logical consequence is formal in the sense that it depends only on the
form or the syntax of the premises and the conclusion. This means that
the validity of a particular argument does not depend on the specific
contents of this argument. If it is valid, then any argument with the
same logical form is also valid, no matter how different it is on the
level of its contents. Logical consequence is knowable a priori in the sense that no empirical
knowledge of the world is necessary to determine whether a deduction is
valid. So it is not necessary to engage in any form of empirical
investigation. Some logicians define deduction in terms of possible worlds:
A deductive inference is valid if and only if there is no possible
world in which its conclusion is false while its premises are true. This
means that there are no counterexamples: the conclusion is true in all such cases, not just in most cases.
It has been argued against this and similar definitions that they
fail to distinguish between valid and invalid deductive reasoning, i.e.
they leave it open whether there are invalid deductive inferences and
how to define them.
Some authors define deductive reasoning in psychological terms in order
to avoid this problem. According to Mark Vorobey, whether an argument
is deductive depends on the psychological state of the person making the
argument: "An argument is deductive if, and only if, the author of the
argument believes that the truth of the premises necessitates
(guarantees) the truth of the conclusion". A similar formulation holds that the speaker claims or intends that the premises offer deductive support for their conclusion. This is sometimes categorized as a speaker-determined definition of deduction since it depends also on the speaker whether the argument in question is deductive or not. For speakerless definitions, on the other hand, only the argument itself matters independent of the speaker.
One advantage of this type of formulation is that it makes it possible
to distinguish between good or valid and bad or invalid deductive
arguments: the argument is good if the author's belief concerning the relation between the premises and the conclusion is true, otherwise it is bad.
One consequence of this approach is that deductive arguments cannot be
identified by the rule of inference they use. For example, an argument of
the form modus ponens
may be non-deductive if the author's beliefs are sufficiently confused.
That brings with it an important drawback of this definition: it is
difficult to apply to concrete cases since the intentions of the author
are usually not explicitly stated.
Deductive reasoning is studied in logic, psychology, and the cognitive sciences.
Some theorists emphasize in their definition the difference between
these fields. On this view, psychology studies deductive reasoning as an
empirical mental process, i.e. what happens when humans engage in
reasoning. But the descriptive question of how actual reasoning happens is different from the normative question of how it should happen or what constitutes correct deductive reasoning, which is studied by logic.
This is sometimes expressed by stating that, strictly speaking, logic
does not study deductive reasoning but the deductive relation between
premises and a conclusion known as logical consequence. But this distinction is not always precisely observed in the academic literature. One important aspect of this difference is that logic is not interested in whether the conclusion of an argument is sensible.
So from the premise "the printer has ink" one may draw the unhelpful
conclusion "the printer has ink and the printer has ink and the printer
has ink", which has little relevance from a psychological point of view.
Instead, actual reasoners usually try to remove redundant or irrelevant
information and make the relevant information more explicit.
The psychological study of deductive reasoning is also concerned with
how good people are at drawing deductive inferences and with the factors
determining their performance. Deductive inferences are found both in natural language and in formal logical systems, such as propositional logic.
Conceptions of deduction
Deductive
arguments differ from non-deductive arguments in that the truth of
their premises ensures the truth of their conclusion. There are two important conceptions of what this exactly means. They are referred to as the syntactic and the semantic approach.
According to the syntactic approach, whether an argument is deductively
valid depends only on its form, syntax, or structure. Two arguments
have the same form if they use the same logical vocabulary in the same
arrangement, even if their contents differ.
For example, the arguments "if it rains then the street will be wet; it
rains; therefore, the street will be wet" and "if the meat is not
cooled then it will spoil; the meat is not cooled; therefore, it will
spoil" have the same logical form: they follow the modus ponens. Their form can be expressed more abstractly as "if A then B; A; therefore B" in order to make the common syntax explicit. There are various other valid logical forms or rules of inference, like modus tollens or the disjunction elimination.
The syntactic approach then holds that an argument is deductively valid
if and only if its conclusion can be deduced from its premises using a
valid rule of inference. One difficulty for the syntactic approach is that it is usually necessary to express the argument in a formal language in order to assess whether it is valid. But since the problem of deduction is also relevant for natural languages,
this often brings with it the difficulty of translating the natural
language argument into a formal language, a process that comes with
various problems of its own.
Another difficulty is due to the fact that the syntactic approach
depends on the distinction between formal and non-formal features. While
there is a wide agreement concerning the paradigmatic cases, there are
also various controversial cases where it is not clear how this
distinction is to be drawn.
The semantic approach suggests an alternative definition of
deductive validity. It is based on the idea that the sentences
constituting the premises and conclusions have to be interpreted in order to determine whether the argument is valid. This means that one ascribes semantic values to the expressions used in the sentences, such as the reference to an object for singular terms or to a truth-value for atomic sentences. The semantic approach is also referred to as the model-theoretic approach since the branch of mathematics known as model theory is often used to interpret these sentences.
Usually, many different interpretations are possible, such as whether a
singular term refers to one object or to another. According to the
semantic approach, an argument is deductively valid if and only if there
is no possible interpretation where its premises are true and its
conclusion is false.
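In symbols, this semantic (model-theoretic) criterion can be sketched as follows, where Γ is the set of premises, C the conclusion, and I ranges over interpretations; the notation is a common textbook convention rather than a quotation from the sources above.

```latex
% Semantic validity: the premises entail the conclusion iff no
% interpretation makes all premises true and the conclusion false.
\[
  \Gamma \models C
  \quad\Longleftrightarrow\quad
  \text{for every interpretation } I:\;
  \bigl(I \models \gamma \text{ for all } \gamma \in \Gamma\bigr)
  \;\Rightarrow\; I \models C .
\]
```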
Some objections to the semantic approach are based on the claim that
the semantics of a language cannot be expressed in the same language,
i.e. that a richer metalanguage
is necessary. This would imply that the semantic approach cannot
provide a universal account of deduction for language as an
all-encompassing medium.
Rules of inference
Deductive reasoning usually happens by applying rules of inference. A rule of inference is a way or schema of drawing a conclusion from a set of premises. This happens usually based only on the logical form
of the premises. A rule of inference is valid if, when applied to true
premises, the conclusion cannot be false. A particular argument is valid
if it follows a valid rule of inference. Deductive arguments that do
not follow a valid rule of inference are called formal fallacies: the truth of their premises does not ensure the truth of their conclusion.
In some cases, whether a rule of inference is valid depends on the logical system one is using. The dominant logical system is classical logic and the rules of inference listed here are all valid in classical logic. But so-called deviant logics provide a different account of which inferences are valid. For example, the rule of inference known as double negation elimination, i.e. that if a proposition is not not true then it is also true, is accepted in classical logic but rejected in intuitionistic logic.
Modus ponens (also known as "affirming the antecedent" or "the law of detachment") is the primary deductive rule of inference. It applies to arguments that have as first premise a conditional statement (P → Q) and as second premise the antecedent (P) of the conditional statement. It obtains the consequent (Q) of the conditional statement as its conclusion. The argument form is listed below:
P → Q (First premise is a conditional statement)
P (Second premise is the antecedent)
Q (Conclusion deduced is the consequent)
In this form of deductive reasoning, the consequent (Q) obtains as the conclusion from the premises of a conditional statement (P → Q) and its antecedent (P). However, the antecedent (P) cannot be similarly obtained as the conclusion from the premises of the conditional statement (P → Q) and the consequent (Q). Such an argument commits the logical fallacy of affirming the consequent.
The following is an example of an argument using modus ponens:
If it is raining, then there are clouds in the sky.
It is raining.
Therefore, there are clouds in the sky.
Modus tollens (also known as "the law of contrapositive") is a
deductive rule of inference. It validates an argument that has as
premises a conditional statement (P → Q) and the negation of the
consequent (¬Q), and as conclusion the negation of the antecedent (¬P). In contrast to modus ponens,
reasoning with modus tollens goes in the opposite direction to that of
the conditional. The general expression for modus tollens is the
following:
P → Q. (First premise is a conditional statement)
¬Q. (Second premise is the negation of the consequent)
¬P. (Conclusion deduced is the negation of the antecedent)
The following is an example of an argument using modus tollens:
If it is raining, then there are clouds in the sky.
There are no clouds in the sky.
Thus, it is not raining.
A hypothetical syllogism
is an inference that takes two conditional statements and forms a
conclusion by combining the hypothesis of one statement with the
conclusion of another. Here is the general form:
P → Q
Q → R
Therefore, P → R.
Because the two premises share a subformula (here Q) that does not
occur in the conclusion, this resembles syllogisms in term logic,
although it differs in that this subformula is a proposition, whereas in
Aristotelian logic this common element is a term and not a
proposition.
The following is an example of an argument using a hypothetical syllogism:
If there had been a thunderstorm, it would have rained.
If it had rained, things would have gotten wet.
Thus, if there had been a thunderstorm, things would have gotten wet.
Fallacies
Various formal fallacies have been described. They are invalid forms of deductive reasoning.
An additional aspect of them is that they appear to be valid on some
occasions or on the first impression. They may thereby seduce people
into accepting and committing them. One type of formal fallacy is affirming the consequent, as in "if John is a bachelor, then he is male; John is male; therefore, John is a bachelor". This is similar to the valid rule of inference named modus ponens, but the second premise and the conclusion are switched around, which is why it is invalid. A similar formal fallacy is denying the antecedent, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore, Othello is not male". This is similar to the valid rule of inference called modus tollens, the difference being that the second premise and the conclusion are switched around. Other formal fallacies include affirming a disjunct, denying a conjunct, and the fallacy of the undistributed middle.
All of them have in common that the truth of their premises does not
ensure the truth of their conclusion. But it may still happen by
coincidence that both the premises and the conclusion of formal
fallacies are true.
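A concrete way to see why such fallacies fail is to exhibit the counterexample assignment directly. The short Python sketch below (illustrative only, not from the article's sources) does this for affirming the consequent:

```python
# Affirming the consequent: from "if P then Q" and "Q", infer "P".
# Enumerating the four truth assignments exposes the counterexample.
implies = lambda a, b: (not a) or b
for p in (True, False):
    for q in (True, False):
        if implies(p, q) and q and not p:
            print(f"counterexample: P={p}, Q={q}")  # P=False, Q=True
```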
Definitory and strategic rules
Rules
of inferences are definitory rules: they determine whether an argument
is deductively valid or not. But reasoners are usually not just
interested in making any kind of valid argument. Instead, they often
have a specific point or conclusion that they wish to prove or refute.
So given a set of premises, they are faced with the problem of choosing
the relevant rules of inference for their deduction to arrive at their
intended conclusion.
This issue belongs to the field of strategic rules: the question of
which inferences need to be drawn to support one's conclusion. The
distinction between definitory and strategic rules is not exclusive to
logic: it is also found in various games. In chess, for example, the definitory rules state that bishops may only move diagonally while the strategic rules recommend that one should control the center and protect one's king
if one intends to win. In this sense, definitory rules determine
whether one plays chess or something else whereas strategic rules
determine whether one is a good or a bad chess player. The same applies to deductive reasoning: to be an effective reasoner involves mastering both definitory and strategic rules.
Validity and soundness
Deductive arguments are evaluated in terms of their validity and soundness.
An argument is “valid” if it is impossible for its premises
to be true while its conclusion is false. In other words, the
conclusion must be true if the premises are true. An argument can be
“valid” even if one or more of its premises are false.
An argument is “sound” if it is valid and the premises are true.
It is possible to have a deductive argument that is logically valid but is not sound. Fallacious arguments often take that form.
The following is an example of an argument that is “valid”, but not “sound”:
Everyone who eats carrots is a quarterback.
John eats carrots.
Therefore, John is a quarterback.
The example's first premise is false – there are people who eat
carrots who are not quarterbacks – but the conclusion would necessarily
be true, if the premises were true. In other words, it is impossible for
the premises to be true and the conclusion false. Therefore, the
argument is “valid”, but not “sound”. False generalizations – such as
"Everyone who eats carrots is a quarterback" – are often used to make
unsound arguments. The fact that there are some people who eat carrots
but are not quarterbacks proves the flaw of the argument.
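The structure of this valid-but-unsound argument can be sketched in first-order notation; the predicate letters below are illustrative choices, not part of the example as quoted.

```latex
% E(x): "x eats carrots"; Q(x): "x is a quarterback" (illustrative).
\[
  \forall x\,(E(x) \rightarrow Q(x)),\quad E(\mathit{John})
  \;\vdash\; Q(\mathit{John})
\]
% The form is valid; the argument is unsound only because the first
% premise is false.
```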
Deductive reasoning can be contrasted with inductive reasoning,
in regards to validity and soundness. In cases of inductive reasoning,
even though the premises are true and the argument is “valid”, it is
possible for the conclusion to be false (determined to be false with a
counterexample or other means).
Difference from ampliative reasoning
Deductive reasoning is usually contrasted with non-deductive or ampliative reasoning.
The hallmark of valid deductive inferences is that it is impossible for
their premises to be true and their conclusion to be false. In this
way, the premises provide the strongest possible support to their
conclusion.
The premises of ampliative inferences also support their conclusion.
But this support is weaker: they are not necessarily truth-preserving.
So even for correct ampliative arguments, it is possible that their
premises are true and their conclusion is false. Two important forms of ampliative reasoning are inductive and abductive reasoning. Sometimes the term "inductive reasoning" is used in a very wide sense to cover all forms of ampliative reasoning. However, in a more strict usage, inductive reasoning is just one form of ampliative reasoning. In the narrow sense, inductive inferences are forms of statistical generalization. They are usually based on many individual observations
that all show a certain pattern. These observations are then used to
form a conclusion either about a yet unobserved entity or about a
general law.
For abductive inferences, the premises support the conclusion because
the conclusion is the best explanation of why the premises are true.
The support ampliative arguments provide for their conclusion
comes in degrees: some ampliative arguments are stronger than others. This is often explained in terms of probability: the premises make it more likely that the conclusion is true.
Strong ampliative arguments make their conclusion very likely, but not
absolutely certain. An example of ampliative reasoning is the inference
from the premise "every raven in a random sample of 3200 ravens is
black" to the conclusion "all ravens are black": the extensive random
sample makes the conclusion very likely, but it does not exclude that
there are rare exceptions.
In this sense, ampliative reasoning is defeasible: it may become
necessary to retract an earlier conclusion upon receiving new related
information. Ampliative reasoning is very common in everyday discourse and the sciences.
An important drawback of deductive reasoning is that it does not lead to genuinely new information.
This means that the conclusion only repeats information already found
in the premises. Ampliative reasoning, on the other hand, goes beyond
the premises by arriving at genuinely new information.
One difficulty for this characterization is that it makes deductive
reasoning appear useless: if deduction is uninformative, it is not clear
why people would engage in it and study it.
It has been suggested that this problem can be solved by distinguishing
between surface and depth information. On this view, deductive
reasoning is uninformative on the depth level, in contrast to ampliative
reasoning. But it may still be valuable on the surface level by
presenting the information in the premises in a new and sometimes
surprising way.
A popular misconception of the relation between deduction and
induction identifies their difference on the level of particular and
general claims. On this view, deductive inferences start from general premises and draw
particular conclusions, while inductive inferences start from
particular premises and draw general conclusions. This idea is often
motivated by seeing deduction and induction as two inverse processes
that complement each other: deduction is top-down while induction is bottom-up. But this is a misconception that does not reflect how valid deduction is defined in the field of logic:
a deduction is valid if it is impossible for its premises to be true
while its conclusion is false, independent of whether the premises or
the conclusion are particular or general. Because of this, some deductive inferences have a general conclusion and some also have particular premises.
In various fields
Cognitive psychology
Cognitive psychology studies the psychological processes responsible for deductive reasoning.
It is concerned, among other things, with how good people are at
drawing valid deductive inferences. This includes the study of the
factors affecting their performance, their tendency to commit fallacies, and the underlying biases involved.
A notable finding in this field is that the type of deductive inference
has a significant impact on whether the correct conclusion is drawn. In a meta-analysis of 65 studies, for example, 97% of the subjects evaluated modus ponens inferences correctly, while the success rate for modus tollens was only 72%. On the other hand, even some fallacies like affirming the consequent or denying the antecedent were regarded as valid arguments by the majority of the subjects.
An important factor for these mistakes is whether the conclusion seems
initially plausible: the more believable the conclusion is, the higher
the chance that a subject will mistake a fallacy for a valid argument.
An important bias is the matching bias, which is often illustrated using the Wason selection task. In an often-cited experiment by Peter Wason,
4 cards are presented to the participant. In one case, the visible
sides show the symbols D, K, 3, and 7 on the different cards. The
participant is told that every card has a letter on one side and a
number on the other side, and that "[e]very card which has a D on one
side has a 3 on the other side". Their task is to identify which cards
need to be turned around in order to confirm or refute this conditional
claim. The correct answer, only given by about 10%, is the cards D and
7. Many select card 3 instead, even though the conditional claim does
not involve any requirements on what symbols can be found on the
opposite side of card 3.
But this result can be drastically changed if different symbols are
used: the visible sides show "drinking a beer", "drinking a coke", "16
years of age", and "22 years of age" and the participants are asked to
evaluate the claim "[i]f a person is drinking beer, then the person must
be over 19 years of age". In this case, 74% of the participants
identified correctly that the cards "drinking a beer" and "16 years of
age" have to be turned around.
These findings suggest that the deductive reasoning ability is heavily
influenced by the content of the involved claims and not just by the
abstract logical form of the task: the more realistic and concrete the
cases are, the better the subjects tend to perform.
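The normatively correct answer in the abstract card version can be made explicit in a short Python sketch (card faces and helper names are illustrative, and for simplicity hidden faces are assumed to come from the same small symbol sets used in the experiment): a card must be turned over exactly when something on its hidden side could falsify the conditional claim.

```python
# Sketch of the abstract Wason selection task (D, K, 3, 7).
letters = ["D", "K"]   # possible letter faces
numbers = ["3", "7"]   # possible number faces

def violates(letter, number):
    """The rule 'every card with a D on one side has a 3 on the
    other' is broken exactly by a card pairing D with a non-3."""
    return letter == "D" and number != "3"

def must_turn(visible):
    """A card must be turned over if some possible hidden face,
    combined with the visible face, would violate the rule."""
    if visible in letters:
        return any(violates(visible, hidden) for hidden in numbers)
    return any(violates(hidden, visible) for hidden in letters)

print([card for card in ["D", "K", "3", "7"] if must_turn(card)])
# -> ['D', '7']: only these two cards can reveal a violation.
```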
Another bias is called the "negative conclusion bias", which happens when one of the premises has the form of a negative material conditional,
as in "If the card does not have an A on the left, then it has a 3 on
the right. The card does not have a 3 on the right. Therefore, the card
has an A on the left". The increased tendency to misjudge the validity
of this type of argument is not present for positive material
conditionals, as in "If the card has an A on the left, then it has a 3
on the right. The card does not have a 3 on the right. Therefore, the
card does not have an A on the left".
Psychological theories of deductive reasoning
Various
psychological theories of deductive reasoning have been proposed. These
theories aim to explain how deductive reasoning works in relation to
the underlying psychological processes responsible. They are often used
to explain the empirical findings, such as why human reasoners are more
susceptible to some types of fallacies than to others.
An important distinction is between mental logic theories, sometimes also referred to as rule theories, and mental model theories. Mental logic theories see deductive reasoning as a language-like process that happens through the manipulation of representations. This is done by applying syntactic rules of inference in a way very similar to how systems of natural deduction transform their premises to arrive at a conclusion. On this view, some deductions are simpler than others since they involve fewer inferential steps. This idea can be used, for example, to explain why humans have more difficulties with some deductions, like the modus tollens, than with others, like the modus ponens:
because the more error-prone forms do not have a native rule of
inference but need to be calculated by combining several inferential
steps with other rules of inference. In such cases, the additional
cognitive labor makes the inferences more open to error.
Mental model theories, on the other hand, hold that deductive reasoning involves models or mental representations of possible states of the world without the medium of language or rules of inference.
In order to assess whether a deductive inference is valid, the
reasoner mentally constructs models that are compatible with the
premises of the inference. The conclusion is then tested by looking at
these models and trying to find a counterexample in which the conclusion
is false. The inference is valid if no such counterexample can be
found. In order to reduce cognitive labor, only such models are represented in
which the premises are true. Because of this, the evaluation of some
forms of inference only requires the construction of very few models
while for others, many different models are necessary. In the latter
case, the additional cognitive labor required makes deductive reasoning
more error-prone, thereby explaining the increased rate of error
observed.
This theory can also explain why some errors depend on the content
rather than the form of the argument. For example, when the conclusion
of an argument is very plausible, the subjects may lack the motivation
to search for counterexamples among the constructed models.
Both mental logic theories and mental model theories assume that
there is one general-purpose reasoning mechanism that applies to all
forms of deductive reasoning.
But there are also alternative accounts that posit various different
special-purpose reasoning mechanisms for different contents and
contexts. In this sense, it has been claimed that humans possess a
special mechanism for permissions and obligations, specifically for
detecting cheating in social exchanges. This can be used to explain why
humans are often more successful in drawing valid inferences if the
contents involve human behavior in relation to social norms. Another example is the so-called dual-process theory.
This theory posits that there are two distinct cognitive systems
responsible for reasoning. Their interrelation can be used to explain
commonly observed biases in deductive reasoning. System 1 is the older
system in terms of evolution. It is based on associative learning and
happens fast and automatically without demanding many cognitive
resources.
System 2, on the other hand, is of more recent evolutionary origin. It
is slow and cognitively demanding, but also more flexible and under
deliberate control.
The dual-process theory posits that system 1 is the default system
guiding most of our everyday reasoning in a pragmatic way. But for
particularly difficult problems on the logical level, system 2 is
employed. System 2 is mostly responsible for deductive reasoning.
Intelligence
The ability of deductive reasoning is an important aspect of intelligence and many tests of intelligence include problems that call for deductive inferences. Because of this relation to intelligence, deduction is highly relevant to psychology and the cognitive sciences. But the subject of deductive reasoning is also pertinent to the computer sciences, for example, in the creation of artificial intelligence.
Epistemology
Deductive reasoning plays an important role in epistemology. Epistemology is concerned with the question of justification, i.e. to point out which beliefs are justified and why. Deductive inferences are able to transfer the justification of the premises onto the conclusion.
So while logic is interested in the truth-preserving nature of
deduction, epistemology is interested in the justification-preserving
nature of deduction. There are different theories trying to explain why
deductive reasoning is justification-preserving. According to reliabilism,
this is the case because deductions are truth-preserving: they are
reliable processes that ensure a true conclusion given the premises are
true.
Some theorists hold that the thinker has to have explicit awareness of
the truth-preserving nature of the inference for the justification to be
transferred from the premises to the conclusion. One consequence of
such a view is that, for young children, this deductive transference
does not take place since they lack this specific awareness.
Probability logic
Probability logic
is interested in how the probability of the premises of an argument
affects the probability of its conclusion. It differs from classical
logic, which assumes that propositions are either true or false but does
not take into consideration the probability or certainty that a
proposition is true or false.
The probability of the conclusion of a deductive argument cannot be
calculated by figuring out the cumulative probability of the argument’s
premises. Dr. Timothy McGrew, a specialist in the applications of probability theory, and Dr. Ernest W. Adams, a Professor Emeritus at UC Berkeley,
pointed out that the theorem on the accumulation of uncertainty
designates only a lower limit on the probability of the conclusion. So
the probability of the conjunction of the argument’s premises sets only a
minimum probability of the conclusion. The probability of the
argument’s conclusion cannot be any lower than the probability of the
conjunction of the argument’s premises. For example, if the probability
of a deductive argument’s four premises is ~0.43, then it is assured
that the probability of the argument’s conclusion is no less than ~0.43.
It could be much higher, but it cannot drop under that lower limit.
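The lower bound described here can be written compactly; this is a sketch of the standard formulation rather than a quotation from the cited authors.

```latex
% If a conclusion C follows deductively from premises P_1, ..., P_n:
\[
  \Pr(C) \;\ge\; \Pr(P_1 \wedge P_2 \wedge \dots \wedge P_n)
\]
% In the four-premise example above, a conjunction probability of
% about 0.43 guarantees Pr(C) >= 0.43; the conclusion may be more
% probable than that, but not less.
```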
There can be examples in which each single premise is more likely
true than not and yet it would be unreasonable to accept the
conjunction of the premises. Professor Henry Kyburg, who was known for his work in probability and logic,
clarified that the issue here is one of closure – specifically, closure
under conjunction. There are examples where it is reasonable to accept P
and reasonable to accept Q without its being reasonable to accept the
conjunction (P&Q). Lotteries serve as very intuitive examples of
this, because in a basic non-discriminatory finite lottery with only a
single winner to be drawn, it is sound to think that ticket 1 is a
loser, sound to think that ticket 2 is a loser,...all the way up to the
final number. However, clearly, it is irrational to accept the
conjunction of these statements; the conjunction would deny the very
terms of the lottery because (taken with the background knowledge) it
would entail that there is no winner.
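A small numerical sketch of the lottery case (the figures below are illustrative, not from the cited discussion): with n equally likely tickets and exactly one winner, each individual "ticket i loses" claim is highly probable, yet their conjunction has probability zero.

```python
# Illustrative lottery with n tickets and exactly one winner.
n = 1000

# Each individual claim "ticket i is a loser" is highly probable:
p_ticket_i_loses = (n - 1) / n
print(p_ticket_i_loses)   # 0.999

# But the conjunction "every ticket loses" contradicts the background
# knowledge that there is exactly one winner, so its probability is 0;
# acceptance is not closed under conjunction.
p_all_lose = 0.0
print(p_all_lose)
```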
Dr. McGrew further adds that the sole method to ensure that a
conclusion deductively drawn from a group of premises is more probable
than not is to use premises the conjunction of which is more probable
than not. This point is slightly tricky, because it can lead to a
possible misunderstanding. What is being searched for is a general
principle that specifies factors under which, for any logical
consequence C of the group of premises, C is more probable than not.
Particular consequences will differ in their probability. However, the
goal is to state a condition under which this attribute is ensured,
regardless of which consequence one draws, and fulfilment of that
condition is required to complete the task.
This principle can be demonstrated in a moderately clear way. Suppose, for instance, the following group of premises:
{P, Q, R}
Suppose that the conjunction ((P & Q) & R) fails to be
more probable than not. Then there is at least one logical consequence
of the group that fails to be more probable than not – namely, that very
conjunction. So it is an essential factor for the argument to “preserve
plausibility” (Dr. McGrew coins this phrase to mean “guarantee, from
information about the plausibility of the premises alone, that any
conclusion drawn from those premises by deductive inference is itself
more plausible than not”) that the conjunction of the premises be more
probable than not.
History
Aristotle, a Greek philosopher, started documenting deductive reasoning in the 4th century BC. René Descartes, in his book Discourse on Method, refined the idea for the Scientific Revolution.
Developing four rules to follow for proving an idea deductively,
Descartes laid the foundation for the deductive portion of the scientific method.
Descartes' background in geometry and mathematics influenced his ideas
on truth and reasoning, causing him to develop a system of general
reasoning now used for most mathematical reasoning. Similar to
postulates, Descartes believed that ideas could be self-evident and that
reasoning alone must prove that observations are reliable. These ideas
also lay the foundations for the ideas of rationalism.
Related concepts and theories
Deductivism
Deductivism
is a philosophical position that gives primacy to deductive reasoning
or arguments over their non-deductive counterparts. It is often understood as the evaluative claim that only deductive inferences are good or correct
inferences. This theory would have wide-reaching consequences for
various fields since it implies that the rules of deduction are "the
only acceptable standard of evidence". This way, the rationality or correctness of the different forms of inductive reasoning is denied. Some forms of deductivism express this in terms of degrees of
reasonableness or probability. Inductive inferences are usually seen as
providing a certain degree of support for their conclusion: they make it
more likely that their conclusion is true. Deductivism states that such
inferences are not rational: the premises either ensure their
conclusion, as in deductive reasoning, or they do not provide any
support at all.
One motivation for deductivism is the problem of induction introduced by David Hume.
It consists in the challenge of explaining how or whether inductive
inferences based on past experiences support conclusions about future
events.
For example, a chicken comes to expect, based on all its past
experiences, that the person entering its coop is going to feed it,
until one day the person "at last wrings its neck instead". According to Karl Popper's
falsificationism, deductive reasoning alone is sufficient. This is due
to its truth-preserving nature: a theory can be falsified if one of its
deductive consequences is false.
So while inductive reasoning does not offer positive evidence for a
theory, the theory still remains a viable competitor until falsified by empirical observation. In this sense, deduction alone is sufficient for discriminating between competing hypotheses about what is the case. Hypothetico-deductivism
is a closely related scientific method, according to which science
progresses by formulating hypotheses and then aiming to falsify them by
trying to make observations that run counter to their deductive
consequences.
Natural deduction
The term "natural deduction" refers to a class of proof systems based on self-evident rules of inference. The first systems of natural deduction were developed by Gerhard Gentzen and Stanislaw Jaskowski
in the 1930s. The core motivation was to give a simple presentation of
deductive reasoning that closely mirrors how reasoning actually takes
place. In this sense, natural deduction stands in contrast to other less intuitive proof systems, such as Hilbert-style deductive systems, which employ axiom schemes to express logical truths.
Natural deduction, on the other hand, avoids axiom schemes by
including many different rules of inference that can be used to
formulate proofs. These rules of inference express how logical constants behave. They are often divided into introduction rules and elimination rules. Introduction rules specify under which conditions a logical constant may be introduced into a new sentence of the proof. For example, the introduction rule for the logical constant "∧" (and) is "A, B ⊢ (A ∧ B)". It expresses that, given the premises "A" and "B" individually, one may draw the conclusion "A ∧ B" and thereby include it in one's proof. This way, the symbol "∧" is introduced into the proof. The removal of this symbol is governed by other rules of inference, such as the elimination rule "(A ∧ B) ⊢ A", which states that one may deduce the sentence "A" from the premise "(A ∧ B)". Similar introduction and elimination rules are given for other logical constants, such as the propositional operator "¬", the propositional connectives "∨" and "→", and the quantifiers "∃" and "∀".
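In the usual premises-over-conclusion notation, the conjunction rules just mentioned can be sketched as follows (the layout follows common textbook practice rather than any particular source cited here):

```latex
% Introduction and elimination rules for conjunction:
\[
  \frac{A \qquad B}{A \wedge B}\;(\wedge\text{I})
  \qquad
  \frac{A \wedge B}{A}\;(\wedge\text{E}_1)
  \qquad
  \frac{A \wedge B}{B}\;(\wedge\text{E}_2)
\]
```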
The focus on rules of inferences instead of axiom schemes is an important feature of natural deduction.
But there is no general agreement on how natural deduction is to be
defined. Some theorists hold that all proof systems with this feature
are forms of natural deduction. This would include various forms of sequent calculi or tableau calculi.
But other theorists use the term in a more narrow sense, for example,
to refer to the proof systems developed by Gentzen and Jaskowski.
Because of its simplicity, natural deduction is often used for teaching
logic to students.
Geometrical method
The geometrical method is a method of philosophy based on deductive reasoning. It starts from a small set of self-evident axioms and tries to build a comprehensive logical system based only on deductive inferences from these first axioms. It was initially formulated by Baruch Spinoza and came to prominence in various rationalist philosophical systems in the modern era. It gets its name from the forms of mathematical demonstration found in traditional geometry, which are usually based on axioms, definitions, and inferred theorems. An important motivation of the geometrical method is to repudiate philosophical skepticism
by grounding one's philosophical system on absolutely certain axioms.
Deductive reasoning is central to this endeavor because of its
necessarily truth-preserving nature. This way, the certainty initially
invested only in the axioms is transferred to all parts of the
philosophical system.
One recurrent criticism of philosophical systems built using the
geometrical method is that their initial axioms are not as self-evident
or certain as their defenders proclaim.
This problem lies beyond the deductive reasoning itself, which only
ensures that the conclusion is true if the premises are true, but not
that the premises themselves are true. For example, Spinoza's
philosophical system has been criticized this way based on objections
raised against the causal axiom, i.e. that "the knowledge of an effect depends on and involves knowledge of its cause".
A different criticism targets not the premises but the reasoning
itself, which may at times implicitly assume premises that are
themselves not self-evident.