Hardware virtualization is the virtualization of computers
as complete hardware platforms, certain logical abstractions of their
componentry, or only the functionality required to run various operating systems.
Virtualization hides the physical characteristics of a computing
platform from the users, presenting instead an abstract computing
platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.
Concept
The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system.
The creation and management of virtual machines has more recently also been called "platform virtualization" or "server virtualization".
Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest
software. The guest software is not limited to user applications; many
hosts allow the execution of complete operating systems. The guest
software executes as if it were running directly on the physical
hardware, with several notable caveats. Access to physical system
resources (such as network access, display, keyboard, and disk storage) is generally managed at a more restrictive level than the host's processor and system memory. Guests are often restricted from accessing specific peripheral devices,
or may be limited to a subset of the device's native capabilities,
depending on the hardware access policy implemented by the
virtualization host.
Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in the reduced performance of the virtual machine compared to running natively on the physical machine.
Reasons for virtualization
In the case of server
consolidation, many small physical servers can be replaced by one
larger physical server to decrease the need for more (costly) hardware
resources such as CPUs and hard drives. Although hardware is
consolidated in virtual environments, typically OSs are not. Instead,
each OS running on a physical server is converted to a distinct OS
running inside a virtual machine. Thereby, the large server can "host"
many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation.
In addition to reducing equipment and labor costs associated with
equipment maintenance, consolidating servers can also have the added
benefit of reducing energy consumption and the technology sector's environmental footprint. For example, a typical server runs at 425 W, and VMware estimates a hardware reduction ratio of up to 15:1.
A virtual machine (VM) can be more easily controlled and inspected
from a remote site than a physical machine, and the configuration of a
VM is more flexible. This is very useful in kernel development and for
teaching operating system courses, including running legacy operating
systems that do not support modern hardware.
A new virtual machine can be provisioned as required without the need for an up-front hardware purchase.
A virtual machine can easily be relocated from one physical machine
to another as needed. For example, a salesperson going to a customer can
copy a virtual machine with the demonstration software to their laptop,
without the need to transport the physical computer. Likewise, an error
inside a virtual machine does not harm the host system, so there is no
risk of the OS crashing on the laptop.
Because of this ease of relocation, virtual machines can be readily used in disaster recovery scenarios without concern about the impact of refurbished or faulty energy sources.
However, when multiple VMs are concurrently running on the same
physical host, each VM may exhibit varying and unstable performance
which highly depends on the workload imposed on the system by other VMs.
This issue can be addressed by appropriate techniques for temporal isolation among virtual machines.
There are several approaches to platform virtualization.
Examples of virtualization use cases:
Running one or more applications that are not supported by the
host OS: A virtual machine running the required guest OS could permit
the desired applications to run, without altering the host OS.
Evaluating an alternate operating system: The new OS could be run within a VM, without altering the host OS.
Server virtualization: Multiple virtual servers could be run on a
single physical server, in order to more fully utilize the hardware
resources of the physical server.
Duplicating specific environments: A virtual machine could,
depending on the virtualization software used, be duplicated and
installed on multiple hosts, or restored to a previously backed-up
system state.
Creating a protected environment: If a guest OS running on a VM
becomes damaged in a way that is not cost-effective to repair, such as
may occur when studying malware
or installing badly behaved software, the VM may simply be discarded
without harm to the host system, and a clean copy used upon rebooting
the guest.
In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS designed for the same instruction set to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.
In hardware-assisted virtualization, the hardware provides
architectural support that facilitates building a virtual machine
monitor and allows guest OSs to be run in isolation.
Hardware-assisted virtualization was first introduced on the IBM
System/370 in 1972, for use with VM/370, the first virtual machine
operating system.
In 2005 and 2006, Intel and AMD developed additional hardware support for virtualization on their platforms. Sun Microsystems (now Oracle Corporation) added similar features to its UltraSPARC T-Series processors in 2005.
In 2006, first-generation 32- and 64-bit x86 hardware support was
found to rarely offer performance advantages over software
virtualization.
In paravirtualization, the virtual machine does not necessarily
simulate hardware, but instead (or in addition) offers a special API
that can only be used by modifying
the "guest" OS. For this to be possible, the "guest" OS's source code
must be available. If the source code is available, it is sufficient to
replace sensitive instructions with calls to VMM APIs (e.g.: "cli" with
"vm_handle_cli()"), then re-compile the OS and use the new binaries.
This system call to the hypervisor is called a "hypercall" in TRANGO and Xen; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under VM (which was the origin of the term "hypervisor").
In operating-system-level virtualization, a physical server is
virtualized at the operating system level, enabling multiple isolated
and secure virtualized servers to run on a single physical server. The
"guest" operating system environments share the same running instance of
the operating system as the host system. Thus, the same operating system kernel
is also used to implement the "guest" environments, and applications
running in a given "guest" environment view it as a stand-alone system.
Hardware virtualization disaster recovery
A disaster recovery
(DR) plan is often considered good practice for a hardware
virtualization platform. DR of a virtualization environment can ensure
high rate of availability during a wide range of situations that disrupt
normal business operations. In situations where continued operations of
hardware virtualization platforms is important, a disaster recovery
plan can ensure hardware performance and maintenance requirements are
met. A hardware virtualization disaster recovery plan involves both
hardware and software protection by various methods, including those
described below.
Tape backup for software data long-term archival needs
This common method can be used to store data offsite, but data
recovery can be a difficult and lengthy process. Tape backup data is
only as good as the latest copy stored. Tape backup methods will require
a backup device and ongoing storage material.
Whole-file and application replication
The implementation of this method requires control software and storage capacity for replicating application and data files, typically on the same site. The data is replicated to a different disk partition or a separate disk device; the replication can be scheduled for most servers and is used mostly for database-type applications.
Hardware and software redundancy
This method ensures the highest level of disaster recovery
protection for a hardware virtualization solution, by providing
duplicate hardware and software replication in two distinct geographic
areas.
In computer science, abstraction can refer, among other things, to: the concept of functions or subroutines, which represent a specific way of implementing control flow; and the process of reorganizing common behavior from groups of non-abstract classes into abstract classes using inheritance and sub-classes, as seen in object-oriented programming languages.
Rationale
The essence of abstraction is preserving information that is relevant
in a given context, and forgetting information that is irrelevant in
that context.
Computing mostly operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures
to enable humans to create enormous systems by concentrating on a
few issues at a time. These architectures are made of specific choices
of abstractions. Greenspun's Tenth Rule is an aphorism on how such an architecture is both inevitable and complex.
Some abstractions try to limit the range of concepts a programmer
needs to be aware of, by completely hiding the abstractions that they
in turn are built on. The software engineer and writer Joel Spolsky has criticised these efforts by claiming that all abstractions are leaky – that they can never completely hide the details below; however, this does not negate the usefulness of abstraction.
Some abstractions are designed to inter-operate with other abstractions – for example, a programming language may contain a foreign function interface for making calls to the lower-level language.
Different programming languages provide different types of
abstraction, depending on the intended applications for the language.
For example:
In object-oriented programming languages such as C++, Object Pascal, or Java, the concept of abstraction has itself become a declarative statement – using the syntax function(parameters) = 0; (in C++) or the keywords abstract and interface (in Java). After such a declaration, it is the responsibility of the programmer to implement a class to instantiate the object of the declaration.
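A brief sketch in Java of the kind of declarations mentioned above; the type and method names (Shape, Drawable, Circle, area, draw) are illustrative only and not drawn from any particular library:

abstract class Shape {
    abstract double area();           // declared but not implemented here
}

interface Drawable {
    void draw();                      // only the contract is stated
}

// A concrete class takes on the responsibility of implementing both declarations,
// so that objects of it can actually be instantiated.
class Circle extends Shape implements Drawable {
    private final double radius;

    Circle(double radius) { this.radius = radius; }

    double area() { return Math.PI * radius * radius; }

    public void draw() {
        System.out.println("circle of radius " + radius + ", area " + area());
    }

    public static void main(String[] args) {
        new Circle(1.0).draw();       // Shape and Drawable themselves cannot be instantiated
    }
}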
Modern members of the Lisp programming language family such as Clojure, Scheme and Common Lisp support macro systems to allow syntactic abstraction. Other programming languages such as Scala also have macros, or very similar metaprogramming features (for example, Haskell has Template Haskell, and OCaml has MetaOCaml). These can allow a programmer to eliminate boilerplate code, abstract away tedious function call sequences, implement new control flow structures, and implement Domain Specific Languages (DSLs),
which allow domain-specific concepts to be expressed in concise and
elegant ways. All of these, when used correctly, improve both the
programmer's efficiency and the clarity of the code by making the
intended purpose more explicit. A consequence of syntactic abstraction
is also that any Lisp dialect and in fact almost any programming
language can, in principle, be implemented in any modern Lisp with
significantly reduced (but still non-trivial in most cases) effort when
compared to "more traditional" programming languages such as Python, C or Java.
Specification languages generally rely on abstractions of one kind or
another, since specifications are typically defined earlier in a
project (and at a more abstract level) than an eventual implementation.
The UML specification language, for example, allows the definition of abstract classes, which in a waterfall project, remain abstract during the architecture and specification phase of the project.
Programming languages offer control abstraction as one of the main
purposes of their use. Machines understand operations at a very low level, such as moving bits from one memory location to another and producing the sum of two sequences of bits. Programming languages allow this to be done at a higher level. For
example, consider this statement written in a Pascal-like fashion:
a := (1 + 2) * 5
To a human, this seems a fairly simple and obvious calculation ("one plus two is three, times five is fifteen").
However, the low-level steps necessary to carry out this evaluation,
and return the value "15", and then assign that value to the variable
"a", are actually quite subtle and complex. The values need to be
converted to binary representation (often a much more complicated task
than one would think) and the calculations decomposed (by the compiler
or interpreter) into assembly instructions (again, which are much less
intuitive to the programmer: operations such as shifting a binary
register left, or adding the binary complement of the contents of one
register to another, are simply not how humans think about the abstract
arithmetical operations of addition or multiplication). Finally,
assigning the resulting value of "15" to the variable labeled "a", so
that "a" can be used later, involves additional 'behind-the-scenes'
steps of looking up a variable's label and the resultant location in
physical or virtual memory, storing the binary representation of "15" to
that memory location, etc.
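As a hedged illustration, the same calculation can be written in Java in one line; the commented pseudo-instructions merely suggest the kind of lower-level steps a compiler or interpreter might produce (a real Java compiler would in fact fold these constants at compile time):

public class Arithmetic {
    public static void main(String[] args) {
        int a = (1 + 2) * 5;
        // Without control abstraction, this single statement corresponds to steps like:
        //   load constant 1      (place 1 in a register or on the operand stack)
        //   load constant 2
        //   add                  (1 + 2 -> 3)
        //   load constant 5
        //   multiply             (3 * 5 -> 15)
        //   store into "a"       (look up where "a" lives in memory and write 15 there)
        System.out.println(a);   // prints 15
    }
}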
Without control abstraction, a programmer would need to specify all
the register/binary-level steps each time they simply wanted to add or
multiply a couple of numbers and assign the result to a variable. Such
duplication of effort has two serious negative consequences:
it forces the programmer to constantly repeat fairly common tasks every time a similar operation is needed
it forces the programmer to program for the particular hardware and instruction set
Structured programming involves the splitting of complex program
tasks into smaller pieces with clear flow-control and interfaces between components, reducing complexity and the potential for side effects.
In a simple program, this may aim to ensure that loops have
single or obvious exit points and (where possible) to have single exit
points from functions and procedures.
In a larger system, it may involve breaking down complex tasks
into many different modules. Consider a system which handles payroll on
ships and at shore offices:
The uppermost level may feature a menu of typical end-user operations.
Within that could be standalone executables or libraries for tasks such as signing on and off employees or printing checks.
Within each of those standalone components there could be many
different source files, each containing the program code to handle a
part of the problem, with only selected interfaces available to other
parts of the program. A sign on program could have source files for each
data entry screen and the database interface (which may itself be a
standalone third party library or a statically linked set of library
routines).
Either the database or the payroll application also has to initiate
the process of exchanging data between ship and shore, and that
data transfer task will often contain many other components.
These layers produce the effect of isolating the implementation
details of one component and its assorted internal methods from the
others. Object-oriented programming embraces and extends this concept.
Data abstraction enforces a clear separation between the abstract properties of a data type and the concrete
details of its implementation. The abstract properties are those that
are visible to client code that makes use of the data type—the interface
to the data type—while the concrete implementation is kept entirely
private, and indeed can change, for example to incorporate efficiency
improvements over time. The idea is that such changes are not supposed
to have any impact on client code, since they involve no difference in
the abstract behaviour.
For example, one could define an abstract data type called lookup table which uniquely associates keys with values,
and in which values may be retrieved by specifying their corresponding
keys. Such a lookup table may be implemented in various ways: as a hash table, a binary search tree, or even a simple linear list of (key:value) pairs. As far as client code is concerned, the abstract properties of the type are the same in each case.
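A minimal Java sketch of this separation; the names LookupTable and HashLookupTable are illustrative and not taken from any standard library. Client code depends only on the abstract operations, while the concrete representation, here a hash table, though it could equally be a search tree or a list of pairs, stays hidden and can be swapped without changing the clients:

import java.util.HashMap;
import java.util.Map;

interface LookupTable<K, V> {
    void put(K key, V value);   // associate a key with a value
    V get(K key);               // retrieve the value for a key, or null if absent
}

// One possible implementation among many.
class HashLookupTable<K, V> implements LookupTable<K, V> {
    private final Map<K, V> entries = new HashMap<>();
    public void put(K key, V value) { entries.put(key, value); }
    public V get(K key) { return entries.get(key); }

    public static void main(String[] args) {
        LookupTable<String, Integer> ages = new HashLookupTable<>();
        ages.put("Alice", 30);
        System.out.println(ages.get("Alice"));   // 30, regardless of the chosen representation
    }
}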
Of course, this all relies on getting the details of the
interface right in the first place, since any changes there can have
major impacts on client code. As one way to look at this: the interface
forms a contract on agreed behaviour between the data type and
client code; anything not spelled out in the contract is subject to
change without notice.
Manual data abstraction
While
much of data abstraction occurs through computer science and
automation, there are times when this process is done manually and
without programming intervention. One way this can be understood is
through data abstraction within the process of conducting a systematic review of the literature. In this methodology, data is abstracted by one or several abstractors when conducting a meta-analysis, with errors reduced through dual data abstraction followed by independent checking, known as adjudication.
In object-oriented programming theory, abstraction
involves the facility to define objects that represent abstract
"actors" that can perform work, report on and change their state, and
"communicate" with other objects in the system. The term encapsulation refers to the hiding of state details, but extending the concept of data type from earlier programming languages to associate behavior most strongly with the data, and standardizing the way that different data types interact, is the beginning of abstraction. When abstraction proceeds into the operations defined, enabling objects of different types to be substituted, it is called polymorphism.
When it proceeds in the opposite direction, inside the types or
classes, structuring them to simplify a complex set of relationships, it
is called delegation or inheritance.
Various object-oriented programming languages offer similar facilities for abstraction, all to support a general strategy of polymorphism in object-oriented programming, which includes the substitution of one type for another in the same or similar role. Although not as generally supported, a configuration or image or package may predetermine a great many of these bindings at compile-time, link-time, or load time. This would leave only a minimum of such bindings to change at run-time.
Common Lisp Object System or Self, for example, feature less of a class-instance distinction and more use of delegation for polymorphism. Individual objects and functions are abstracted more flexibly to better fit with a shared functional heritage from Lisp.
C++ exemplifies another extreme: it relies heavily on templates and overloading and other static bindings at compile-time, which in turn has certain flexibility problems.
Although these examples offer alternate strategies for achieving
the same abstraction, they do not fundamentally alter the need to
support abstract nouns in code – all programming relies on an ability to
abstract verbs as functions, nouns as data structures, and either as
processes.
Consider for example a sample Java
fragment to represent some common farm "animals" to a level of
abstraction suitable to model simple aspects of their hunger and
feeding. It defines an Animal class to represent both the state of the animal and its functions:
public class Animal extends LivingThing {
    private Location loc;
    private double energyReserves;

    public boolean isHungry() {
        return energyReserves < 2.5;
    }

    public void eat(Food food) {
        // Consume food
        energyReserves += food.getCalories();
    }

    public void moveTo(Location location) {
        // Move to new location
        this.loc = location;
    }
}
With the above definition, one could create objects of type Animal and call their methods like this:
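A plausible sketch of such client code, assuming the Animal class above together with suitable Food and Location objects (grass and theBarn are hypothetical instances, not defined in the fragment):

Animal theCow = new Animal();
if (theCow.isHungry()) {
    theCow.eat(grass);       // grass: an assumed Food instance
}
theCow.moveTo(theBarn);      // theBarn: an assumed Location instance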
In the above example, the class Animal is an abstraction used in place of an actual animal, LivingThing is a further abstraction (in this case a generalisation) of Animal.
If one requires a more differentiated hierarchy of animals – to
differentiate, say, those who provide milk from those who provide
nothing except meat at the end of their lives – that is an intermediary
level of abstraction, probably DairyAnimal (cows, goats) who would eat
foods suitable to giving good milk, and MeatAnimal (pigs, steers) who
would eat foods to give the best meat-quality.
Such an abstraction could remove the need for the application
coder to specify the type of food, so they could concentrate instead on
the feeding schedule. The two classes could be related using inheritance or stand alone, and the programmer could define varying degrees of polymorphism
between the two types. These facilities tend to vary drastically
between languages, but in general each can achieve anything that is
possible with any of the others. A great many operation overloads, data
type by data type, can have the same effect at compile-time as any
degree of inheritance or other means to achieve polymorphism. The class
notation is simply a coder's convenience.
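A hedged sketch of such intermediary abstractions, building on the Animal class above; the method names and the food instances are illustrative only:

// Cows, goats: the class itself chooses food suited to giving good milk,
// so the application coder only decides when to feed, not what to feed.
class DairyAnimal extends Animal {
    public void feed() {
        eat(dairyFeed);      // dairyFeed: an assumed Food instance suited to milk production
    }
}

// Pigs, steers: the class chooses food suited to the best meat quality.
class MeatAnimal extends Animal {
    public void feed() {
        eat(meatFeed);       // meatFeed: an assumed Food instance suited to meat quality
    }
}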
Decisions regarding what to abstract and what to keep under the
control of the coder become the major concern of object-oriented design
and domain analysis—actually determining the relevant relationships in the real world is the concern of object-oriented analysis or legacy analysis.
In general, to determine appropriate abstraction, one must make
many small decisions about scope (domain analysis), determine what other
systems one must cooperate with (legacy analysis), then perform a
detailed object-oriented analysis which is expressed within project time
and budget constraints as an object-oriented design. In our simple
example, the domain is the barnyard, the live pigs and cows and their
eating habits are the legacy constraints, the detailed analysis is that
coders must have the flexibility to feed the animals what is available
and thus there is no reason to code the type of food into the class
itself, and the design is a single simple Animal class of which pigs and
cows are instances with the same functions. A decision to differentiate
DairyAnimal would change the detailed analysis but the domain and
legacy analysis would be unchanged—thus it is entirely under the control
of the programmer, and it is called an abstraction in object-oriented
programming as distinct from abstraction in domain or legacy analysis.
Considerations
When discussing formal semantics of programming languages, formal methods or abstract interpretation, abstraction
refers to the act of considering a less detailed, but safe, definition
of the observed program behaviors. For instance, one may observe only
the final result of program executions instead of considering all the
intermediate steps of executions. Abstraction is defined relative to a concrete (more precise) model of execution.
Abstraction may be exact or faithful with respect
to a property if one can answer a question about the property equally
well on the concrete or abstract model. For instance, if one wishes to
know what the result of the evaluation of a mathematical expression
involving only integers and the operations +, −, and × is worth modulo n, then one needs only perform all operations modulo n (a familiar form of this abstraction is casting out nines).
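A small Java check of this idea with arbitrary numbers: evaluating the expression exactly and then reducing modulo 9 gives the same answer as reducing every operand and intermediate result modulo 9, which is the essence of casting out nines:

public class ModuloAbstraction {
    public static void main(String[] args) {
        int n = 9;
        int exact = (123 + 456) * 789;                                // 456831
        int abstracted = (((123 % n + 456 % n) % n) * (789 % n)) % n;
        System.out.println(exact % n);     // 0
        System.out.println(abstracted);    // 0, the abstraction answers the modulo-n question exactly
    }
}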
Abstractions, however, though not necessarily exact, should be sound. That is, it should be possible to get sound answers from them—even though the abstraction may simply yield a result of undecidability.
For instance, students in a class may be abstracted by their minimal
and maximal ages; if one asks whether a certain person belongs to that
class, one may simply compare that person's age with the minimal and
maximal ages; if their age lies outside the range, one may safely answer that the person does not belong to the class; if it lies within the range, one may only answer "I don't know".
The level of abstraction included in a programming language can influence its overall usability. The Cognitive dimensions framework includes the concept of abstraction gradient
in a formalism. This framework allows the designer of a programming
language to study the trade-offs between abstraction and other
characteristics of the design, and how changes in abstraction influence
the language usability.
Abstractions can prove useful when dealing with computer
programs, because non-trivial properties of computer programs are
essentially undecidable (see Rice's theorem).
As a consequence, automatic methods for deriving information on the
behavior of computer programs either have to drop termination (on some
occasions, they may fail, crash, or never yield a result), soundness
(they may provide false information), or precision (they may answer "I
don't know" to some questions).
Computer science commonly presents levels (or, less commonly, layers)
of abstraction, wherein each level represents a different model of the
same information and processes, but with varying amounts of detail. Each
level uses a system of expression involving a unique set of objects and
compositions that apply only to a particular domain.
Each relatively abstract, "higher" level builds on a relatively
concrete, "lower" level, which tends to provide an increasingly
"granular" representation. For example, gates build on electronic
circuits, binary on gates, machine language on binary, programming
language on machine language, applications and operating systems on
programming languages. Each level is embodied, but not determined, by
the level beneath it, making it a language of description that is
somewhat self-contained.
Since many users of database systems lack in-depth familiarity with
computer data-structures, database developers often hide complexity
through the following levels:
Physical level: The lowest level of abstraction describes how a system actually stores data. The physical level describes complex low-level data structures in detail.
Logical level: The next higher level of abstraction describes what
data the database stores, and what relationships exist among those
data. The logical level thus describes an entire database in terms of a
small number of relatively simple structures. Although implementation of
the simple structures at the logical level may involve complex physical
level structures, the user of the logical level does not need to be
aware of this complexity. This is referred to as physical data independence. Database administrators, who must decide what information to keep in a database, use the logical level of abstraction.
View level: The highest level of abstraction describes
only part of the entire database. Even though the logical level uses
simpler structures, complexity remains because of the variety of
information stored in a large database. Many users of a database system
do not need all this information; instead, they need to access only a
part of the database. The view level of abstraction exists to simplify
their interaction with the system. The system may provide many views for the same database.
Layered architecture partitions the concerns of the application
into stacked groups (layers).
It is a technique used in designing computer software, hardware, and
communications in which system or network components are isolated in
layers so that changes can be made in one layer without affecting the
others.
A platform can be seen both as a constraint on the software development process,
in that different platforms provide different functionality and
restrictions; and as an assistant to the development process, in that
they provide low-level functionality ready-made. For example, an OS may
be a platform that abstracts the underlying differences in hardware and
provides a generic command for saving files or accessing the network.
Components
Platforms may also include:
Hardware alone, in the case of small embedded systems. Embedded systems can access hardware directly, without an OS; this is referred to as running on "bare metal".
A browser
in the case of web-based software. The browser itself runs on a
hardware+OS platform, but this is not relevant to software running
within the browser.
An application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro. This can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform.
Cloud computing and Platform as a Service.
Extending the idea of a software framework, these allow application
developers to build software out of components that are hosted not by
the developer, but by the provider, with internet communication linking
them together. The social networking sites Twitter and Facebook are also considered development platforms.
A virtualized
version of a complete system, including virtualized hardware, OS,
software, and storage. These allow, for instance, a typical Windows
program to run on what is physically a Mac.
Some architectures have multiple layers, with each layer acting as a
platform for the one above it. In general, a component only has to be
adapted to the layer immediately beneath it. For instance, a Java
program has to be written to use the Java virtual machine (JVM) and
associated libraries as a platform but does not have to be adapted to
run on the Windows, Linux or Macintosh OS platforms. However, the JVM,
the layer beneath the application, does have to be built separately for
each OS.
Desktop publishing (DTP) is the creation of documents using page layout software on a personal ("desktop") computer.
It was first used almost exclusively for print publications, but now it
also assists in the creation of various forms of online content. Desktop publishing software can generate layouts and produce typographic-quality text and images comparable to traditional typography and printing. Desktop publishing is also the main reference for digital typography. This technology allows individuals, businesses, and other organizations to self-publish a wide variety of content, from menus to magazines to books, without the expense of commercial printing.
Desktop publishing often requires the use of a personal computer and WYSIWYG page layout software to create documents for either large-scale publishing or small-scale local multifunction peripheral output and distribution - although non-WYSIWYG systems such as TeX and LaTeX are also used, especially in scientific publishing. Desktop publishing methods provide more control over design, layout, and typography than word processing.
However, word processing software has evolved to include most, if not
all, capabilities previously available only with professional printing
or desktop publishing.
Desktop publishing was first developed at Xerox PARC in the 1970s.
A contradictory claim states that desktop publishing began in 1983 with
a program developed by James Davise at a community newspaper in
Philadelphia. The program Type Processor One ran on a PC using a graphics card for a WYSIWYG display and was offered commercially by Best Info in 1984. Desktop typesetting with only limited page makeup facilities arrived in 1978–1979 with the introduction of TeX, and was extended in 1985 with the introduction of LaTeX.
The desktop publishing market took off in 1985 with the introduction in January of the Apple LaserWriter printer. This momentum was kept up with the addition of PageMaker software from Aldus,
which rapidly became the standard software application for desktop
publishing. With its advanced layout features, PageMaker immediately
relegated word processors like Microsoft Word to the composition and editing of purely textual documents. The term "desktop publishing" is attributed to Aldus founder Paul Brainerd,
who sought a marketing catchphrase to describe the small size and
relative affordability of this suite of products, in contrast to the
expensive commercial phototypesetting equipment of the day.
Before the advent of desktop publishing, the only option
available to most people for producing typed documents (as opposed to
handwritten documents) was a typewriter,
which offered only a handful of typefaces (usually fixed-width) and one
or two font sizes. Indeed, one popular desktop publishing book was
titled The Mac is Not a Typewriter, and it had to actually explain how a Mac could do so much more than a typewriter. The ability to create WYSIWYG page layouts on screen and then print pages containing text and graphical elements at crisp 300 dpi
resolution was revolutionary for both the typesetting industry and the
personal computer industry at the time; newspapers and other print
publications made the move to DTP-based programs from older layout
systems such as Atex and other programs in the early 1980s.
Desktop publishing was still in its embryonic stage in the early
1980s. Users of the PageMaker-LaserWriter-Macintosh 512K system endured
frequent software crashes, cramped display on the Mac's tiny 512 x 342 1-bit monochrome screen, the inability to control letter spacing, kerning, and other typographic features,
and the discrepancies between screen display and printed output.
However, it was a revolutionary combination at the time, and was
received with considerable acclaim.
Behind-the-scenes, technologies developed by Adobe Systems
set the foundation for professional desktop publishing applications.
The LaserWriter and LaserWriter Plus printers included high quality,
scalable Adobe PostScript fonts built into their ROM
memory. The LaserWriter's PostScript capability allowed publication
designers to proof files on a local printer, then print the same file at
DTP service bureaus using optical resolution 600+ ppi PostScript printers such as those from Linotronic.
Later, the Macintosh II
was released, which was considerably more suitable for desktop
publishing due to its greater expandability, support for large color multi-monitor displays, and its SCSI
storage interface (which allowed fast high-capacity hard drives to be
attached to the system). Macintosh-based systems continued to dominate
the market into 1986, when the GEM-based Ventura Publisher was introduced for MS-DOS
computers. PageMaker's pasteboard metaphor closely simulated the
process of creating layouts manually, but Ventura Publisher automated
the layout process through its use of tags and style sheets
and automatically generated indices and other body matter. This made it
particularly suitable for the creation of manuals and other long-format
documents.
Desktop publishing moved into the home market in 1986 with Professional Page for the Amiga, Publishing Partner (now PageStream) for the Atari ST, GST's Timeworks Publisher on the PC and Atari ST, and Calamus for the Atari TT030. Software was published even for 8-bit computers like the Apple II and Commodore 64: Home Publisher, The Newsroom, and geoPublish.
During its early years, desktop publishing acquired a bad reputation as
a result of untrained users who created poorly organized,
unprofessional-looking "ransom note effect" layouts; similar criticism was leveled again against early World Wide Web
publishers a decade later. However, some desktop publishers who
mastered the programs were able to achieve highly professional results.
Desktop publishing skills were considered of primary importance in
career advancement in the 1980s, but increased accessibility to more
user-friendly DTP software has made DTP a secondary skill to art direction, graphic design, multimedia development, marketing communications, and administrative careers.
DTP skill levels range from what may be learned in a couple of hours
(e.g., learning how to put clip art in a word processor), to what's
typically required in a college education. The discipline of DTP skills
range from technical skills such as prepress production and programming, to creative skills such as communication design and graphic image development.
As of 2014, Apple computers remain dominant in publishing, even as the most popular software has changed from QuarkXPress – an estimated 95% market share in the 1990s – to Adobe InDesign. As an Ars Technica
writer puts: "I've heard about Windows-based publishing environments,
but I've never actually seen one in my 20+ years in design and
publishing".
Terminology
There are two types of pages in desktop publishing: digital pages and virtual paper pages to be printed on physical paper pages. All computerized documents are technically digital and are limited in size only by computer memory or data storage space. Virtual paper pages will ultimately be printed, and will therefore require paper parameters coinciding with standard physical paper sizes such as A4, letter paper and legal paper. Alternatively, the virtual
paper page may require a custom size for later trimming. Some desktop
publishing programs allow custom sizes designated for large format
printing used in posters, billboards and trade show displays. A virtual page for printing has a predesignated size of virtual printing material and can be viewed on a monitor in WYSIWYG format. Each page for printing has trim sizes (edge of paper) and a printable area if bleed printing is not possible as is the case with most desktop printers. A web page
is an example of a digital page that is not constrained by virtual
paper parameters. Most digital pages may be dynamically re-sized,
causing either the content to scale in size with the page or the content to re-flow.
Master pages are templates used to automatically copy or link
elements and graphic design styles to some or all the pages of a
multipage document. Linked elements can be modified without having to
change each instance of an element on pages that use the same element.
Master pages can also be used to apply graphic design styles to
automatic page numbering. Cascading Style Sheets can provide the same global formatting functions for web pages that master pages provide for virtual paper pages. Page layout
is the process by which elements are arranged on the page in an orderly, aesthetically pleasing, and precise manner. The main types of components to be laid out on a
page include text, linked images
(that can only be modified as an external source), and embedded images
(that may be modified with the layout application software). Some
embedded images are rendered in the application software, while others can be placed from an external source image file. Text may be keyed into the layout, placed, or – with database publishing applications – linked to an external source of text which allows multiple editors to develop a document at the same time.
Graphic design styles such as color, transparency and filters may also be applied to layout elements. Typography styles may be applied to text automatically with style sheets.
Some layout programs include style sheets for images in addition to
text. Graphic styles for images may include border shapes, colors,
transparency, filters, and a parameter designating the way text flows
around the object (also known as "wraparound" or "runaround").
Comparisons
With word processing
While
desktop publishing software still provides extensive features necessary
for print publishing, modern word processors now have publishing
capabilities beyond those of many older DTP applications, blurring the
line between word processing and desktop publishing.
In the early 1980s, graphical user interfaces were still in their embryonic stage and DTP software was in a class of its
own when compared to the leading word processing applications of the
time. Programs such as WordPerfect and WordStar
were still mainly text-based and offered little in the way of page
layout, other than perhaps margins and line spacing. On the other hand,
word processing software was necessary for features like indexing and
spell checking – features that are common in many applications today. As
computers and operating systems became more powerful, versatile, and
user-friendly in the 2010s, vendors have sought to provide users with a
single application that can meet almost all their publication needs.
With other digital layout software
In modern usage, DTP usually does not include digital tools such as TeX or troff, though both can easily be used on a modern desktop system, are standard with many Unix-like operating systems, and are readily available for other systems. The key difference between digital typesetting software and DTP software is that DTP software is generally interactive and "What you see [onscreen] is what you get" (WYSIWYG) in design, while other digital typesetting software, such as TeX, LaTeX and other variants, tends to operate in "batch mode", requiring the user to enter the processing program's markup language (e.g. LaTeX markup)
without immediate visualization of the finished product. This kind of
workflow is less user-friendly than WYSIWYG, but more suitable for
conference proceedings and scholarly articles as well as corporate
newsletters or other applications where consistent, automated layout is
important.
In the 2010s, interactive front-end components of TeX, such as TeXworks and LyX, have produced "what you see is what you mean" (WYSIWYM) hybrids of DTP and batch processing.[13] These hybrids are focused more on the semantics of the document than traditional DTP. Furthermore, with the advent of TeX editors, the line between desktop publishing and markup-based typesetting is becoming increasingly narrow; one program that has moved away from the TeX world toward WYSIWYG markup-based typesetting is GNU TeXmacs.
On a different note, there is a slight overlap between desktop publishing and what is known as hypermedia publishing (e.g. web design, kiosk, CD-ROM). Many graphical HTML editors such as Microsoft FrontPage and Adobe Dreamweaver
use a layout engine similar to that of a DTP program. However, many web
designers still prefer to write HTML without the assistance of a
WYSIWYG editor, for greater control and ability to fine-tune the
appearance and functionality. Another reason that some Web designers
write in HTML is that WYSIWYG editors often result in excessive lines of
code, leading to code bloat that can make the pages hard to troubleshoot.
Deductive reasoning is the mental process of drawing deductive inferences. An inference is deductively valid if its conclusion follows logically from its premises, i.e. it is impossible for the premises to be true and the conclusion to be false.
For example, the inference from the premises "all men are mortal" and "Socrates is a man" to the conclusion "Socrates is mortal" is deductively valid. An argument is sound if it is valid
and all its premises are true. Some theorists define deduction in terms
of the intentions of the author: they have to intend for the premises
to offer deductive support to the conclusion. With the help of this
modification, it is possible to distinguish valid from invalid deductive
reasoning: it is invalid if the author's belief about the deductive
support is false, but even invalid deductive reasoning is a form of
deductive reasoning.
Psychology is interested in deductive reasoning as a psychological process, i.e. how people actually draw inferences. Logic, on the other hand, focuses on the deductive relation of logical consequence between the premises and the conclusion or how people should draw inferences. There are different ways of conceptualizing this relation. According to the semantic approach, an argument is deductively valid if and only if there is no possible interpretation of this argument where its premises are true and its conclusion is false. The syntactic
approach, on the other hand, holds that an argument is deductively
valid if and only if its conclusion can be deduced from its premises
using a valid rule of inference. A rule of inference is a schema of drawing a conclusion from a set of premises based only on their logical form.
There are various rules of inference, like the modus ponens and the modus tollens. Invalid deductive arguments, which do not follow a rule of inference, are called formal fallacies.
Rules of inference are definitory rules and contrast with strategic
rules, which specify what inferences one needs to draw in order to
arrive at an intended conclusion. Deductive reasoning contrasts with
non-deductive or ampliative reasoning. For ampliative arguments, like inductive or abductive arguments,
the premises offer weaker support to their conclusion: they make it
more likely but they do not guarantee its truth. They make up for this
drawback by being able to provide genuinely new information not already
found in the premises, unlike deductive arguments.
Cognitive psychology
investigates the mental processes responsible for deductive reasoning.
One of its topics concerns the factors determining whether people draw
valid or invalid deductive inferences. One factor is the form of the
argument: for example, people are more successful for arguments of the
form modus ponens than for modus tollens. Another is the content of the
arguments: people are more likely to believe that an argument is valid
if the claim made in its conclusion is plausible. A general finding is
that people tend to perform better for realistic and concrete cases than
for abstract cases. Psychological theories of deductive reasoning aim
to explain these findings by providing an account of the underlying
psychological processes. Mental logic theories hold that deductive reasoning is a language-like process that happens through the manipulation of representations using rules of inference. Mental model theories,
on the other hand, claim that deductive reasoning involves models of
possible states of the world without the medium of language or rules of
inference. According to dual-process theories of reasoning, there are two qualitatively different cognitive systems responsible for reasoning.
The problem of deductive reasoning is relevant to various fields and issues. Epistemology tries to understand how justification is transferred from the belief in the premises to the belief in the conclusion in the process of deductive reasoning. Probability logic
studies how the probability of the premises of an inference affects the
probability of its conclusion. The controversial thesis of deductivism denies that there are other correct forms of inference besides deduction. Natural deduction is a type of proof system based on simple and self-evident rules of inference. In philosophy,
the geometrical method is a way of philosophizing that starts from a
small set of self-evident axioms and tries to build a comprehensive
logical system using deductive reasoning.
Definition
Deductive reasoning is the psychological process of drawing deductive inferences. An inference is a set of premises together with a conclusion. This psychological process starts from the premises and reasons to a conclusion based on and supported by these premises. If the reasoning was done correctly, it results in a valid deduction: the truth of the premises ensures the truth of the conclusion. For example, in the syllogistic
argument "all frogs are reptiles; no cats are reptiles; therefore, no
cats are frogs" the conclusion is true because its two premises are
true. But even arguments with wrong premises can be deductively valid if
they obey this principle, as in "all frogs are mammals; no cats are
mammals; therefore, no cats are frogs". If the premises of a valid argument are true, then it is called a sound argument.
The relation between the premises and the conclusion of a deductive argument is usually referred to as "logical consequence". According to Alfred Tarski, logical consequence has three essential features: it is necessary, formal, and knowable a priori. It is necessary in the sense that the premises of valid deductive
arguments necessitate the conclusion: it is impossible for the premises
to be true and the conclusion to be false, independent of any other
circumstances.
Logical consequence is formal in the sense that it depends only on the
form or the syntax of the premises and the conclusion. This means that
the validity of a particular argument does not depend on the specific
contents of this argument. If it is valid, then any argument with the
same logical form is also valid, no matter how different it is on the
level of its contents. Logical consequence is knowable a priori in the sense that no empirical
knowledge of the world is necessary to determine whether a deduction is
valid. So it is not necessary to engage in any form of empirical
investigation. Some logicians define deduction in terms of possible worlds:
A deductive inference is valid if and only if there is no possible
world in which its conclusion is false while its premises are true. This
means that there are no counterexamples: the conclusion is true in all such cases, not just in most cases.
It has been argued against this and similar definitions that they
fail to distinguish between valid and invalid deductive reasoning, i.e.
they leave it open whether there are invalid deductive inferences and
how to define them.
Some authors define deductive reasoning in psychological terms in order
to avoid this problem. According to Mark Vorobej, whether an argument
is deductive depends on the psychological state of the person making the
argument: "An argument is deductive if, and only if, the author of the
argument believes that the truth of the premises necessitates
(guarantees) the truth of the conclusion". A similar formulation holds that the speaker claims or intends that the premises offer deductive support for their conclusion. This is sometimes categorized as a speaker-determined definition of deduction since it depends also on the speaker whether the argument in question is deductive or not. For speakerless definitions, on the other hand, only the argument itself matters independent of the speaker.
One advantage of this type of formulation is that it makes it possible
to distinguish between good or valid and bad or invalid deductive
arguments: the argument is good if the author's belief concerning the relation between the premises and the conclusion is true, otherwise it is bad.
One consequence of this approach is that deductive arguments cannot be
identified by the rule of inference they use. For example, an argument of
the form modus ponens
may be non-deductive if the author's beliefs are sufficiently confused.
That brings with it an important drawback of this definition: it is
difficult to apply to concrete cases since the intentions of the author
are usually not explicitly stated.
Deductive reasoning is studied in logic, psychology, and the cognitive sciences.
Some theorists emphasize in their definition the difference between
these fields. On this view, psychology studies deductive reasoning as an
empirical mental process, i.e. what happens when humans engage in
reasoning. But the descriptive question of how actual reasoning happens is different from the normative question of how it should happen or what constitutes correct deductive reasoning, which is studied by logic.
This is sometimes expressed by stating that, strictly speaking, logic
does not study deductive reasoning but the deductive relation between
premises and a conclusion known as logical consequence. But this distinction is not always precisely observed in the academic literature. One important aspect of this difference is that logic is not interested in whether the conclusion of an argument is sensible.
So from the premise "the printer has ink" one may draw the unhelpful
conclusion "the printer has ink and the printer has ink and the printer
has ink", which has little relevance from a psychological point of view.
Instead, actual reasoners usually try to remove redundant or irrelevant
information and make the relevant information more explicit.
The psychological study of deductive reasoning is also concerned with
how good people are at drawing deductive inferences and with the factors
determining their performance. Deductive inferences are found both in natural language and in formal logical systems, such as propositional logic.
Conceptions of deduction
Deductive
arguments differ from non-deductive arguments in that the truth of
their premises ensures the truth of their conclusion. There are two important conceptions of what this exactly means. They are referred to as the syntactic and the semantic approach.
According to the syntactic approach, whether an argument is deductively
valid depends only on its form, syntax, or structure. Two arguments
have the same form if they use the same logical vocabulary in the same
arrangement, even if their contents differ.
For example, the arguments "if it rains then the street will be wet; it
rains; therefore, the street will be wet" and "if the meat is not
cooled then it will spoil; the meat is not cooled; therefore, it will
spoil" have the same logical form: they follow the modus ponens. Their form can be expressed more abstractly as "if A then B; A; therefore B" in order to make the common syntax explicit. There are various other valid logical forms or rules of inference, like modus tollens or the disjunction elimination.
The syntactic approach then holds that an argument is deductively valid
if and only if its conclusion can be deduced from its premises using a
valid rule of inference. One difficulty for the syntactic approach is that it is usually necessary to express the argument in a formal language in order to assess whether it is valid. But since the problem of deduction is also relevant for natural languages,
this often brings with it the difficulty of translating the natural
language argument into a formal language, a process that comes with
various problems of its own.
Another difficulty is due to the fact that the syntactic approach
depends on the distinction between formal and non-formal features. While
there is a wide agreement concerning the paradigmatic cases, there are
also various controversial cases where it is not clear how this
distinction is to be drawn.
The semantic approach suggests an alternative definition of
deductive validity. It is based on the idea that the sentences
constituting the premises and conclusions have to be interpreted in order to determine whether the argument is valid. This means that one ascribes semantic values to the expressions used in the sentences, such as the reference to an object for singular terms or to a truth-value for atomic sentences. The semantic approach is also referred to as the model-theoretic approach since the branch of mathematics known as model theory is often used to interpret these sentences.
Usually, many different interpretations are possible, such as whether a
singular term refers to one object or to another. According to the
semantic approach, an argument is deductively valid if and only if there
is no possible interpretation where its premises are true and its
conclusion is false.
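As a hedged illustration of the semantic approach for propositional arguments, the following Java sketch (all names invented) enumerates every interpretation, i.e. every assignment of truth values to the two variables, and reports an argument as valid only if no interpretation makes the premises true and the conclusion false; it confirms that modus ponens is valid while the corresponding "affirming the consequent" form is not:

public class ValidityCheck {
    // Truth function of the conditional "if p then q".
    static boolean implies(boolean p, boolean q) { return !p || q; }

    public static void main(String[] args) {
        boolean modusPonensValid = true;          // premises: p -> q, p; conclusion: q
        boolean affirmingConsequentValid = true;  // premises: p -> q, q; conclusion: p
        for (boolean p : new boolean[] { false, true }) {
            for (boolean q : new boolean[] { false, true }) {
                if (implies(p, q) && p && !q) modusPonensValid = false;
                if (implies(p, q) && q && !p) affirmingConsequentValid = false;
            }
        }
        System.out.println("modus ponens valid: " + modusPonensValid);                     // true
        System.out.println("affirming the consequent valid: " + affirmingConsequentValid); // false
    }
}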
Some objections to the semantic approach are based on the claim that
the semantics of a language cannot be expressed in the same language,
i.e. that a richer metalanguage
is necessary. This would imply that the semantic approach cannot
provide a universal account of deduction for language as an
all-encompassing medium.
Rules of inference
Deductive reasoning usually happens by applying rules of inference. A rule of inference is a way or schema of drawing a conclusion from a set of premises. This usually happens based only on the logical form
of the premises. A rule of inference is valid if, when applied to true
premises, the conclusion cannot be false. A particular argument is valid
if it follows a valid rule of inference. Deductive arguments that do
not follow a valid rule of inference are called formal fallacies: the truth of their premises does not ensure the truth of their conclusion.
In some cases, whether a rule of inference is valid depends on the logical system one is using. The dominant logical system is classical logic and the rules of inference listed here are all valid in classical logic. But so-called deviant logics provide a different account of which inferences are valid. For example, the rule of inference known as double negation elimination, i.e. that if a proposition is not not true then it is also true, is accepted in classical logic but rejected in intuitionistic logic.
Modus ponens (also known as "affirming the antecedent" or "the law of detachment") is the primary deductive rule of inference. It applies to arguments that have as first premise a conditional statement () and as second premise the antecedent () of the conditional statement. It obtains the consequent () of the conditional statement as its conclusion. The argument form is listed below:
(First premise is a conditional statement)
(Second premise is the antecedent)
(Conclusion deduced is the consequent)
In this form of deductive reasoning, the consequent () obtains as the conclusion from the premises of a conditional statement () and its antecedent (). However, the antecedent () cannot be similarly obtained as the conclusion from the premises of the conditional statement () and the consequent (). Such an argument commits the logical fallacy of affirming the consequent.
The following is an example of an argument using modus ponens:
If it is raining, then there are clouds in the sky.
It is raining.
Therefore, there are clouds in the sky.
Modus tollens (also known as "the law of contrapositive") is a
deductive rule of inference. It validates an argument that has as
premises a conditional statement (formula) and the negation of the
consequent () and as conclusion the negation of the antecedent (). In contrast to modus ponens,
reasoning with modus tollens goes in the opposite direction to that of
the conditional. The general expression for modus tollens is the
following:
. (First premise is a conditional statement)
. (Second premise is the negation of the consequent)
. (Conclusion deduced is the negation of the antecedent)
The following is an example of an argument using modus tollens:
If it is raining, then there are clouds in the sky.
There are no clouds in the sky.
Therefore, it is not raining.
A hypothetical syllogism
is an inference that takes two conditional statements and forms a
conclusion by combining the hypothesis of one statement with the
conclusion of another. Here is the general form:
P → Q.
Q → R.
Therefore, P → R.
Because the two premises share a subformula (Q in the form above) that does not occur in the conclusion, this resembles syllogisms in term logic, although it differs in that this shared element is a proposition, whereas in Aristotelian logic the common element is a term and not a proposition.
The following is an example of an argument using a hypothetical syllogism:
If there had been a thunderstorm, it would have rained.
If it had rained, things would have gotten wet.
Thus, if there had been a thunderstorm, things would have gotten wet.
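For readers who want to see these three rules checked formally, the following minimal sketch states them as theorems of propositional logic in Lean 4 (the theorem names are chosen here purely for illustration):

    -- Modus ponens: from P → Q and P, conclude Q.
    theorem modus_ponens (P Q : Prop) (h1 : P → Q) (h2 : P) : Q :=
      h1 h2

    -- Modus tollens: from P → Q and ¬Q, conclude ¬P.
    theorem modus_tollens (P Q : Prop) (h1 : P → Q) (h2 : ¬Q) : ¬P :=
      fun hP => h2 (h1 hP)

    -- Hypothetical syllogism: from P → Q and Q → R, conclude P → R.
    theorem hypothetical_syllogism (P Q R : Prop) (h1 : P → Q) (h2 : Q → R) : P → R :=
      fun hP => h2 (h1 hP)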
Fallacies
Various formal fallacies have been described. They are invalid forms of deductive reasoning.
An additional aspect of them is that they appear to be valid on some occasions or at first impression, which may seduce people
into accepting and committing them. One type of formal fallacy is affirming the consequent, as in "if John is a bachelor, then he is male; John is male; therefore, John is a bachelor". This is similar to the valid rule of inference named modus ponens, but the second premise and the conclusion are switched around, which is why it is invalid. A similar formal fallacy is denying the antecedent, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore, Othello is not male". This is similar to the valid rule of inference called modus tollens, the difference being that the second premise and the conclusion are switched around. Other formal fallacies include affirming a disjunct, denying a conjunct, and the fallacy of the undistributed middle.
All of them have in common that the truth of their premises does not
ensure the truth of their conclusion. But it may still happen by
coincidence that both the premises and the conclusion of formal
fallacies are true.
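A single counterexample makes this concrete. The short Python check below (illustrative only) evaluates the premises and conclusion of affirming the consequent under an assignment in which John is male but not a bachelor:

    # "If John is a bachelor, then he is male; John is male; therefore, John is a bachelor."
    # Counterexample assignment: John is male but married, hence not a bachelor.
    bachelor, male = False, True

    premise_1 = (not bachelor) or male   # the conditional premise
    premise_2 = male                     # the second premise
    conclusion = bachelor                # the conclusion

    print(premise_1 and premise_2)  # True: both premises hold
    print(conclusion)               # False: the conclusion fails

Both premises come out true while the conclusion comes out false, which is exactly what a valid rule of inference rules out.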
Definitory and strategic rules
Rules
of inference are definitory rules: they determine whether an argument
is deductively valid or not. But reasoners are usually not just
interested in making any kind of valid argument. Instead, they often
have a specific point or conclusion that they wish to prove or refute.
So given a set of premises, they are faced with the problem of choosing
the relevant rules of inference for their deduction to arrive at their
intended conclusion.
This issue belongs to the field of strategic rules: the question of
which inferences need to be drawn to support one's conclusion. The
distinction between definitory and strategic rules is not exclusive to
logic: it is also found in various games. In chess, for example, the definitory rules state that bishops may only move diagonally while the strategic rules recommend that one should control the center and protect one's king
if one intends to win. In this sense, definitory rules determine
whether one plays chess or something else whereas strategic rules
determine whether one is a good or a bad chess player. The same applies to deductive reasoning: to be an effective reasoner involves mastering both definitory and strategic rules.
Validity and soundness
Deductive arguments are evaluated in terms of their validity and soundness.
An argument is “valid” if it is impossible for its premises
to be true while its conclusion is false. In other words, the
conclusion must be true if the premises are true. An argument can be
“valid” even if one or more of its premises are false.
An argument is “sound” if it is valid and the premises are true.
It is possible to have a deductive argument that is logically valid but is not sound. Fallacious arguments often take that form.
The following is an example of an argument that is “valid”, but not “sound”:
Everyone who eats carrots is a quarterback.
John eats carrots.
Therefore, John is a quarterback.
The example's first premise is false – there are people who eat
carrots who are not quarterbacks – but the conclusion would necessarily
be true, if the premises were true. In other words, it is impossible for
the premises to be true and the conclusion false. Therefore, the
argument is “valid”, but not “sound”. False generalizations – such as
"Everyone who eats carrots is a quarterback" – are often used to make
unsound arguments. The fact that there are some people who eat carrots
but are not quarterbacks proves the flaw of the argument.
Deductive reasoning can be contrasted with inductive reasoning with regard to validity and soundness. In cases of inductive reasoning, even when the premises are true and the argument is otherwise strong, it is possible for the conclusion to be false (shown to be false by a counterexample or by other means).
Difference from ampliative reasoning
Deductive reasoning is usually contrasted with non-deductive or ampliative reasoning.
The hallmark of valid deductive inferences is that it is impossible for
their premises to be true and their conclusion to be false. In this
way, the premises provide the strongest possible support to their
conclusion.
The premises of ampliative inferences also support their conclusion.
But this support is weaker: they are not necessarily truth-preserving.
So even for correct ampliative arguments, it is possible that their
premises are true and their conclusion is false. Two important forms of ampliative reasoning are inductive and abductive reasoning. Sometimes the term "inductive reasoning" is used in a very wide sense to cover all forms of ampliative reasoning. However, in a more strict usage, inductive reasoning is just one form of ampliative reasoning. In the narrow sense, inductive inferences are forms of statistical generalization. They are usually based on many individual observations
that all show a certain pattern. These observations are then used to
form a conclusion either about a yet unobserved entity or about a
general law.
For abductive inferences, the premises support the conclusion because
the conclusion is the best explanation of why the premises are true.
The support ampliative arguments provide for their conclusion
comes in degrees: some ampliative arguments are stronger than others. This is often explained in terms of probability: the premises make it more likely that the conclusion is true.
Strong ampliative arguments make their conclusion very likely, but not
absolutely certain. An example of ampliative reasoning is the inference
from the premise "every raven in a random sample of 3200 ravens is
black" to the conclusion "all ravens are black": the extensive random
sample makes the conclusion very likely, but it does not exclude that
there are rare exceptions.
In this sense, ampliative reasoning is defeasible: it may become
necessary to retract an earlier conclusion upon receiving new related
information. Ampliative reasoning is very common in everyday discourse and the sciences.
An important drawback of deductive reasoning is that it does not lead to genuinely new information.
This means that the conclusion only repeats information already found
in the premises. Ampliative reasoning, on the other hand, goes beyond
the premises by arriving at genuinely new information.
One difficulty for this characterization is that it makes deductive
reasoning appear useless: if deduction is uninformative, it is not clear
why people would engage in it and study it.
It has been suggested that this problem can be solved by distinguishing
between surface and depth information. On this view, deductive
reasoning is uninformative on the depth level, in contrast to ampliative
reasoning. But it may still be valuable on the surface level by
presenting the information in the premises in a new and sometimes
surprising way.
A popular misconception of the relation between deduction and
induction identifies their difference on the level of particular and
general claims. On this view, deductive inferences start from general premises and draw
particular conclusions, while inductive inferences start from
particular premises and draw general conclusions. This idea is often
motivated by seeing deduction and induction as two inverse processes
that complement each other: deduction is top-down while induction is bottom-up. But this is a misconception that does not reflect how valid deduction is defined in the field of logic:
a deduction is valid if it is impossible for its premises to be true
while its conclusion is false, independent of whether the premises or
the conclusion are particular or general. Because of this, some deductive inferences have a general conclusion and some also have particular premises.
In various fields
Cognitive psychology
Cognitive psychology studies the psychological processes responsible for deductive reasoning.
It is concerned, among other things, with how good people are at
drawing valid deductive inferences. This includes the study of the
factors affecting their performance, their tendency to commit fallacies, and the underlying biases involved.
A notable finding in this field is that the type of deductive inference
has a significant impact on whether the correct conclusion is drawn. In a meta-analysis of 65 studies, for example, 97% of the subjects evaluated modus ponens inferences correctly, while the success rate for modus tollens was only 72%. On the other hand, even some fallacies like affirming the consequent or denying the antecedent were regarded as valid arguments by the majority of the subjects.
An important factor for these mistakes is whether the conclusion seems
initially plausible: the more believable the conclusion is, the higher
the chance that a subject will mistake a fallacy for a valid argument.
An important bias is the matching bias, which is often illustrated using the Wason selection task. In an often-cited experiment by Peter Wason,
4 cards are presented to the participant. In one case, the visible
sides show the symbols D, K, 3, and 7 on the different cards. The
participant is told that every card has a letter on one side and a
number on the other side, and that "[e]very card which has a D on one
side has a 3 on the other side". Their task is to identify which cards
need to be turned around in order to confirm or refute this conditional
claim. The correct answer, given by only about 10% of participants, is the cards D and
7. Many select card 3 instead, even though the conditional claim does
not involve any requirements on what symbols can be found on the
opposite side of card 3.
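The logic of the correct answer can be sketched computationally: a card has to be turned over exactly when some possible hidden face would give it a D without a 3. The Python snippet below is only an illustration, and it restricts the possible hidden faces to the letters and numbers that appear in the experiment:

    # Which cards could reveal a violation of
    # "every card which has a D on one side has a 3 on the other side"?
    letters = {"D", "K"}   # possible hidden letters (restricted for illustration)
    numbers = {"3", "7"}   # possible hidden numbers (restricted for illustration)

    def must_turn(visible):
        """True if some possible hidden face yields a D without a 3."""
        hidden_options = numbers if visible in letters else letters
        return any(
            "D" in (visible, hidden) and "3" not in (visible, hidden)
            for hidden in hidden_options
        )

    print([card for card in ["D", "K", "3", "7"] if must_turn(card)])  # ['D', '7']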
But this result can be drastically changed if different symbols are
used: the visible sides show "drinking a beer", "drinking a coke", "16
years of age", and "22 years of age" and the participants are asked to
evaluate the claim "[i]f a person is drinking beer, then the person must
be over 19 years of age". In this case, 74% of the participants
identified correctly that the cards "drinking a beer" and "16 years of
age" have to be turned around.
These findings suggest that the deductive reasoning ability is heavily
influenced by the content of the involved claims and not just by the
abstract logical form of the task: the more realistic and concrete the
cases are, the better the subjects tend to perform.
Another bias is called the "negative conclusion bias", which happens when one of the premises has the form of a negative material conditional,
as in "If the card does not have an A on the left, then it has a 3 on
the right. The card does not have a 3 on the right. Therefore, the card
has an A on the left". The increased tendency to misjudge the validity
of this type of argument is not present for positive material
conditionals, as in "If the card has an A on the left, then it has a 3
on the right. The card does not have a 3 on the right. Therefore, the
card does not have an A on the left".
Psychological theories of deductive reasoning
Various
psychological theories of deductive reasoning have been proposed. These
theories aim to explain how deductive reasoning works in relation to
the underlying psychological processes responsible. They are often used
to explain the empirical findings, such as why human reasoners are more
susceptible to some types of fallacies than to others.
An important distinction is between mental logic theories, sometimes also referred to as rule theories, and mental model theories. Mental logic theories see deductive reasoning as a language-like process that happens through the manipulation of representations. This is done by applying syntactic rules of inference in a way very similar to how systems of natural deduction transform their premises to arrive at a conclusion. On this view, some deductions are simpler than others since they involve fewer inferential steps. This idea can be used, for example, to explain why humans have more difficulties with some deductions, like the modus tollens, than with others, like the modus ponens:
because the more error-prone forms do not have a native rule of
inference but need to be calculated by combining several inferential
steps with other rules of inference. In such cases, the additional
cognitive labor makes the inferences more open to error.
Mental model theories, on the other hand, hold that deductive reasoning involves models or mental representations of possible states of the world without the medium of language or rules of inference.
In order to assess whether a deductive inference is valid, the
reasoner mentally constructs models that are compatible with the
premises of the inference. The conclusion is then tested by looking at
these models and trying to find a counterexample in which the conclusion
is false. The inference is valid if no such counterexample can be
found. To reduce cognitive labor, only models in which the premises are true are represented. Because of this, the evaluation of some
forms of inference only requires the construction of very few models
while for others, many different models are necessary. In the latter
case, the additional cognitive labor required makes deductive reasoning
more error-prone, thereby explaining the increased rate of error
observed.
This theory can also explain why some errors depend on the content
rather than the form of the argument. For example, when the conclusion
of an argument is very plausible, the subjects may lack the motivation
to search for counterexamples among the constructed models.
Both mental logic theories and mental model theories assume that
there is one general-purpose reasoning mechanism that applies to all
forms of deductive reasoning.
But there are also alternative accounts that posit various different
special-purpose reasoning mechanisms for different contents and
contexts. In this sense, it has been claimed that humans possess a
special mechanism for permissions and obligations, specifically for
detecting cheating in social exchanges. This can be used to explain why
humans are often more successful in drawing valid inferences if the
contents involve human behavior in relation to social norms. Another example is the so-called dual-process theory.
This theory posits that there are two distinct cognitive systems
responsible for reasoning. Their interrelation can be used to explain
commonly observed biases in deductive reasoning. System 1 is the older
system in terms of evolution. It is based on associative learning and
happens fast and automatically without demanding many cognitive
resources.
System 2, on the other hand, is of more recent evolutionary origin. It
is slow and cognitively demanding, but also more flexible and under
deliberate control.
The dual-process theory posits that system 1 is the default system
guiding most of our everyday reasoning in a pragmatic way. But for
particularly difficult problems on the logical level, system 2 is
employed. System 2 is mostly responsible for deductive reasoning.
Intelligence
The ability to reason deductively is an important aspect of intelligence, and many tests of intelligence include problems that call for deductive inferences. Because of this relation to intelligence, deduction is highly relevant to psychology and the cognitive sciences. But the subject of deductive reasoning is also pertinent to the computer sciences, for example, in the creation of artificial intelligence.
Epistemology
Deductive reasoning plays an important role in epistemology. Epistemology is concerned with the question of justification, i.e. to point out which beliefs are justified and why. Deductive inferences are able to transfer the justification of the premises onto the conclusion.
So while logic is interested in the truth-preserving nature of
deduction, epistemology is interested in the justification-preserving
nature of deduction. There are different theories trying to explain why
deductive reasoning is justification-preserving. According to reliabilism,
this is the case because deductions are truth-preserving: they are
reliable processes that ensure a true conclusion given the premises are
true.
Some theorists hold that the thinker has to have explicit awareness of
the truth-preserving nature of the inference for the justification to be
transferred from the premises to the conclusion. One consequence of
such a view is that, for young children, this deductive transference
does not take place since they lack this specific awareness.
Probability logic
Probability logic
is interested in how the probability of the premises of an argument
affects the probability of its conclusion. It differs from classical
logic, which assumes that propositions are either true or false but does
not take into consideration the probability or certainty that a
proposition is true or false.
The probability of the conclusion of a deductive argument cannot be
calculated by figuring out the cumulative probability of the argument’s
premises. Dr. Timothy McGrew, a specialist in the applications of probability theory, and Dr. Ernest W. Adams, a Professor Emeritus at UC Berkeley,
pointed out that the theorem on the accumulation of uncertainty
designates only a lower limit on the probability of the conclusion. So
the probability of the conjunction of the argument’s premises sets only a
minimum probability of the conclusion. The probability of the
argument’s conclusion cannot be any lower than the probability of the
conjunction of the argument’s premises. For example, if the probability
of the conjunction of a deductive argument’s four premises is ~0.43, then it is assured
that the probability of the argument’s conclusion is no less than ~0.43.
It could be much higher, but it cannot drop under that lower limit.
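In outline, the reasoning behind this lower bound runs as follows (a sketch, using Pr for probability):

    P_1, \dots, P_n \models C
    \;\Longrightarrow\;
    \{\text{all } P_i \text{ true}\} \subseteq \{C \text{ true}\}
    \;\Longrightarrow\;
    \Pr(C) \ge \Pr(P_1 \land \dots \land P_n).

So a premise conjunction with probability of about 0.43 guarantees a conclusion probability of at least about 0.43, while leaving room for it to be much higher.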
There can be examples in which each single premise is more likely
true than not and yet it would be unreasonable to accept the
conjunction of the premises. Professor Henry Kyburg, who was known for his work in probability and logic,
clarified that the issue here is one of closure – specifically, closure
under conjunction. There are examples where it is reasonable to accept P
and reasonable to accept Q without its being reasonable to accept the
conjunction (P&Q). Lotteries serve as very intuitive examples of
this, because in a basic non-discriminatory finite lottery with only a
single winner to be drawn, it is sound to think that ticket 1 is a
loser, sound to think that ticket 2 is a loser,...all the way up to the
final number. However, clearly, it is irrational to accept the
conjunction of these statements; the conjunction would deny the very
terms of the lottery because (taken with the background knowledge) it
would entail that there is no winner.
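The arithmetic of a small lottery makes this vivid (the figure of 1,000 tickets is chosen here only for illustration):

    \Pr(\text{ticket } i \text{ loses}) = \tfrac{999}{1000} = 0.999 \text{ for each } i,
    \qquad
    \Pr\Big(\bigwedge_{i=1}^{1000} \text{ticket } i \text{ loses}\Big) = 0.

Each conjunct is highly probable on its own, yet the conjunction is certainly false, since the lottery guarantees exactly one winner.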
Dr. McGrew further adds that the sole method to ensure that a
conclusion deductively drawn from a group of premises is more probable
than not is to use premises the conjunction of which is more probable
than not. This point is slightly tricky, because it can lead to a
possible misunderstanding. What is being sought is a general principle that specifies conditions under which, for any logical
consequence C of the group of premises, C is more probable than not.
Particular consequences will differ in their probability. However, the
goal is to state a condition under which this attribute is ensured,
regardless of which consequence one draws, and fulfilment of that
condition is required to complete the task.
This principle can be demonstrated in a moderately clear way. Suppose, for instance, the following group of premises:
{P, Q, R}
Suppose that the conjunction ((P & Q) & R) fails to be
more probable than not. Then there is at least one logical consequence
of the group that fails to be more probable than not – namely, that very
conjunction. So it is an essential factor for the argument to “preserve
plausibility” (Dr. McGrew coins this phrase to mean “guarantee, from
information about the plausibility of the premises alone, that any
conclusion drawn from those premises by deductive inference is itself
more plausible than not”) that the conjunction of the premises be more
probable than not.
History
Aristotle, a Greek philosopher, started documenting deductive reasoning in the 4th century BC. René Descartes, in his book Discourse on Method, refined the idea for the Scientific Revolution.
Developing four rules to follow for proving an idea deductively,
Descartes laid the foundation for the deductive portion of the scientific method.
Descartes' background in geometry and mathematics influenced his ideas
on truth and reasoning, leading him to develop a system of general
reasoning now used for most mathematical reasoning. Similar to
postulates, Descartes believed that ideas could be self-evident and that
reasoning alone must prove that observations are reliable. These ideas
also laid the foundations for rationalism.
Related concepts and theories
Deductivism
Deductivism
is a philosophical position that gives primacy to deductive reasoning
or arguments over their non-deductive counterparts. It is often understood as the evaluative claim that only deductive inferences are good or correct
inferences. This theory would have wide-reaching consequences for
various fields since it implies that the rules of deduction are "the
only acceptable standard of evidence". This way, the rationality or correctness of the different forms of inductive reasoning is denied.Some forms of deductivism express this in terms of degrees of
reasonableness or probability. Inductive inferences are usually seen as
providing a certain degree of support for their conclusion: they make it
more likely that their conclusion is true. Deductivism states that such
inferences are not rational: the premises either ensure their
conclusion, as in deductive reasoning, or they do not provide any
support at all.
One motivation for deductivism is the problem of induction introduced by David Hume.
It consists in the challenge of explaining how or whether inductive
inferences based on past experiences support conclusions about future
events.
For example, a chicken comes to expect, based on all its past
experiences, that the person entering its coop is going to feed it,
until one day the person "at last wrings its neck instead". According to Karl Popper's
falsificationism, deductive reasoning alone is sufficient. This is due
to its truth-preserving nature: a theory can be falsified if one of its
deductive consequences is false.
So while inductive reasoning does not offer positive evidence for a
theory, the theory still remains a viable competitor until falsified by empirical observation. In this sense, deduction alone is sufficient for discriminating between competing hypotheses about what is the case. Hypothetico-deductivism
is a closely related scientific method, according to which science
progresses by formulating hypotheses and then aims to falsify them by
trying to make observations that run counter to their deductive
consequences.
Natural deduction
The term "natural deduction" refers to a class of proof systems based on self-evident rules of inference. The first systems of natural deduction were developed by Gerhard Gentzen and Stanislaw Jaskowski
in the 1930s. The core motivation was to give a simple presentation of
deductive reasoning that closely mirrors how reasoning actually takes
place. In this sense, natural deduction stands in contrast to other less intuitive proof systems, such as Hilbert-style deductive systems, which employ axiom schemes to express logical truths.
Natural deduction, on the other hand, avoids axiom schemes by including many different rules of inference that can be used to formulate proofs. These rules of inference express how logical constants behave. They are often divided into introduction rules and elimination rules. Introduction rules specify under which conditions a logical constant may be introduced into a new sentence of the proof. For example, the introduction rule for the logical constant "∧" (and) is "A, B ⊢ A ∧ B". It expresses that, given the premises "A" and "B" individually, one may draw the conclusion "A ∧ B" and thereby include it in one's proof. This way, the symbol "∧" is introduced into the proof. The removal of this symbol is governed by other rules of inference, such as the elimination rule "A ∧ B ⊢ A", which states that one may deduce the sentence "A" from the premise "A ∧ B". Similar introduction and elimination rules are given for other logical constants, such as the propositional operator "¬", the propositional connectives "→" and "↔", and the quantifiers "∃" and "∀".
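These introduction and elimination rules correspond directly to the basic proof steps available in modern proof assistants. As a minimal sketch in Lean 4 (an illustration, not part of Gentzen's or Jaskowski's original systems), the conjunction rules look like this:

    -- ∧-introduction: from A and B, conclude A ∧ B.
    example (A B : Prop) (ha : A) (hb : B) : A ∧ B :=
      And.intro ha hb

    -- ∧-elimination: from A ∧ B, conclude A.
    example (A B : Prop) (hab : A ∧ B) : A :=
      And.left hab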
The focus on rules of inference instead of axiom schemes is an important feature of natural deduction.
But there is no general agreement on how natural deduction is to be
defined. Some theorists hold that all proof systems with this feature
are forms of natural deduction. This would include various forms of sequent calculi or tableau calculi.
But other theorists use the term in a more narrow sense, for example,
to refer to the proof systems developed by Gentzen and Jaskowski.
Because of its simplicity, natural deduction is often used for teaching
logic to students.
Geometrical method
The geometrical method is a method of philosophy based on deductive reasoning. It starts from a small set of self-evident axioms and tries to build a comprehensive logical system based only on deductive inferences from these first axioms. It was initially formulated by Baruch Spinoza and came to prominence in various rationalist philosophical systems in the modern era. It gets its name from the forms of mathematical demonstration found in traditional geometry, which are usually based on axioms, definitions, and inferred theorems. An important motivation of the geometrical method is to repudiate philosophical skepticism
by grounding one's philosophical system on absolutely certain axioms.
Deductive reasoning is central to this endeavor because of its
necessarily truth-preserving nature. This way, the certainty initially
invested only in the axioms is transferred to all parts of the
philosophical system.
One recurrent criticism of philosophical systems built using the
geometrical method is that their initial axioms are not as self-evident
or certain as their defenders proclaim.
This problem lies beyond deductive reasoning itself, which only
ensures that the conclusion is true if the premises are true, but not
that the premises themselves are true. For example, Spinoza's
philosophical system has been criticized this way based on objections
raised against the causal axiom, i.e. that "the knowledge of an effect depends on and involves knowledge of its cause".
A different criticism targets not the premises but the reasoning
itself, which may at times implicitly assume premises that are
themselves not self-evident.