
Thursday, March 9, 2023

History of logic

From Wikipedia, the free encyclopedia

The history of logic deals with the study of the development of the science of valid inference (logic). Formal logics developed in ancient times in India, China, and Greece. Greek methods, particularly Aristotelian logic (or term logic) as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of propositional logic.

Christian and Islamic philosophers such as Boethius (died 524), Ibn Sina (Avicenna, died 1037), Thomas Aquinas (died 1274) and William of Ockham (died 1347) further developed Aristotle's logic in the Middle Ages, reaching a high point in the mid-fourteenth century with Jean Buridan. The period between the fourteenth century and the beginning of the nineteenth century was largely one of decline and neglect, and at least one historian of logic regards this time as barren. Empirical methods ruled the day, as evidenced by Sir Francis Bacon's Novum Organum of 1620.

Logic revived in the mid-nineteenth century, at the beginning of a revolutionary period when the subject developed into a rigorous and formal discipline which took as its exemplar the exact method of proof used in mathematics, a hearkening back to the Greek tradition. The development of the modern "symbolic" or "mathematical" logic during this period by the likes of Boole, Frege, Russell, and Peano is the most significant in the two-thousand-year history of logic, and is arguably one of the most important and remarkable events in human intellectual history.

Progress in mathematical logic in the first few decades of the twentieth century, particularly arising from the work of Gödel and Tarski, had a significant impact on analytic philosophy and philosophical logic, particularly from the 1950s onwards, in subjects such as modal logic, temporal logic, deontic logic, and relevance logic.

Logic in the East

Logic in India

Hindu logic

Origin

The Nasadiya Sukta of the Rigveda (RV 10.129) contains ontological speculation in terms of various logical divisions that were later recast formally as the four circles of catuskoti: "A", "not A", "A and 'not A'", and "not A and not not A".

Who really knows?
Who will here proclaim it?
Whence was it produced? Whence is this creation?
The gods came afterwards, with the creation of this universe.
Who then knows whence it has arisen?

Logic began independently in ancient India and continued to develop to early modern times without any known influence from Greek logic.

Before Gautama

Though the origins in India of public debate (pariṣad), one form of rational inquiry, are not clear, we know that public debates were common in preclassical India, for they are frequently alluded to in various Upaniṣads and in the early Buddhist literature. Public debate was not the only form of public deliberation in preclassical India. Assemblies (pariṣad or sabhā) of various sorts, comprising relevant experts, were regularly convened to deliberate on a variety of matters, including administrative, legal and religious matters.

Dattatreya

A philosopher named Dattatreya is stated in the Bhagavata Purana to have taught Anviksiki to Alarka, Prahlada and others. It appears from the Markandeya Purana that the Anviksiki-vidya expounded by him consisted of a mere disquisition on the soul in accordance with the yoga philosophy. Dattatreya expounded the philosophical side of Anviksiki and not its logical aspect.

Medhatithi Gautama

While the teachers mentioned before dealt with some particular topics of Anviksiki, the credit of founding Anviksiki in its special sense of a science belongs to Medhatithi Gautama (c. 6th century BC), who founded the Anviksiki school of logic. The Mahabharata (12.173.45), around the 5th century BC, refers to the anviksiki and tarka schools of logic.

Panini

Pāṇini (c. 5th century BC) developed a form of logic (to which Boolean logic has some similarities) for his formulation of Sanskrit grammar. Logic is described by Chanakya (c. 350-283 BC) in his Arthashastra as an independent field of inquiry.

Nyaya-Vaisheshika

Two of the six Indian schools of thought deal with logic: Nyaya and Vaisheshika. The Nyāya Sūtras of Aksapada Gautama (c. 2nd century AD) constitute the core texts of the Nyaya school, one of the six orthodox schools of Hindu philosophy. This realist school developed a rigid five-member schema of inference involving an initial premise, a reason, an example, an application, and a conclusion. The idealist Buddhist philosophy became the chief opponent to the Naiyayikas.

Jain Logic

Umaswati (2nd century AD) was the author of the first Jain work in Sanskrit, the Tattvārthasūtra, which expounds Jain philosophy in a systematized form acceptable to all sects of Jainism.

The Jains made their own unique contribution to this mainstream development of logic by also occupying themselves with basic epistemological issues, namely those concerning the nature of knowledge, how knowledge is derived, and the way in which knowledge can be said to be reliable.

The Jains have doctrines of relativity used for logic and reasoning:

  • Anekāntavāda – the theory of relative pluralism or manifoldness;
  • Syādvāda – the theory of conditioned predication and;
  • Nayavāda – The theory of partial standpoints.

These Jain philosophical concepts made important contributions to ancient Indian philosophy, especially in the areas of skepticism and relativity.

Buddhist logic

Nagarjuna

Painting of Nāgārjuna from the Shingon Hassozō, a series of scrolls authored by the Shingon school of Buddhism. Japan, Kamakura Period (13th-14th century)

Nagarjuna (c. 150-250 AD), the founder of the Madhyamaka ("Middle Way") school, developed an analysis known as the catuṣkoṭi (Sanskrit), a "four-cornered" system of argumentation that involves the systematic examination and rejection of each of the four possibilities of a proposition, P:

  1. P; that is, being.
  2. not P; that is, not being.
  3. P and not P; that is, being and not being.
  4. not (P or not P); that is, neither being nor not being.

Under propositional logic, De Morgan's laws imply that the fourth case (not (P or not P)) is equivalent to the third case (P and not P), and is therefore superfluous; there are actually only three cases to consider.
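
Written out, the reduction claimed here is a single De Morgan step followed by double-negation elimination, valid under a classical reading of the connectives:

    \neg(P \lor \neg P) \;\equiv\; \neg P \land \neg\neg P \;\equiv\; \neg P \land P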

Dignaga

However, Dignāga (c. 480-540 AD) is sometimes said to have developed a formal syllogism, and it was through him and his successor, Dharmakirti, that Buddhist logic reached its height; it is contested whether their analysis actually constitutes a formal syllogistic system. In particular, their analysis centered on the definition of an inference-warranting relation, "vyapti", also known as invariable concomitance or pervasion. To this end, a doctrine known as "apoha" or differentiation was developed. This involved what might be called inclusion and exclusion of defining properties.

Dignāga's famous "wheel of reason" (Hetucakra) is a method of indicating when one thing (such as smoke) can be taken as an invariable sign of another thing (like fire), but the inference is often inductive and based on past observation. Matilal remarks that Dignāga's analysis is much like John Stuart Mill's Joint Method of Agreement and Difference, which is inductive.

Syllogism and influence

In addition, the traditional five-member Indian syllogism, though deductively valid, has repetitions that are unnecessary to its logical validity. As a result, some commentators see the traditional Indian syllogism as a rhetorical form that is entirely natural in many cultures of the world, and yet not as a logical form—not in the sense that all logically unnecessary elements have been omitted for the sake of analysis.

Logic in China

In China, a contemporary of Confucius, Mozi, "Master Mo", is credited with founding the Mohist school, whose canons dealt with issues relating to valid inference and the conditions of correct conclusions. In particular, one of the schools that grew out of Mohism, the Logicians, is credited by some scholars with its early investigation of formal logic. Due to the harsh rule of Legalism in the subsequent Qin Dynasty, this line of investigation disappeared in China until the introduction of Indian philosophy by Buddhists.

Logic in the West

Prehistory of logic

Valid reasoning has been employed in all periods of human history. However, logic studies the principles of valid reasoning, inference and demonstration. It is probable that the idea of demonstrating a conclusion first arose in connection with geometry, which originally meant the same as "land measurement". The ancient Egyptians discovered geometry, including the formula for the volume of a truncated pyramid. Ancient Babylon was also skilled in mathematics. Esagil-kin-apli's medical Diagnostic Handbook in the 11th century BC was based on a logical set of axioms and assumptions, while Babylonian astronomers in the 8th and 7th centuries BC employed an internal logic within their predictive planetary systems, an important contribution to the philosophy of science.
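
In modern notation, the Egyptian result for a truncated pyramid (frustum), as recorded in problem 14 of the Moscow Mathematical Papyrus, gives the volume for height h, square base side a and square top side b as:

    V = \frac{h}{3}\,(a^2 + ab + b^2)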

Ancient Greece before Aristotle

While the ancient Egyptians empirically discovered some truths of geometry, the great achievement of the ancient Greeks was to replace empirical methods by demonstrative proof. Among the Pre-Socratic philosophers, both Thales and Pythagoras seem to have been aware of geometric methods.

Fragments of early proofs are preserved in the works of Plato and Aristotle, and the idea of a deductive system was probably known in the Pythagorean school and the Platonic Academy. The proofs of Euclid of Alexandria are a paradigm of Greek geometry. The three basic principles of geometry are as follows:

  • Certain propositions must be accepted as true without demonstration; such a proposition is known as an axiom of geometry.
  • Every proposition that is not an axiom of geometry must be demonstrated as following from the axioms of geometry; such a demonstration is known as a proof or a "derivation" of the proposition.
  • The proof must be formal; that is, the derivation of the proposition must be independent of the particular subject matter in question.

Further evidence that early Greek thinkers were concerned with the principles of reasoning is found in the fragment called dissoi logoi, probably written at the beginning of the fourth century BC. This is part of a protracted debate about truth and falsity. In the case of the classical Greek city-states, interest in argumentation was also stimulated by the activities of the Rhetoricians or Orators and the Sophists, who used arguments to defend or attack a thesis, both in legal and political contexts.

Thales' theorem

Thales

It is said that Thales, most widely regarded as the first philosopher in the Greek tradition, measured the height of the pyramids by their shadows at the moment when his own shadow was equal to his height. Thales was said to have offered a sacrifice in celebration of discovering Thales' theorem, just as Pythagoras was said to have done for the Pythagorean theorem.

Thales is the first known individual to use deductive reasoning applied to geometry, by deriving four corollaries to his theorem, and the first known individual to whom a mathematical discovery has been attributed. Indian and Babylonian mathematicians knew his theorem for special cases before he proved it. It is believed that Thales learned that an angle inscribed in a semicircle is a right angle during his travels to Babylon.

Pythagoras

Proof of the Pythagorean Theorem in Euclid's Elements

Before 520 BC, on one of his visits to Egypt or Greece, Pythagoras might have met Thales, who was some 54 years his senior. The systematic study of proof seems to have begun with the school of Pythagoras (i.e. the Pythagoreans) in the late sixth century BC. Indeed, the Pythagoreans, believing all was number, are the first philosophers to emphasize form rather than matter.

Heraclitus and Parmenides

The writing of Heraclitus (c. 535 – c. 475 BC) was the first place where the word logos was given special attention in ancient Greek philosophy. Heraclitus held that everything changes and that all was fire and conflicting opposites, seemingly unified only by this Logos. He is known for his obscure sayings.

This logos holds always but humans always prove unable to understand it, both before hearing it and when they have first heard it. For though all things come to be in accordance with this logos, humans are like the inexperienced when they experience such words and deeds as I set out, distinguishing each in accordance with its nature and saying how it is. But other people fail to notice what they do when awake, just as they forget what they do while asleep.

— Diels-Kranz, 22B1
Parmenides has been called the discoverer of logic.

In contrast to Heraclitus, Parmenides held that all is one and nothing changes. He may have been a dissident Pythagorean, disagreeing that One (a number) produced the many. "X is not" must always be false or meaningless. What exists can in no way not exist. Our sense perception, with its noticing of generation and destruction, is in grievous error. Instead of sense perception, Parmenides advocated logos as the means to Truth. He has been called the discoverer of logic:

For this view, that That Which Is Not exists, can never predominate. You must debar your thought from this way of search, nor let ordinary experience in its variety force you along this way, (namely, that of allowing) the eye, sightless as it is, and the ear, full of sound, and the tongue, to rule; but (you must) judge by means of the Reason (Logos) the much-contested proof which is expounded by me.

— B 7.1–8.2

Zeno of Elea, a pupil of Parmenides, had the idea of a standard argument pattern found in the method of proof known as reductio ad absurdum. This is the technique of drawing an obviously false (that is, "absurd") conclusion from an assumption, thus demonstrating that the assumption is false. Therefore, Zeno and his teacher are seen as the first to apply the art of logic. Plato's dialogue Parmenides portrays Zeno as claiming to have written a book defending the monism of Parmenides by demonstrating the absurd consequence of assuming that there is plurality. Zeno famously used this method to develop his paradoxes in his arguments against motion. Such dialectic reasoning later became popular. The members of this school were called "dialecticians" (from a Greek word meaning "to discuss").

Plato

Let no one ignorant of geometry enter here.

— Inscribed over the entrance to Plato's Academy.

None of the surviving works of the great fourth-century philosopher Plato (428–347 BC) include any formal logic, but they include important contributions to the field of philosophical logic. Plato raises three questions:

  • What is it that can properly be called true or false?
  • What is the nature of the connection between the assumptions of a valid argument and its conclusion?
  • What is the nature of definition?

The first question arises in the dialogue Theaetetus, where Plato identifies thought or opinion with talk or discourse (logos). The second question is a result of Plato's theory of Forms. Forms are not things in the ordinary sense, nor strictly ideas in the mind, but they correspond to what philosophers later called universals, namely an abstract entity common to each set of things that have the same name. In both the Republic and the Sophist, Plato suggests that the necessary connection between the assumptions of a valid argument and its conclusion corresponds to a necessary connection between "forms". The third question is about definition. Many of Plato's dialogues concern the search for a definition of some important concept (justice, truth, the Good), and it is likely that Plato was impressed by the importance of definition in mathematics. What underlies every definition is a Platonic Form, the common nature present in different particular things. Thus, a definition reflects the ultimate object of understanding, and is the foundation of all valid inference. This had a great influence on Plato's student Aristotle, in particular Aristotle's notion of the essence of a thing.

Aristotle


The logic of Aristotle, and particularly his theory of the syllogism, has had an enormous influence in Western thought. Aristotle was the first logician to attempt a systematic analysis of logical syntax, of noun (or term), and of verb. He was the first formal logician, in that he demonstrated the principles of reasoning by employing variables to show the underlying logical form of an argument. He sought relations of dependence which characterize necessary inference, and distinguished the validity of these relations from the truth of the premises. He was the first to deal with the principles of contradiction and excluded middle in a systematic way.

Aristotle's logic was still influential in the Renaissance

The Organon

His logical works, called the Organon, are the earliest formal study of logic that has come down to modern times. Though it is difficult to determine the dates, the probable order of writing of Aristotle's logical works is:

  • The Categories, a study of the ten kinds of primitive term.
  • The Topics, a discussion of dialectics, with an appendix called On Sophistical Refutations.
  • On Interpretation, an analysis of simple categorical propositions into simple terms, negation, and signs of quantity.
  • The Prior Analytics, a formal analysis of what makes a syllogism (a valid argument, according to Aristotle).
  • The Posterior Analytics, a study of scientific demonstration, containing Aristotle's mature views on logic.

This diagram shows the contradictory relationships between categorical propositions in the square of opposition of Aristotelian logic.

These works are of outstanding importance in the history of logic. In the Categories, he attempts to discern all the possible things to which a term can refer; this idea underpins his philosophical work Metaphysics, which itself had a profound influence on Western thought.

He also developed a theory of non-formal logic (i.e., the theory of fallacies), which is presented in Topics and Sophistical Refutations.

On Interpretation contains a comprehensive treatment of the notions of opposition and conversion; chapter 7 is at the origin of the square of opposition (or logical square); chapter 9 contains the beginning of modal logic.

The Prior Analytics contains his exposition of the "syllogism", where three important principles are applied for the first time in history: the use of variables, a purely formal treatment, and the use of an axiomatic system.

Stoics

The other great school of Greek logic is that of the Stoics. Stoic logic traces its roots back to the late 5th century BC philosopher Euclid of Megara, a pupil of Socrates and slightly older contemporary of Plato, probably following in the tradition of Parmenides and Zeno. His pupils and successors were called "Megarians", or "Eristics", and later the "Dialecticians". The two most important dialecticians of the Megarian school were Diodorus Cronus and Philo, who were active in the late 4th century BC.

Chrysippus of Soli

The Stoics adopted the Megarian logic and systematized it. The most important member of the school was Chrysippus (c. 278–c. 206 BC), who was its third head, and who formalized much of Stoic doctrine. He is supposed to have written over 700 works, including at least 300 on logic, almost none of which survive. Unlike with Aristotle, we have no complete works by the Megarians or the early Stoics, and have to rely mostly on accounts (sometimes hostile) by later sources, including prominently Diogenes Laërtius, Sextus Empiricus, Galen, Aulus Gellius, Alexander of Aphrodisias, and Cicero.

Three significant contributions of the Stoic school were (i) their account of modality, (ii) their theory of the material conditional, and (iii) their account of meaning and truth.

  • Modality. According to Aristotle, the Megarians of his day claimed there was no distinction between potentiality and actuality. Diodorus Cronus defined the possible as that which either is or will be, the impossible as what will not be true, and the contingent as that which either is already, or will be false. Diodorus is also famous for what is known as his Master argument, which states that any two of the following three propositions jointly contradict the third:
  • Everything that is past is true and necessary.
  • The impossible does not follow from the possible.
  • What neither is nor will be is possible.
Diodorus used the plausibility of the first two to prove that nothing is possible if it neither is nor will be true. Chrysippus, by contrast, denied the second premise and said that the impossible could follow from the possible.
  • Conditional statements. The first logicians to debate conditional statements were Diodorus and his pupil Philo of Megara. Sextus Empiricus refers three times to a debate between Diodorus and Philo. Philo regarded a conditional as true unless it has both a true antecedent and a false consequent. Precisely, let T0 and T1 be true statements, and let F0 and F1 be false statements; then, according to Philo, each of the following conditionals is a true statement, because it is not the case that the consequent is false while the antecedent is true (it is not the case that a false statement is asserted to follow from a true statement):
  • If T0, then T1
  • If F0, then T0
  • If F0, then F1
The following conditional does not meet this requirement, and is therefore a false statement according to Philo:
  • If T0, then F0
Indeed, Sextus says "According to [Philo], there are three ways in which a conditional may be true, and one in which it may be false." Philo's criterion of truth is what would now be called a truth-functional definition of "if ... then"; it is the definition used in modern logic. A mechanical check of this criterion is sketched after this list.
In contrast, Diodorus allowed the validity of conditionals only when the antecedent clause could never lead to an untrue conclusion. A century later, the Stoic philosopher Chrysippus attacked the assumptions of both Philo and Diodorus.
  • Meaning and truth. The most important and striking difference between Megarian-Stoic logic and Aristotelian logic is that Megarian-Stoic logic concerns propositions, not terms, and is thus closer to modern propositional logic. The Stoics distinguished between utterance (phone), which may be noise, speech (lexis), which is articulate but which may be meaningless, and discourse (logos), which is meaningful utterance. The most original part of their theory is the idea that what is expressed by a sentence, called a lekton, is something real; this corresponds to what is now called a proposition. Sextus says that according to the Stoics, three things are linked together: that which signifies, that which is signified, and the object; for example, that which signifies is the word Dion, and that which is signified is what Greeks understand but barbarians do not, and the object is Dion himself.
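
Philo's criterion coincides with the modern material conditional, and the four cases Sextus reports can be checked mechanically. A minimal Python sketch (the function name is illustrative, not historical):

    # Philo's criterion: a conditional is false only when the antecedent
    # is true and the consequent is false; in every other case it is true.
    def philo_conditional(antecedent, consequent):
        return not (antecedent and not consequent)

    # Enumerate the four cases: three come out true, one false.
    for antecedent in (True, False):
        for consequent in (True, False):
            print(antecedent, consequent, "->", philo_conditional(antecedent, consequent))
    # Only the case (True, False) -- "If T0, then F0" -- prints False,
    # matching Sextus' report of three ways true, one way false.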

Medieval logic

Logic in the Middle East

A text by Avicenna, founder of Avicennian logic

The works of Al-Kindi, Al-Farabi, Avicenna, Al-Ghazali, Averroes and other Muslim logicians were based on Aristotelian logic and were important in communicating the ideas of the ancient world to the medieval West. Al-Farabi (Alfarabi) (873–950) was an Aristotelian logician who discussed the topics of future contingents, the number and relation of the categories, the relation between logic and grammar, and non-Aristotelian forms of inference. Al-Farabi also considered the theories of conditional syllogisms and analogical inference, which were part of the Stoic tradition of logic rather than the Aristotelian.

Maimonides (1138-1204) wrote a Treatise on Logic (Arabic: Maqala Fi-Sinat Al-Mantiq), referring to Al-Farabi as the "second master", the first being Aristotle.

Ibn Sina (Avicenna) (980–1037) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world, and also had an important influence on Western medieval writers such as Albertus Magnus. Avicenna wrote on the hypothetical syllogism and on the propositional calculus, which were both part of the Stoic logical tradition. He developed an original "temporally modalized" syllogistic theory, involving temporal logic and modal logic. He also made use of inductive logic, such as the methods of agreement, difference, and concomitant variation which are critical to the scientific method. One of Avicenna's ideas had a particularly important influence on Western logicians such as William of Ockham: Avicenna's word for a meaning or notion (ma'na), was translated by the scholastic logicians as the Latin intentio; in medieval logic and epistemology, this is a sign in the mind that naturally represents a thing. This was crucial to the development of Ockham's conceptualism: A universal term (e.g., "man") does not signify a thing existing in reality, but rather a sign in the mind (intentio in intellectu) which represents many things in reality; Ockham cites Avicenna's commentary on Metaphysics V in support of this view.

Fakhr al-Din al-Razi (b. 1149) criticised Aristotle's "first figure" and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill (1806–1873). Al-Razi's work was seen by later Islamic scholars as marking a new direction for Islamic logic, towards a Post-Avicennian logic. This was further elaborated by his student Afdaladdîn al-Khûnajî (d. 1249), who developed a form of logic revolving around the subject matter of conceptions and assents. In response to this tradition, Nasir al-Din al-Tusi (1201–1274) began a tradition of Neo-Avicennian logic which remained faithful to Avicenna's work and existed as an alternative to the more dominant Post-Avicennian school over the following centuries.

The Illuminationist school was founded by Shahab al-Din Suhrawardi (1155–1191), who developed the idea of "decisive necessity", which refers to the reduction of all modalities (necessity, possibility, contingency and impossibility) to the single mode of necessity. Ibn al-Nafis (1213–1288) wrote a book on Avicennian logic, which was a commentary of Avicenna's Al-Isharat (The Signs) and Al-Hidayah (The Guidance). Ibn Taymiyyah (1263–1328), wrote the Ar-Radd 'ala al-Mantiqiyyin, where he argued against the usefulness, though not the validity, of the syllogism and in favour of inductive reasoning. Ibn Taymiyyah also argued against the certainty of syllogistic arguments and in favour of analogy; his argument is that concepts founded on induction are themselves not certain but only probable, and thus a syllogism based on such concepts is no more certain than an argument based on analogy. He further claimed that induction itself is founded on a process of analogy. His model of analogical reasoning was based on that of juridical arguments. This model of analogy has been used in the recent work of John F. Sowa.

The Sharh al-takmil fi'l-mantiq written by Muhammad ibn Fayd Allah ibn Muhammad Amin al-Sharwani in the 15th century is the last major Arabic work on logic that has been studied. However, "thousands upon thousands of pages" on logic were written between the 14th and 19th centuries, though only a fraction of the texts written during this period have been studied by historians, hence little is known about the original work on Islamic logic produced during this later period.

Logic in medieval Europe

Brito's questions on the Old Logic

"Medieval logic" (also known as "Scholastic logic") generally means the form of Aristotelian logic developed in medieval Europe throughout roughly the period 1200–1600. For centuries after Stoic logic had been formulated, it was the dominant system of logic in the classical world. When the study of logic resumed after the Dark Ages, the main source was the work of the Christian philosopher Boethius, who was familiar with some of Aristotle's logic, but almost none of the work of the Stoics. Until the twelfth century, the only works of Aristotle available in the West were the Categories, On Interpretation, and Boethius's translation of the Isagoge of Porphyry (a commentary on the Categories). These works were known as the "Old Logic" (Logica Vetus or Ars Vetus). An important work in this tradition was the Logica Ingredientibus of Peter Abelard (1079–1142). His direct influence was small, but his influence through pupils such as John of Salisbury was great, and his method of applying rigorous logical analysis to theology shaped the way that theological criticism developed in the period that followed. The proof for the principle of explosion, also known as the principle of Pseudo-Scotus, the law according to which any proposition can be proven from a contradiction (including its negation), was first given by the 12th century French logician William of Soissons.

By the early thirteenth century, the remaining works of Aristotle's Organon, including the Prior Analytics, Posterior Analytics, and the Sophistical Refutations (collectively known as the Logica Nova or "New Logic"), had been recovered in the West. Logical work until then was mostly paraphrasis or commentary on the work of Aristotle. The period from the middle of the thirteenth to the middle of the fourteenth century was one of significant developments in logic, particularly in three areas which were original, with little foundation in the Aristotelian tradition that came before. These were:

  • The theory of supposition. Supposition theory deals with the way that predicates (e.g., 'man') range over a domain of individuals (e.g., all men). In the proposition 'every man is an animal', does the term 'man' range over or 'supposit for' men existing just in the present, or does the range include past and future men? Can a term supposit for a non-existing individual? Some medievalists have argued that this idea is a precursor of modern first-order logic. "The theory of supposition with the associated theories of copulatio (sign-capacity of adjectival terms), ampliatio (widening of referential domain), and distributio constitute one of the most original achievements of Western medieval logic".
  • The theory of syncategoremata. Syncategoremata are terms which are necessary for logic, but which, unlike categorematic terms, do not signify on their own behalf, but 'co-signify' with other words. Examples of syncategoremata are 'and', 'not', 'every', 'if', and so on.
  • The theory of consequences. A consequence is a hypothetical, conditional proposition: two propositions joined by the terms 'if ... then'. For example, 'if a man runs, then God exists' (Si homo currit, Deus est). A fully developed theory of consequences is given in Book III of William of Ockham's work Summa Logicae. There, Ockham distinguishes between 'material' and 'formal' consequences, which are roughly equivalent to the modern material implication and logical implication respectively. Similar accounts are given by Jean Buridan and Albert of Saxony.

The last great works in this tradition are the Logic of John Poinsot (1589–1644, known as John of St Thomas), the Metaphysical Disputations of Francisco Suarez (1548–1617), and the Logica Demonstrativa of Giovanni Girolamo Saccheri (1667–1733).

Traditional logic

The textbook tradition

Dudley Fenner's Art of Logic (1584)

Traditional logic generally means the textbook tradition that begins with Antoine Arnauld's and Pierre Nicole's Logic, or the Art of Thinking, better known as the Port-Royal Logic. Published in 1662, it was the most influential work on logic after Aristotle until the nineteenth century. The book presents a loosely Cartesian doctrine (that the proposition is a combining of ideas rather than terms, for example) within a framework that is broadly derived from Aristotelian and medieval term logic. Between 1664 and 1700, there were eight editions, and the book had considerable influence after that. The Port-Royal introduces the concepts of extension and intension. The account of propositions that Locke gives in the Essay is essentially that of the Port-Royal: "Verbal propositions, which are words, [are] the signs of our ideas, put together or separated in affirmative or negative sentences. So that proposition consists in the putting together or separating these signs, according as the things which they stand for agree or disagree."

Dudley Fenner helped popularize Ramist logic, a reaction against Aristotle. Another influential work was the Novum Organum by Francis Bacon, published in 1620. The title translates as "new instrument". This is a reference to Aristotle's work known as the Organon. In this work, Bacon rejects the syllogistic method of Aristotle in favor of an alternative procedure "which by slow and faithful toil gathers information from things and brings it into understanding". This method is known as inductive reasoning, a method which starts from empirical observation and proceeds to lower axioms or propositions; from these lower axioms, more general ones can be induced. For example, in finding the cause of a phenomenal nature such as heat, three lists should be constructed:

  • The presence list: a list of every situation where heat is found.
  • The absence list: a list of every situation that is similar to at least one of those of the presence list, except for the lack of heat.
  • The variability list: a list of every situation where heat can vary.

Then, the form nature (or cause) of heat may be defined as that which is common to every situation of the presence list, and which is lacking from every situation of the absence list, and which varies by degree in every situation of the variability list.
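
Bacon's procedure is, in effect, a set operation, and a toy version is easy to state. In the Python sketch below, the situations and feature names are invented for illustration, and the variability list is omitted for brevity:

    # Each situation is modelled as a set of observed features.
    presence = [{"sunlight", "light", "motion", "heat"},
                {"flame", "light", "motion", "heat"}]
    absence  = [{"moonlight", "light"}]        # similar situation lacking heat

    candidates = set.intersection(*presence)   # common to every presence case
    candidates -= set.union(*absence)          # drop 'light': occurs without heat
    candidates.discard("heat")                 # exclude the nature under study
    print(candidates)                          # {'motion'}

The printed answer, motion, happens to match Bacon's own conclusion that heat is a species of motion.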

Other works in the textbook tradition include Isaac Watts's Logick: Or, the Right Use of Reason (1725), Richard Whately's Logic (1826), and John Stuart Mill's A System of Logic (1843). Although the latter was one of the last great works in the tradition, Mill's view that the foundations of logic lie in introspection influenced the view that logic is best understood as a branch of psychology, a view which dominated the next fifty years of its development, especially in Germany.

Logic in Hegel's philosophy

Georg Wilhelm Friedrich Hegel

G.W.F. Hegel indicated the importance of logic to his philosophical system when he condensed his extensive Science of Logic into a shorter work published in 1817 as the first volume of his Encyclopaedia of the Philosophical Sciences. The "Shorter" or "Encyclopaedia" Logic, as it is often known, lays out a series of transitions which leads from the most empty and abstract of categories—Hegel begins with "Pure Being" and "Pure Nothing"—to the "Absolute", the category which contains and resolves all the categories which preceded it. Despite the title, Hegel's Logic is not really a contribution to the science of valid inference. Rather than deriving conclusions about concepts through valid inference from premises, Hegel seeks to show that thinking about one concept compels thinking about another concept (one cannot, he argues, possess the concept of "Quality" without the concept of "Quantity"); this compulsion is, supposedly, not a matter of individual psychology, because it arises almost organically from the content of the concepts themselves. His purpose is to show the rational structure of the "Absolute"—indeed of rationality itself. The method by which thought is driven from one concept to its contrary, and then to further concepts, is known as the Hegelian dialectic.

Although Hegel's Logic has had little impact on mainstream logical studies, its influence can be seen elsewhere:

  • Carl von Prantl's Geschichte der Logik im Abendland (1855–1867).
  • The work of the British Idealists, such as F.H. Bradley's Principles of Logic (1883).
  • The economic, political, and philosophical studies of Karl Marx, and in the various schools of Marxism.

Logic and psychology

Between the work of Mill and Frege stretched half a century during which logic was widely treated as a descriptive science, an empirical study of the structure of reasoning, and thus essentially as a branch of psychology. The German psychologist Wilhelm Wundt, for example, discussed deriving "the logical from the psychological laws of thought", emphasizing that "psychological thinking is always the more comprehensive form of thinking." This view was widespread among German philosophers of the period:

  • Theodor Lipps described logic as "a specific discipline of psychology".
  • Christoph von Sigwart understood logical necessity as grounded in the individual's compulsion to think in a certain way.
  • Benno Erdmann argued that "logical laws only hold within the limits of our thinking".

Such was the dominant view of logic in the years following Mill's work. This psychological approach to logic was rejected by Gottlob Frege. It was also subjected to an extended and destructive critique by Edmund Husserl in the first volume of his Logical Investigations (1900), an assault which has been described as "overwhelming". Husserl argued forcefully that grounding logic in psychological observations implied that all logical truths remained unproven, and that skepticism and relativism were unavoidable consequences.

Such criticisms did not immediately extirpate what is called "psychologism". For example, the American philosopher Josiah Royce, while acknowledging the force of Husserl's critique, remained "unable to doubt" that progress in psychology would be accompanied by progress in logic, and vice versa.

Rise of modern logic

The period between the fourteenth century and the beginning of the nineteenth century had been largely one of decline and neglect, and is generally regarded as barren by historians of logic. The revival of logic occurred in the mid-nineteenth century, at the beginning of a revolutionary period where the subject developed into a rigorous and formalistic discipline whose exemplar was the exact method of proof used in mathematics. The development of the modern "symbolic" or "mathematical" logic during this period is the most significant in the 2000-year history of logic, and is arguably one of the most important and remarkable events in human intellectual history.

A number of features distinguish modern logic from the old Aristotelian or traditional logic, the most important of which are as follows. Modern logic is fundamentally a calculus whose rules of operation are determined only by the shape and not by the meaning of the symbols it employs, as in mathematics. Many logicians were impressed by the "success" of mathematics, in that there had been no prolonged dispute about any truly mathematical result. C.S. Peirce noted that even though a mistake in the evaluation of a definite integral by Laplace led to an error concerning the moon's orbit that persisted for nearly 50 years, the mistake, once spotted, was corrected without any serious dispute. Peirce contrasted this with the disputation and uncertainty surrounding traditional logic, and especially reasoning in metaphysics. He argued that a truly "exact" logic would depend upon mathematical, i.e., "diagrammatic" or "iconic" thought. "Those who follow such methods will ... escape all error except such as will be speedily corrected after it is once suspected".

Modern logic is also "constructive" rather than "abstractive"; i.e., rather than abstracting and formalising theorems derived from ordinary language (or from psychological intuitions about validity), it constructs theorems by formal methods, then looks for an interpretation in ordinary language. It is entirely symbolic, meaning that even the logical constants (which the medieval logicians called "syncategoremata") and the categoric terms are expressed in symbols.

Modern logic

The development of modern logic falls into roughly five periods:

  • The embryonic period from Leibniz to 1847, when the notion of a logical calculus was discussed and developed, particularly by Leibniz, but no schools were formed, and isolated periodic attempts were abandoned or went unnoticed.
  • The algebraic period from Boole's Analysis to Schröder's Vorlesungen. In this period, there were more practitioners, and a greater continuity of development.
  • The logicist period from the Begriffsschrift of Frege to the Principia Mathematica of Russell and Whitehead. The aim of the "logicist school" was to incorporate the logic of all mathematical and scientific discourse in a single unified system which, taking as a fundamental principle that all mathematical truths are logical, did not accept any non-logical terminology. The major logicists were Frege, Russell, and the early Wittgenstein. It culminates with the Principia, an important work which includes a thorough examination and attempted solution of the antinomies which had been an obstacle to earlier progress.
  • The metamathematical period from 1910 to the 1930s, which saw the development of metalogic, in the finitist system of Hilbert and the non-finitist system of Löwenheim and Skolem, and the combination of logic and metalogic in the work of Gödel and Tarski. Gödel's incompleteness theorem of 1931 was one of the greatest achievements in the history of logic. Later in the 1930s, Gödel developed the notion of set-theoretic constructibility.
  • The period after World War II, when mathematical logic branched into four inter-related but separate areas of research: model theory, proof theory, computability theory, and set theory, and its ideas and methods began to influence philosophy.

Embryonic period

Leibniz

The idea that inference could be represented by a purely mechanical process is found as early as Raymond Llull, who proposed a (somewhat eccentric) method of drawing conclusions by a system of concentric rings. The work of logicians such as the Oxford Calculators led to a method of using letters instead of writing out logical calculations (calculationes) in words, a method used, for instance, in the Logica magna by Paul of Venice. Three hundred years after Llull, the English philosopher and logician Thomas Hobbes suggested that all logic and reasoning could be reduced to the mathematical operations of addition and subtraction. The same idea is found in the work of Leibniz, who had read both Llull and Hobbes, and who argued that logic can be represented through a combinatorial process or calculus. But, like Llull and Hobbes, he failed to develop a detailed or comprehensive system, and his work on this topic was not published until long after his death. Leibniz says that ordinary languages are subject to "countless ambiguities" and are unsuited for a calculus, whose task is to expose mistakes in inference arising from the forms and structures of words; hence, he proposed to identify an alphabet of human thought comprising fundamental concepts which could be composed to express complex ideas, and create a calculus ratiocinator that would make all arguments "as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate."

Gergonne (1816) said that reasoning does not have to be about objects about which one has perfectly clear ideas, because algebraic operations can be carried out without having any idea of the meaning of the symbols involved. Bolzano anticipated a fundamental idea of modern proof theory when he defined logical consequence or "deducibility" in terms of variables:

Hence I say that propositions M, N, O,… are deducible from propositions A, B, C, D,… with respect to variable parts i, j,…, if every class of ideas whose substitution for i, j,… makes all of A, B, C, D,… true, also makes all of M, N, O,… true. Occasionally, since it is customary, I shall say that propositions M, N, O,… follow, or can be inferred or derived, from A, B, C, D,…. Propositions A, B, C, D,… I shall call the premises, M, N, O,… the conclusions.

This is now known as semantic validity.
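
For a propositional language, Bolzano's substitutional definition amounts to quantifying over all reinterpretations of the variable parts. A brute-force Python sketch (the encoding of propositions as functions of a truth assignment is illustrative):

    from itertools import product

    # Propositions are modelled as functions from a truth assignment to bool;
    # the assignment's keys play the role of Bolzano's variable parts i, j, ...
    def deducible(premises, conclusions, variables):
        for values in product([True, False], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if all(p(assignment) for p in premises):
                if not all(c(assignment) for c in conclusions):
                    return False   # some substitution makes the premises true
                                   # but a conclusion false
        return True

    # Example: from A and (A -> B), the proposition B is deducible.
    A = lambda v: v["A"]
    A_implies_B = lambda v: (not v["A"]) or v["B"]
    B = lambda v: v["B"]
    print(deducible([A, A_implies_B], [B], ["A", "B"]))   # True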

Algebraic period

George Boole

Modern logic begins with what is known as the "algebraic school", originating with Boole and including Peirce, Jevons, Schröder, and Venn. Their objective was to develop a calculus to formalise reasoning in the area of classes, propositions, and probabilities. The school begins with Boole's seminal work Mathematical Analysis of Logic which appeared in 1847, although De Morgan (1847) is its immediate precursor. The fundamental idea of Boole's system is that algebraic formulae can be used to express logical relations. This idea occurred to Boole in his teenage years, while he was working as an usher in a private school in Lincoln, Lincolnshire. For example, let x and y stand for classes, let the symbol = signify that the classes have the same members, let xy stand for the class containing all and only the things that are members of both x and y, and so on. Boole calls these elective symbols, i.e. symbols which select certain objects for consideration. An expression in which elective symbols are used is called an elective function, and an equation of which the members are elective functions, is an elective equation. The theory of elective functions and their "development" is essentially the modern idea of truth-functions and their expression in disjunctive normal form.
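
In modern terms, Boole's elective symbols behave like operations on sets. A small illustrative Python sketch (the example classes are invented) showing the elective product as intersection, together with Boole's index law x² = x:

    # Classes as Python sets; the elective product xy selects the members
    # common to x and y (set intersection), mirroring logical "and".
    x = {"Socrates", "Plato", "Cicero"}   # e.g. the class of philosophers
    y = {"Socrates", "Plato"}             # e.g. the class of Greeks

    xy = x & y                            # elective product: Greek philosophers
    print(xy)                             # {'Socrates', 'Plato'}

    # Boole's index law x^2 = x: selecting twice selects the same class.
    assert (x & x) == x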

Boole's system admits of two interpretations, in class logic, and propositional logic. Boole distinguished between "primary propositions" which are the subject of syllogistic theory, and "secondary propositions", which are the subject of propositional logic, and showed how under different "interpretations" the same algebraic system could represent both. An example of a primary proposition is "All inhabitants are either Europeans or Asiatics." An example of a secondary proposition is "Either all inhabitants are Europeans or they are all Asiatics." These are easily distinguished in modern predicate logic, where it is also possible to show that the first follows from the second, but it is a significant disadvantage that there is no way of representing this in the Boolean system.

In his Symbolic Logic (1881), John Venn used diagrams of overlapping areas to express Boolean relations between classes or truth-conditions of propositions. In 1869 Jevons realised that Boole's methods could be mechanised, and constructed a "logical machine" which he showed to the Royal Society the following year. In 1885 Allan Marquand proposed an electrical version of the machine that is still extant (picture at the Firestone Library).

Charles Sanders Peirce

The defects in Boole's system (such as the use of the letter v for existential propositions) were all remedied by his followers. Jevons published Pure Logic, or the Logic of Quality apart from Quantity in 1864, where he suggested a symbol to signify exclusive or, which allowed Boole's system to be greatly simplified. This was usefully exploited by Schröder when he set out theorems in parallel columns in his Vorlesungen (1890–1905). Peirce (1880) showed how all the Boolean elective functions could be expressed by the use of a single primitive binary operation, "neither ... nor ..." (and equally well "not both ... and ..."); however, like many of Peirce's innovations, this remained unknown or unnoticed until Sheffer rediscovered it in 1913. Boole's early work also lacks the idea of the logical sum, which originates in Peirce (1867), Schröder (1877) and Jevons (1890), and the concept of inclusion, first suggested by Gergonne (1816) and clearly articulated by Peirce (1870).
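
Peirce's single-primitive claim is easy to verify mechanically. A minimal Python sketch (function names are illustrative) defining the other connectives from "neither ... nor" alone and checking all cases:

    def nor(a, b):
        return not (a or b)           # "neither a nor b"

    def not_(a):
        return nor(a, a)              # negation from NOR

    def or_(a, b):
        return not_(nor(a, b))        # disjunction from NOR

    def and_(a, b):
        return nor(not_(a), not_(b))  # conjunction from NOR

    # Exhaustive check against Python's built-in connectives.
    for a in (True, False):
        for b in (True, False):
            assert not_(a) == (not a)
            assert or_(a, b) == (a or b)
            assert and_(a, b) == (a and b)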


The success of Boole's algebraic system suggested that all logic must be capable of algebraic representation, and there were attempts to express a logic of relations in such form, of which the most ambitious was Schröder's monumental Vorlesungen über die Algebra der Logik ("Lectures on the Algebra of Logic", vol iii 1895), although the original idea was again anticipated by Peirce.

Boole's unwavering acceptance of Aristotle's logic is emphasized by the historian of logic John Corcoran in an accessible introduction to Laws of Thought. Corcoran also wrote a point-by-point comparison of Prior Analytics and Laws of Thought. According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were "to go under, over, and beyond" Aristotle's logic by 1) providing it with mathematical foundations involving equations; 2) extending the class of problems it could treat, from assessing validity to solving equations; and 3) expanding the range of applications it could handle, e.g. from propositions having only two terms to those having arbitrarily many.

More specifically, Boole agreed with what Aristotle said; Boole's 'disagreements', if they might be called that, concern what Aristotle did not say. First, in the realm of foundations, Boole reduced the four propositional forms of Aristotelian logic to formulas in the form of equations — by itself a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic — another revolutionary idea — involved Boole's doctrine that Aristotle's rules of inference (the "perfect syllogisms") must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, Aristotle's system could not deduce "No quadrangle that is a square is a rectangle that is a rhombus" from "No square that is a quadrangle is a rhombus that is a rectangle" or from "No rhombus that is a rectangle is a square that is a quadrangle".

Logicist period

Gottlob Frege.

After Boole, the next great advances were made by the German mathematician Gottlob Frege. Frege's objective was the program of Logicism, i.e. demonstrating that arithmetic is identical with logic. Frege went much further than any of his predecessors in his rigorous and formal approach to logic, and his calculus, the Begriffsschrift, is a landmark in the history of the subject. Frege also tried to show that the concept of number can be defined by purely logical means, so that (if he was right) logic includes arithmetic and all branches of mathematics that are reducible to arithmetic. He was not the first writer to suggest this. In his pioneering work Die Grundlagen der Arithmetik (The Foundations of Arithmetic), sections 15–17, he acknowledges the efforts of Leibniz, J.S. Mill as well as Jevons, citing the latter's claim that "algebra is a highly developed logic, and number but logical discrimination."

Frege's first work, the Begriffsschrift ("concept script"), is a rigorously axiomatised system of propositional logic, relying on just two connectives (negation and the conditional), two rules of inference (modus ponens and substitution), and six axioms. Frege referred to the "completeness" of this system, but was unable to prove this. The most significant innovation, however, was his explanation of the quantifier in terms of mathematical functions. Traditional logic regards the sentence "Caesar is a man" as of fundamentally the same form as "all men are mortal." Sentences with a proper name subject were regarded as universal in character, interpretable as "every Caesar is a man". At the outset Frege abandons the traditional "concepts subject and predicate", replacing them with argument and function respectively, which he believes "will stand the test of time. It is easy to see how regarding a content as a function of an argument leads to the formation of concepts. Furthermore, the demonstration of the connection between the meanings of the words if, and, not, or, there is, some, all, and so forth, deserves attention". Frege argued that a quantifier expression such as "all men" does not have the same logical or semantic form as a proper name such as "Caesar", and that the universal proposition "every A is B" is a complex proposition involving two functions, namely ' – is A' and ' – is B', such that whatever satisfies the first also satisfies the second. In modern notation, this would be expressed as

    \forall x\, (A(x) \to B(x))

In English, "for all x, if Ax then Bx". Thus only singular propositions are of subject-predicate form, and they are irreducibly singular, i.e. not reducible to a general proposition. Universal and particular propositions, by contrast, are not of simple subject-predicate form at all. If "all mammals" were the logical subject of the sentence "all mammals are land-dwellers", then to negate the whole sentence we would have to negate the predicate to give "all mammals are not land-dwellers". But this is not the case. This functional analysis of ordinary-language sentences later had a great impact on philosophy and linguistics.

This means that in Frege's calculus, Boole's "primary" propositions can be represented in a different way from "secondary" propositions. Writing I, M and W for "is an inhabitant", "is a man" and "is a woman", "All inhabitants are either men or women" is

    \forall x\, \big( I(x) \to (M(x) \lor W(x)) \big)

whereas "All the inhabitants are men or all the inhabitants are women" is

    \forall x\, (I(x) \to M(x)) \;\lor\; \forall x\, (I(x) \to W(x))
As Frege remarked in a critique of Boole's calculus:

"The real difference is that I avoid [the Boolean] division into two parts ... and give a homogeneous presentation of the lot. In Boole the two parts run alongside one another, so that one is like the mirror image of the other, but for that very reason stands in no organic relation to it'

As well as providing a unified and comprehensive system of logic, Frege's calculus also resolved the ancient problem of multiple generality. The ambiguity of "every girl kissed a boy" is difficult to express in traditional logic, but Frege's logic resolves this through the different scope of the quantifiers. Writing G for "is a girl", B for "is a boy", and K(x, y) for "x kissed y",

    \forall x\, \big( G(x) \to \exists y\, (B(y) \land K(x, y)) \big)

means that to every girl there corresponds some boy (any one will do) whom the girl kissed. But

    \exists y\, \big( B(y) \land \forall x\, (G(x) \to K(x, y)) \big)

means that there is some particular boy whom every girl kissed. Without this device, the project of logicism would have been doubtful or impossible. Using it, Frege provided a definition of the ancestral relation, of the many-to-one relation, and of mathematical induction.
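
The scope distinction can be checked over a finite model. A small Python sketch (the domain and the kissed relation are invented for illustration) in which the first reading holds but the second fails:

    girls = {"Alice", "Beth"}
    boys  = {"Carl", "Dan"}
    kissed = {("Alice", "Carl"), ("Beth", "Dan")}  # each girl kissed a different boy

    # Reading 1: for every girl there is some boy she kissed.
    reading1 = all(any((g, b) in kissed for b in boys) for g in girls)

    # Reading 2: there is one boy whom every girl kissed.
    reading2 = any(all((g, b) in kissed for g in girls) for b in boys)

    print(reading1, reading2)   # True False -- the two scopes come apart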

Ernst Zermelo

This period overlaps with the work of what is known as the "mathematical school", which included Dedekind, Pasch, Peano, Hilbert, Zermelo, Huntington, Veblen and Heyting. Their objective was the axiomatisation of branches of mathematics like geometry, arithmetic, analysis and set theory. Most notable was Hilbert's Program, which sought to ground all of mathematics on a finite set of axioms, proving its consistency by "finitistic" means and providing a procedure which would decide the truth or falsity of any mathematical statement. The standard axiomatization of the natural numbers is named the Peano axioms in his honour. Peano maintained a clear distinction between mathematical and logical symbols. Unaware of Frege's work, he independently recreated much of Frege's logical apparatus based on the work of Boole and Schröder.

The logicist project received a near-fatal setback with the discovery of a paradox in 1901 by Bertrand Russell. This showed that Frege's naive set theory led to a contradiction. Frege's theory contained the axiom that for any formal criterion, there is a set of all objects that meet the criterion. Russell showed that a set containing exactly the sets that are not members of themselves would contradict its own definition (if it is not a member of itself, it is a member of itself, and if it is a member of itself, it is not). This contradiction is now known as Russell's paradox. One important method of resolving this paradox was proposed by Ernst Zermelo. Zermelo set theory was the first axiomatic set theory. It was developed into the now-canonical Zermelo–Fraenkel set theory (ZF). Russell's paradox symbolically is as follows:

    \text{Let } R = \{\, x \mid x \notin x \,\}, \text{ then } R \in R \iff R \notin R

The monumental Principia Mathematica, a three-volume work on the foundations of mathematics, written by Russell and Alfred North Whitehead and published 1910–13 also included an attempt to resolve the paradox, by means of an elaborate system of types: a set of elements is of a different type than is each of its elements (set is not the element; one element is not the set) and one cannot speak of the "set of all sets". The Principia was an attempt to derive all mathematical truths from a well-defined set of axioms and inference rules in symbolic logic.

Metamathematical period

Kurt Gödel

The names of Gödel and Tarski dominate the 1930s, a crucial period in the development of metamathematics – the study of mathematics using mathematical methods to produce metatheories, or mathematical theories about other mathematical theories. Early investigations into metamathematics had been driven by Hilbert's program. Work on metamathematics culminated in the work of Gödel, who in 1929 showed that a given first-order sentence is deducible if and only if it is logically valid – i.e. it is true in every structure for its language. This is known as Gödel's completeness theorem. A year later, he proved two important theorems, which showed Hilbert's program to be unattainable in its original form. The first is that no consistent system of axioms whose theorems can be listed by an effective procedure such as an algorithm or computer program is capable of proving all facts about the natural numbers. For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second is that if such a system is also capable of proving certain basic facts about the natural numbers, then the system cannot prove the consistency of the system itself. These two results are known as Gödel's incompleteness theorems, or simply Gödel's Theorem. Later in the decade, Gödel developed the concept of set-theoretic constructibility, as part of his proof that the axiom of choice and the continuum hypothesis are consistent with Zermelo–Fraenkel set theory.

In proof theory, Gerhard Gentzen developed natural deduction and the sequent calculus. The former attempts to model logical reasoning as it 'naturally' occurs in practice and is most easily applied to intuitionistic logic, while the latter was devised to clarify the derivation of logical proofs in any formal system. Since Gentzen's work, natural deduction and sequent calculi have been widely applied in the fields of proof theory, mathematical logic and computer science. Gentzen also proved normalization and cut-elimination theorems for intuitionistic and classical logic which could be used to reduce logical proofs to a normal form.

Alfred Tarski

Alfred Tarski, a pupil of Łukasiewicz, is best known for his definition of truth and logical consequence, and the semantic concept of logical satisfaction. In 1933, he published (in Polish) The concept of truth in formalized languages, in which he proposed his semantic theory of truth: a sentence such as "snow is white" is true if and only if snow is white. Tarski's theory separated the metalanguage, which makes the statement about truth, from the object language, which contains the sentence whose truth is being asserted, and gave a correspondence (the T-schema) between phrases in the object language and elements of an interpretation. Tarski's approach to the difficult idea of explaining truth has been enduringly influential in logic and philosophy, especially in the development of model theory. Tarski also produced important work on the methodology of deductive systems, and on fundamental principles such as completeness, decidability, consistency and definability. According to Anita Feferman, Tarski "changed the face of logic in the twentieth century".

Alonzo Church and Alan Turing proposed formal models of computability, giving independent negative solutions to Hilbert's Entscheidungsproblem in 1936 and 1937, respectively. The Entscheidungsproblem asked for a procedure that, given any formal mathematical statement, would algorithmically determine whether the statement is true. Church and Turing proved there is no such procedure; Turing's paper introduced the halting problem as a key example of a mathematical problem without an algorithmic solution.

Church's system for computation developed into the modern λ-calculus, while the Turing machine became a standard model for a general-purpose computing device. It was soon shown that many other proposed models of computation were equivalent in power to those proposed by Church and Turing. These results led to the Church–Turing thesis that any deterministic algorithm that can be carried out by a human can be carried out by a Turing machine. Church proved additional undecidability results, showing that both Peano arithmetic and first-order logic are undecidable. Later work by Emil Post and Stephen Cole Kleene in the 1940s extended the scope of computability theory and introduced the concept of degrees of unsolvability.
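
The expressive power of the λ-calculus can be glimpsed in a few lines of Python, which supports λ-abstraction directly. This is only an illustrative sketch (the names zero, succ, add and to_int are ad hoc, not from any library), encoding natural numbers as Church numerals:

    # Church numerals: the number n is encoded as the higher-order
    # function that applies its argument f exactly n times.
    zero = lambda f: lambda x: x                       # f applied zero times
    succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application of f
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    to_int = lambda n: n(lambda k: k + 1)(0)           # decode by counting applications

    three = succ(succ(succ(zero)))
    print(to_int(add(three)(three)))   # prints 6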

The results of the first few decades of the twentieth century also had an impact upon analytic philosophy and philosophical logic, particularly from the 1950s onwards, in subjects such as modal logic, temporal logic, deontic logic, and relevance logic.

Logic after WWII

After World War II, mathematical logic branched into four inter-related but separate areas of research: model theory, proof theory, computability theory, and set theory.

In set theory, the method of forcing revolutionized the field by providing a robust method for constructing models and obtaining independence results. Paul Cohen introduced this method in 1963 to prove the independence of the continuum hypothesis and the axiom of choice from Zermelo–Fraenkel set theory. His technique, which was simplified and extended soon after its introduction, has since been applied to many other problems in all areas of mathematical logic.

Computability theory had its roots in the work of Turing, Church, Kleene, and Post in the 1930s and 40s. It developed into a study of abstract computability, which became known as recursion theory. The priority method, discovered independently by Albert Muchnik and Richard Friedberg in the 1950s, led to major advances in the understanding of the degrees of unsolvability and related structures. Research into higher-order computability theory demonstrated its connections to set theory. The fields of constructive analysis and computable analysis were developed to study the effective content of classical mathematical theorems; these in turn inspired the program of reverse mathematics. A separate branch of computability theory, computational complexity theory, was also characterized in logical terms as a result of investigations into descriptive complexity.

Model theory applies the methods of mathematical logic to study models of particular mathematical theories. Alfred Tarski published much pioneering work in the field, whose name derives from the series of papers he published under the title Contributions to the theory of models. In the 1960s, Abraham Robinson used model-theoretic techniques to develop calculus and analysis based on infinitesimals, a problem first proposed by Leibniz.

In proof theory, the relationship between classical mathematics and intuitionistic mathematics was clarified via tools such as the realizability method invented by Georg Kreisel and Gödel's Dialectica interpretation. This work inspired the contemporary area of proof mining. The Curry–Howard correspondence emerged as a deep analogy between logic and computation, including a correspondence between systems of natural deduction and typed lambda calculi used in computer science. As a result, research into this class of formal systems began to address both logical and computational aspects; this area of research came to be known as modern type theory. Advances were also made in ordinal analysis and the study of independence results in arithmetic such as the Paris–Harrington theorem.

This was also a period, particularly in the 1950s and afterwards, when the ideas of mathematical logic began to influence philosophical thinking. For example, tense logic is a formalised system for representing, and reasoning about, propositions qualified in terms of time. The philosopher Arthur Prior played a significant role in its development in the 1960s. Modal logics extend the scope of formal logic to include the elements of modality (for example, possibility and necessity). The ideas of Saul Kripke, particularly about possible worlds, and the formal system now called Kripke semantics have had a profound impact on analytic philosophy. His best known and most influential work is Naming and Necessity (1980). Deontic logics are closely related to modal logics: they attempt to capture the logical features of obligation, permission and related concepts. Although Bolzano had introduced some ideas combining mathematical and philosophical logic in the early 1800s, it was Ernst Mally, a pupil of Alexius Meinong, who proposed the first formal deontic system, in his Grundgesetze des Sollens, based on the syntax of Whitehead and Russell's propositional calculus.

Another logical system developed after World War II was fuzzy logic, introduced by the Azerbaijani mathematician Lotfi Asker Zadeh in 1965.

Mathematical logic

From Wikipedia, the free encyclopedia

Mathematical logic is the study of formal logic within mathematics. Major subareas include model theory, proof theory, set theory, and recursion theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics.

Since its inception, mathematical logic has both contributed to and been motivated by the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed.

Subfields and scope

The Handbook of Mathematical Logic (1977) makes a rough division of contemporary mathematical logic into four areas:

  1. set theory
  2. model theory
  3. recursion theory, and
  4. proof theory and constructive mathematics (considered as parts of a single area).

Additionally, sometimes the field of computational complexity theory is also included as part of mathematical logic. Each area has a distinct focus, although many techniques and results are shared among multiple areas. The borderlines amongst these fields, and the lines separating mathematical logic and other fields of mathematics, are not always sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory, but has also led to Löb's theorem in modal logic. The method of forcing is employed in set theory, model theory, and recursion theory, as well as in the study of intuitionistic mathematics.

The mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily considered a subfield of mathematical logic. Because of its applicability in diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ classical or nonclassical logic.

History

Mathematical logic emerged in the mid-19th century as a subfield of mathematics, reflecting the confluence of two traditions: formal philosophical logic and mathematics. "Mathematical logic, also called 'logistic', 'symbolic logic', the 'algebra of logic', and, more recently, simply 'formal logic', is the set of logical theories elaborated in the course of the last Nineteenth Century with the aid of an artificial notation and a rigorously deductive method." Before this emergence, logic was studied with rhetoric, with calculationes, through the syllogism, and with philosophy. The first half of the 20th century saw an explosion of fundamental results, accompanied by vigorous debate over the foundations of mathematics.

Early history

Theories of logic were developed in many cultures in history, including China, India, Greece and the Islamic world. Greek methods, particularly Aristotelian logic (or term logic) as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of predicate logic. In 18th-century Europe, attempts to treat the operations of formal logic in a symbolic or algebraic way had been made by philosophical mathematicians including Leibniz and Lambert, but their labors remained isolated and little known.

19th century

In the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work, building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine of logic into a sufficient framework for the study of foundations of mathematics. Charles Sanders Peirce later built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885.

Gottlob Frege presented an independent development of logic with quantifiers in his Begriffsschrift, published in 1879, a work generally considered as marking a turning point in the history of logic. Frege's work remained obscure, however, until Bertrand Russell began to promote it near the turn of the century. The two-dimensional notation Frege developed was never widely adopted and is unused in contemporary texts.

From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century.

Foundational theories

Concerns that mathematics had not been built on a proper foundation led to the development of axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry.

In logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano published a set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely characterized by their induction properties. Dedekind proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction.
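
For reference, one standard modern formulation of these axioms, in terms of the constant 0 and the successor function S, runs as follows:

    0 is a natural number;
    S(x) = S(y) → x = y                         (successor is injective);
    S(x) ≠ 0                                    (0 is not a successor);
    (φ(0) ∧ ∀x (φ(x) → φ(S(x)))) → ∀x φ(x)      (induction, one axiom per formula φ).

In the first-order setting the induction principle becomes an axiom schema, with one instance for each formula φ of the language.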

In the mid-19th century, flaws in Euclid's axioms for geometry became known. In addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826, mathematicians discovered that certain theorems taken for granted by Euclid were not in fact provable from his axioms, among them that a line contains at least two points, and that circles of the same radius whose centers are separated by that radius must intersect. Hilbert developed a complete set of axioms for geometry, building on previous work by Pasch. The success in axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics, such as the natural numbers and the real line. This would prove to be a major area of research in the first half of the 20th century.

The 19th century saw great advances in the theory of real analysis, including theories of convergence of functions and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817, but remained relatively unknown. Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858, Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers, a definition still employed in contemporary texts.
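
The (ε, δ)-definition referred to here reads: a function f is continuous at a point a if and only if

    for every ε > 0 there exists δ > 0 such that |x − a| < δ implies |f(x) − f(a)| < ε.

Its significance for the arithmetization of analysis is that it quantifies only over ordinary real numbers, with no appeal to infinitesimals.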

Georg Cantor developed the fundamental concepts of infinite set theory. His early results developed the theory of cardinality and proved that the reals and the natural numbers have different cardinalities. Over the next twenty years, Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be well-ordered, but was unable to produce a proof for this result, leaving it as an open problem in 1895.
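
The diagonal argument is simple enough to sketch in a few lines of Python (a schematic rendering, not Cantor's notation): from any claimed enumeration of infinite 0/1 sequences, build a sequence that differs from the n-th enumerated sequence at position n, and hence appears nowhere in the enumeration.

    # seqs(n)(m) is the m-th digit of the n-th enumerated 0/1 sequence.
    def diagonal(seqs):
        # Flip the n-th digit of the n-th sequence: the result disagrees
        # with every enumerated sequence somewhere, so it is not enumerated.
        return lambda n: 1 - seqs(n)(n)

    enum = lambda n: lambda m: (n >> m) & 1    # example: binary digits of n
    d = diagonal(enum)
    print([d(n) for n in range(8)])            # a sequence missing from enum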

20th century

In the early decades of the 20th century, the main areas of study were set theory and formal logic. The discovery of paradoxes in informal set theory caused some to wonder whether mathematics itself is inconsistent, and to look for proofs of consistency.

In 1900, Hilbert posed a famous list of 23 problems for the next century. The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively; the tenth was to produce a method that could decide whether a multivariate polynomial equation over the integers has a solution. Subsequent work to resolve these problems shaped the direction of mathematical logic, as did the effort to resolve Hilbert's Entscheidungsproblem, posed in 1928. This problem asked for a procedure that would decide, given a formalized mathematical statement, whether the statement is true or false.

Set theory and paradoxes

Ernst Zermelo gave a proof that every set could be well-ordered, a result Georg Cantor had been unable to obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among mathematicians and the pioneers of set theory. The immediate criticism of the method led Zermelo to publish a second exposition of his result, directly addressing criticisms of his proof. This paper led to the general acceptance of the axiom of choice in the mathematics community.

Skepticism about the axiom of choice was reinforced by recently discovered paradoxes in naive set theory. Cesare Burali-Forti was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard discovered Richard's paradox.

Zermelo provided the first set of axioms for set theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox.

In 1910, the first volume of Principia Mathematica by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an effort to avoid the paradoxes. Principia Mathematica is considered one of the most influential works of the 20th century, although the framework of type theory did not prove popular as a foundational theory for mathematics.

Fraenkel proved that the axiom of choice cannot be proved from the axioms of Zermelo's set theory with urelements. Later work by Paul Cohen showed that the addition of urelements is not needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.

Symbolic logic

Leopold Löwenheim and Thoralf Skolem obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox.

In his doctoral thesis, Kurt Gödel proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic. Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians.

In 1931, Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time.

Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. Gödel gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types.

The first textbook on symbolic logic for the layman was written by Lewis Carroll, author of Alice in Wonderland, in 1896.

Beginnings of the other branches

Alfred Tarski developed the basics of model theory.

Beginning in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish Éléments de mathématique, a series of encyclopedic mathematics texts. These texts, written in an austere and axiomatic style, emphasized rigorous presentation and set-theoretic foundations. Terminology coined by these texts, such as the words bijection, injection, and surjection, and the set-theoretic foundations the texts employed, were widely adopted throughout mathematics.

The study of computability came to be known as recursion theory or computability theory, because early formalizations by Gödel and Kleene relied on recursive definitions of functions. When these definitions were shown equivalent to Turing's formalization involving Turing machines, it became clear that a new concept – the computable function – had been discovered, and that this definition was robust enough to admit numerous independent characterizations. In his work on the incompleteness theorems in 1931, Gödel lacked a rigorous concept of an effective formal system; he immediately realized that the new definitions of computability could be used for this purpose, allowing him to state the incompleteness theorems in generality that could only be implied in the original paper.

Numerous results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene introduced the concepts of relative computability, foreshadowed by Turing, and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order functionals. Kleene and Georg Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory.

Formal logical systems

At its core, mathematical logic deals with mathematical concepts expressed using formal logical systems. These systems, though they differ in many details, share the common property of considering only expressions in a fixed formal language. The systems of propositional logic and first-order logic are the most widely studied today, because of their applicability to foundations of mathematics and because of their desirable proof-theoretic properties. Stronger classical logics such as second-order logic or infinitary logic are also studied, along with non-classical logics such as intuitionistic logic.

First-order logic

First-order logic is a particular formal system of logic. Its syntax involves only finite expressions as well-formed formulas, while its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse.
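
A small example shows how truth depends on the structure chosen: the sentence

    ∀x ∃y (y > x)

("for every element there is a strictly greater one") is true in the natural numbers with their usual ordering, but false in any finite linear order, since its greatest element admits no such y.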

Early results from formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem (1919) showed that if a set of sentences in a countable first-order language has an infinite model then it has at least one model of each infinite cardinality. This shows that it is impossible for a set of first-order axioms to characterize the natural numbers, the real numbers, or any other infinite structure up to isomorphism. As the goal of early foundational studies was to produce axiomatic theories for all parts of mathematics, this limitation was particularly stark.

Gödel's completeness theorem established the equivalence between semantic and syntactic definitions of logical consequence in first-order logic. It shows that if a particular sentence is true in every model that satisfies a particular set of axioms, then there must be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness theorem, and it took many years before logicians grasped its significance and began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words that an inconsistent set of formulas must have a finite inconsistent subset. The completeness and compactness theorems allow for sophisticated analysis of logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence of first-order logic in mathematics.
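
A classic application conveys the flavour of compactness. Let T be the set of all first-order sentences true in the natural numbers, add a new constant c, and consider the set

    T ∪ { c > 0, c > 1, c > 2, … }.

Each finite subset is satisfied in the natural numbers by taking c large enough, so by compactness the whole set has a model: a structure satisfying every sentence of T that nevertheless contains an element greater than every standard natural number. This is the starting point of the model-theoretic construction of nonstandard models of arithmetic.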

Gödel's incompleteness theorems establish additional limits on first-order axiomatizations. The first incompleteness theorem states that for any consistent, effectively given (defined below) logical system that is capable of interpreting arithmetic, there exists a statement that is true (in the sense that it holds for the natural numbers) but not provable within that logical system (and which indeed may fail in some non-standard models of arithmetic which may be consistent with the logical system). For example, in every logical system capable of expressing the Peano axioms, the Gödel sentence holds for the natural numbers but cannot be proved.

Here a logical system is said to be effectively given if it is possible to decide, given any formula in the language of the system, whether the formula is an axiom, and one which can express the Peano axioms is called "sufficiently strong." When applied to first-order logic, the first incompleteness theorem implies that any sufficiently strong, consistent, effective first-order theory has models that are not elementarily equivalent, a stronger limitation than the one established by the Löwenheim–Skolem theorem. The second incompleteness theorem states that no sufficiently strong, consistent, effective axiom system for arithmetic can prove its own consistency, which has been interpreted to show that Hilbert's program cannot be reached.

Other classical logics

Many logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an infinite amount of information, and higher-order logics, which include a portion of set theory directly in their semantics.

The most well studied infinitary logic is Lω₁,ω. In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of Lω₁,ω such as

    (x = 0) ∨ (x = 1) ∨ (x = 2) ∨ ⋯

Higher-order logics allow for quantification not only of elements of the domain of discourse, but subsets of the domain of discourse, sets of such subsets, and other objects of higher type. The semantics are defined so that, rather than having a separate domain for each higher-type quantifier to range over, the quantifiers instead range over all objects of the appropriate type. The logics studied before the development of first-order logic, for example Frege's logic, had similar set-theoretic aspects. Although higher-order logics are more expressive, allowing complete axiomatizations of structures such as the natural numbers, they do not satisfy analogues of the completeness and compactness theorems from first-order logic, and are thus less amenable to proof-theoretic analysis.

Another type of logic is fixed-point logic, which allows inductive definitions of the kind one writes for primitive recursive functions.

One can formally define the notion of an extension of first-order logic. It encompasses all the logics in this section, because they behave like first-order logic in certain fundamental ways, but it does not encompass all logics in general; for example, it does not encompass intuitionistic, modal or fuzzy logic.

Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the downward Löwenheim–Skolem theorem is first-order logic.

Nonclassical and modal logic

Modal logics include additional modal operators, such as an operator which states that a particular formula is not only true, but necessarily true. Although modal logic is not often used to axiomatize mathematics, it has been used to study the properties of first-order provability and set-theoretic forcing.

Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic.
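
A standard example illustrates what rejecting excluded middle costs. Classically one proves that there exist irrational numbers a and b with a^b rational by a case split: either √2^√2 is rational (take a = b = √2), or it is irrational (take a = √2^√2 and b = √2, since then a^b = √2² = 2). The argument decides neither case, so it exhibits no specific pair, and intuitionistically it does not count as a proof of existence.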

Algebraic logic

Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras.

Set theory

Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo, was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics.

Other formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory.

Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo, was proved independent of ZF by Fraenkel, but has come to be widely accepted by mathematicians. It states that given a collection of nonempty sets there is a single set C that contains exactly one element from each set in the collection. The set C is said to "choose" one element from each set in the collection. While the ability to make such a choice is considered obvious by some, since each set in the collection is nonempty, the lack of a general, concrete rule by which the choice can be made renders the axiom nonconstructive. Stefan Banach and Alfred Tarski showed that the axiom of choice can be used to decompose a solid ball into a finite number of pieces which can then be rearranged, with no scaling, to make two solid balls of the original size. This theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice.
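
In its equivalent choice-function form, the axiom can be stated as:

    for every family (Sᵢ), i ∈ I, of nonempty sets, there is a function f with domain I such that f(i) ∈ Sᵢ for every i ∈ I.

The function f performs all the choices at once, which is precisely what may be impossible to specify by an explicit rule.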

The continuum hypothesis, first proposed as a conjecture by Cantor, was listed by David Hilbert as one of his 23 problems in 1900. Gödel showed that the continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In 1963, Paul Cohen showed that the continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set theory. This independence result did not completely settle Hilbert's question, however, as it is possible that new axioms for set theory could resolve the hypothesis. Recent work along these lines has been conducted by W. Hugh Woodin, although its importance is not yet clear.

Contemporary research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the existence of such cardinals cannot be proved in ZFC. The existence of the smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that large cardinals have extremely high cardinality, their existence has many ramifications for the structure of the real line. Determinacy refers to the possible existence of winning strategies for certain two-player games (the games are said to be determined). The existence of these strategies implies structural properties of the real line and other Polish spaces.

Model theory

Model theory studies the models of various formal theories. Here a theory is a set of formulas in a particular formal logic and signature, while a model is a structure that gives a concrete interpretation of the theory. Model theory is closely related to universal algebra and algebraic geometry, although the methods of model theory focus more on logical considerations than those fields.

The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes.

The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic. A modern subfield developing from this is concerned with o-minimal structures.

Morley's categoricity theorem, proved by Michael D. Morley, states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities.

A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established.

Recursion theory

Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets that have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Rózsa Péter, Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s.

Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets.

Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarithmetical theory and α-recursion theory.

Contemporary research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory.

Algorithmically unsolvable problems

An important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm that returns the correct answer for all legal inputs to the problem. The first results about unsolvability, obtained independently by Church and Turing in 1936, showed that the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory and computer science.
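
Turing's proof is itself a diagonal argument, and can be sketched in Python; here halts is a purely hypothetical decider, assumed only in order to derive the contradiction:

    # Suppose, for contradiction, that halts(prog, arg) always correctly
    # reports whether prog(arg) eventually halts.
    def paradox(prog):
        if halts(prog, prog):    # hypothetical decider; cannot actually exist
            while True:          # ...then run forever
                pass
        else:
            return               # ...otherwise halt at once

    # paradox(paradox) halts exactly when halts reports that it does not.
    # This contradiction shows that no such halts can be implemented.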

There are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959. The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example.

Hilbert's tenth problem asked for an algorithm to determine whether a multivariate polynomial equation with integer coefficients has a solution in the integers. Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem was proved by Yuri Matiyasevich in 1970.

Proof theory and constructive mathematics

Proof theory is the study of formal proofs in various logical deduction systems. These proofs are represented as formal mathematical objects, facilitating their analysis by mathematical techniques. Several deduction systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen.

The study of constructive mathematics, in the context of mathematical logic, includes the study of systems in non-classical logic such as intuitionistic logic, as well as the study of predicative systems. An early proponent of predicativism was Hermann Weyl, who showed it is possible to develop a large part of real analysis using only predicative methods.

Because proofs are entirely finitary, whereas truth in a structure is not, it is common for work in constructive mathematics to emphasize provability. The relationship between provability in classical (or nonconstructive) systems and provability in intuitionistic (or constructive, respectively) systems is of particular interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or translate) classical logic into intuitionistic logic, allowing some properties about intuitionistic proofs to be transferred back to classical proofs.

Recent developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen.

Applications

"Mathematical logic has been successfully applied not only to mathematics and its foundations (G. Frege, B. Russell, D. Hilbert, P. Bernays, H. Scholz, R. Carnap, S. Lesniewski, T. Skolem), but also to physics (R. Carnap, A. Dittrich, B. Russell, C. E. Shannon, A. N. Whitehead, H. Reichenbach, P. Fevrier), to biology (J. H. Woodger, A. Tarski), to psychology (F. B. Fitch, C. G. Hempel), to law and morals (K. Menger, U. Klug, P. Oppenheim), to economics (J. Neumann, O. Morgenstern), to practical questions (E. C. Berkeley, E. Stamm), and even to metaphysics (J. [Jan] Salamucha, H. Scholz, J. M. Bochenski). Its applications to the history of logic have proven extremely fruitful (J. Lukasiewicz, H. Scholz, B. Mates, A. Becker, E. Moody, J. Salamucha, K. Duerr, Z. Jordan, P. Boehner, J. M. Bochenski, S. [Stanislaw] T. Schayer, D. Ingalls)." "Applications have also been made to theology (F. Drewnowski, J. Salamucha, I. Thomas)."

Connections with computer science

The study of computability theory in computer science is closely related to the study of computability in mathematical logic. There is a difference of emphasis, however. Computer scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on computability as a theoretical concept and on noncomputability.

The theory of semantics of programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard correspondence between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages.

Computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming.

Descriptive complexity theory relates logics to computational complexity. The first significant result in this area, Fagin's theorem (1974), established that NP is precisely the set of languages expressible by sentences of existential second-order logic.
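
For example, graph 3-colorability, an NP-complete property of graphs with edge relation E, is defined by the existential second-order sentence

    ∃R ∃G ∃B [ ∀x (R(x) ∨ G(x) ∨ B(x)) ∧ ∀x ∀y (E(x,y) → ¬(R(x) ∧ R(y)) ∧ ¬(G(x) ∧ G(y)) ∧ ¬(B(x) ∧ B(y))) ],

which existentially "guesses" three colour classes and checks, in first-order fashion, that no edge joins two vertices of the same class.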

Foundations of mathematics

In the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as Weierstrass' nowhere-differentiable continuous function were discovered.

Cantor's study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all else is the work of man," endorsing a return to the study of finite, concrete objects in mathematics. Although Kronecker's argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected them. David Hilbert argued in favor of the study of the infinite, saying "No one shall expel us from the Paradise that Cantor has created."

Mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. In addition to removing ambiguity from previously naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. In the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. Thus, for example, non-Euclidean geometry can be proved consistent by defining point to mean a point on a fixed sphere and line to mean a great circle on the sphere. The resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate.

With the development of formal logic, Hilbert asked whether it would be possible to prove that an axiom system is consistent by analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term finitary to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory.

A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of constructive. At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of the function must be known before the function itself can be said to exist.

In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a part of the philosophy of mathematics. This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to intuit the statement, to not only believe its truth but understand the reason for its truth. A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Later, Kleene and Kreisel would study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...