
Logicism

From Wikipedia, the free encyclopedia

In the philosophy of mathematics, logicism is a programme comprising one or more of the theses that — for some coherent meaning of 'logic' — mathematics is an extension of logic, some or all of mathematics is reducible to logic, or some or all of mathematics may be modelled in logic. Bertrand Russell and Alfred North Whitehead championed this programme, initiated by Gottlob Frege and subsequently developed by Richard Dedekind and Giuseppe Peano.

Overview

Dedekind's path to logicism had a turning point when he was able to construct a model satisfying the axioms characterizing the real numbers using certain sets of rational numbers. This and related ideas convinced him that arithmetic, algebra and analysis were reducible to the natural numbers plus a "logic" of classes. Furthermore, by 1872 he had concluded that the naturals themselves were reducible to sets and mappings. It is likely that other logicists, most importantly Frege, were also guided by the new theories of the real numbers published in 1872.

The philosophical impetus behind Frege's logicist programme from the Grundlagen der Arithmetik onwards was in part his dissatisfaction with the epistemological and ontological commitments of then-extant accounts of the natural numbers, and his conviction that Kant's use of truths about the natural numbers as examples of synthetic a priori truth was incorrect.

This started a period of expansion for logicism, with Dedekind and Frege as its main exponents. However, this initial phase of the logicist programme was brought into crisis with the discovery of the classical paradoxes of set theory (Cantor 1896, Zermelo and Russell 1900–1901). Frege gave up on the project after Russell recognized and communicated his paradox identifying an inconsistency in Frege's system set out in the Grundgesetze der Arithmetik. Note that naive set theory also suffers from this difficulty.

On the other hand, Russell wrote The Principles of Mathematics in 1903 using the paradox and developments of Giuseppe Peano's school of geometry. Since he treated the subject of primitive notions in geometry and set theory, this text is a watershed in the development of logicism. Evidence of the assertion of logicism was collected by Russell and Whitehead in their Principia Mathematica.

Today, the bulk of extant mathematics is believed to be derivable logically from a small number of extralogical axioms, such as the axioms of Zermelo–Fraenkel set theory (or its extension ZFC), from which no inconsistencies have as yet been derived. Thus, elements of the logicist programmes have proved viable, but in the process theories of classes, sets and mappings, and higher-order logics other than with Henkin semantics, have come to be regarded as extralogical in nature, in part under the influence of Quine's later thought.

Kurt Gödel's incompleteness theorems show that no formal system from which the Peano axioms for the natural numbers may be derived — such as Russell's systems in PM — can decide all the well-formed sentences of that system. This result damaged David Hilbert's programme for foundations of mathematics whereby 'infinitary' theories — such as that of PM — were to be proved consistent from finitary theories, with the aim that those uneasy about 'infinitary methods' could be reassured that their use should provably not result in the derivation of a contradiction. Gödel's result suggests that in order to maintain a logicist position, while still retaining as much as possible of classical mathematics, one must accept some axiom of infinity as part of logic. On the face of it, this damages the logicist programme also, albeit only for those already doubtful concerning 'infinitary methods'. Nonetheless, positions deriving from both logicism and from Hilbertian finitism have continued to be propounded since the publication of Gödel's result.

One argument that programmes derived from logicism remain valid might be that the incompleteness theorems are 'proved with logic just like any other theorems'. However, that argument appears not to acknowledge the distinction between theorems of first-order logic and theorems of higher-order logic. The former can be proven using finitistic methods, while the latter — in general — cannot. Tarski's undefinability theorem shows that Gödel numbering can be used to prove syntactical constructs, but not semantic assertions. Therefore, the claim that logicism remains a valid programme may commit one to holding that a system of proof based on the existence and properties of the natural numbers is less convincing than one based on some particular formal system.

Logicism — especially through the influence of Frege on Russell and Wittgenstein and later Dummett — was a significant contributor to the development of analytic philosophy during the twentieth century.

Origin of the name 'logicism'

Ivor Grattan-Guinness states that the French word 'Logistique' was "introduced by Couturat and others at the 1904 International Congress of Philosophy, and was used by Russell and others from then on, in versions appropriate for various languages." (G-G 2000:501).

Apparently the first (and only) usage by Russell appeared in his 1919: "Russell referred several time [sic] to Frege, introducing him as one 'who first succeeded in "logicising" mathematics' (p. 7). Apart from the misrepresentation (which Russell partly rectified by explaining his own view of the role of arithmetic in mathematics), the passage is notable for the word which he put in quotation marks, but their presence suggests nervousness, and he never used the word again, so that 'logicism' did not emerge until the later 1920s" (G-G 2002:434).

About the same time as Rudolf Carnap (1929), but apparently independently, Fraenkel (1928) used the word: "Without comment he used the name 'logicism' to characterise the Whitehead/Russell position (in the title of the section on p. 244, explanation on p. 263)" (G-G 2002:269). Carnap used a slightly different word 'Logistik'; Behmann complained about its use in Carnap's manuscript so Carnap proposed the word 'Logizismus', but he finally stuck to his word-choice 'Logistik' (G-G 2002:501). Ultimately "the spread was mainly due to Carnap, from 1930 onwards." (G-G 2000:502).

Intent, or goal, of logicism

Symbolic logic: The overt intent of Logicism is to derive all of mathematics from symbolic logic (Frege, Dedekind, Peano, Russell). As contrasted with algebraic logic (Boolean logic) that employs arithmetic concepts, symbolic logic begins with a very reduced set of marks (non-arithmetic symbols), a few "logical" axioms that embody the "laws of thought", and rules of inference that dictate how the marks are to be assembled and manipulated. Logicism also adopts from Frege's groundwork the reduction of natural language statements from "subject|predicate" into either propositional "atoms" or the "argument|function" of "generalization"—the notions "all", "some", "class" (collection, aggregate) and "relation".

In a logicist derivation of the natural numbers and their properties, no "intuition" of number should "sneak in" either as an axiom or by accident. The goal is to derive all of mathematics, starting with the counting numbers and then the real numbers, from some chosen "laws of thought" alone, without any tacit assumptions of "before" and "after" or "less" and "more" or to the point: "successor" and "predecessor". Gödel 1944 summarized Russell's logicistic "constructions", when compared to "constructions" in the foundational systems of Intuitionism and Formalism ("the Hilbert School") as follows: "Both of these schools base their constructions on a mathematical intuition whose avoidance is exactly one of the principal aims of Russell's constructivism" (Gödel 1944 in Collected Works 1990:119).

History: Gödel 1944 summarized the historical background from Leibniz's Characteristica universalis, through Frege and Peano to Russell: "Frege was chiefly interested in the analysis of thought and used his calculus in the first place for deriving arithmetic from pure logic", whereas Peano "was more interested in its applications within mathematics". But "It was only [in Russell's] Principia Mathematica that full use was made of the new method for actually deriving large parts of mathematics from a very few logical concepts and axioms. In addition, the young science was enriched by a new instrument, the abstract theory of relations" (pp. 120–121).

Kleene 1952 states it this way: "Leibniz (1666) first conceived of logic as a science containing the ideas and principles underlying all other sciences. Dedekind (1888) and Frege (1884, 1893, 1903) were engaged in defining mathematical notions in terms of logical ones, and Peano (1889, 1894–1908) in expressing mathematical theorems in a logical symbolism" (p. 43); in the previous paragraph he includes Russell and Whitehead as exemplars of the "logicistic school", the other two "foundational" schools being the intuitionistic and the "formalistic or axiomatic school" (p. 43).

Frege 1879 describes his intent in the Preface to his 1879 Begriffsschrift: He started with a consideration of arithmetic: did it derive from "logic" or from "facts of experience"?

"I first had to ascertain how far one could proceed in arithmetic by means of inferences alone, with the sole support of those laws of thought that transcend all particulars. My initial step was to attempt to reduce the concept of ordering in a sequence to that of logical consequence, so as to proceed from there to the concept of number. To prevent anything intuitive from penetrating here unnoticed I had to bend every effort to keep the chain of inferences free of gaps . . . I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. Its first purpose, therefore, is to provide us with the most reliable test of the validity of a chain of inferences and to point out every presupposition that tries to sneak in unnoticed" (Frege 1879 in van Heijenoort 1967:5).

Dedekind 1887 describes his intent in the 1887 Preface to the First Edition of his The Nature and Meaning of Numbers. He believed that the "foundations of the simplest science; viz., that part of logic which deals with the theory of numbers" had not been properly argued — "nothing capable of proof ought to be accepted without proof":

"In speaking of arithmetic (algebra, analysis) as a part of logic I mean to imply that I consider the number-concept entirely independent of the notions or intuitions of space and time, that I consider it an immediate result from the laws of thought . . . numbers are free creations of the human mind . . . [and] only through the purely logical process of building up the science of numbers . . . are we prepared accurately to investigate our notions of space and time by bringing them into relation with this number-domain created in our mind" (Dedekind 1887 Dover republication 1963:31).

Peano 1889 states his intent in his Preface to his 1889 Principles of Arithmetic:

"Questions that pertain to the foundations of mathematics, although treated by many in recent times, still lack a satisfactory solution. The difficulty has its main source in the ambiguity of language. ¶ That is why it is of the utmost importance to examine attentively the very words we use. My goal has been to undertake this examination" (Peano 1889 in van Heijenoort 1967:85).

Russell 1903 describes his intent in the Preface to his 1903 Principles of Mathematics:

"THE present work has two main objects. One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental logical concepts, and that all its propositions are deducible from a very small number of fundamental logical principles" (Preface 1903:vi).
"A few words as to the origin of the present work may serve to show the importance of the questions discussed. About six years ago, I began an investigation into the philosophy of Dynamics. . . . [From two questions — acceleration and absolute motion in a "relational theory of space"] I was led to a re-examination of the principles of Geometry, thence to the philosophy of continuity and infinity, and then, with a view to discovering the meaning of the word any, to Symbolic Logic" (Preface 1903:vi-vii).

Epistemology, ontology and logicism

Dedekind and Frege: The epistemologies of Dedekind and of Frege seem less well-defined than that of Russell, but both seem to accept as a priori the customary "laws of thought" concerning simple propositional statements (usually of belief); these laws would be sufficient in themselves if augmented with a theory of classes and relations (e.g. x R y) between individuals x and y linked by the generalization R.

Dedekind's "free formations of the human mind" in contrast to the "strictures" of Kronecker: Dedekind's argument begins with "1. In what follows I understand by thing every object of our thought"; we humans use symbols to discuss these "things" of our minds; "A thing is completely determined by all that can be affirmed or thought concerning it" (p. 44). In a subsequent paragraph Dedekind discusses what a "system S is: it is an aggregate, a manifold, a totality of associated elements (things) a, b, c"; he asserts that "such a system S . . . as an object of our thought is likewise a thing (1); it is completely determined when with respect to every thing it is determined whether it is an element of S or not.*" (p. 45, italics added). The * indicates a footnote where he states that:

"Kronecker not long ago (Crelle's Journal, Vol. 99, pp. 334-336) has endeavored to impose certain limitations upon the free formation of concepts in mathematics which I do not believe to be justified" (p. 45).

Indeed he awaits Kronecker's "publishing his reasons for the necessity or merely the expediency of these limitations" (p. 45).

Leopold Kronecker, famous for his assertion that "God made the integers, all else is the work of man" had his foes, among them Hilbert. Hilbert called Kronecker a "dogmatist, to the extent that he accepts the integer with its essential properties as a dogma and does not look back" and equated his extreme constructivist stance with that of Brouwer's intuitionism, accusing both of "subjectivism": "It is part of the task of science to liberate us from arbitrariness, sentiment and habit and to protect us from the subjectivism that already made itself felt in Kronecker's views and, it seems to me, finds its culmination in intuitionism". Hilbert then states that "mathematics is a presuppositionless science. To found it I do not need God, as does Kronecker . . ." (p. 479).

Russell as realist: Russell's Realism served him as an antidote to British Idealism, with portions borrowed from European Rationalism and British empiricism. To begin with, "Russell was a realist about two key issues: universals and material objects" (Russell 1912:xi). For Russell, tables are real things that exist independently of Russell the observer. Rationalism would contribute the notion of a priori knowledge, while empiricism would contribute the role of experiential knowledge (induction from experience). Russell would credit Kant with the idea of "a priori" knowledge, but he offers an objection to Kant he deems "fatal": "The facts [of the world] must always conform to logic and arithmetic. To say that logic and arithmetic are contributed by us does not account for this" (1912:87); Russell concludes that the a priori knowledge that we possess is "about things, and not merely about thoughts" (1912:89). In this, Russell's epistemology seems to differ from Dedekind's belief that "numbers are free creations of the human mind" (Dedekind 1887:31).

But his epistemology about the innate (he prefers the word a priori when applied to logical principles, cf. 1912:74) is intricate. He would strongly, unambiguously express support for the Platonic "universals" (cf. 1912:91-118) and he would conclude that truth and falsity are "out there"; minds create beliefs and what makes a belief true is a fact, "and this fact does not (except in exceptional cases) involve the mind of the person who has the belief" (1912:130).

Where did Russell derive these epistemic notions? He tells us in the Preface to his 1903 Principles of Mathematics. Note that he asserts that the belief "Emily is a rabbit" is non-existent, and yet the truth of this non-existent proposition is independent of any knowing mind; if Emily really is a rabbit, the fact of this truth exists whether or not Russell or any other mind is alive or dead, and the relation of Emily to rabbit-hood is "ultimate":

"On fundamental questions of philosophy, my position, in all its chief features, is derived from Mr G. E. Moore. I have accepted from him the non-existential nature of propositions (except such as happen to assert existence) and their independence of any knowing mind; also the pluralism which regards the world, both that of existents and that of entities, as composed of an infinite number of mutually independent entities, with relations which are ultimate, and not reducible to adjectives of their terms or of the whole which these compose. . . . The doctrines just mentioned are, in my opinion, quite indispensable to any even tolerably satisfactory philosophy of mathematics, as I hope the following pages will show. . . . Formally, my premisses are simply assumed; but the fact that they allow mathematics to be true, which most current philosophies do not, is surely a powerful argument in their favour." (Preface 1903:viii)

Russell's paradox: In 1902 Russell discovered a "vicious circle" (Russell's paradox) in Frege's Grundgesetze der Arithmetik, derived from Frege's Basic Law V and he was determined not to repeat it in his 1903 Principles of Mathematics. In two Appendices added at the last minute he devoted 28 pages to both a detailed analysis of Frege's theory contrasted against his own, and a fix for the paradox. But he was not optimistic about the outcome:

"In the case of classes, I must confess, I have failed to perceive any concept fulfilling the conditions requisite for the notion of class. And the contradiction discussed in Chapter x. proves that something is amiss, but what this is I have hitherto failed to discover. (Preface to Russell 1903:vi)"

"Fictionalism" and Russell's no-class theory: Gödel in his 1944 would disagree with the young Russell of 1903 ("[my premisses] allow mathematics to be true") but would probably agree with Russell's statement quoted above ("something is amiss"); Russell's theory had failed to arrive at a satisfactory foundation of mathematics: the result was "essentially negative; i.e. the classes and concepts introduced this way do not have all the properties required for the use of mathematics" (Gödel 1944:132).

How did Russell arrive in this situation? Gödel observes that Russell is a surprising "realist" with a twist: he cites Russell's 1919:169 "Logic is concerned with the real world just as truly as zoology" (Gödel 1944:120). But he observes that "when he started on a concrete problem, the objects to be analyzed (e.g. the classes or propositions) soon for the most part turned into "logical fictions" . . . [meaning] only that we have no direct perception of them." (Gödel 1944:120)

In an observation pertinent to Russell's brand of logicism, Perry remarks that Russell went through three phases of realism: extreme, moderate and constructive (Perry 1997:xxv). In 1903 he was in his extreme phase; by 1905 he would be in his moderate phase. In a few years he would "dispense with physical or material objects as basic bits of the furniture of the world. He would attempt to construct them out of sense-data" in his next book Our Knowledge of the External World [1914] (Perry 1997:xxvi).

These constructions in what Gödel 1944 would call "nominalistic constructivism . . . which might better be called fictionalism" derived from Russell's "more radical idea, the no-class theory" (p. 125):

"according to which classes or concepts never exist as real objects, and sentences containing these terms are meaningful only as they can be interpreted as . . . a manner of speaking about other things" (p. 125).

See more in the Criticism sections, below.

An example of a logicist construction of the natural numbers: Russell's construction in the Principia

The logicism of Frege and Dedekind is similar to that of Russell, but with differences in the particulars (see Criticisms, below). Overall, the logicist derivations of the natural numbers differ from derivations from, for example, Zermelo's axioms for set theory ('Z'). Whereas in derivations from Z one definition of "number" uses an axiom of that system — the axiom of pairing, which leads to the definition of "ordered pair" — no overt number axiom exists in the various logicist axiom systems allowing the derivation of the natural numbers. Note that the axioms needed to derive the definition of a number may differ between axiom systems for set theory in any case. For instance, in ZF and ZFC, the axiom of pairing, and hence ultimately the notion of an ordered pair, is derivable from the Axiom of Infinity and the Axiom of Replacement and is required in the definition of the von Neumann numerals (but not the Zermelo numerals), whereas in NFU the Frege numerals may be derived in a way analogous to their derivation in the Grundgesetze.
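For reference, the three numeral systems just mentioned can be written out in modern set-theoretic notation; these are the standard textbook definitions, given here as a gloss rather than drawn from the article's sources:

```latex
% von Neumann numerals: each number has its predecessor as a subset (and a member)
0 = \emptyset, \qquad n+1 = n \cup \{n\}

% Zermelo numerals: each number is the singleton of its predecessor
0 = \emptyset, \qquad n+1 = \{n\}

% Frege--Russell numerals: a number n is the class of all classes equinumerous
% with some given n-membered class A_n (equinumerosity written \approx)
n = \{\, x \mid x \approx A_n \,\}
```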

The Principia, like its forerunner the Grundgesetze, begins its construction of the numbers from primitive propositions such as "class", "propositional function", and in particular, relations of "similarity" ("equinumerosity": placing the elements of collections in one-to-one correspondence) and "ordering" (using "the successor of" relation to order the collections of the equinumerous classes). The logicistic derivation equates the cardinal numbers constructed this way to the natural numbers, and these numbers end up all of the same "type" — as classes of classes — whereas in some set-theoretical constructions — for instance the von Neumann and the Zermelo numerals — each number has its predecessor as a subset. Kleene observes the following. (Kleene's assumptions (1) and (2) state that 0 has property P and n+1 has property P whenever n has property P.)

"The viewpoint here is very different from that of [Kronecker]'s maxim that 'God made the integers' plus Peano's axioms of number and mathematical induction], where we presupposed an intuitive conception of the natural number sequence, and elicited from it the principle that, whenever a particular property P of natural numbers is given such that (1) and (2), then any given natural number must have the property P." (Kleene 1952:44).

The importance to the logicist programme of the construction of the natural numbers derives from Russell's contention that "That all traditional pure mathematics can be derived from the natural numbers is a fairly recent discovery, though it had long been suspected" (1919:4). One derivation of the real numbers proceeds from the theory of Dedekind cuts on the rational numbers, the rational numbers in turn being derived from the naturals. While an example of how this is done is useful, it relies first on the derivation of the natural numbers. So, if philosophical difficulties appear in a logicist derivation of the natural numbers, these problems should be sufficient to stop the programme until they are resolved (see Criticisms, below).

One attempt to construct the natural numbers is summarized by Bernays 1930–1931. But rather than use Bernays' précis, which is incomplete in some details, an attempt at a paraphrase of Russell's construction, incorporating some finite illustrations, is set out below:

Preliminaries

For Russell, collections (classes) are aggregates of "things" specified by proper names, that come about as the result of propositions (assertions of fact about a thing or things). Russell analysed this general notion. He begins with "terms" in sentences, which he analysed as follows:

Terms: For Russell, "terms" are either "things" or "concepts": "Whatever may be an object of thought, or may occur in any true or false proposition, or can be counted as one, I call a term. This, then, is the widest word in the philosophical vocabulary. I shall use as synonymous with it the words, unit, individual, and entity. The first two emphasize the fact that every term is one, while the third is derived from the fact that every term has being, i.e. is in some sense. A man, a moment, a number, a class, a relation, a chimaera, or anything else that can be mentioned, is sure to be a term; and to deny that such and such a thing is a term must always be false" (Russell 1903:43)

Things are indicated by proper names; concepts are indicated by adjectives or verbs: "Among terms, it is possible to distinguish two kinds, which I shall call respectively things and concepts; the former are the terms indicated by proper names, the latter those indicated by all other words . . . Among concepts, again, two kinds at least must be distinguished, namely those indicated by adjectives and those indicated by verbs" (1903:44).

Concept-adjectives are "predicates"; concept-verbs are "relations": "The former kind will often be called predicates or class-concepts; the latter are always or almost always relations." (1903:44)

The notion of a "variable" subject appearing in a proposition: "I shall speak of the terms of a proposition as those terms, however numerous, which occur in a proposition and may be regarded as subjects about which the proposition is. It is a characteristic of the terms of a proposition that anyone of them may be replaced by any other entity without our ceasing to have a proposition. Thus we shall say that "Socrates is human" is a proposition having only one term; of the remaining component of the proposition, one is the verb, the other is a predicate.. . . Predicates, then, are concepts, other than verbs, which occur in propositions having only one term or subject." (1903:45)

Truth and falsehood: Suppose one were to point to an object and say: "This object in front of me named 'Emily' is a woman." This is a proposition, an assertion of the speaker's belief, which is to be tested against the "facts" of the outer world: "Minds do not create truth or falsehood. They create beliefs . . . what makes a belief true is a fact, and this fact does not (except in exceptional cases) in any way involve the mind of the person who has the belief" (1912:130). If by investigation of the utterance and correspondence with "fact", Russell discovers that Emily is a rabbit, then his utterance is considered "false"; if Emily is a female human (a female "featherless biped" as Russell likes to call humans, following Diogenes Laërtius's anecdote about Plato), then his utterance is considered "true".

Classes (aggregates, complexes): "The class, as opposed to the class-concept, is the sum or conjunction of all the terms which have the given predicate" (1903 p. 55). Classes can be specified by extension (listing their members) or by intension, i.e. by a "propositional function" such as "x is a u" or "x is v". But "if we take extension pure, our class is defined by enumeration of its terms, and this method will not allow us to deal, as Symbolic Logic does, with infinite classes. Thus our classes must in general be regarded as objects denoted by concepts, and to this extent the point of view of intension is essential." (1903 p. 66)

Propositional functions: "The characteristic of a class concept, as distinguished from terms in general, is that "x is a u" is a propositional function when, and only when, u is a class-concept." (1903:56)

Extensional versus intensional definition of a class: "71. Class may be defined either extensionally or intensionally. That is to say, we may define the kind of object which is a class, or the kind of concept which denotes a class: this is the precise meaning of the opposition of extension and intension in this connection. But although the general notion can be defined in this two-fold manner, particular classes, except when they happen to be finite, can only be defined intensionally, i.e. as the objects denoted by such and such concepts. . . . Logically, the extensional definition appears to be equally applicable to infinite classes, but practically, if we were to attempt it, Death would cut short our laudable endeavour before it had attained its goal." (1903:69)

The definition of the natural numbers

In the Principia, the natural numbers derive from all propositions that can be asserted about any collection of entities. Russell makes this clear in the second (italicized) sentence below.

"In the first place, numbers themselves form an infinite collection, and cannot therefore be defined by enumeration. In the second place, the collections having a given number of terms themselves presumably form an infinite collection: it is to be presumed, for example, that there are an infinite collection of trios in the world, for if this were not the case the total number of things in the world would be finite, which, though possible, seems unlikely. In the third place, we wish to define "number" in such a way that infinite numbers may be possible; thus we must be able to speak of the number of terms in an infinite collection, and such a collection must be defined by intension, i.e. by a property common to all its members and peculiar to them." (1919:13)

To illustrate, consider the following finite example: Suppose there are 12 families on a street. Some have children, some do not. To discuss the names of the children in these households requires 12 propositions asserting "childname is the name of a child in family Fn" applied to this collection of households on the particular street of families with names F1, F2, . . . F12. Each of the 12 propositions regards whether or not the "argument" childname applies to a child in a particular household. The children's names (childname) can be thought of as the x in a propositional function f(x), where the function is "name of a child in the family with name Fn".
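A minimal Python sketch may make the machinery concrete; the street data below is invented for illustration, and a propositional function is modelled as an ordinary predicate whose satisfying arguments form a class "by intension":

```python
# Hypothetical data standing in for the 12 households (only three shown).
families = {
    "F1": ["Annie", "Barbie"],
    "F2": [],
    "F3": ["Charles"],
    # ... families F4 through F12 would follow the same pattern
}

def is_child_in(family_name):
    """The propositional function f(x): 'x is the name of a child in the family named family_name'."""
    return lambda childname: childname in families.get(family_name, [])

# The domain of arguments: every child's name on the street.
domain = {name for children in families.values() for name in children}

def class_by_intension(prop_fn):
    """The class determined by a propositional function: all terms in the domain satisfying it."""
    return frozenset(x for x in domain if prop_fn(x))

print(class_by_intension(is_child_in("F1")))   # frozenset({'Annie', 'Barbie'})
```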

Step 1: Assemble all the classes: Whereas the preceding example is finite over the finite propositional function "childnames of the children in family Fn" on the finite street of a finite number of families, Russell apparently intended the following to extend to all propositional functions extending over an infinite domain so as to allow the creation of all the numbers.

Kleene considers that Russell has set out an impredicative definition that he will have to resolve, or risk deriving something like the Russell paradox. "Here instead we presuppose the totality of all properties of cardinal numbers, as existing in logic, prior to the definition of the natural number sequence" (Kleene 1952:44). The problem will appear, even in the finite example presented here, when Russell deals with the unit class (cf. Russell 1903:517).

The question arises what precisely a "class" is or should be. For Dedekind and Frege, a class is a distinct entity in its own right, a 'unity' that can be identified with all those entities x that satisfy some propositional function F. (This symbolism appears in Russell, attributed there to Frege: "The essence of a function is what is left when the x is taken away, i.e. in the above instance, 2( )³ + ( ). The argument x does not belong to the function, but the two together make a whole (ib. p. 6 [i.e. Frege's 1891 Funktion und Begriff])" (Russell 1903:505).) For example, a particular "unity" could be given a name; suppose a family Fα has the children with the names Annie, Barbie and Charles:

{ a, b, c }

This notion of collection or class as object, when used without restriction, results in Russell's paradox; see more below about impredicative definitions. Russell's solution was to define the notion of a class to be only those elements that satisfy the proposition, his argument being that, indeed, the arguments x do not belong to the propositional function aka "class" created by the function. The class itself is not to be regarded as a unitary object in its own right, it exists only as a kind of useful fiction: "We have avoided the decision as to whether a class of things has in any sense an existence as one object. A decision of this question in either way is indifferent to our logic" (First edition of Principia Mathematica 1927:24).

Russell continues to hold this opinion in his 1919; observe the words "symbolic fictions":

"When we have decided that classes cannot be things of the same sort as their members, that they cannot be just heaps or aggregates, and also that they cannot be identified with propositional functions, it becomes very difficult to see what they can be, if they are to be more than symbolic fictions. And if we can find any way of dealing with them as symbolic fictions, we increase the logical security of our position, since we avoid the need of assuming that there are classes without being compelled to make the opposite assumption that there are no classes. We merely abstain from both assumptions. . . . But when we refuse to assert that there are classes, we must not be supposed to be asserting dogmatically that there are none. We are merely agnostic as regards them . . .." (1919:184)

And in the second edition of PM (1927) Russell holds that "functions occur only through their values, . . . all functions of functions are extensional, . . . [and] consequently there is no reason to distinguish between functions and classes . . . Thus classes, as distinct from functions, lose even that shadowy being which they retain in *20" (p. xxxix). In other words, classes as a separate notion have vanished altogether.

Step 2: Collect "similar" classes into 'bundles' : These above collections can be put into a "binary relation" (comparing for) similarity by "equinumerosity", symbolized here by , i.e. one-one correspondence of the elements, and thereby create Russellian classes of classes or what Russell called "bundles". "We can suppose all couples in one bundle, all trios in another, and so on. In this way we obtain various bundles of collections, each bundle consisting of all the collections that have a certain number of terms. Each bundle is a class whose members are collections, i.e. classes; thus each is a class of classes" (Russell 1919:14).

Step 3: Define the null class: Notice that a certain class of classes is special because its classes contain no elements, i.e. no elements satisfy the predicates whose assertion defined this particular class/collection.

The resulting entity may be called "the null class" or "the empty class". Russell symbolized the null/empty class with Λ. So what exactly is the Russellian null class? In PM Russell says that "A class is said to exist when it has at least one member . . . the class which has no members is called the 'null class' . . . 'α is the null-class' is equivalent to 'α does not exist'". The question naturally arises whether the null class itself 'exists'. Difficulties related to this question occur in Russell's 1903 work. After he discovered the paradox in Frege's Grundgesetze he added Appendix A to his 1903 where, through the analysis of the nature of the null and unit classes, he discovered the need for a "doctrine of types"; see more about the unit class, the problem of impredicative definitions and Russell's "vicious circle principle" below.

Step 4: Assign a "numeral" to each bundle: For purposes of abbreviation and identification, to each bundle assign a unique symbol (aka a "numeral"). These symbols are arbitrary.

Step 5: Define "0" Following Frege, Russell picked the empty or null class of classes as the appropriate class to fill this role, this being the class of classes having no members. This null class of classes may be labelled "0"

Step 6: Define the notion of "successor": Russell defined a new characteristic, "hereditary" (cf. Frege's 'ancestral'), a property of certain classes with the ability to "inherit" a characteristic from another class (which may be a class of classes), i.e. "A property is said to be "hereditary" in the natural-number series if, whenever it belongs to a number n, it also belongs to n+1, the successor of n" (1919:21). He asserts that "the natural numbers are the posterity — the "children", the inheritors of the "successor" — of 0 with respect to the relation "the immediate predecessor of" (which is the converse of "successor") (1919:23).

Note that Russell has used a few words here without definition, in particular "number series", "number n", and "successor". He will define these in due course. Observe in particular that Russell does not use the unit class of classes "1" to construct the successor. The reason is that, in Russell's detailed analysis, if a unit class becomes an entity in its own right, then it too can be an element in its own proposition; this causes the proposition to become "impredicative" and result in a "vicious circle". Rather, he states: "We saw in Chapter II that a cardinal number is to be defined as a class of classes, and in Chapter III that the number 1 is to be defined as the class of all unit classes, of all that have just one member, as we should say but for the vicious circle. Of course, when the number 1 is defined as the class of all unit classes, unit classes must be defined so as not to assume that we know what is meant by one" (1919:181).

For his definition of successor, Russell will use for his "unit" a single entity or "term" as follows:

"It remains to define "successor". Given any number n let α be a class which has n members, and let x be a term which is not a member of α. Then the class consisting of α with x added on will have +1 members. Thus we have the following definition:
the successor of the number of terms in the class α is the number of terms in the class consisting of α together with x where x is not any term belonging to the class." (1919:23)

Russell's definition requires a new "term" which is "added into" the collections inside the bundles.

Step 7: Construct the successor of the null class.

Step 8: For every class of equinumerous classes, create its successor.
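Steps 6 through 8 can be sketched in the same Python idiom used above; the three-term domain is an assumption that keeps the illustration finite:

```python
def successor_bundle(bundle, domain):
    """Russell's successor: for each class alpha in the bundle and each term x not
    belonging to alpha, the class consisting of alpha together with x has one more
    member; all such classes together form the successor bundle."""
    return {alpha | {x} for alpha in bundle for x in domain - alpha}

domain = frozenset({"a", "b", "c"})
zero = frozenset({frozenset()})          # Step 7: start from the null class
one = successor_bundle(zero, domain)     # all unit classes drawn from the domain
two = successor_bundle(one, domain)      # all couples drawn from the domain
print(sorted(map(sorted, two)))          # [['a', 'b'], ['a', 'c'], ['b', 'c']]
```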

Step 9: Order the numbers: The process of creating a successor requires the relation " . . . is the successor of . . .", which may be denoted "S", between the various "numerals". "We must now consider the serial character of the natural numbers in the order 0, 1, 2, 3, . . . We ordinarily think of the numbers as in this order, and it is an essential part of the work of analysing our data to seek a definition of "order" or "series" in logical terms. . . . The order lies, not in the class of terms, but in a relation among the members of the class, in respect of which some appear as earlier and some as later." (1919:31)

Russell applies to the notion of "ordering relation" three criteria: First, he defines the notion of "asymmetry": given the relation such as S (" . . . is the successor of . . . ") between two terms x and y, if x S y then not y S x. Second, he defines the notion of "transitivity" for three numerals x, y and z: if x S y and y S z then x S z. Third, he defines the notion of "connected": "Given any two terms of the class which is to be ordered, there must be one which precedes and the other which follows. . . . A relation is connected when, given any two different terms of its field [both domain and converse domain of a relation e.g. husbands versus wives in the relation of married] the relation holds between the first and the second or between the second and the first (not excluding the possibility that both may happen, though both cannot happen if the relation is asymmetrical)" (1919:32).

He concludes: ". . . [natural] number m is said to be less than another number n when n possesses every hereditary property possessed by the successor of m. It is easy to see, and not difficult to prove, that the relation "less than", so defined, is asymmetrical, transitive, and connected, and has the [natural] numbers for its field [i.e. both domain and converse domain are the numbers]." (1919:35)
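The three criteria are mechanical enough to verify directly. A short sketch, with the ordinary integers 0 through 4 standing in for the numerals and the usual less-than as the ordering relation:

```python
field = range(5)        # stand-ins for the numerals 0, 1, 2, 3, 4

def R(x, y):
    """The ordering relation 'x is less than y'."""
    return x < y

asymmetric = all(not (R(x, y) and R(y, x)) for x in field for y in field)
transitive = all(R(x, z) or not (R(x, y) and R(y, z))
                 for x in field for y in field for z in field)
connected  = all(R(x, y) or R(y, x) for x in field for y in field if x != y)

print(asymmetric, transitive, connected)   # True True True
```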

Criticism

The presumption of an 'extralogical' notion of iteration: Kleene notes that "the logicistic thesis can be questioned finally on the ground that logic already presupposes mathematical ideas in its formulation. In the Intuitionistic view, an essential mathematical kernel is contained in the idea of iteration" (Kleene 1952:46)

Bernays 1930–1931 observes that this notion "two things" already presupposes something, even without the claim of existence of two things, and also without reference to a predicate, which applies to the two things; it means, simply, "a thing and one more thing. . . . With respect to this simple definition, the Number concept turns out to be an elementary structural concept . . . the claim of the logicists that mathematics is purely logical knowledge turns out to be blurred and misleading upon closer observation of theoretical logic. . . . [one can extend the definition of "logical"] however, through this definition what is epistemologically essential is concealed, and what is peculiar to mathematics is overlooked" (in Mancosu 1998:243).

Hilbert 1931:266-7, like Bernays, considers there is "something extra-logical" in mathematics: "Besides experience and thought, there is yet a third source of knowledge. Even if today we can no longer agree with Kant in the details, nevertheless the most general and fundamental idea of the Kantian epistemology retains its significance: to ascertain the intuitive a priori mode of thought, and thereby to investigate the condition of the possibility of all knowledge. In my opinion this is essentially what happens in my investigations of the principles of mathematics. The a priori is here nothing more and nothing less than a fundamental mode of thought, which I also call the finite mode of thought: something is already given to us in advance in our faculty of representation: certain extra-logical concrete objects that exist intuitively as an immediate experience before all thought. If logical inference is to be certain, then these objects must be completely surveyable in all their parts, and their presentation, their differences, their succeeding one another or their being arrayed next to one another is immediately and intuitively given to us, along with the objects, as something that neither can be reduced to anything else, nor needs such a reduction." (Hilbert 1931 in Mancosu 1998: 266, 267).

In brief, according to Hilbert and Bernays, the notion of "sequence" or "successor" is an a priori notion that lies outside symbolic logic.

Hilbert dismissed logicism as a "false path": "Some tried to define the numbers purely logically; others simply took the usual number-theoretic modes of inference to be self-evident. On both paths they encountered obstacles that proved to be insuperable." (Hilbert 1931 in Mancosu 1998:267). The incompleteness theorems arguably constitute a similar obstacle for Hilbertian finitism.

Mancosu states that Brouwer concluded that: "the classical laws or principles of logic are part of [the] perceived regularity [in the symbolic representation]; they are derived from the post factum record of mathematical constructions . . . Theoretical logic . . . [is] an empirical science and an application of mathematics" (Brouwer quoted by Mancosu 1998:9).

Gödel 1944: With respect to the technical aspects of Russellian logicism as it appears in Principia Mathematica (either edition), Gödel was disappointed:

"It is to be regretted that this first comprehensive and thorough-going presentation of a mathematical logic and the derivation of mathematics from it [is?] so greatly lacking in formal precision in the foundations (contained in *1–*21 of Principia) that it presents in this respect a considerable step backwards as compared with Frege. What is missing, above all, is a precise statement of the syntax of the formalism" (cf. footnote 1 in Gödel 1944 Collected Works 1990:120).

In particular he pointed out that "The matter is especially doubtful for the rule of substitution and of replacing defined symbols by their definiens" (Gödel 1944:120).

With respect to the philosophy that might underlie these foundations, Gödel considered Russell's "no-class theory" as embodying a "nominalistic kind of constructivism . . . which might better be called fictionalism" (cf. footnote 1 in Gödel 1944:119) — to be faulty. See more in "Gödel's criticism and suggestions" below.

Grattan-Guinness: A complicated theory of relations continued to strangle Russell's explanatory 1919 Introduction to Mathematical Philosophy and his 1927 second edition of Principia. Set theory, meanwhile, had moved on with its reduction of the relation to the ordered pair of sets. Grattan-Guinness observes that in the second edition of Principia Russell ignored this reduction, which had been achieved by his own student Norbert Wiener (1914). Perhaps because of "residual annoyance", Russell did not react at all. By 1914 Hausdorff would provide another, equivalent definition, and Kuratowski in 1921 would provide the one in use today.
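For concreteness, here are the successive reductions of the ordered pair mentioned above, in their standard formulations (supplied from the general literature, not from Grattan-Guinness):

```latex
% Wiener (1914), within the type theory of Principia, with \Lambda the null class:
(a, b) = \{\{\{a\}, \Lambda\}, \{\{b\}\}\}

% Hausdorff (1914), with 1 and 2 two distinct objects different from a and b:
(a, b) = \{\{a, 1\}, \{b, 2\}\}

% Kuratowski (1921), the definition in use today:
(a, b) = \{\{a\}, \{a, b\}\}
```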

The unit class, impredicativity, and the vicious circle principle

A benign impredicative definition: Suppose a librarian wants to index her collection into a single book (call it I for "index"). Her index will list all the books and their locations in the library. As it turns out, there are only three books, and these have titles Ά, β, and Γ. To form her index I, she goes out and buys a book of 200 blank pages and labels it "I". Now she has four books: I, Ά, β, and Γ. Her task is not difficult. When completed, the contents of her index I are 4 pages, each with a unique title and unique location (each entry abbreviated as Title.LTitle, where LTitle is the location of the book with that title):

I = { I.LI, Ά.LΆ, β.Lβ, Γ.LΓ }.

This sort of definition of I was deemed by Poincaré to be "impredicative". He seems to have considered that only predicative definitions can be allowed in mathematics:

"a definition is 'predicative' and logically admissible only if it excludes all objects that are dependent upon the notion defined, that is, that can in any way be determined by it".

By Poincaré's definition, the librarian's index book is "impredicative" because the definition of I is dependent upon the definition of the totality I, Ά, β, and Γ. As noted below, some commentators insist that impredicativity in commonsense versions is harmless, but, as the examples below show, there are versions which are not harmless. In response to these difficulties, Russell advocated a strong prohibition, his "vicious circle principle":

"No totality can contain members definable only in terms of this totality, or members involving or presupposing this totality" (vicious circle principle)" (Gödel 1944 appearing in Collected Works Vol. II 1990:125).

A pernicious impredicativity: α = NOT-α: To illustrate what a pernicious instance of impredicativity might be, consider the consequence of inputting argument α into the function f with output ω = 1 − α. This may be seen as the equivalent 'algebraic-logic' expression to the 'symbolic-logic' expression ω = NOT-α, with truth values 1 and 0. When input α = 0, output ω = 1; when input α = 1, output ω = 0.

To make the function "impredicative", identify the input with the output, yielding α = 1 − α.

Within the algebra of, say, rational numbers the equation is satisfied when α = 0.5. But within, for instance, a Boolean algebra, where only "truth values" 0 and 1 are permitted, then the equality cannot be satisfied.
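A few lines of Python make the contrast concrete (a sketch, not part of the original argument): the identification α = 1 − α has a fixed point among the rationals but none among the Boolean truth values:

```python
from fractions import Fraction

def f(alpha):
    """The 'algebraic-logic' counterpart of NOT."""
    return 1 - alpha

# Over the rationals the impredicative identification alpha = f(alpha) is satisfiable:
assert f(Fraction(1, 2)) == Fraction(1, 2)        # 1/2 is a fixed point

# Over the Boolean truth values {0, 1} it is not:
assert all(f(alpha) != alpha for alpha in (0, 1))
print("rationals: fixed point at 1/2; Booleans: no fixed point")
```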

Fatal impredicativity in the definition of the unit class: Some of the difficulties in the logicist programme may derive from the α = NOT-α paradox: Russell discovered that Frege's 1879 Begriffsschrift allowed a function to derive its input "functional" (the value of its variable) not only from an object (thing, term), but also from the function's own output.

As described above, both Frege's and Russell's constructions of the natural numbers begin with the formation of equinumerous classes of classes ("bundles"), followed by an assignment of a unique "numeral" to each bundle, and then by the placing of the bundles into an order via a relation S that is asymmetric: if x S y then not y S x. But Frege, unlike Russell, allowed the class of unit classes to be identified as a unit itself:

But, since the class with numeral 1 is a single object or unit in its own right, it too must be included in the class of unit classes. This inclusion results in an infinite regress of increasing type and increasing content.

Russell avoided this problem by declaring a class to be no more than a "fiction". By this he meant that a class could designate only those elements that satisfied its propositional function and nothing else. As a "fiction" a class cannot be considered to be a thing: an entity, a "term", a singularity, a "unit". It is an assemblage but is not in Russell's view "worthy of thing-hood":

"The class as many . . . is unobjectionable, but is many and not one. We may, if we choose, represent this by a single symbol: thus x ε u will mean " x is one of the u's." This must not be taken as a relation of two terms, x and u, because u as the numerical conjunction is not a single term . . . Thus a class of classes will be many many's; its constituents will each be only many, and cannot therefore in any sense, one might suppose, be single constituents.[etc]" (1903:516).

This supposes that "at the bottom" every single solitary "term" can be listed (specified by a "predicative" predicate) for any class, for any class of classes, for any class of classes of classes, etc., but it introduces a new problem — a hierarchy of "types" of classes.

A solution to impredicativity: a hierarchy of types

Classes as non-objects, as useful fictions: Gödel 1944:131 observes that "Russell adduces two reasons against the extensional view of classes, namely the existence of (1) the null class, which cannot very well be a collection, and (2) the unit classes, which would have to be identical with their single elements." He suggests that Russell should have regarded these as fictitious, but not drawn the further conclusion that all classes (such as the classes-of-classes that define the numbers 2, 3, etc.) are fictions.

But Russell did not do this. After a detailed analysis in Appendix A: The Logical and Arithmetical Doctrines of Frege in his 1903, Russell concludes:

"The logical doctrine which is thus forced upon us is this: The subject of a proposition may be not a single term, but essentially many terms; this is the case with all propositions asserting numbers other than 0 and 1" (1903:516).

In the following notice the wording "the class as many"—a class is an aggregate of those terms (things) that satisfy the propositional function, but a class is not a thing-in-itself:

"Thus the final conclusion is, that the correct theory of classes is even more extensional than that of Chapter VI; that the class as many is the only object always defined by a propositional function, and that this is adequate for formal purposes" (1903:518).

It is as if a rancher were to round up all his livestock (sheep, cows and horses) into three fictitious corrals (one for the sheep, one for the cows, and one for the horses) that are located in his fictitious ranch. What actually exist are the sheep, the cows and the horses (the extensions), but not the fictitious "concepts" corrals and ranch.

Ramified theory of types: function-orders and argument-types, predicative functions: When Russell proclaimed all classes are useful fictions he solved the problem of the "unit" class, but the overall problem did not go away; rather, it arrived in a new form: "It will now be necessary to distinguish (1) terms, (2) classes, (3) classes of classes, and so on ad infinitum; we shall have to hold that no member of one set is a member of any other set, and that x ε u requires that x should be of a set of a degree lower by one than the set to which u belongs. Thus x ε x will become a meaningless proposition; and in this way the contradiction is avoided" (1903:517).

This is Russell's "doctrine of types". To guarantee that impredicative expressions such as x ε x can be treated in his logic, Russell proposed, as a kind of working hypothesis, that all such impredicative definitions have predicative definitions. This supposition requires the notions of function-"orders" and argument-"types". First, functions (and their classes-as-extensions, i.e. "matrices") are to be classified by their "order", where functions of individuals are of order 1, functions of functions (classes of classes) are of order 2, and so forth. Next, he defines the "type" of a function's arguments (the function's "inputs") to be their "range of significance", i.e. what are those inputs α (individuals? classes? classes-of-classes? etc.) that, when plugged into f(x), yield a meaningful output ω. Note that this means that a "type" can be of mixed order, as the following example shows:

"Joe DiMaggio and the Yankees won the 1947 World Series".

This sentence can be decomposed into two clauses: "x won the 1947 World Series" + "y won the 1947 World Series". The first clause takes for x an individual "Joe DiMaggio" as its input, the second takes for y an aggregate "Yankees" as its input. Thus the composite sentence has a (mixed) type of 2, mixed as to order (1 and 2).

By "predicative", Russell meant that the function must be of an order higher than the "type" of its variable(s). Thus a function (of order 2) that creates a class of classes can only entertain arguments for its variable(s) that are classes (type 1) and individuals (type 0), as these are lower types. Type 3 can only entertain types 2, 1 or 0, and so forth. But these types can be mixed (for example, for this sentence to be (sort of) true: " z won the 1947 World Series " could accept the individual (type 0) "Joe DiMaggio" and/or the names of his other teammates, and it could accept the class (type 1) of individual players "The Yankees".

The axiom of reducibility: The axiom of reducibility is the hypothesis that any function of any order can be reduced to (or replaced by) an equivalent predicative function of the appropriate order. A careful reading of the first edition indicates that an nth order predicative function need not be expressed "all the way down" as a huge "matrix" or aggregate of individual atomic propositions. "For in practice only the relative types of variables are relevant; thus the lowest type occurring in a given context may be called that of individuals" (p. 161). But the axiom of reducibility proposes that in theory a reduction "all the way down" is possible.

Russell 1927 abandons the axiom of reducibility: By the 2nd edition of PM of 1927, though, Russell had given up on the axiom of reducibility and concluded he would indeed force any order of function "all the way down" to its elementary propositions, linked together with logical operators:

"All propositions, of whatever order, are derived from a matrix composed of elementary propositions combined by means of the stroke" (PM 1927 Appendix A, p. 385)

(The "stroke" is Sheffer's stroke — adopted for the 2nd edition of PM — a single two argument logical function from which all other logical functions may be defined.)

The net result, though, was a collapse of his theory. Russell arrived at this disheartening conclusion: that "the theory of ordinals and cardinals survives . . . but irrationals, and real numbers generally, can no longer be adequately dealt with. . . . Perhaps some further axiom, less objectionable than the axiom of reducibility, might give these results, but we have not succeeded in finding such an axiom" (PM 1927:xiv).

Gödel 1944 agrees that Russell's logicist project was stymied; he seems to doubt that even the integers survived:

"[In the second edition] The axiom of reducibility is dropped, and it is stated explicitly that all primitive predicates belong to the lowest type and that the only purpose of variables (and evidently also of constants) of higher orders and types is to make it possible to assert more complicated truth-functions of atomic propositions" (Gödel 1944 in Collected Works:134).

Gödel asserts, however, that this procedure seems to presuppose arithmetic in some form or other (p. 134). He deduces that "one obtains integers of different orders" (pp. 134–135); the proof in Russell 1927 PM Appendix B that "the integers of any order higher than 5 are the same as those of order 5" is "not conclusive", and "the question whether (or to what extent) the theory of integers can be obtained on the basis of the ramified hierarchy [classes plus types] must be considered as unsolved at the present time". Gödel concluded that it wouldn't matter anyway because propositional functions of order n (any n) must be described by finite combinations of symbols (all quotes and content derived from page 135).

Gödel's criticism and suggestions

Gödel, in his 1944 work, identifies the place where he considers Russell's logicism to fail and offers suggestions to rectify the problems. He submits the "vicious circle principle" to re-examination, splitting it into three parts: "definable only in terms of", "involving" and "presupposing". It is the first part that "makes impredicative definitions impossible and thereby destroys the derivation of mathematics from logic, effected by Dedekind and Frege, and a good deal of mathematics itself". Since, he argues, mathematics seems to rely on its inherent impredicativities (e.g. "real numbers defined by reference to all real numbers"), he concludes that what he has offered is "a proof that the vicious circle principle is false [rather] than that classical mathematics is false" (all quotes Gödel 1944:127).

Russell's no-class theory is the root of the problem: Gödel believes that impredicativity is not "absurd", as it appears throughout mathematics. Russell's problem derives from his "constructivistic (or nominalistic) standpoint toward the objects of logic and mathematics, in particular toward propositions, classes, and notions . . . a notion being a symbol . . . so that a separate object denoted by the symbol appears as a mere fiction" (p. 128).

Indeed, Russell's "no class" theory, Gödel concludes:

"is of great interest as one of the few examples, carried out in detail, of the tendency to eliminate assumptions about the existence of objects outside the "data" and to replace them by constructions on the basis of these data33. The "data" are to understand in a relative sense here; i.e. in our case as logic without the assumption of the existence of classes and concepts]. The result has been in this case essentially negative; i.e. the classes and concepts introduced in this way do not have all the properties required from their use in mathematics. . . . All this is only a verification of the view defended above that logic and mathematics (just as physics) are built up on axioms with a real content which cannot be explained away" (p. 132)

He concludes his essay with the following suggestions and observations:

"One should take a more conservative course, such as would consist in trying to make the meaning of terms "class" and "concept" clearer, and to set up a consistent theory of classes and concepts as objectively existing entities. This is the course which the actual development of mathematical logic has been taking and which Russell himself has been forced to enter upon in the more constructive parts of his work. Major among the attempts in this direction . . . are the simple theory of types . . . and axiomatic set theory, both of which have been successful at least to this extent, that they permit the derivation of modern mathematics and at the same time avoid all known paradoxes . . . ¶ It seems reasonable to suspect that it is this incomplete understanding of the foundations which is responsible for the fact that mathematical logic has up to now remained so far behind the high expectations of Peano and others . . .." (p. 140)

Neo-logicism

Neo-logicism describes a range of views considered by their proponents to be successors of the original logicist program. More narrowly, neo-logicism may be seen as the attempt to salvage some or all elements of Frege's programme through the use of a modified version of Frege's system in the Grundgesetze (which may be seen as a kind of second-order logic).

For instance, one might replace Basic Law V (analogous to the axiom schema of unrestricted comprehension in naive set theory) with some 'safer' axiom so as to prevent the derivation of the known paradoxes. The most cited candidate to replace BLV is Hume's principle, the contextual definition of '#' given by '#F = #G if and only if there is a bijection between F and G'. This kind of neo-logicism is often referred to as neo-Fregeanism. Its proponents include Crispin Wright and Bob Hale, sometimes called the Scottish School; their position, also known as abstractionist Platonism, espouses a form of epistemic foundationalism.
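Spelled out in second-order notation (a standard modern rendering, not a quotation from the neo-Fregean literature), Hume's principle reads:

    ∀F ∀G ( #F = #G ↔ F ≈ G )

where "F ≈ G" abbreviates the purely second-order claim that some relation R maps the Fs one-to-one onto the Gs:

    ∃R [ ∀x (Fx → ∃!y (Gy ∧ Rxy)) ∧ ∀y (Gy → ∃!x (Fx ∧ Rxy)) ]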

Other major proponents of neo-logicism include Bernard Linsky and Edward N. Zalta, sometimes called the Stanford–Edmonton School; their position has been termed abstract structuralism or modal neo-logicism, and it espouses a form of axiomatic metaphysics. Modal neo-logicism derives the Peano axioms within second-order modal object theory.

Another quasi-neo-logicist approach has been suggested by M. Randall Holmes. In this kind of amendment to the Grundgesetze, BLV remains intact, save for a restriction to stratifiable formulae in the manner of Quine's NF and related systems. Essentially all of the Grundgesetze then 'goes through'. The resulting system has the same consistency strength as Jensen's NFU + Rosser's Axiom of Counting.
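To sketch what stratification demands (a standard illustration in the style of Quine's NF, not taken from Holmes's own presentation): every variable in a formula must be assignable a natural-number type such that

    x ∈ y   requires   type(y) = type(x) + 1
    x = y   requires   type(y) = type(x)

Thus the condition "x = x" is stratifiable (take type(x) = 0) and yields a universal set, whereas the Russell condition "x ∉ x" is not stratifiable, since it would require type(x) = type(x) + 1.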

Natural product

From Wikipedia, the free encyclopedia
 
The anticancer drug paclitaxel is a natural product derived from the yew tree.

A natural product is a chemical compound or substance produced by a living organism—that is, found in nature. In the broadest sense, natural products include any substance produced by life. Natural products can also be prepared by chemical synthesis (both semisynthesis and total synthesis) and have played a central role in the development of the field of organic chemistry by providing challenging synthetic targets. The term natural product has also been extended for commercial purposes to refer to cosmetics, dietary supplements, and foods produced from natural sources without added artificial ingredients.

Within the field of organic chemistry, the definition of natural products is usually restricted to organic compounds isolated from natural sources that are produced by the pathways of primary or secondary metabolism. Within the field of medicinal chemistry, the definition is often further restricted to secondary metabolites. Secondary metabolites are not essential for survival, but nevertheless provide organisms that produce them an evolutionary advantage. Many secondary metabolites are cytotoxic and have been selected and optimized through evolution for use as "chemical warfare" agents against prey, predators, and competing organisms.

Natural sources may lead to basic research on potential bioactive components for commercial development as lead compounds in drug discovery. Although natural products have inspired numerous drugs, drug development from natural sources has received declining attention from pharmaceutical companies in the 21st century, partly due to unreliable access and supply, intellectual property, cost, and profit concerns, seasonal or environmental variability of composition, and loss of sources due to rising extinction rates.

Classes

The broadest definition of natural product is anything that is produced by life, and includes the likes of biotic materials (e.g. wood, silk), bio-based materials (e.g. bioplastics, cornstarch), bodily fluids (e.g. milk, plant exudates), and other natural materials (e.g. soil, coal).

Natural products may be classified according to their biological function, biosynthetic pathway, or source. Depending on the source, estimates of the number of known natural product molecules range between 300,000 and 400,000.

Function

Following Albrecht Kossel's original proposal in 1891, natural products are often divided into two major classes, the primary and secondary metabolites. Primary metabolites have an intrinsic function that is essential to the survival of the organism that produces them. Secondary metabolites in contrast have an extrinsic function that mainly affects other organisms. Secondary metabolites are not essential to survival but do increase the competitiveness of the organism within its environment. Because of their ability to modulate biochemical and signal transduction pathways, some secondary metabolites have useful medicinal properties.

Natural products, especially within the field of organic chemistry, are often defined as primary and secondary metabolites. A more restrictive definition limiting natural products to secondary metabolites is commonly used within the fields of medicinal chemistry and pharmacognosy.

Primary metabolites

Molecular building blocks of life

Primary metabolites as defined by Kossel are components of basic metabolic pathways that are required for life. They are associated with essential cellular functions such as nutrient assimilation, energy production, and growth/development. They have a wide species distribution that spans many phyla and frequently more than one kingdom. Primary metabolites include carbohydrates, lipids, amino acids, and nucleic acids, which are the basic building blocks of life.

Primary metabolites that are involved with energy production include respiratory and photosynthetic enzymes. Enzymes in turn are composed of amino acids and often non-peptidic cofactors that are essential for enzyme function. The basic structures of cells and of organisms are also composed of primary metabolites. These include cell membranes (e.g. phospholipids), cell walls (e.g. peptidoglycan, chitin), and cytoskeletons (proteins).

Primary metabolite enzymatic cofactors include members of the vitamin B family. Vitamin B1 as thiamine diphosphate is a coenzyme for pyruvate dehydrogenase, 2-oxoglutarate dehydrogenase, and transketolase, which are all involved in carbohydrate metabolism. Vitamin B2 (riboflavin) is a constituent of FMN and FAD, which are necessary for many redox reactions. Vitamin B3 (nicotinic acid or niacin), synthesized from tryptophan, is a component of the coenzymes NAD+ and NADP+, which in turn are required for electron transport in the Krebs cycle, oxidative phosphorylation, and many other redox reactions. Vitamin B5 (pantothenic acid) is a constituent of coenzyme A, a basic component of carbohydrate and amino acid metabolism as well as the biosynthesis of fatty acids and polyketides. Vitamin B6 (pyridoxol, pyridoxal, and pyridoxamine) as pyridoxal 5′-phosphate is a cofactor for many enzymes, especially the transaminases involved in amino acid metabolism. Vitamin B12 (the cobalamins) contains a corrin ring similar in structure to porphyrin and is an essential coenzyme for the catabolism of fatty acids as well as for the biosynthesis of methionine.

DNA and RNA, which store and transmit genetic information, are composed of nucleic acid primary metabolites.

First messengers are signaling molecules that control metabolism or cellular differentiation. These signaling molecules include hormones and growth factors, which in turn are composed of peptides, biogenic amines, steroid hormones, auxins, gibberellins, etc. These first messengers interact with cellular receptors, which are composed of proteins. Cellular receptors in turn activate second messengers, which are used to relay the extracellular message to intracellular targets. These signaling molecules include the primary metabolites cyclic nucleotides, diacylglycerol, etc.

Secondary metabolites

Representative examples of each of the major classes of secondary metabolites

Secondary metabolites, in contrast to primary metabolites, are dispensable and not absolutely required for survival. Furthermore, secondary metabolites typically have a narrow species distribution.

Secondary metabolites have a broad range of functions. These include pheromones that act as social signaling molecules with other individuals of the same species, communication molecules that attract and activate symbiotic organisms, agents that solubilize and transport nutrients (siderophores etc.), and competitive weapons (repellents, venoms, toxins etc.) that are used against competitors, prey, and predators. For many other secondary metabolites, the function is unknown. One hypothesis is that they confer a competitive advantage to the organism that produces them. An alternative view is that, in analogy to the immune system, these secondary metabolites have no specific function individually; what matters is having the machinery in place to produce diverse chemical structures, a few of which are therefore produced and selected for.

General structural classes of secondary metabolites include alkaloids, phenylpropanoids, polyketides, and terpenoids.

Biosynthesis

Biosynthesis of primary and secondary metabolites.

The biosynthetic pathways leading to the major classes of natural products are described below.

Carbohydrates

Carbohydrates are an essential energy source for most life forms. In addition, polysaccharides formed from simpler carbohydrates are important structural components of many organisms, such as the cell walls of bacteria and plants.

Carbohydrates are the products of plant photosynthesis and animal gluconeogenesis. Photosynthesis initially produces 3-phosphoglyceraldehyde, a sugar containing three carbon atoms (a triose). This triose in turn may be converted into glucose (a six-carbon sugar) or a variety of pentoses (five-carbon sugars) through the Calvin cycle. In animals, the three-carbon precursors lactate or glycerol can be converted into pyruvate, which in turn can be converted into carbohydrates in the liver.
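The photosynthetic half of this paragraph can be summarized by the familiar overall stoichiometry (a textbook equation that glosses over the many enzymatic steps, including the triose intermediates just mentioned):

    6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2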

Fatty acids and polyketides

Through the process of glycolysis sugars are broken down into acetyl-CoA. In an ATP-dependent, enzymatically catalyzed reaction, acetyl-CoA is carboxylated to form malonyl-CoA. Acetyl-CoA and malonyl-CoA undergo a Claisen condensation with loss of carbon dioxide to form acetoacetyl-CoA. Additional condensation reactions produce successively higher molecular weight poly-β-keto chains, which are then converted into other polyketides. The polyketide class of natural products has diverse structures and functions and includes prostaglandins and macrolide antibiotics.
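Schematically, the condensation step described above can be written as the following simplified equation (–S–CoA denoting the thioester linkage to coenzyme A):

    CH3–CO–S–CoA + HOOC–CH2–CO–S–CoA → CH3–CO–CH2–CO–S–CoA + CO2 + CoA–SH
    (acetyl-CoA)     (malonyl-CoA)          (acetoacetyl-CoA)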

One molecule of acetyl-CoA (the "starter unit") and several molecules of malonyl-CoA (the "extender units") are condensed by fatty acid synthase to produce fatty acids. Fatty acids are essential components of the lipid bilayers that form cell membranes, as well as of the fat energy stores in animals.

Sources

Natural products may be extracted from the cells, tissues, and secretions of microorganisms, plants and animals. A crude (unfractionated) extract from any one of these sources will contain a range of structurally diverse and often novel chemical compounds. Chemical diversity in nature is based on biological diversity, so researchers collect samples from around the world to analyze and evaluate in drug discovery screens or bioassays. This effort to search for biologically active natural products is known as bioprospecting.

Pharmacognosy provides the tools to detect, isolate and identify bioactive natural products that could be developed for medicinal use. When an "active principle" is isolated from a traditional medicine or other biological material, this is known as a "hit". Subsequent scientific and legal work is then performed to validate the hit (e.g. elucidation of the mechanism of action, confirmation that there is no intellectual property conflict). This is followed by the hit-to-lead stage of drug discovery, in which derivatives of the active compound are produced in an attempt to improve its potency and safety. In this and related ways, modern medicines can be developed directly from natural sources.

Although traditional medicines and other biological material are considered an excellent source of novel compounds, the extraction and isolation of these compounds can be a slow, expensive and inefficient process. For large scale manufacture therefore, attempts may be made to produce the new compound by total synthesis or semisynthesis. Because natural products are generally secondary metabolites with complex chemical structures, their total/semisynthesis is not always commercially viable. In these cases, efforts can be made to design simpler analogues with comparable potency and safety that are amenable to total/semisynthesis.

Prokaryotic

Bacteria

Botulinum toxin types A and B (Botox, Dysport, Xeomin, MyoBloc), used both medicinally and cosmetically, are natural products from the bacterium Clostridium botulinum.

The serendipitous discovery and subsequent clinical success of penicillin prompted a large-scale search for other environmental microorganisms that might produce anti-infective natural products. Soil and water samples were collected from all over the world, leading to the discovery of streptomycin (derived from Streptomyces griseus), and the realization that bacteria, not just fungi, represent an important source of pharmacologically active natural products. This, in turn, led to the development of an impressive arsenal of antibacterial and antifungal agents including amphotericin B, chloramphenicol, daptomycin and tetracycline (from Streptomyces spp.), the polymyxins (from Paenibacillus polymyxa), and the rifamycins (from Amycolatopsis rifamycinica).

Although most of the drugs derived from bacteria are employed as anti-infectives, some have found use in other fields of medicine. Botulinum toxin (from Clostridium botulinum) and bleomycin (from Streptomyces verticillus) are two examples. Botulinum, the neurotoxin responsible for botulism, can be injected into specific muscles (such as those controlling the eyelid) to prevent muscle spasm. Also, the glycopeptide bleomycin is used for the treatment of several cancers including Hodgkin's lymphoma, head and neck cancer, and testicular cancer. Newer trends in the field include the metabolic profiling and isolation of natural products from novel bacterial species present in underexplored environments. Examples include symbionts or endophytes from tropical environments, subterranean bacteria found deep underground via mining/drilling, and marine bacteria.

Archaea

Because many Archaea have adapted to life in extreme environments such as polar regions, hot springs, acidic springs, alkaline springs, salt lakes, and the high pressure of deep ocean water, they possess enzymes that are functional under quite unusual conditions. These enzymes are of potential use in the food, chemical, and pharmaceutical industries, where biotechnological processes frequently involve high temperatures, extremes of pH, high salt concentrations, and/or high pressure. Examples of enzymes identified to date include amylases, pullulanases, cyclodextrin glycosyltransferases, cellulases, xylanases, chitinases, proteases, alcohol dehydrogenase, and esterases. Archaea represent a source of novel chemical compounds also, for example isoprenyl glycerol ethers 1 and 2 from Thermococcus S557 and Methanocaldococcus jannaschii, respectively.

Eukaryotic

Fungi

The antibiotic penicillin is a natural product derived from the fungus Penicillium rubens.

Several anti-infective medications have been derived from fungi including penicillin and the cephalosporins (antibacterial drugs from Penicillium rubens and Cephalosporium acremonium, respectively) and griseofulvin (an antifungal drug from Penicillium griseofulvum). Other medicinally useful fungal metabolites include lovastatin (from Pleurotus ostreatus), which became a lead for a series of drugs that lower cholesterol levels, cyclosporin (from Tolypocladium inflatum), which is used to suppress the immune response after organ transplant operations, and ergometrine (from Claviceps spp.), which acts as a vasoconstrictor, and is used to prevent bleeding after childbirth. Asperlicin (from Aspergillus alliaceus) is another example. Asperlicin is a novel antagonist of cholecystokinin, a neurotransmitter thought to be involved in panic attacks, and could potentially be used to treat anxiety.

Plants

The opioid analgesic drug morphine is a natural product derived from the plant Papaver somniferum.

Plants are a major source of complex and highly structurally diverse chemical compounds (phytochemicals), with this structural diversity attributed in part to the natural selection of organisms producing potent compounds to deter herbivory (feeding deterrents). Major classes of phytochemical include phenols, polyphenols, tannins, terpenes, and alkaloids. Though the number of plants that have been extensively studied is relatively small, many pharmacologically active natural products have already been identified. Clinically useful examples include the anticancer agents paclitaxel and omacetaxine mepesuccinate (from Taxus brevifolia and Cephalotaxus harringtonii, respectively), the antimalarial agent artemisinin (from Artemisia annua), and the acetylcholinesterase inhibitor galantamine (from Galanthus spp.), used to treat Alzheimer's disease. Other plant-derived drugs, used medicinally and/or recreationally, include morphine, cocaine, quinine, tubocurarine, muscarine, and nicotine.

Animals

The analgesic drug ω-conotoxin (ziconotide) is a natural product derived from the sea snail Conus magus.

Animals also represent a source of bioactive natural products. In particular, venomous animals such as snakes, spiders, scorpions, caterpillars, bees, wasps, centipedes, ants, toads, and frogs have attracted much attention. This is because venom constituents (peptides, enzymes, nucleotides, lipids, biogenic amines etc.) often have very specific interactions with a macromolecular target in the body (e.g. α-bungarotoxin from kraits). As with plant feeding deterrents, this biological activity is attributed to natural selection, organisms capable of killing or paralyzing their prey and/or defending themselves against predators being more likely to survive and reproduce.

Because of these specific chemical-target interactions, venom constituents have proved important tools for studying receptors, ion channels, and enzymes. In some cases, they have also served as leads in the development of novel drugs. For example, teprotide, a peptide isolated from the venom of the Brazilian pit viper Bothrops jararaca, was a lead in the development of the antihypertensive agents cilazapril and captopril. Also, echistatin, a disintegrin from the venom of the saw-scaled viper Echis carinatus, was a lead in the development of the antiplatelet drug tirofiban.

In addition to the terrestrial animals and amphibians described above, many marine animals have been examined for pharmacologically active natural products, with corals, sponges, tunicates, sea snails, and bryozoans yielding chemicals with interesting analgesic, antiviral, and anticancer activities. Two examples developed for clinical use include ω-conotoxin (from the marine snail Conus magus) and ecteinascidin 743 (from the tunicate Ecteinascidia turbinata). The former, ω-conotoxin, is used to relieve severe and chronic pain, while the latter, ecteinascidin 743 is used to treat metastatic soft tissue sarcoma. Other natural products derived from marine animals and under investigation as possible therapies include the antitumour agents discodermolide (from the sponge Discodermia dissoluta), eleutherobin (from the coral Erythropodium caribaeorum), and the bryostatins (from the bryozoan Bugula neritina).

Medical uses

Natural products sometimes have pharmacological activity that can be of therapeutic benefit in treating diseases. Moreover, synthetic analogs of natural products with improved potency and safety can be prepared, and therefore natural products are often used as starting points for drug discovery. Natural product constituents have inspired numerous drug discovery efforts whose products eventually gained approval as new drugs.

Traditional medicine

Representative examples of drugs based on natural products

Indigenous peoples and ancient civilizations experimented with various plant and animal parts to determine what effect they might have. Through trial and error in isolated cases, traditional healers or shamans found some sources to provide therapeutic effect, representing knowledge of a crude drug that was passed down through generations in such practices as traditional Chinese medicine and Ayurveda. Extracts of some natural products led to modern discovery of their active constituents and eventually to the development of new drugs.

Modern natural product-derived drugs

A large number of currently prescribed drugs have been either directly derived from or inspired by natural products.

Some of the oldest natural product based drugs are analgesics. The bark of the willow tree has been known since antiquity to have pain relieving properties, due to the presence of the natural product salicin, which in turn may be hydrolyzed into salicylic acid. A synthetic derivative, acetylsalicylic acid, better known as aspirin, is a widely used pain reliever. Its mechanism of action is inhibition of the cyclooxygenase (COX) enzyme. Another notable example is opium, which is extracted from the latex of Papaver somniferum (a flowering poppy plant). The most potent narcotic component of opium is the alkaloid morphine, which acts as an opioid receptor agonist. A more recent example is the analgesic ziconotide, an N-type calcium channel blocker based on a cyclic peptide toxin (ω-conotoxin MVIIA) from the cone snail species Conus magus.
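The last synthetic step from salicylic acid to aspirin, not spelled out above, is the standard textbook acetylation:

    salicylic acid + acetic anhydride → acetylsalicylic acid (aspirin) + acetic acid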

A significant number of anti-infectives are based on natural products. The first antibiotic to be discovered, penicillin, was isolated from the mold Penicillium. Penicillin and related beta-lactams work by inhibiting the DD-transpeptidase enzyme that bacteria require to cross-link peptidoglycan to form the cell wall.

Several natural product drugs target tubulin, a component of the cytoskeleton. These include the tubulin polymerization inhibitor colchicine, isolated from Colchicum autumnale (the autumn crocus), which is used to treat gout. Colchicine is biosynthesized from the amino acids phenylalanine and tryptophan. Paclitaxel, in contrast, is a tubulin polymerization stabilizer and is used as a chemotherapeutic drug. Paclitaxel is based on the terpenoid natural product taxol, which is isolated from Taxus brevifolia (the Pacific yew tree).

A class of drugs widely used to lower cholesterol are the HMG-CoA reductase inhibitors, for example atorvastatin. These were developed from mevastatin, a polyketide produced by the fungus Penicillium citrinum. Finally, a number of natural product drugs are used to treat hypertension and congestive heart failure. These include the angiotensin-converting enzyme inhibitor captopril. Captopril is based on the peptidic bradykinin-potentiating factor isolated from the venom of the Brazilian pit viper (Bothrops jararaca).

Limiting and enabling factors

Numerous challenges limit the use of natural products for drug discovery, resulting in a 21st-century preference among pharmaceutical companies for dedicating discovery efforts to high-throughput screening of pure synthetic compounds, which offer shorter timelines to refinement. Natural product sources are often unreliable to access and supply, have a high probability of duplication, inherently create intellectual property concerns about patent protection, vary in composition due to sourcing season or environment, and are susceptible to rising extinction rates.

The biological resource for drug discovery from natural products remains abundant, with only small percentages of microorganisms, plant species, and insects having been assessed for bioactivity. Enormous numbers of bacteria and marine microorganisms remain unexamined. As of 2008, metagenomics had been proposed as a way to examine genes and their function in soil microbes, but most pharmaceutical firms have not exploited this resource fully, choosing instead to pursue "diversity-oriented synthesis" from libraries of known drugs or natural sources for lead compounds with higher potential for bioactivity.

Isolation and purification

Penicillin G, the first fungal antibiotic of its class, was first studied by the Scottish microbiologist Alexander Fleming in the late 1920s and was made practical as a therapeutic via natural product isolation in the late 1930s by Ernst Boris Chain, Howard Florey, and others; Fleming, Chain, and Florey shared the 1945 Nobel Prize in Medicine for this work. Fleming recognized the antibacterial activity and clinical potential of "pen G", but was unable to purify or stabilize it. Developments in chromatographic separations and freeze drying helped move progress forward in the production of commercial quantities of penicillin and other natural products.

All natural products begin as mixtures with other compounds from the natural source, often very complex mixtures, from which the product of interest must be isolated and purified. The isolation of a natural product refers, depending on context, either to the isolation of sufficient quantities of pure chemical matter for chemical structure elucidation, derivatization/degradation chemistry, biological testing, and other research needs (generally milligrams to grams, but historically often more), or to the isolation of "analytical quantities" of the substance of interest, where the focus is on identification and quantitation of the substance (e.g. in biological tissue or fluid), and where the quantity isolated depends on the analytical method applied (but is generally sub-microgram in scale). The ease with which the active agent can be isolated and purified depends on the structure, stability, and quantity of the natural product. The methods of isolation applied toward achieving these two distinct scales of product are likewise distinct, but generally involve extraction, precipitation, adsorption, chromatography, and sometimes crystallization. In both cases, the isolated substance is purified to chemical homogeneity; that is, combined separation and analytical methods such as LC-MS are chosen to be "orthogonal" (achieving their separations based on distinct modes of interaction between substance and isolating matrix), with the goal being repeated detection of only a single species present in the putative pure sample. Early isolation is almost inevitably followed by structure determination, especially if an important pharmacologic activity is associated with the purified natural product.

Structure determination refers to the methods applied to determine the chemical structure of an isolated, pure natural product, a process that involves an array of chemical and physical methods that have changed markedly over the history of natural products research. In the earliest days, these focused on chemical transformation of unknown substances into known substances, measurement of physical properties such as melting point and boiling point, and related methods for determining molecular weight. In the modern era, methods focus on mass spectrometry and nuclear magnetic resonance, often multidimensional, and, when feasible, small-molecule crystallography. For instance, the chemical structure of penicillin was determined by Dorothy Crowfoot Hodgkin in 1945, work for which she later received a Nobel Prize in Chemistry (1964).

Synthesis

Many natural products have very complex structures. The perceived complexity of a natural product is a qualitative matter, consisting of consideration of its molecular mass, the particular arrangements of substructures (functional groups, rings etc.) with respect to one another, the number and density of those functional groups, the stability of those groups and of the molecule as a whole, the number and type of stereochemical elements, the physical properties of the molecule and its intermediates (which bear on the ease of its handling and purification), all of these viewed in the context of the novelty of the structure and whether preceding related synthetic efforts have been successful (see below for details). Some natural products, especially those less complex, are easily and cost-effectively prepared via complete chemical synthesis from readily available, simpler chemical ingredients, a process referred to as total synthesis (especially when the process involves no steps mediated by biological agents). Not all natural products are amenable to total synthesis, cost-effective or otherwise. In particular, those most complex often are not. Many are accessible, but the required routes are simply too expensive to allow synthesis on any practical or industrial scale. However, to be available for further study, all natural products must yield to isolation and purification. This may suffice if isolation provides appropriate quantities of the natural product for the intended purpose (e.g. as a drug to alleviate disease). Drugs such as penicillin, morphine, and paclitaxel proved to be affordably acquired at needed commercial scales solely via isolation procedures (without any significant synthetic chemistry contributing). However, in other cases, needed agents are not available without synthetic chemistry manipulations.

Semisynthesis

The process of isolating a natural product from its source can be costly in terms of committed time and material expense, and it may challenge the availability of the relied-upon natural resource (or have ecological consequences for the resource). For instance, it has been estimated that the bark of an entire yew tree (Taxus brevifolia) would have to be harvested to extract enough paclitaxel for just a single dose of therapy. Furthermore, the number of structural analogues obtainable for structure–activity relationship (SAR) analysis simply via harvest (if more than one structural analogue is even present) is limited by the biology at work in the organism, and so lies outside the experimentalist's control.

In such cases where the ultimate target is harder to come by, or limits SAR, it is sometimes possible to source a middle-to-late stage biosynthetic precursor or analogue from which the ultimate target can be prepared. This is termed semisynthesis or partial synthesis. With this approach, the related biosynthetic intermediate is harvested and then converted to the final product by conventional procedures of chemical synthesis.

This strategy can have two advantages. Firstly, the intermediate may be more easily extracted, and in higher yield, than the ultimate desired product. An example of this is paclitaxel, which can be manufactured by extracting 10-deacetylbaccatin III from T. brevifolia needles, then carrying out a four-step synthesis. Secondly, the route designed between semisynthetic starting material and ultimate product may permit analogues of the final product to be synthesized. The newer generation semisynthetic penicillins are an illustration of the benefit of this approach.

Total synthesis

Structural representation of cobalamin, an early natural product isolated and structurally characterized. The variable R group can be a methyl or 5'-adenosyl group, or a cyanide or hydroxide anion. The "proof" by synthesis of vitamin B12 was accomplished in 1972 by the groups of R.B. Woodward and A. Eschenmoser.

In general, the total synthesis of natural products is a non-commercial research activity, aimed at deeper understanding of the synthesis of particular natural product frameworks, and the development of fundamental new synthetic methods. Even so, it is of tremendous commercial and societal importance. By providing challenging synthetic targets, for example, it has played a central role in the development of the field of organic chemistry. Prior to the development of analytical chemistry methods in the twentieth century, the structures of natural products were affirmed by total synthesis (so-called "structure proof by synthesis"). Early efforts in natural products synthesis targeted complex substances such as cobalamin (vitamin B12), an essential cofactor in cellular metabolism.

Symmetry

Examination of dimerized and trimerized natural products has shown that an element of bilateral symmetry is often present. Bilateral symmetry refers to a molecule or system that contains a C2, Cs, or C2v point group identity. C2 symmetry tends to be much more abundant than other types of bilateral symmetry. This finding sheds light on how these compounds might be mechanistically created, as well as providing insight into the thermodynamic properties that make these compounds more favorable. Density functional theory (DFT), Hartree–Fock, and semiempirical calculations also show some favorability for dimerization in natural products because more energy is released per bond than in the equivalent trimer or tetramer. This is proposed to be due to steric hindrance at the core of the molecule, as most natural products dimerize and trimerize in a head-to-head fashion rather than head-to-tail.

Research and teaching

Research and teaching activities related to natural products fall into a number of diverse academic areas, including organic chemistry, medicinal chemistry, pharmacognosy, ethnobotany, traditional medicine, and ethnopharmacology. Other biological areas include chemical biology, chemical ecology, chemogenomics, systems biology, molecular modeling, chemometrics, and chemoinformatics.

Chemistry

Natural products chemistry is a distinct area of chemical research which was important in the development and history of chemistry. Isolating and identifying natural products has been important to source substances for early preclinical drug discovery research, to understand traditional medicine and ethnopharmacology, and to find pharmacologically useful areas of chemical space. To achieve this, many technological advances have been made, such as the evolution of technology associated with chemical separations, and the development of modern methods in chemical structure determination such as NMR. In addition, natural products are prepared by organic synthesis, to provide confirmation of their structure, or to give access to larger quantities of natural products of interest. In this process, the structures of some natural products have been revised, and the challenge of synthesising natural products has led to the development of new synthetic methodology, synthetic strategy, and tactics. In this regard, natural products play a central role in the training of new synthetic organic chemists, and are a principal motivation in the development of new variants of old chemical reactions (e.g., the Evans aldol reaction), as well as the discovery of completely new chemical reactions (e.g., the Woodward cis-hydroxylation, Sharpless epoxidation, and Suzuki–Miyaura cross-coupling reactions).

Biochemistry

The biosynthesis of natural products is of considerable interest. Knowledge of biosynthesis enables improved routes to valuable natural products. This knowledge can then be used to access larger quantities of natural products with interesting biological activity, and allows medicinally useful natural products, such as alkaloids, to be produced more efficiently and economically.

History

Antoine Lavoisier (1743-1794)
 
Friedrich Wöhler (1800–1882)
 
Hermann Emil Fischer (1852–1919)
 
Richard Willstätter (1872–1942)
 
Robert Robinson (1886–1975)

Foundations of organic and natural product chemistry

The concept of natural products dates back to the early 19th century, when the foundations of organic chemistry were laid. Organic chemistry was regarded at that time as the chemistry of substances that plants and animals are composed of. It was a relatively complex form of chemistry and stood in stark contrast to inorganic chemistry, the principles of which had been established in 1789 by the Frenchman Antoine Lavoisier in his work Traité Élémentaire de Chimie.

Isolation

Lavoisier showed at the end of the 18th century that organic substances consisted of a limited number of elements: primarily carbon and hydrogen, supplemented by oxygen and nitrogen. Attention quickly focused on the isolation of these substances, often because they had an interesting pharmacological activity. Plants were the main source of such compounds, especially alkaloids and glycosides. It had long been known that opium, a sticky mixture of alkaloids (including codeine, morphine, noscapine, thebaine, and papaverine) from the opium poppy (Papaver somniferum), possessed narcotic and at the same time mind-altering properties. By 1805, morphine had already been isolated by the German chemist Friedrich Sertürner, and in the 1870s it was discovered that boiling morphine with acetic anhydride produced a substance with a strong pain suppressive effect: heroin. In 1815, Eugène Chevreul isolated cholesterol, a crystalline substance from animal tissue that belongs to the class of steroids, and in 1820 strychnine, an alkaloid, was isolated.

Synthesis

A second important step was the synthesis of organic compounds. Whereas the synthesis of inorganic substances had long been known, the synthesis of organic substances was a difficult hurdle. In 1827 the Swedish chemist Jöns Jacob Berzelius held that an indispensable force of nature, called the vital force or life force, was needed for the synthesis of organic compounds. This philosophical idea, vitalism, had many supporters well into the 19th century, even after the introduction of the atomic theory. The idea of vitalism especially fitted in with beliefs in medicine: most traditional healing practices held that disease was the result of some imbalance in the vital energies that distinguish life from nonlife. A first attempt to break the vitalism idea in science was made in 1828, when the German chemist Friedrich Wöhler succeeded in synthesizing urea, a natural product found in urine, by heating ammonium cyanate, an inorganic substance:
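    NH4OCN (ammonium cyanate) → CO(NH2)2 (urea)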

This reaction showed that there was no need for a life force in order to prepare organic substances. The idea, however, was initially met with a high degree of skepticism, and only 20 years later, with the synthesis of acetic acid from carbon by Adolph Wilhelm Hermann Kolbe, was it accepted. Organic chemistry has since developed into an independent area of research dedicated to the study of carbon-containing compounds, since that element was found to be common to a variety of nature-derived substances. An important factor in the characterization of organic materials was their physical properties (such as melting point, boiling point, solubility, crystallinity, or color).

Structural theories

A third step was the structure elucidation of organic substances: although the elemental composition of pure organic substances (irrespective of whether they were of natural or synthetic origin) could be determined fairly accurately, the molecular structure was still a problem. The urge to do structural elucidation resulted from a dispute between Friedrich Wöhler and Justus von Liebig, who both studied a silver salt of the same composition but found different properties. Wöhler studied silver cyanate, a harmless substance, while von Liebig investigated silver fulminate, a salt with explosive properties. Elemental analysis showed that both salts contain equal quantities of silver, carbon, oxygen and nitrogen. According to the then prevailing ideas, both substances should possess the same properties, but this was not the case. This apparent contradiction was later resolved by Berzelius's theory of isomers, whereby not only the number and type of elements are of importance to the properties and chemical reactivity of a substance, but also the position of the atoms within it. This was a direct cause for the development of structure theories, such as the radical theory of Jean-Baptiste Dumas and the substitution theory of Auguste Laurent. However, it took until 1858 before August Kekulé formulated a definite structure theory. He posited that carbon is tetravalent and can bind to itself to form carbon chains, as they occur in natural products.

Expanding the concept

The concept of natural product, which was initially based on organic compounds that could be isolated from plants, was extended to include animal material in the middle of the 19th century by the German Justus von Liebig. In 1884, Hermann Emil Fischer turned his attention to the study of carbohydrates and purines, work for which he was awarded the Nobel Prize in 1902. He also succeeded in synthesizing a variety of carbohydrates in the laboratory, including glucose and mannose. After the discovery of penicillin by Alexander Fleming in 1928, fungi and other micro-organisms were added to the arsenal of sources of natural products.

Milestones

By the 1930s, several large classes of natural products were known. Important milestones included:

Politics of Europe

From Wikipedia, the free encyclopedia ...