
Saturday, March 15, 2025

Skepticism

From Wikipedia, the free encyclopedia

Skepticism (also spelled scepticism in British English) is a questioning attitude or doubt toward knowledge claims that are seen as mere belief or dogma. For example, if a person is skeptical about claims made by their government about an ongoing war then the person doubts that these claims are accurate. In such cases, skeptics normally recommend not disbelief but suspension of belief, i.e. maintaining a neutral attitude that neither affirms nor denies the claim. This attitude is often motivated by the impression that the available evidence is insufficient to support the claim. Formally, skepticism is a topic of interest in philosophy, particularly epistemology.

More informally, skepticism as an expression of questioning or doubt can be applied to any topic, such as politics, religion, or pseudoscience. It is often applied within restricted domains, such as morality (moral skepticism), atheism (skepticism about the existence of God), or the supernatural. Some theorists distinguish "good" or moderate skepticism, which seeks strong evidence before accepting a position, from "bad" or radical skepticism, which wants to suspend judgment indefinitely.

Philosophical skepticism is one important form of skepticism. It rejects knowledge claims that seem certain from the perspective of common sense. Radical forms of philosophical skepticism deny that "knowledge or rational belief is possible" and urge us to suspend judgment on many or all controversial matters. More moderate forms claim only that nothing can be known with certainty, or that we can know little or nothing about nonempirical matters, such as whether God exists, whether human beings have free will, or whether there is an afterlife. In ancient philosophy, skepticism was understood as a way of life associated with inner peace.

Skepticism has been responsible for many important developments in science and philosophy. It has also inspired several contemporary social movements. Religious skepticism advocates for doubt concerning basic religious principles, such as immortality, providence, and revelation. Scientific skepticism advocates for testing beliefs for reliability, by subjecting them to systematic investigation using the scientific method, to discover empirical evidence for them.

Definition and semantic field

Skepticism, also spelled scepticism (from the Greek σκέπτομαι skeptomai, to search, to think about or look for), refers to a doubting attitude toward knowledge claims. So if a person is skeptical of their government's claims about an ongoing war then the person has doubts that these claims are true. Or being skeptical that one's favorite hockey team will win the championship means that one is uncertain about the strength of their performance. Skepticism about a claim implies that one does not believe the claim to be true. But it does not automatically follow that one should believe that the claim is false either. Instead, skeptics usually recommend a neutral attitude: beliefs about this matter should be suspended. In this regard, skepticism about a claim can be defined as the thesis that "the only justified attitude with respect to [this claim] is suspension of judgment". It is often motivated by the impression that one cannot be certain about it. This is especially relevant when there is significant expert disagreement. Skepticism is usually restricted to a claim or a field of inquiry. So religious and moral skeptics have a doubtful attitude about religious and moral doctrines. But some forms of philosophical skepticism are wider in that they reject any form of knowledge.

Some definitions, often inspired by ancient philosophy, see skepticism not just as an attitude but as a way of life. This is based on the idea that maintaining the skeptical attitude of doubt toward most concerns in life is superior to living in dogmatic certainty, for example because such a skeptic has more happiness and peace of mind or because it is morally better. In contemporary philosophy, on the other hand, skepticism is often understood neither as an attitude nor as a way of life but as a thesis: the thesis that knowledge does not exist.

Skepticism is related to various terms. It is sometimes equated with agnosticism and relativism. However, there are slight differences in meaning. Agnosticism is often understood more narrowly as skepticism about religious questions, in particular, about the Christian doctrine. Relativism does not deny the existence of knowledge or truth but holds that they are relative to a person and differ from person to person, for example, because they follow different cognitive norms. The opposite of skepticism is dogmatism, which implies an attitude of certainty in the form of an unquestioning belief. A similar contrast is often drawn in relation to blind faith and credulity.

Types

Various types of skepticism have been discussed in the academic literature. Skepticism is usually restricted to knowledge claims on one particular subject, which is why its different forms can be distinguished based on the subject. For example, religious skeptics distrust religious doctrines and moral skeptics raise doubts about accepting various moral requirements and customs. Skepticism can also be applied to knowledge in general. However, this attitude is usually only found in some forms of philosophical skepticism. A closely related classification distinguishes based on the source of knowledge, such as skepticism about perception, memory, or intuition. A further distinction is based on the degree of the skeptical attitude. The strongest forms assert that there is no knowledge at all or that knowledge is impossible. Weaker forms merely state that one can never be absolutely certain.

Some theorists distinguish between a good or healthy form of moderate skepticism in contrast to a bad or unhealthy form of radical skepticism. On this view, the "good" skeptic is a critically-minded person who seeks strong evidence before accepting a position. The "bad" skeptic, on the other hand, wants to "suspend judgment indefinitely... even in the face of demonstrable truth". Another categorization focuses on the motivation for the skeptical attitude. Some skeptics have ideological motives: they want to replace inferior beliefs with better ones. Others have a more practical outlook in that they see problematic beliefs as the cause of harmful customs they wish to stop. Some skeptics have very particular goals in mind, such as bringing down a certain institution associated with the spread of claims they reject.

Philosophical skepticism is a prominent form of skepticism and can be contrasted with non-philosophical or ordinary skepticism. Ordinary skepticism involves a doubting attitude toward knowledge claims that are rejected by many. Almost everyone shows some form of ordinary skepticism, for example, by doubting the knowledge claims made by flat earthers or astrologers. Philosophical skepticism, on the other hand, is a much more radical and rare position. It includes the rejection of knowledge claims that seem certain from the perspective of common sense. Some forms of it even deny that one knows that "I have two hands" or that "the sun will come out tomorrow". It is nonetheless taken seriously in philosophy because it has proven very hard to refute conclusively.

In various fields

Skepticism has been responsible for important developments in various fields, such as science, medicine, and philosophy. In science, the skeptical attitude toward traditional opinions was a key factor in the development of the scientific method. It emphasizes the need to scrutinize knowledge claims by testing them through experimentation and precise measurement. In the field of medicine, skepticism has helped establish more advanced forms of treatment by putting into doubt traditional forms that were based on intuitive appeal rather than empirical evidence. In the history of philosophy, skepticism has often played a productive role not just for skeptics but also for non-skeptical philosophers. This is due to its critical attitude that challenges the epistemological foundations of philosophical theories. This can help to keep speculation in check and may provoke creative responses, transforming the theory in question in order to overcome the problems posed by skepticism. According to Richard H. Popkin, "the history of philosophy can be seen, in part, as a struggle with skepticism". This struggle has led many contemporary philosophers to abandon the quest for absolutely certain or indubitable first principles of philosophy, which was still prevalent in many earlier periods. Skepticism has been an important topic throughout the history of philosophy and is still widely discussed today.

Philosophy

As a philosophical school or movement, skepticism arose both in ancient Greece and India. In India the Ajñana school of philosophy espoused skepticism. It was a major early rival of Buddhism and Jainism, and possibly a major influence on Buddhism. Two of the foremost disciples of the Buddha, Sariputta and Moggallāna, were initially students of the Ajñana philosopher Sanjaya Belatthiputta. A strong element of skepticism is found in Early Buddhism, most particularly in the Aṭṭhakavagga sutra. However the total effect these philosophies had on each other is difficult to discern. Since skepticism is a philosophical attitude and a style of philosophizing rather than a position, the Ajñanins may have influenced other skeptical thinkers of India such as Nagarjuna, Jayarāśi Bhaṭṭa, and Shriharsha.

In Greece, philosophers as early as Xenophanes (c. 570 – c. 475 BCE) expressed skeptical views, as did Democritus and a number of Sophists. Gorgias, for example, reputedly argued that nothing exists, that even if there were something we could not know it, and that even if we could know it we could not communicate it. The Heraclitean philosopher Cratylus refused to discuss anything and would merely wriggle his finger, claiming that communication is impossible since meanings are constantly changing. Socrates also had skeptical tendencies, claiming to know nothing worthwhile.

Pyrrho of Elis was the founder of the school of skepticism known as Pyrrhonism.

There were two major schools of skepticism in the ancient Greek and Roman world. The first was Pyrrhonism, founded by Pyrrho of Elis (c. 360–270 BCE). The second was Academic Skepticism, so-called because its two leading defenders, Arcesilaus (c. 315–240 BCE) who initiated the philosophy, and Carneades (c. 217–128 BCE), the philosophy's most famous proponent, were heads of Plato's Academy. Pyrrhonism's aims are psychological. It urges suspension of judgment (epoche) to achieve mental tranquility (ataraxia). The Academic Skeptics denied that knowledge is possible (acatalepsy). The Academic Skeptics claimed that some beliefs are more reasonable or probable than others, whereas Pyrrhonian skeptics argue that equally compelling arguments can be given for or against any disputed view. Nearly all the writings of the ancient skeptics are now lost. Most of what we know about ancient skepticism is from Sextus Empiricus, a Pyrrhonian skeptic who lived in the second or third century CE. His works contain a lucid summary of stock skeptical arguments.

Ancient skepticism faded out during the late Roman Empire, particularly after Augustine (354–430 CE) attacked the skeptics in his work Against the Academics (386 CE). There was little knowledge of, or interest in, ancient skepticism in Christian Europe during the Middle Ages. Interest revived during the Renaissance and Reformation, particularly after the complete writings of Sextus Empiricus were translated into Latin in 1569 and after Martin Luther's skepticism of holy orders. A number of Catholic writers, including Francisco Sanches (c. 1550–1623), Michel de Montaigne (1533–1592), Pierre Gassendi (1592–1655), and Marin Mersenne (1588–1648) deployed ancient skeptical arguments to defend moderate forms of skepticism and to argue that faith, rather than reason, must be the primary guide to truth. Similar arguments were offered later (perhaps ironically) by the Protestant thinker Pierre Bayle in his influential Historical and Critical Dictionary (1697–1702).

The growing popularity of skeptical views created an intellectual crisis in seventeenth-century Europe. An influential response was offered by the French philosopher and mathematician René Descartes (1596–1650). In his classic work, Meditations on First Philosophy (1641), Descartes sought to refute skepticism, but only after he had formulated the case for skepticism as powerfully as possible. Descartes argued that no matter what radical skeptical possibilities we imagine there are certain truths (e.g., that thinking is occurring, or that I exist) that are absolutely certain. Thus, the ancient skeptics were wrong to claim that knowledge is impossible. Descartes also attempted to refute skeptical doubts about the reliability of our senses, our memory, and other cognitive faculties. To do this, Descartes tried to prove that God exists and that God would not allow us to be systematically deceived about the nature of reality. Many contemporary philosophers question whether this second stage of Descartes's critique of skepticism is successful.

In the eighteenth century a new case for skepticism was offered by the Scottish philosopher David Hume (1711–1776). Hume was an empiricist, claiming that all genuine ideas can be traced back to original impressions of sensation or introspective consciousness. Hume argued that on empiricist grounds there are no sound reasons for belief in God, an enduring self or soul, an external world, causal necessity, objective morality, or inductive reasoning. In fact, he argued that "Philosophy would render us entirely Pyrrhonian, were not Nature too strong for it." As Hume saw it, the real basis of human belief is not reason, but custom or habit. We are hard-wired by nature to trust, say, our memories or inductive reasoning, and no skeptical arguments, however powerful, can dislodge those beliefs. In this way, Hume embraced what he called a "mitigated" skepticism, while rejecting an "excessive" Pyrrhonian skepticism that he saw as both impractical and psychologically impossible.

Hume's skepticism provoked a number of important responses. Hume's Scottish contemporary, Thomas Reid (1710–1796), challenged Hume's strict empiricism and argued that it is rational to accept "common-sense" beliefs such as the basic reliability of our senses, our reason, our memories, and inductive reasoning, even though none of these things can be proved. In Reid's view, such common-sense beliefs are foundational and require no proof in order to be rationally justified. Not long after Hume's death, the German philosopher Immanuel Kant (1724–1804) argued that human empirical experience has possibility conditions which could not have been realized unless Hume's skeptical conclusions about causal synthetic a priori judgements were false.

Today, skepticism continues to be a topic of lively debate among philosophers. British philosopher Julian Baggini posits that reason is perceived as "an enemy of mystery and ambiguity," but, if used properly, can be an effective tool for solving many larger societal issues.

Religion

Religious skepticism generally refers to doubting particular religious beliefs or claims. For example, a religious skeptic might believe that Jesus existed (see historicity of Jesus) while questioning claims that he was the messiah or performed miracles. Historically, religious skepticism can be traced back to Xenophanes, who doubted many religious claims of his time, although he recognized that "God is one, supreme among gods and men, and not like mortals in body or in mind." He maintained that there was one greatest God: "God is one eternal being, spherical in form, comprehending all things within himself, is the absolute mind and thought, therefore is intelligent, and moves all things, but bears no resemblance to human nature either in body or mind."

Religious skepticism is not the same as atheism or agnosticism, though these often do involve skeptical attitudes toward religion and philosophical theology (for example, towards divine omnipotence). Religious people are generally skeptical about claims of other religions, at least when the two denominations conflict concerning some belief. Additionally, they may also be skeptical of the claims made by atheists.

The historian Will Durant writes that Plato was "as skeptical of atheism as of any other dogma". The Baháʼí Faith encourages skepticism that is mainly centered around self-investigation of truth.

Science

A scientific or empirical skeptic is one who questions beliefs on the basis of scientific understanding and empirical evidence.

Scientific skepticism may discard beliefs pertaining to purported phenomena not subject to reliable observation and thus not systematic or empirically testable. Most scientists, being scientific skeptics, test the reliability of certain kinds of claims by subjecting them to systematic investigation via the scientific method. As a result, a number of ostensibly scientific claims are considered to be "pseudoscience" if they are found to improperly apply or to ignore the fundamental aspects of the scientific method.

Auditing

Professional skepticism is an important concept in auditing. It requires an auditor to have a "questioning mind", to make a critical assessment of evidence, and to consider the sufficiency of the evidence.

Nucleic acid double helix

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Nucleic_acid_double_helix
Two complementary regions of nucleic acid molecules will bind and form a double helical structure held together by base pairs.

In molecular biology, the term double helix refers to the structure formed by double-stranded molecules of nucleic acids such as DNA. The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure, and is a fundamental component in determining its tertiary structure. The structure was discovered by Maurice Wilkins, Rosalind Franklin, her student Raymond Gosling, James Watson, and Francis Crick, while the term "double helix" entered popular culture with the 1968 publication of Watson's The Double Helix: A Personal Account of the Discovery of the Structure of DNA.

The DNA double helix biopolymer of nucleic acid is held together by nucleotides which base pair together. In B-DNA, the most common double helical structure found in nature, the double helix is right-handed with about 10–10.5 base pairs per turn. The double helix structure of DNA contains a major groove and minor groove. In B-DNA the major groove is wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to B-DNA do so through the wider major groove.

History

The double-helix model of DNA structure was first published in the journal Nature by James Watson and Francis Crick in 1953 (X,Y,Z coordinates in 1954), based on the work of Rosalind Franklin and her student Raymond Gosling, who took the crucial X-ray diffraction image of DNA labeled as "Photo 51", and of Maurice Wilkins, Alexander Stokes, and Herbert Wilson, together with base-pairing chemical and biochemical information from Erwin Chargaff. Before this, Linus Pauling—who had already accurately characterised the conformation of protein secondary structure motifs—and his collaborator Robert Corey had posited, erroneously, that DNA would adopt a triple-stranded conformation.

The realization that the structure of DNA is that of a double-helix elucidated the mechanism of base pairing by which genetic information is stored and copied in living organisms and is widely considered one of the most important scientific discoveries of the 20th century. Crick, Wilkins, and Watson each received one-third of the 1962 Nobel Prize in Physiology or Medicine for their contributions to the discovery.

Nucleic acid hybridization

Hybridization is the process of complementary base pairs binding to form a double helix. Melting is the process by which the interactions between the strands of the double helix are broken, separating the two nucleic acid strands. These bonds are weak, easily separated by gentle heating, enzymes, or mechanical force. Melting occurs preferentially at certain points in the nucleic acid. T- and A-rich regions are more easily melted than C- and G-rich regions. Some base steps (pairs) are also susceptible to DNA melting, such as TA and TG. These mechanical features are reflected by the use of sequences such as TATA at the start of many genes to assist RNA polymerase in melting the DNA for transcription.
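As a rough illustration of why A/T-rich regions melt first, the sketch below uses a common rule of thumb for very short oligonucleotides (the Wallace rule, Tm ≈ 2 °C per A/T base plus 4 °C per G/C base); the rule is brought in here only for illustration, is not part of this article, and is indicative rather than accurate for real sequences.

    # Rough Wallace-rule estimate of melting temperature for short oligonucleotides.
    # Assumption for illustration: Tm ≈ 2 °C per A/T base + 4 °C per G/C base.
    def wallace_tm(seq: str) -> int:
        seq = seq.upper()
        at = sum(seq.count(b) for b in "AT")
        gc = sum(seq.count(b) for b in "GC")
        return 2 * at + 4 * gc

    print(wallace_tm("TATATATATATA"))  # 24 -> A/T-rich, melts easily
    print(wallace_tm("GCGCGCGCGCGC"))  # 48 -> G/C-rich, more stable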

Strand separation by gentle heating, as used in polymerase chain reaction (PCR), is simple, providing the molecules have fewer than about 10,000 base pairs (10 kilobase pairs, or 10 kbp). The intertwining of the DNA strands makes long segments difficult to separate. The cell avoids this problem by allowing its DNA-melting enzymes (helicases) to work concurrently with topoisomerases, which can chemically cleave the phosphate backbone of one of the strands so that it can swivel around the other. Helicases unwind the strands to facilitate the advance of sequence-reading enzymes such as DNA polymerase.

Base pair geometry


The geometry of a base, or base pair step can be characterized by 6 coordinates: shift, slide, rise, tilt, roll, and twist. These values precisely define the location and orientation in space of every base or base pair in a nucleic acid molecule relative to its predecessor along the axis of the helix. Together, they characterize the helical structure of the molecule. In regions of DNA or RNA where the normal structure is disrupted, the change in these values can be used to describe such disruption.

For each base pair, considered relative to its predecessor, there are the following base pair geometries to consider:

  • Shear
  • Stretch
  • Stagger
  • Buckle
  • Propeller: rotation of one base with respect to the other in the same base pair.
  • Opening
  • Shift: displacement along an axis in the base-pair plane perpendicular to the first, directed from the minor to the major groove.
  • Slide: displacement along an axis in the plane of the base pair directed from one strand to the other.
  • Rise: displacement along the helix axis.
  • Tilt: rotation around the shift axis.
  • Roll: rotation around the slide axis.
  • Twist: rotation around the rise axis.
  • x-displacement
  • y-displacement
  • inclination
  • tip
  • pitch: the height per complete turn of the helix.

Rise and twist determine the handedness and pitch of the helix. The other coordinates, by contrast, can be zero. Slide and shift are typically small in B-DNA, but are substantial in A- and Z-DNA. Roll and tilt make successive base pairs less parallel, and are typically small.
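As a minimal sketch of how these six step parameters might be carried around in software, the hypothetical data structure below stores one base-pair step; the example values (rise ≈ 3.32 Å, twist ≈ 34.3°, other coordinates near zero for idealized B-DNA) are taken from the helix geometry table below, and the class itself is illustrative rather than any standard library's API.

    from dataclasses import dataclass

    @dataclass
    class BasePairStep:
        """Rigid-body parameters of one base-pair step relative to its predecessor."""
        shift: float  # Å, displacement in the base-pair plane toward the grooves
        slide: float  # Å, displacement in the base-pair plane from one strand to the other
        rise: float   # Å, displacement along the helix axis
        tilt: float   # degrees, rotation around the shift axis
        roll: float   # degrees, rotation around the slide axis
        twist: float  # degrees, rotation around the rise axis

    # Idealized B-DNA step: translations other than rise and rotations other than twist are near zero.
    ideal_b_step = BasePairStep(shift=0.0, slide=0.0, rise=3.32, tilt=0.0, roll=0.0, twist=34.3)
    print(ideal_b_step)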

"Tilt" has often been used differently in the scientific literature, referring to the deviation of the first, inter-strand base-pair axis from perpendicularity to the helix axis. This corresponds to slide between a succession of base pairs, and in helix-based coordinates is properly termed "inclination".

Helix geometries

At least three DNA conformations are believed to be found in nature, A-DNA, B-DNA, and Z-DNA. The B form described by James Watson and Francis Crick is believed to predominate in cells. It is 23.7 Å wide and extends 34 Å per 10 bp of sequence. The double helix has a right-hand twist that makes one complete turn about its axis every 10.4–10.5 base pairs in solution. This frequency of twist (termed the helical pitch) depends largely on stacking forces that each base exerts on its neighbours in the chain. The absolute configuration of the bases determines the direction of the helical curve for a given conformation.

A-DNA and Z-DNA differ significantly in their geometry and dimensions from B-DNA, although they still form helical structures. It was long thought that the A form only occurs in dehydrated samples of DNA in the laboratory, such as those used in crystallographic experiments, and in hybrid pairings of DNA and RNA strands, but DNA dehydration does occur in vivo, and A-DNA is now known to have biological functions. Segments of DNA that cells have methylated for regulatory purposes may adopt the Z geometry, in which the strands turn about the helical axis the opposite way to A-DNA and B-DNA. There is also evidence of protein-DNA complexes forming Z-DNA structures.

Other conformations are possible; A-DNA, B-DNA, C-DNA, E-DNA, L-DNA (the enantiomeric form of D-DNA), P-DNA, S-DNA, Z-DNA, etc. have been described so far. In fact, only the letters F, Q, U, V, and Y are now available to describe any new DNA structure that may appear in the future. However, most of these forms have been created synthetically and have not been observed in naturally occurring biological systems. There are also triple-stranded DNA forms and quadruplex forms such as the G-quadruplex and the i-motif.

The structures of A-, B-, and Z-DNA.
The helix axis of A-, B-, and Z-DNA.
Structural features of the three major forms of DNA

Geometry attribute           A-DNA               B-DNA                Z-DNA
Helix sense                  right-handed        right-handed         left-handed
Repeating unit               1 bp                1 bp                 2 bp
Rotation per bp              32.7°               34.3°                60° per 2 bp
bp per turn                  11                  10.5                 12
Inclination of bp to axis    +19°                −1.2°                −9°
Rise per bp along axis       2.3 Å (0.23 nm)     3.32 Å (0.332 nm)    3.8 Å (0.38 nm)
Pitch per turn of helix      28.2 Å (2.82 nm)    33.2 Å (3.32 nm)     45.6 Å (4.56 nm)
Mean propeller twist         +18°                +16°                 (not listed)
Glycosyl angle               anti                anti                 C: anti, G: syn
Sugar pucker                 C3'-endo            C2'-endo             C: C2'-endo, G: C2'-exo
Diameter                     23 Å (2.3 nm)       20 Å (2.0 nm)        18 Å (1.8 nm)
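The rotation per base pair and the base pairs per turn in the table are two statements of the same quantity, since one full 360° turn is shared among the base pairs of that turn; a quick consistency check using only the table's numbers:

    # Rotation per bp follows from bp per turn: 360° / (bp per turn).
    for form, bp_per_turn in [("A-DNA", 11), ("B-DNA", 10.5), ("Z-DNA", 12)]:
        print(form, round(360 / bp_per_turn, 1), "degrees per bp")
    # A-DNA 32.7, B-DNA 34.3, Z-DNA 30.0 (the table states Z-DNA as 60° per 2 bp)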

Grooves

Major and minor grooves of DNA. Minor groove is a binding site for the dye Hoechst 33258.

Twin helical strands form the DNA backbone. Another double helix may be found by tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not directly opposite each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The narrowness of the minor groove means that the edges of the bases are more accessible in the major groove. As a result, proteins like transcription factors that can bind to specific sequences in double-stranded DNA usually make contacts to the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.

Non-double helical forms

Alternative non-helical models were briefly considered in the late 1970s as a potential solution to problems in DNA replication in plasmids and chromatin. However, the models were set aside in favor of the double-helical model due to subsequent experimental advances such as X-ray crystallography of DNA duplexes and later the nucleosome core particle, and the discovery of topoisomerases. Also, the non-double-helical models are not currently accepted by the mainstream scientific community.

Bending

DNA is a relatively rigid polymer, typically modelled as a worm-like chain. It has three significant degrees of freedom: bending, twisting, and compression, each of which places limits on what is possible with DNA within a cell. Twisting (torsional) stiffness is important for the circularisation of DNA and the orientation of DNA-bound proteins relative to each other, and bending (axial) stiffness is important for DNA wrapping, circularisation, and protein interactions. Compression-extension is relatively unimportant in the absence of high tension.

Persistence length, axial stiffness

Example sequences and their persistence lengths (B-DNA)

Sequence         Persistence length (base pairs)
Random           154 ± 10
(CA) repeat      133 ± 10
(CAG) repeat     124 ± 10
(TATA) repeat    137 ± 10

DNA in solution does not take a rigid structure but is continually changing conformation due to thermal vibration and collisions with water molecules, which makes classical measures of rigidity impossible to apply. Hence, the bending stiffness of DNA is measured by the persistence length, defined as:

Bending flexibility of a polymer is conventionally quantified in terms of its persistence length, Lp, a length scale below which the polymer behaves more or less like a rigid rod. Specifically, Lp is defined as length of the polymer segment over which the time-averaged orientation of the polymer becomes uncorrelated...
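In symbols, the definition quoted above is commonly written as an exponential decay of tangent-tangent correlations along the contour (a standard formulation, not quoted from this article): for two points separated by contour length s,

    \langle \cos\theta(s) \rangle = e^{-s/L_p}

where θ(s) is the angle between the local tangent directions of the chain and L_p is the persistence length; orientational correlations are essentially lost once s exceeds L_p.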

This value may be measured directly by using an atomic force microscope to image DNA molecules of various lengths. In aqueous solution, the average persistence length has been found to be around 50 nm (or 150 base pairs). More broadly, it has been observed to be between 45 and 60 nm, or 132–176 base pairs (the diameter of DNA is 2 nm). This can vary significantly due to variations in temperature, aqueous solution conditions, and DNA length. This makes DNA a moderately stiff molecule.

The persistence length of a section of DNA is somewhat dependent on its sequence, and this can cause significant variation. The variation is largely due to base stacking energies and the residues which extend into the minor and major grooves.

Models for DNA bending

Stacking stability of base steps (B-DNA)

Step            Stacking ΔG (kcal/mol)
T A             −0.19
T G or C A      −0.55
C G             −0.91
A G or C T      −1.06
A A or T T      −1.11
A T             −1.34
G A or T C      −1.43
C C or G G      −1.44
A C or G T      −1.81
G C             −2.17

At length-scales larger than the persistence length, the entropic flexibility of DNA is remarkably consistent with standard polymer physics models, such as the Kratky-Porod worm-like chain model. Consistent with the worm-like chain model is the observation that bending DNA is also described by Hooke's law at very small (sub-piconewton) forces. For DNA segments less than the persistence length, the bending force is approximately constant and behaviour deviates from the worm-like chain predictions.

This effect results in unusual ease in circularising small DNA molecules and a higher probability of finding highly bent sections of DNA.
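A commonly used interpolation formula for the worm-like chain, due to Marko and Siggia (an outside reference, not given in this article), makes both regimes explicit for a chain of contour length L_0 and persistence length L_p stretched to end-to-end extension x:

    \frac{F L_p}{k_B T} = \frac{1}{4\left(1 - x/L_0\right)^{2}} - \frac{1}{4} + \frac{x}{L_0}

At small extensions this reduces to the linear, Hookean response F ≈ 3 k_B T x / (2 L_p L_0), while the force diverges as x approaches the contour length.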

Bending preference

DNA molecules often have a preferred direction to bend, i.e., anisotropic bending. This is, again, due to the properties of the bases which make up the DNA sequence - a random sequence will have no preferred bend direction, i.e., isotropic bending.

Preferred DNA bend direction is determined by the stability of stacking each base on top of the next. If unstable base stacking steps are always found on one side of the DNA helix then the DNA will preferentially bend away from that direction. As the bend angle increases, steric hindrance and the ability to roll the residues relative to each other also play a role, especially in the minor groove. A and T residues will preferentially be found in the minor grooves on the inside of bends. This effect is particularly seen in DNA-protein binding where tight DNA bending is induced, such as in nucleosome particles. See base step distortions above.
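Using only the stacking free energies from the table above, the relative stacking stability of a stretch of sequence can be compared by summing ΔG over its successive base steps; the sketch below does exactly that and is illustrative only, since it ignores the roll and steric effects just described.

    # Stacking ΔG per base step (kcal/mol), values taken from the table above.
    STEP_DG = {
        "TA": -0.19, "TG": -0.55, "CA": -0.55, "CG": -0.91,
        "AG": -1.06, "CT": -1.06, "AA": -1.11, "TT": -1.11,
        "AT": -1.34, "GA": -1.43, "TC": -1.43, "CC": -1.44,
        "GG": -1.44, "AC": -1.81, "GT": -1.81, "GC": -2.17,
    }

    def stacking_dg(seq: str) -> float:
        """Sum the stacking free energy over successive dinucleotide steps."""
        seq = seq.upper()
        return sum(STEP_DG[seq[i:i + 2]] for i in range(len(seq) - 1))

    print(stacking_dg("TATATA"))  # -3.25  -> weakly stacked, easy to bend or melt
    print(stacking_dg("GCGCGC"))  # -8.33  -> strongly stacked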

DNA molecules with exceptional bending preference can become intrinsically bent. This was first observed in trypanosomatid kinetoplast DNA. Typical sequences which cause this contain stretches of 4-6 T and A residues separated by G and C rich sections which keep the A and T residues in phase with the minor groove on one side of the molecule. For example:


GATTCCCAAAAATGTCAAAAAATAGGCAAAAAATGCCAAAAAATCCCAAAC

(In the original layout, "¦" marks spaced roughly one helical repeat apart indicated how the A-tract segments stay in phase with the helical repeat, keeping them on one face of the molecule.)

The intrinsically bent structure is induced by the 'propeller twist' of base pairs relative to each other, allowing unusual bifurcated hydrogen bonds between base steps. At higher temperatures this structure is denatured, and so the intrinsic bend is lost.

All DNA which bends anisotropically has, on average, a longer persistence length and greater axial stiffness. This increased rigidity is required to prevent random bending which would make the molecule act isotropically.

Circularization

DNA circularization depends on both the axial (bending) stiffness and torsional (rotational) stiffness of the molecule. For a DNA molecule to successfully circularize it must be long enough to easily bend into the full circle and must have the correct number of bases so the ends are in the correct rotation to allow bonding to occur. The optimum length for circularization of DNA is around 400 base pairs (136 nm), with an integral number of turns of the DNA helix, i.e., multiples of 10.4 base pairs. Having a non-integral number of turns presents a significant energy barrier for circularization; for example, a 10.4 × 30 = 312 base pair molecule will circularize hundreds of times faster than a 10.4 × 30.5 ≈ 317 base pair molecule.
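The arithmetic behind the 312 bp versus 317 bp comparison is simply how close the fragment length comes to a whole number of helical turns at about 10.4 bp per turn; a minimal check:

    # How close is a fragment to an integral number of helical turns (10.4 bp/turn)?
    def turn_mismatch(n_bp: int, bp_per_turn: float = 10.4) -> float:
        turns = n_bp / bp_per_turn
        return abs(turns - round(turns))  # fraction of a turn the ends are out of register

    print(turn_mismatch(312))  # 0.0   -> ends meet in register, circularizes readily
    print(turn_mismatch(317))  # ~0.48 -> ends nearly half a turn out of register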

The bending of short circularized DNA segments is non-uniform. Rather, for circularized DNA segments less than the persistence length, DNA bending is localised to 1-2 kinks that form preferentially in AT-rich segments. If a nick is present, bending will be localised to the nick site.

Stretching

Elastic stretching regime

Longer stretches of DNA are entropically elastic under tension. When DNA is in solution, it undergoes continuous structural variations due to the energy available in the thermal bath of the solvent. This is due to the thermal vibration of the molecule combined with continual collisions with water molecules. For entropic reasons, more compact relaxed states are thermally accessible than stretched-out states, and so DNA molecules are almost universally found in tangled, relaxed layouts. For this reason, a single molecule of DNA will stretch under a force, straightening it out. Using optical tweezers, the entropic stretching behavior of DNA has been studied and analyzed from a polymer physics perspective, and it has been found that DNA behaves largely like the Kratky-Porod worm-like chain model under physiologically accessible energy scales.

Phase transitions under stretching

Under sufficient tension and positive torque, DNA is thought to undergo a phase transition with the bases splaying outwards and the phosphates moving to the middle. This proposed structure for overstretched DNA has been called P-form DNA, in honor of Linus Pauling who originally presented it as a possible structure of DNA.

Evidence from mechanical stretching of DNA in the absence of imposed torque points to a transition or transitions leading to further structures which are generally referred to as S-form DNA. These structures have not yet been definitively characterised due to the difficulty of carrying out atomic-resolution imaging in solution while under applied force, although many computer simulation studies have been made.

Proposed S-DNA structures include those which preserve base-pair stacking and hydrogen bonding (GC-rich), while releasing extension by tilting, as well as structures in which partial melting of the base-stack takes place, while base-base association is nonetheless overall preserved (AT-rich).

Periodic fracture of the base-pair stack with a break occurring once per three bp (therefore one out of every three bp-bp steps) has been proposed as a regular structure which preserves planarity of the base-stacking and releases the appropriate amount of extension, with the term "Σ-DNA" introduced as a mnemonic, with the three right-facing points of the Sigma character serving as a reminder of the three grouped base pairs. The Σ form has been shown to have a sequence preference for GNC motifs which are believed under the GNC hypothesis to be of evolutionary importance.

Supercoiling and topology

Supercoiled structure of circular DNA molecules with low writhe. The helical aspect of the DNA duplex is omitted for clarity.

The B form of the DNA helix twists 360° per 10.4-10.5 bp in the absence of torsional strain. But many molecular biological processes can induce torsional strain. A DNA segment with excess or insufficient helical twisting is referred to, respectively, as positively or negatively supercoiled. DNA in vivo is typically negatively supercoiled, which facilitates the unwinding (melting) of the double-helix required for RNA transcription.

Within the cell most DNA is topologically restricted. DNA is typically found in closed loops (such as plasmids in prokaryotes) which are topologically closed, or as very long molecules whose diffusion coefficients produce effectively topologically closed domains. Linear sections of DNA are also commonly bound to proteins or physical structures (such as membranes) to form closed topological loops.

Francis Crick was one of the first to propose the importance of linking numbers when considering DNA supercoils. In a paper published in 1976, Crick outlined the problem as follows:

In considering supercoils formed by closed double-stranded molecules of DNA certain mathematical concepts, such as the linking number and the twist, are needed. The meaning of these for a closed ribbon is explained and also that of the writhing number of a closed curve. Some simple examples are given, some of which may be relevant to the structure of chromatin.

Analysis of DNA topology uses three values:

  • L = linking number - the number of times one DNA strand wraps around the other. It is an integer for a closed loop and constant for a closed topological domain.
  • T = twist - total number of turns in the double stranded DNA helix. This will normally tend to approach the number of turns that a topologically open double stranded DNA helix makes free in solution: number of bases/10.5, assuming there are no intercalating agents (e.g., ethidium bromide) or other elements modifying the stiffness of the DNA.
  • W = writhe - number of turns of the double stranded DNA helix around the superhelical axis
  • L = T + W and ΔL = ΔT + ΔW

Any change of T in a closed topological domain must be balanced by a change in W, and vice versa. This results in higher order structure of DNA. A circular DNA molecule with a writhe of 0 will be circular. If the twist of this molecule is subsequently increased or decreased by supercoiling then the writhe will be appropriately altered, making the molecule undergo plectonemic or toroidal superhelical coiling.
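A small numerical sketch of this L = T + W bookkeeping, using a hypothetical 5,250 bp closed plasmid (both the plasmid size and the chosen linking number are invented for illustration):

    # Hypothetical closed circular plasmid of 5,250 bp; relaxed B-DNA has ~10.5 bp per turn.
    n_bp = 5250
    t_relaxed = n_bp / 10.5          # twist of the relaxed circle: 500 turns
    l_relaxed = t_relaxed            # for a relaxed closed circle, L = T and W = 0

    l_actual = 475                   # suppose the molecule is underwound (negatively supercoiled)
    delta_l = l_actual - l_relaxed   # -25; fixed as long as no strand is broken

    # If the duplex keeps its B-form twist (delta_t = 0), the deficit must appear as writhe:
    delta_t = 0
    delta_w = delta_l - delta_t
    print(delta_l, delta_w)          # -25.0 -25.0 -> 25 negative supercoils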

When the ends of a piece of double-stranded helical DNA are joined so that it forms a circle, the strands are topologically knotted. This means the single strands cannot be separated by any process that does not involve breaking a strand (such as heating). The task of un-knotting topologically linked strands of DNA falls to enzymes termed topoisomerases. These enzymes are dedicated to un-knotting circular DNA by cleaving one or both strands so that another double- or single-stranded segment can pass through. This un-knotting is required for the replication of circular DNA and various types of recombination in linear DNA which have similar topological constraints.

The linking number paradox

For many years, the origin of residual supercoiling in eukaryotic genomes remained unclear. This topological puzzle was referred to by some as the "linking number paradox". However, when experimentally determined structures of the nucleosome displayed an over-twisted left-handed wrap of DNA around the histone octamer, this paradox was considered to be solved by the scientific community.

History of the function concept

From Wikipedia, the free encyclopedia

The mathematical concept of a function dates from the 17th century in connection with the development of calculus; for example, the slope of a graph at a point was regarded as a function of the x-coordinate of the point. Functions were not explicitly considered in antiquity, but some precursors of the concept can perhaps be seen in the work of medieval philosophers and mathematicians such as Oresme.

Mathematicians of the 18th century typically regarded a function as being defined by an analytic expression. In the 19th century, the demands of the rigorous development of analysis by Weierstrass and others, the reformulation of geometry in terms of analysis, and the invention of set theory by Cantor, eventually led to the much more general modern concept of a function as a single-valued mapping from one set to another.

Functions before the 17th century

In the 12th century, mathematician Sharaf al-Din al-Tusi analyzed the equation x³ + d = b⋅x² in the form x²⋅(b − x) = d, stating that the left hand side must at least equal the value of d for the equation to have a solution. He then determined the maximum value of this expression. It is arguable that the isolation of this expression is an early approach to the notion of a "function". A value less than d means no positive solution; a value equal to d corresponds to one solution, while a value greater than d corresponds to two solutions. Sharaf al-Din's analysis of this equation was a notable development in Islamic mathematics, but his work was not pursued any further at that time, neither in the Muslim world nor in Europe.
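In modern notation, the maximum Sharaf al-Din isolated can be recovered with elementary calculus (a present-day reconstruction, not his own method, since he worked long before the calculus):

    \frac{d}{dx}\left[x^{2}(b - x)\right] = 2bx - 3x^{2} = 0 \quad\Longrightarrow\quad x = \frac{2b}{3},
    \qquad \max_{0 < x < b} x^{2}(b - x) = \left(\frac{2b}{3}\right)^{2}\left(b - \frac{2b}{3}\right) = \frac{4b^{3}}{27},

so x³ + d = b⋅x² has a positive solution exactly when d ≤ 4b³/27, with one solution at equality and two when d is strictly smaller, matching the case analysis above.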

According to Dieudonné and Ponte, the concept of a function emerged in the 17th century as a result of the development of analytic geometry and the infinitesimal calculus. Nevertheless, Medvedev suggests that the implicit concept of a function is one with an ancient lineage. Ponte also sees more explicit approaches to the concept in the Middle Ages:

Historically, some mathematicians can be regarded as having foreseen and come close to a modern formulation of the concept of function. Among them is Oresme (1323–1382) . . . In his theory, some general ideas about independent and dependent variable quantities seem to be present.

The development of analytical geometry around 1640 allowed mathematicians to go between geometric problems about curves and algebraic relations between "variable coordinates x and y." Calculus was developed using the notion of variables, with their associated geometric meaning, which persisted well into the eighteenth century. However, the terminology of "function" came to be used in interactions between Leibniz and Bernoulli towards the end of the 17th century.

The notion of "function" in analysis

The term "function" was literally introduced by Gottfried Leibniz, in a 1673 letter, to describe a quantity related to points of a curve, such as a coordinate or curve's slope. Johann Bernoulli started calling expressions made of a single variable "functions." In 1698, he agreed with Leibniz that any quantity formed "in an algebraic and transcendental manner" may be called a function of x. By 1718, he came to regard as a function "any expression made up of a variable and some constants." Alexis Claude Clairaut (in approximately 1734) and Leonhard Euler introduced the familiar notation for the value of a function.

The functions considered in those times are called today differentiable functions. For this type of function, one can talk about limits and derivatives; both are measurements of the output or the change in the output as it depends on the input or the change in the input. Such functions are the basis of calculus.

Euler

In the first volume of his fundamental text Introductio in analysin infinitorum, published in 1748, Euler gave essentially the same definition of a function as his teacher Bernoulli, as an expression or formula involving variables and constants. Euler's own definition reads:

A function of a variable quantity is an analytic expression composed in any way whatsoever of the variable quantity and numbers or constant quantities.

Euler also allowed multi-valued functions whose values are determined by an implicit equation.

In 1755, however, in his Institutiones calculi differentialis, Euler gave a more general concept of a function:

When certain quantities depend on others in such a way that they undergo a change when the latter change, then the first are called functions of the second. This name has an extremely broad character; it encompasses all the ways in which one quantity can be determined in terms of others.

Medvedev considers that "In essence this is the definition that became known as Dirichlet's definition." Edwards also credits Euler with a general concept of a function and says further that

The relations among these quantities are not thought of as being given by formulas, but on the other hand they are surely not thought of as being the sort of general set-theoretic, anything-goes subsets of product spaces that modern mathematicians mean when they use the word "function".

Fourier

In his Théorie Analytique de la Chaleur, Fourier claimed that an arbitrary function could be represented by a Fourier series. Fourier had a general conception of a function, which included functions that were neither continuous nor defined by an analytical expression. Related questions on the nature and representation of functions, arising from the solution of the wave equation for a vibrating string, had already been the subject of dispute between d'Alembert and Euler, and they had a significant impact in generalizing the notion of a function. Luzin observes that:

The modern understanding of function and its definition, which seems correct to us, could arise only after Fourier's discovery. His discovery showed clearly that most of the misunderstandings that arose in the debate about the vibrating string were the result of confusing two seemingly identical but actually vastly different concepts, namely that of function and that of its analytic representation. Indeed, prior to Fourier's discovery no distinction was drawn between the concepts of "function" and of "analytic representation," and it was this discovery that brought about their disconnection.

Cauchy

During the 19th century, mathematicians started to formalize all the different branches of mathematics. One of the first to do so was Cauchy; his somewhat imprecise results were later made completely rigorous by Weierstrass, who advocated building calculus on arithmetic rather than on geometry, which favoured Euler's definition over Leibniz's (see arithmetization of analysis). According to Smithies, Cauchy thought of functions as being defined by equations involving real or complex numbers, and tacitly assumed they were continuous:

Cauchy makes some general remarks about functions in Chapter I, Section 1 of his Analyse algébrique (1821). From what he says there, it is clear that he normally regards a function as being defined by an analytic expression (if it is explicit) or by an equation or a system of equations (if it is implicit); where he differs from his predecessors is that he is prepared to consider the possibility that a function may be defined only for a restricted range of the independent variable.

Lobachevsky and Dirichlet

Nikolai Lobachevsky and Peter Gustav Lejeune Dirichlet are traditionally credited with independently giving the modern "formal" definition of a function as a relation in which every first element has a unique second element.

Lobachevsky (1834) writes that

The general concept of a function requires that a function of x be defined as a number given for each x and varying gradually with x. The value of the function can be given either by an analytic expression, or by a condition that provides a means of examining all numbers and choosing one of them; or finally the dependence may exist but remain unknown.

while Dirichlet (1837) writes

If now a unique finite y corresponding to each x, and moreover in such a way that when x ranges continuously over the interval from a to b, y also varies continuously, then y is called a continuous function of x for this interval. It is not at all necessary here that y be given in terms of x by one and the same law throughout the entire interval, and it is not necessary that it be regarded as a dependence expressed using mathematical operations.

Eves asserts that "the student of mathematics usually meets the Dirichlet definition of function in his introductory course in calculus."

Dirichlet's claim to this formalization has been disputed by Imre Lakatos:

There is no such definition in Dirichlet's works at all. But there is ample evidence that he had no idea of this concept. In his [1837] paper for instance, when he discusses piecewise continuous functions, he says that at points of discontinuity the function has two values: ...

However, Gardiner says "...it seems to me that Lakatos goes too far, for example, when he asserts that 'there is ample evidence that [Dirichlet] had no idea of [the modern function] concept'." Moreover, as noted above, Dirichlet's paper does appear to include a definition along the lines of what is usually ascribed to him, even though (like Lobachevsky) he states it only for continuous functions of a real variable.

Similarly, Lavine observes that:

It is a matter of some dispute how much credit Dirichlet deserves for the modern definition of a function, in part because he restricted his definition to continuous functions....I believe Dirichlet defined the notion of continuous function to make it clear that no rule or law is required even in the case of continuous functions, not just in general. This would have deserved special emphasis because of Euler's definition of a continuous function as one given by single expression-or law. But I also doubt there is sufficient evidence to settle the dispute.

Because Lobachevsky and Dirichlet have been credited as among the first to introduce the notion of an arbitrary correspondence, this notion is sometimes referred to as the Dirichlet or Lobachevsky-Dirichlet definition of a function. A general version of this definition was later used by Bourbaki (1939), and some in the education community refer to it as the "Dirichlet–Bourbaki" definition of a function.

Dedekind

Dieudonné, who was one of the founding members of the Bourbaki group, credits a precise and general modern definition of a function to Dedekind in his work Was sind und was sollen die Zahlen, which appeared in 1888 but had already been drafted in 1878. Dieudonné observes that instead of confining himself, as in previous conceptions, to real (or complex) functions, Dedekind defines a function as a single-valued mapping between any two sets:

What was new and what was to be essential for the whole of mathematics was the entirely general conception of a function.

Hardy

Hardy 1908, pp. 26–28 defined a function as a relation between two variables x and y such that "to some values of x at any rate correspond values of y." He neither required the function to be defined for all values of x nor to associate each value of x to a single value of y. This broad definition of a function encompasses more relations than are ordinarily considered functions in contemporary mathematics. For example, Hardy's definition includes multivalued functions and what in computability theory are called partial functions.

The logician's "function" prior to 1850

Logicians of this time were primarily involved with analyzing syllogisms (the 2000-year-old Aristotelian forms and otherwise), or as Augustus De Morgan (1847) stated it: "the examination of that part of reasoning which depends upon the manner in which inferences are formed, and the investigation of general maxims and rules for constructing arguments". At this time the notion of (logical) "function" is not explicit, but at least in the work of De Morgan and George Boole it is implied: we see abstraction of the argument forms, the introduction of variables, the introduction of a symbolic algebra with respect to these variables, and some of the notions of set theory.

De Morgan's 1847 "FORMAL LOGIC OR, The Calculus of Inference, Necessary and Probable" observes that "[a] logical truth depends upon the structure of the statement, and not upon the particular matters spoken of"; he wastes no time (preface page i) abstracting: "In the form of the proposition, the copula is made as abstract as the terms". He immediately (p. 1) casts what he calls "the proposition" (present-day propositional function or relation) into a form such as "X is Y", where the symbols X, "is", and Y represent, respectively, the subject, copula, and predicate. While the word "function" does not appear, the notion of "abstraction" is there, "variables" are there, the notion of inclusion in his symbolism "all of the Δ is in the О" (p. 9) is there, and lastly a new symbolism for logical analysis of the notion of "relation" (he uses the word with respect to this example " X)Y " (p. 75) ) is there:

" A1 X)Y To take an X it is necessary to take a Y" [or To be an X it is necessary to be a Y]
" A1 Y)X To take a Y it is sufficient to take a X" [or To be a Y it is sufficient to be an X], etc.

In his 1848 The Nature of Logic Boole asserts that "logic . . . is in a more especial sense the science of reasoning by signs", and he briefly discusses the notions of "belonging to" and "class": "An individual may possess a great variety of attributes and thus belonging to a great variety of different classes". Like De Morgan he uses the notion of "variable" drawn from analysis; he gives an example of "represent[ing] the class oxen by x and that of horses by y and the conjunction and by the sign + . . . we might represent the aggregate class oxen and horses by x + y".

In the context of "the Differential Calculus" Boole defined (circa 1849) the notion of a function as follows:

"That quantity whose variation is uniform . . . is called the independent variable. That quantity whose variation is referred to the variation of the former is said to be a function of it. The Differential calculus enables us in every case to pass from the function to the limit. This it does by a certain Operation. But in the very Idea of an Operation is . . . the idea of an inverse operation. To effect that inverse operation in the present instance is the business of the Int[egral] Calculus."

The logicians' "function" 1850–1950

Eves observes "that logicians have endeavored to push down further the starting level of the definitional development of mathematics and to derive the theory of sets, or classes, from a foundation in the logic of propositions and propositional functions". But by the late 19th century the logicians' research into the foundations of mathematics was undergoing a major split. The direction of the first group, the Logicists, can probably be summed up best by Bertrand Russell 1903 – "to fulfil two objects, first, to show that all mathematics follows from symbolic logic, and secondly to discover, as far as possible, what are the principles of symbolic logic itself."

The second group of logicians, the set-theorists, emerged with Georg Cantor's "set theory" (1870–1890) but were driven forward partly as a result of Russell's discovery of a paradox that could be derived from Frege's conception of "function", but also as a reaction against Russell's proposed solution. Zermelo's set-theoretic response was his 1908 Investigations in the foundations of set theory I – the first axiomatic set theory; here too the notion of "propositional function" plays a role.

George Boole's The Laws of Thought 1854; John Venn's Symbolic Logic 1881

In his An Investigation into the laws of thought Boole now defined a function in terms of a symbol x as follows:

"8. Definition. – Any algebraic expression involving symbol x is termed a function of x, and may be represented by the abbreviated form f(x)"

Boole then used algebraic expressions to define both algebraic and logical notions, e.g., 1 − x is logical NOT(x), xy is the logical AND(x,y), x + y is the logical OR(x, y), x(x + y) is xx + xy, and "the special law" xx = x2 = x.
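Read with x and y as 0/1 truth values, these algebraic readings can be checked mechanically; the sketch below does so, with the caveat that Boole's "+" behaves as inclusive OR only when the two classes are disjoint (with x = y = 1 the sum exceeds 1), a restriction Boole is usually described as having imposed.

    from itertools import product

    for x, y in product((0, 1), repeat=2):
        assert 1 - x == (0 if x else 1)      # 1 - x behaves as NOT x
        assert x * y == (x and y)            # xy behaves as AND(x, y)
        assert x * (x + y) == x * x + x * y  # distribution: x(x + y) is xx + xy
        assert x * x == x                    # the "special law" xx = x^2 = x
    print("Boole's identities hold for x, y in {0, 1}")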

In his 1881 Symbolic Logic Venn was using the words "logical function" and the contemporary symbolism (x = f(y), y = f −1(x), cf page xxi) plus the circle-diagrams historically associated with Venn to describe "class relations", the notions "'quantifying' our predicate", "propositions in respect of their extension", "the relation of inclusion and exclusion of two classes to one another", and "propositional function" (all on p. 10), the bar over a variable to indicate not-x (page 43), etc. Indeed he equated unequivocally the notion of "logical function" with "class" [modern "set"]: "... on the view adopted in this book, f(x) never stands for anything but a logical class. It may be a compound class aggregated of many simple classes; it may be a class indicated by certain inverse logical operations, it may be composed of two groups of classes equal to one another, or what is the same thing, their difference declared equal to zero, that is, a logical equation. But however composed or derived, f(x) with us will never be anything else than a general expression for such logical classes of things as may fairly find a place in ordinary Logic".

Frege's Begriffsschrift 1879

Gottlob Frege's Begriffsschrift (1879) preceded Giuseppe Peano (1889), but Peano had no knowledge of Frege 1879 until after he had published his 1889. Both writers strongly influenced Russell (1903). Russell in turn influenced much of 20th-century mathematics and logic through his Principia Mathematica (1910–1913) jointly authored with Alfred North Whitehead.

At the outset Frege abandons the traditional "concepts subject and predicate", replacing them with argument and function respectively, which he believes "will stand the test of time. It is easy to see how regarding a content as a function of an argument leads to the formation of concepts. Furthermore, the demonstration of the connection between the meanings of the words if, and, not, or, there is, some, all, and so forth, deserves attention".

Frege begins his discussion of "function" with an example: Begin with the expression "Hydrogen is lighter than carbon dioxide". Now remove the sign for hydrogen (i.e., the word "hydrogen") and replace it with the sign for oxygen (i.e., the word "oxygen"); this makes a second statement. Do this again (using either statement) and substitute the sign for nitrogen (i.e., the word "nitrogen") and note that "This changes the meaning in such a way that "oxygen" or "nitrogen" enters into the relations in which "hydrogen" stood before". There are three statements:

  • "Hydrogen is lighter than carbon dioxide."
  • "Oxygen is lighter than carbon dioxide."
  • "Nitrogen is lighter than carbon dioxide."

Now observe in all three a "stable component, representing the totality of [the] relations"; call this the function, i.e.,

"... is lighter than carbon dioxide", is the function.

Frege calls the argument of the function "[t]he sign [e.g., hydrogen, oxygen, or nitrogen], regarded as replaceable by others that denotes the object standing in these relations". He notes that we could have derived the function as "Hydrogen is lighter than . . .." as well, with an argument position on the right; the exact observation is made by Peano (see more below). Finally, Frege allows for the case of two (or more) arguments. For example, remove "carbon dioxide" to yield the invariant part (the function) as:

  • "... is lighter than ... "

The one-argument function Frege generalizes into the form Φ(A) where A is the argument and Φ( ) represents the function, whereas the two-argument function he symbolizes as Ψ(A, B) with A and B the arguments and Ψ( , ) the function and cautions that "in general Ψ(A, B) differs from Ψ(B, A)". Using his unique symbolism he translates for the reader the following symbolism:

"We can read |--- Φ(A) as "A has the property Φ. |--- Ψ(A, B) can be translated by "B stands in the relation Ψ to A" or "B is a result of an application of the procedure Ψ to the object A".

Peano's The Principles of Arithmetic 1889

Peano defined the notion of "function" in a manner somewhat similar to Frege, but without the precision. First Peano defines the sign "K means class, or aggregate of objects", the objects of which satisfy three simple equality-conditions, a = a, (a = b) = (b = a), IF ((a = b) AND (b = c)) THEN (a = c). He then introduces φ, "a sign or an aggregate of signs such that if x is an object of the class s, the expression φx denotes a new object". Peano adds two conditions on these new objects: First, that the three equality-conditions hold for the objects φx; secondly, that "if x and y are objects of class s and if x = y, we assume it is possible to deduce φx = φy". Given all these conditions are met, φ is a "function presign". Likewise he identifies a "function postsign". For example if φ is the function presign a+, then φx yields a+x, or if φ is the function postsign +a then xφ yields x+a.

Bertrand Russell's The Principles of Mathematics 1903

While the influence of Cantor and Peano was paramount, in Appendix A "The Logical and Arithmetical Doctrines of Frege" of The Principles of Mathematics, Russell arrives at a discussion of Frege's notion of function, "...a point in which Frege's work is very important, and requires careful examination". In response to his 1902 exchange of letters with Frege about the contradiction he discovered in Frege's Begriffsschrift, Russell tacked this section on at the last moment.

For Russell the bedeviling notion is that of variable: "6. Mathematical propositions are not only characterized by the fact that they assert implications, but also by the fact that they contain variables. The notion of the variable is one of the most difficult with which logic has to deal. For the present, I only wish to make it plain that there are variables in all mathematical propositions, even where at first sight they might seem to be absent. . . . We shall find always, in all mathematical propositions, that the words any or some occur; and these words are the marks of a variable and a formal implication".

As expressed by Russell "the process of transforming constants in a proposition into variables leads to what is called generalization, and gives us, as it were, the formal essence of a proposition ... So long as any term in our proposition can be turned into a variable, our proposition can be generalized; and so long as this is possible, it is the business of mathematics to do it"; these generalizations Russell named propositional functions. Indeed he cites and quotes from Frege's Begriffsschrift and presents a vivid example from Frege's 1891 Function und Begriff: That "the essence of the arithmetical function 2x³ + x is what is left when the x is taken away, i.e., in the above instance 2( )³ + ( ). The argument x does not belong to the function but the two taken together make the whole". Russell agreed with Frege's notion of "function" in one sense: "He regards functions – and in this I agree with him – as more fundamental than predicates and relations" but Russell rejected Frege's "theory of subject and assertion", in particular "he thinks that, if a term a occurs in a proposition, the proposition can always be analysed into a and an assertion about a".

Evolution of Russell's notion of "function" 1908–1913

Russell would carry his ideas forward in his 1908 Mathematical logic as based on the theory of types and into his and Whitehead's 1910–1913 Principia Mathematica. By the time of Principia Mathematica Russell, like Frege, considered the propositional function fundamental: "Propositional functions are the fundamental kind from which the more usual kinds of function, such as "sin x" or log x or "the father of x" are derived. These derivative functions . . . are called "descriptive functions". The functions of propositions . . . are a particular case of propositional functions".

Propositional functions: Because his terminology is different from the contemporary, the reader may be confused by Russell's "propositional function". An example may help. Russell writes a propositional function in its raw form, e.g., as φŷ: "ŷ is hurt". (Observe the circumflex or "hat" over the variable y). For our example, we will assign just 4 values to the variable ŷ: "Bob", "This bird", "Emily the rabbit", and "y". Substitution of one of these values for variable ŷ yields a proposition; this proposition is called a "value" of the propositional function. In our example there are four values of the propositional function, e.g., "Bob is hurt", "This bird is hurt", "Emily the rabbit is hurt" and "y is hurt." A proposition, if it is significant—i.e., if its truth is determinate—has a truth-value of truth or falsity. If a proposition's truth value is "truth" then the variable's value is said to satisfy the propositional function. Finally, per Russell's definition, "a class [set] is all objects satisfying some propositional function" (p. 23). Note the word "all" – this is how the contemporary notions of "For all ∀" and "there exists at least one instance ∃" enter the treatment (p. 15).

To continue the example: Suppose (from outside the mathematics/logic) one determines that the propositions "Bob is hurt" has a truth value of "falsity", "This bird is hurt" has a truth value of "truth", "Emily the rabbit is hurt" has an indeterminate truth value because "Emily the rabbit" doesn't exist, and "y is hurt" is ambiguous as to its truth value because the argument y itself is ambiguous. While the two propositions "Bob is hurt" and "This bird is hurt" are significant (both have truth values), only the value "This bird" of the variable ŷ satisfies the propositional function φŷ: "ŷ is hurt". When one goes to form the class α: φŷ: "ŷ is hurt", only "This bird" is included, given the four values "Bob", "This bird", "Emily the rabbit" and "y" for variable ŷ and their respective truth-values: falsity, truth, indeterminate, ambiguous.
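A minimal sketch in Python (the predicate is_hurt and its stipulated truth-values are assumptions for the example): the propositional function becomes a predicate, and the class is the set of values that satisfy it:

    # Sketch: Russell's propositional function φŷ "ŷ is hurt" as a predicate,
    # with truth-values stipulated for the example; indeterminate/ambiguous
    # cases are simply left out of the mapping.
    def is_hurt(y):
        return {"Bob": False, "This bird": True}.get(y)

    values = ["Bob", "This bird", "Emily the rabbit", "y"]
    the_class = {y for y in values if is_hurt(y)}   # "all objects satisfying" the function
    print(the_class)                                # {'This bird'}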

Russell defines functions of propositions with arguments, and truth-functions f(p). For example, suppose one were to form the "function of propositions with arguments" p1: "NOT(p) AND q" and assign its variables the values of p: "Bob is hurt" and q: "This bird is hurt". (We are restricted to the logical linkages NOT, AND, OR and IMPLIES, and we can only assign "significant" propositions to the variables p and q). Then the "function of propositions with arguments" is p1: NOT("Bob is hurt") AND "This bird is hurt". To determine the truth value of this "function of propositions with arguments" we submit it to a "truth function", e.g., f(p1): f( NOT("Bob is hurt") AND "This bird is hurt" ), which yields a truth value of "truth".
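The same example as a small Python sketch (assumed truth-values, not Russell's notation): the "function of propositions with arguments" is built from NOT and AND, and evaluating it plays the role of the truth function:

    # Sketch: a function of propositions with arguments, NOT(p) AND q,
    # and its evaluation as a truth function.
    def f(p, q):
        return (not p) and q

    p = False        # "Bob is hurt" has truth-value falsity
    q = True         # "This bird is hurt" has truth-value truth
    print(f(p, q))   # True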

The notion of a "many-one" functional relation": Russell first discusses the notion of "identity", then defines a descriptive function (pages 30ff) as the unique value ιx that satisfies the (2-variable) propositional function (i.e., "relation") φŷ.

N.B. The reader should be warned here that the order of the variables is reversed! y is the independent variable and x is the dependent variable, e.g., x = sin(y).

Russell symbolizes the descriptive function as "the object standing in relation to y": R'y =DEF (ιx)(x R y). Russell repeats that "R'y is a function of y, but not a propositional function [sic]; we shall call it a descriptive function. All the ordinary functions of mathematics are of this kind. Thus in our notation "sin y" would be written " sin 'y ", and "sin" would stand for the relation sin 'y has to y".
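A rough Python sketch of a descriptive function (the relation data are assumed): given a many-one relation R as a set of (x, y) pairs, R'y is the unique x standing in the relation R to y:

    # Sketch: R'y = (ιx)(x R y), the unique x related to y; example pairs assumed.
    R = {(0.0, 0), (0.5, 30), (1.0, 90)}      # pairs (x, y) with x = sin(y), y in degrees

    def descriptive(R, y):
        candidates = [x for (x, y2) in R if y2 == y]
        if len(candidates) != 1:
            raise ValueError("no unique x stands in the relation R to y")
        return candidates[0]

    print(descriptive(R, 30))   # 0.5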

The formalist's "function": David Hilbert's axiomatization of mathematics (1904–1927)

David Hilbert set himself the goal of "formalizing" classical mathematics "as a formal axiomatic theory, and this theory shall be proved to be consistent, i.e., free from contradiction". In Hilbert 1927 The Foundations of Mathematics he frames the notion of function in terms of the existence of an "object":

"13. A(a) --> A(ε(A)). Here ε(A) stands for an object of which the proposition A(a) certainly holds if it holds of any object at all; let us call ε the logical ε-function". [The arrow indicates "implies".]

Hilbert then illustrates three ways in which the ε-function is to be used: first, to express the "for all" and "there exists" notions; second, to represent the "object of which [a proposition] holds"; and last, how it is cast into the choice function.

Recursion theory and computability: But the unexpected outcome of Hilbert's and his student Bernays's effort was failure; see Gödel's incompleteness theorems of 1931. At about the same time, in an effort to solve Hilbert's Entscheidungsproblem, mathematicians set about to define what was meant by an "effectively calculable function" (Alonzo Church 1936), i.e., "effective method" or "algorithm", that is, an explicit, step-by-step procedure that would succeed in computing a function. Various models for algorithms appeared, in rapid succession, including Church's lambda calculus (1936), Stephen Kleene's μ-recursive functions (1936) and Alan Turing's (1936–7) notion of replacing human "computers" with utterly-mechanical "computing machines" (see Turing machines). It was shown that all of these models could compute the same class of computable functions. Church's thesis holds that this class of functions exhausts all the number-theoretic functions that can be calculated by an algorithm. The outcomes of these efforts were vivid demonstrations that, in Turing's words, "there can be no general process for determining whether a given formula U of the functional calculus K [Principia Mathematica] is provable"; see more at Independence (mathematical logic) and Computability theory.
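One of the models mentioned above, Kleene's μ-recursion, turns on an unbounded search operator; a minimal Python sketch (illustrative only) of that μ-operator, whose search, like any model of effective calculability, need not terminate:

    # Sketch: the μ-operator -- return the least n with f(n) == 0 (may loop forever).
    def mu(f):
        n = 0
        while f(n) != 0:
            n += 1
        return n

    print(mu(lambda n: n * n - 25))   # 5, the least n with n*n - 25 == 0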

Development of the set-theoretic definition of "function"

Set theory began with the work of the logicians with the notion of "class" (modern "set") for example De Morgan (1847), Jevons (1880), Venn (1881), Frege (1879) and Peano (1889). It was given a push by Georg Cantor's attempt to define the infinite in set-theoretic treatment (1870–1890) and a subsequent discovery of an antinomy (contradiction, paradox) in this treatment (Cantor's paradox), by Russell's discovery (1902) of an antinomy in Frege's 1879 (Russell's paradox), by the discovery of more antinomies in the early 20th century (e.g., the 1897 Burali-Forti paradox and the 1905 Richard paradox), and by resistance to Russell's complex treatment of logic and dislike of his axiom of reducibility (1908, 1910–1913) that he proposed as a means to evade the antinomies.

Russell's paradox 1902

In 1902 Russell sent a letter to Frege pointing out that Frege's 1879 Begriffsschrift allowed a function to be an argument of itself: "On the other hand, it may also be that the argument is determinate and the function indeterminate . . .." From this unconstrained situation Russell was able to form a paradox:

"You state ... that a function, too, can act as the indeterminate element. This I formerly believed, but now this view seems doubtful to me because of the following contradiction. Let w be the predicate: to be a predicate that cannot be predicated of itself. Can w be predicated of itself?"

Frege responded promptly that "Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic".

From this point forward development of the foundations of mathematics became an exercise in how to dodge "Russell's paradox", framed as it was in "the bare [set-theoretic] notions of set and element".
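A loose Python analogue of the paradox (not Russell's or Frege's formalism): if a predicate is allowed to take itself as argument, the predicate "is not predicable of itself" has no stable truth-value:

    # Loose analogue only: w(p) = NOT(p(p)).  Asking for w(w) demands a value
    # that is True exactly when it is False; in Python the call recurses forever.
    def w(p):
        return not p(p)

    # w(w)   # uncommenting this line never returns a truth-value (RecursionError)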

Zermelo's set theory (1908) modified by Skolem (1922)

The notion of "function" appears as Zermelo's axiom III—the Axiom of Separation (Axiom der Aussonderung). This axiom constrains us to use a propositional function Φ(x) to "separate" a subset MΦ from a previously formed set M:

"AXIOM III. (Axiom of separation). Whenever the propositional function Φ(x) is definite for all elements of a set M, M possesses a subset MΦ containing as elements precisely those elements x of M for which Φ(x) is true".

As there is no universal set (sets originate by way of Axiom II from elements of the (non-set) domain B), "...this disposes of the Russell antinomy so far as we are concerned". But Zermelo's "definite criterion" was imprecise; it was later made precise by Weyl, Fraenkel, Skolem, and von Neumann.
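A small Python sketch (the set M and the property are assumptions for the example): a set comprehension mirrors the axiom of separation, carving a subset out of an already-given set by a definite propositional function:

    # Sketch: separation -- the subset MΦ of a previously formed set M
    # consisting of precisely those elements for which Φ(x) is true.
    M = {0, 1, 2, 3, 4, 5, 6}

    def phi(x):               # a "definite" property: x is even
        return x % 2 == 0

    M_phi = {x for x in M if phi(x)}
    print(M_phi)              # {0, 2, 4, 6}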

In fact Skolem in his 1922 referred to this "definite criterion" or "property" as a "definite proposition":

"... a finite expression constructed from elementary propositions of the form a ε b or a = b by means of the five operations [logical conjunction, disjunction, negation, universal quantification, and existential quantification].

van Heijenoort summarizes:

"A property is definite in Skolem's sense if it is expressed . . . by a well-formed formula in the simple predicate calculus of first order in which the sole predicate constants are ε and possibly, =. ... Today an axiomatization of set theory is usually embedded in a logical calculus, and it is Weyl's and Skolem's approach to the formulation of the axiom of separation that is generally adopted.

In this quote the reader may observe a shift in terminology: nowhere is mentioned the notion of "propositional function", but rather one sees the words "formula", "predicate calculus", "predicate", and "logical calculus." This shift in terminology is discussed more in the section that covers "function" in contemporary set theory.

The Wiener–Hausdorff–Kuratowski "ordered pair" definition 1914–1921

The history of the notion of "ordered pair" is not clear. As noted above, Frege (1879) proposed an intuitive ordering in his definition of a two-argument function Ψ(A, B). Norbert Wiener in his 1914 (see below) observes that his own treatment essentially "revert(s) to Schröder's treatment of a relation as a class of ordered couples". Russell (1903) considered the definition of a relation (such as Ψ(A, B)) as a "class of couples" but rejected it:

"There is a temptation to regard a relation as definable in extension as a class of couples. This is the formal advantage that it avoids the necessity for the primitive proposition asserting that every couple has a relation holding between no other pairs of terms. But it is necessary to give sense to the couple, to distinguish the referent [domain] from the relatum [converse domain]: thus a couple becomes essentially distinct from a class of two terms, and must itself be introduced as a primitive idea. . . . It seems therefore more correct to take an intensional view of relations, and to identify them rather with class-concepts than with classes."

By 1910–1913 and Principia Mathematica Russell had given up on the requirement for an intensional definition of a relation, stating that "mathematics is always concerned with extensions rather than intensions" and "Relations, like classes, are to be taken in extension". To demonstrate the notion of a relation in extension Russell now embraced the notion of ordered couple: "We may regard a relation ... as a class of couples ... the relation determined by φ(x, y) is the class of couples (x, y) for which φ(x, y) is true". In a footnote he clarified his notion and arrived at this definition:

"Such a couple has a sense, i.e., the couple (x, y) is different from the couple (y, x) unless x = y. We shall call it a "couple with sense," ... it may also be called an ordered couple

But he goes on to say that he would not introduce the ordered couples further into his "symbolic treatment"; he proposes his "matrix" and his unpopular axiom of reducibility in their place.

An attempt to solve the problem of the antinomies led Russell to propose his "doctrine of types" in Appendix B of his 1903 The Principles of Mathematics. In a few years he would refine this notion and propose in his 1908 The Theory of Types two axioms of reducibility, the purpose of which was to reduce (single-variable) propositional functions and (dual-variable) relations to a "lower" form (and ultimately into a completely extensional form); he and Alfred North Whitehead would carry this treatment over to Principia Mathematica 1910–1913 with a further refinement called "a matrix". The first axiom is *12.1; the second is *12.11. To quote Wiener the second axiom *12.11 "is involved only in the theory of relations". Both axioms, however, were met with skepticism and resistance; see more at Axiom of reducibility. By 1914 Norbert Wiener, using Whitehead and Russell's symbolism, eliminated axiom *12.11 (the "two-variable" (relational) version of the axiom of reducibility) by expressing a relation as an ordered pair using the null set. At approximately the same time, Hausdorff (1914, p. 32) gave the definition of the ordered pair (a, b) as {{a, 1}, {b, 2}}. A few years later Kuratowski (1921) offered a definition that has been widely used ever since, namely {{a, b}, {a}}. As noted by Suppes (1960), "This definition . . . was historically important in reducing the theory of relations to the theory of sets".
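A minimal Python sketch of the Kuratowski definition (frozensets stand in for pure sets): order is recovered from unordered sets alone, so the pair "has a sense":

    # Sketch: the Kuratowski ordered pair (a, b) = {{a}, {a, b}} built from frozensets.
    def kuratowski(a, b):
        return frozenset({frozenset({a}), frozenset({a, b})})

    print(kuratowski(1, 2) == kuratowski(2, 1))   # False -- (1, 2) differs from (2, 1)
    print(kuratowski(1, 2) == kuratowski(1, 2))   # True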

Observe that while Wiener "reduced" the relational *12.11 form of the axiom of reducibility he did not reduce nor otherwise change the propositional-function form *12.1; indeed he declared this "essential to the treatment of identity, descriptions, classes and relations".

Schönfinkel's notion of "function" as a many-one "correspondence" 1924

Where exactly the general notion of "function" as a many-one correspondence derives from is unclear. Russell in his 1920 Introduction to Mathematical Philosophy states that "It should be observed that all mathematical functions result from one-many [sic – contemporary usage is many-one] relations . . . Functions in this sense are descriptive functions". A reasonable possibility is the Principia Mathematica notion of "descriptive function" – R'y =DEF (ιx)(x R y): "the singular object that has a relation R to y". Whatever the case, by 1924, Moses Schönfinkel expressed the notion, claiming it to be "well known":

"As is well known, by function we mean in the simplest case a correspondence between the elements of some domain of quantities, the argument domain, and those of a domain of function values ... such that to each argument value there corresponds at most one function value".

According to Willard Quine, Schönfinkel 1924 "provide[s] for ... the whole sweep of abstract set theory. The crux of the matter is that Schönfinkel lets functions stand as arguments. For Schönfinkel, substantially as for Frege, classes are special sorts of functions. They are propositional functions, functions whose values are truth values. All functions, propositional and otherwise, are for Schönfinkel one-place functions". Remarkably, Schönfinkel reduces all mathematics to an extremely compact functional calculus consisting of only three functions: Constancy, fusion (i.e., composition), and mutual exclusivity. Quine notes that Haskell Curry (1958) carried this work forward "under the head of combinatory logic".
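A brief Python sketch (illustrative, not Schönfinkel's notation) of the two ideas Quine highlights: functions standing as arguments, and every many-place function recast as one-place functions, together with the constancy combinator:

    # Sketch: "currying" a two-place function into one-place functions whose
    # values are again functions, plus the constancy combinator K with K(x)(y) = x.
    def K(x):
        return lambda y: x

    def curry(f):
        return lambda x: lambda y: f(x, y)

    add = curry(lambda x, y: x + y)
    print(add(2)(3))            # 5
    print(K(7)("anything"))     # 7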

Von Neumann's set theory 1925

By 1925 Abraham Fraenkel (1922) and Thoralf Skolem (1922) had amended Zermelo's set theory of 1908. But von Neumann was not convinced that this axiomatization ruled out the antinomies. So he proposed his own theory, his 1925 An axiomatization of set theory. It explicitly contains a "contemporary" set-theoretic version of the notion of "function":

"[Unlike Zermelo's set theory] [w]e prefer, however, to axiomatize not "set" but "function". The latter notion certainly includes the former. (More precisely, the two notions are completely equivalent, since a function can be regarded as a set of pairs, and a set as a function that can take two values.)".

At the outset he begins with I-objects and II-objects, two objects A and B that are I-objects (first axiom), and two types of "operations" that assume ordering as a structural property obtained of the resulting objects [x, y] and (x, y). The two "domains of objects" are called "arguments" (I-objects) and "functions" (II-objects); where they overlap are the "argument functions" (he calls them I-II objects). He introduces two "universal two-variable operations" – (i) the operation [x, y]: ". . . read 'the value of the function x for the argument y' . . . it itself is a type I object", and (ii) the operation (x, y): ". . . (read 'the ordered pair x, y') whose variables x and y must both be arguments and that itself produces an argument (x, y). Its most important property is that x1 = x2 and y1 = y2 follow from (x1, y1) = (x2, y2)". To clarify the function pair he notes that "Instead of f(x) we write [f, x] to indicate that f, just like x, is to be regarded as a variable in this procedure". To avoid the "antinomies of naive set theory, in Russell's first of all . . . we must forgo treating certain functions as arguments". He adopts a notion from Zermelo to restrict these "certain functions".
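A small Python sketch (illustrative data assumed) of the parenthetical remark above: a function regarded as a set of pairs, and a set regarded as a function that can take two values (its characteristic function):

    # Sketch: sets and functions interchanged, as in von Neumann's remark.
    domain = range(10)

    evens_as_set = {0, 2, 4, 6, 8}
    evens_as_function = {x: (x in evens_as_set) for x in domain}   # two-valued function

    def square(x):
        return x * x
    square_as_pairs = {(x, square(x)) for x in domain}             # function as a set of pairs

    print(evens_as_function[3], evens_as_function[4])   # False True
    print((3, 9) in square_as_pairs)                    # True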

Suppes observes that von Neumann's axiomatization was modified by Bernays "in order to remain nearer to the original Zermelo system . . . He introduced two membership relations: one between sets, and one between sets and classes". Then Gödel [1940] further modified the theory: "his primitive notions are those of set, class and membership (although membership alone is sufficient)". This axiomatization is now known as von Neumann–Bernays–Gödel set theory.

Bourbaki 1939

In 1939, Bourbaki, in addition to giving the well-known ordered pair definition of a function as a certain subset of the cartesian product E × F, gave the following:

"Let E and F be two sets, which may or may not be distinct. A relation between a variable element x of E and a variable element y of F is called a functional relation in y if, for all xE, there exists a unique yF which is in the given relation with x. We give the name of function to the operation which in this way associates with every element xE the element yF which is in the given relation with x, and the function is said to be determined by the given functional relation. Two equivalent functional relations determine the same function."

Since 1950

Notion of "function" in contemporary set theory

Both axiomatic and naive forms of Zermelo's set theory as modified by Fraenkel (1922) and Skolem (1922) define "function" as a relation, define a relation as a set of ordered pairs, and define an ordered pair as a set of two "dissymmetric" sets.

A reader of Suppes (1960) Axiomatic Set Theory or Halmos (1970) Naive Set Theory will observe the use of function-symbolism in the axiom of separation, e.g., φ(x) (in Suppes) and S(x) (in Halmos), but will see no mention of "proposition" or even "first order predicate calculus". In their place are "expressions of the object language", "atomic formulae", "primitive formulae", and "atomic sentences".

Kleene (1952) defines the words as follows: "In word languages, a proposition is expressed by a sentence. Then a 'predicate' is expressed by an incomplete sentence or sentence skeleton containing an open place. For example, "___ is a man" expresses a predicate ... The predicate is a propositional function of one variable. Predicates are often called 'properties' ... The predicate calculus will treat of the logic of predicates in this general sense of 'predicate', i.e., as propositional function".

In 1954, Bourbaki, on p. 76 in Chapitre II of Théorie des ensembles (theory of sets), gave a definition of a function as a triple f = (F, A, B). Here F is a functional graph, meaning a set of pairs where no two pairs have the same first member. On p. 77 (op. cit.) Bourbaki states (literal translation): "Often we shall use, in the remainder of this Treatise, the word function instead of functional graph."

Suppes (1960) in Axiomatic Set Theory, formally defines a relation (p. 57) as a set of pairs, and a function (p. 86) as a relation where no two pairs have the same first member.
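A minimal Python check (illustrative) of this definition: a relation is a set of pairs, and it is a function exactly when no two of its pairs share a first member:

    # Sketch: Suppes-style test that a relation (set of pairs) is a function.
    def is_function(relation):
        firsts = [x for (x, _) in relation]
        return len(firsts) == len(set(firsts))

    print(is_function({(1, "a"), (2, "b")}))            # True
    print(is_function({(1, "a"), (1, "b"), (2, "c")}))  # False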

Relational form of a function

The reason for the disappearance of the words "propositional function" (e.g., in Suppes (1960) and Halmos (1970)) is explained by Tarski (1946), together with further explanation of the terminology:

"An expression such as x is an integer, which contains variables and, on replacement of these variables by constants becomes a sentence, is called a SENTENTIAL [i.e., propositional cf his index] FUNCTION. But mathematicians, by the way, are not very fond of this expression, because they use the term "function" with a different meaning. ... sentential functions and sentences composed entirely of mathematical symbols (and not words of everyday language), such as: x + y = 5 are usually referred to by mathematicians as FORMULAE. In place of "sentential function" we shall sometimes simply say "sentence" – but only in cases where there is no danger of any misunderstanding".

For his part Tarski calls the relational form of function a "FUNCTIONAL RELATION or simply a FUNCTION". After a discussion of this "functional relation" he asserts that:

"The concept of a function which we are considering now differs essentially from the concepts of a sentential [propositional] and of a designatory function .... Strictly speaking ... [these] do not belong to the domain of logic or mathematics; they denote certain categories of expressions which serve to compose logical and mathematical statements, but they do not denote things treated of in those statements... . The term "function" in its new sense, on the other hand, is an expression of a purely logical character; it designates a certain type of things dealt with in logic and mathematics."

See more about "truth under an interpretation" at Alfred Tarski.
