
Wednesday, July 30, 2025

History of randomness

From Wikipedia, the free encyclopedia
Ancient fresco of dice players in Pompeii

In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. At the same time, most ancient cultures used various methods of divination to attempt to circumvent randomness and fate. Beyond religion and games of chance, randomness has been used for sortition since at least the time of ancient Athenian democracy, in the form of the kleroterion.

Odds and chance were perhaps first formalized by the Chinese some 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative form. It was only in the sixteenth century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of modern calculus gave further impetus to the formal study of randomness. In the 19th century the concept of entropy was introduced in physics.

The early part of the twentieth century saw a rapid growth in the formal analysis of randomness, and mathematical foundations for probability were introduced, leading to its axiomatization in 1933. At the same time, the advent of quantum mechanics changed the scientific perspective on determinacy. In the mid to late 20th-century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness.

Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the twentieth century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms are able to outperform the best deterministic methods.

Antiquity to the Middle Ages

Depiction of the Roman goddess Fortuna, who determined fate, by Hans Beham, 1541

Pre-Christian people along the Mediterranean threw dice to determine fate, and this later evolved into games of chance. There is also evidence of games of chance played by ancient Egyptians, Hindus and Chinese, dating back to 2100 BC. The Chinese used dice before the Europeans, and have a long history of playing games of chance.

Over 3,000 years ago, problems concerning the tossing of several coins were considered in the I Ching, one of the oldest Chinese mathematical texts, which probably dates to 1150 BC. The two principal elements, yin and yang, were combined in the I Ching in various forms to produce heads-and-tails permutations of the type HH, TH, HT, etc., and the Chinese seem to have been aware of Pascal's triangle long before the Europeans formalized it in the 17th century. Western philosophy, however, focused on the non-mathematical aspects of chance and randomness until the 16th century.
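The arithmetic behind such coin-toss patterns is easy to check. As a purely illustrative sketch in Python (not part of the original article), the following snippet enumerates all outcomes of tossing three coins and confirms that the counts grouped by number of heads reproduce a row of Pascal's triangle:

from itertools import product
from math import comb

n = 3  # number of coins tossed
outcomes = list(product("HT", repeat=n))  # HHH, HHT, HTH, ... (2**n equally likely outcomes)

# Count outcomes by number of heads; these counts are the binomial coefficients C(n, k),
# i.e. row n of Pascal's triangle.
counts = [sum(1 for o in outcomes if o.count("H") == k) for k in range(n + 1)]
print(counts)                               # [1, 3, 3, 1]
print([comb(n, k) for k in range(n + 1)])   # the same row of Pascal's triangle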

The development of the concept of chance throughout history has been very gradual. Historians have wondered why progress in the field of randomness was so slow, given that humans have encountered chance since antiquity. Deborah J. Bennett suggests that ordinary people face an inherent difficulty in understanding randomness, although the concept is often taken as being obvious and self-evident. She cites studies by Kahneman and Tversky; these concluded that statistical principles are not learned from everyday experience because people do not attend to the detail necessary to gain such knowledge.

The Greek philosophers were the earliest Western thinkers to address chance and randomness. Around 400 BC, Democritus presented a view of the world as governed by unambiguous laws of order and considered randomness a subjective concept that originated only from the inability of humans to understand the nature of events. He used the example of two masters who each send their servants to fetch water at the same time in order to make them meet; the servants, unaware of the plan, would view the meeting as random.

Aristotle saw chance and necessity as opposite forces. He argued that nature had rich and constant patterns that could not be the result of chance alone, but that these patterns never displayed the machine-like uniformity of necessary determinism. He viewed randomness as a genuine and widespread part of the world, but as subordinate to necessity and order. Aristotle classified events into three types: certain events that happen necessarily; probable events that happen in most cases; and unknowable events that happen by pure chance. He considered the outcome of games of chance as unknowable.

Around 300 BC Epicurus proposed the concept that randomness exists by itself, independent of human knowledge. He believed that in the atomic world, atoms would swerve at random along their paths, bringing about randomness at higher levels.

Hotei, the deity of fortune, observing a cockfight in a 16th-century Japanese print

For several centuries thereafter, the idea of chance continued to be intertwined with fate. Divination was practiced in many cultures, using diverse methods. The Chinese analyzed the cracks in turtle shells, while the Germans, who according to Tacitus had the highest regard for lots and omens, used strips of bark. In the Roman Empire, chance was personified by the goddess Fortuna. The Romans would partake in games of chance to simulate what Fortuna would have decided. In 49 BC, Julius Caesar allegedly made his fateful decision to cross the Rubicon after throwing dice.

Aristotle's classification of events into three classes (certain, probable and unknowable) was adopted by Roman philosophers, but they had to reconcile it with deterministic Christian teachings in which even events unknowable to man were considered to be predetermined by God. About 960, Bishop Wibold of Cambrai correctly enumerated the 56 different outcomes (ignoring order) of playing with three dice. No reference to playing cards has been found in Europe before 1350. The Church preached against card playing, and card games spread much more slowly than games based on dice. The Christian Church specifically forbade divination, and wherever Christianity went, divination lost most of its old-time power.
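Wibold's figure can be verified directly: the outcomes of three dice with order ignored are the combinations with repetition of three values from six faces, C(6 + 3 - 1, 3) = 56. A minimal Python check (illustrative only):

from itertools import combinations_with_replacement
from math import comb

# Unordered outcomes of three six-sided dice (order ignored, repeated faces allowed)
outcomes = list(combinations_with_replacement(range(1, 7), 3))
print(len(outcomes))        # 56
print(comb(6 + 3 - 1, 3))   # 56, the multiset-coefficient formula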

Over the centuries, many Christian scholars wrestled with the conflict between the belief in free will and its implied randomness, and the idea that God knows everything that happens. Saints Augustine and Aquinas tried to reach an accommodation between foreknowledge and free will, but Martin Luther argued against randomness and took the position that God's omniscience renders human actions unavoidable and determined. In the 13th century, Thomas Aquinas viewed randomness not as the result of a single cause, but of several causes coming together by chance. While he believed in the existence of randomness, he rejected it as an explanation of the end-directedness of nature, for he saw too many patterns in nature to have been obtained by chance.

The Greeks and Romans had paid no attention to the relative frequencies of outcomes in games of chance. For centuries, chance was discussed in Europe without any mathematical foundation, and it was only in the 16th century that Italian mathematicians began to discuss the outcomes of games of chance as ratios. In his 1565 Liber de Ludo Aleae (a gambler's manual published after his death), Gerolamo Cardano wrote one of the first formal tracts analyzing the odds of winning at various games.

17th–19th centuries

Statue of Blaise Pascal, Louvre

Around 1620 Galileo wrote a paper called On a discovery concerning dice that used an early probabilistic model to address specific questions. In 1654, prompted by Chevalier de Méré's interest in gambling, Blaise Pascal corresponded with Pierre de Fermat, and much of the groundwork for probability theory was laid. Pascal's Wager was noted for its early use of the concept of infinity, and the first formal use of decision theory. The work of Pascal and Fermat influenced Leibniz's work on the infinitesimal calculus, which in turn provided further momentum for the formal analysis of probability and randomness.

The first known suggestion for viewing randomness in terms of complexity was made by Leibniz in an obscure 17th-century document discovered after his death. Leibniz asked how one could know whether a set of points on a piece of paper had been selected at random (e.g. by splattering ink) or not. Given that for any finite set of points there is always some mathematical equation that can describe them (e.g. by Lagrange interpolation), the question turns on how the points can be expressed mathematically. Leibniz viewed the points as random if the function describing them had to be extremely complex. Three centuries later, the same concept was formalized as algorithmic randomness by A. N. Kolmogorov and Gregory Chaitin: the complexity of a finite string is the minimal length of a computer program needed to produce it, and a string is regarded as random when no program significantly shorter than the string itself can do so.

The Doctrine of Chances, the first textbook on probability theory, was published in 1718, and the field continued to grow thereafter. The frequency-theory approach to probability was first developed by Robert Ellis and John Venn late in the 19th century.

The Fortune Teller by Vouet, 1617

While the mathematical elite was making progress in understanding randomness from the 17th to the 19th century, the public at large continued to rely on practices such as fortune telling in the hope of taming chance. Fortunes were told in a multitude of ways both in the Orient (where fortune telling was later termed an addiction) and in Europe by gypsies and others. English practices such as the reading of eggs dropped in a glass were exported to Puritan communities in North America.

"I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error." The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along. The tops of the marshalled row form a flowing curve of invariable proportions; and each element, as it is sorted into place, finds, as it were, a pre-ordained niche, accurately adapted to fit it."

Galton (1894)

The term entropy, which is now a key element in the study of randomness, was coined by Rudolf Clausius in 1865 as he studied heat engines in the context of the second law of thermodynamics. Clausius was the first to state "entropy always increases".

From the time of Newton until about 1890, it was generally believed that if one knows the initial state of a system with great accuracy, and if all the forces acting on the system can be formulated with equal accuracy, it would be possible, in principle, to make predictions of the state of the universe for an infinitely long time. The limits to such predictions in physical systems became clear as early as 1893 when Henri Poincaré showed that in the three-body problem in astronomy, small changes to the initial state could result in large changes in trajectories during the numerical integration of the equations.

During the 19th century, as probability theory was formalized and better understood, the attitude towards "randomness as nuisance" began to be questioned. Goethe wrote:

The tissue of the world is built from necessities and randomness; the intellect of men places itself between both and can control them; it considers the necessity and the reason of its existence; it knows how randomness can be managed, controlled, and used.

The words of Goethe proved prophetic, when in the 20th century randomized algorithms were discovered as powerful tools. By the end of the 19th century, Newton's model of a mechanical universe was fading away as the statistical view of the collision of molecules in gases was studied by Maxwell and Boltzmann. Boltzmann's equation S = k log_e W (inscribed on his tombstone) first related entropy with logarithms.

20th century

Antony Gormley's Quantum Cloud sculpture in London was designed by a computer using a random walk algorithm.

During the 20th century, the five main interpretations of probability theory (classical, logical, frequency, propensity and subjective) became better understood, and were discussed, compared and contrasted. A significant number of application areas were developed in this century, from finance to physics. In 1900 Louis Bachelier applied Brownian motion to evaluate stock options, effectively launching the fields of financial mathematics and stochastic processes.

Émile Borel was one of the first mathematicians to formally address randomness in 1909, and introduced normal numbers. In 1919 Richard von Mises gave the first definition of algorithmic randomness via the impossibility of a gambling system. He advanced the frequency theory of randomness in terms of what he called the collective, i.e. a random sequence. Von Mises regarded the randomness of a collective as an empirical law, established by experience. He related the "disorder" or randomness of a collective to the lack of success of attempted gambling systems. This approach led him to suggest a definition of randomness that was later refined and made mathematically rigorous by Alonzo Church by using computable functions in 1940. Von Mises likened the principle of the impossibility of a gambling system to the principle of the conservation of energy, a law that cannot be proven, but has held true in repeated experiments.

Von Mises never totally formalized his rules for sub-sequence selection, but in his 1940 paper "On the concept of random sequence", Alonzo Church suggested that the functions used for place settings in the formalism of von Mises be computable functions rather than arbitrary functions of the initial segments of the sequence, appealing to the Church–Turing thesis on effectiveness.

The advent of quantum mechanics in the early 20th century and the formulation of the Heisenberg uncertainty principle in 1927 saw the end to the Newtonian mindset among physicists regarding the determinacy of nature. In quantum mechanics, there is not even a way to consider all observable elements in a system as random variables at once, since many observables do not commute.

Café Central, one of the early meeting places of the Vienna Circle

By the early 1940s, the frequency theory approach to probability was well accepted within the Vienna Circle, but in the 1950s Karl Popper proposed the propensity theory. Given that the frequency approach cannot deal with "a single toss" of a coin, and can only address large ensembles or collectives, single-case probabilities were treated as propensities or chances. The concept of propensity was also driven by the desire to handle single-case probability settings in quantum mechanics, e.g. the probability of decay of a specific atom at a specific moment. In more general terms, the frequency approach cannot deal with the probability of the death of a specific person, since that death cannot be repeated multiple times. Karl Popper echoed the same sentiment as Aristotle in viewing randomness as subordinate to order when he wrote that "the concept of chance is not opposed to the concept of law" in nature, provided one considers the laws of chance.

Claude Shannon's development of information theory in 1948 gave rise to the entropy view of randomness. In this view, randomness is the opposite of determinism in a stochastic process. Hence if a stochastic system has zero entropy it has no randomness, and any increase in entropy increases randomness. Shannon's formulation reduces to Boltzmann's 19th-century formulation of entropy when all probabilities are equal. Entropy is now widely used in diverse fields of science, from thermodynamics to quantum chemistry.
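As a brief numerical illustration (not drawn from the article), Shannon entropy H = -Σ p_i log2 p_i is zero for a deterministic distribution and reaches its maximum, log2 N, for a uniform distribution over N outcomes, which is the case in which it coincides, up to the choice of constant and logarithm base, with Boltzmann's counting formulation:

import math

def shannon_entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2 p), skipping zero-probability terms
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0  -> no randomness
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0  -> log2(4), the uniform maximum
print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 -> between the two extremes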

Martingales for the study of chance and betting strategies were introduced by Paul Lévy in the 1930s and were formalized by Joseph L. Doob in the 1950s. The application of random walk hypothesis in financial theory was first proposed by Maurice Kendall in 1953. It was later promoted by Eugene Fama and Burton Malkiel.

Random strings were first studied in the 1960s by A. N. Kolmogorov (who had provided the first axiomatic definition of probability theory in 1933), Chaitin and Martin-Löf. The algorithmic randomness of a string was defined as the minimum size of a program (e.g. in bits) executed on a universal computer that yields the string. Chaitin's Omega number later related randomness and the halting probability for programs.
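Kolmogorov complexity itself is uncomputable, but a common hedged illustration uses a general-purpose compressor as a rough upper bound on description length: a highly patterned string compresses to far fewer bytes than a string drawn from a good random source. The Python sketch below is only such an approximation, not the formal definition:

import os
import zlib

def compressed_size(data):
    # Length of the zlib-compressed data: a crude upper bound on descriptive complexity
    return len(zlib.compress(data, 9))

patterned = b"01" * 5000          # a highly regular 10,000-byte string
random_like = os.urandom(10000)   # 10,000 bytes from the operating system's randomness source

print(compressed_size(patterned))    # small: the pattern admits a short description
print(compressed_size(random_like))  # close to 10,000: little compressible structure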

In 1964, Benoît Mandelbrot suggested that most statistical models approached only a first stage of dealing with indeterminism, and that they ignored many aspects of real-world turbulence. In his 1997 book he defined seven states of randomness ranging from "mild to wild", with traditional randomness being at the mild end of the scale.

Despite mathematical advances, reliance on other methods of dealing with chance, such as fortune telling and astrology continued in the 20th century. The government of Myanmar reportedly shaped 20th century economic policy based on fortune telling and planned the move of the capital of the country based on the advice of astrologers. White House Chief of Staff Donald Regan criticized the involvement of astrologer Joan Quigley in decisions made during Ronald Reagan's presidency in the 1980s. Quigley claims to have been the White House astrologer for seven years.

During the 20th century, limits in dealing with randomness were better understood. The best-known example of both theoretical and operational limits on predictability is weather forecasting, simply because models have been used in the field since the 1950s. Predictions of weather and climate are necessarily uncertain. Observations of weather and climate are uncertain and incomplete, and the models into which the data are fed are uncertain. In 1961, Edward Lorenz noticed that a very small change to the initial data submitted to a computer program for weather simulation could result in a completely different weather scenario. This later became known as the butterfly effect, often paraphrased as the question: "Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?". A key example of serious practical limits on predictability is in geology, where the ability to predict earthquakes either on an individual or on a statistical basis remains a remote prospect.
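Lorenz's observation is easy to reproduce numerically. The sketch below, an illustration using the standard parameter values σ = 10, ρ = 28, β = 8/3 rather than code from any cited source, integrates his three-variable system twice with initial conditions differing by one part in a million and prints the growing separation:

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz system; crude, but enough to show the divergence
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)        # reference initial state
b = (1.000001, 1.0, 1.0)   # perturbed by one part in a million

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(step, round(gap, 6))  # the separation grows by orders of magnitude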

In the late 1970s and early 1980s, computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms outperform the best deterministic methods.
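A classic example of the idea, offered here as an illustration rather than anything discussed in the article, is Freivalds' algorithm: to check whether A·B = C for n×n matrices, multiply both sides by a random 0/1 vector and compare A(Br) with Cr. Each round costs only about n² operations and wrongly accepts an incorrect C with probability at most 1/2, so a handful of repetitions gives high confidence, whereas the obvious deterministic check recomputes the entire product:

import random

def mat_vec(m, v):
    # Multiply an n x n matrix (list of rows) by a vector
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def freivalds(a, b, c, rounds=20):
    # Probabilistically check whether a times b equals c using random 0/1 vectors.
    # A wrong "True" survives each round with probability at most 1/2.
    n = len(a)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(a, mat_vec(b, r)) != mat_vec(c, r):
            return False  # definitely not equal
    return True           # equal with probability at least 1 - 2**(-rounds)

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]
bad = [[19, 22], [43, 51]]
print(freivalds(a, b, good), freivalds(a, b, bad))  # True False (with high probability)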

Consilience

From Wikipedia, the free encyclopedia

In science and history, consilience (also convergence of evidence or concordance of evidence) is the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will probably not be a strong scientific consensus.

The principle is based on unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures distances within the Giza pyramid complex by laser rangefinding, by satellite imaging, or with a metre-stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.

The word consilience was originally coined as the phrase "consilience of inductions" by William Whewell (consilience refers to a "jumping together" of knowledge). The word comes from Latin com- "together" and -siliens "jumping" (as in resilience).

Description

Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser range-finding measurements is based on the scientific understanding of lasers, while satellite pictures and metre-sticks (or yardsticks) rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate, but the others would not be.

As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements and will also tend to cancel out since the direction of the error will still be random. This is how scientific theories reach high confidence—over time, they build up a large degree of evidence which converges on the same conclusion.
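A small simulation makes the claim concrete. Using invented noise levels and an arbitrary true value (nothing here comes from the article), three independent "methods" measure the same distance with different random errors, and averaging pulls each method, and their combination, toward the true value:

import random
import statistics

TRUE_DISTANCE = 230.33  # metres; an arbitrary illustrative value

def measure(noise_sd, n):
    # Simulate n independent readings of the true distance with Gaussian noise
    return [random.gauss(TRUE_DISTANCE, noise_sd) for _ in range(n)]

# Three hypothetical independent methods with different precisions
laser = measure(noise_sd=0.05, n=100)
satellite = measure(noise_sd=0.50, n=100)
tape = measure(noise_sd=0.20, n=100)

for name, readings in [("laser", laser), ("satellite", satellite), ("tape", tape)]:
    print(name, round(statistics.mean(readings), 3))  # each mean lands close to 230.33

# A systematic error in one method would show up as a persistent offset between its mean
# and the other two, which is exactly the disagreement consilience relies on detecting.
print("combined", round(statistics.mean(laser + satellite + tape), 3))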

When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years old (a conflict resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics); another example is the current attempt to resolve theoretical differences between quantum mechanics and general relativity.

Significance

Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields because the techniques are usually very different.

For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error. The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.

Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.

Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.

Deviations

Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.

Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.

In history

Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar's civil war occurred, and so forth.

Consilience has also been discussed in reference to Holocaust denial.

"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole."

That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.

Outside the sciences

In addition to the sciences, consilience can be important to the arts, ethics and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.

History of the concept

Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment.

Whewell's definition was that:

The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.

More recent descriptions include:

"Where there is a convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation."

"Proof is derived through a convergence of evidence from numerous lines of inquiry—multiple, independent inductions, all of which point to an unmistakable conclusion."

Edward O. Wilson

Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the author and biologist E. O. Wilson, as an attempt to bridge the cultural gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959). Wilson believed that "the humanities, ranging from philosophy and history to moral reasoning, comparative religion, and interpretation of the arts, will draw closer to the sciences and partly fuse with them", with the result that science and the scientific method, from within this fusion, would not only explain physical phenomena but also provide moral guidance and be the ultimate source of all truths.

Wilson held that with the rise of the modern sciences, the sense of unity was gradually lost in the increasing fragmentation and specialization of knowledge over the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." An important point made by Wilson is that hereditary human nature and evolution itself profoundly affect the evolution of culture, in essence a sociobiological concept. Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.

A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.

Problem of induction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Problem_of_induction
Usually inferred from repeated observations: "The sun always rises in the east."
Usually not inferred from repeated observations: "When someone dies, it's never me."

The problem of induction is a philosophical problem that questions the rationality of predictions about unobserved things based on previous observations. These inferences from the observed to the unobserved are known as "inductive inferences". David Hume, who first formulated the problem in 1739, argued that there is no non-circular way to justify inductive inferences, while he acknowledged that everyone does and must make such inferences.

The traditional inductivist view is that all claimed empirical laws, either in everyday life or through the scientific method, can be justified through some form of reasoning. The problem is that many philosophers tried to find such a justification but their proposals were not accepted by others. Identifying the inductivist view as the scientific view, C. D. Broad once said that induction is "the glory of science and the scandal of philosophy". In contrast, Karl Popper's critical rationalism claimed that inductive justifications are never used in science and proposed instead that science is based on the procedure of conjecturing hypotheses, deductively calculating consequences, and then empirically attempting to falsify them.

Formulation of the problem

In inductive reasoning, one makes a series of observations and infers a claim based on them. For instance, from a series of observations that a woman walks her dog by the market at 8 am on Monday, it seems valid to infer that next Monday she will do the same, or that, in general, the woman walks her dog by the market every Monday. That next Monday the woman walks by the market merely adds to the series of observations, but it does not prove she will walk by the market every Monday. First of all, it is not certain, regardless of the number of observations, that the woman always walks by the market at 8 am on Monday. In fact, David Hume even argued that we cannot claim it is "more probable", since this still requires the assumption that the past predicts the future.

Second, the observations themselves do not establish the validity of inductive reasoning, except inductively. Bertrand Russell illustrated this point in The Problems of Philosophy:

Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

Ancient and early modern origins

Pyrrhonism

The works of the Pyrrhonist philosopher Sextus Empiricus contain the oldest surviving questioning of the validity of inductive reasoning. He wrote:

It is also easy, I consider, to set aside the method of induction. For, when they propose to establish the universal from the particulars by means of induction, they will effect this by a review either of all or of some of the particular instances. But if they review some, the induction will be insecure, since some of the particulars omitted in the induction may contravene the universal; while if they are to review all, they will be toiling at the impossible, since the particulars are infinite and indefinite. Thus on both grounds, as I think, the consequence is that induction is invalidated.

The focus upon the gap between the premises and conclusion present in the above passage appears different from Hume's focus upon the circular reasoning of induction. However, Weintraub claims in The Philosophical Quarterly that although Sextus's approach to the problem appears different, Hume's approach was actually an application of another argument raised by Sextus:

Those who claim for themselves to judge the truth are bound to possess a criterion of truth. This criterion, then, either is without a judge's approval or has been approved. But if it is without approval, whence comes it that it is trustworthy? For no matter of dispute is to be trusted without judging. And, if it has been approved, that which approves it, in turn, either has been approved or has not been approved, and so on ad infinitum.

Although the criterion argument applies to both deduction and induction, Weintraub believes that Sextus's argument "is precisely the strategy Hume invokes against induction: it cannot be justified, because the purported justification, being inductive, is circular." She concludes that "Hume's most important legacy is the supposition that the justification of induction is not analogous to that of deduction." She ends with a discussion of Hume's implicit sanction of the validity of deduction, which Hume describes as intuitive in a manner analogous to modern foundationalism.

Indian philosophy

The Cārvāka, a materialist and skeptic school of Indian philosophy, used the problem of induction to point out the flaws in using inference as a way to gain valid knowledge. They held that since inference required an invariable connection between the middle term and the predicate, and since there was no way to establish this invariable connection, the efficacy of inference as a means of valid knowledge could never be stated.

The 9th century Indian skeptic, Jayarasi Bhatta, also made an attack on inference, along with all means of knowledge, and showed by a type of reductio argument that there was no way to conclude universal relations from the observation of particular instances.

Medieval philosophy

Medieval writers such as al-Ghazali and William of Ockham connected the problem with God's absolute power, asking how we can be certain that the world will continue behaving as expected when God could at any moment miraculously cause the opposite. Duns Scotus, however, argued that inductive inference from a finite number of particulars to a universal generalization was justified by "a proposition reposing in the soul, 'Whatever occurs in a great many instances by a cause that is not free, is the natural effect of that cause.'" Some 17th-century Jesuits argued that although God could create the end of the world at any moment, it was necessarily a rare event and hence our confidence that it would not happen very soon was largely justified.

David Hume

David Hume, a Scottish thinker of the Enlightenment era, is the philosopher most often associated with induction. His formulation of the problem of induction can be found in An Enquiry concerning Human Understanding, §4. Here, Hume introduces his famous distinction between "relations of ideas" and "matters of fact". Relations of ideas are propositions which can be derived from deductive logic, which can be found in fields such as geometry and algebra. Matters of fact, meanwhile, are not verified through the workings of deductive logic but by experience. Specifically, matters of fact are established by making an inference about causes and effects from repeatedly observed experience. While relations of ideas are supported by reason alone, matters of fact must rely on the connection of a cause and effect through experience. Causes and effects cannot be linked through a priori reasoning, but only by positing a "necessary connection" that depends on the "uniformity of nature".

Hume situates his introduction to the problem of induction in A Treatise of Human Nature within his larger discussion on the nature of causes and effects (Book I, Part III, Section VI). He writes that reasoning alone cannot establish the grounds of causation. Instead, the human mind imputes causation to phenomena after repeatedly observing a connection between two objects. For Hume, establishing the link between causes and effects relies not on reasoning alone, but the observation of "constant conjunction" throughout one's sensory experience. From this discussion, Hume goes on to present his formulation of the problem of induction in A Treatise of Human Nature, writing "there can be no demonstrative arguments to prove, that those instances, of which we have had no experience, resemble those, of which we have had experience."

In other words, the problem of induction can be framed in the following way: we cannot apply a conclusion about a particular set of observations to a more general set of observations. While deductive logic allows one to arrive at a conclusion with certainty, inductive logic can only provide a conclusion that is probably true. It is mistaken to frame the difference between deductive and inductive logic as one between general to specific reasoning and specific to general reasoning. This is a common misperception about the difference between inductive and deductive thinking. According to the literal standards of logic, deductive reasoning arrives at certain conclusions while inductive reasoning arrives at probable conclusions.  Hume's treatment of induction helps to establish the grounds for probability, as he writes in A Treatise of Human Nature that "probability is founded on the presumption of a resemblance betwixt those objects, of which we have had experience, and those, of which we have had none" (Book I, Part III, Section VI).

Therefore, Hume establishes induction as the very grounds for attributing causation. There might be many effects which stem from a single cause. Over repeated observation, one establishes that a certain set of effects are linked to a certain set of causes. However, the future resemblance of these connections to connections observed in the past depends on induction. Induction allows one to conclude that "Effect A2" was caused by "Cause A2" because a connection between "Effect A1" and "Cause A1" was observed repeatedly in the past. Given that reason alone cannot be sufficient to establish the grounds of induction, Hume implies that induction must be accomplished through imagination. One does not make an inductive inference through a priori reasoning, but through an imaginative step automatically taken by the mind.

Hume does not challenge that induction is performed by the human mind automatically, but rather hopes to show more clearly how much human inference depends on inductive—not a priori—reasoning. He does not deny future uses of induction, but shows that it is distinct from deductive reasoning, helps to ground causation, and wants to inquire more deeply into its validity. Hume offers no solution to the problem of induction himself. He prompts other thinkers and logicians to argue for the validity of induction as an ongoing dilemma for philosophy. A key issue with establishing the validity of induction is that one is tempted to use an inductive inference as a form of justification itself. This is because people commonly justify the validity of induction by pointing to the many instances in the past when induction proved to be accurate. For example, one might argue that it is valid to use inductive inference in the future because this type of reasoning has yielded accurate results in the past. However, this argument relies on an inductive premise itself—that past observations of induction being valid will mean that future observations of induction will also be valid. Thus, many solutions to the problem of induction tend to be circular.

Nelson Goodman's new riddle of induction

Nelson Goodman's Fact, Fiction, and Forecast (1955) presented a different description of the problem of induction in the chapter entitled "The New Riddle of Induction". Goodman proposed the new predicate "grue". Something is grue if and only if it has been (or will be, according to a scientific, general hypothesis) observed to be green before a certain time t, and blue if observed after that time. The "new" problem of induction is, since all emeralds we have ever seen are both green and grue, why do we suppose that after time t we will find green but not grue emeralds? The problem here raised is that two different inductions will be true and false under the same conditions. In other words:

  • Given the observations of a lot of green emeralds, someone using a common language will inductively infer that all emeralds are green (therefore, he will believe that any emerald he will ever find will be green, even after time t).
  • Given the same set of observations of green emeralds, someone using the predicate "grue" will inductively infer that all emeralds, which will be observed after t, will be blue, despite the fact that he observed only green emeralds so far.

One could argue, using Occam's razor, that greenness is more likely than grueness because the concept of grueness is more complex than that of greenness. Goodman, however, points out that the predicate "grue" only appears more complex than the predicate "green" because we have defined grue in terms of blue and green. If we had always been brought up to think in terms of "grue" and "bleen" (where bleen is blue before time t, and green thereafter), we would intuitively consider "green" to be a crazy and complicated predicate. Goodman believed that which scientific hypotheses we favour depends on which predicates are "entrenched" in our language.

Willard Van Orman Quine offers a practical solution to this problem by making the metaphysical claim that only predicates that identify a "natural kind" (i.e. a real property of real things) can be legitimately used in a scientific hypothesis. R. Bhaskar also offers a practical solution to the problem. He argues that the problem of induction only arises if we deny the possibility of a reason for the predicate, located in the enduring nature of something. For example, we know that all emeralds are green, not because we have only ever seen green emeralds, but because the chemical make-up of emeralds insists that they must be green. If we were to change that structure, they would not be green. For instance, emeralds are a kind of green beryl, made green by trace amounts of chromium and sometimes vanadium. Without these trace elements, the gems would be colourless.

Notable interpretations

Hume

Although induction is not made by reason, Hume observes that we nonetheless perform it and improve from it. He proposes a descriptive explanation for the nature of induction in §5 of the Enquiry, titled "Skeptical solution of these doubts". It is by custom or habit that one draws the inductive connection described above, and "without the influence of custom we would be entirely ignorant of every matter of fact beyond what is immediately present to the memory and senses". The result of custom is belief, which is instinctual and much stronger than imagination alone.

John Maynard Keynes

In his Treatise on Probability, John Maynard Keynes notes:

An inductive argument affirms, not that a certain matter of fact is so, but that relative to certain evidence there is a probability in its favour. The validity of the induction, relative to the original evidence, is not upset, therefore, if, as a fact, the truth turns out to be otherwise.

This approach was endorsed by Bertrand Russell.

David Stove and Donald Williams

David Stove's argument for induction, based on the statistical syllogism, was presented in The Rationality of Induction and was developed from an argument put forward by one of Stove's heroes, the late Donald Cary Williams (formerly Professor at Harvard) in his book The Ground of Induction. Stove argued that it is a statistical truth that the great majority of the possible subsets of specified size (as long as this size is not too small) are similar to the larger population to which they belong. For example, the majority of the subsets of 3,000 ravens which you can form from the raven population are similar to the population itself (and this applies no matter how large the raven population is, as long as it is not infinite). Consequently, Stove argued that if you find yourself with such a subset then the chances are that this subset is one of the ones that are similar to the population, and so you are justified in concluding that it is likely that this subset "matches" the population reasonably closely. The situation would be analogous to drawing a ball out of a barrel of balls, 99% of which are red: in such a case you have a 99% chance of drawing a red ball. Similarly, when taking a sample of ravens the probability is very high that the sample is one of the matching or "representative" ones. So as long as you have no reason to think that your sample is unrepresentative, you are justified in thinking that it probably (although not certainly) is representative.
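Stove's statistical point can be illustrated with a toy simulation (the population proportion and counts below are assumed for the example, not taken from his book): draw many samples of 3,000 from a large raven population and count how often the sample proportion of black ravens lies within one percentage point of the population proportion:

import random

black_fraction = 0.95   # assumed proportion of black ravens in a very large population
sample_size = 3000
trials = 2000

close = 0
for _ in range(trials):
    # With a very large population, sampling with replacement is a fine approximation
    sample_black = sum(random.random() < black_fraction for _ in range(sample_size))
    if abs(sample_black / sample_size - black_fraction) <= 0.01:
        close += 1

print(close / trials)  # typically above 0.98: almost all samples "match" the population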

Biting the bullet: Keith Campbell and Claudio Costa

An intuitive answer to Hume would be to say that a world inaccessible to any inductive procedure would simply not be conceivable. This intuition was taken into account by Keith Campbell by considering that, to be built, a concept must be reapplied, which demands a certain continuity in its object of application and consequently some openness to induction. Claudio Costa has noted that a future can only be a future of its own past if it holds some identity with it. Moreover, the nearer a future is to the point of junction with its past, the greater are the similarities tendentially involved. Consequently – contra Hume – some form of principle of homogeneity (causal or structural) between future and past must be warranted, which would make some inductive procedure always possible.

Karl Popper

Karl Popper, a philosopher of science, sought to solve the problem of induction. He argued that science does not use induction, and induction is in fact a myth. Instead, knowledge is created by conjecture and criticism. The main role of observations and experiments in science, he argued, is in attempts to criticize and refute existing theories.

According to Popper, the problem of induction as usually conceived is asking the wrong question: it is asking how to justify theories given that they cannot be justified by induction. Popper argued that justification is not needed at all, and seeking justification "begs for an authoritarian answer". Instead, Popper said, what should be done is to look for and correct errors. Popper regarded theories that have survived criticism as better corroborated in proportion to the amount and stringency of the criticism, but, in sharp contrast to the inductivist theories of knowledge, emphatically as less likely to be true. Popper held that seeking theories with a high probability of being true was a false goal that conflicts with the search for knowledge. Science should seek theories that are, on the one hand, most probably false (which is the same as saying that they are highly falsifiable, so there are many ways they could turn out to be wrong), but for which, on the other hand, all actual attempts to falsify them have so far failed (so that they are highly corroborated).

Wesley C. Salmon criticizes Popper on the grounds that predictions need to be made both for practical purposes and in order to test theories. That means Popperians need to make a selection from the number of unfalsified theories available to them, which is generally more than one. Popperians would wish to choose well-corroborated theories, in their sense of corroboration, but face a dilemma: either they are making the essentially inductive claim that a theory's having survived criticism in the past means it will be a reliable predictor in the future; or Popperian corroboration is no indicator of predictive power at all, so there is no rational motivation for their preferred selection principle.

David Miller has criticized this kind of criticism by Salmon and others because it makes inductivist assumptions. Popper does not say that corroboration is an indicator of predictive power. The predictive power is in the theory itself, not in its corroboration. The rational motivation for choosing a well-corroborated theory is that it is simply easier to falsify: Well-corroborated means that at least one kind of experiment (already conducted at least once) could have falsified (but did not actually falsify) the one theory, while the same kind of experiment, regardless of its outcome, could not have falsified the other. So it is rational to choose the well-corroborated theory: It may not be more likely to be true, but if it is actually false, it is easier to get rid of when confronted with the conflicting evidence that will eventually turn up. Accordingly, it is wrong to consider corroboration as a reason, a justification for believing in a theory or as an argument in favor of a theory to convince someone who objects to it.

Great Church

From Wikipedia, the free encyclopedia
The Church Fathers in an 11th-century depiction from Kyiv

The term "Great Church" (Latin: ecclesia magna) is used in the historiography of early Christianity to mean the period of about 180 to 313, between that of primitive Christianity and that of the legalization of the Christian religion in the Roman Empire, corresponding closely to what is called the Ante-Nicene Period. "It has rightly been called the period of the Great Church, in view of its numerical growth, its constitutional development and its intense theological activity."

The Great Church, also called the catholic (i.e., universal) Church, has been defined also as meaning "the Church as defended by such as Ignatius of Antioch, Irenaeus of Lyons, Cyprian of Carthage, and Origen of Alexandria and characterized as possessing a single teaching and communion over and against the division of the sects, e.g., gnosticism, and the heresies".

By the beginning of the fourth century, the Great Church already formed about 15% of the population of the Roman Empire and was ready, both numerically and structurally, for its role as the church of the empire, becoming the state religion of the Roman Empire in 380.

Roger E. Olson says: "According to the Roman Catholic account of the history of Christian theology, the Great Church catholic and orthodox lived on from the apostles to today in the West and all bishops that remained in fellowship with the bishop of Rome have constituted its hierarchy"; or, as the Catholic Church itself has expressed it, "This Church constituted and organized in the world as a society, subsists in the Catholic Church, which is governed by the successor of Peter and by the Bishops in communion with him, although many elements of sanctification and of truth are found outside of its visible structure." Thus, the Roman Catholic Church identifies itself as the continuation of the Great Church, which in turn was the same as the early Church founded by Jesus Christ. Because of this, it identifies itself as the "one true church".

The unbroken continuity of the Great Church is affirmed also by the Eastern Orthodox Church: "Orthodoxy regards the Great Church in antiquity (for most of the first millennium) as comprising, on one side, the Eastern Orthodox world (the Byzantine patriarchates presided over by the hierarch of the Church of Constantinople together with the Slavic Orthodox churches); and, on the other side, the Western Catholic Church, presided over by the hierarch of the Church of Rome."

Emergence

Irenaeus (2nd century – c. 202)

Lawrence S. Cunningham, and separately, Kugel and Greer state that Irenaeus's statement in Against Heresies Chapter X 1–2 (written c. 180 AD) is the first recorded reference to the existence of a "Church" with a core set of shared beliefs as opposed to the ideas of dissident groups. Irenaeus states:

The Church, though dispersed through the whole world, even to the ends of the earth, has received from the apostles and their disciples this faith: ... As I have already observed, the Church, having received this preaching and this faith, although scattered throughout the whole world, yet, as if occupying but one house, carefully preserves it. ... For the churches which have been planted in Germany do not believe or hand down anything different, nor do those in Spain, nor those in Gaul, nor those in the East, nor those in Egypt, nor those in Libya, nor those which have been established in the central regions of the world. But as the sun, that creature of God, is one and the same throughout the whole world, so also the preaching of the truth shineth everywhere, and enlightens all men that are willing to come to a knowledge of the truth.

Cunningham states that two points in Irenaeus' writing deserve attention. First, that Irenaeus distinguished the Church singular from "the churches" plural, and more importantly, Irenaeus holds that only in the larger singular Church does one find the truth handed down by the apostles of Christ.

At the beginning of the 3rd century the Great Church that Irenaeus and Celsus had referred to had spread across a significant portion of the world, with most of its members living in cities (see early centers of Christianity). The growth was less than uniform across the world. The Chronicle of Arbela stated that in 225 AD, there were 20 bishops in all of Persia, while at approximately the same time, surrounding areas of Rome had over 60 bishops. But the Great Church of the 3rd century was not monolithic, consisting of a network of churches connected across cultural zones by lines of communication which at times included personal relationships.

The Great Church grew in the 2nd century and entered the 3rd century mainly in two empires: the Roman and the Persian, with the network of bishops usually acting as the cohesive element across cultural zones. In 313, the Edict of Milan ended the persecution of Christians, and by 380 the Great Church had gathered enough followers to become the State church of the Roman Empire by virtue of the Edict of Thessalonica.

Historical references

In Contra Celsum 5.59 and 5.61 the Church Father Origen mentions Celsus' late 2nd century use of the terms "church of the multitudes" or "great church" to refer to the emerging consensus traditions among Christians at the time, as Christianity was taking shape.

In the 4th century, as Saint Augustine commented on Psalm XXII, he interpreted the term to mean the whole world, writing: "The great Church, Brethren, what is it? Is a scanty portion of the earth the great Church? The great Church means the whole world." Augustine continued to expound on how various churches all considered themselves "the great Church", but that only the whole world could be seen as the great Church.

Theological underpinnings and separation

Emperor Constantine and bishops with the Creed of 381.

The epoch of the Great Church witnessed the development of key theological concepts which now form the fabric of the religious beliefs of the large majority of Christians.

Relying on Scripture, prevailing mysticism and popular piety, Irenaeus formalized some of the attributes of God, writing in Against Heresies Book IV, Chapter 19: "His greatness lacks nothing, but contains all things." Irenaeus also referred to the early use of the "Father, Son and Holy Spirit" formula which appeared as part of Christian Creeds, writing in Against Heresies (Book I Chapter X):

The Church… believes in one God, the Father Almighty, Maker of heaven, and earth, and the sea, and all things that are in them; and in one Christ Jesus, the Son of God, who became incarnate for our salvation; and in the Holy Spirit.

Around 213 AD, in Adversus Praxeas (chapter 3), Tertullian provided a formal representation of the concept of the Trinity, i.e., that God exists as one "substance" but three "Persons": the Father, the Son and the Holy Spirit. Unlike later formulations of the Trinity, however, Tertullian also expressed, in chapter 6 of Adversus Praxeas, a belief in the Son being both "created and begotten", and, in Adversus Hermogenes (chapter 3), in the Son having a "beginning" and a time when he did "not exist". The First Council of Nicaea in 325 and later the First Council of Constantinople in 381, following the 55-year Arian controversy that threatened to split the Great Church in two over a debate concerning the "nature and substance" of the Son, then formalized these elements, differing from Tertullian in affirming that the Son was "co-eternal" with the Father, without a beginning, being "begotten not made".

After 381, Christians outside the Roman Empire living in the Gothic kingdoms continued to adhere to Arian Christology, but were considered schismatics and heretics by the majority in the Great Church and in Rome. The Gothic kingdoms later converted to the "Nicene orthodoxy" of the Great Church by the end of the 7th century.

In 451, all the bishops of the Great Church were ordered to attend the Council of Chalcedon to discuss theological issues that had emerged. This turned out to be a turning point at which the Western and Eastern churches parted ways based on seemingly small Christological differences, and began the fracturing of the claim to the term Great Church by both sides.

Modern theories on the formation of the Great Church

Official Catholic publications, and other writers, sometimes consider that the concept of the "Great Church" can be found already in the Epistles of Paul, such as in "This is my rule in all the churches" (1 Corinthians 7:17) and in the Apostolic Fathers such as the letters of Ignatius of Antioch. Exegesis has even located the ecclesia magna in the Latin Vulgate translations of the "great congregation" (kahal rab) of the Hebrew Bible. This interpretation was also offered by Pope Benedict XVI, and by Martin Luther.

Dennis Minns (2010) considers that the concept of a "Great Church" was developed by polemical heresiologists such as Irenaeus. The presentation of early Christian unity and orthodoxy (see Proto-orthodox Christianity), and counter presentation of groups such as those sects labelled "Gnostic", by early heresiologists such as Irenaeus is questioned by modern historians.

Roger E. Olson (1999) uses the term to refer to the Great Church at the time of the Council of Chalcedon (451) when the Patriarch of Constantinople and Bishop of Rome were in fellowship with each other.

In contrast to "Jewish Christianity"

The term is contrasted with Jewish Christians who came to be more and more clearly separated from the Great Church. Wilhelm Schneemelcher and others writing on New Testament Apocrypha distinguish writings as being sectarian or from the Great Church.

Gabriele Waste (2005) is among German scholars using similar references, where the "Große Kirche" ("Great Church") is defined as "Ecclesia ex gentibus" (Church of the Gentiles) in comparison to the "Ecclesia ex circumcisione" (Church of the Circumcision).

In the anglophone world, Bruce J. Malina (1976) contrasted what he calls "Christian Judaism" (usually termed "Jewish Christianity") with "the historically perceived orthodox Christianity that undergirds the ideology of the emergent Great Church."

In francophone scholarship, the term Grande Église (Latin: Ecclesia magna) has also been equated with the "more hellenized" as opposed to "Judaizing" sections of the early church, and the Bar Kokhba revolt is seen as a definitive stage in the separation between Judaism and the Christianity of the "Grande Église". Those stressing this binary view of early Christianity include Simon Claude Mimouni and François Blanchetière.

Religion and children

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Religion_and_children   Childr...