
Thursday, July 31, 2025

Norse mythology

From Wikipedia, the free encyclopedia
The Tjängvide image stone with illustrations from Norse mythology

Norse, Nordic, or Scandinavian mythology is the body of myths belonging to the North Germanic peoples, stemming from Old Norse religion and continuing after the Christianization of Scandinavia as the Nordic folklore of the modern period. The northernmost extension of Germanic mythology and stemming from Proto-Germanic folklore, Norse mythology consists of tales of various deities, beings, and heroes derived from numerous sources from both before and after the pagan period, including medieval manuscripts, archaeological representations, and folk tradition. The source texts mention numerous gods, such as the thunder-god Thor, the raven-flanked god Odin, and the goddess Freyja, among many other deities.

The god Loki, son of Fárbauti and Laufey

Most of the surviving mythology centers on the plights of the gods and their interaction with several other beings, such as humanity and the jötnar, beings who may be friends, lovers, foes, or family members of the gods. The cosmos in Norse mythology consists of Nine Worlds that flank a central sacred tree, Yggdrasil. Units of time and elements of the cosmology are personified as deities or beings. Various forms of a creation myth are recounted, where the world is created from the flesh of the primordial being Ymir, and the first two humans are Ask and Embla. These worlds are foretold to be destroyed in the events of Ragnarök, when an immense battle occurs between the gods and their enemies and the world is enveloped in flames, only to be reborn anew. There the surviving gods will meet, the land will be fertile and green, and two humans will repopulate the world.

Norse mythology has been the subject of scholarly discourse since the 17th century when key texts attracted the attention of the intellectual circles of Europe. By way of comparative mythology and historical linguistics, scholars have identified elements of Germanic mythology reaching as far back as Proto-Indo-European mythology. During the modern period, the Romanticist Viking revival re-awoke an interest in the subject matter, and references to Norse mythology may now be found throughout modern popular culture. The myths have further been revived in a religious context among adherents of Germanic Neopaganism.

Terminology

The historical religion of the Norse people is commonly referred to as Norse mythology. Other terms are Scandinavian mythology, North Germanic mythology, or Nordic mythology.

Sources

The Rök runestone (Ög 136), located in Rök, Sweden, features a Younger Futhark runic inscription that makes various references to Norse mythology.

Norse mythology is primarily attested in dialects of Old Norse, a North Germanic language spoken by the Scandinavian people during the European Middle Ages and the ancestor of modern Scandinavian languages. The majority of these Old Norse texts were created in Iceland, where the oral tradition stemming from the pre-Christian inhabitants of the island was collected and recorded in manuscripts. This occurred primarily in the 13th century. These texts include the Prose Edda, composed in the 13th century by the Icelandic scholar, lawspeaker, and historian Snorri Sturluson, and the Poetic Edda, a collection of poems from earlier traditional material anonymously compiled in the 13th century.

The Prose Edda was composed as a prose manual for producing skaldic poetry—traditional Old Norse poetry composed by skalds. Originally composed and transmitted orally, skaldic poetry utilizes alliterative verse, kennings, and several metrical forms. The Prose Edda presents numerous examples of works by various skalds from before and after the Christianization process and also frequently refers back to the poems found in the Poetic Edda. The Poetic Edda consists almost entirely of poems, with some prose narrative added, and this poetry—Eddic poetry—utilizes fewer kennings. In comparison to skaldic poetry, Eddic poetry is relatively unadorned.

Title page of a late manuscript of the Prose Edda written by Snorri Sturluson (13th century), showing Odin, Heimdallr, Sleipnir, and other figures from Norse mythology

The Prose Edda features layers of euhemerization, a process in which deities and supernatural beings are presented as having been either actual, magic-wielding human beings who have been deified in time or beings demonized by way of Christian mythology. Texts such as Heimskringla, composed in the 13th century by Snorri, and Gesta Danorum, composed in Latin by Saxo Grammaticus in Denmark in the 12th century, are the results of heavy amounts of euhemerization.

Numerous additional texts, such as the sagas, provide further information. The saga corpus consists of thousands of tales recorded in Old Norse ranging from Icelandic family histories (Sagas of Icelanders) to Migration period tales mentioning historic figures such as Attila the Hun (legendary sagas). Objects and monuments such as the Rök runestone and the Kvinneby amulet feature runic inscriptions—texts written in the runic alphabet, the indigenous alphabet of the Germanic peoples—that mention figures and events from Norse mythology.

Objects from the archaeological record may also be interpreted as depictions of subjects from Norse mythology, such as amulets of the god Thor's hammer Mjölnir found among pagan burials and small silver female figures interpreted as valkyries or dísir, beings associated with war, fate or ancestor cults. By way of historical linguistics and comparative mythology, comparisons to other attested branches of Germanic mythology (such as the Old High German Merseburg Incantations) may also lend insight. Wider comparisons to the mythology of other Indo-European peoples have resulted in the potential reconstruction of far earlier myths.

Of the many mythical tales and poems that are presumed to have existed during the Middle Ages, Viking Age, Migration Period, and before, only a small number survive. Later sources reaching into the modern period, such as a medieval charm recorded as used by the Norwegian woman Ragnhild Tregagås—convicted of witchcraft in Norway in the 14th century—and spells found in the 17th-century Icelandic Galdrabók grimoire, also sometimes make references to Norse mythology. Other traces, such as place names bearing the names of gods, may provide further information about deities, such as a potential association between deities based on the placement of locations bearing their names, their local popularity, and associations with geological features.

Mythology

Gods and other beings

The god Thor wades through a river, while the Æsir ride across the bridge, Bifröst, in an illustration by Lorenz Frølich (1895).

Central to accounts of Norse mythology are the plights of the gods and their interaction with various other beings, such as with the jötnar, who may be friends, lovers, foes, or family members of the gods. Numerous gods are mentioned in the source texts. As evidenced by records of personal names and place names, the most popular god among the Scandinavians during the Viking Age was Thor the thunder god, who is portrayed as unrelentingly pursuing his foes, his mountain-crushing, thunderous hammer Mjölnir in hand. In the mythology, Thor lays waste to numerous jötnar who are foes to the gods or humanity, and is wed to the beautiful, golden-haired goddess Sif.

The god Odin is also frequently mentioned in surviving texts. One-eyed, wolf- and raven-flanked, with a spear in hand, Odin pursues knowledge throughout the nine realms. In an act of self-sacrifice, Odin is described as having hanged himself upside-down for nine days and nights on the cosmological tree Yggdrasil to gain knowledge of the runic alphabet, which he passed on to humanity; he is also associated closely with death, wisdom, and poetry. Odin is portrayed as the ruler of Asgard and leader of the Æsir. Odin's wife is the powerful goddess Frigg, who can see the future but tells no one, and together they have a beloved son, Baldr. After Baldr has a series of dreams of his impending death, his death is engineered by Loki, and Baldr thereafter resides in Hel, a realm ruled over by an entity of the same name.

Odin must share half of the dead with a powerful goddess, Freyja. She is beautiful, sensual, wears a feathered cloak, and practices seiðr. She rides to battle to choose among the slain and brings her chosen to her afterlife field Fólkvangr. Freyja weeps for her missing husband Óðr and seeks after him in faraway lands. Freyja's brother, the god Freyr, is also frequently mentioned in surviving texts, and in his association with the weather, royalty, human sexuality, and agriculture brings peace and pleasure to humanity. Deeply lovesick after catching sight of the beautiful jötunn Gerðr, Freyr seeks and wins her love, yet at the price of his future doom. Their father is the powerful god Njörðr. Njörðr is strongly associated with ships and seafaring, and so also with wealth and prosperity. Freyja and Freyr's mother is Njörðr's unnamed sister (her name is not provided in the source material). However, there is more information about his pairing with the skiing and hunting goddess Skaði. Their relationship is ill-fated, as Skaði cannot stand to be away from her beloved mountains, nor Njörðr from the seashore. Together, Freyja, Freyr, and Njörðr form a group of gods known as the Vanir. While the Æsir and the Vanir retain distinct identities, they came together as the result of the Æsir–Vanir War.

While they receive less mention, numerous other gods and goddesses appear in the source material. (For a list of these deities, see List of Germanic deities.) Gods mentioned less frequently include the apple-bearing goddess Iðunn and her husband, the skaldic god Bragi; the gold-toothed god Heimdallr, born of nine mothers; the ancient god Týr, who lost his right hand while binding the great wolf Fenrir; and the goddess Gefjon, who formed modern-day Zealand, Denmark.

Various beings outside of the gods are mentioned. Elves and dwarfs are commonly mentioned and appear to be connected, but their attributes are vague and the relation between the two is ambiguous. Elves are described as radiant and beautiful, whereas dwarfs often act as earthen smiths. A group of beings variously described as jötnar, thursar, and trolls (in English these are all often glossed as "giants") frequently appear. These beings may either aid, deter, or take their place among the gods. The Norns, dísir, and aforementioned valkyries also receive frequent mention. While their functions and roles may overlap and differ, all are collective female beings associated with fate.

Cosmology

The cosmological, central tree Yggdrasil is depicted in The Ash Yggdrasil by Friedrich Wilhelm Heine (1886).
Sól, the Sun, and Máni, the Moon, are chased by the wolves Sköll and Háti in The Wolves Pursuing Sol and Mani by J. C. Dollman (1909).

In Norse cosmology, all beings live in Nine Worlds that center around the cosmological tree Yggdrasil. The gods inhabit the heavenly realm of Asgard whereas humanity inhabits Midgard, a region in the center of the cosmos. Outside of the gods, humanity, and the jötnar, these Nine Worlds are inhabited by beings, such as elves and dwarfs. Travel between the worlds is frequently recounted in the myths, where the gods and other beings may interact directly with humanity. Numerous creatures live on Yggdrasil, such as the insulting messenger squirrel Ratatoskr and the perching hawk Veðrfölnir. The tree itself has three major roots, and at the base of one of these roots live the Norns, female entities associated with fate. Elements of the cosmos are personified, such as the Sun (Sól, a goddess), the Moon (Máni, a god), and Earth (Jörð, a goddess), as well as units of time, such as day (Dagr, a god) and night (Nótt, a jötunn).

The afterlife is a complex matter in Norse mythology. The dead may go to the murky realm of Hel, a realm ruled over by a female being of the same name; may be ferried away by valkyries to Odin's martial hall Valhalla; or may be chosen by the goddess Freyja to dwell in her field Fólkvangr. The goddess Rán may claim those that die at sea, and the goddess Gefjon is said to be attended by virgins upon their death. Texts also make reference to reincarnation. Time itself is presented as both cyclic and linear, and some scholars have argued that cyclic time was the original format for the mythology. Various forms of a cosmological creation story are provided in Icelandic sources, and references to a future destruction and rebirth of the world—Ragnarök—are frequently mentioned in some texts.

Humanity

According to the Prose Edda and the Poetic Edda poem Völuspá, the first human couple consisted of Ask and Embla: driftwood found by a trio of gods and imbued with life in the form of three gifts. After the cataclysm of Ragnarök, this process is mirrored in the survival of two humans from a wood, Líf and Lífþrasir. From these two, humankind is foretold to repopulate the new, green earth.

Wednesday, July 30, 2025

History of randomness

From Wikipedia, the free encyclopedia
Ancient fresco of dice players in Pompeii

In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. At the same time, most ancient cultures used various methods of divination to attempt to circumvent randomness and fate. Beyond religion and games of chance, randomness has been attested for sortition since at least ancient Athenian democracy in the form of a kleroterion.

The formalization of odds and chance was perhaps first undertaken by the Chinese 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the sixteenth century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of modern calculus had a positive impact on the formal study of randomness. In the 19th century the concept of entropy was introduced in physics.

The early part of the twentieth century saw a rapid growth in the formal analysis of randomness, and mathematical foundations for probability were introduced, leading to its axiomatization in 1933. At the same time, the advent of quantum mechanics changed the scientific perspective on determinacy. In the mid to late 20th-century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness.

Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the twentieth century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms are able to outperform the best deterministic methods.

Antiquity to the Middle Ages

Depiction of Roman goddess Fortuna who determined fate, by Hans Beham, 1541

Pre-Christian people along the Mediterranean threw dice to determine fate, and this later evolved into games of chance. There is also evidence of games of chance played by ancient Egyptians, Hindus and Chinese, dating back to 2100 BC. The Chinese used dice before the Europeans, and have a long history of playing games of chance.

Over 3,000 years ago, the problems concerned with the tossing of several coins were considered in the I Ching, one of the oldest Chinese mathematical texts, that probably dates to 1150 BC. The two principal elements yin and yang were combined in the I Ching in various forms to produce Heads and Tails permutations of the type HH, TH, HT, etc. and the Chinese seem to have been aware of Pascal's triangle long before the Europeans formalized it in the 17th century. However, Western philosophy focused on the non-mathematical aspects of chance and randomness until the 16th century.
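
To make the combinatorial point concrete, here is a small illustrative Python sketch (not drawn from the sources above): enumerating every heads/tails sequence for a few coin tosses and counting how many contain a given number of heads reproduces a row of Pascal's triangle.

    from itertools import product
    from math import comb

    # Enumerate all head/tail sequences for n tosses and count how many
    # contain exactly k heads; the counts form row n of Pascal's triangle.
    n = 4
    counts = [0] * (n + 1)
    for outcome in product("HT", repeat=n):
        counts[outcome.count("H")] += 1

    print(counts)                               # [1, 4, 6, 4, 1]
    print([comb(n, k) for k in range(n + 1)])   # the same binomial coefficients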

The development of the concept of chance throughout history has been very gradual. Historians have wondered why progress in the field of randomness was so slow, given that humans have encountered chance since antiquity. Deborah J. Bennett suggests that ordinary people face an inherent difficulty in understanding randomness, although the concept is often taken as being obvious and self-evident. She cites studies by Kahneman and Tversky; these concluded that statistical principles are not learned from everyday experience because people do not attend to the detail necessary to gain such knowledge.

The Greek philosophers were the earliest Western thinkers to address chance and randomness. Around 400 BC, Democritus presented a view of the world as governed by the unambiguous laws of order and considered randomness as a subjective concept that only originated from the inability of humans to understand the nature of events. He used the example of two men who would send their servants to bring water at the same time to cause them to meet. The servants, unaware of the plan, would view the meeting as random.

Aristotle saw chance and necessity as opposite forces. He argued that nature had rich and constant patterns that could not be the result of chance alone, but that these patterns never displayed the machine-like uniformity of necessary determinism. He viewed randomness as a genuine and widespread part of the world, but as subordinate to necessity and order. Aristotle classified events into three types: certain events that happen necessarily; probable events that happen in most cases; and unknowable events that happen by pure chance. He considered the outcome of games of chance as unknowable.

Around 300 BC Epicurus proposed the concept that randomness exists by itself, independent of human knowledge. He believed that in the atomic world, atoms would swerve at random along their paths, bringing about randomness at higher levels.

Hotei, the deity of fortune observing a cock fight in a 16th-century Japanese print

For several centuries thereafter, the idea of chance continued to be intertwined with fate. Divination was practiced in many cultures, using diverse methods. The Chinese analyzed the cracks in turtle shells, while the Germans, who according to Tacitus had the highest regard for lots and omens, utilized strips of bark. In the Roman Empire, chance was personified by the goddess Fortuna. The Romans would partake in games of chance to simulate what Fortuna would have decided. In 49 BC, Julius Caesar allegedly made his fateful decision to cross the Rubicon after throwing dice.

Aristotle's classification of events into three classes (certain, probable, and unknowable) was adopted by Roman philosophers, but they had to reconcile it with deterministic Christian teachings in which even events unknowable to man were considered to be predetermined by God. About 960, Bishop Wibold of Cambrai correctly enumerated the 56 different outcomes (without permutations) of playing with three dice. No reference to playing cards has been found in Europe before 1350. The Church preached against card playing, and card games spread much more slowly than games based on dice. The Christian Church specifically forbade divination; and wherever Christianity went, divination lost most of its old-time power.
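
Wibold's count can be checked directly; the following short Python sketch (an illustration, not a historical reconstruction) enumerates the throws of three dice with order ignored.

    from itertools import combinations_with_replacement, product

    # Throws of three dice with order ignored (what Wibold is said to have enumerated)
    unordered = list(combinations_with_replacement(range(1, 7), 3))
    print(len(unordered))                                   # 56

    # Counting ordered throws instead gives 6**3 distinct outcomes
    print(len(list(product(range(1, 7), repeat=3))))        # 216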

Over the centuries, many Christian scholars wrestled with the conflict between the belief in free will and its implied randomness, and the idea that God knows everything that happens. Saints Augustine and Aquinas tried to reach an accommodation between foreknowledge and free will, but Martin Luther argued against randomness and took the position that God's omniscience renders human actions unavoidable and determined. In the 13th century, Thomas Aquinas viewed randomness not as the result of a single cause, but of several causes coming together by chance. While he believed in the existence of randomness, he rejected it as an explanation of the end-directedness of nature, for he saw too many patterns in nature to have been obtained by chance.

The Greeks and Romans had not noticed the magnitudes of the relative frequencies of the games of chance. For centuries, chance was discussed in Europe with no mathematical foundation, and it was only in the 16th century that Italian mathematicians began to discuss the outcomes of games of chance as ratios. In his 1565 Liber de Ludo Aleae (a gambler's manual published after his death) Gerolamo Cardano wrote one of the first formal tracts to analyze the odds of winning at various games.

17th–19th centuries

Statue of Blaise Pascal, Louvre

Around 1620 Galileo wrote a paper called On a discovery concerning dice that used an early probabilistic model to address specific questions. In 1654, prompted by Chevalier de Méré's interest in gambling, Blaise Pascal corresponded with Pierre de Fermat, and much of the groundwork for probability theory was laid. Pascal's Wager was noted for its early use of the concept of infinity, and the first formal use of decision theory. The work of Pascal and Fermat influenced Leibniz's work on the infinitesimal calculus, which in turn provided further momentum for the formal analysis of probability and randomness.

The first known suggestion for viewing randomness in terms of complexity was made by Leibniz in an obscure 17th-century document discovered after his death. Leibniz asked how one could know if a set of points on a piece of paper were selected at random (e.g. by splattering ink) or not. Given that for any finite set of points there is always a mathematical equation that can describe the points (e.g. by Lagrangian interpolation), the question focuses on the way the points are expressed mathematically. Leibniz viewed the points as random if the function describing them had to be extremely complex. Three centuries later, the same concept was formalized as algorithmic randomness by A. N. Kolmogorov and Gregory Chaitin: the complexity of a finite string is the minimal length of a computer program needed to produce it, and a string counts as random if no program much shorter than the string itself can do so.

The Doctrine of Chances, the first textbook on probability theory, was published in 1718, and the field continued to grow thereafter. The frequency theory approach to probability was first developed by Robert Ellis and John Venn late in the 19th century.

The Fortune Teller by Vouet, 1617

While the mathematical elite was making progress in understanding randomness from the 17th to the 19th century, the public at large continued to rely on practices such as fortune telling in the hope of taming chance. Fortunes were told in a multitude of ways both in the Orient (where fortune telling was later termed an addiction) and in Europe by gypsies and others. English practices such as the reading of eggs dropped in a glass were exported to Puritan communities in North America.

"I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error." The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along. The tops of the marshalled row form a flowing curve of invariable proportions; and each element, as it is sorted into place, finds, as it were, a pre-ordained niche, accurately adapted to fit it."

Galton (1894)

The term entropy, which is now a key element in the study of randomness, was coined by Rudolf Clausius in 1865 as he studied heat engines in the context of the second law of thermodynamics. Clausius was the first to state "entropy always increases".

From the time of Newton until about 1890, it was generally believed that if one knows the initial state of a system with great accuracy, and if all the forces acting on the system can be formulated with equal accuracy, it would be possible, in principle, to make predictions of the state of the universe for an infinitely long time. The limits to such predictions in physical systems became clear as early as 1893 when Henri Poincaré showed that in the three-body problem in astronomy, small changes to the initial state could result in large changes in trajectories during the numerical integration of the equations.

During the 19th century, as probability theory was formalized and better understood, the attitude towards "randomness as nuisance" began to be questioned. Goethe wrote:

The tissue of the world is built from necessities and randomness; the intellect of men places itself between both and can control them; it considers the necessity and the reason of its existence; it knows how randomness can be managed, controlled, and used.

The words of Goethe proved prophetic when, in the 20th century, randomized algorithms were discovered to be powerful tools. By the end of the 19th century, Newton's model of a mechanical universe was fading away as the statistical view of the collision of molecules in gases was studied by Maxwell and Boltzmann. Boltzmann's equation S = k log W (with log the natural logarithm; the formula is inscribed on his tombstone) first related entropy with logarithms.

20th century

Antony Gormley's Quantum Cloud sculpture in London was designed by a computer using a random walk algorithm.

During the 20th century, the five main interpretations of probability theory (classical, logical, frequency, propensity, and subjective) became better understood and were discussed, compared, and contrasted. A significant number of application areas were developed in this century, from finance to physics. In 1900 Louis Bachelier applied Brownian motion to evaluate stock options, effectively launching the fields of financial mathematics and stochastic processes.
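
As a loose illustration of the kind of calculation Bachelier's approach makes possible (the price model and every number below are assumptions for the sketch, not his original derivation), a Monte Carlo estimate of a call option's expected payoff under a Brownian-motion price model might look like this in Python:

    import random

    # Sketch: model the stock price at time T as S_T = S_0 + sigma * sqrt(T) * Z
    # (a Brownian-motion-style model) and estimate the expected payoff of a call
    # option, max(S_T - K, 0), by simulation.  Parameters are illustrative only.
    S0, K, sigma, T = 100.0, 105.0, 20.0, 1.0
    trials = 200_000

    total = 0.0
    for _ in range(trials):
        z = random.gauss(0.0, 1.0)
        s_T = S0 + sigma * (T ** 0.5) * z
        total += max(s_T - K, 0.0)

    print(total / trials)   # Monte Carlo estimate of the expected payoff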

Émile Borel was one of the first mathematicians to formally address randomness in 1909, and introduced normal numbers. In 1919 Richard von Mises gave the first definition of algorithmic randomness via the impossibility of a gambling system. He advanced the frequency theory of randomness in terms of what he called the collective, i.e. a random sequence. Von Mises regarded the randomness of a collective as an empirical law, established by experience. He related the "disorder" or randomness of a collective to the lack of success of attempted gambling systems. This approach led him to suggest a definition of randomness that was later refined and made mathematically rigorous by Alonzo Church by using computable functions in 1940. Von Mises likened the principle of the impossibility of a gambling system to the principle of the conservation of energy, a law that cannot be proven, but has held true in repeated experiments.

Von Mises never totally formalized his rules for sub-sequence selection, but in his 1940 paper "On the concept of random sequence", Alonzo Church suggested that the functions used for place settings in the formalism of von Mises be computable functions rather than arbitrary functions of the initial segments of the sequence, appealing to the Church–Turing thesis on effectiveness.

The advent of quantum mechanics in the early 20th century and the formulation of the Heisenberg uncertainty principle in 1927 saw the end of the Newtonian mindset among physicists regarding the determinacy of nature. In quantum mechanics, there is not even a way to consider all observable elements in a system as random variables at once, since many observables do not commute.

Café Central, one of the early meeting places of the Vienna Circle

By the early 1940s, the frequency theory approach to probability was well accepted within the Vienna Circle, but in the 1950s Karl Popper proposed the propensity theory. Given that the frequency approach cannot deal with "a single toss" of a coin, and can only address large ensembles or collectives, the single-case probabilities were treated as propensities or chances. The concept of propensity was also driven by the desire to handle single-case probability settings in quantum mechanics, e.g. the probability of decay of a specific atom at a specific moment. In more general terms, the frequency approach cannot deal with the probability of the death of a specific person, given that death cannot be repeated multiple times for that person. Karl Popper echoed the same sentiment as Aristotle in viewing randomness as subordinate to order when he wrote that "the concept of chance is not opposed to the concept of law" in nature, provided one considers the laws of chance.

Claude Shannon's development of information theory in 1948 gave rise to the entropy view of randomness. In this view, randomness is the opposite of determinism in a stochastic process. Hence if a stochastic system has entropy zero, it has no randomness, and any increase in entropy increases randomness. Shannon's formulation reduces to Boltzmann's 19th-century formulation of entropy when all probabilities are equal. Entropy is now widely used in diverse fields of science, from thermodynamics to quantum chemistry.
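
The connection can be written out explicitly (in LaTeX notation): with W equally likely states, the Shannon entropy collapses to the logarithm of the number of states, which is Boltzmann's expression up to the constant k.

    H = -\sum_{i=1}^{W} p_i \log p_i, \qquad p_i = \frac{1}{W} \;\Rightarrow\; H = \log W,
    \quad \text{compared with Boltzmann's } S = k \log W.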

Martingales for the study of chance and betting strategies were introduced by Paul Lévy in the 1930s and were formalized by Joseph L. Doob in the 1950s. The application of random walk hypothesis in financial theory was first proposed by Maurice Kendall in 1953. It was later promoted by Eugene Fama and Burton Malkiel.

Random strings were first studied in the 1960s by A. N. Kolmogorov (who had provided the first axiomatic definition of probability theory in 1933), Chaitin, and Martin-Löf. The algorithmic randomness of a string was defined as the minimum size (e.g., in bits) of a program, executed on a universal computer, that yields the string. Chaitin's Omega number later related randomness and the halting probability for programs.
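
True Kolmogorov complexity is uncomputable, but a rough, hedged illustration of the idea uses a general-purpose compressor as a stand-in for "shortest description": a patterned string compresses to far fewer bits than a string of random bytes.

    import random
    import zlib

    # Compressed length is only a crude upper bound on description length,
    # not the actual Kolmogorov complexity, but it shows the contrast.
    patterned = b"01" * 5000                                        # 10,000 highly regular bytes
    noise = bytes(random.getrandbits(8) for _ in range(10_000))     # 10,000 random-looking bytes

    print(len(zlib.compress(patterned)))   # small: a short description suffices
    print(len(zlib.compress(noise)))       # close to 10,000: no shorter description is found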

In 1964, Benoît Mandelbrot suggested that most statistical models approached only a first stage of dealing with indeterminism, and that they ignored many aspects of real-world turbulence. In his 1997 book he defined seven states of randomness ranging from "mild to wild", with traditional randomness being at the mild end of the scale.

Despite mathematical advances, reliance on other methods of dealing with chance, such as fortune telling and astrology continued in the 20th century. The government of Myanmar reportedly shaped 20th century economic policy based on fortune telling and planned the move of the capital of the country based on the advice of astrologers. White House Chief of Staff Donald Regan criticized the involvement of astrologer Joan Quigley in decisions made during Ronald Reagan's presidency in the 1980s. Quigley claims to have been the White House astrologer for seven years.

During the 20th century, limits in dealing with randomness were better understood. The best-known example of both theoretical and operational limits on predictability is weather forecasting, simply because models have been used in the field since the 1950s. Predictions of weather and climate are necessarily uncertain. Observations of weather and climate are uncertain and incomplete, and the models into which the data are fed are uncertain. In 1961, Edward Lorenz noticed that a very small change to the initial data submitted to a computer program for weather simulation could result in a completely different weather scenario. This later became known as the butterfly effect, often paraphrased as the question: "Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?". A key example of serious practical limits on predictability is in geology, where the ability to predict earthquakes either on an individual or on a statistical basis remains a remote prospect.
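
Lorenz's observation is easy to reproduce in miniature. The sketch below (a crude Euler integration of the Lorenz system with the standard textbook parameters, purely for illustration) starts two trajectories that differ by one part in a million and prints how far apart they drift.

    # Crude Euler integration of the Lorenz system, for illustration only.
    def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)   # initial states differ by one part in a million

    for step in range(6001):
        if step % 1000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {step * 0.005:5.2f}  separation = {gap:.6f}")
        a = lorenz_step(*a)
        b = lorenz_step(*b)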

In the late 1970s and early 1980s, computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms outperform the best deterministic methods.
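
A standard illustration (a generic textbook example, not tied to any specific result mentioned above) is quicksort with a randomly chosen pivot: randomizing the pivot makes the expected running time O(n log n) on every input, whereas a fixed pivot rule can be driven to quadratic time by an adversarially ordered input.

    import random

    def quicksort(items):
        """Quicksort with a random pivot; expected O(n log n) on any input."""
        if len(items) <= 1:
            return items
        pivot = random.choice(items)
        less = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]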

Consilience

From Wikipedia, the free encyclopedia

In science and history, consilience (also convergence of evidence or concordance of evidence) is the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will probably not be a strong scientific consensus.

The principle is based on unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures distances within the Giza pyramid complex by laser rangefinding, by satellite imaging, or with a metre-stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc.

The word consilience was originally coined as the phrase "consilience of inductions" by William Whewell (consilience refers to a "jumping together" of knowledge). The word comes from Latin com- "together" and -siliens "jumping" (as in resilience).

Description

Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser range-finding measurements is based on the scientific understanding of lasers, while satellite pictures and metre-sticks (or yardsticks) rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the same way as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers was inaccurate, then the laser measurement would be inaccurate but the others would not.

As a result, when several different methods agree, this is strong evidence that none of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements and will also tend to cancel out since the direction of the error will still be random. This is how scientific theories reach high confidence—over time, they build up a large degree of evidence which converges on the same conclusion.
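
A toy simulation can make the error-cancellation argument concrete. Everything here is an assumption for illustration: three hypothetical measurement methods, each with its own independent systematic bias plus random noise, whose pooled estimate tends to sit closer to the true value than a typical single method.

    import random

    TRUE_VALUE = 100.0

    def reading(bias, noise_sd=1.0):
        """One measurement from a hypothetical method with a fixed bias and random noise."""
        return TRUE_VALUE + bias + random.gauss(0.0, noise_sd)

    # Each method gets its own independent systematic error.
    biases = [random.gauss(0.0, 2.0) for _ in range(3)]
    per_method = [sum(reading(b) for _ in range(50)) / 50 for b in biases]
    pooled = sum(per_method) / len(per_method)

    print([round(m, 2) for m in per_method])   # each method is off by roughly its own bias
    print(round(pooled, 2))                    # the pooled estimate is typically closer to 100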

When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years old (a conflict resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics). Current attempts to resolve theoretical differences between quantum mechanics and general relativity are another example.

Significance

Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields because the techniques are usually very different.

For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error. The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics.

Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result.

Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields.

Deviations

Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong.

Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely.

In history

Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar's civil war occurred, and so forth.
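
The intuition can be given rough numbers (assumed here for illustration, not drawn from the article). If each historian independently gets such a claim wrong with probability 0.2, then, in LaTeX notation,

    P(\text{five independent sources all assert the same falsehood}) \le 0.2^5 = 0.00032,

whereas a single source repeating itself five times offers no such reduction: the chance of error stays at 0.2. Real sources are never fully independent, so this is only an upper-bound style illustration of why convergence matters.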

Consilience has also been discussed in reference to Holocaust denial.

"We [have now discussed] eighteen proofs all converging on one conclusion...the deniers shift the burden of proof to historians by demanding that each piece of evidence, independently and without corroboration between them, prove the Holocaust. Yet no historian has ever claimed that one piece of evidence proves the Holocaust. We must examine the collective whole."

That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one particular piece of evidence in favor of a conclusion is a flawed question.

Outside the sciences

In addition to the sciences, consilience can be important to the arts, ethics and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation.

History of the concept

Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment.

Whewell's definition was that:

The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction obtained from another different class. Thus Consilience is a test of the truth of the Theory in which it occurs.

More recent descriptions include:

"Where there is a convergence of evidence, where the same explanation is implied, there is increased confidence in the explanation. Where there is divergence, then either the explanation is at fault or one or more of the sources of information is in error or requires reinterpretation."

"Proof is derived through a convergence of evidence from numerous lines of inquiry—multiple, independent inductions, all of which point to an unmistakable conclusion."

Edward O. Wilson

Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in Consilience: The Unity of Knowledge, a 1998 book by the author and biologist E. O. Wilson, as an attempt to bridge the cultural gap between the sciences and the humanities that was the subject of C. P. Snow's The Two Cultures and the Scientific Revolution (1959). Wilson believed that "the humanities, ranging from philosophy and history to moral reasoning, comparative religion, and interpretation of the arts, will draw closer to the sciences and partly fuse with them" with the result that science and the scientific method, from within this fusion, would not only explain physical phenomena but also provide moral guidance and be the ultimate source of all truths.

Wilson held that with the rise of the modern sciences, the sense of unity was gradually lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." An important point made by Wilson is that hereditary human nature and evolution itself profoundly affect the evolution of culture, in essence, a sociobiological concept. Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well.

A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.

Problem of induction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Problem_of_induction
Usually inferred from repeated observations: "The sun always rises in the east."
Usually not inferred from repeated observations: "When someone dies, it's never me."

The problem of induction is a philosophical problem that questions the rationality of predictions about unobserved things based on previous observations. These inferences from the observed to the unobserved are known as "inductive inferences". David Hume, who first formulated the problem in 1739, argued that there is no non-circular way to justify inductive inferences, while he acknowledged that everyone does and must make such inferences.

The traditional inductivist view is that all claimed empirical laws, either in everyday life or through the scientific method, can be justified through some form of reasoning. The problem is that many philosophers tried to find such a justification but their proposals were not accepted by others. Identifying the inductivist view as the scientific view, C. D. Broad once said that induction is "the glory of science and the scandal of philosophy". In contrast, Karl Popper's critical rationalism claimed that inductive justifications are never used in science and proposed instead that science is based on the procedure of conjecturing hypotheses, deductively calculating consequences, and then empirically attempting to falsify them.

Formulation of the problem

In inductive reasoning, one makes a series of observations and infers a claim based on them. For instance, from a series of observations that a woman walks her dog by the market at 8 am on Monday, it seems valid to infer that next Monday she will do the same, or that, in general, the woman walks her dog by the market every Monday. That next Monday the woman walks by the market merely adds to the series of observations, but it does not prove she will walk by the market every Monday. First of all, it is not certain, regardless of the number of observations, that the woman always walks by the market at 8 am on Monday. In fact, David Hume even argued that we cannot claim it is "more probable", since this still requires the assumption that the past predicts the future.

Second, the observations themselves do not establish the validity of inductive reasoning, except inductively. Bertrand Russell illustrated this point in The Problems of Philosophy:

Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

Ancient and early modern origins

Pyrrhonism

The works of the Pyrrhonist philosopher Sextus Empiricus contain the oldest surviving questioning of the validity of inductive reasoning. He wrote:

It is also easy, I consider, to set aside the method of induction. For, when they propose to establish the universal from the particulars by means of induction, they will effect this by a review either of all or of some of the particular instances. But if they review some, the induction will be insecure, since some of the particulars omitted in the induction may contravene the universal; while if they are to review all, they will be toiling at the impossible, since the particulars are infinite and indefinite. Thus on both grounds, as I think, the consequence is that induction is invalidated.

The focus upon the gap between the premises and conclusion present in the above passage appears different from Hume's focus upon the circular reasoning of induction. However, Weintraub claims in The Philosophical Quarterly that although Sextus's approach to the problem appears different, Hume's approach was actually an application of another argument raised by Sextus:

Those who claim for themselves to judge the truth are bound to possess a criterion of truth. This criterion, then, either is without a judge's approval or has been approved. But if it is without approval, whence comes it that it is trustworthy? For no matter of dispute is to be trusted without judging. And, if it has been approved, that which approves it, in turn, either has been approved or has not been approved, and so on ad infinitum.

Although the criterion argument applies to both deduction and induction, Weintraub believes that Sextus's argument "is precisely the strategy Hume invokes against induction: it cannot be justified, because the purported justification, being inductive, is circular." She concludes that "Hume's most important legacy is the supposition that the justification of induction is not analogous to that of deduction." She ends with a discussion of Hume's implicit sanction of the validity of deduction, which Hume describes as intuitive in a manner analogous to modern foundationalism.

Indian philosophy

The Cārvāka, a materialist and skeptic school of Indian philosophy, used the problem of induction to point out the flaws in using inference as a way to gain valid knowledge. They held that since inference needed an invariable connection between the middle term and the predicate, and further, since there was no way to establish this invariable connection, the efficacy of inference as a means of valid knowledge could never be stated.

The 9th-century Indian skeptic Jayarasi Bhatta also attacked inference, along with all means of knowledge, and showed by a type of reductio argument that there was no way to conclude universal relations from the observation of particular instances.

Medieval philosophy

Medieval writers such as al-Ghazali and William of Ockham connected the problem with God's absolute power, asking how we can be certain that the world will continue behaving as expected when God could at any moment miraculously cause the opposite. Duns Scotus, however, argued that inductive inference from a finite number of particulars to a universal generalization was justified by "a proposition reposing in the soul, 'Whatever occurs in a great many instances by a cause that is not free, is the natural effect of that cause.'" Some 17th-century Jesuits argued that although God could create the end of the world at any moment, it was necessarily a rare event and hence our confidence that it would not happen very soon was largely justified.

David Hume

David Hume, a Scottish thinker of the Enlightenment era, is the philosopher most often associated with induction. His formulation of the problem of induction can be found in An Enquiry concerning Human Understanding, §4. Here, Hume introduces his famous distinction between "relations of ideas" and "matters of fact". Relations of ideas are propositions which can be derived from deductive logic, which can be found in fields such as geometry and algebra. Matters of fact, meanwhile, are not verified through the workings of deductive logic but by experience. Specifically, matters of fact are established by making an inference about causes and effects from repeatedly observed experience. While relations of ideas are supported by reason alone, matters of fact must rely on the connection of a cause and effect through experience. Causes and effects cannot be linked through a priori reasoning, but by positing a "necessary connection" that depends on the "uniformity of nature".

Hume situates his introduction to the problem of induction in A Treatise of Human Nature within his larger discussion on the nature of causes and effects (Book I, Part III, Section VI). He writes that reasoning alone cannot establish the grounds of causation. Instead, the human mind imputes causation to phenomena after repeatedly observing a connection between two objects. For Hume, establishing the link between causes and effects relies not on reasoning alone, but the observation of "constant conjunction" throughout one's sensory experience. From this discussion, Hume goes on to present his formulation of the problem of induction in A Treatise of Human Nature, writing "there can be no demonstrative arguments to prove, that those instances, of which we have had no experience, resemble those, of which we have had experience."

In other words, the problem of induction can be framed in the following way: we cannot apply a conclusion about a particular set of observations to a more general set of observations. While deductive logic allows one to arrive at a conclusion with certainty, inductive logic can only provide a conclusion that is probably true. It is mistaken to frame the difference between deductive and inductive logic as one between general to specific reasoning and specific to general reasoning. This is a common misperception about the difference between inductive and deductive thinking. According to the literal standards of logic, deductive reasoning arrives at certain conclusions while inductive reasoning arrives at probable conclusions.  Hume's treatment of induction helps to establish the grounds for probability, as he writes in A Treatise of Human Nature that "probability is founded on the presumption of a resemblance betwixt those objects, of which we have had experience, and those, of which we have had none" (Book I, Part III, Section VI).

Therefore, Hume establishes induction as the very grounds for attributing causation. There might be many effects which stem from a single cause. Over repeated observation, one establishes that a certain set of effects are linked to a certain set of causes. However, the future resemblance of these connections to connections observed in the past depends on induction. Induction allows one to conclude that "Effect A2" was caused by "Cause A2" because a connection between "Effect A1" and "Cause A1" was observed repeatedly in the past. Given that reason alone cannot be sufficient to establish the grounds of induction, Hume implies that induction must be accomplished through imagination. One does not make an inductive inference through a priori reasoning, but through an imaginative step automatically taken by the mind.

Hume does not challenge that induction is performed by the human mind automatically, but rather hopes to show more clearly how much human inference depends on inductive—not a priori—reasoning. He does not deny future uses of induction, but shows that it is distinct from deductive reasoning, helps to ground causation, and wants to inquire more deeply into its validity. Hume offers no solution to the problem of induction himself. He prompts other thinkers and logicians to argue for the validity of induction as an ongoing dilemma for philosophy. A key issue with establishing the validity of induction is that one is tempted to use an inductive inference as a form of justification itself. This is because people commonly justify the validity of induction by pointing to the many instances in the past when induction proved to be accurate. For example, one might argue that it is valid to use inductive inference in the future because this type of reasoning has yielded accurate results in the past. However, this argument relies on an inductive premise itself—that past observations of induction being valid will mean that future observations of induction will also be valid. Thus, many solutions to the problem of induction tend to be circular.

Nelson Goodman's new riddle of induction

Nelson Goodman's Fact, Fiction, and Forecast (1955) presented a different description of the problem of induction in the chapter entitled "The New Riddle of Induction". Goodman proposed the new predicate "grue". Something is grue if and only if it has been (or will be, according to a scientific, general hypothesis) observed to be green before a certain time t, and blue if observed after that time. The "new" problem of induction is, since all emeralds we have ever seen are both green and grue, why do we suppose that after time t we will find green but not grue emeralds? The problem here raised is that two different inductions will be true and false under the same conditions. In other words:

  • Given the observations of a lot of green emeralds, someone using a common language will inductively infer that all emeralds are green (therefore, he will believe that any emerald he will ever find will be green, even after time t).
  • Given the same set of observations of green emeralds, someone using the predicate "grue" will inductively infer that all emeralds observed after t will be blue, despite the fact that he has observed only green emeralds so far.
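
The two competing inferences above can be made concrete in a short sketch. Python is used here purely for illustration, and the cutoff year T and the observation records are arbitrary assumptions for this example, not anything specified by Goodman:

    # Goodman's "grue" and "bleen" predicates, with an arbitrary cutoff year T.
    T = 2100

    def is_grue(colour: str, observed_at: int) -> bool:
        # Grue: green if observed before T, blue if observed after T.
        return colour == "green" if observed_at < T else colour == "blue"

    def is_bleen(colour: str, observed_at: int) -> bool:
        # Bleen: blue if observed before T, green if observed after T.
        # (Symmetrically, "green" could be defined as grue-before-T, bleen-after-T.)
        return colour == "blue" if observed_at < T else colour == "green"

    # Every emerald observed so far (all before T) has been green...
    observations = [("green", year) for year in range(2000, 2025)]
    assert all(colour == "green" for colour, _ in observations)
    # ...and exactly the same evidence makes every one of them grue:
    assert all(is_grue(colour, year) for colour, year in observations)
    assert not any(is_bleen(colour, year) for colour, year in observations)
    # "All emeralds are green" predicts a green emerald after T, while
    # "all emeralds are grue" predicts a blue one: two incompatible
    # generalisations supported by identical observations.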

One could argue, using Occam's razor, that greenness is more likely than grueness because the concept of grueness is more complex than that of greenness. Goodman, however, points out that the predicate "grue" only appears more complex than the predicate "green" because we have defined grue in terms of blue and green. If we had always been brought up to think in terms of "grue" and "bleen" (where bleen is blue before time t, and green thereafter), we would intuitively consider "green" to be a crazy and complicated predicate. Goodman believed that which scientific hypotheses we favour depends on which predicates are "entrenched" in our language.

Willard Van Orman Quine offers a practical solution to this problem by making the metaphysical claim that only predicates that identify a "natural kind" (i.e. a real property of real things) can be legitimately used in a scientific hypothesis. R. Bhaskar also offers a practical solution to the problem. He argues that the problem of induction only arises if we deny the possibility of a reason for the predicate, located in the enduring nature of something. For example, we know that all emeralds are green, not because we have only ever seen green emeralds, but because the chemical make-up of emeralds ensures that they must be green. If we were to change that structure, they would not be green. For instance, emeralds are a kind of green beryl, made green by trace amounts of chromium and sometimes vanadium. Without these trace elements, the gems would be colourless.

Notable interpretations

Hume

Although induction is not made by reason, Hume observes that we nonetheless perform it and improve from it. He proposes a descriptive explanation for the nature of induction in §5 of the Enquiry, titled "Skeptical solution of these doubts". It is by custom or habit that one draws the inductive connection described above, and "without the influence of custom we would be entirely ignorant of every matter of fact beyond what is immediately present to the memory and senses". The result of custom is belief, which is instinctual and much stronger than imagination alone.

John Maynard Keynes

In his Treatise on Probability, John Maynard Keynes notes:

An inductive argument affirms, not that a certain matter of fact is so, but that relative to certain evidence there is a probability in its favour. The validity of the induction, relative to the original evidence, is not upset, therefore, if, as a fact, the truth turns out to be otherwise.

This approach was endorsed by Bertrand Russell.

David Stove and Donald Williams

David Stove's argument for induction, based on the statistical syllogism, was presented in The Rationality of Induction and was developed from an argument put forward by one of Stove's heroes, the late Donald Cary Williams (formerly Professor at Harvard) in his book The Ground of Induction. Stove argued that it is a statistical truth that the great majority of the possible subsets of specified size (as long as this size is not too small) are similar to the larger population to which they belong. For example, the majority of the possible subsets of 3000 ravens which you can form from the raven population are similar to the population itself (and this applies no matter how large the raven population is, as long as it is not infinite). Consequently, Stove argued that if you find yourself with such a subset then the chances are that this subset is one of the ones that are similar to the population, and so you are justified in concluding that it is likely that this subset "matches" the population reasonably closely. The situation would be analogous to drawing a ball out of a barrel of balls, 99% of which are red. In such a case you have a 99% chance of drawing a red ball. Similarly, when getting a sample of ravens the probability is very high that the sample is one of the matching or "representative" ones. So as long as you have no reason to think that your sample is an unrepresentative one, you are justified in thinking that it probably (although not certainly) is.
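
A minimal simulation can illustrate the statistical fact Stove appeals to. The population size, the assumed 95% proportion of black ravens, the tolerance for what counts as "matching", and the number of trials below are arbitrary assumptions made for this sketch, not figures from Stove or Williams:

    # Monte Carlo illustration (Python): almost every reasonably large sample
    # resembles the population it is drawn from. All numbers are arbitrary.
    import random

    random.seed(0)
    POPULATION_SIZE = 1_000_000
    TRUE_PROPORTION = 0.95   # assumed fraction of black ravens in the population
    SAMPLE_SIZE = 3000
    TOLERANCE = 0.02         # a sample "matches" if within 2 percentage points
    TRIALS = 2000

    black = int(POPULATION_SIZE * TRUE_PROPORTION)
    population = [1] * black + [0] * (POPULATION_SIZE - black)

    matching = 0
    for _ in range(TRIALS):
        sample = random.sample(population, SAMPLE_SIZE)
        sample_proportion = sum(sample) / SAMPLE_SIZE
        if abs(sample_proportion - TRUE_PROPORTION) <= TOLERANCE:
            matching += 1

    # With these numbers the sampling error is tiny (roughly 0.4 percentage
    # points), so effectively 100% of the samples fall within the tolerance,
    # mirroring Stove's claim about the majority of possible subsets.
    print(f"{matching / TRIALS:.1%} of samples matched the population")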

Biting the bullet: Keith Campbell and Claudio Costa

An intuitive answer to Hume would be to say that a world inaccessible to any inductive procedure would simply not be conceivable. This intuition was taken into account by Keith Campbell by considering that, to be built, a concept must be reapplied, which demands a certain continuity in its object of application and consequently some openness to induction. Claudio Costa has noted that a future can only be a future of its own past if it holds some identity with it. Moreover, the nearer a future is to the point of junction with its past, the greater the similarities involved tend to be. Consequently – contra Hume – some form of principle of homogeneity (causal or structural) between future and past must be warranted, which would make some inductive procedure always possible.

Karl Popper

Karl Popper, a philosopher of science, sought to solve the problem of induction. He argued that science does not use induction, and induction is in fact a myth. Instead, knowledge is created by conjecture and criticism. The main role of observations and experiments in science, he argued, is in attempts to criticize and refute existing theories.

According to Popper, the problem of induction as usually conceived is asking the wrong question: it is asking how to justify theories given that they cannot be justified by induction. Popper argued that justification is not needed at all, and seeking justification "begs for an authoritarian answer". Instead, Popper said, what should be done is to look for and correct errors. Popper regarded theories that have survived criticism as better corroborated in proportion to the amount and stringency of the criticism, but, in sharp contrast to the inductivist theories of knowledge, emphatically as less likely to be true. Popper held that seeking theories with a high probability of being true was a false goal that is in conflict with the search for knowledge. Science should seek theories that are, on the one hand, most probably false (which is the same as saying that they are highly falsifiable, so that there are many ways they could turn out to be wrong) but for which, on the other hand, all actual attempts to falsify them have so far failed (so that they are highly corroborated).

Wesley C. Salmon criticizes Popper on the grounds that predictions need to be made both for practical purposes and in order to test theories. That means Popperians need to make a selection from the number of unfalsified theories available to them, which is generally more than one. Popperians would wish to choose well-corroborated theories, in their sense of corroboration, but face a dilemma: either they are making the essentially inductive claim that a theory's having survived criticism in the past means it will be a reliable predictor in the future; or Popperian corroboration is no indicator of predictive power at all, so there is no rational motivation for their preferred selection principle.

David Miller has criticized this kind of criticism by Salmon and others because it makes inductivist assumptions. Popper does not say that corroboration is an indicator of predictive power. The predictive power is in the theory itself, not in its corroboration. The rational motivation for choosing a well-corroborated theory is that it is simply easier to falsify: Well-corroborated means that at least one kind of experiment (already conducted at least once) could have falsified (but did not actually falsify) the one theory, while the same kind of experiment, regardless of its outcome, could not have falsified the other. So it is rational to choose the well-corroborated theory: It may not be more likely to be true, but if it is actually false, it is easier to get rid of when confronted with the conflicting evidence that will eventually turn up. Accordingly, it is wrong to consider corroboration as a reason, a justification for believing in a theory or as an argument in favor of a theory to convince someone who objects to it.
