Logical positivism

Logical positivism, later called logical empiricism (the two together are also known as neopositivism), was a movement whose central thesis was the verification principle, also known as the verifiability criterion of meaning. This theory of knowledge asserted that only statements verifiable through direct observation or logical proof are meaningful in terms of conveying truth value, information, or factual content. Starting in the late 1920s, groups of philosophers, scientists, and mathematicians formed the Berlin Circle and the Vienna Circle, which, in these two cities, would propound the ideas of logical positivism.

Flourishing in several European centres through the 1930s, the movement sought to prevent confusion rooted in unclear language and unverifiable claims by converting philosophy into "scientific philosophy", which, according to the logical positivists, ought to share the bases and structures of empirical sciences' best examples, such as Albert Einstein's general theory of relativity. Despite its ambition to overhaul philosophy by studying and mimicking the extant conduct of empirical science, logical positivism became erroneously stereotyped as a movement to regulate the scientific process and to place strict standards on it.

After World War II, the movement shifted to a milder variant, logical empiricism, led mainly by Carl Hempel, who, during the rise of Nazism, had immigrated to the United States. In the ensuing years, the movement's central premises, still unresolved, were heavily criticised by leading philosophers, particularly Willard van Orman Quine and Karl Popper, and even, within the movement itself, by Hempel. The 1962 publication of Thomas Kuhn's landmark book The Structure of Scientific Revolutions dramatically shifted academic philosophy's focus. In 1967 philosopher John Passmore pronounced logical positivism "dead, or as dead as a philosophical movement ever becomes".

Origins

Logical positivists picked from Ludwig Wittgenstein's early philosophy of language the verifiability principle, or criterion of meaningfulness. As in Ernst Mach's phenomenalism, whereby the mind knows only actual or potential sensory experience, verificationists took all sciences' basic content to be only sensory experience. Some influence also came from Percy Bridgman's musings, which others proclaimed as operationalism, whereby a physical theory is understood by the laboratory procedures scientists perform to test its predictions. In verificationism, only the verifiable was scientific, and thus meaningful (or cognitively meaningful), whereas the unverifiable, being unscientific, was a meaningless "pseudostatement" (merely emotively meaningful). Unscientific discourse, as in ethics and metaphysics, would be unfit for discourse by philosophers, who were newly tasked to organize knowledge, not develop new knowledge.

Definitions

Logical positivism is sometimes stereotyped as forbidding talk of unobservables, such as microscopic entities or such notions as causality and general principles, but that is an exaggeration. Rather, most neopositivists viewed talk of unobservables as metaphorical or elliptical: direct observations phrased abstractly or indirectly. So theoretical terms would garner meaning from observational terms via correspondence rules, and thereby theoretical laws would be reduced to empirical laws. Via Bertrand Russell's logicism, reducing mathematics to logic, physics' mathematical formulas would be converted to symbolic logic. Via Russell's logical atomism, ordinary language would break into discrete units of meaning. Rational reconstruction, then, would convert ordinary statements into standardized equivalents, all networked and united by a logical syntax. A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth.

Development

In the late 1930s, logical positivists fled Germany and Austria for Britain and the United States. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, whereby science's content is not actual or potential sensations but publicly observable entities. Rudolf Carnap, who had sparked logical positivism in the Vienna Circle, had sought to replace verification with simple confirmation. With World War II's close in 1945, logical positivism became the milder logical empiricism, led largely by Carl Hempel in America, who expounded the covering law model of scientific explanation. Logical positivism became a major underpinning of analytic philosophy and dominated philosophy in the English-speaking world, including philosophy of science, while influencing the sciences, especially the social sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly criticized, most trenchantly by Willard Van Orman Quine, Norwood Hanson, Karl Popper, Thomas Kuhn, and Carl Hempel.

Roots

Language

Tractatus Logico-Philosophicus, by the young Ludwig Wittgenstein, introduced the view of philosophy as "critique of language", offering the possibility of a theoretically principled distinction between intelligible and nonsensical discourse. The Tractatus adhered to a correspondence theory of truth (versus a coherence theory of truth). Wittgenstein's influence also shows in some versions of the verifiability principle. In tractarian doctrine, truths of logic are tautologies, a view widely accepted by logical positivists, who were also influenced by Wittgenstein's interpretation of probability, although, according to Neurath, some logical positivists found the Tractatus to contain too much metaphysics.

Logicism

Gottlob Frege began the program of reducing mathematics to logic and continued it with Bertrand Russell, but lost interest in this logicism; Russell continued it with Alfred North Whitehead in their Principia Mathematica, inspiring some of the more mathematical logical positivists, such as Hans Hahn and Rudolf Carnap. Carnap's early anti-metaphysical works employed Russell's theory of types. Carnap envisioned a universal language that could reconstruct mathematics and thereby encode physics. Yet Kurt Gödel's incompleteness theorem showed this to be impossible except in trivial cases, and Alfred Tarski's undefinability theorem shattered all hopes of reducing mathematics to logic. Thus, a universal language failed to stem from Carnap's 1934 work Logische Syntax der Sprache (Logical Syntax of Language). Still, some logical positivists, including Carl Hempel, continued to support logicism.

Empiricism

In Germany, Hegelian metaphysics was a dominant movement, and Hegelian successors such as F. H. Bradley explained reality by postulating metaphysical entities lacking empirical basis, drawing a reaction in the form of positivism. Starting in the late 19th century, there was a "back to Kant" movement. Ernst Mach's positivism and phenomenalism were a major influence.

Origins

Vienna

The Vienna Circle, gathering around the University of Vienna and the Café Central, was led principally by Moritz Schlick. Schlick had held a neo-Kantian position but later converted, via Carnap's 1928 book Der logische Aufbau der Welt (The Logical Structure of the World). A 1929 pamphlet written by Otto Neurath, Hans Hahn, and Rudolf Carnap summarized the Vienna Circle's positions. Another member of the Vienna Circle to later prove very influential was Carl Hempel. A friendly but tenacious critic of the Circle was Karl Popper, whom Neurath nicknamed the "Official Opposition".

Carnap and other Vienna Circle members, including Hahn and Neurath, saw the need for a weaker criterion of meaningfulness than verifiability. A radical "left" wing, led by Neurath and Carnap, began the program of "liberalization of empiricism"; they also emphasized fallibilism and pragmatics, the latter of which Carnap even suggested as empiricism's basis. A conservative "right" wing, led by Schlick and Waismann, rejected both the liberalization of empiricism and the epistemological nonfoundationalism of a move from phenomenalism to physicalism. As Neurath and, to some degree, Carnap posed science toward social reform, the split in the Vienna Circle also reflected political views.

Berlin

The Berlin Circle was led principally by Hans Reichenbach.

Rivals

Both Moritz Schlick and Rudolf Carnap had been influenced by, and sought to define logical positivism against, the neo-Kantianism of Ernst Cassirer—the then-leading figure of the so-called Marburg school—and against Edmund Husserl's phenomenology. Logical positivists especially opposed Martin Heidegger's obscure metaphysics, the epitome of what logical positivism rejected. In the early 1930s, Carnap debated Heidegger over "metaphysical pseudosentences". Despite its revolutionary aims, logical positivism was but one view among many vying within Europe, and logical positivists initially spoke their language.

Export

As the movement's first emissary to the New World, Moritz Schlick visited Stanford University in 1929, yet otherwise remained in Vienna, where he was murdered at the university in 1936 by a former student, Johann Nelböck, who was reportedly deranged. That year, A. J. Ayer, a British attendee at some Vienna Circle meetings since 1933, saw his Language, Truth and Logic, written in English, import logical positivism to the English-speaking world. By then, the Nazi Party's 1933 rise to power in Germany had triggered a flight of intellectuals. In exile in England, Otto Neurath died in 1945. Rudolf Carnap, Hans Reichenbach, and Carl Hempel—Carnap's protégé who had studied in Berlin with Reichenbach—settled permanently in America. Upon Germany's annexation of Austria in 1938, the remaining logical positivists, many of whom were also Jewish, were targeted and continued to flee. Logical positivism thus became dominant in the English-speaking world.

Principles

Analytic/synthetic gap

Concerning reality, the necessary is a state true in all possible worlds—mere logical validity—whereas the contingent hinges on the way the particular world is. Concerning knowledge, the a priori is knowable before or without, whereas the a posteriori is knowable only after or through, relevant experience. Concerning statements, the analytic is true via terms' arrangement and meanings, thus a tautology—true by logical necessity but uninformative about the world—whereas the synthetic adds reference to a state of facts, a contingency.

In 1739, David Hume cast a fork aggressively dividing "relations of ideas" from "matters of fact and real existence", such that all truths are of one type or the other. By Hume's fork, truths by relations among ideas (abstract) all align on one side (analytic, necessary, a priori), whereas truths by states of actualities (concrete) always align on the other side (synthetic, contingent, a posteriori). Of any treatises containing neither, Hume orders, "Commit it then to the flames, for it can contain nothing but sophistry and illusion".

Thus awakened from "dogmatic slumber", Immanuel Kant quested to answer Hume's challenge—but by explaining how metaphysics is possible. Eventually, in his 1781 work, Kant crossed the tines of Hume's fork to identify another range of truths by necessity—synthetic a priori, statements claiming states of facts but known true before experience—by arriving at transcendental idealism, attributing to the mind a constructive role in phenomena by arranging sense data into the very experience of space, time, and substance. Thus, Kant saved Newton's law of universal gravitation from Hume's problem of induction by finding the uniformity of nature to be a priori knowledge. Logical positivists rejected Kant's synthetic a priori and adopted Hume's fork, whereby a statement is either analytic and a priori (thus necessary and verifiable logically) or synthetic and a posteriori (thus contingent and verifiable empirically).
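
The fork, as the positivists adopted it, can be set out compactly; the layout below is a paraphrase of the scheme just described, not the article's own notation:

```latex
% Hume's fork as adopted by the logical positivists: every cognitively
% meaningful statement falls on exactly one tine; Kant's synthetic
% a priori is rejected as an empty middle category.
\[
\begin{array}{lll}
\text{analytic}  & \text{a priori}     & \text{necessary (verifiable logically)} \\
\text{synthetic} & \text{a posteriori} & \text{contingent (verifiable empirically)}
\end{array}
\]
```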

Observation/theory gap

Early on, most logical positivists proposed that all knowledge is based on logical inference from simple "protocol sentences" grounded in observable facts. In Carnap's 1936 and 1937 papers "Testability and meaning", individual terms replace sentences as the units of meaning. Further, theoretical terms no longer need to acquire meaning by explicit definition from observational terms: the connection may be indirect, through a system of implicit definitions. Carnap also provided an important, pioneering discussion of disposition predicates.

Cognitive meaningfulness

Verification

The logical positivists' initial stance was that a statement is "cognitively meaningful" in terms of conveying truth value, information or factual content only if some finite procedure conclusively determines its truth. By this verifiability principle, only statements verifiable either by their analyticity or by empiricism were cognitively meaningful. Metaphysics, ontology, as well as much of ethics failed this criterion, and so were found cognitively meaningless. Moritz Schlick, however, did not view ethical or aesthetic statements as cognitively meaningless. Cognitive meaningfulness was variously defined: having a truth value; corresponding to a possible state of affairs; intelligible or understandable as are scientific statements.

Ethics and aesthetics were subjective preferences, while theology and other metaphysics contained "pseudostatements", neither true nor false. This meaningfulness was cognitive, although other types of meaningfulness—for instance, emotive, expressive, or figurative—occurred in metaphysical discourse, which was dismissed from further review. Thus, logical positivism indirectly asserted Hume's law, the principle that "is" statements cannot justify "ought" statements but are separated from them by an unbridgeable gap. A. J. Ayer's 1936 book asserted an extreme variant—the boo/hooray doctrine—whereby all evaluative judgments are but emotional reactions.

Confirmation

In an important pair of papers in 1936 and 1937, "Testability and meaning", Carnap replaced verification with confirmation, on the view that although universal laws cannot be verified they can be confirmed. Later, Carnap employed abundant logical and mathematical methods in researching inductive logic while seeking to provide an account of probability as "degree of confirmation", but was never able to formulate a model. In Carnap's inductive logic, every universal law's degree of confirmation is always zero. In any event, the precise formulation of what came to be called the "criterion of cognitive significance" took three decades (Hempel 1950, Carnap 1956, Carnap 1961).

Carl Hempel became a major critic within the logical positivism movement. Hempel criticized the positivist thesis that empirical knowledge is restricted to Basissätze/Beobachtungssätze/Protokollsätze (basic statements or observation statements or protocol statements). Hempel elucidated the paradox of confirmation.

Weak verification

The second edition of A. J. Ayer's book arrived in 1946, and discerned strong versus weak forms of verification. Ayer concluded, "A proposition is said to be verifiable, in the strong sense of the term, if, and only if, its truth could be conclusively established by experience", but is verifiable in the weak sense "if it is possible for experience to render it probable". And yet, "no proposition, other than a tautology, can possibly be anything more than a probable hypothesis". Thus, all are open to weak verification.

Philosophy of science

Upon the global defeat of Nazism, and the removal from philosophy of rivals for radical reform—Marburg neo-Kantianism, Husserlian phenomenology, Heidegger's "existential hermeneutics"—and while hosted in the climate of American pragmatism and commonsense empiricism, the neopositivists shed much of their earlier, revolutionary zeal. No longer crusading to revise traditional philosophy into a new scientific philosophy, they became respectable members of a new philosophy subdiscipline, philosophy of science. Receiving support from Ernest Nagel, logical empiricists were especially influential in the social sciences.

Explanation

Comtean positivism had viewed science as description, whereas the logical positivists posed science as explanation, perhaps to better realize the envisioned unity of science by covering not only fundamental science—that is, fundamental physics—but the special sciences, too, for instance biology, anthropology, psychology, sociology, and economics. The most widely accepted concept of scientific explanation, held even by neopositivist critic Karl Popper, was the deductive-nomological model (DN model). Yet the DN model received its greatest explication from Carl Hempel, first in his 1942 article "The function of general laws in history", and more explicitly with Paul Oppenheim in their 1948 article "Studies in the logic of explanation".

In the DN model, the stated phenomenon to be explained is the explanandum—which can be an event, law, or theory—whereas the premises stated to explain it are the explanans. The explanans must be true or highly confirmed, contain at least one law, and entail the explanandum. Thus, given initial conditions C1, C2 . . . Cn plus general laws L1, L2 . . . Ln, event E is a deductive consequence and is scientifically explained. In the DN model, a law is an unrestricted generalization by conditional proposition—if A, then B—and has testable empirical content. (Differing from a merely true regularity—for instance, George always carries only $1 bills in his wallet—a law suggests what must be true, and is a consequence of a scientific theory's axiomatic structure.)
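
Schematically, a DN explanation is a deductive argument with the explanans above the inference line and the explanandum below; the rendering here is a common textbook layout, not a quotation from Hempel and Oppenheim:

```latex
% Deductive-nomological schema: general laws plus initial conditions
% jointly entail the event to be explained.
\[
\begin{array}{ll}
L_1, L_2, \ldots, L_n & \text{(general laws)} \\
C_1, C_2, \ldots, C_n & \text{(initial conditions)} \\
\hline
E & \text{(explanandum)}
\end{array}
\]
```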

By the Humean empiricist view that humans observe sequences of events (not cause and effect, as causality and causal mechanisms are unobservable), the DN model neglects causality beyond mere constant conjunction: first event A and then always event B. Hempel's explication of the DN model held natural laws—empirically confirmed regularities—as satisfactory and, if formulated realistically, approximating causal explanation. In later articles, Hempel defended the DN model and proposed a probabilistic explanation, the inductive-statistical model (IS model). The DN and IS models together form the covering law model, as named by a critic, William Dray. Derivation of statistical laws from other statistical laws falls under the deductive-statistical model (DS model). Georg Henrik von Wright, another critic, named it subsumption theory, fitting the ambition of theory reduction.

Unity of science

Logical positivists were generally committed to "Unified Science", and sought a common language or, in Neurath's phrase, a "universal slang" whereby all scientific propositions could be expressed. The adequacy of proposals or fragments of proposals for such a language was often asserted on the basis of various "reductions" or "explications" of the terms of one special science to the terms of another, putatively more fundamental. Sometimes these reductions consisted of set-theoretic manipulations of a few logically primitive concepts (as in Carnap's Logical Structure of the World, 1928). Sometimes, these reductions consisted of allegedly analytic or a priori deductive relationships (as in Carnap's "Testability and meaning"). A number of publications over a period of thirty years would attempt to elucidate this concept.

Theory reduction

As in Comtean positivism's envisioned unity of science, neopositivists aimed to network all special sciences through the covering law model of scientific explanation. And ultimately, by supplying boundary conditions and supplying bridge laws within the covering law model, all the special sciences' laws would reduce to fundamental physics, the fundamental science.

Critics

After World War II, key tenets of logical positivism, including its atomistic philosophy of science, the verifiability principle, and the fact/value gap, drew escalated criticism. The verifiability criterion made universal statements 'cognitively' meaningless, and even rendered meaningless statements that lay beyond verification for technological rather than conceptual reasons, which was taken to pose significant problems for the philosophy of science. These problems were recognized within the movement, which hosted attempted solutions—Carnap's move to confirmation, Ayer's acceptance of weak verification—but the program drew sustained criticism from a number of directions by the 1950s. Even philosophers who disagreed among themselves on the direction that general epistemology and the philosophy of science ought to take agreed that the logical empiricist program was untenable, and it became viewed as self-contradictory: the verifiability criterion of meaning was itself unverified. Notable critics included Popper, Quine, Hanson, Kuhn, Putnam, Austin, Strawson, Goodman, and Rorty.

Popper

An early, tenacious critic was Karl Popper, whose 1934 book Logik der Forschung, arriving in English in 1959 as The Logic of Scientific Discovery, directly answered verificationism. Popper considered the problem of induction as rendering empirical verification logically impossible, while the deductive fallacy of affirming the consequent reveals any phenomenon's capacity to host more than one logically possible explanation. Accepting scientific method as hypothetico-deduction, whose inference form is denying the consequent, Popper found scientific method unable to proceed without falsifiable predictions. Popper thus identified falsifiability to demarcate not the meaningful from the meaningless but simply the scientific from the unscientific—a label not in itself unfavorable.

Popper found virtue in metaphysics, required to develop new scientific theories. An unfalsifiable—thus unscientific, perhaps metaphysical—concept in one era can later, through evolving knowledge or technology, become falsifiable, thus scientific. Popper also found science's quest for truth to rest on values. Popper disparaged the pseudoscientific, which occurs when an unscientific theory is proclaimed true and coupled with seemingly scientific method by "testing" the unfalsifiable theory—whose predictions are confirmed by necessity—or when a scientific theory's falsifiable predictions are strongly falsified but the theory is persistently protected by "immunizing stratagems", such as the appendage of ad hoc clauses saving the theory or the recourse to increasingly speculative hypotheses shielding the theory.

Explicitly denying the positivist view of meaning and verification, Popper developed the epistemology of critical rationalism, which considers that human knowledge evolves by conjectures and refutations, and that no number, degree, or variety of empirical successes can either verify or confirm scientific theory. For Popper, science's aim is corroboration of scientific theory, which strives for scientific realism but accepts the maximal status of strongly corroborated verisimilitude ("truthlikeness"). Popper thus acknowledged the value of the positivist movement's emphasis on science but claimed that he had "killed positivism".

Quine

Although an empiricist, the American logician Willard Van Orman Quine published the 1951 paper "Two Dogmas of Empiricism", which challenged conventional empiricist presumptions. Quine attacked the analytic/synthetic division, on which the verificationist program hinged in order to entail, by consequence of Hume's fork, both necessity and aprioricity. Quine's ontological relativity explained that every term in any statement has its meaning contingent on a vast network of knowledge and belief, the speaker's conception of the entire world. Quine later proposed naturalized epistemology.

Hanson

In 1958, Norwood Hanson's Patterns of Discovery undermined the division of observation versus theory, as one can predict, collect, prioritize, and assess data only via some horizon of expectation set by a theory. Thus, any dataset—the direct observations, the scientific facts—is laden with theory.

Kuhn

With his landmark The Structure of Scientific Revolutions (1962), Thomas Kuhn critically destabilized the verificationist program, which was presumed to call for foundationalism. (But already in the 1930s, Otto Neurath had argued for nonfoundationalism via coherentism by likening science to a boat (Neurath's boat) that scientists must rebuild at sea.) Although Kuhn's thesis itself was attacked even by opponents of neopositivism, in the 1970 postscript to Structure, Kuhn asserted, at least, that there was no algorithm to science—and, on that, even most of Kuhn's critics agreed.

Powerful and persuasive, Kuhn's book, unlike the vocabulary and symbols of logic's formal language, was written in natural language open to the layperson. Kuhn's book was first published in a volume of the International Encyclopedia of Unified Science—a project begun by logical positivists but co-edited by Neurath, whose view of science was already nonfoundationalist, as mentioned above—and in some sense it unified science, indeed, but by bringing it into the realm of historical and social assessment rather than fitting it to the model of physics. Kuhn's ideas were rapidly adopted by scholars in disciplines well outside the natural sciences and, as logical empiricists were extremely influential in the social sciences, ushered academia into postpositivism or postempiricism.

Putnam

The "received view" operates on the correspondence rule that states, "The observational terms are taken as referring to specified phenomena or phenomenal properties, and the only interpretation given to the theoretical terms is their explicit definition provided by the correspondence rules". According to Hilary Putnam, a former student of Reichenbach and of Carnap, the dichotomy of observational terms versus theoretical terms introduced a problem within scientific discussion that was nonexistent until this dichotomy was stated by logical positivists. Putnam's four objections:

  • Something is referred to as "observational" if it is observable directly with our senses. Then an observational term cannot be applied to something unobservable. If this is the case, there are no observational terms.
  • With Carnap's classification, some unobservable terms are not even theoretical and belong to neither observational terms nor theoretical terms. Some theoretical terms refer primarily to observational terms.
  • Reports of observational terms frequently contain theoretical terms.
  • A scientific theory may not contain any theoretical terms (an example of this is Darwin's original theory of evolution).

Putnam also alleged that positivism was actually a form of metaphysical idealism in its rejecting scientific theory's ability to garner knowledge about nature's unobservable aspects. With his "no miracles" argument, posed in 1974, Putnam asserted scientific realism, the stance that science achieves true—or approximately true—knowledge of the world as it exists independently of humans' sensory experience. In this, Putnam opposed not only positivism but also other forms of instrumentalism—whereby scientific theory is but a human tool to predict human observations—that filled the void left by positivism's decline.

Decline and legacy

By the late 1960s, logical positivism had become exhausted. In 1976, A. J. Ayer quipped that "the most important" defect of logical positivism "was that nearly all of it was false", though he maintained "it was true in spirit." Although logical positivism tends to be recalled as a pillar of scientism, Carl Hempel was key in establishing the philosophy subdiscipline philosophy of science where Thomas Kuhn and Karl Popper brought in the era of postpositivism. John Passmore found logical positivism to be "dead, or as dead as a philosophical movement ever becomes".

Logical positivism's fall reopened debate over the metaphysical merit of scientific theory, whether it can offer knowledge of the world beyond human experience (scientific realism) versus whether it is but a human tool to predict human experience (instrumentalism). Meanwhile, it became popular among philosophers to rehash the faults and failures of logical positivism without investigation of them. Thereby, logical positivism has been generally misrepresented, sometimes severely. Arguing for their own views, often framed versus logical positivism, many philosophers have reduced logical positivism to simplisms and stereotypes, especially the notion of logical positivism as a type of foundationalism. In any event, the movement helped anchor analytic philosophy in the English-speaking world, and returned Britain to empiricism. Without the logical positivists, who have been tremendously influential outside philosophy, especially in psychology and other social sciences, intellectual life of the 20th century would be unrecognizable.

Medieval Warm Period

Global average temperatures show that the Medieval Warm Period was not a global phenomenon.

The Medieval Warm Period (MWP), also known as the Medieval Climate Optimum or the Medieval Climatic Anomaly, was a time of warm climate in the North Atlantic region that lasted from c. 950 to c. 1250. Climate proxy records show peak warmth occurred at different times for different regions, which indicate that the MWP was not a globally uniform event. Some refer to the MWP as the Medieval Climatic Anomaly to emphasize that climatic effects other than temperature were also important.

The MWP was followed by a regionally cooler period in the North Atlantic and elsewhere, which is sometimes called the Little Ice Age (LIA).

Possible causes of the MWP include increased solar activity, decreased volcanic activity, and changes in ocean circulation.

Research

The Medieval Warm Period (MWP) is generally thought to have occurred from c. 950 to c. 1250, during the European Middle Ages. In 1965, Hubert Lamb, one of the first paleoclimatologists, published research based on data from botany, historical document research, and meteorology, combined with records indicating prevailing temperature and rainfall in England around c. 1200 and around c. 1600. He proposed, "Evidence has been accumulating in many fields of investigation pointing to a notably warm climate in many parts of the world, that lasted a few centuries around c. 1000–c. 1200 AD, and was followed by a decline of temperature levels till between c. 1500 and c. 1700 the coldest phase since the last ice age occurred."

The era of warmer temperatures became known as the Medieval Warm Period and the subsequent cold period the Little Ice Age (LIA). However, the view that the MWP was a global event was challenged by other researchers. The IPCC First Assessment Report of 1990 discussed the "Medieval Warm Period around 1000 AD (which may not have been global) and the Little Ice Age which ended only in the middle to late nineteenth century." It stated that temperatures in the "late tenth to early thirteenth centuries (about AD 950–1250) appear to have been exceptionally warm in western Europe, Iceland and Greenland." The IPCC Third Assessment Report from 2001 summarized newer research: "evidence does not support globally synchronous periods of anomalous cold or warmth over this time frame, and the conventional terms of 'Little Ice Age' and 'Medieval Warm Period' appear to have limited utility in describing trends in hemispheric or global mean temperature changes in past centuries."

Global temperature records taken from ice cores, tree rings, and lake deposits have shown that, globally, the Earth may have been slightly cooler during the MWP (by 0.03 °C) than in the early and mid-20th century.

Palaeoclimatologists developing region-specific climate reconstructions of past centuries conventionally label their coldest interval as the "LIA" and their warmest interval as the "MWP". Others follow the convention, and when a significant climate event is found in the "LIA" or "MWP" timeframes, they associate the event with the period. Some "MWP" events are thus wet events or cold events rather than strictly warm events, particularly in central Antarctica, where climate patterns opposite to those of the North Atlantic have been noticed.

Global climate during the Medieval Warm Period

In 2019, by using an extended proxy data set, the PAGES 2k Consortium confirmed that the Medieval Climate Anomaly was not a globally synchronous event. The warmest 51-year period within the MWP did not occur at the same time in different regions. They argue that a regional rather than global framing of climate variability in the preindustrial Common Era aids understanding.

North Atlantic

Greenland ice sheet temperatures interpreted from the ¹⁸O isotope record of 6 ice cores (Vinther, B., et al., 2009). The data set ranges from 9690 BC to AD 1970 and has a resolution of around 20 years, meaning that each data point represents the average temperature of the surrounding 20 years.
 

Lloyd D. Keigwin's 1996 study of radiocarbon-dated box core data from marine sediments in the Sargasso Sea found that its sea surface temperature was approximately 1 °C (1.8 °F) cooler than today approximately 400 years ago (during the LIA) and 1700 years ago, and was approximately 1 °C warmer 1000 years ago (during the MWP).

Using sediment samples from Puerto Rico, the Gulf Coast, and the Atlantic Coast from Florida to New England, Mann et al. (2009) found consistent evidence of a peak in North Atlantic tropical cyclone activity during the MWP, which was followed by a subsequent lull in activity.

Iceland

Iceland was first settled between about 865 and 930, during a time believed to be warm enough for sailing and farming. By retrieval and isotope analysis of marine cores and from examination of mollusc growth patterns from Iceland, Patterson et al. reconstructed a stable oxygen (δ¹⁸O) and carbon (δ¹³C) isotope record at a decadal resolution from the Roman Warm Period to the MWP and the LIA. Patterson et al. conclude that the summer temperature stayed high but winter temperature decreased after the initial settlement of Iceland.

Greenland

The last written records of the Norse Greenlanders are of a marriage in 1408 at Hvalsey Church, now the best-preserved of the Norse ruins; the record was made later in Iceland.

The 2009 Mann et al. study found warmth exceeding 1961–1990 levels in southern Greenland and parts of North America during the MWP, which the study defines as from 950 to 1250, with warmth in some regions exceeding temperatures of the 1990–2010 period. Much of the Northern Hemisphere showed a significant cooling during the LIA, which the study defines as from 1400 to 1700, but Labrador and isolated parts of the United States appeared to be approximately as warm as during the 1961–1990 period.

1690 copy of the 1570 Skálholt map, based on documentary information about earlier Norse sites in America.

The Norse colonization of the Americas has been associated with warmer periods. The common theory is that Norsemen took advantage of ice-free seas to colonize areas in Greenland and other outlying lands of the far north. However, a study from Columbia University suggests that Greenland was not colonized in warmer weather and that the warming effect in fact lasted only very briefly. Around AD 1000, the climate was sufficiently warm for the Vikings to journey to Newfoundland and to establish a short-lived outpost there.

L'Anse aux Meadows, Newfoundland, today, with a reconstruction of a Viking settlement.

In around 985, Vikings founded the Eastern and Western Settlements, both near the southern tip of Greenland. In the colony's early stages, they kept cattle, sheep, and goats, with around a quarter of their diet from seafood. After the climate became colder and stormier around 1250, their diet steadily shifted towards ocean sources. By around 1300, seal hunting provided over three quarters of their food.

By 1350, there was reduced demand for their exports, and trade with Europe fell away. The last document from the settlements dates from 1412, and over the following decades, the remaining Europeans left in what seems to have been a gradual withdrawal, which was caused mainly by economic factors such as increased availability of farms in Scandinavian countries.

Europe

Substantial glacial retreat occurred in southern Europe during the MWP. While several smaller glaciers experienced complete deglaciation, larger glaciers in the region survived and now provide insight into the region’s climate history. In addition to warming-induced glacial melt, sedimentary records reveal a period of increased flooding in eastern Europe, coinciding with the MWP, that is attributed to enhanced precipitation from a positive phase of the North Atlantic Oscillation (NAO). Other impacts of climate change can be less apparent, such as a changing landscape. Preceding the MWP, a coastal region in western Sardinia was abandoned by the Romans. Without the influence of human populations, and with a high stand during the MWP, the coastal area was able to expand substantially into the lagoon. When human populations returned to the region, they encountered a land altered by climate change and had to reestablish ports.

Other regions

North America

In Chesapeake Bay (now in Maryland and Virginia, United States), researchers found large temperature excursions (changes from the mean temperature of that time) during the MWP (about 950–1250) and the Little Ice Age (about 1400–1700, with cold periods persisting into the early 20th century), which are possibly related to changes in the strength of North Atlantic thermohaline circulation. Sediments in Piermont Marsh of the lower Hudson Valley show a dry MWP from 800 to 1300.

Prolonged droughts affected many parts of what is now the Western United States, especially eastern California and the western Great Basin. Alaska experienced three intervals of comparable warmth: 1–300, 850–1200, and since 1800. Knowledge of the MWP in North America has been useful in dating occupancy periods of certain Native American habitation sites, especially in arid parts of the Western United States. Droughts in the MWP may have impacted Native American settlements also in the Eastern United States, such as at Cahokia. Review of more recent archaeological research shows that as the search for signs of unusual cultural changes has broadened, some of the early patterns (such as violence and health problems) have been found to be more complicated and regionally varied than had been previously thought. Other patterns, such as settlement disruption, deterioration of long-distance trade, and population movements, have been further corroborated.

Africa

The climate in equatorial eastern Africa has alternated between being drier than today and relatively wet. The climate was drier during the MWP (1000–1270). Off the coast of Africa, isotopic analysis of bones from the Canary Islands’ inhabitants during the MWP-to-LIA transition reveals that the region experienced a 5 °C decrease in air temperature. Over this period, the diet of inhabitants did not appreciably change, which suggests they were remarkably resilient to climate change.

Antarctica

A sediment core from the eastern Bransfield Basin, in the Antarctic Peninsula, preserves climatic events from both the LIA and the MWP. The authors noted, "The late Holocene records clearly identify Neoglacial events of the LIA and Medieval Warm Period (MWP)." Some Antarctic regions were atypically cold, but others were atypically warm between 1000 and 1200.

Pacific Ocean

Corals in the tropical Pacific Ocean suggest that relatively cool and dry conditions may have persisted early in the millennium, which is consistent with a La Niña-like configuration of the El Niño-Southern Oscillation patterns.

In 2013, a study from three US universities, published in Science, showed that the water temperature in the Pacific Ocean was 0.9 degrees warmer during the MWP than during the LIA and 0.65 degrees warmer than in the decades before the study.

South America

The MWP has been noted in Chile in a 1500-year lake bed sediment core, as well as in the Eastern Cordillera of Ecuador.

A reconstruction, based on ice cores, found that the MWP could be distinguished in tropical South America from about 1050 to 1300 and was followed in the 15th century by the LIA. Peak temperatures did not rise to the level of the late 20th century, which was unprecedented in the area during the study period of 1600 years.

Asia

Adhikari and Kumon (2001), investigating sediments in Lake Nakatsuna, in central Japan, found a warm period from 900 to 1200 that corresponded to the MWP and three cool phases, two of which could be related to the LIA. Other research in northeastern Japan showed that there was one warm and humid interval, from 750 to 1200, and two cold and dry intervals, from 1 to 750 and from 1200 to now. Ge et al. studied temperatures in China for the past 2000 years and found high uncertainty prior to the 16th century but good consistency over the last 500 years, highlighted by the two cold periods, 1620s–1710s and 1800s–1860s, and the 20th-century warming. They also found that the warming from the 10th to the 14th centuries in some regions might be comparable in magnitude to the warming of the last few decades of the 20th century, which was unprecedented within the past 500 years. Generally, a warming period coinciding with the MWP was identified in China using multi-proxy temperature data, but the warming was inconsistent across China: significant temperature change from the MWP to the LIA was found for northeast and central-east China but not for northwest China and the Tibetan Plateau.

Alongside an overall warmer climate, areas of Asia experienced wetter conditions during the MWP: southeastern China, India, and far eastern Russia. Peat cores from peatland in southeast China suggest that changes in the East Asian Summer Monsoon (EASM) and the El Niño-Southern Oscillation (ENSO) are responsible for increased precipitation in the region during the MWP. The Indian Summer Monsoon (ISM) was also enhanced during the MWP by a temperature-driven change to the Atlantic Multi-decadal Oscillation (AMO), bringing more precipitation to India. In far eastern Russia, continental regions experienced severe floods during the MWP, while nearby islands experienced less precipitation, leading to a decrease in peatland. Pollen data from this region indicate an expansion of warm-climate vegetation, with an increasing number of broadleaf and a decreasing number of coniferous forests.

Oceania

There is an extreme scarcity of data from Australia for both the MWP and the LIA. However, evidence from wave-built shingle terraces for a permanently full Lake Eyre during the 9th and the 10th centuries is consistent with a La Niña-like configuration, but the data are insufficient to show how lake levels varied from year to year or what climatic conditions elsewhere in Australia were like.

A 1979 study from the University of Waikato found, "Temperatures derived from an 18O/16O profile through a stalagmite found in a New Zealand cave (40.67°S, 172.43°E) suggested the Medieval Warm Period to have occurred between AD c. 1050 and c. 1400 and to have been 0.75 °C warmer than the Current Warm Period." More evidence in New Zealand is from an 1100-year tree-ring record.

Isotope

The three naturally occurring isotopes of hydrogen. The fact that each isotope has one proton makes them all variants of hydrogen: the identity of the isotope is given by the number of protons and neutrons. From left to right, the isotopes are protium (1H) with zero neutrons, deuterium (2H) with one neutron, and tritium (3H) with two neutrons.

Isotopes are distinct nuclear species (or nuclides, as technical term) of the same element. They have the same atomic number (number of protons in their nuclei) and position in the periodic table (and hence belong to the same chemical element), but differ in nucleon numbers (mass numbers) due to different numbers of neutrons in their nuclei. While all isotopes of a given element have almost the same chemical properties, they have different atomic masses and physical properties.

The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. It was coined by Scottish doctor and writer Margaret Todd in 1913 in a suggestion to the British chemist Frederick Soddy.

The number of protons within the atom's nucleus is called its atomic number and is equal to the number of electrons in the neutral (non-ionized) atom. Each atomic number identifies a specific element, but not the isotope; an atom of a given element may have a wide range in its number of neutrons. The number of nucleons (both protons and neutrons) in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number.

For example, carbon-12, carbon-13, and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13, and 14, respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons so that the neutron numbers of these isotopes are 6, 7, and 8 respectively.
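
As a quick illustration of the arithmetic, the neutron number is simply the mass number minus the atomic number. The minimal Python sketch below (the helper and its small table of atomic numbers are illustrative, not from the article) computes N = A − Z for the carbon isotopes just named:

```python
# A minimal sketch, assuming standard atomic numbers; illustrative only.

ATOMIC_NUMBER = {"H": 1, "He": 2, "C": 6, "U": 92}

def neutron_number(element: str, mass_number: int) -> int:
    """Return the neutron number N = A - Z for a given isotope."""
    return mass_number - ATOMIC_NUMBER[element]

for a in (12, 13, 14):
    print(f"carbon-{a}: Z = 6, N = {neutron_number('C', a)}")
# carbon-12: Z = 6, N = 6
# carbon-13: Z = 6, N = 7
# carbon-14: Z = 6, N = 8
```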

Isotope vs. nuclide

A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, whereas the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number has large effects on nuclear properties, but its effect on chemical properties is negligible for most elements. Even for the lightest elements, whose ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect, although it matters in some circumstances (for hydrogen, the lightest element, the isotope effect is large enough to affect biology strongly). The term isotopes (originally also isotopic elements, now sometimes isotopic nuclides) is intended to imply comparison (like synonyms or isomers). For example, the nuclides ¹²₆C, ¹³₆C, and ¹⁴₆C are isotopes (nuclides with the same atomic number but different mass numbers), whereas ⁴⁰₁₈Ar, ⁴⁰₁₉K, and ⁴⁰₂₀Ca are isobars (nuclides with the same mass number). However, isotope is the older term and so is better known than nuclide and is still sometimes used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine.

Notation

An isotope and/or nuclide is specified by the name of the particular element (this indicates the atomic number) followed by a hyphen and the mass number (e.g. helium-3, helium-4, carbon-12, carbon-14, uranium-235 and uranium-239). When a chemical symbol is used, e.g. "C" for carbon, standard notation (now known as "AZE notation" because A is the mass number, Z the atomic number, and E stands for element) is to indicate the mass number (number of nucleons) with a superscript at the upper left of the chemical symbol and to indicate the atomic number with a subscript at the lower left (e.g. ³₂He, ⁴₂He, ¹²₆C, ¹⁴₆C, ²³⁵₉₂U, and ²³⁹₉₂U). Because the atomic number is given by the element symbol, it is common to state only the mass number in the superscript and leave out the atomic number subscript (e.g. ³He, ⁴He, ¹²C, ¹⁴C, ²³⁵U, and ²³⁹U). The letter m is sometimes appended after the mass number to indicate a nuclear isomer, a metastable or energetically excited nuclear state (as opposed to the lowest-energy ground state), for example ¹⁸⁰ᵐ₇₃Ta (tantalum-180m).
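
The convention is easy to mechanize. The sketch below is a hypothetical helper (not from the article, and assuming Unicode super- and subscript digits render acceptably) that formats isotope labels in the styles just described:

```python
# A hypothetical AZE-notation formatter; illustrative only.

SUP = str.maketrans("0123456789m", "⁰¹²³⁴⁵⁶⁷⁸⁹ᵐ")
SUB = str.maketrans("0123456789", "₀₁₂₃₄₅₆₇₈₉")

def aze(symbol: str, a, z=None) -> str:
    """Format an isotope, e.g. aze('He', 4, 2) -> '⁴₂He'.

    If z is omitted, only the mass number is shown (e.g. '⁴He'), since
    the element symbol already fixes the atomic number.
    """
    left = str(a).translate(SUP)
    if z is not None:
        left += str(z).translate(SUB)
    return left + symbol

print(aze("He", 4, 2))        # ⁴₂He
print(aze("C", 14))           # ¹⁴C
print(aze("Ta", "180m", 73))  # ¹⁸⁰ᵐ₇₃Ta, the metastable isomer
```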

The common pronunciation of the AZE notation is different from how it is written: ⁴₂He is commonly pronounced as helium-four instead of four-two-helium, and ²³⁵₉₂U as uranium two-thirty-five (American English) or uranium-two-three-five (British) instead of 235-92-uranium.

Radioactive, primordial, and stable isotopes

Some isotopes/nuclides are radioactive, and are therefore referred to as radioisotopes or radionuclides, whereas others have never been observed to decay radioactively and are referred to as stable isotopes or stable nuclides. For example, ¹⁴C is a radioactive form of carbon, whereas ¹²C and ¹³C are stable isotopes. There are about 339 naturally occurring nuclides on Earth, of which 286 are primordial nuclides, meaning that they have existed since the Solar System's formation.

Primordial nuclides include 35 nuclides with very long half-lives (over 100 million years) and 251 that are formally considered as "stable nuclides", because they have not been observed to decay. In most cases, for obvious reasons, if an element has stable isotopes, those isotopes predominate in the elemental abundance found on Earth and in the Solar System. However, in the cases of three elements (tellurium, indium, and rhenium) the most abundant isotope found in nature is actually one (or two) extremely long-lived radioisotope(s) of the element, despite these elements having one or more stable isotopes.

Theory predicts that many apparently "stable" isotopes/nuclides are radioactive, with extremely long half-lives (discounting the possibility of proton decay, which would make all nuclides ultimately unstable). Some stable nuclides are in theory energetically susceptible to other known forms of decay, such as alpha decay or double beta decay, but no decay products have yet been observed, and so these isotopes are said to be "observationally stable". The predicted half-lives for these nuclides often greatly exceed the estimated age of the universe, and in fact, there are also 31 known radionuclides (see primordial nuclide) with half-lives longer than the age of the universe.

Adding in the radioactive nuclides that have been created artificially, there are 3,339 currently known nuclides. These include 905 nuclides that are either stable or have half-lives longer than 60 minutes. See list of nuclides for details.

History

Radioactive isotopes

The existence of isotopes was first suggested in 1913 by the radiochemist Frederick Soddy, based on studies of radioactive decay chains that indicated about 40 different species referred to as radioelements (i.e. radioactive elements) between uranium and lead, although the periodic table only allowed for 11 elements between lead and uranium inclusive.

Several attempts to separate these new radioelements chemically had failed. For example, Soddy had shown in 1910 that mesothorium (later shown to be 228Ra), radium (226Ra, the longest-lived isotope), and thorium X (224Ra) are impossible to separate. Attempts to place the radioelements in the periodic table led Soddy and Kazimierz Fajans independently to propose their radioactive displacement law in 1913, to the effect that alpha decay produced an element two places to the left in the periodic table, whereas beta emission produced an element one place to the right. Soddy recognized that emission of an alpha particle followed by two beta particles led to the formation of an element chemically identical to the initial element but with a mass four units lighter and with different radioactive properties.

Soddy proposed that several types of atoms (differing in radioactive properties) could occupy the same place in the table. For example, the alpha-decay of uranium-235 forms thorium-231, whereas the beta decay of actinium-230 forms thorium-230. The term "isotope", Greek for "at the same place", was suggested to Soddy by Margaret Todd, a Scottish physician and family friend, during a conversation in which he explained his ideas to her. He received the 1921 Nobel Prize in Chemistry in part for his work on isotopes.

In the bottom right corner of J. J. Thomson's photographic plate are the separate impact marks for the two isotopes of neon: neon-20 and neon-22.

In 1914 T. W. Richards found variations between the atomic weight of lead from different mineral sources, attributable to variations in isotopic composition due to different radioactive origins.

Stable isotopes

The first evidence for multiple isotopes of a stable (non-radioactive) element was found by J. J. Thomson in 1912 as part of his exploration into the composition of canal rays (positive ions). Thomson channelled streams of neon ions through parallel magnetic and electric fields, measured their deflection by placing a photographic plate in their path, and computed their mass-to-charge ratio using a method that became known as Thomson's parabola method. Each stream created a glowing patch on the plate at the point it struck. Thomson observed two separate parabolic patches of light on the photographic plate (see image), which suggested two species of nuclei with different mass-to-charge ratios.

F. W. Aston subsequently discovered multiple stable isotopes for numerous elements using a mass spectrograph. In 1919 Aston studied neon with sufficient resolution to show that the two isotopic masses are very close to the integers 20 and 22 and that neither is equal to the known molar mass (20.2) of neon gas. This is an example of Aston's whole number rule for isotopic masses, which states that large deviations of elemental molar masses from integers are primarily due to the fact that the element is a mixture of isotopes. Aston similarly showed in 1920 that the molar mass of chlorine (35.45) is a weighted average of the almost integral masses for the two isotopes 35Cl and 37Cl.
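
Aston's chlorine example can be checked with one line of arithmetic: the elemental molar mass is the abundance-weighted average of near-integer isotopic masses. In the Python sketch below, the abundance figures are standard modern values, assumed here rather than taken from the text:

```python
# A minimal sketch of the chlorine weighted average; illustrative only.

abundances = {35: 0.7576, 37: 0.2424}  # mass number -> natural abundance

molar_mass = sum(a * frac for a, frac in abundances.items())
print(f"weighted average of mass numbers ≈ {molar_mass:.2f}")  # ≈ 35.48

# Using exact isotopic masses (≈ 34.969 and ≈ 36.966) instead of whole
# mass numbers gives ≈ 35.45, the familiar molar mass of chlorine.
```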

Variation in properties between isotopes

Chemical and molecular properties

A neutral atom has the same number of electrons as protons. Thus different isotopes of a given element all have the same number of electrons and share a similar electronic structure. Because the chemical behavior of an atom is largely determined by its electronic structure, different isotopes exhibit nearly identical chemical behavior.

The main exception to this is the kinetic isotope effect: due to their larger masses, heavier isotopes tend to react somewhat more slowly than lighter isotopes of the same element. This is most pronounced by far for protium (¹H), deuterium (²H), and tritium (³H), because deuterium has twice the mass of protium and tritium has three times the mass of protium. These mass differences also affect the behavior of their respective chemical bonds, by changing the center of gravity (reduced mass) of the atomic systems. However, for heavier elements, the relative mass difference between isotopes is much smaller, so the mass-difference effects on chemistry are usually negligible. (Heavy elements also have relatively more neutrons than lighter elements, so the ratio of the nuclear mass to the collective electronic mass is slightly greater.) There is also an equilibrium isotope effect.
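
The scale of the effect follows from the reduced mass: a bond's vibrational frequency scales roughly as 1/√μ, where μ = m₁m₂/(m₁ + m₂). The Python sketch below (illustrative, with rounded masses) shows why swapping hydrogen for deuterium shifts a C–H bond far more than any isotope swap in a heavier element could:

```python
# A minimal sketch, with rounded masses in atomic mass units; the C-H vs
# C-D comparison is illustrative, not taken from the article.
from math import sqrt

def reduced_mass(m1: float, m2: float) -> float:
    """mu = m1*m2 / (m1 + m2), the effective mass of a two-body oscillator."""
    return m1 * m2 / (m1 + m2)

mu_ch = reduced_mass(12.0, 1.0)  # carbon-hydrogen bond
mu_cd = reduced_mass(12.0, 2.0)  # carbon-deuterium bond

# Vibrational frequency scales as 1/sqrt(mu), so:
print(f"C-H vibrates ~{sqrt(mu_cd / mu_ch):.2f}x faster than C-D")  # ~1.36x
```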

Isotope half-lives. Z = number of protons. N = number of neutrons. The plot for stable isotopes diverges from the line Z = N as the element number Z becomes larger

Similarly, two molecules that differ only in the isotopes of their atoms (isotopologues) have identical electronic structures, and therefore almost indistinguishable physical and chemical properties (again with deuterium and tritium being the primary exceptions). The vibrational modes of a molecule are determined by its shape and by the masses of its constituent atoms; so different isotopologues have different sets of vibrational modes. Because vibrational modes allow a molecule to absorb photons of corresponding energies, isotopologues have different optical properties in the infrared range.

Nuclear properties and stability

Atomic nuclei consist of protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways: their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert the attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to bind into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus (see graph at right). For example, although the neutron:proton ratio of ³₂He is 1:2, the neutron:proton ratio of ²³⁸₉₂U is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (Z = N). The nuclide ⁴⁰₂₀Ca (calcium-40) is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons.

Numbers of isotopes per element

Of the 80 elements with a stable isotope, the largest number of stable isotopes observed for any element is ten (for the element tin). No element has eight or nine stable isotopes. Five elements have seven stable isotopes, eight have six, ten have five, nine have four, five have three, 16 have two (counting ¹⁸⁰ᵐTa as stable), and 26 elements have only a single stable isotope (of these, 19 are so-called mononuclidic elements, having a single primordial stable isotope that dominates and fixes the atomic weight of the natural element to high precision; 3 radioactive mononuclidic elements occur as well). In total, there are 251 nuclides that have not been observed to decay. For the 80 elements that have one or more stable isotopes, the average number of stable isotopes is therefore 251/80 ≈ 3.14 isotopes per element.

Even and odd nucleon numbers

Even/odd Z, N (¹H counted as OE)

p, n             EE    OO    EO    OE    Total
Stable          145     5    53    48     251
Long-lived       23     4     3     5      35
All primordial  168     9    56    53     286

The proton:neutron ratio is not the only factor affecting nuclear stability. Stability also depends on the evenness or oddness of the atomic number Z, of the neutron number N and, consequently, of their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd-odd nuclei generally less stable. This remarkable difference in nuclear binding energy between neighbouring nuclei, especially among odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron emission), electron capture, or other less common decay modes such as spontaneous fission and cluster decay.
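
The pairing effect can be made quantitative with the pairing term of the semi-empirical mass formula, which adds binding energy for even-even nuclei and subtracts it for odd-odd ones. A minimal Python sketch, assuming the common textbook convention δ = ±aP/√A with aP ≈ 12 MeV (a standard value, not a figure from this article):

    # Pairing term (delta) of the semi-empirical mass formula, one common convention:
    # +a_P/sqrt(A) for even-even, 0 for odd-A, -a_P/sqrt(A) for odd-odd nuclei.
    from math import sqrt

    def pairing_term_mev(Z, N, a_P=12.0):
        A = Z + N
        if Z % 2 == 0 and N % 2 == 0:
            return a_P / sqrt(A)   # both paired: extra binding
        if Z % 2 == 1 and N % 2 == 1:
            return -a_P / sqrt(A)  # both unpaired: reduced binding
        return 0.0                 # odd A: one unpaired nucleon

    print(pairing_term_mev(20, 20))  # calcium-40, even-even: about +1.9 MeV
    print(pairing_term_mev(7, 7))    # nitrogen-14, odd-odd: about -3.2 MeV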

The majority of stable nuclides are even-proton-even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton-even-neutron, and even-proton-odd-neutron nuclides. Stable odd-proton-odd-neutron nuclei are the least common.

Even atomic number

The 145 even-proton, even-neutron (EE) nuclides comprise ~58% of all stable nuclides and all have spin 0 because of pairing. There are also 23 primordial long-lived even-even nuclides (see table above). As a result, each of the 41 even-numbered elements from 2 to 82 has at least one stable isotope, and most of these elements have several primordial isotopes. Half of these even-numbered elements have six or more stable isotopes. The extreme stability of helium-4, due to a double pairing of 2 protons and 2 neutrons, prevents any nuclides containing five (⁵He, ⁵Li) or eight (⁸Be) nucleons from existing long enough to serve as platforms for the buildup of heavier elements via nuclear fusion in stars (see triple-alpha process).

Even-odd long-lived

Nuclide    Decay    Half-life
¹¹³Cd      beta     7.7×10¹⁵ a
¹⁴⁷Sm      alpha    1.06×10¹¹ a
²³⁵U       alpha    7.04×10⁸ a

53 stable nuclides have an even number of protons and an odd number of neutrons. They are a minority in comparison to the even-even isotopes, which are about three times as numerous. Among the 41 even-Z elements that have a stable nuclide, only two elements (argon and cerium) have no even-odd stable nuclides. One element (tin) has three. There are 24 elements that have one even-odd nuclide and 13 that have two. Of the 35 primordial radionuclides there exist three even-odd nuclides (see table above), including the fissile ²³⁵U. Because of their odd neutron numbers, the even-odd nuclides tend to have large neutron capture cross-sections, due to the energy that results from neutron-pairing effects. These stable even-proton odd-neutron nuclides tend to be uncommon in natural abundance, generally because, to form and persist in primordial abundance, they must have escaped capturing neutrons (which would have turned them into yet other stable even-even isotopes) during both the s-process and r-process of neutron capture in stellar nucleosynthesis. For this reason, of these nuclides only ¹⁹⁵Pt and ⁹Be are the most naturally abundant isotope of their element.

Odd atomic number

Forty-eight stable odd-proton-even-neutron nuclides, stabilized by their paired neutrons, form most of the stable isotopes of the odd-numbered elements; the very few odd-proton-odd-neutron nuclides comprise the others. There are 41 odd-numbered elements with Z = 1 through 81, of which 39 have stable isotopes (technetium (Tc, Z = 43) and promethium (Pm, Z = 61) have no stable isotopes). Of these 39 odd-Z elements, 30 (including hydrogen-1, whose 0 neutrons count as even) have one stable odd-even isotope, and nine elements (chlorine (Cl), potassium (K), copper (Cu), gallium (Ga), bromine (Br), silver (Ag), antimony (Sb), iridium (Ir), and thallium (Tl)) have two odd-even stable isotopes each. This makes a total of 30 + 2(9) = 48 stable odd-even isotopes.

There are also five primordial long-lived radioactive odd-even isotopes: ⁸⁷Rb, ¹¹⁵In, ¹⁸⁷Re, ¹⁵¹Eu, and ²⁰⁹Bi. The last two were only recently found to decay, with half-lives greater than 10¹⁸ years.

Only five stable nuclides contain both an odd number of protons and an odd number of neutrons. Four of these "odd-odd" nuclides are low-mass nuclides (²H, ⁶Li, ¹⁰B, and ¹⁴N, with spins 1, 1, 3, and 1 respectively), for which changing a proton to a neutron or vice versa would lead to a very lopsided proton-neutron ratio. The only other entirely "stable" odd-odd nuclide, ¹⁸⁰ᵐTa (spin 9), is thought to be the rarest of the 251 stable nuclides, and is the only primordial nuclear isomer; it has not yet been observed to decay despite experimental attempts.

Many odd-odd radionuclides (such as the ground state of tantalum-180) with comparatively short half-lives are known. Usually, they beta-decay to their nearby even-even isobars that have paired protons and paired neutrons. Of the nine primordial odd-odd nuclides (five stable and four radioactive with long half-lives), only ¹⁴N is the most common isotope of a common element; this is because it takes part in the CNO cycle. The nuclides ⁶Li and ¹⁰B are minority isotopes of elements that are themselves rare compared to other light elements, whereas the other six isotopes make up only a tiny percentage of the natural abundance of their elements.

Odd neutron number

Neutron number parity (¹H counted as even)

N                Even    Odd
Stable            194     58
Long-lived         27      7
All primordial    221     65

Actinides with odd neutron number are generally fissile (with thermal neutrons), whereas those with even neutron number are generally not, though they are fissionable with fast neutrons. All observationally stable odd-odd nuclides have nonzero integer spin. This is because the single unpaired neutron and unpaired proton have a larger nuclear force attraction to each other if their spins are aligned (producing a total spin of at least 1 unit), instead of anti-aligned. See deuterium for the simplest case of this nuclear behavior.

Only ¹⁹⁵Pt, ⁹Be, and ¹⁴N have odd neutron number and are the most naturally abundant isotope of their element.

Occurrence in nature

Elements are composed either of a single nuclide (mononuclidic elements) or of more than one naturally occurring isotope. Unstable (radioactive) isotopes are either primordial or post-primordial. Primordial isotopes were a product of stellar nucleosynthesis or another type of nucleosynthesis, such as cosmic ray spallation, and have persisted down to the present because their rate of decay is so slow (e.g., uranium-238 and potassium-40). Post-primordial isotopes were created by cosmic ray bombardment as cosmogenic nuclides (e.g., tritium, carbon-14), or by the decay of a radioactive primordial isotope to a radioactive radiogenic daughter nuclide (e.g., uranium to radium). A few isotopes are naturally synthesized as nucleogenic nuclides by some other natural nuclear reaction, such as when neutrons from natural nuclear fission are absorbed by another atom.

As discussed above, only 80 elements have any stable isotopes, and 26 of these have only one stable isotope. Thus, about two-thirds of stable elements occur naturally on Earth in multiple stable isotopes, with the largest number of stable isotopes for an element being ten, for tin (Sn). There are about 94 elements found naturally on Earth (up to and including plutonium), though some are detected only in very tiny amounts, such as plutonium-244. Scientists estimate that the elements that occur naturally on Earth (some only as radioisotopes) occur as 339 isotopes (nuclides) in total. Only 251 of these naturally occurring nuclides are stable, in the sense of never having been observed to decay as of the present time. An additional 35 primordial nuclides (for a total of 286 primordial nuclides) are radioactive with known half-lives, but have half-lives longer than 100 million years, allowing them to have existed since the beginning of the Solar System. See the list of nuclides for details.

All the known stable nuclides occur naturally on Earth; the other naturally occurring nuclides are radioactive but occur on Earth due to their relatively long half-lives, or else due to other means of ongoing natural production. These include the afore-mentioned cosmogenic nuclides, the nucleogenic nuclides, and any radiogenic nuclides formed by ongoing decay of a primordial radioactive nuclide, such as radon and radium from uranium.

An additional ~3000 radioactive nuclides not found in nature have been created in nuclear reactors and in particle accelerators. Many short-lived nuclides not found naturally on Earth have also been observed by spectroscopic analysis, being naturally created in stars or supernovae. An example is aluminium-26, which is not naturally found on Earth but is found in abundance on an astronomical scale.

The tabulated atomic masses of elements are averages that account for the presence of multiple isotopes with different masses. Before the discovery of isotopes, empirically determined noninteger values of atomic mass confounded scientists. For example, a sample of chlorine contains 75.8% chlorine-35 and 24.2% chlorine-37, giving an average atomic mass of 35.5 atomic mass units.

According to generally accepted cosmology theory, only isotopes of hydrogen and helium, traces of some isotopes of lithium and beryllium, and perhaps some boron, were created at the Big Bang, while all other nuclides were synthesized later, in stars and supernovae, and in interactions between energetic particles such as cosmic rays, and previously produced nuclides. (See nucleosynthesis for details of the various processes thought responsible for isotope production.) The respective abundances of isotopes on Earth result from the quantities formed by these processes, their spread through the galaxy, and the rates of decay for isotopes that are unstable. After the initial coalescence of the Solar System, isotopes were redistributed according to mass, and the isotopic composition of elements varies slightly from planet to planet. This sometimes makes it possible to trace the origin of meteorites.

Atomic mass of isotopes

The atomic mass (mᵣ) of an isotope (nuclide) is determined mainly by its mass number (i.e. the number of nucleons in its nucleus). Small corrections are due to the binding energy of the nucleus (see mass defect), the slight difference in mass between proton and neutron, and the mass of the electrons associated with the atom, the latter because the electron:nucleon ratio differs among isotopes.
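
The scale of the binding-energy correction is visible for helium-4, whose nucleus weighs measurably less than its separated nucleons. A minimal Python sketch using standard particle masses (values not given in the text):

    # Mass defect of helium-4: two protons plus two neutrons versus the bound nucleus.
    m_p, m_n = 1.007276, 1.008665  # proton and neutron masses in u
    m_he4_nucleus = 4.001506       # helium-4 nuclear mass in u
    defect = 2 * m_p + 2 * m_n - m_he4_nucleus
    print(f"mass defect:    {defect:.6f} u")              # about 0.0304 u
    print(f"binding energy: {defect * 931.494:.1f} MeV")  # 1 u = 931.494 MeV/c^2; about 28.3 MeV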

The mass number is a dimensionless quantity. The atomic mass, on the other hand, is measured using the atomic mass unit based on the mass of the carbon-12 atom. It is denoted with symbols "u" (for unified atomic mass unit) or "Da" (for dalton).

The atomic masses of the naturally occurring isotopes of an element determine the standard atomic weight of the element. When the element contains N isotopes, the expression below gives the average atomic mass:

    m = x1·m1 + x2·m2 + ... + xN·mN

where m1, m2, ..., mN are the atomic masses of each individual isotope, and x1, ..., xN are the relative abundances of these isotopes (the abundances sum to 1).
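
As a worked example of this expression, the chlorine abundances quoted earlier in this article (75.8% chlorine-35, 24.2% chlorine-37) reproduce the standard atomic weight; a minimal Python sketch, assuming the standard isotopic masses in u:

    # Standard atomic weight as an abundance-weighted mean of isotopic masses.
    masses = [34.96885, 36.96590]  # atomic masses of 35Cl and 37Cl in u
    abundances = [0.758, 0.242]    # relative abundances, summing to 1
    average = sum(x * m for x, m in zip(abundances, masses))
    print(f"average atomic mass of chlorine: {average:.2f} u")  # about 35.45 u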

Applications of isotopes

Purification of isotopes

Several applications exist that capitalize on the properties of the various isotopes of a given element. Isotope separation is a significant technological challenge, particularly with heavy elements such as uranium or plutonium. Lighter elements such as lithium, carbon, nitrogen, and oxygen are commonly separated by gas diffusion of their compounds such as CO and NO. The separation of hydrogen and deuterium is unusual because it is based on chemical rather than physical properties, for example in the Girdler sulfide process. Uranium isotopes have been separated in bulk by gas diffusion, gas centrifugation, laser ionization separation, and (in the Manhattan Project) by a type of production mass spectrometry.
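
To see why bulk separation of heavy-element isotopes is such a challenge, Graham's law gives the ideal single-stage enrichment of gaseous diffusion as the square root of the molecular mass ratio. A minimal Python sketch for uranium hexafluoride, the process gas used for uranium; the conclusion that cascades of many stages are needed is a standard textbook point, not a figure from this article:

    # Ideal single-stage separation factor for gaseous diffusion (Graham's law):
    # the lighter molecule effuses faster by sqrt(m_heavy / m_light).
    from math import sqrt

    m_F = 18.998                    # fluorine atomic mass in u
    m_uf6_235 = 235.044 + 6 * m_F   # 235UF6, about 349.0 u
    m_uf6_238 = 238.051 + 6 * m_F   # 238UF6, about 352.0 u
    alpha = sqrt(m_uf6_238 / m_uf6_235)
    print(f"single-stage factor: {alpha:.5f}")  # about 1.0043, hence thousands of stages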

Use of chemical and biological properties

  • Isotope analysis is the determination of isotopic signature, the relative abundances of isotopes of a given element in a particular sample. Isotope analysis is frequently done by isotope ratio mass spectrometry. For biogenic substances in particular, significant variations of isotopes of C, N, and O can occur. Analysis of such variations has a wide range of applications, such as the detection of adulteration in food products or the geographic origins of products using isoscapes. The identification of certain meteorites as having originated on Mars is based in part upon the isotopic signature of trace gases contained in them.
  • Isotopic substitution can be used to determine the mechanism of a chemical reaction via the kinetic isotope effect.
  • Another common application is isotopic labeling, the use of unusual isotopes as tracers or markers in chemical reactions. Normally, atoms of a given element are indistinguishable from each other. However, by using isotopes of different masses, even different nonradioactive stable isotopes can be distinguished by mass spectrometry or infrared spectroscopy. For example, in 'stable isotope labeling with amino acids in cell culture (SILAC)' stable isotopes are used to quantify proteins. If radioactive isotopes are used, they can be detected by the radiation they emit (this is called radioisotopic labeling).
  • Isotopes are commonly used to determine the concentration of various elements or substances using the isotope dilution method, whereby known amounts of isotopically substituted compounds are mixed with the samples and the isotopic signatures of the resulting mixtures are determined with mass spectrometry (a minimal worked example follows this list).
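
The mass balance behind isotope dilution is simplest in its classical radiotracer form: a spike of known mass and specific activity is diluted by the unknown amount of unlabeled analyte. A minimal Python sketch; the symbols and numerical values are illustrative assumptions, not data from this article:

    # Simple (radiotracer) isotope dilution. Add a spike of mass m_s with
    # specific activity S_s; after mixing, the measured specific activity S_m
    # satisfies S_m = m_s * S_s / (m_s + m_x), so m_x = m_s * (S_s / S_m - 1).
    m_s = 1.0     # g of labeled compound added (assumed)
    S_s = 5000.0  # specific activity of the spike, counts/min/g (assumed)
    S_m = 250.0   # specific activity measured after mixing (assumed)
    m_x = m_s * (S_s / S_m - 1)
    print(f"unknown analyte mass: {m_x:.1f} g")  # 19.0 g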

Use of nuclear properties

  • A technique similar to radioisotopic labeling is radiometric dating: using the known half-life of an unstable element, one can calculate the amount of time that has elapsed since a known concentration of the isotope existed. The most widely known example is radiocarbon dating, used to determine the age of carbonaceous materials (a worked example follows this list).
  • Several forms of spectroscopy rely on the unique nuclear properties of specific isotopes, both radioactive and stable. For example, nuclear magnetic resonance (NMR) spectroscopy can be used only for isotopes with a nonzero nuclear spin. The most common nuclides used in NMR spectroscopy are ¹H, ²H, ¹⁵N, ¹³C, and ³¹P.
  • Mössbauer spectroscopy also relies on the nuclear transitions of specific isotopes, such as ⁵⁷Fe.
  • Radionuclides also have important uses. Nuclear power and nuclear weapons development require relatively large quantities of specific isotopes. Nuclear medicine and radiation oncology utilize radioisotopes respectively for medical diagnosis and treatment.
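
For the radiometric-dating item above, the elapsed time follows from the exponential decay law N(t) = N0 · 2^(-t/t_half), so t = t_half · log2(N0/N). A minimal Python sketch for radiocarbon, assuming the standard 5730-year half-life and an illustrative surviving fraction:

    # Radiocarbon age from the surviving fraction of carbon-14.
    # N(t) = N0 * 2**(-t / t_half)  =>  t = t_half * log2(N0 / N)
    from math import log2

    t_half = 5730.0            # carbon-14 half-life in years
    fraction_remaining = 0.25  # assumed measurement: one quarter of the original 14C
    age = t_half * log2(1 / fraction_remaining)
    print(f"estimated age: {age:.0f} years")  # two half-lives: 11460 years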
