
Sunday, May 28, 2023

Semiconductor

From Wikipedia, the free encyclopedia

A semiconductor is a material which has an electrical conductivity value falling between that of a conductor, such as copper, and an insulator, such as glass. Its resistivity falls as its temperature rises; metals behave in the opposite way. Its conducting properties may be altered in useful ways by introducing impurities ("doping") into the crystal structure. When two differently doped regions exist in the same crystal, a semiconductor junction is created. The behavior of charge carriers, which include electrons, ions, and electron holes, at these junctions is the basis of diodes, transistors, and most modern electronics. Some examples of semiconductors are silicon, germanium, gallium arsenide, and elements near the so-called "metalloid staircase" on the periodic table. After silicon, gallium arsenide is the second-most common semiconductor and is used in laser diodes, solar cells, microwave-frequency integrated circuits, and others. Silicon is a critical element for fabricating most electronic circuits.

Semiconductor devices can display a range of different useful properties, such as passing current more easily in one direction than the other, showing variable resistance, and having sensitivity to light or heat. Because the electrical properties of a semiconductor material can be modified by doping and by the application of electrical fields or light, devices made from semiconductors can be used for amplification, switching, and energy conversion.

The conductivity of silicon is increased by adding a small amount (of the order of 1 in 10⁸) of pentavalent (antimony, phosphorus, or arsenic) or trivalent (boron, gallium, indium) atoms. This process is known as doping, and the resulting semiconductors are known as doped or extrinsic semiconductors. Apart from doping, the conductivity of a semiconductor can be improved by increasing its temperature. This is contrary to the behavior of a metal, in which conductivity decreases with an increase in temperature.

The modern understanding of the properties of a semiconductor relies on quantum physics to explain the movement of charge carriers in a crystal lattice. Doping greatly increases the number of charge carriers within the crystal. When a doped semiconductor contains free holes, it is called "p-type", and when it contains free electrons, it is known as "n-type". The semiconductor materials used in electronic devices are doped under precise conditions to control the concentration and regions of p- and n-type dopants. A single semiconductor device crystal can have many p- and n-type regions; the p–n junctions between these regions are responsible for the useful electronic behavior. Using a hot-point probe, one can determine quickly whether a semiconductor sample is p- or n-type.

A few of the properties of semiconductor materials were observed throughout the mid-19th and first decades of the 20th century. The first practical application of semiconductors in electronics was the 1904 development of the cat's-whisker detector, a primitive semiconductor diode used in early radio receivers. Developments in quantum physics led in turn to the invention of the transistor in 1947 and the integrated circuit in 1958.

Properties

Variable electrical conductivity

Semiconductors in their natural state are poor conductors because a current requires the flow of electrons, and semiconductors have their valence bands filled, preventing the flow of new electrons. Several developed techniques allow semiconducting materials to behave like conducting materials, such as doping or gating. These modifications have two outcomes: n-type and p-type. These refer to the excess or shortage of electrons, respectively. An unbalanced number of electrons would cause a current to flow through the material.

Heterojunctions

Heterojunctions occur when two differently doped semiconducting materials are joined. For example, a configuration could consist of p-doped and n-doped germanium. This results in an exchange of electrons and holes between the differently doped semiconducting materials. The n-doped germanium would have an excess of electrons, and the p-doped germanium would have an excess of holes. The transfer occurs until an equilibrium is reached by a process called recombination, which causes the migrating electrons from the n-type to come in contact with the migrating holes from the p-type. The result of this process is a narrow strip of immobile ions, which causes an electric field across the junction.
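To make the built-in field concrete, the standard step-junction formulas give the built-in potential, V_bi = (kT/q)·ln(Na·Nd/ni²), and the width of that strip of immobile ions. The short Python sketch below is only illustrative: the intrinsic carrier density is the germanium figure quoted later in this article, while the doping densities and permittivity are assumed example values, not numbers from the text.

import math

k_B  = 1.380649e-23      # Boltzmann constant, J/K
q    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

T   = 293.0              # about 20 C, in kelvin
ni  = 2.5e13 * 1e6       # intrinsic carriers in germanium (figure from this article), m^-3
Na  = 1e16 * 1e6         # acceptor density on the p side (assumed), m^-3
Nd  = 1e16 * 1e6         # donor density on the n side (assumed), m^-3
eps = 16.0 * eps0        # static permittivity of germanium

# Built-in potential set up by the exchange of electrons and holes
V_bi = (k_B * T / q) * math.log(Na * Nd / ni**2)

# Width of the depletion region (the narrow strip of immobile ions)
W = math.sqrt(2 * eps * V_bi / q * (1 / Na + 1 / Nd))

print(f"built-in potential ~ {V_bi:.2f} V")
print(f"depletion width    ~ {W * 1e6:.2f} micrometers")

With these assumed inputs the built-in potential comes out near 0.3 V and the depletion region near a third of a micrometer; real junctions vary with the doping levels and material.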

Excited electrons

A difference in electric potential on a semiconducting material would cause it to leave thermal equilibrium and create a non-equilibrium situation. This introduces electrons and holes to the system, which interact via a process called ambipolar diffusion. Whenever thermal equilibrium is disturbed in a semiconducting material, the number of holes and electrons changes. Such disruptions can occur as a result of a temperature difference or photons, which can enter the system and create electrons and holes. The processes that create and annihilate electrons and holes are called generation and recombination, respectively.

Light emission

In certain semiconductors, excited electrons can relax by emitting light instead of producing heat. These semiconductors are used in the construction of light-emitting diodes and fluorescent quantum dots.
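The color of the emitted light is set by the band gap: the photon released when an electron relaxes across the gap carries roughly the gap energy, so its wavelength is about hc/Eg. A short Python sketch, using commonly quoted room-temperature band gaps purely as illustrative inputs (not values taken from this article):

# Approximate photon wavelength from a band gap: lambda ~ h*c / Eg.
h  = 6.62607015e-34    # Planck constant, J*s
c  = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19   # joules per electron-volt

# Commonly quoted room-temperature band gaps, used only for illustration.
# (Silicon has an indirect gap and therefore emits light very inefficiently.)
band_gaps_eV = {"GaAs": 1.42, "GaN": 3.4, "Si": 1.12}

for material, Eg in band_gaps_eV.items():
    wavelength_nm = h * c / (Eg * eV) * 1e9
    print(f"{material}: Eg = {Eg:.2f} eV -> ~{wavelength_nm:.0f} nm")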

High thermal conductivity

Semiconductors with high thermal conductivity can be used for heat dissipation and improving thermal management of electronics.

Thermal energy conversion

Semiconductors have large thermoelectric power factors, making them useful in thermoelectric generators, as well as high thermoelectric figures of merit, making them useful in thermoelectric coolers.
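As a rough illustration of the two quantities mentioned here, the thermoelectric power factor is S²σ and the dimensionless figure of merit is zT = S²σT/κ. The Python sketch below uses assumed, bismuth-telluride-like room-temperature values, not data from this article:

# Thermoelectric power factor and figure of merit (illustrative numbers only).
S     = 200e-6   # Seebeck coefficient, V/K (assumed)
sigma = 1.0e5    # electrical conductivity, S/m (assumed)
kappa = 1.5      # thermal conductivity, W/(m*K) (assumed)
T     = 300.0    # absolute temperature, K

power_factor = S**2 * sigma       # W/(m*K^2), the quantity relevant for generators
zT = power_factor * T / kappa     # dimensionless figure of merit, relevant for coolers

print(f"power factor S^2*sigma = {power_factor * 1e3:.2f} mW/(m*K^2)")
print(f"figure of merit zT     = {zT:.2f}")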

Materials

Silicon crystals are the most common semiconducting materials used in microelectronics and photovoltaics.

A large number of elements and compounds have semiconducting properties.

The most common semiconducting materials are crystalline solids, but amorphous and liquid semiconductors are also known. These include hydrogenated amorphous silicon and mixtures of arsenic, selenium, and tellurium in a variety of proportions. These compounds share with better-known semiconductors the properties of intermediate conductivity and a rapid variation of conductivity with temperature, as well as occasional negative resistance. Such disordered materials lack the rigid crystalline structure of conventional semiconductors such as silicon. They are generally used in thin film structures, which do not require material of higher electronic quality, being relatively insensitive to impurities and radiation damage.

Preparation of semiconductor materials

Almost all of today's electronic technology involves the use of semiconductors, with the most important aspect being the integrated circuit (IC), which is found in desktops, laptops, scanners, cell phones, and other electronic devices. Semiconductors for ICs are mass-produced. To create an ideal semiconducting material, chemical purity is paramount. Any small imperfection can have a drastic effect on how the semiconducting material behaves because of the scale at which the materials are used.

A high degree of crystalline perfection is also required, since faults in the crystal structure (such as dislocations, twins, and stacking faults) interfere with the semiconducting properties of the material. Crystalline faults are a major cause of defective semiconductor devices. The larger the crystal, the more difficult it is to achieve the necessary perfection. Current mass production processes use crystal ingots between 100 and 300 mm (3.9 and 11.8 in) in diameter, grown as cylinders and sliced into wafers.

A combination of processes is used to prepare semiconducting materials for ICs. One process is called thermal oxidation, which forms silicon dioxide on the surface of the silicon. This is used as a gate insulator and field oxide. Other steps are photomasking and photolithography, which create the patterns of the circuit on the integrated circuit. Ultraviolet light is used along with a photoresist layer to create a chemical change that generates the patterns for the circuit.

Etching is the next required process. The part of the silicon that was not covered by the photoresist layer from the previous step can now be etched. The main process typically used today is called plasma etching. Plasma etching usually involves an etch gas pumped into a low-pressure chamber to create plasma. A common etch gas is a chlorofluorocarbon, more commonly known as Freon. A high radio-frequency voltage between the cathode and anode creates the plasma in the chamber. The silicon wafer is located on the cathode, which causes it to be hit by the positively charged ions released from the plasma. The result is silicon that is etched anisotropically.

The last process is called diffusion. This is the process that gives the semiconducting material its desired semiconducting properties. It is also known as doping. The process introduces impurity atoms into the system, which creates the p–n junction. To get the impurity atoms embedded in the silicon wafer, the wafer is first put in a chamber at 1,100 °C. The atoms are injected and eventually diffuse into the silicon. After the process is completed and the silicon has reached room temperature, the doping process is done and the semiconducting material is ready to be used in an integrated circuit.
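For a constant surface source, the resulting dopant profile is commonly modeled as C(x, t) = Cs·erfc(x / (2√(Dt))). The Python sketch below is a minimal illustration; the diffusivity, time, and surface concentration are assumed order-of-magnitude values for a dopant in silicon near 1,100 °C, not process data from the text:

import math

# Constant-surface-source diffusion: C(x, t) = Cs * erfc(x / (2*sqrt(D*t))).
D  = 2e-13     # dopant diffusivity, cm^2/s (assumed order of magnitude near 1,100 C)
t  = 3600.0    # diffusion time, s (assumed: one hour)
Cs = 1e20      # surface concentration, atoms/cm^3 (assumed)

L = 2 * math.sqrt(D * t)   # characteristic diffusion length, cm

for depth_um in (0.0, 0.2, 0.5, 1.0):
    x_cm = depth_um * 1e-4                  # micrometers -> centimeters
    concentration = Cs * math.erfc(x_cm / L)
    print(f"depth {depth_um:3.1f} um : ~{concentration:.2e} atoms/cm^3")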

Physics of semiconductors

Energy bands and electrical conduction

Filling of the electronic states in various types of materials at equilibrium. Here, height is energy while width is the density of available states for a certain energy in the material listed. The shade follows the Fermi–Dirac distribution (black: all states filled, white: no state filled). In metals and semimetals the Fermi level EF lies inside at least one band. In insulators and semiconductors the Fermi level is inside a band gap; however, in semiconductors the bands are near enough to the Fermi level to be thermally populated with electrons or holes.

Semiconductors are defined by their unique electric conductive behavior, somewhere between that of a conductor and an insulator. The differences between these materials can be understood in terms of the quantum states for electrons, each of which may contain zero or one electron (by the Pauli exclusion principle). These states are associated with the electronic band structure of the material. Electrical conductivity arises due to the presence of electrons in states that are delocalized (extending through the material); however, in order to transport electrons, a state must be partially filled, containing an electron only part of the time. If the state is always occupied with an electron, then it is inert, blocking the passage of other electrons via that state. The energies of these quantum states are critical since a state is partially filled only if its energy is near the Fermi level (see Fermi–Dirac statistics).

High conductivity in a material comes from it having many partially filled states and much state delocalization. Metals are good electrical conductors and have many partially filled states with energies near their Fermi level. Insulators, by contrast, have few partially filled states; their Fermi levels sit within band gaps with few energy states to occupy. Importantly, an insulator can be made to conduct by increasing its temperature: heating provides energy to promote some electrons across the band gap, inducing partially filled states in both the band of states beneath the band gap (valence band) and the band of states above the band gap (conduction band). An (intrinsic) semiconductor has a band gap that is smaller than that of an insulator, and at room temperature significant numbers of electrons can be excited to cross the band gap.
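The thermal population mentioned here follows the Fermi–Dirac distribution, f(E) = 1/(exp((E − EF)/kT) + 1). The short Python sketch below evaluates it at the band edge for a semiconductor-like and an insulator-like gap, assuming purely for illustration that the Fermi level sits mid-gap:

import math

# Fermi-Dirac occupation probability of a state at energy E above the Fermi level.
k_B_eV = 8.617333262e-5   # Boltzmann constant, eV/K

def fermi_dirac(E_minus_EF_eV: float, T: float) -> float:
    return 1.0 / (math.exp(E_minus_EF_eV / (k_B_eV * T)) + 1.0)

T = 300.0   # room temperature, K
# Assume the Fermi level sits mid-gap, so the conduction-band edge is Eg/2 above it.
for name, Eg in [("semiconductor (Si-like, 1.12 eV gap)", 1.12),
                 ("insulator (5 eV gap)", 5.0)]:
    occupancy = fermi_dirac(Eg / 2.0, T)
    print(f"{name}: occupation of a band-edge state ~ {occupancy:.1e}")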

A pure semiconductor, however, is not very useful, as it is neither a very good insulator nor a very good conductor. However, one important feature of semiconductors (and some insulators, known as semi-insulators) is that their conductivity can be increased and controlled by doping with impurities and gating with electric fields. Doping and gating move either the conduction or valence band much closer to the Fermi level and greatly increase the number of partially filled states.

Some wider-bandgap semiconductor materials are sometimes referred to as semi-insulators. When undoped, these have electrical conductivity nearer to that of electrical insulators; however, they can be doped (making them as useful as semiconductors). Semi-insulators find niche applications in microelectronics, such as substrates for HEMTs (high-electron-mobility transistors). An example of a common semi-insulator is gallium arsenide. Some materials, such as titanium dioxide, can even be used as insulating materials for some applications, while being treated as wide-gap semiconductors for other applications.

Charge carriers (electrons and holes)

The partial filling of the states at the bottom of the conduction band can be understood as adding electrons to that band. The electrons do not stay indefinitely (due to the natural thermal recombination) but they can move around for some time. The actual concentration of electrons is typically very dilute, and so (unlike in metals) it is possible to think of the electrons in the conduction band of a semiconductor as a sort of classical ideal gas, where the electrons fly around freely without being subject to the Pauli exclusion principle. In most semiconductors, the conduction bands have a parabolic dispersion relation, and so these electrons respond to forces (electric field, magnetic field, etc.) much as they would in a vacuum, though with a different effective mass. Because the electrons behave like an ideal gas, one may also think about conduction in very simplistic terms such as the Drude model, and introduce concepts such as electron mobility.
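In this Drude-style picture, the drift velocity is μE and the conductivity is nqμ. A short Python sketch with illustrative inputs; the carrier density and field are assumptions, and the mobility is a commonly quoted room-temperature value for electrons in silicon:

# Drude-style estimate of conduction-band electron transport (illustrative numbers).
q  = 1.602176634e-19   # elementary charge, C
mu = 1400.0            # electron mobility, cm^2/(V*s), commonly quoted for silicon
n  = 1e16              # conduction-band electron density, cm^-3 (assumed)
E  = 1e3               # applied electric field, V/cm (assumed)

drift_velocity = mu * E         # cm/s
conductivity   = n * q * mu     # S/cm
resistivity    = 1.0 / conductivity

print(f"drift velocity ~ {drift_velocity:.2e} cm/s")
print(f"conductivity   ~ {conductivity:.2f} S/cm")
print(f"resistivity    ~ {resistivity:.2f} ohm*cm")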

For partial filling at the top of the valence band, it is helpful to introduce the concept of an electron hole. Although the electrons in the valence band are always moving around, a completely full valence band is inert, not conducting any current. If an electron is taken out of the valence band, then the trajectory that the electron would normally have taken is now missing its charge. For the purposes of electric current, this combination of the full valence band, minus the electron, can be converted into a picture of a completely empty band containing a positively charged particle that moves in the same way as the electron. Combined with the negative effective mass of the electrons at the top of the valence band, we arrive at a picture of a positively charged particle that responds to electric and magnetic fields just as a normal positively charged particle would do in a vacuum, again with some positive effective mass.[12] This particle is called a hole, and the collection of holes in the valence band can again be understood in simple classical terms (as with the electrons in the conduction band).

Carrier generation and recombination

When ionizing radiation strikes a semiconductor, it may excite an electron out of its energy level and consequently leave a hole. This process is known as electron-hole pair generation. Electron-hole pairs are constantly generated from thermal energy as well, in the absence of any external energy source.

Electron-hole pairs are also apt to recombine. Conservation of energy demands that these recombination events, in which an electron loses an amount of energy larger than the band gap, be accompanied by the emission of thermal energy (in the form of phonons) or radiation (in the form of photons).

In the steady state, the generation and recombination of electron-hole pairs are in equipoise. The number of electron-hole pairs in the steady state at a given temperature is determined by quantum statistical mechanics. The precise quantum mechanical mechanisms of generation and recombination are governed by the conservation of energy and conservation of momentum.

As the probability that electrons and holes meet together is proportional to the product of their numbers, the product is in the steady state nearly constant at a given temperature, provided that there is no significant electric field (which might "flush" carriers of both types, or move them from neighboring regions containing more of them to meet together) or externally driven pair generation. The product is a function of the temperature, as the probability of getting enough thermal energy to produce a pair increases with temperature, being approximately exp(−EG/kT), where k is Boltzmann's constant, T is the absolute temperature, and EG is the band gap.
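A short Python sketch of the exp(−EG/kT) dependence quoted above, using a silicon-like band gap and ignoring prefactors, so only the ratios between temperatures are meaningful:

import math

# Temperature dependence of the electron-hole product, using the
# approximation n*p ~ exp(-EG / (k*T)) from the paragraph above.
k_B_eV = 8.617333262e-5   # Boltzmann constant, eV/K
EG     = 1.12             # band gap, eV (silicon-like, illustrative)

def np_product_relative(T: float, T_ref: float = 300.0) -> float:
    """n*p at temperature T, relative to its value at T_ref (prefactors cancel)."""
    return math.exp(-EG / (k_B_eV * T)) / math.exp(-EG / (k_B_eV * T_ref))

for T in (250.0, 300.0, 350.0, 400.0):
    print(f"T = {T:5.1f} K : n*p relative to 300 K ~ {np_product_relative(T):.2e}")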

The probability of meeting is increased by carrier traps – impurities or dislocations which can trap an electron or hole and hold it until a pair is completed. Such carrier traps are sometimes purposely added to reduce the time needed to reach the steady-state.

Doping

The conductivity of semiconductors may easily be modified by introducing impurities into their crystal lattice. The process of adding controlled impurities to a semiconductor is known as doping. The amount of impurity, or dopant, added to an intrinsic (pure) semiconductor varies its level of conductivity. Doped semiconductors are referred to as extrinsic. By adding impurity to the pure semiconductors, the electrical conductivity may be varied by factors of thousands or millions.

A 1 cm³ specimen of a metal or semiconductor has on the order of 10²² atoms. In a metal, every atom donates at least one free electron for conduction, thus 1 cm³ of metal contains on the order of 10²² free electrons, whereas a 1 cm³ sample of pure germanium at 20 °C contains about 4.2×10²² atoms, but only 2.5×10¹³ free electrons and 2.5×10¹³ holes. The addition of 0.001% of arsenic (an impurity) donates an extra 10¹⁷ free electrons in the same volume and the electrical conductivity is increased by a factor of 10,000.
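Those figures can be cross-checked to order of magnitude with a few lines of Python; mobility differences between electrons and holes are ignored, so the final ratio is only indicative:

# Order-of-magnitude check of the doping figures above (germanium at 20 C).
atoms_per_cm3       = 4.2e22          # germanium atoms in 1 cm^3
intrinsic_electrons = 2.5e13          # free electrons in pure germanium, per cm^3
arsenic_fraction    = 0.001 / 100.0   # 0.001 % arsenic impurity

donor_atoms = atoms_per_cm3 * arsenic_fraction   # each donates ~1 free electron
increase    = donor_atoms / intrinsic_electrons  # ignores mobility differences

print(f"donor atoms per cm^3   ~ {donor_atoms:.1e}  (order 10^17)")
print(f"free-electron increase ~ {increase:.1e}x (order 10^4, roughly matching the quoted conductivity increase)")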

The materials chosen as suitable dopants depend on the atomic properties of both the dopant and the material to be doped. In general, dopants that produce the desired controlled changes are classified as either electron acceptors or donors. Semiconductors doped with donor impurities are called n-type, while those doped with acceptor impurities are known as p-type. The n and p type designations indicate which charge carrier acts as the material's majority carrier. The opposite carrier is called the minority carrier, which exists due to thermal excitation at a much lower concentration compared to the majority carrier.

For example, the pure semiconductor silicon has four valence electrons that bond each silicon atom to its neighbors. In silicon, the most common dopants are group III and group V elements. Group III elements all contain three valence electrons, causing them to function as acceptors when used to dope silicon. When an acceptor atom replaces a silicon atom in the crystal, a vacant state (an electron "hole") is created, which can move around the lattice and function as a charge carrier. Group V elements have five valence electrons, which allows them to act as a donor; substitution of these atoms for silicon creates an extra free electron. Therefore, a silicon crystal doped with boron creates a p-type semiconductor whereas one doped with phosphorus results in an n-type material.

During manufacture, dopants can be diffused into the semiconductor body by contact with gaseous compounds of the desired element, or ion implantation can be used to accurately position the doped regions.

Amorphous semiconductors

Some materials, when rapidly cooled to a glassy amorphous state, have semiconducting properties. These include B, Si, Ge, Se, and Te, and there are multiple theories to explain them.

Early history of semiconductors

The history of the understanding of semiconductors begins with experiments on the electrical properties of materials. The properties of the time-temperature coefficient of resistance, rectification, and light-sensitivity were observed starting in the early 19th century.

Thomas Johann Seebeck was the first to notice an effect due to semiconductors, in 1821. In 1833, Michael Faraday reported that the resistance of specimens of silver sulfide decreases when they are heated. This is contrary to the behavior of metallic substances such as copper. In 1839, Alexandre Edmond Becquerel reported observation of a voltage between a solid and a liquid electrolyte when struck by light, the photovoltaic effect. In 1873, Willoughby Smith observed that selenium resistors exhibit decreasing resistance when light falls on them. In 1874, Karl Ferdinand Braun observed conduction and rectification in metallic sulfides, although this effect had been discovered much earlier by Peter Munck af Rosenschold, writing for the Annalen der Physik und Chemie in 1835, and Arthur Schuster found that a copper oxide layer on wires has rectification properties that cease when the wires are cleaned. William Grylls Adams and Richard Evans Day observed the photovoltaic effect in selenium in 1876.

A unified explanation of these phenomena required a theory of solid-state physics, which developed greatly in the first half of the 20th century. In 1878 Edwin Herbert Hall demonstrated the deflection of flowing charge carriers by an applied magnetic field, the Hall effect. The discovery of the electron by J.J. Thomson in 1897 prompted theories of electron-based conduction in solids. Karl Baedeker, by observing a Hall effect with the reverse sign to that in metals, theorized that copper iodide had positive charge carriers. Johan Koenigsberger classified solid materials as metals, insulators, and "variable conductors" in 1914, although his student Josef Weiss had already introduced the term Halbleiter (a semiconductor in the modern meaning) in his Ph.D. thesis in 1910. Felix Bloch published a theory of the movement of electrons through atomic lattices in 1928. In 1930, B. Gudden stated that conductivity in semiconductors was due to minor concentrations of impurities. By 1931, the band theory of conduction had been established by Alan Herries Wilson, and the concept of band gaps had been developed. Walter H. Schottky and Nevill Francis Mott developed models of the potential barrier and of the characteristics of a metal–semiconductor junction. By 1938, Boris Davydov had developed a theory of the copper-oxide rectifier, identifying the effect of the p–n junction and the importance of minority carriers and surface states.

Agreement between theoretical predictions (based on developing quantum mechanics) and experimental results was sometimes poor. This was later explained by John Bardeen as due to the extreme "structure sensitive" behavior of semiconductors, whose properties change dramatically based on tiny amounts of impurities. Commercially pure materials of the 1920s containing varying proportions of trace contaminants produced differing experimental results. This spurred the development of improved material refining techniques, culminating in modern semiconductor refineries producing materials with parts-per-trillion purity.

Devices using semiconductors were at first constructed based on empirical knowledge before semiconductor theory provided a guide to the construction of more capable and reliable devices.

Alexander Graham Bell used the light-sensitive property of selenium to transmit sound over a beam of light in 1880. A working solar cell, of low efficiency, was constructed by Charles Fritts in 1883, using a metal plate coated with selenium and a thin layer of gold; the device became commercially useful in photographic light meters in the 1930s. Point-contact microwave detector rectifiers made of lead sulfide were used by Jagadish Chandra Bose in 1904; the cat's-whisker detector using natural galena or other materials became a common device in the development of radio. However, it was somewhat unpredictable in operation and required manual adjustment for best performance. In 1906, H.J. Round observed light emission when electric current passed through silicon carbide crystals, the principle behind the light-emitting diode. Oleg Losev observed similar light emission in 1922, but at the time the effect had no practical use. Power rectifiers, using copper oxide and selenium, were developed in the 1920s and became commercially important as an alternative to vacuum tube rectifiers.

The first semiconductor devices used galena, including German physicist Ferdinand Braun's crystal detector in 1874 and Indian physicist Jagadish Chandra Bose's radio crystal detector in 1901.

In the years preceding World War II, infrared detection and communications devices prompted research into lead-sulfide and lead-selenide materials. These devices were used for detecting ships and aircraft, for infrared rangefinders, and for voice communication systems. The point-contact crystal detector became vital for microwave radio systems since available vacuum tube devices could not serve as detectors above about 4000 MHz; advanced radar systems relied on the fast response of crystal detectors. Considerable research and development of silicon materials occurred during the war to develop detectors of consistent quality.

Early transistors

Detector and power rectifiers could not amplify a signal. Many efforts were made to develop a solid-state amplifier, eventually succeeding with a device called the point-contact transistor, which could amplify by 20 dB or more. In 1922, Oleg Losev developed two-terminal, negative resistance amplifiers for radio, but he perished in the Siege of Leningrad. In 1926, Julius Edgar Lilienfeld patented a device resembling a field-effect transistor, but it was not practical. R. Hilsch and R. W. Pohl in 1938 demonstrated a solid-state amplifier using a structure resembling the control grid of a vacuum tube; although the device displayed power gain, it had a cut-off frequency of one cycle per second, too low for any practical applications, but an effective application of the available theory. At Bell Labs, William Shockley and A. Holden started investigating solid-state amplifiers in 1938. The first p–n junction in silicon was observed by Russell Ohl about 1941 when a specimen was found to be light-sensitive, with a sharp boundary between p-type impurity at one end and n-type at the other. A slice cut from the specimen at the p–n boundary developed a voltage when exposed to light.

The first working transistor was a point-contact transistor invented by John Bardeen, Walter Houser Brattain, and William Shockley at Bell Labs in 1947. Shockley had earlier theorized a field-effect amplifier made from germanium and silicon, but he failed to build such a working device before germanium was eventually used to invent the point-contact transistor. In France, during the war, Herbert Mataré had observed amplification between adjacent point contacts on a germanium base. After the war, Mataré's group announced their "Transistron" amplifier only shortly after Bell Labs announced the "transistor".

In 1954, physical chemist Morris Tanenbaum fabricated the first silicon junction transistor at Bell Labs. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.

Logical positivism

From Wikipedia, the free encyclopedia

Logical positivism, later called logical empiricism (the two together are also known as neopositivism), is a movement whose central thesis was the verification principle (also known as the verifiability criterion of meaning). This theory of knowledge asserted that only statements verifiable through direct observation or logical proof are meaningful in terms of conveying truth value, information or factual content. Starting in the late 1920s, groups of philosophers, scientists, and mathematicians formed the Berlin Circle and the Vienna Circle, which, in these two cities, would propound the ideas of logical positivism.

Flourishing in several European centres through the 1930s, the movement sought to prevent confusion rooted in unclear language and unverifiable claims by converting philosophy into "scientific philosophy", which, according to the logical positivists, ought to share the bases and structures of empirical sciences' best examples, such as Albert Einstein's general theory of relativity. Despite its ambition to overhaul philosophy by studying and mimicking the extant conduct of empirical science, logical positivism became erroneously stereotyped as a movement to regulate the scientific process and to place strict standards on it.

After World War II, the movement shifted to a milder variant, logical empiricism, led mainly by Carl Hempel, who, during the rise of Nazism, had immigrated to the United States. In the ensuing years, the movement's central premises, still unresolved, were heavily criticised by leading philosophers, particularly Willard van Orman Quine and Karl Popper, and even, within the movement itself, by Hempel. The 1962 publication of Thomas Kuhn's landmark book The Structure of Scientific Revolutions dramatically shifted academic philosophy's focus. In 1967 philosopher John Passmore pronounced logical positivism "dead, or as dead as a philosophical movement ever becomes".

Origins

Logical positivists picked from Ludwig Wittgenstein's early philosophy of language the verifiability principle or criterion of meaningfulness. As in Ernst Mach's phenomenalism, whereby the mind knows only actual or potential sensory experience, verificationists took all sciences' basic content to be only sensory experience. And some influence came from Percy Bridgman's musings that others proclaimed as operationalism, whereby a physical theory is understood by what laboratory procedures scientists perform to test its predictions. In verificationism, only the verifiable was scientific, and thus meaningful (or cognitively meaningful), whereas the unverifiable, being unscientific, consisted of meaningless "pseudostatements" (just emotively meaningful). Unscientific discourse, as in ethics and metaphysics, would be unfit for discourse by philosophers, newly tasked to organize knowledge, not develop new knowledge.

Definitions

Logical positivism is sometimes stereotyped as forbidding talk of unobservables, such as microscopic entities or such notions as causality and general principles, but that is an exaggeration. Rather, most neopositivists viewed talk of unobservables as metaphorical or elliptical: direct observations phrased abstractly or indirectly. So theoretical terms would garner meaning from observational terms via correspondence rules, and thereby theoretical laws would be reduced to empirical laws. Via Bertrand Russell's logicism, reducing mathematics to logic, physics' mathematical formulas would be converted to symbolic logic. Via Russell's logical atomism, ordinary language would break into discrete units of meaning. Rational reconstruction, then, would convert ordinary statements into standardized equivalents, all networked and united by a logical syntax. A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth.

Development

In the late 1930s, logical positivists fled Germany and Austria for Britain and the United States. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, whereby science's content is not actual or potential sensations but instead publicly observable entities. Rudolf Carnap, who had sparked logical positivism in the Vienna Circle, had sought to replace verification simply with confirmation. With World War II's close in 1945, logical positivism became milder, logical empiricism, led largely by Carl Hempel, in America, who expounded the covering law model of scientific explanation. Logical positivism became a major underpinning of analytic philosophy and dominated philosophy in the English-speaking world, including philosophy of science, while influencing the sciences, especially the social sciences, into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly criticized, most trenchantly by Willard Van Orman Quine, Norwood Hanson, Karl Popper, Thomas Kuhn, and Carl Hempel.

Roots

Language

Tractatus Logico-Philosophicus, by the young Ludwig Wittgenstein, introduced the view of philosophy as "critique of language", offering the possibility of a theoretically principled distinction of intelligible versus nonsensical discourse. The Tractatus adhered to a correspondence theory of truth (versus a coherence theory of truth). Wittgenstein's influence also shows in some versions of the verifiability principle. In tractarian doctrine, truths of logic are tautologies, a view widely accepted by logical positivists, who were also influenced by Wittgenstein's interpretation of probability, although, according to Neurath, some logical positivists found the Tractatus to contain too much metaphysics.

Logicism

Gottlob Frege began the program of reducing mathematics to logic and continued it with Bertrand Russell, but then lost interest in this logicism; Russell continued it with Alfred North Whitehead in their Principia Mathematica, inspiring some of the more mathematical logical positivists, such as Hans Hahn and Rudolf Carnap. Carnap's early anti-metaphysical works employed Russell's theory of types. Carnap envisioned a universal language that could reconstruct mathematics and thereby encode physics. Yet Kurt Gödel's incompleteness theorem showed this to be impossible except in trivial cases, and Alfred Tarski's undefinability theorem shattered all hopes of reducing mathematics to logic. Thus, a universal language failed to stem from Carnap's 1934 work Logische Syntax der Sprache (Logical Syntax of Language). Still, some logical positivists, including Carl Hempel, continued to support logicism.

Empiricism

In Germany, Hegelian metaphysics was a dominant movement, and Hegelian successors such as F. H. Bradley explained reality by postulating metaphysical entities lacking empirical basis, drawing a reaction in the form of positivism. Starting in the late 19th century, there was a "back to Kant" movement. Ernst Mach's positivism and phenomenalism were a major influence.

Origins

Vienna

The Vienna Circle, gathering around the University of Vienna and the Café Central, was led principally by Moritz Schlick. Schlick had held a neo-Kantian position but was later converted, via Carnap's 1928 book Der logische Aufbau der Welt (The Logical Structure of the World). A 1929 pamphlet written by Otto Neurath, Hans Hahn, and Rudolf Carnap summarized the Vienna Circle's positions. Another member of the Vienna Circle who later proved very influential was Carl Hempel. A friendly but tenacious critic of the Circle was Karl Popper, whom Neurath nicknamed the "Official Opposition".

Carnap and other Vienna Circle members, including Hahn and Neurath, saw the need for a weaker criterion of meaningfulness than verifiability. A radical "left" wing, led by Neurath and Carnap, began the program of "liberalization of empiricism", and they also emphasized fallibilism and pragmatics, the latter of which Carnap even suggested as empiricism's basis. A conservative "right" wing, led by Schlick and Waismann, rejected both the liberalization of empiricism and the epistemological nonfoundationalism of a move from phenomenalism to physicalism. As Neurath and, to some extent, Carnap posed science as a tool for social reform, the split in the Vienna Circle also reflected political views.

Berlin

The Berlin Circle was led principally by Hans Reichenbach.

Rivals

Both Moritz Schlick and Rudolf Carnap had been influenced by, and sought to define, logical positivism versus the neo-Kantianism of Ernst Cassirer, the then-leading figure of the so-called Marburg school, and against Edmund Husserl's phenomenology. Logical positivists especially opposed Martin Heidegger's obscure metaphysics, the epitome of what logical positivism rejected. In the early 1930s, Carnap debated Heidegger over "metaphysical pseudosentences". Despite its revolutionary aims, logical positivism was but one view among many vying within Europe, and logical positivists initially spoke their language.

Export

As the movement's first emissary to the New World, Moritz Schlick visited Stanford University in 1929, yet otherwise remained in Vienna and was murdered there in 1936 at the university by a former student, Johann Nelböck, who was reportedly deranged. That year, A. J. Ayer, a British attendee at some Vienna Circle meetings since 1933, saw his Language, Truth and Logic, written in English, import logical positivism to the English-speaking world. By then, the Nazi Party's 1933 rise to power in Germany had triggered the flight of intellectuals. In exile in England, Otto Neurath died in 1945. Rudolf Carnap, Hans Reichenbach, and Carl Hempel, Carnap's protégé who had studied in Berlin with Reichenbach, settled permanently in America. Upon Germany's annexation of Austria in 1938, the remaining logical positivists, many of whom were also Jewish, were targeted and continued to flee. Logical positivism thus became dominant in the English-speaking world.

Principles

Analytic/synthetic gap

Concerning reality, the necessary is a state true in all possible worlds—mere logical validity—whereas the contingent hinges on the way the particular world is. Concerning knowledge, the a priori is knowable before or without, whereas the a posteriori is knowable only after or through, relevant experience. Concerning statements, the analytic is true via terms' arrangement and meanings, thus a tautology—true by logical necessity but uninformative about the world—whereas the synthetic adds reference to a state of facts, a contingency.

In 1739, David Hume cast a fork aggressively dividing "relations of ideas" from "matters of fact and real existence", such that all truths are of one type or the other. By Hume's fork, truths by relations among ideas (abstract) all align on one side (analytic, necessary, a priori), whereas truths by states of actualities (concrete) always align on the other side (synthetic, contingent, a posteriori). Of any treatises containing neither, Hume orders, "Commit it then to the flames, for it can contain nothing but sophistry and illusion".

Thus awakened from "dogmatic slumber", Immanuel Kant quested to answer Hume's challenge, but by explaining how metaphysics is possible. Eventually, in his 1781 work, Kant crossed the tines of Hume's fork to identify another range of truths by necessity (the synthetic a priori: statements claiming states of facts but known true before experience) by arriving at transcendental idealism, which attributes to the mind a constructive role in phenomena by arranging sense data into the very experience of space, time, and substance. Thus, Kant saved Newton's law of universal gravitation from Hume's problem of induction by finding uniformity of nature to be a priori knowledge. Logical positivists rejected Kant's synthetic a priori and adopted Hume's fork, whereby a statement is either analytic and a priori (thus necessary and verifiable logically) or synthetic and a posteriori (thus contingent and verifiable empirically).

Observation/theory gap

Early on, most logical positivists proposed that all knowledge is based on logical inference from simple "protocol sentences" grounded in observable facts. In Carnap's 1936 and 1937 papers "Testability and meaning", individual terms replace sentences as the units of meaning. Further, theoretical terms no longer need to acquire meaning by explicit definition from observational terms: the connection may be indirect, through a system of implicit definitions. Carnap also provided an important, pioneering discussion of disposition predicates.

Cognitive meaningfulness

Verification

The logical positivists' initial stance was that a statement is "cognitively meaningful" in terms of conveying truth value, information or factual content only if some finite procedure conclusively determines its truth. By this verifiability principle, only statements verifiable either by their analyticity or by empiricism were cognitively meaningful. Metaphysics, ontology, as well as much of ethics failed this criterion, and so were found cognitively meaningless. Moritz Schlick, however, did not view ethical or aesthetic statements as cognitively meaningless. Cognitive meaningfulness was variously defined: having a truth value; corresponding to a possible state of affairs; intelligible or understandable as are scientific statements.

Ethics and aesthetics were subjective preferences, while theology and other metaphysics contained "pseudostatements", neither true nor false. This meaningfulness was cognitive, although other types of meaningfulness (for instance, emotive, expressive, or figurative) occurred in metaphysical discourse, dismissed from further review. Thus, logical positivism indirectly asserted Hume's law, the principle that "is" statements cannot justify "ought" statements but are separated by an unbridgeable gap. A. J. Ayer's 1936 book asserted an extreme variant, the boo/hooray doctrine, whereby all evaluative judgments are but emotional reactions.

Confirmation

In an important pair of papers in 1936 and 1937, "Testability and meaning", Carnap replaced verification with confirmation, on the view that although universal laws cannot be verified they can be confirmed. Later, Carnap employed abundant logical and mathematical methods in researching inductive logic while seeking to provide an account of probability as "degree of confirmation", but was never able to formulate a model. In Carnap's inductive logic, every universal law's degree of confirmation is always zero. In any event, the precise formulation of what came to be called the "criterion of cognitive significance" took three decades (Hempel 1950, Carnap 1956, Carnap 1961).

Carl Hempel became a major critic within the logical positivism movement. Hempel criticized the positivist thesis that empirical knowledge is restricted to Basissätze/Beobachtungssätze/Protokollsätze (basic statements or observation statements or protocol statements). Hempel elucidated the paradox of confirmation.

Weak verification

The second edition of A. J. Ayer's book arrived in 1946, and discerned strong versus weak forms of verification. Ayer concluded, "A proposition is said to be verifiable, in the strong sense of the term, if, and only if, its truth could be conclusively established by experience", but is verifiable in the weak sense "if it is possible for experience to render it probable". And yet, "no proposition, other than a tautology, can possibly be anything more than a probable hypothesis". Thus, all are open to weak verification.

Philosophy of science

Upon the global defeat of Nazism, and the removal from philosophy of rivals for radical reform—Marburg neo-Kantianism, Husserlian phenomenology, Heidegger's "existential hermeneutics"—and while hosted in the climate of American pragmatism and commonsense empiricism, the neopositivists shed much of their earlier, revolutionary zeal. No longer crusading to revise traditional philosophy into a new scientific philosophy, they became respectable members of a new philosophy subdiscipline, philosophy of science. Receiving support from Ernest Nagel, logical empiricists were especially influential in the social sciences.

Explanation

Comtean positivism had viewed science as description, whereas the logical positivists posed science as explanation, perhaps to better realize the envisioned unity of science by covering not only fundamental science (that is, fundamental physics) but the special sciences, too, for instance biology, anthropology, psychology, sociology, and economics. The most widely accepted concept of scientific explanation, held even by neopositivist critic Karl Popper, was the deductive-nomological model (DN model). Yet the DN model received its greatest explication from Carl Hempel, first in his 1942 article "The function of general laws in history", and more explicitly with Paul Oppenheim in their 1948 article "Studies in the logic of explanation".

In the DN model, the stated phenomenon to be explained is the explanandum, which can be an event, law, or theory, whereas the premises stated to explain it are the explanans. The explanans must be true or highly confirmed, contain at least one law, and entail the explanandum. Thus, given initial conditions C₁, C₂, ..., Cₙ plus general laws L₁, L₂, ..., Lₙ, event E is a deductive consequence and scientifically explained. In the DN model, a law is an unrestricted generalization by conditional proposition (if A, then B) and has testable empirical content. (Differing from a merely true regularity, for instance "George always carries only $1 bills in his wallet", a law suggests what must be true and is a consequence of a scientific theory's axiomatic structure.)
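This structure is often written as the Hempel–Oppenheim schema, with the explanans above the line and the explanandum below it; in LaTeX notation it might be rendered as:

% Deductive-nomological (DN) schema: conditions plus laws (the explanans)
% jointly entail the event to be explained (the explanandum).
\[
\begin{array}{ll}
C_1,\, C_2,\, \ldots,\, C_n & \text{(statements of initial conditions)}\\
L_1,\, L_2,\, \ldots,\, L_n & \text{(general laws)}\\
\hline
E & \text{(description of the event to be explained)}
\end{array}
\]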

By the Humean empiricist view that humans observe sequences of events (not cause and effect, as causality and causal mechanisms are unobservable), the DN model neglects causality beyond mere constant conjunction: first event A and then always event B. Hempel's explication of the DN model held natural laws (empirically confirmed regularities) as satisfactory and, if formulated realistically, approximating causal explanation. In later articles, Hempel defended the DN model and proposed a probabilistic explanation, the inductive-statistical model (IS model). The DN and IS models together form the covering law model, as named by a critic, William Dray. Derivation of statistical laws from other statistical laws falls under the deductive-statistical model (DS model). Georg Henrik von Wright, another critic, named it subsumption theory, fitting the ambition of theory reduction.

Unity of science

Logical positivists were generally committed to "Unified Science", and sought a common language or, in Neurath's phrase, a "universal slang" whereby all scientific propositions could be expressed. The adequacy of proposals or fragments of proposals for such a language was often asserted on the basis of various "reductions" or "explications" of the terms of one special science to the terms of another, putatively more fundamental. Sometimes these reductions consisted of set-theoretic manipulations of a few logically primitive concepts (as in Carnap's Logical Structure of the World, 1928). Sometimes, these reductions consisted of allegedly analytic or a priori deductive relationships (as in Carnap's "Testability and meaning"). A number of publications over a period of thirty years would attempt to elucidate this concept.

Theory reduction

As in Comtean positivism's envisioned unity of science, neopositivists aimed to network all special sciences through the covering law model of scientific explanation. And ultimately, by supplying boundary conditions and supplying bridge laws within the covering law model, all the special sciences' laws would reduce to fundamental physics, the fundamental science.

Critics

After World War II, key tenets of logical positivism, including its atomistic philosophy of science, the verifiability principle, and the fact/value gap, drew escalated criticism. The verifiability criterion made universal statements 'cognitively' meaningless, and even made statements that were unverifiable merely for technological, not conceptual, reasons meaningless, which was taken to pose significant problems for the philosophy of science. These problems were recognized within the movement, which hosted attempted solutions (Carnap's move to confirmation, Ayer's acceptance of weak verification), but the program drew sustained criticism from a number of directions by the 1950s. Even philosophers disagreeing among themselves on which direction general epistemology ought to take, as well as on philosophy of science, agreed that the logical empiricist program was untenable, and it became viewed as self-contradictory: the verifiability criterion of meaning was itself unverifiable. Notable critics included Popper, Quine, Hanson, Kuhn, Putnam, Austin, Strawson, Goodman, and Rorty.

Popper

An early, tenacious critic was Karl Popper, whose 1934 book Logik der Forschung, arriving in English in 1959 as The Logic of Scientific Discovery, directly answered verificationism. Popper considered the problem of induction to render empirical verification logically impossible, while the deductive fallacy of affirming the consequent reveals any phenomenon's capacity to host more than one logically possible explanation. Accepting scientific method as hypothetico-deduction, whose inference form is denying the consequent, Popper finds scientific method unable to proceed without falsifiable predictions. Popper thus identifies falsifiability to demarcate not meaningful from meaningless but simply scientific from unscientific, a label not in itself unfavorable.

Popper finds virtue in metaphysics, required to develop new scientific theories. And an unfalsifiable—thus unscientific, perhaps metaphysical—concept in one era can later, through evolving knowledge or technology, become falsifiable, thus scientific. Popper also found science's quest for truth to rest on values. Popper disparages the pseudoscientific, which occurs when an unscientific theory is proclaimed true and coupled with seemingly scientific method by "testing" the unfalsifiable theory—whose predictions are confirmed by necessity—or when a scientific theory's falsifiable predictions are strongly falsified but the theory is persistently protected by "immunizing stratagems", such as the appendage of ad hoc clauses saving the theory or the recourse to increasingly speculative hypotheses shielding the theory.

Explicitly denying the positivist view of meaning and verification, Popper developed the epistemology of critical rationalism, which considers that human knowledge evolves by conjectures and refutations, and that no number, degree, and variety of empirical successes can either verify or confirm scientific theory. For Popper, science's aim is corroboration of scientific theory, which strives for scientific realism but accepts the maximal status of strongly corroborated verisimilitude ("truthlikeness"). Popper thus acknowledged the value of the positivist movement's emphasis on science but claimed that he had "killed positivism".

Quine

Although an empiricist, the American logician Willard Van Orman Quine published the 1951 paper "Two Dogmas of Empiricism", which challenged conventional empiricist presumptions. Quine attacked the analytic/synthetic division, on which the verificationist program hinged in order to entail, by consequence of Hume's fork, both necessity and aprioricity. Quine's ontological relativity explained that every term in any statement has its meaning contingent on a vast network of knowledge and belief, the speaker's conception of the entire world. Quine later proposed naturalized epistemology.

Hanson

In 1958, Norwood Hanson's Patterns of Discovery undermined the division of observation versus theory, as one can predict, collect, prioritize, and assess data only via some horizon of expectation set by a theory. Thus, any dataset—the direct observations, the scientific facts—is laden with theory.

Kuhn

With his landmark The Structure of Scientific Revolutions (1962), Thomas Kuhn critically destabilized the verificationist program, which was presumed to call for foundationalism. (But already in the 1930s, Otto Neurath had argued for nonfoundationalism via coherentism by likening science to a boat (Neurath's boat) that scientists must rebuild at sea.) Although Kuhn's thesis itself was attacked even by opponents of neopositivism, in the 1970 postscript to Structure, Kuhn asserted, at least, that there was no algorithm to science—and, on that, even most of Kuhn's critics agreed.

Powerful and persuasive, Kuhn's book, unlike the vocabulary and symbols of logic's formal language, was written in natural language open to the layperson. Kuhn's book was first published in a volume of the International Encyclopedia of Unified Science, a project begun by logical positivists but co-edited by Neurath, whose view of science was already nonfoundationalist as mentioned above, and in some sense it unified science, indeed, but by bringing it into the realm of historical and social assessment rather than fitting it to the model of physics. Kuhn's ideas were rapidly adopted by scholars in disciplines well outside the natural sciences, and, as logical empiricists were extremely influential in the social sciences, ushered academia into postpositivism or postempiricism.

Putnam

The "received view" operates on the correspondence rule that states, "The observational terms are taken as referring to specified phenomena or phenomenal properties, and the only interpretation given to the theoretical terms is their explicit definition provided by the correspondence rules". According to Hilary Putnam, a former student of Reichenbach and of Carnap, the dichotomy of observational terms versus theoretical terms introduced a problem within scientific discussion that was nonexistent until this dichotomy was stated by logical positivists. Putnam's four objections:

  • Something is referred to as "observational" if it is observable directly with our senses. Then an observational term cannot be applied to something unobservable. If this is the case, there are no observational terms.
  • With Carnap's classification, some unobservable terms are not even theoretical and belong to neither observational terms nor theoretical terms. Some theoretical terms refer primarily to observational terms.
  • Reports of observational terms frequently contain theoretical terms.
  • A scientific theory may not contain any theoretical terms (an example of this is Darwin's original theory of evolution).

Putnam also alleged that positivism was actually a form of metaphysical idealism by its rejecting scientific theory's ability to garner knowledge about nature's unobservable aspects. With his "no miracles" argument, posed in 1974, Putnam asserted scientific realism, the stance that science achieves true, or approximately true, knowledge of the world as it exists independently of humans' sensory experience. In this, Putnam opposed not only positivism but also other forms of instrumentalism, whereby scientific theory is but a human tool to predict human observations, filling the void left by positivism's decline.

Decline and legacy

By the late 1960s, logical positivism had become exhausted. In 1976, A. J. Ayer quipped that "the most important" defect of logical positivism "was that nearly all of it was false", though he maintained "it was true in spirit." Although logical positivism tends to be recalled as a pillar of scientism, Carl Hempel was key in establishing the philosophy subdiscipline philosophy of science where Thomas Kuhn and Karl Popper brought in the era of postpositivism. John Passmore found logical positivism to be "dead, or as dead as a philosophical movement ever becomes".

Logical positivism's fall reopened debate over the metaphysical merit of scientific theory, whether it can offer knowledge of the world beyond human experience (scientific realism) versus whether it is but a human tool to predict human experience (instrumentalism). Meanwhile, it became popular among philosophers to rehash the faults and failures of logical positivism without investigation of them. Thereby, logical positivism has been generally misrepresented, sometimes severely. Arguing for their own views, often framed versus logical positivism, many philosophers have reduced logical positivism to simplisms and stereotypes, especially the notion of logical positivism as a type of foundationalism. In any event, the movement helped anchor analytic philosophy in the English-speaking world, and returned Britain to empiricism. Without the logical positivists, who have been tremendously influential outside philosophy, especially in psychology and other social sciences, intellectual life of the 20th century would be unrecognizable.

Medieval Warm Period

From Wikipedia, the free encyclopedia
 
Global average temperatures show that the Medieval Warm Period was not a global phenomenon.

The Medieval Warm Period (MWP), also known as the Medieval Climate Optimum or the Medieval Climatic Anomaly, was a time of warm climate in the North Atlantic region that lasted from c. 950 to c. 1250. Climate proxy records show peak warmth occurred at different times for different regions, which indicate that the MWP was not a globally uniform event. Some refer to the MWP as the Medieval Climatic Anomaly to emphasize that climatic effects other than temperature were also important.

The MWP was followed by a regionally cooler period in the North Atlantic and elsewhere, which is sometimes called the Little Ice Age (LIA).

Possible causes of the MWP include increased solar activity, decreased volcanic activity, and changes in ocean circulation.

Research

The Medieval Warm Period (MWP) is generally thought to have occurred from c. 950 to c. 1250, during the European Middle Ages. In 1965, Hubert Lamb, one of the first paleoclimatologists, published research based on data from botany, historical document research, and meteorology, combined with records indicating prevailing temperature and rainfall in England around c. 1200 and around c. 1600. He proposed, "Evidence has been accumulating in many fields of investigation pointing to a notably warm climate in many parts of the world, that lasted a few centuries around c. 1000–c. 1200 AD, and was followed by a decline of temperature levels till between c. 1500 and c. 1700 the coldest phase since the last ice age occurred."

The era of warmer temperatures became known as the Medieval Warm Period and the subsequent cold period the Little Ice Age (LIA). However, the view that the MWP was a global event was challenged by other researchers. The IPCC First Assessment Report of 1990 discussed the "Medieval Warm Period around 1000 AD (which may not have been global) and the Little Ice Age which ended only in the middle to late nineteenth century." It stated that temperatures in the "late tenth to early thirteenth centuries (about AD 950–1250) appear to have been exceptionally warm in western Europe, Iceland and Greenland." The IPCC Third Assessment Report from 2001 summarized newer research: "evidence does not support globally synchronous periods of anomalous cold or warmth over this time frame, and the conventional terms of 'Little Ice Age' and 'Medieval Warm Period' appear to have limited utility in describing trends in hemispheric or global mean temperature changes in past centuries."

Global temperature records taken from ice cores, tree rings, and lake deposits have shown that, during the MWP, the Earth may have been slightly cooler globally (by about 0.03 °C) than in the early and mid-20th century.

Palaeoclimatologists developing region-specific climate reconstructions of past centuries conventionally label their coldest interval as the "LIA" and their warmest interval as the "MWP". Others follow the convention, and when a significant climate event is found in the "LIA" or "MWP" timeframes, they associate their events with the period. Some "MWP" events are thus wet events or cold events, rather than strictly warm events, particularly in central Antarctica, where climate patterns opposite to those of the North Atlantic have been noticed.

Global climate during the Medieval Warm Period

In 2019, by using an extended proxy data set, the PAGES 2k Consortium confirmed that the Medieval Climate Anomaly was not a globally synchronous event. The warmest 51-year period within the MWP did not occur at the same time in different regions. They argue that framing climate variability in the preindustrial Common Era regionally, rather than globally, aids understanding.
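To make the "warmest 51-year period" comparison concrete, here is a minimal sketch in Python of a sliding-window search for the warmest 51-year span in a regional series. The regional series, their names, and the warmest_window helper are invented for illustration and are not part of the PAGES 2k analysis.

# Minimal sketch: locate the warmest 51-year window in a regional
# temperature-anomaly series. The series are synthetic placeholders,
# not the PAGES 2k reconstructions.
import random

def warmest_window(series, start_year, window=51):
    """Return (start_year_of_window, mean) for the warmest `window`-year span."""
    best_start, best_mean = None, float("-inf")
    for i in range(len(series) - window + 1):
        mean = sum(series[i:i + window]) / window
        if mean > best_mean:
            best_start, best_mean = start_year + i, mean
    return best_start, best_mean

random.seed(0)
# Two invented regional anomaly series (°C) covering AD 950-1250.
regions = {
    "Region A": [random.gauss(0.2, 0.3) for _ in range(301)],
    "Region B": [random.gauss(0.0, 0.3) for _ in range(301)],
}
for name, series in regions.items():
    start, mean = warmest_window(series, start_year=950)
    print(f"{name}: warmest 51-year window begins AD {start}, mean anomaly {mean:+.2f} °C")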

North Atlantic

Greenland ice sheet temperatures interpreted from the δ18O isotope record of six ice cores (Vinther, B., et al., 2009). The data set ranges from 9690 BC to AD 1970 and has a resolution of around 20 years, meaning that each data point represents the average temperature of the surrounding 20 years.
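As a rough illustration of what a resolution of around 20 years implies, the following Python sketch applies a centered moving average to an annual series; the values and the centered_moving_average helper are invented and do not reproduce the Vinther et al. processing.

# Minimal sketch: a centered ~20-year moving average over an annual series,
# so each output point reflects roughly the surrounding 20 years.
# The input values are synthetic, not the Vinther et al. (2009) data.
def centered_moving_average(values, window=20):
    half = window // 2
    out = []
    for i in range(len(values)):
        # Average the surrounding `window` years, trimmed at the series ends.
        segment = values[max(0, i - half):min(len(values), i + half + 1)]
        out.append(sum(segment) / len(segment))
    return out

annual = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.1, -0.3, 0.0, 0.2] * 5  # invented anomalies, °C
smoothed = centered_moving_average(annual)
print(smoothed[:3])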
 

Lloyd D. Keigwin's 1996 study of radiocarbon-dated box core data from marine sediments in the Sargasso Sea found that its sea surface temperature was approximately 1 °C (1.8 °F) cooler than today both approximately 400 years ago, during the LIA, and 1700 years ago, and was approximately 1 °C warmer than today 1000 years ago, during the MWP.

Using sediment samples from Puerto Rico, the Gulf Coast, and the Atlantic Coast from Florida to New England, Mann et al. (2009) found consistent evidence of a peak in North Atlantic tropical cyclone activity during the MWP, which was followed by a subsequent lull in activity.

Iceland

Iceland was first settled between about 865 and 930, during a time believed to be warm enough for sailing and farming. Through isotope analysis of marine cores and examination of mollusc growth patterns from Iceland, Patterson et al. reconstructed a stable oxygen (δ18O) and carbon (δ13C) isotope record at a decadal resolution from the Roman Warm Period through the MWP to the LIA. Patterson et al. concluded that summer temperatures stayed high but winter temperatures decreased after the initial settlement of Iceland.

Greenland

The last written records of the Norse Greenlanders are of an Icelandic marriage in 1408 at Hvalsey Church, now the best-preserved of the Norse ruins; the event was recorded later in Iceland.

The 2009 Mann et al. study found warmth exceeding 1961–1990 levels in southern Greenland and parts of North America during the MWP, which the study defines as from 950 to 1250, with warmth in some regions exceeding temperatures of the 1990–2010 period. Much of the Northern Hemisphere showed a significant cooling during the LIA, which the study defines as from 1400 to 1700, but Labrador and isolated parts of the United States appeared to be approximately as warm as during the 1961–1990 period.

1690 copy of the 1570 Skálholt map, based on documentary information about earlier Norse sites in America.

The Norse colonization of the Americas has been associated with warmer periods. The common theory is that Norsemen took advantage of ice-free seas to colonize areas in Greenland and other outlying lands of the far north. However, a study from Columbia University suggests that Greenland was not colonized during warmer weather and that the warming effect in fact lasted only very briefly. In c. 1000 AD, the climate was sufficiently warm for the Vikings to journey to Newfoundland and to establish a short-lived outpost there.

L'Anse aux Meadows, Newfoundland, today, with a reconstruction of a Viking settlement.

In around 985, Vikings founded the Eastern and Western Settlements, both near the southern tip of Greenland. In the colony's early stages, they kept cattle, sheep, and goats, with around a quarter of their diet from seafood. After the climate became colder and stormier around 1250, their diet steadily shifted towards ocean sources. By around 1300, seal hunting provided over three quarters of their food.

By 1350, there was reduced demand for their exports, and trade with Europe fell away. The last document from the settlements dates from 1412, and over the following decades, the remaining Europeans left in what seems to have been a gradual withdrawal, which was caused mainly by economic factors such as increased availability of farms in Scandinavian countries.

Europe

Southern Europe experienced substantial glacial retreat during the MWP. While several smaller glaciers underwent complete deglaciation, larger glaciers in the region survived and now provide insight into the region’s climate history. In addition to warming-induced glacial melt, sedimentary records reveal a period of increased flooding in eastern Europe, coinciding with the MWP, that is attributed to enhanced precipitation from a positive-phase North Atlantic Oscillation (NAO). Other impacts of climate change can be less apparent, such as a changing landscape. Preceding the MWP, a coastal region in western Sardinia was abandoned by the Romans. Without the influence of human populations, and with a high stand during the MWP, the coastal area was able to expand substantially into the lagoon. When human populations returned to the region, they encountered a land altered by climate change and had to reestablish ports.

Other regions

North America

In Chesapeake Bay (now in Maryland and Virginia, United States), researchers found large temperature excursions (changes from the mean temperature of that time) during the MWP (about 950–1250) and the Little Ice Age (about 1400–1700, with cold periods persisting into the early 20th century), which are possibly related to changes in the strength of North Atlantic thermohaline circulation. Sediments in Piermont Marsh of the lower Hudson Valley show a dry MWP from 800 to 1300.
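For readers unfamiliar with the term, here is a brief Python sketch of how a temperature excursion is computed relative to a baseline mean; all numbers and labels are hypothetical and are not values from the Chesapeake Bay study.

# Minimal sketch: a temperature excursion (anomaly) is the departure of a
# value from a reference mean. All numbers below are invented.
reference_period = [14.8, 15.1, 15.0, 14.9, 15.2]              # baseline temperatures, °C
observations = {"c. 1100 (MWP)": 15.9, "c. 1600 (LIA)": 14.2}  # hypothetical values, °C

baseline_mean = sum(reference_period) / len(reference_period)
for label, temp in observations.items():
    print(f"{label}: excursion of {temp - baseline_mean:+.2f} °C from the baseline mean")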

Prolonged droughts affected many parts of what is now the Western United States, especially eastern California and the western Great Basin. Alaska experienced three intervals of comparable warmth: 1–300, 850–1200, and since 1800. Knowledge of the MWP in North America has been useful in dating occupancy periods of certain Native American habitation sites, especially in arid parts of the Western United States. Droughts in the MWP may also have impacted Native American settlements in the Eastern United States, such as at Cahokia. Review of more recent archaeological research shows that, as the search for signs of unusual cultural changes has broadened, some of the early patterns (such as violence and health problems) have been found to be more complicated and regionally varied than had previously been thought. Other patterns, such as settlement disruption, deterioration of long-distance trade, and population movements, have been further corroborated.

Africa

The climate in equatorial eastern Africa has alternated between being drier than today and relatively wet. The climate was drier during the MWP (1000–1270). Off the coast of Africa, isotopic analysis of bones from the Canary Islands’ inhabitants during the MWP-to-LIA transition reveals that the region experienced a 5 °C decrease in air temperature. Over this period, the diet of the inhabitants did not appreciably change, which suggests that they were remarkably resilient to climate change.

Antarctica

A sediment core from the eastern Bransfield Basin, in the Antarctic Peninsula, preserves climatic events from both the LIA and the MWP. The authors noted, "The late Holocene records clearly identify Neoglacial events of the LIA and Medieval Warm Period (MWP)." Some Antarctic regions were atypically cold, but others were atypically warm between 1000 and 1200.

Pacific Ocean

Corals in the tropical Pacific Ocean suggest that relatively cool and dry conditions may have persisted early in the millennium, which is consistent with a La Niña-like configuration of the El Niño-Southern Oscillation patterns.

In 2013, a study from three US universities published in the journal Science showed that the water temperature in the Pacific Ocean was 0.9 °C warmer during the MWP than during the LIA and 0.65 °C warmer than in the decades before the study.

South America

The MWP has been noted in Chile in a 1500-year lake bed sediment core, as well as in the Eastern Cordillera of Ecuador.

A reconstruction based on ice cores found that the MWP could be distinguished in tropical South America from about 1050 to 1300 and was followed in the 15th century by the LIA. Peak temperatures did not rise to the level of those of the late 20th century, which were unprecedented in the area during the study period of 1600 years.

Asia

Adhikari and Kumon (2001), investigating sediments in Lake Nakatsuna, in central Japan, found a warm period from 900 to 1200 that corresponded to the MWP and three cool phases, two of which could be related to the LIA. Other research in northeastern Japan showed that there was one warm and humid interval, from 750 to 1200, and two cold and dry intervals, from 1 to 750 and from 1200 to now. Ge et al. studied temperatures in China for the past 2000 years and found high uncertainty prior to the 16th century but good consistency over the last 500 years, highlighted by the two cold periods, 1620s–1710s and 1800s–1860s, and the 20th-century warming. They also found that the warming from the 10th to the 14th centuries in some regions might be comparable in magnitude to the warming of the last few decades of the 20th century, which was unprecedented within the past 500 years. Generally, a warming period coinciding with the MWP was identified in China using multi-proxy temperature data. However, the warming was inconsistent across China: significant temperature change from the MWP to the LIA was found for northeast and central-east China but not for northwest China and the Tibetan Plateau.

Alongside an overall warmer climate, areas in Asia experienced wetter conditions during the MWP, including southeastern China, India, and far eastern Russia. Peat cores from peatland in southeast China suggest that changes in the East Asian Summer Monsoon (EASM) and El Niño Southern Oscillation (ENSO) are responsible for increased precipitation in the region during the MWP. The Indian Summer Monsoon (ISM) was also enhanced during the MWP by a temperature-driven change in the Atlantic Multi-decadal Oscillation (AMO), bringing more precipitation to India. In far eastern Russia, continental regions experienced severe floods during the MWP, while nearby islands experienced less precipitation, leading to a decrease in peatland. Pollen data from the region indicate an expansion of warm-climate vegetation, with an increasing number of broadleaf forests and a decreasing number of coniferous forests.

Oceania

There is an extreme scarcity of data from Australia for both the MWP and the LIA. However, evidence from wave-built shingle terraces for a permanently full Lake Eyre during the 9th and 10th centuries is consistent with a La Niña-like configuration, but the data are insufficient to show how lake levels varied from year to year or what climatic conditions elsewhere in Australia were like.

A 1979 study from the University of Waikato found, "Temperatures derived from an 18O/16O profile through a stalagmite found in a New Zealand cave (40.67°S, 172.43°E) suggested the Medieval Warm Period to have occurred between AD c. 1050 and c. 1400 and to have been 0.75 °C warmer than the Current Warm Period." Further evidence from New Zealand comes from an 1100-year tree-ring record.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...