Friday, September 10, 2021

The Structure of Scientific Revolutions

From Wikipedia, the free encyclopedia
 
The Structure of Scientific Revolutions
Cover of the first edition
Author: Thomas S. Kuhn
Cover artist: Ted Lacey
Country: United States
Language: English
Subject: History of science
Publisher: University of Chicago Press
Publication date: 1962
Media type: Print (hardcover and paperback)
Pages: 264
ISBN: 9780226458113
Dewey Decimal: 501
LC Class: Q175.K95

The Structure of Scientific Revolutions (1962; second edition 1970; third edition 1996; fourth edition 2012) is a book about the history of science by the philosopher Thomas S. Kuhn. Its publication was a landmark event in the history, philosophy, and sociology of science. Kuhn challenged the then-prevailing view of progress in science, in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued instead for an episodic model in which periods of conceptual continuity and cumulative progress, which he referred to as periods of "normal science", are interrupted by periods of revolutionary science. The accumulation of "anomalies" during normal science precipitates such revolutions and leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, and change the rules of the game and the "map" directing new research.

For example, Kuhn's analysis of the Copernican Revolution emphasized that, in its beginning, it did not offer more accurate predictions of celestial events, such as planetary positions, than the Ptolemaic system, but instead appealed to some practitioners based on a promise of better, simpler solutions that might be developed at some point in the future. Kuhn called the core concepts of an ascendant revolution its "paradigms" and thereby launched this word into widespread analogical use in the second half of the 20th century. Kuhn's insistence that a paradigm shift was a mélange of sociology, enthusiasm and scientific promise, but not a logically determinate procedure, caused an uproar. Kuhn addressed these concerns in the 1969 postscript to the second edition. For some commentators, The Structure of Scientific Revolutions introduced a realistic humanism into the core of science, while for others the nobility of science was tarnished by Kuhn's introduction of an irrational element into the heart of its greatest achievements.

History

The Structure of Scientific Revolutions was first published as a monograph in the International Encyclopedia of Unified Science, then as a book by University of Chicago Press in 1962. In 1969, Kuhn added a postscript to the book in which he replied to critical responses to the first edition. A 50th Anniversary Edition (with an introductory essay by Ian Hacking) was published by the University of Chicago Press in April 2012.

Kuhn dated the genesis of his book to 1947, when he was a graduate student at Harvard University and had been asked to teach a science class for humanities undergraduates with a focus on historical case studies. Kuhn later commented that until then, "I'd never read an old document in science." Aristotle's Physics was astonishingly unlike Isaac Newton's work in its concepts of matter and motion. Kuhn wrote "... as I was reading him, Aristotle appeared not only ignorant of mechanics, but a dreadfully bad physical scientist as well. About motion, in particular, his writings seemed to me full of egregious errors, both of logic and of observation." This was in apparent contradiction with the fact that Aristotle was a brilliant mind. While perusing Aristotle's Physics, Kuhn formed the view that in order to properly appreciate Aristotle's reasoning, one must be aware of the scientific conventions of the time. Kuhn concluded that Aristotle's concepts were not "bad Newton," just different. This insight was the foundation of The Structure of Scientific Revolutions.

Prior to the publication of Kuhn's book, a number of ideas regarding the process of scientific investigation and discovery had already been proposed. Ludwik Fleck developed the first system of the sociology of scientific knowledge in his book The Genesis and Development of a Scientific Fact (1935). He claimed that the exchange of ideas led to the establishment of a thought collective, which, when developed sufficiently, served to separate the field into esoteric (professional) and exoteric (laymen) circles. Kuhn wrote the foreword to the 1979 edition of Fleck's book, noting that he read it in 1950 and was reassured that someone "saw in the history of science what I myself was finding there."

Kuhn was not confident about how his book would be received. Harvard University had denied him tenure a few years prior. However, by the mid-1980s, his book had achieved blockbuster status. When Kuhn's book came out in the early 1960s, "structure" was an intellectually popular word in many fields in the humanities and social sciences, including linguistics and anthropology, appealing in its idea that complex phenomena could reveal or be studied through basic, simpler structures. Kuhn's book contributed to that idea.

One theory to which Kuhn replies directly is Karl Popper's “falsificationism,” which stresses falsifiability as the most important criterion for distinguishing between that which is scientific and that which is unscientific. Kuhn also addresses verificationism, a philosophical movement that emerged in the 1920s among logical positivists. The verifiability principle claims that meaningful statements must be supported by empirical evidence or logical requirements.

Synopsis

Basic approach

Kuhn's approach to the history and philosophy of science focuses on conceptual issues like the practice of normal science, influence of historical events, emergence of scientific discoveries, nature of scientific revolutions and progress through scientific revolutions. What sorts of intellectual options and strategies were available to people during a given period? What types of lexicons and terminology were known and employed during certain epochs? Stressing the importance of not attributing traditional thought to earlier investigators, Kuhn's book argues that the evolution of scientific theory does not emerge from the straightforward accumulation of facts, but rather from a set of changing intellectual circumstances and possibilities. Such an approach is largely commensurate with the general historical school of non-linear history.

Kuhn did not see scientific theory as proceeding linearly from an objective, unbiased accumulation of all available data, but rather as paradigm-driven. “The operations and measurements that a scientist undertakes in the laboratory are not ‘the given’ of experience but rather ‘the collected with difficulty.’ They are not what the scientist sees—at least not before his research is well advanced and his attention focused. Rather, they are concrete indices to the content of more elementary perceptions, and as such they are selected for the close scrutiny of normal research only because they promise opportunity for the fruitful elaboration of an accepted paradigm. Far more clearly than the immediate experience from which they in part derive, operations and measurements are paradigm-determined. Science does not deal in all possible laboratory manipulations. Instead, it selects those relevant to the juxtaposition of a paradigm with the immediate experience that that paradigm has partially determined. As a result, scientists with different paradigms engage in different concrete laboratory manipulations.” 

Historical examples of chemistry

Kuhn explains his ideas using examples taken from the history of science. For instance, eighteenth-century scientists believed that homogenous solutions were chemical compounds. Therefore, a combination of water and alcohol was generally classified as a compound. Nowadays it is considered to be a solution, but there was no reason then to suspect that it was not a compound. Water and alcohol would not separate spontaneously, nor would they separate completely upon distillation (they form an azeotrope). Water and alcohol can be combined in any proportion.

Under this paradigm, scientists believed that chemical reactions (such as the combination of water and alcohol) did not necessarily occur in fixed proportion. This belief was ultimately overturned by Dalton's atomic theory, which asserted that atoms can only combine in simple, whole-number ratios. Under this new paradigm, any reaction which did not occur in fixed proportion could not be a chemical process. This type of world-view transition among the scientific community exemplifies Kuhn's paradigm shift.
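
A worked illustration of the Daltonian rule (a standard textbook example, not Kuhn's own): in carbon monoxide and carbon dioxide, the masses of oxygen that combine with a fixed mass of carbon stand in a simple whole-number ratio,

    \frac{m_O(\mathrm{CO_2})}{m_O(\mathrm{CO})} = \frac{2}{1},

whereas water and alcohol mix in any proportion whatever. Under the new paradigm, the absence of any such fixed ratio is precisely what disqualified the water-alcohol mixture as a compound.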

Copernican Revolution

A famous example of a revolution in scientific thought is the Copernican Revolution. In Ptolemy's school of thought, cycles and epicycles (with some additional concepts) were used for modeling the movements of the planets in a cosmos that had a stationary Earth at its center. As the accuracy of celestial observations increased, the complexity of the Ptolemaic cyclical and epicyclical mechanisms had to increase to keep the calculated planetary positions close to the observed positions. Copernicus proposed a cosmology in which the Sun was at the center and the Earth was one of the planets revolving around it. For modeling the planetary motions, Copernicus used the tools he was familiar with, namely the cycles and epicycles of the Ptolemaic toolbox. Yet Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, his model did not appear to provide more accurate predictions than the Ptolemaic model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility.

Kuhn illustrates how a paradigm shift later became possible when Galileo Galilei introduced his new ideas concerning motion. Intuitively, when an object is set in motion, it soon comes to a halt. A well-made cart may travel a long distance before it stops, but unless something keeps pushing it, it will eventually stop moving. Aristotle had argued that this was presumably a fundamental property of nature: for the motion of an object to be sustained, it must continue to be pushed. Given the knowledge available at the time, this represented sensible, reasonable thinking.

Galileo put forward a bold alternative conjecture: suppose, he said, that we always observe objects coming to a halt simply because some friction is always occurring. Galileo had no equipment with which to objectively confirm his conjecture, but he suggested that without any friction to slow down an object in motion, its inherent tendency is to maintain its speed without the application of any additional force.

The Ptolemaic approach of using cycles and epicycles was becoming strained: there seemed to be no end to the mushrooming growth in complexity required to account for the observable phenomena. Johannes Kepler was the first person to abandon the tools of the Ptolemaic paradigm. He started to explore the possibility that the planet Mars might have an elliptical orbit rather than a circular one. Clearly, the angular velocity could not be constant, but it proved very difficult to find the formula describing the rate of change of the planet's angular velocity. After many years of calculations, Kepler arrived at what we now know as the law of equal areas.
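
In modern notation (not Kepler's own, which predates the calculus), the law of equal areas says that the line joining the Sun and the planet sweeps out equal areas in equal times:

    \frac{dA}{dt} = \frac{1}{2}\, r^{2}\, \frac{d\theta}{dt} = \text{constant},

so the angular velocity d\theta/dt varies inversely with the square of the planet's distance r from the Sun. This is exactly the non-constant rate of change that cost Kepler years of calculation.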

Galileo's conjecture was merely that — a conjecture. So was Kepler's cosmology. But each conjecture increased the credibility of the other, and together, they changed the prevailing perceptions of the scientific community. Later, Newton showed that Kepler's three laws could all be derived from a single theory of motion and planetary motion. Newton solidified and unified the paradigm shift that Galileo and Kepler had initiated.

Coherence

One of the aims of science is to find models that will account for as many observations as possible within a coherent framework. Together, Galileo's rethinking of the nature of motion and Keplerian cosmology represented a coherent framework that was capable of rivaling the Aristotelian/Ptolemaic framework.

Once a paradigm shift has taken place, the textbooks are rewritten. Often the history of science too is rewritten, being presented as an inevitable process leading up to the current, established framework of thought. There is a prevalent belief that all hitherto-unexplained phenomena will in due course be accounted for in terms of this established framework. Kuhn states that scientists spend most (if not all) of their careers in a process of puzzle-solving. Their puzzle-solving is pursued with great tenacity, because the previous successes of the established paradigm tend to generate great confidence that the approach being taken guarantees that a solution to the puzzle exists, even though it may be very hard to find. Kuhn calls this process normal science.

As a paradigm is stretched to its limits, anomalies — failures of the current paradigm to take into account observed phenomena — accumulate. Their significance is judged by the practitioners of the discipline. Some anomalies may be dismissed as errors in observation, others as merely requiring small adjustments to the current paradigm that will be clarified in due course. Some anomalies resolve themselves spontaneously, having increased the available depth of insight along the way. But no matter how great or numerous the anomalies that persist, Kuhn observes, the practicing scientists will not lose faith in the established paradigm until a credible alternative is available; to lose faith in the solvability of the problems would in effect mean ceasing to be a scientist.

In any community of scientists, Kuhn states, there are some individuals who are bolder than most. These scientists, judging that a crisis exists, embark on what Kuhn calls revolutionary science, exploring alternatives to long-held, obvious-seeming assumptions. Occasionally this generates a rival to the established framework of thought. The new candidate paradigm will appear to be accompanied by numerous anomalies, partly because it is still so new and incomplete. The majority of the scientific community will oppose any conceptual change, and, Kuhn emphasizes, so they should. To fulfill its potential, a scientific community needs to contain both individuals who are bold and individuals who are conservative. There are many examples in the history of science in which confidence in the established frame of thought was eventually vindicated. It is almost impossible to predict whether the anomalies in a candidate for a new paradigm will eventually be resolved. Those scientists who possess an exceptional ability to recognize a theory's potential will be the first whose preference is likely to shift in favour of the challenging paradigm. There typically follows a period in which there are adherents of both paradigms. In time, if the challenging paradigm is solidified and unified, it will replace the old paradigm, and a paradigm shift will have occurred.

Phases

Kuhn explains the process of scientific change as the result of various phases of paradigm change.

  • Phase 1 – The pre-paradigm phase, which occurs only once, in which there is no consensus on any particular theory. This phase is characterized by several incompatible and incomplete theories. Consequently, most scientific inquiry takes the form of lengthy books, as there is no common body of facts that may be taken for granted. The phase ends when the actors in the pre-paradigm community eventually gravitate to one of these conceptual frameworks and ultimately to a widespread consensus on the appropriate choice of methods and terminology, and on the kinds of experiment that are likely to contribute to increased insights.
  • Phase 2 – Normal science begins, in which puzzles are solved within the context of the dominant paradigm. As long as there is consensus within the discipline, normal science continues. Over time, progress in normal science may reveal anomalies, facts that are difficult to explain within the context of the existing paradigm. While usually these anomalies are resolved, in some cases they may accumulate to the point where normal science becomes difficult and where weaknesses in the old paradigm are revealed.
  • Phase 3 – If the paradigm proves chronically unable to account for anomalies, the community enters a crisis period. Crises are often resolved within the context of normal science. However, after significant efforts of normal science within a paradigm fail, science may enter the next phase.
  • Phase 4 – Paradigm shift, or scientific revolution, is the phase in which the underlying assumptions of the field are reexamined and a new paradigm is established.
  • Phase 5 – Post-revolution: the new paradigm's dominance is established, and scientists return to normal science, solving puzzles within the new paradigm.

A science may go through these cycles repeatedly, though Kuhn notes that it is a good thing for science that such shifts do not occur often or easily.
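
Read as a cycle, the five phases form a simple state machine. The following Python sketch is purely illustrative; the phase names and transition rules merely paraphrase the list above and are not part of Kuhn's text:

    from enum import Enum

    class Phase(Enum):
        PRE_PARADIGM = 1     # no consensus; competing, incomplete theories (occurs only once)
        NORMAL_SCIENCE = 2   # puzzle-solving within the dominant paradigm
        CRISIS = 3           # accumulated anomalies resist resolution
        REVOLUTION = 4       # paradigm shift: underlying assumptions reexamined
        POST_REVOLUTION = 5  # new paradigm dominant; normal science resumes

    def next_phase(phase: Phase, crisis_resolved_within_paradigm: bool = False) -> Phase:
        """Advance one step through the Kuhnian cycle."""
        if phase is Phase.PRE_PARADIGM:
            return Phase.NORMAL_SCIENCE      # consensus forms around one framework
        if phase is Phase.NORMAL_SCIENCE:
            return Phase.CRISIS              # only if anomalies accumulate and persist
        if phase is Phase.CRISIS:
            # Kuhn notes crises are often resolved within normal science.
            if crisis_resolved_within_paradigm:
                return Phase.NORMAL_SCIENCE
            return Phase.REVOLUTION
        if phase is Phase.REVOLUTION:
            return Phase.POST_REVOLUTION
        return Phase.NORMAL_SCIENCE          # post-revolution: back to puzzle-solving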

Incommensurability

According to Kuhn, the scientific paradigms preceding and succeeding a paradigm shift are so different that their theories are incommensurable — the new paradigm cannot be proven or disproven by the rules of the old paradigm, and vice versa. (A later interpretation by Kuhn of 'commensurable' versus 'incommensurable' was as a distinction between languages, namely, that statements in commensurable languages were translatable fully from one to the other, while in incommensurable languages, strict translation is not possible.) The paradigm shift does not merely involve the revision or transformation of an individual theory, it changes the way terminology is defined, how the scientists in that field view their subject, and, perhaps most significantly, what questions are regarded as valid, and what rules are used to determine the truth of a particular theory. The new theories were not, as the scientists had previously thought, just extensions of old theories, but were instead completely new world views. Such incommensurability exists not just before and after a paradigm shift, but in the periods in between conflicting paradigms. It is simply not possible, according to Kuhn, to construct an impartial language that can be used to perform a neutral comparison between conflicting paradigms, because the very terms used are integral to the respective paradigms, and therefore have different connotations in each paradigm. The advocates of mutually exclusive paradigms are in a difficult position: "Though each may hope to convert the other to his way of seeing science and its problems, neither may hope to prove his case. The competition between paradigms is not the sort of battle that can be resolved by proofs. (p. 148)" Scientists subscribing to different paradigms end up talking past one another.

Kuhn states that the probabilistic tools used by verificationists are inherently inadequate for the task of deciding between conflicting theories, since they belong to the very paradigms they seek to compare. Similarly, observations that are intended to falsify a statement will fall under one of the paradigms they are supposed to help compare, and will therefore also be inadequate for the task. According to Kuhn, the concept of falsifiability is unhelpful for understanding why and how science has developed as it has. In the practice of science, scientists will only consider the possibility that a theory has been falsified if an alternative theory is available that they judge credible. If there is not, scientists will continue to adhere to the established conceptual framework. If a paradigm shift has occurred, the textbooks will be rewritten to state that the previous theory has been falsified.

Kuhn further developed his ideas regarding incommensurability in the 1980s and 1990s. In his unpublished manuscript The Plurality of Worlds, Kuhn introduces the theory of kind concepts: sets of interrelated concepts that are characteristic of a time period in a science and differ in structure from the modern analogous kind concepts. These different structures imply different “taxonomies” of things and processes, and this difference in taxonomies constitutes incommensurability. This theory is strongly naturalistic and draws on developmental psychology to “found a quasi-transcendental theory of experience and of reality.”

Exemplar

Kuhn introduced the concept of an exemplar in a postscript to the second edition of The Structure of Scientific Revolutions (1970). He noted that he was substituting the term 'exemplars' for 'paradigm', meaning the problems and solutions that students of a subject learn from the beginning of their education. For example, physicists might have as exemplars the inclined plane, Kepler's laws of planetary motion, or instruments like the calorimeter.

According to Kuhn, scientific practice alternates between periods of normal science and revolutionary science. During periods of normalcy, scientists tend to subscribe to a large body of interconnecting knowledge, methods, and assumptions which make up the reigning paradigm. Normal science presents a series of problems that are solved as scientists explore their field. The solutions to some of these problems become well known and are the exemplars of the field.

Those who study a scientific discipline are expected to know its exemplars. There is no fixed set of exemplars, but for a physicist today it would probably include the harmonic oscillator from mechanics and the hydrogen atom from quantum mechanics.

Kuhn on scientific progress

The first edition of The Structure of Scientific Revolutions ended with a chapter titled "Progress through Revolutions", in which Kuhn spelled out his views on the nature of scientific progress. Since he considered problem solving to be a central element of science, Kuhn saw that for a new candidate paradigm to be accepted by a scientific community, "First, the new candidate must seem to resolve some outstanding and generally recognized problem that can be met in no other way. Second, the new paradigm must promise to preserve a relatively large part of the concrete problem solving ability that has accrued to science through its predecessors." While the new paradigm is rarely as expansive as the old paradigm in its initial stages, it must nevertheless have significant promise for future problem-solving. As Kuhn put it, "though new paradigms seldom or never possess all the capabilities of their predecessors, they usually preserve a great deal of the most concrete parts of past achievement and they always permit additional concrete problem-solutions besides."

In the second edition, Kuhn added a postscript in which he elaborated his ideas on the nature of scientific progress. He described a thought experiment involving an observer who has the opportunity to inspect an assortment of theories, each corresponding to a single stage in a succession of theories. What if the observer is presented with these theories without any explicit indication of their chronological order? Kuhn anticipates that it will be possible to reconstruct their chronology on the basis of the theories' scope and content, because the more recent a theory is, the better it will be as an instrument for solving the kinds of puzzle that scientists aim to solve. Kuhn remarked: "That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress."

Influence and reception

The Structure of Scientific Revolutions has been credited with producing the kind of "paradigm shift" Kuhn discussed. Since the book's publication, over one million copies have been sold, including translations into sixteen different languages. In 1987, it was reported to be the twentieth-century book most frequently cited in the period 1976–1983 in the arts and the humanities.

Philosophy

The first extensive review of The Structure of Scientific Revolutions was authored by Dudley Shapere, a philosopher who interpreted Kuhn's work as a continuation of the anti-positivist sentiment of other philosophers of science, including Paul Feyerabend and Norwood Russell Hanson. Shapere noted the book's influence on the philosophical landscape of the time, calling it “a sustained attack on the prevailing image of scientific change as a linear process of ever-increasing knowledge.” According to the philosopher Michael Ruse, Kuhn discredited the ahistorical and prescriptive approach to the philosophy of science of Ernest Nagel's The Structure of Science (1961). Kuhn's book sparked a historicist "revolt against positivism" (the so-called "historical turn in philosophy of science" which looked to the history of science as a source of data for developing a philosophy of science), although this may not have been Kuhn's intention; in fact, he had already approached the prominent positivist Rudolf Carnap about having his work published in the International Encyclopedia of Unified Science. The philosopher Robert C. Solomon noted that Kuhn's views have often been suggested to have an affinity to those of Georg Wilhelm Friedrich Hegel. Kuhn's view of scientific knowledge, as expounded in The Structure of Scientific Revolutions, has been compared to the views of the philosopher Michel Foucault.

Sociology

The first field to claim descent from Kuhn's ideas was the sociology of scientific knowledge. Sociologists working within this new field, including Harry Collins and Steven Shapin, used Kuhn's emphasis on the role of non-evidential community factors in scientific development to argue against logical empiricism, which discouraged inquiry into the social aspects of scientific communities. These sociologists expanded upon Kuhn's ideas, arguing that scientific judgment is determined by social factors, such as professional interests and political ideologies.

Barry Barnes detailed the connection between the sociology of scientific knowledge and Kuhn in his book T. S. Kuhn and Social Science. In particular, Kuhn's ideas regarding science occurring within an established framework informed Barnes's own ideas regarding finitism, a theory wherein meaning is continuously changed (even during periods of normal science) by its usage within the social framework.

The Structure of Scientific Revolutions elicited a number of reactions from the broader sociological community. Following the book's publication, some sociologists expressed the belief that the field of sociology had not yet developed a unifying paradigm, and should therefore strive towards homogenization. Others argued that the field was in the midst of normal science, and speculated that a new revolution would soon emerge. Some sociologists, including John Urry, doubted that Kuhn's theory, which addressed the development of natural science, was necessarily relevant to sociological development.

Economics

Developments in the field of economics are often expressed and legitimized in Kuhnian terms. For instance, neoclassical economists have claimed “to be at the second stage [normal science], and to have been there for a very long time – since Adam Smith, according to some accounts (Hollander, 1987), or Jevons according to others (Hutchison, 1978).” In the 1970s, Post Keynesian economists denied the coherence of the neoclassical paradigm, claiming that their own paradigm would ultimately become dominant.

While perhaps less explicit, Kuhn's influence remains apparent in recent economics. For instance, the abstract of Olivier Blanchard's paper “The State of Macro” (2008) begins:

For a long while after the explosion of macroeconomics in the 1970s, the field looked like a battlefield. Over time however, largely because facts do not go away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism and herding.

Political science

In 1974, The Structure of Scientific Revolutions was ranked as the second most frequently used book in political science courses focused on scope and methods. In particular, Kuhn's theory has been used by political scientists to critique behavioralism, which claims that accurate political statements must be both testable and falsifiable. The book also proved popular with political scientists embroiled in debates about whether a set of formulations put forth by a political scientist constituted a theory, or something else.

The changes that occur in politics, society and business are often expressed in Kuhnian terms, however poor their parallel with the practice of science may seem to scientists and historians of science. The terms "paradigm" and "paradigm shift" have become such notorious clichés and buzzwords that they are sometimes viewed as effectively devoid of content.

Criticisms

Front cover of Imre Lakatos and Alan Musgrave, ed., Criticism and the Growth of Knowledge

The Structure of Scientific Revolutions was soon criticized by Kuhn's colleagues in the history and philosophy of science. In 1965, a special symposium on the book was held at an International Colloquium on the Philosophy of Science that took place at Bedford College, London, and was chaired by Karl Popper. The symposium's presentations, together with other mostly critical essays, were eventually published in an influential volume, Criticism and the Growth of Knowledge. Kuhn expressed the opinion that his critics' readings of his book were so inconsistent with his own understanding of it that he was "...tempted to posit the existence of two Thomas Kuhns," one the author of his book, the other the individual who had been criticized in the symposium by "Professors Popper, Feyerabend, Lakatos, Toulmin and Watkins."

A number of the included essays question the existence of normal science. In his essay, Feyerabend suggests that Kuhn's conception of normal science fits organized crime as well as it does science. Popper expresses distaste for the entire premise of Kuhn's book, writing, "the idea of turning for enlightenment concerning the aims of science, and its possible progress, to sociology or to psychology (or ... to the history of science) is surprising and disappointing."

Concept of paradigm

In his 1972 work, Human Understanding, Stephen Toulmin argued that a more realistic picture of science than that presented in The Structure of Scientific Revolutions would admit the fact that revisions in science take place much more frequently, and are much less dramatic than can be explained by the model of revolution/normal science. In Toulmin's view, such revisions occur quite often during periods of what Kuhn would call "normal science." For Kuhn to explain such revisions in terms of the non-paradigmatic puzzle solutions of normal science, he would need to delineate what is perhaps an implausibly sharp distinction between paradigmatic and non-paradigmatic science.

Incommensurability of paradigms

In a series of texts published in the early 1970s, Carl R. Kordig asserted a position somewhere between that of Kuhn and the older philosophy of science. His criticism of the Kuhnian position was that the incommensurability thesis was too radical, and that this made it impossible to explain the confrontation of scientific theories that actually occurs. According to Kordig, it is in fact possible to admit the existence of revolutions and paradigm shifts in science while still recognizing that theories belonging to different paradigms can be compared and confronted on the plane of observation. Those who accept the incommensurability thesis do not do so because they admit the discontinuity of paradigms, but because they attribute a radical change in meanings to such shifts.

Kordig maintains that there is a common observational plane. For example, when Kepler and Tycho Brahe are trying to explain the relative variation of the distance of the sun from the horizon at sunrise, both see the same thing (the same configuration is focused on the retina of each individual). This is just one example of the fact that "rival scientific theories share some observations, and therefore some meanings." Kordig suggests that with this approach, he is not reintroducing the distinction between observations and theory in which the former is assigned a privileged and neutral status, but that it is possible to affirm more simply the fact that, even if no sharp distinction exists between theory and observations, this does not imply that there are no comprehensible differences at the two extremes of this polarity.

At a secondary level, for Kordig there is a common plane of inter-paradigmatic standards or shared norms that permit the effective confrontation of rival theories.

In 1973, Hartry Field published an article that also sharply criticized Kuhn's idea of incommensurability. In particular, he took issue with this passage from Kuhn:

Newtonian mass is immutably conserved; that of Einstein is convertible into energy. Only at very low relative velocities can the two masses be measured in the same way, and even then they must not be conceived as if they were the same thing. (Kuhn 1970).

Field takes this idea of incommensurability between the same terms in different theories one step further. Instead of attempting to identify a persistence of the reference of terms in different theories, Field's analysis emphasizes the indeterminacy of reference within individual theories. Field takes the example of the term "mass", and asks what exactly "mass" means in modern post-relativistic physics. He finds that there are at least two different definitions:

  1. Relativistic mass: the mass of a particle is equal to the total energy of the particle divided by the speed of light squared. Since the total energy of a particle in relation to one system of reference differs from the total energy in relation to other systems of reference, while the speed of light remains constant in all systems, it follows that the mass of a particle has different values in different systems of reference.
  2. "Real" mass: the mass of a particle is equal to the non-kinetic energy of a particle divided by the speed of light squared. Since non-kinetic energy is the same in all systems of reference, and the same is true of light, it follows that the mass of a particle has the same value in all systems of reference.

Projecting this distinction backwards in time onto Newtonian dynamics, we can formulate the following two hypotheses:

  • HR: the term "mass" in Newtonian theory denotes relativistic mass.
  • Hp: the term "mass" in Newtonian theory denotes "real" mass.

According to Field, it is impossible to decide which of these two affirmations is true. Prior to the theory of relativity, the term "mass" was referentially indeterminate. But this does not mean that the term "mass" did not have a different meaning than it now has. The problem is not one of meaning but of reference. The reference of such terms as mass is only partially determined: we don't really know how Newton intended his use of this term to be applied. As a consequence, neither of the two terms fully denotes (refers). It follows that it is improper to maintain that a term has changed its reference during a scientific revolution; it is more appropriate to describe terms such as "mass" as "having undergone a denotational refinement."

In 1974, Donald Davidson objected that the concept of incommensurable scientific paradigms competing with each other is logically inconsistent. "In his article Davidson goes well beyond the semantic version of the incommensurability thesis: to make sense of the idea of a language independent of translation requires a distinction between conceptual schemes and the content organized by such schemes. But, Davidson argues, no coherent sense can be made of the idea of a conceptual scheme, and therefore no sense may be attached to the idea of an untranslatable language."

Incommensurability and perception

The close connection between the interpretationalist hypothesis and a holistic conception of beliefs is at the root of the notion of the dependence of perception on theory, a central concept in The Structure of Scientific Revolutions. Kuhn maintained that the perception of the world depends on how the percipient conceives the world: two scientists who witness the same phenomenon and are steeped in two radically different theories will see two different things. According to this view, our interpretation of the world determines what we see.

Jerry Fodor attempts to establish that this theoretical paradigm is fallacious and misleading by demonstrating the impenetrability of perception to the background knowledge of subjects. The strongest case can be based on evidence from experimental cognitive psychology, namely the persistence of perceptual illusions. Knowing that the lines in the Müller-Lyer illusion are equal does not prevent one from continuing to see one line as being longer than the other. This impenetrability of the information elaborated by the mental modules limits the scope of interpretationalism.

In epistemology, for example, the criticism of what Fodor calls the interpretationalist hypothesis accounts for the common-sense intuition (on which naïve physics is based) of the independence of reality from the conceptual categories of the experimenter. If the processes of elaboration of the mental modules are in fact independent of the background theories, then it is possible to maintain the realist view that two scientists who embrace two radically diverse theories see the world exactly in the same manner even if they interpret it differently. The point is that it is necessary to distinguish between observations and the perceptual fixation of beliefs. While it is beyond doubt that the second process involves the holistic relationship between beliefs, the first is largely independent of the background beliefs of individuals.

Other critics, such as Israel Scheffler, Hilary Putnam and Saul Kripke, have focused on the Fregean distinction between sense and reference in order to defend scientific realism. Scheffler contends that Kuhn confuses the meanings of terms such as "mass" with their referents. While their meanings may very well differ, their referents (the objects or entities to which they correspond in the external world) remain fixed.

Subsequent commentary by Kuhn

In 1995 Kuhn argued that the Darwinian metaphor in the book should have been taken more seriously than it had been.

We are the 99%

From Wikipedia, the free encyclopedia

"We are the 99%" poster referencing the Polish Solidarity movement
 
Occupy Wall Street Poster
 
Protesters wearing "99%" T-shirts at Occupy Wall Street on November 17, 2011, near New York City Hall.

We are the 99% is a political slogan widely used and coined during the 2011 Occupy movement. It derives from Gore Vidal's famous and original phrase "the one percent", meaning the nation's wealthiest 1%, with "the 99%" referring, by contrast, to everyone else.

The phrase directly refers to the income and wealth inequality in the United States with a concentration of wealth among the top-earning 1%. It reflects an opinion that "the 99%" are paying the price for the mistakes of a tiny minority within the upper class.

According to the Economic Policy Institute, as of 2018 all households with incomes below $737,697 belonged to the lower 99% of wage earners. However, "the 1%" is not necessarily a reference to the top 1% of wage earners, but to the top 1% of individuals by net worth, for whom earned wages are only one of many factors contributing to wealth.
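
As a purely illustrative aside, a threshold like the EPI's is simply the 99th percentile of a distribution of household incomes. The following Python sketch shows the computation on synthetic data; the numbers it prints are stand-ins, not real estimates:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for household incomes: a heavy-tailed lognormal sample.
    incomes = rng.lognormal(mean=11.0, sigma=0.9, size=100_000)

    cutoff = np.percentile(incomes, 99)          # entry point to the "top 1%"
    top_share = incomes[incomes >= cutoff].sum() / incomes.sum()
    print(f"Top-1% cutoff: ${cutoff:,.0f}")
    print(f"Share of total income going to the top 1%: {top_share:.1%}")

Net-worth thresholds are computed the same way, but over wealth rather than wage data, which is why the two definitions of "the 1%" pick out different groups.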

Origin

Mainstream accounts

The slogan "We are the 99%" became a unifying slogan of the Occupy movement after the Tumblr blog "wearethe99percent.tumblr.com" was launched in late August 2011 by a 28-year-old New York activist going by the name "Chris", together with Priscilla Grim.

Chris credited an August 2011 flyer for the NYC assembly "We The 99%" for the term. A 2011 Rolling Stone article attributed to anthropologist David Graeber the suggestion that the Occupy movement represented the 99%. Graeber was sometimes credited with the slogan "We are the 99%" but attributed the full version to others.

Joseph Stiglitz
 
Graph by sociologist Lane Kenworthy showing changes in real US incomes in top 1%, middle 60%, and bottom 20% from 1979 through 2007, tracking household income but not individual incomes.

Mainstream media sources trace the origin of the phrase to economist Joseph Stiglitz's May 2011 article "Of the 1%, by the 1%, for the 1%" in Vanity Fair, in which he criticized the economic inequality present in the United States. In the article Stiglitz spoke of the damaging impact of economic inequality involving 1% of the U.S. population owning a large portion of economic wealth in the country, while 99% of the population hold much less economic wealth than the richest 1%:

[I]n our democracy, 1% of the people take nearly a quarter of the nation's income … In terms of wealth rather than income, the top 1% control 40% … [as a result] the top 1% have the best houses, the best educations, the best doctors, and the best lifestyles, but there is one thing that money doesn't seem to have bought: an understanding that their fate is bound up with how the other 99% live. Throughout history, this is something that the top 1% eventually do learn. Too late.

Earlier uses of the term "the one percent" to refer to the wealthiest people in society include the 2006 documentary The One Percent, about the growing wealth gap between the wealthy elite and the overall population, and a 2001 opinion column in the MIT student newspaper The Tech.

Other published accounts

More than one publication dates the concept back much further. For instance, the one percent and the 99 percent were explained in a February 1984 article titled "The USA: Who Owns It? Who Runs It?" in Black Liberation Month News, published in Chicago and available online as of 2020.

Even further back, historian Howard Zinn used this concept in "The Coming Revolt of the Guards", the final chapter of the first edition of his book A People's History of the United States, published in 1980: "I am taking the liberty of uniting those 99 percent as 'the people'. I have been writing a history that attempts to represent their submerged, deflected, common interest. To emphasize the commonality of the 99 percent, to declare deep enmity of interest with the 1 percent, is to do exactly what the governments of the United States, and the wealthy elite allied to them, from the Founding Fathers to now, have tried their best to prevent."

The 1960 novel Too Many Clients by Rex Stout, part of the Nero Wolfe mystery series, refers to the top two percent: "I know a chairman of the board of a billion-dollar corporation, one of the 2 per cent, [sic] who never gets his shoes shined and shaves three times a week."

The first mention of the concept may very well be found in a poster (circa 1935) advertising the newspaper created by the populist Louisiana politician Huey Long called The American Progress. The second paragraph mentioned the one percent and the ninety-nine percent: "With 1% of our people owning nearly twice as much as all the other 99%, how is a country ever to have permanent progress unless there is a correction of this evil?" 

Variations on the slogan

  • "We are the 1 percent; we stand with the 99 percent": by members of the "one percent" who wish to express their support for higher taxes, such as nonprofit organizations Resource Generation and Wealth for the Common Good.
  • "We are the 99.9%": by Nobel Prize–winning economist Paul Krugman in an op-ed in The New York Times arguing that the original slogan sets the bar too low when considering recent changes in distribution of income. In particular, Krugman cited a 2005 Congressional Budget Office report indicating that between 1979 and 2005 the inflation-adjusted income for the middle of the income distribution rose 21%, while for the top 0.1% it rose by 400%.
  • "We are the 53%": by conservative RedState.com blogger Erick Erickson along with Josh Treviño, communications director for the Texas Public Policy Foundation, and filmmaker Mike Wilson launched in October 2011, in response to the 99% slogan. Erikson referred to the 53% of American workers who pay federal income taxes, and criticizing the 47% of workers who do not pay federal income tax for what Erikson describes as being "subsidized" by those who pay taxes. The Tax Policy Center at the Urban Institute and Brookings Institution both reported that roughly half of the workers who do not pay Federal income tax earn below the tax threshold while the other half pay no income tax due to "provisions that benefit senior citizens and low-income working families with children."
  • "We are the 48%": by those who supported the United Kingdom remaining in the European Union after the 2016 referendum on membership, highlighting the relatively even split between supporters of remaining in and withdrawing from the EU.
  • "We are the 87%" (German language: "Wir Sind 87 Prozent") : by the German people who did not vote for the far-right Alternative for Germany party in the 2017 German federal election.

Economic context

Occupy protesters in Oakland holding "We are the 99%"-themed signs

"We are the 99%" is a political slogan and an implicit economic claim of "Occupy" protesters. It refers to the increased concentration of income and wealth since the 1970s among the top 1% of income earners in the United States.

It also reflects an opinion that the "99%" are paying the price for the mistakes of a tiny minority within the upper class.

Studies by the Congressional Budget Office (CBO), the US Department of Commerce, and the Internal Revenue Service show that income inequality has grown significantly since the late 1970s, after several decades of stability. Between 1979 and 2007, the top-earning 1 percent of Americans saw their after-tax-and-benefit incomes grow by an average of 275%, compared to around 40–60% for the lower 99 percent. Since 1979 the average pre-tax income for the bottom 90% of households has decreased by $900, while that of the top 1% increased by over $700,000. This imbalance was further exacerbated by changes making federal income taxes less progressive: from 1992 to 2007 the top 400 income earners in the U.S. saw their income increase 392% and their average tax rate reduced by 37%.

In 2009, the average income of the top 1% was $960,000, with a minimum income of $343,927. In 2007 the top 1% had a larger share of total income than at any time since 1928. This is in stark contrast with surveys of US populations that indicate an "ideal" distribution that is much more equal, and a widespread ignorance of the true income inequality and wealth inequality.

In 2007, the richest 1% of the American population owned 34.6% of the country's total wealth, and the next 19% owned 50.5%. Thus, the top 20% of Americans owned 85% of the country's wealth and the bottom 80% of the population owned 15% in 2007. Financial inequality, measured as total net worth minus the value of one's home, was greater than inequality in total wealth: per Forbes in 2011, the top 1% of the population owned 42.7%, the next 19% of Americans owned 50.3%, and the bottom 80% owned 7%.

After the Great Recession started in 2007, the share of total wealth owned by the top 1% of the population grew from 34.6% to 37.1%, and that owned by the top 20% of Americans grew from 85% to 87.7%. Median household wealth dropped by 36.1%, compared to a drop of only 11.1% for the top 1%, further widening the gap. During the economic expansion between 2002 and 2007, the income of the top 1% grew 10 times faster than the income of the bottom 90%, and 66% of total income gains went to the 1%.

According to the Economic Policy Institute as of 2018, all households with incomes less than $737,697 belonged to the lower 99% of wage earners.

Data on the minimum yearly income needed to be considered among the 1% vary by source, ranging from about $500,000 to $1.3 million. This is somewhat below the average compensation of CEOs, whose salaries average $3.9 million according to the AFL-CIO; CEO salaries average $10.6 million for companies in the S&P 500 and $19.8 million for companies in the Dow Jones Industrial Average.

A chart showing the disparity in income distribution in the United States. Wealth inequality and income inequality have been central concerns among OWS protesters. CBO data shows that in 1980, the top 1% earned 9.1% of all income, while in 2006 they earned 18.8% of all income.

Following the recession of the late 2000s, the economy in the US continued to experience a jobless recovery. New York Times columnist Anne-Marie Slaughter described pictures on the "We are the 99" website as "page after page of testimonials from members of the middle class who took out loans to pay for education, took out mortgages to buy their houses and a piece of the American dream, worked hard at the jobs they could find, and ended up unemployed or radically underemployed and on the precipice of financial and social ruin." With market uncertainty due to fears of a double-dip recession and the downgrade of the US credit rating in the summer of 2011, the topics of how much the rich pay in taxes and how to solve the nation's economic crisis dominated media commentary. When Congress returned from break, proposed policy solutions came from both major parties as the 2012 Republican presidential debates occurred almost simultaneously with President Obama's September 9 proposal of the American Jobs Act. On September 17, 2011 President Obama announced an economic policy proposal for taxing millionaires known as the Buffett Rule. This immediately led to public statements by House Speaker John Boehner, President Obama, and Republican Mitt Romney over whether the Democrats were fomenting "class warfare".

In November 2011, economist Paul Krugman wrote that the "We are the 99%" slogan "correctly defines the issue as being the middle class versus the elite and also gets past the common but wrong notion that rising inequality is mainly about the well educated doing better than the less educated." He questioned whether the slogan ought to refer to the 99.9 percent, as a large fraction of the top 1 percent's gains have actually gone to an even smaller group, the top 0.1 percent, the richest one-thousandth of the population. Krugman argued against the idea that the very rich make a special contribution to the economy as "job creators", as few were new-economy innovators like Steve Jobs. He cited a recent analysis finding that 43% of the top 0.1 percent were executives at non-financial companies, 18% were in finance, and another 12% were lawyers or in real estate. Commenting on the ongoing economic crisis, he wrote, "[the] seemingly high returns before the crisis simply reflected increased risk-taking—risk that was mostly borne not by the wheeler-dealers themselves, but either by naïve investors or by taxpayers, who ended up holding the bag when it all went wrong".

Empirical research has generally supported the accuracy of this slogan. Per an Oxfam report issued just ahead of the 2015 World Economic Forum: "The combined wealth of the world's richest 1 percent will overtake that of everyone else by next year [2016] given the current trend of rising inequality".

Criticism

We are the 99% protester at Occupy London

CNBC senior markets writer Jeff Cox reacted negatively to the protest movement, calling the 1% "the most vilified members of American society" and arguing that protesters fail to realize the group includes not only corporate CEOs (31% of the top-earning one percent), bankers and stock traders (13.9%), but also doctors (1.85%), real estate professionals (3.2%), entertainers in arts, media and sports (1.6%), professors and scientists (1.8%), lawyers (1.22%), farmers and ranchers (0.5%), and pilots (0.2%). Cox noted that one-percenters pay a disproportionate amount of their incomes in taxes, which later research has confirmed. He stated that the phenomenon of wealth concentration among a small segment of the population is a century old, and argued that there is a direct correlation between wealth concentration and the health of the stock market, stating that 36.7% of the United States' wealth was controlled by the 1% in 1922, 44.2% when the stock market crashed in 1929, 19.9% in 1976, and that the share has increased since then. Cox wrote that wealth concentration intensified at the same time that the US changed from a manufacturing leader to a financial services leader. Cox took issue with protesters' focus on income and wealth, and with their embrace of rich allies such as actress Susan Sarandon and Russell Simmons, who are themselves in the 1%. Josh Barro of National Review offered similar arguments, asserting that the 1% includes those with incomes beginning at $593,000, which would exclude most Wall Street bankers.

Economist Thomas Sowell noted in November 2011 that IRS data shows the majority of those in the top 1% of income are there for a short period, and that age was more associated with wealth concentration than was income. Sowell further argued that analyzing data about abstract categories (like income brackets) should not be confused with analyzing data about individuals (who can move in and out of various abstract categories, like income brackets, throughout their lives):

"It is easier and cheaper to collect statistics about income brackets than it is to follow actual flesh-and-blood people as they move massively from one income bracket to another over the years.
More important, statistical studies that follow particular individuals over the years often reach diametrically opposite conclusions from those reached by statistical studies that follow income brackets over the years."

Economics professor Sean Mulholland argued in 2012 that the idea that the rich have become richer while the poor have become poorer is false: data showing that the richest income earners grew significantly richer over a period in which poorer classes maintained fairly constant incomes do not account for the upward and downward economic mobility of particular households over recent decades.

In the US, Republicans have generally been critical of the movement, accusing protesters and their supporters of class warfare. Newt Gingrich called the "concept of the 99 and the one" both divisive and "un-American". Democrats have offered "cautious support", using the "99%" slogan to argue for the passage of President Obama's jobs act, Internet access rules, voter identification laws, mine safety, and other issues. Both parties agree that the movement has changed public debate. In December 2011, the New York Times reported that, "Whatever the long-term effects of the Occupy movement, protesters succeeded in implanting 'we are the 99 percent' ... into the cultural and political lexicon."

New Continental Congress

After the activists' camps were uprooted, the Occupy movement came back online, proposing a new United States Declaration of Independence from corporations, along with a new Continental Congress in Philadelphia.

Critical juncture theory

From Wikipedia, the free encyclopedia

Critical juncture theory focuses on critical junctures, i.e., large, rapid, discontinuous changes, and the long-term causal effect or historical legacy of these changes. Critical junctures are turning points that alter the course of evolution of some entity (e.g., a species, a society). Critical juncture theory seeks to explain both (1) the historical origin and maintenance of social order, and (2) the occurrence of social change through sudden, big leaps.

Critical juncture theory is not a general theory of social order and change. It emphasizes one kind of cause (a big, discontinuous change) and one kind of effect (a persistent effect). Yet it challenges some common assumptions in many approaches and theories in the social sciences. The idea that some changes are discontinuous sets it up as an alternative to (1) "continuist" or "synechist" theories that assume that change is always gradual or that natura non facit saltus (Latin for "nature does not make jumps"). The idea that such discontinuous changes have a long-term impact stands in counterposition to (2) "presentist" explanations that only consider the possible causal effect of temporally proximate factors.

Theorizing about critical junctures began in the social sciences in the 1960s. Since then, it has been central to a body of research in the social sciences that is historically informed. Research on critical junctures in the social sciences is part of the broader tradition of comparative historical analysis and historical institutionalism. It is a tradition that spans political science, sociology and economics. Within economics, it shares an interest in historically oriented research with the new economic history or cliometrics. Research on critical junctures is also part of the broader "historical turn" in the social sciences.

Origins in the 1960s and early 1970s

The idea of episodes of discontinuous change, followed by periods of relative stability, was introduced in various fields of knowledge in the 1960s and early 1970s.

Kuhn's paradigm shifts

Philosopher of science Thomas Kuhn's landmark work The Structure of Scientific Revolutions (1962) introduced and popularized the idea of discontinuous change and its long-term effects. Kuhn argued that progress in knowledge occurs at times through sudden jumps, which he called paradigm shifts. After a paradigm shift, scholars do normal science within the new paradigm, which endures until a new revolution comes about.

Kuhn challenged the conventional view in the philosophy of science at the time that knowledge growth could be understood entirely as a process of gradual, cumulative growth.

Gellner's neo-episodic model of change

Anthropologist Ernest Gellner proposed a neo-episodic model of change in 1964 that highlights the "step-like nature of history" and the "remarkable discontinuity" between different historical periods. Gellner contrasts the neo-episodic model of change to an evolutionary model that portrays "the pattern of Western history" as a process of "continuous and sustained and mainly endogenous upward growth."

Sociologist Michael Mann adapted Gellner's idea of "'episodes' of major structural transformation" and called such episodes "power jumps."

Lipset and Rokkan's critical junctures

Sociologist Seymour Lipset and political scientist Stein Rokkan introduced the idea of critical junctures and their long-term impact in the social sciences in 1967. The ideas presented in the coauthored 1967 work were elaborated by Rokkan in Citizens, Elections, and Parties (1970).

Gellner had introduced a similar idea in the social sciences. However, Lipset and Rokkan offered a more elaborate model and an extensive application of their model to Europe (see below). Although Gellner influenced some sociologists, the impact of Lipset and Rokkan on the social sciences was greater.

Gould's model of sudden, punctuated change contrasts with the view that change is always gradual.

Gould's punctuated equilibrium model

Kuhn's ideas influenced paleontologist Stephen Jay Gould, who introduced the idea of punctuated equilibrium in the field of evolutionary biology in 1972. Gould's initial work on punctuated equilibrium was coauthored with Niles Eldredge.

Gould's model of punctuated equilibrium drew attention to episodic bursts of evolutionary change followed by periods of morphological stability. He challenged the conventional model of gradual, continuous change, known as phyletic gradualism.

The critical juncture theoretical framework in the social sciences

Since its launch in 1967, research on critical junctures has focused in part on developing a theoretical framework, which has evolved over time.

In studies of society, some scholars use the term "punctuated equilibrium" model, and others the term "neo-episodic" model. Studies of knowledge continue to use the term "paradigm shift". However, these terms can be treated as synonyms for critical juncture.

Developments in the late 1960s–early 1970s

Key ideas in critical junctures research were initially introduced in the 1960s and early 1970s by Seymour Lipset, Stein Rokkan, and Arthur Stinchcombe.

Critical junctures and legacies

Stein Rokkan, coauthor of "Cleavage Structures, Party Systems, and Voter Alignments."

Seymour Lipset and Stein Rokkan (1967) and Rokkan (1970) introduced the idea that big discontinuous changes, such as the Reformation, the building of nations, and the Industrial Revolution, reflected conflicts organized around social cleavages, such as the center–periphery, state–church, land–industry, and owner–worker cleavages. In turn, these big discontinuous changes could be seen as critical junctures because they generated social outcomes that subsequently remained "frozen" for extensive periods of time.

In more general terms, Lipset and Rokkan's model has three components:

  •  (1) Cleavage. Strong and enduring conflicts that polarize a political system. Four such cleavages were identified:
    • The center–periphery cleavage, a conflict between a central nation-building culture and ethnically and linguistically distinct subject populations in the peripheries.
    • The state–church cleavage, a conflict between the aspirations of a nation-state and the church.
    • The land–industry cleavage, a conflict between landed interests and commercial/industrial entrepreneurs.
    • The worker–employer cleavage, a conflict between owners and workers.
  •  (2) Critical juncture. Radical changes regarding these cleavages happen at certain moments.
  •  (3) Legacy. Once these changes occur, their effect endures for some time afterwards.

Rokkan (1970) added two points to these ideas. Critical junctures could set countries on divergent or convergent paths. Critical junctures could also be "sequential," such that a new critical juncture does not totally erase the legacy of a previous critical juncture but rather modifies that previous legacy.

The reproduction of legacies through self-replicating causal loops

Arthur Stinchcombe (1968) filled a key gap in Lipset and Rokkan's model. Lipset and Rokkan argued that critical junctures produced legacies, but did not explain how the effect of a critical juncture could endure over a long period.

Stinchcombe elaborated the idea of historical causes (such as critical junctures) as a distinct kind of cause that generates a "self-replicating causal loop." Stinchcombe explained that the distinctive feature of such a loop is that "an effect created by causes at some previous period becomes a cause of that same effect in succeeding periods." This loop was represented graphically by Stinchcombe as follows:

   X_t1 ––> Y_t2 ––> D_t3 ––> Y_t4 ––> D_t5 ––> Y_t6

Stinchcombe argued that the cause (X) that explains the initial adoption of some social feature (Y) was not the same one that explains the persistence of this feature. Persistence is explained by the repeated effect of Y on D and of D on Y.
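
To make the loop concrete, the following is a minimal sketch (an illustration added here, not part of Stinchcombe's account) that treats X, Y, and D as binary variables. The historical cause X fires only in the first period, yet the feature Y persists indefinitely because Y and D keep reproducing each other:

   # Toy simulation of a self-replicating causal loop (illustrative only).
   # X is the historical cause; it acts once and then disappears.
   # Y is the social feature; D is the mechanism that reproduces it.

   def simulate(periods=6):
       history = []
       y = d = False
       for t in range(1, periods + 1):
           x = (t == 1)   # the historical cause fires only at t = 1
           y = x or d     # Y is produced by X at first, by D thereafter
           d = y          # D is in turn generated by Y, closing the loop
           history.append((t, y, d))
       return history

   for t, y, d in simulate():
       print(f"t={t}: Y={'present' if y else 'absent'}")

Running the sketch shows Y present in every period even though X acts only once, which is exactly the persistence that Stinchcombe's diagram describes.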

Developments in the early 1980s–early 1990s

Additional contributions were made in the 1980s and early 1990s by various political scientists and economists.

Douglass North, coauthor of Institutions, Institutional Change and Economic Performance.

Punctuated equilibrium, path dependence, and institutions

Paul A. David and W. Brian Arthur, two economists, introduced and elaborated the concept of path dependence, the idea that past events and decisions affect present options and that some outcomes can persist due to the operation of a self-reinforcing feedback loop. This idea of a self-reinforcing feedback loop resembles that of a self-replicating causal loop introduced earlier by Stinchcombe. However, it resonated with economists and led to a growing recognition in economics that "history matters."
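
A standard toy illustration of this point (added here, not drawn from the article, though Arthur's own work analyzed urn schemes of this kind) is the Pólya urn: every adoption of a technology makes its future adoption more likely, so early chance draws become locked in, and different runs of the same process converge to very different long-run outcomes:

   import random

   # Polya urn sketch of path dependence (illustrative only).
   # Each adoption of technology A or B adds a ball of that colour,
   # reinforcing the probability of the same choice in the future.

   def polya_urn(steps=10_000, seed=None):
       rng = random.Random(seed)
       a, b = 1, 1                  # one ball of each colour to start
       for _ in range(steps):
           if rng.random() < a / (a + b):
               a += 1               # an A-adoption reinforces A
           else:
               b += 1               # a B-adoption reinforces B
       return a / (a + b)           # long-run market share of A

   for seed in range(5):
       print(f"seed {seed}: share of A = {polya_urn(seed=seed):.3f}")

Because the reinforcement is self-referential, "history matters": which share the process locks into depends on the early, essentially accidental draws.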

The work by Stephen Krasner in political science incorporated the idea of punctuated equilibrium into the social sciences. Krasner also drew on the work by Arthur and connected the idea of path dependence to the study of political institutions.

Douglass North, an economist and Nobel laureate, applied the idea of path dependence to institutions, which he defined as "the rules of the game in a society," and drew attention to the persistence of institutions.

David Collier, coauthor of Shaping the Political Arena.

A synthesis

Political scientists Ruth Berins Collier and David Collier, in Shaping the Political Arena (1991), provided a synthesis of many ideas introduced from the 1960s to 1990, in the form of the following "five-step template":

   Antecedent Conditions ––> Cleavage or Shock ––> Critical Juncture 
    ––> Aftermath ––> Legacy

These key concepts have been defined as follows:

  • (1) "Antecedent conditions are diverse socioeconomic and political conditions prior to the onset of the critical juncture that constitute the baseline for subsequent change."
  • (2) "Cleavages, shocks, or crises are triggers of critical junctures."
  • (3) "Critical junctures are major episodes of institutional change or innovation."
  • (4) "The aftermath is the period during which the legacy takes shape."
  • (5) "The legacy is an enduring, self-reinforcing institutional inheritance of the critical juncture that stays in place and is stable for a considerable period."

Debates in the 2000s–2010s

Following a period of consolidation of the critical junctures framework, few new developments occurred in the 1990s. Since around 2000, however, several new ideas have been proposed, and many aspects of the critical junctures framework are now the subject of debate.

Critical junctures and incremental change

An important new issue in the study of change is the relative role of critical junctures and incremental change. On the one hand, the two kinds of change are sometimes starkly counterposed. Kathleen Thelen emphasizes more gradual, cumulative patterns of institutional evolution and holds that "the conceptual apparatus of path dependence may not always offer a realistic image of development." On the other hand, path dependence, as conceptualized by Paul David, is not deterministic and leaves room for policy shifts and institutional innovation.

Critical junctures and contingency

Einar Berntzen notes another debate: "Some scholars emphasize the historical contingency of the choices made by political actors during the critical juncture." For example, Michael Bernhard writes that critical junctures "are periods in which the constraints of structure have weakened and political actors have enhanced autonomy to restructure, overturn, and replace critical systems or sub-systems."

However, Berntzen holds that "other scholars have criticized the focus on agency and contingency as key causal factors of institutional path selection during critical junctures" and "argue that a focus on antecedent conditions of critical junctures is analytically more useful." For example, Dan Slater and Erica Simmons place a heavy emphasis on antecedent conditions.

Legacies and path dependence

The use of the concept of path dependence in the study of critical junctures has been a source of some debate. On the one hand, James Mahoney argues that "path dependence characterizes specifically those historical sequences in which contingent events set into motion institutional patterns or event chains that have deterministic properties" and that there are two types of path dependence: "self-reinforcing sequences" and "reactive sequences." On the other hand, Kathleen Thelen and others criticize the idea of path dependence determinism, and Jörg Sydow, Georg Schreyögg, and Jochen Koch question the idea of reactive sequences as a kind of path dependence.

Institutional and behavioral path dependence

The study of critical junctures has commonly been seen as involving a change in institutions. However, many works extend the scope of critical junctures research by focusing on changes in culture. Avidit Acharya, Matthew Blackwell, and Maya Sen state that the persistence of a legacy can be "reinforced both by formal institutions, such as Jim Crow laws (a process known as institutional path dependence), and also by informal institutions, such as family socialization and community norms (a process we call behavioral path dependence)."

Substantive applications in the social sciences

Topics and processes

A critical juncture approach has been used in the study of many fields of research: state formation, political regimes, regime change and democracy, party systems, public policy, government performance, and economic development.

In addition, many processes and events have been identified as critical junctures.

The domestication of animals is commonly treated as a turning point in world history.

Pre-1760 power jumps

Michael Mann, in The Sources of Social Power (1986), relies on Gellner's neo-episodic model of change and identifies a series of "power jumps" in world history prior to 1760. The idea of a power jump is similar to that of a critical juncture.

The end of the Cold War in 1989 is one among many turning points studied as a critical juncture.

Modern era critical junctures

A number of processes in the modern era are commonly seen as critical junctures in the social sciences.

Considerable discussion has focused on the possibility that the COVID-19 pandemic will be a critical juncture.

Examples of research

Barrington Moore Jr.'s Social Origins of Dictatorship and Democracy: Lord and Peasant in the Making of the Modern World (1966) argues that revolutions (the critical junctures) occurred in different ways (bourgeois revolutions, revolutions from above, and revolutions from below) and this difference led to contrasting political regimes in the long term (the legacy)—democracy, fascism, and communism, respectively. In contrast to the unilinear view of evolution common in the 1960s, Moore showed that countries followed multiple paths to modernity.

Collier and Collier's Shaping the Political Arena: Critical Junctures, the Labor Movement, and Regime Dynamics in Latin America (1991) compares "eight Latin American countries to argue that labor-incorporation periods were critical junctures that set the countries on distinct paths of development that had major consequences for the crystallization of certain parties and party systems in the electoral arena. The way in which state actors incorporated labor movements was conditioned by the political strength of the oligarchy, the antecedent condition in their analysis. Different policies towards labor led to four specific types of labor incorporation: state incorporation (Brazil and Chile), radical populism (Mexico and Venezuela), labor populism (Peru and Argentina), and electoral mobilization by a traditional party (Uruguay and Colombia). These different patterns triggered contrasting reactions and counter reactions in the aftermath of labor incorporation. Eventually, through a complex set of intermediate steps, relatively enduring party system regimes were established in all eight countries: multiparty polarizing systems (Brazil and Chile), integrative party systems (Mexico and Venezuela), stalemated party systems (Peru and Argentina), and systems marked by electoral stability and social conflict (Uruguay and Colombia)."

John Ikenberry's After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order After Major Wars (2001) compares post-war settlements after major wars – following the Napoleonic Wars in 1815, the world wars in 1919 and 1945, and the end of the Cold War in 1989. It argues that "international order has come and gone, risen and fallen across historical eras" and that the "great moments of order building come after major wars – 1648, 1713, 1815, 1919, 1945, and 1989." In essence, peace conferences and settlement agreements put in place "institutions and arrangements for postwar order." Ikenberry also shows that "the actual character of international order has varied across eras and order building moments" and that "variations have been manifest along multiple dimensions: geographic scope, organizational logic, rules and institutions, hierarchy and leadership, and the manner in and degree to which coercion and consent undergird the resulting order."

Daron Acemoglu and James A. Robinson's Why Nations Fail: The Origins of Power, Prosperity, and Poverty (2012) draws on the idea of critical junctures. A key thesis of the book is that, at critical junctures (such as the Glorious Revolution of 1688 in England), countries start to evolve along different paths. Countries that adopt inclusive political and economic institutions become prosperous democracies. Countries that adopt extractive political and economic institutions fail to develop politically and economically.

Sebastián L. Mazzuca's Latecomer State Formation: Political Geography and Capacity Failure in Latin America (2021) compares state formation in Latin America and Europe. A key argument is that state formation in Latin America was trade-led rather than war-led, and that this difference explains why Latin American states have low state capacity relative to their European counterparts. In early modern western Europe, Mazzuca argues, "state formation had multiple linkages to state building. Violence monopolization required great efforts at fiscal extraction, which in turn caused the abolition of the intermediary power of local potentates and incited social demands for new public goods." In contrast, in Latin America, "the obstacles to the development of state capacities were the result of mutually convenient bargains struck by central state-makers and peripheral potentates, who, far from being eliminated during state formation, obtained institutional power to reinforce local bastions."

Debates in research

Critical juncture research typically contrasts an argument about the historical origins of some outcome with an explanation based on temporally proximate factors. However, researchers have engaged in debates about which historical event should be considered a critical juncture.

The rise of the West

A key debate in research on critical junctures concerns the turning point that led to the rise of the West.

  • Jared Diamond, in Guns, Germs and Steel (1997), argues that developments reaching back to around 11,000 BCE explain why key breakthroughs were made in the West rather than in some other region of the world.
  • Michael Mitterauer, in Why Europe? The Medieval Origins of its Special Path (2010), traces the rise of the West to developments in the Middle Ages.
  • Daron Acemoglu and James A. Robinson, in Why Nations Fail: The Origins of Power, Prosperity, and Poverty (2012) and The Narrow Corridor: States, Societies, and the Fate of Liberty (2019), argue that a critical juncture during the early modern age set the West on its distinctive path.

Historical sources of economic development (with a focus on Latin America)

Another key debate concerns the historical roots of economic development, a debate that has addressed Latin America in particular.

  • Jerry F. Hough and Robin Grier (2015) claim that "key events in England and Spain in the 1260s explain why Mexico lagged behind the United States economically in the 20th century."
  • Works by Daron Acemoglu, Simon H. Johnson, and James A. Robinson (2001); James Mahoney (2010); and Stanley Engerman and Kenneth Sokoloff (2012) focus on colonialism as the key turning point explaining long-term economic trajectories.
  • Sebastián L. Mazzuca (2017) claims that the relatively poor economic performance of Latin American countries is not due to some colonial heritage but rather "to the juncture of state formation."
  • Rudiger Dornbusch and Sebastián Edwards (1991) see the emergence of mass politics in the mid-20th century as the key turning point that explains the economic performance of Latin America.

Historical origins of the Asian developmental state

Research on Asia includes a debate about the historical roots of developmental states.

  • Atul Kohli (2004) argues that developmental states originate in the colonial period.
  • Tuong Vu (2010) maintains that developmental states originate in the post-colonial period.

Reception and impact

Research on critical junctures is generally seen as an important contribution to the social sciences.

Within political science, Berntzen argues that research on critical junctures "has played an important role in comparative historical and other macro-comparative scholarship." Some of the most notable works in the field of comparative politics since the 1960s rely on the concept of a critical juncture.

Barrington Moore Jr.'s Social Origins of Dictatorship and Democracy: Lord and Peasant in the Making of the Modern World (1966) is broadly recognized as a landmark in the study of democratization.

Robert D. Putnam's Making Democracy Work: Civic Traditions in Modern Italy (1993) provides an analysis of the historical origins of social capital in Italy that is widely credited with launching a strand of research on social capital and its consequences in various fields within political science.

Frank Baumgartner and Bryan D. Jones's Agendas and Instability in American Politics (2009) is credited with having "a massive impact in the study of public policy."

Within economics, the historically informed work of Douglass North, and of Daron Acemoglu and James A. Robinson, is seen as partly responsible for the discipline's renewed interest in political institutions and the historical origins of institutions, and hence for the revival of the tradition of institutional economics.

See also

 

Neurophilosophy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Neurophilosophy ...