Thursday, February 15, 2018

Scientific modelling

From Wikipedia, the free encyclopedia

Example of scientific modelling. A schematic of chemical and transport processes related to atmospheric composition.

Scientific modelling is a scientific activity, the aim of which is to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate by referencing it to existing and usually commonly accepted knowledge. It requires selecting and identifying relevant aspects of a situation in the real world and then using different types of models for different aims, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, and graphical models to visualize the subject. Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling.[1][2]

There is also increasing attention to scientific modelling[3] in fields such as science education, philosophy of science, systems theory, and knowledge visualization, along with a growing collection of methods, techniques, and meta-theory about all kinds of specialized scientific modelling.

Overview


A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful.[4] Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.[5]

Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true.[6][7]

For the scientist, a model is also a way in which the human thought processes can be amplified.[8] For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture).[9]

Basics of scientific modelling

Modelling as a substitute for direct measurement and experimentation

Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modelled estimates of outcomes.

Within modelling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints.[10] It is task-driven because a model is captured with a certain question or task in mind. Simplification leaves out all known and observed entities and relations that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully, but they are done based on a perception of reality. This perception is already a model in itself, as it comes with physical constraints. There are also legal constraints on what we are able to observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. The resulting model comprises the concepts, their behavior, and their relations in formal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation, which requires more choices, such as numerical approximations or the use of heuristics.[11] Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of the scientific method, alongside theory building and experimentation.[12]

Simulation

A simulation is the implementation of a model. A steady state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation brings a model to life and shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models.[13]
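
To make the distinction concrete, here is a minimal Python sketch (not from the article; the cooling model and all constants are illustrative) contrasting a dynamic simulation with the steady state it approaches:

# Newton's law of cooling: dT/dt = -k * (T - T_env).

def simulate_cooling(t_initial=90.0, t_env=20.0, k=0.1, dt=0.1, steps=600):
    """Dynamic simulation: explicit Euler integration over time."""
    temps = [t_initial]
    for _ in range(steps):
        t = temps[-1]
        temps.append(t + dt * (-k * (t - t_env)))
    return temps

# Steady state: set dT/dt = 0, which gives T = T_env directly,
# with no stepping through time at all.
steady_state = 20.0

trajectory = simulate_cooling()
print(f"temperature after 60 s of simulated time: {trajectory[-1]:.2f}")
print(f"steady-state value:                       {steady_state:.2f}")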

Structure

Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake, to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.[14]

Systems

A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone.[15] The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete, in which the variables change instantaneously at separate points in time, and 2) continuous, in which the state variables change continuously with respect to time.[16]
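
As a toy illustration of the two types (the queue and growth models below are invented for this purpose), a discrete model's state jumps at separate event times, while a continuous model's state variables are integrated over small time steps:

# Discrete model: a queue whose length jumps at separate event times.
queue_length = 0
events = [("arrive", 1.0), ("arrive", 2.5), ("depart", 3.0)]
for kind, time in events:
    queue_length += 1 if kind == "arrive" else -1
    print(f"t={time:.1f}: queue length jumps to {queue_length}")

# Continuous model: exponential growth dx/dt = r*x, integrated with
# small time steps so x changes (approximately) continuously.
x, r, dt = 1.0, 0.5, 0.01
for step in range(300):
    x += dt * r * x
print(f"t=3.0: x has grown continuously to {x:.2f}")  # near e^1.5 ≈ 4.48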

Generating a model

Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components.

Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modellers and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeller's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use.

Building a model requires abstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).

The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian sumere and Spanish sumir. In the OED, "assume" has the senses of (i) “investing oneself with (an attribute), ” (ii) “to undertake” (especially in Law), (iii) “to take to oneself in appearance only, to pretend to possess,” and (iv) “to suppose a thing to be.” Thus, "assumption" connotes other associations than the contemporary standard sense of “that which is assumed or taken for granted; a supposition, postulate,” and deserves a broader analysis in the philosophy of science.[citation needed]

Evaluating a model

A model is evaluated first and foremost by its consistency with empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Other factors important in evaluating a model include:[citation needed]
  • Ability to explain past observations
  • Ability to predict future observations
  • Cost of use, especially in combination with other models
  • Refutability, enabling estimation of the degree of confidence in the model
  • Simplicity, or even aesthetic appeal
People may attempt to quantify the evaluation of a model using a utility function.
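
As a hedged sketch of that idea, the following Python snippet scores two hypothetical models with a weighted utility function over the criteria above; the weights and scores are invented purely for illustration:

def utility(scores: dict, weights: dict) -> float:
    """Weighted sum of per-criterion scores (each in [0, 1])."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"explains_past": 0.3, "predicts_future": 0.3,
           "low_cost": 0.1, "refutability": 0.2, "simplicity": 0.1}

model_a = {"explains_past": 0.9, "predicts_future": 0.6,
           "low_cost": 0.8, "refutability": 0.7, "simplicity": 0.5}
model_b = {"explains_past": 0.7, "predicts_future": 0.8,
           "low_cost": 0.4, "refutability": 0.9, "simplicity": 0.9}

print(f"utility(A) = {utility(model_a, weights):.2f}")  # 0.72
print(f"utility(B) = {utility(model_b, weights):.2f}")  # 0.76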

Visualization

Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.

Space mapping

Space mapping refers to a methodology that employs a "quasi-global" modeling formulation to link companion "coarse" (ideal or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model).
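
The following Python toy shows the flavor of the approach: a cheap coarse model is repeatedly aligned with an expensive fine model, and each optimization step runs on the corrected surrogate. This is a simplified first-order surrogate correction in the spirit of space mapping, not a canonical implementation, and both model functions are invented stand-ins:

from scipy.optimize import minimize_scalar

def fine(x):    # expensive high-fidelity model (pretend it is costly)
    return (x - 2.1) ** 2 + 0.3

def coarse(x):  # cheap low-fidelity companion, deliberately misaligned
    return (x - 1.8) ** 2

def num_grad(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x = minimize_scalar(coarse, bounds=(0, 4), method="bounded").x  # coarse optimum
for k in range(4):
    # Align the surrogate with the fine model at the current point:
    d = fine(x) - coarse(x)                       # value mismatch
    g = num_grad(fine, x) - num_grad(coarse, x)   # slope mismatch
    surrogate = lambda z, x0=x: coarse(z) + d + g * (z - x0)
    x = minimize_scalar(surrogate, bounds=(0, 4), method="bounded").x
    print(f"iteration {k}: x = {x:.4f}, fine(x) = {fine(x):.4f}")
# converges to the fine model's optimum near x = 2.1 while evaluating
# the fine model only once per iteration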

Types of scientific modelling

Applications

Modelling and simulation

One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools.
Example of the integrated use of Modelling and Simulation in Defence life cycle management. The modelling and simulation in this image is represented in the center of the image with the three containers.[13]

The figure shows how Modelling and Simulation is used as a central part of an integrated program in a Defence capability development process.[13]

Model-based learning in education

Flowchart Describing One Style of Model-based Learning
Model–based learning in education, particularly in relation to learning science, involves students creating models for scientific concepts in order to:[17]
  • Gain insight into the scientific idea(s)
  • Acquire deeper understanding of the subject through visualization of the model
  • Improve student engagement in the course
Different types of model based learning techniques include:[17]
  • Physical macrocosms
  • Representational systems
  • Syntactic models
  • Emergent models
Model–making in education is an iterative exercise, with students refining, developing, and evaluating their models over time. This shifts learning from the rigidity and monotony of the traditional curriculum to an exercise of students' creativity and curiosity. This approach draws on the constructivist strategy of social collaboration and on the theory of learning scaffolds. Model–based learning also exercises cognitive reasoning skills, since existing models can be improved upon by constructing newer models using the old models as a basis.[18]

"Model–based learning entails determining target models and a learning pathway that provide realistic chances of understanding." [19] Model making can also incorporate blended learning strategies by using web based tools and simulators, thereby allowing students to:
  • Familiarize themselves with on-line or digital resources
  • Create different models with various virtual materials at little or no cost
  • Practice model making activity any time and any place
  • Refine existing models
"A well-designed simulation simplifies a real world system while heightening awareness of the complexity of the system. Students can participate in the simplified system and learn how the real system operates without spending days, weeks or years it would take to undergo this experience in the real world." [20]

The teacher's role in the overall teaching and learning process is primarily that of a facilitator and arranger of the learning experience. He or she assigns the students a model-making activity for a particular concept and provides relevant information or support for the activity. For virtual model-making activities, the teacher can also provide information on the usage of the digital tool and offer troubleshooting support in case of glitches. The teacher can also arrange group discussion among the students and provide a platform for them to share their observations and the knowledge extracted from the model-making activity.

Model–based learning evaluation could include the use of rubrics that assess the ingenuity and creativity of the student in the model construction, as well as the student's overall classroom participation vis-à-vis the knowledge constructed through the activity.

It is important, however, to give due consideration to the following for successful model–based learning to occur:
  • Use of the right tool at the right time for a particular concept
  • Provision within the educational setup for model–making activity: e.g., computer room with internet facility or software installed to access simulator or digital tool

Wednesday, February 7, 2018

Paradigm shift

A paradigm shift (also radical theory change),[1] a concept identified by the American physicist and philosopher Thomas Kuhn (1922–1996), is a fundamental change in the basic concepts and experimental practices of a scientific discipline. Kuhn contrasted these shifts, which characterize a scientific revolution, to the activity of normal science, which he described as scientific work done within a prevailing framework (or paradigm). In this context, the word "paradigm" is used in its original Greek meaning, as "example".
The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to his Critique of Pure Reason (1781). He referred to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars. It was against this active background that Kuhn published his work.

Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962). As one commentator summarizes:
Kuhn acknowledges having used the term "paradigm" in two different meanings. In the first one, "paradigm" designates what the members of a certain scientific community have in common, that is to say, the whole of techniques, patents and values shared by the members of the community. In the second sense, the paradigm is a single element of a whole, say for instance Newton’s Principia, which, acting as a common model or an example... stands for the explicit rules and thus defines a coherent tradition of investigation. Thus the question is for Kuhn to investigate by means of the paradigm what makes possible the constitution of what he calls "normal science". That is to say, the science which can decide if a certain problem will be considered scientific or not. Normal science does not mean at all a science guided by a coherent system of rules, on the contrary, the rules can be derived from the paradigms, but the paradigms can guide the investigation also in the absence of rules. This is precisely the second meaning of the term "paradigm", which Kuhn considered the most new and profound, though it is in truth the oldest.[2]
Since the 1960s, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events, even though Kuhn himself restricted the use of the term to the physical sciences.

Kuhnian paradigm shifts

Kuhn used the duck-rabbit optical illusion, made famous by Wittgenstein, to demonstrate the way in which a paradigm shift could cause one to see the same information in an entirely different way.[3]

An epistemological paradigm shift was called a "scientific revolution" by epistemologist and historian of science Thomas Kuhn in his book The Structure of Scientific Revolutions.

A scientific revolution occurs, according to Kuhn, when scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has thereto been made. The paradigm, in Kuhn's view, is not simply the current theory, but the entire worldview in which it exists, and all of the implications which come with it. This is based on features of the landscape of knowledge that scientists can identify around them.

There are anomalies for all paradigms, Kuhn maintained, that are brushed away as acceptable levels of error, or simply ignored and not dealt with (a principal argument Kuhn uses to reject Karl Popper's model of falsifiability as the key force involved in scientific change). Rather, according to Kuhn, anomalies have various levels of significance to the practitioners of science at the time. To put it in the context of early 20th century physics, some scientists found the problems with calculating Mercury's perihelion more troubling than the Michelson-Morley experiment results, and some the other way around. Kuhn's model of scientific change differs here, and in many places, from that of the logical positivists in that it puts an enhanced emphasis on the individual humans involved as scientists, rather than abstracting science into a purely logical or philosophical venture.

When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis, according to Kuhn. During this crisis, new ideas, perhaps ones previously discarded, are tried. Eventually a new paradigm is formed, which gains its own new followers, and an intellectual "battle" takes place between the followers of the new paradigm and the hold-outs of the old paradigm. Again, for early 20th century physics, the transition between the Maxwellian electromagnetic worldview and the Einsteinian relativistic worldview was neither instantaneous nor calm, and instead involved a protracted set of "attacks," both with empirical data as well as rhetorical or philosophical arguments, by both sides, with the Einsteinian theory winning out in the long run. Again, the weighing of evidence and importance of new data was fit through the human sieve: some scientists found the simplicity of Einstein's equations to be most compelling, while some found them more complicated than the notion of Maxwell's aether which they banished. Some found Arthur Eddington's photographs of light bending around the sun to be compelling, while some questioned their accuracy and meaning. Sometimes the convincing force is just time itself and the human toll it takes, Kuhn said, using a quote from Max Planck: "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."[4]

After a given discipline has changed from one paradigm to another, this is called, in Kuhn's terminology, a scientific revolution or a paradigm shift. It is often this final conclusion, the result of the long process, that is meant when the term paradigm shift is used colloquially: simply the (often radical) change of worldview, without reference to the specificities of Kuhn's historical argument.

In a 2015 retrospective on Kuhn,[5] the philosopher Martin Cohen describes the notion of the "Paradigm Shift" as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Thomas Kuhn himself had only a very hazy idea of what it might mean and, in line with the American philosopher of science, Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions, whose popularity is transitory and far from conclusive.

Science and paradigm shift

A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism:[6] the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation[7] and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.

These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published a highly regarded essay in 1974, "On the Very Idea of a Conceptual Scheme" (Proceedings and Addresses of the American Philosophical Association, Vol. 47, (1973–1974), pp. 5–20) arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour (see for example John Hassard, Sociology and Organization Theory: Positivism, Paradigm and Postmodernity. Cambridge University Press, 1993, ISBN 0521350344.)

Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In 1900, Lord Kelvin famously told an assemblage of physicists at the British Association for the Advancement of Science, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement."[8][veracity of this quote challenged in Lord Kelvin article] Five years later, Albert Einstein published his paper on special relativity, which challenged the very simple set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years.

In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science." (p. 12) Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not have recognised such a paradigm shift: in the social sciences, people can still use earlier ideas to discuss the history of science.

Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.[citation needed]

Examples of paradigm shifts

Natural sciences

Some of the "classical cases" of Kuhnian paradigm shifts in science are:

Social sciences

In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals."[19] Others have applied Kuhn's concept of paradigm shift to the social sciences.

Applied sciences

More recently, paradigm shifts are also recognisable in applied sciences:
  • In medicine, the transition from "clinical judgment" to evidence-based medicine
  • In software engineering, the transition from the Rational Paradigm to the Empirical Paradigm[25]

Marketing

In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication.[26] In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books[27][28] as abused and overused to the point of becoming meaningless.

Other uses

The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought-pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
  • M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances which precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.[citation needed]
  • The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.[29]
  • Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement which gained great prominence in the years immediately following distribution of those images.[30][31]
  • Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology[32] and Theology for the Third Millennium: An Ecumenical View.[33]

Tuesday, February 6, 2018

Thomas Kuhn

From Wikipedia, the free encyclopedia

Thomas Kuhn
Born: Thomas Samuel Kuhn, July 18, 1922, Cincinnati, Ohio, U.S.
Died: June 17, 1996 (aged 73), Cambridge, Massachusetts, U.S.
Alma mater: Harvard University
Era: 20th-century philosophy
Region: Western philosophy
School: Analytic; historical turn[1]
Main interests: Philosophy of science
Thomas Samuel Kuhn (/kuːn/; July 18, 1922 – June 17, 1996) was an American physicist, historian and philosopher of science whose controversial 1962 book The Structure of Scientific Revolutions was influential in both academic and popular circles, introducing the term paradigm shift, which has since become an English-language idiom.

Kuhn made several notable claims concerning the progress of scientific knowledge: that scientific fields undergo periodic "paradigm shifts" rather than solely progressing in a linear and continuous way, and that these paradigm shifts open up new approaches to understanding what scientists would never have considered valid before; and that the notion of scientific truth, at any given moment, cannot be established solely by objective criteria but is defined by a consensus of a scientific community. Competing paradigms are frequently incommensurable; that is, they are competing and irreconcilable accounts of reality. Thus, our comprehension of science can never rely wholly upon "objectivity" alone. Science must account for subjective perspectives as well, since all objective conclusions are ultimately founded upon the subjective conditioning/worldview of its researchers and participants.

Life

Kuhn was born in Cincinnati, Ohio, to Samuel L. Kuhn, an industrial engineer, and Minette Stroock Kuhn, both Jewish. He graduated from The Taft School in Watertown, CT, in 1940, where he became aware of his serious interest in mathematics and physics. He obtained his BS degree in physics from Harvard University in 1943, where he also obtained MS and PhD degrees in physics in 1946 and 1949, respectively, under the supervision of John Van Vleck.[12] As he states in the first few pages of the preface to the second edition of The Structure of Scientific Revolutions, his three years of total academic freedom as a Harvard Junior Fellow were crucial in allowing him to switch from physics to the history and philosophy of science. He taught a course in the history of science at Harvard from 1948 until 1956, at the suggestion of university president James Conant. After leaving Harvard, Kuhn taught at the University of California, Berkeley, in both the philosophy department and the history department, being named Professor of the History of Science in 1961. Kuhn interviewed and tape-recorded Danish physicist Niels Bohr the day before Bohr's death.[13] At Berkeley, he wrote and published (in 1962) his best known and most influential work:[14] The Structure of Scientific Revolutions. In 1964, he joined Princeton University as the M. Taylor Pyne Professor of Philosophy and History of Science. He served as the president of the History of Science Society from 1969 to 1970.[15] In 1979 he joined the Massachusetts Institute of Technology (MIT) as the Laurance S. Rockefeller Professor of Philosophy, remaining there until 1991. In 1994 Kuhn was diagnosed with lung cancer. He died in 1996.

Thomas Kuhn was married twice, first to Kathryn Muhs with whom he had three children, then to Jehane Barton Burns (Jehane R. Kuhn).

The Structure of Scientific Revolutions

The Structure of Scientific Revolutions (SSR) was originally printed as an article in the International Encyclopedia of Unified Science, published by the logical positivists of the Vienna Circle. In this book, Kuhn argued that science does not progress via a linear accumulation of new knowledge, but undergoes periodic revolutions, also called "paradigm shifts" (although he did not coin the phrase),[16] in which the nature of scientific inquiry within a particular field is abruptly transformed. In general, science is broken up into three distinct stages. Prescience, which lacks a central paradigm, comes first. This is followed by "normal science", when scientists attempt to enlarge the central paradigm by "puzzle-solving". Guided by the paradigm, normal science is extremely productive: "when the paradigm is successful, the profession will have solved problems that its members could scarcely have imagined and would never have undertaken without commitment to the paradigm".[17]

In regard to experimentation and collection of data with a view toward solving problems through the commitment to a paradigm, Kuhn states: “The operations and measurements that a scientist undertakes in the laboratory are not ‘the given’ of experience but rather ‘the collected with difficulty.’ They are not what the scientist sees—at least not before his research is well advanced and his attention focused. Rather, they are concrete indices to the content of more elementary perceptions, and as such they are selected for the close scrutiny of normal research only because they promise opportunity for the fruitful elaboration of an accepted paradigm. Far more clearly than the immediate experience from which they in part derive, operations and measurements are paradigm-determined. Science does not deal in all possible laboratory manipulations. Instead, it selects those relevant to the juxtaposition of a paradigm with the immediate experience that that paradigm has partially determined. As a result, scientists with different paradigms engage in different concrete laboratory manipulations.”[18]

During the period of normal science, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher, contra Popper's falsifiability criterion. As anomalous results build up, science reaches a crisis, at which point a new paradigm, which subsumes the old results along with the anomalous results into one framework, is accepted. This is termed revolutionary science.

In SSR, Kuhn also argues that rival paradigms are incommensurable—that is, it is not possible to understand one paradigm through the conceptual framework and terminology of another rival paradigm. For many critics, for example David Stove (Popper and After, 1982), this thesis seemed to entail that theory choice is fundamentally irrational: if rival theories cannot be directly compared, then one cannot make a rational choice as to which one is better. Whether Kuhn's views had such relativistic consequences is the subject of much debate; Kuhn himself denied the accusation of relativism in the third edition of SSR, and sought to clarify his views to avoid further misinterpretation. Freeman Dyson has quoted Kuhn as saying "I am not a Kuhnian!",[19] referring to the relativism that some philosophers have developed based on his work.

The enormous impact of Kuhn's work can be measured in the changes it brought about in the vocabulary of the philosophy of science: besides "paradigm shift", Kuhn popularized the word "paradigm" itself from a term used in certain forms of linguistics and the work of Georg Lichtenberg to its current broader meaning, coined the term "normal science" to refer to the relatively routine, day-to-day work of scientists working within a paradigm, and was largely responsible for the use of the term "scientific revolutions" in the plural, taking place at widely different periods of time and in different disciplines, as opposed to a single scientific revolution in the late Renaissance. The frequent use of the phrase "paradigm shift" has made scientists more aware of and in many cases more receptive to paradigm changes, so that Kuhn's analysis of the evolution of scientific views has by itself influenced that evolution.[citation needed]

Kuhn's work has been extensively used in social science; for instance, in the post-positivist/positivist debate within International Relations. Kuhn is credited as a foundational force behind the post-Mertonian sociology of scientific knowledge. Kuhn's work has also been used in the Arts and Humanities, such as by Matthew Edward Harris to distinguish between scientific and historical communities (such as political or religious groups): 'political-religious beliefs and opinions are not epistemologically the same as those pertaining to scientific theories'.[20] This is because would-be scientists' worldviews are changed through rigorous training, through the engagement between what Kuhn calls 'exemplars' and the Global Paradigm. Kuhn's notions of paradigms and paradigm shifts have been influential in understanding the history of economic thought, for example the Keynesian revolution,[21] and in debates in political science.[22]

A defense Kuhn gives against the objection that his account of science from The Structure of Scientific Revolutions results in relativism can be found in an essay by Kuhn called "Objectivity, Value Judgment, and Theory Choice."[23] In this essay, he reiterates five criteria from the penultimate chapter of SSR that determine (or help determine, more properly) theory choice:
  1. Accurate – empirically adequate with experimentation and observation
  2. Consistent – internally consistent, but also externally consistent with other theories
  3. Broad Scope – a theory's consequences should extend beyond that which it was initially designed to explain
  4. Simple – the simplest explanation, principally similar to Occam's razor
  5. Fruitful – a theory should disclose new phenomena or new relationships among phenomena
He then goes on to show how, although these criteria admittedly determine theory choice, they are imprecise in practice and relative to individual scientists. According to Kuhn, "When scientists must choose between competing theories, two men fully committed to the same list of criteria for choice may nevertheless reach different conclusions."[23] For this reason, the criteria still are not "objective" in the usual sense of the word because individual scientists reach different conclusions with the same criteria due to valuing one criterion over another or even adding additional criteria for selfish or other subjective reasons. Kuhn then goes on to say, "I am suggesting, of course, that the criteria of choice with which I began function not as rules, which determine choice, but as values, which influence it."[23] Because Kuhn utilizes the history of science in his account of science, his criteria or values for theory choice are often understood as descriptive normative rules (or more properly, values) of theory choice for the scientific community rather than prescriptive normative rules in the usual sense of the word "criteria", although there are many varied interpretations of Kuhn's account of science.

Polanyi–Kuhn debate

Although they used different terminologies, both Kuhn and Michael Polanyi believed that scientists' subjective experiences made science a relativized discipline. Polanyi lectured on this topic for decades before Kuhn published The Structure of Scientific Revolutions.

Supporters of Polanyi charged Kuhn with plagiarism, as it was known that Kuhn attended several of Polanyi's lectures, and that the two men had debated endlessly over epistemology before either had achieved fame. The charge of plagiarism is peculiar, for Kuhn had generously acknowledged Polanyi in the first edition of The Structure of Scientific Revolutions.[5] Despite this intellectual alliance, Polanyi's work was constantly interpreted by others within the framework of Kuhn's paradigm shifts, much to Polanyi's (and Kuhn's) dismay.[24]

Thomas Kuhn Paradigm Shift Award

In honor of his legacy, the "Thomas Kuhn Paradigm Shift Award" is awarded by the American Chemical Society to speakers who present original views that are at odds with mainstream scientific understanding. The winner is selected based on the novelty of the viewpoint and its potential impact if it were to be widely accepted.[25]

Honors

Kuhn was named a Guggenheim Fellow in 1954, and in 1982 was awarded the George Sarton Medal by the History of Science Society. He also received numerous honorary doctorates.


Monday, February 5, 2018

Atomic theory

From Wikipedia, the free encyclopedia

The current theoretical model of the atom involves a dense nucleus surrounded by a probabilistic "cloud" of electrons

In chemistry and physics, atomic theory is a scientific theory of the nature of matter, which states that matter is composed of discrete units called atoms. It began as a philosophical concept in ancient Greece and entered the scientific mainstream in the early 19th century when discoveries in the field of chemistry showed that matter did indeed behave as if it were made up of atoms.

The word atom comes from the Ancient Greek adjective atomos, meaning "indivisible".[1] 19th-century chemists began using the term in connection with the growing number of irreducible chemical elements. The term seemed apropos, but around the turn of the 20th century, through various experiments with electromagnetism and radioactivity, physicists discovered that the so-called "uncuttable atom" was actually a conglomerate of various subatomic particles (chiefly, electrons, protons and neutrons) which can exist separately from each other. In fact, in certain extreme environments, such as neutron stars, extreme temperature and pressure prevent atoms from existing at all.

Since atoms were found to be divisible, physicists later invented the term "elementary particles" to describe the "uncuttable", though not indestructible, parts of an atom. The field of science which studies subatomic particles is particle physics, and it is in this field that physicists hope to discover the true fundamental nature of matter.

History

Philosophical atomism

The idea that matter is made up of discrete units is a very old one, appearing in many ancient cultures such as Greece and India. The word "atom" was coined by the ancient Greek philosophers Leucippus and his pupil Democritus.[2][3] However, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. Because of this, they could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It was not until the 19th century that the idea was embraced and refined by scientists, as the blossoming science of chemistry produced discoveries that could easily be explained using the concept of atoms.

John Dalton

Near the end of the 18th century, two laws about chemical reactions emerged without referring to the notion of an atomic theory. The first was the law of conservation of mass, formulated by Antoine Lavoisier in 1789, which states that the total mass in a chemical reaction remains constant (that is, the reactants have the same mass as the products).[4] The second was the law of definite proportions. First proven by the French chemist Joseph Louis Proust in 1799,[5] this law states that if a compound is broken down into its constituent elements, then the masses of the constituents will always have the same proportions, regardless of the quantity or source of the original substance.

John Dalton studied and expanded upon this previous work and developed the law of multiple proportions: if two elements can be combined to form a number of possible compounds, then the ratios of the masses of the second element which combine with a fixed mass of the first element will be ratios of small whole numbers. For example: Proust had studied tin oxides and found that their masses were either 88.1% tin and 11.9% oxygen or 78.7% tin and 21.3% oxygen (these were tin(II) oxide and tin dioxide respectively). Dalton noted from these percentages that 100 g of tin will combine either with 13.5 g or 27 g of oxygen; 13.5 and 27 form a ratio of 1:2. Dalton found that an atomic theory of matter could elegantly explain this common pattern in chemistry. In the case of Proust's tin oxides, one tin atom will combine with either one or two oxygen atoms.[6]
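
A quick Python check of the arithmetic in this example (my addition, not Dalton's own calculation):

# Oxygen mass combining with a fixed 100 g of tin, from the two oxides.
for tin_pct, oxy_pct in [(88.1, 11.9), (78.7, 21.3)]:
    oxygen_per_100g_tin = 100 * oxy_pct / tin_pct
    print(f"{oxygen_per_100g_tin:.1f} g of oxygen per 100 g of tin")
# prints ~13.5 and ~27.1: a ratio of small whole numbers, 1:2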

Dalton believed atomic theory could explain why water absorbed different gases in different proportions - for example, he found that water absorbed carbon dioxide far better than it absorbed nitrogen.[7] Dalton hypothesized this was due to the differences in mass and complexity of the gases' respective particles. Indeed, carbon dioxide molecules (CO2) are heavier and larger than nitrogen molecules (N2).

Dalton proposed that each chemical element is composed of atoms of a single, unique type, and though they cannot be altered or destroyed by chemical means, they can combine to form more complex structures (chemical compounds). This marked the first truly scientific theory of the atom, since Dalton reached his conclusions by experimentation and examination of the results in an empirical fashion.

Various atoms and molecules as depicted in John Dalton's A New System of Chemical Philosophy (1808).

In 1803 Dalton orally presented his first list of relative atomic weights for a number of substances. This paper was published in 1805, but he did not discuss there exactly how he obtained these figures.[7] The method was first revealed in 1807 by his acquaintance Thomas Thomson, in the third edition of Thomson's textbook, A System of Chemistry. Finally, Dalton published a full account in his own textbook, A New System of Chemical Philosophy, 1808 and 1810.

Dalton estimated the atomic weights according to the mass ratios in which they combined, with the hydrogen atom taken as unity. However, Dalton did not conceive that with some elements atoms exist in molecules—e.g. pure oxygen exists as O2. He also mistakenly believed that the simplest compound between any two elements is always one atom of each (so he thought water was HO, not H2O).[8] This, in addition to the crudity of his equipment, marred his results. For instance, in 1803 he believed that oxygen atoms were 5.5 times heavier than hydrogen atoms, because in water he measured 5.5 grams of oxygen for every 1 gram of hydrogen and believed the formula for water was HO. Adopting better data, in 1806 he concluded that the atomic weight of oxygen must actually be 7 rather than 5.5, and he retained this weight for the rest of his life. Others at this time had already concluded that the oxygen atom must weigh 8 relative to hydrogen taken as 1, if one assumes Dalton's formula for the water molecule (HO), or 16 if one assumes the modern water formula (H2O).[9]
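
A small Python illustration (using the mass ratio of 8 from the paragraph above) of how the assumed formula changes the inferred atomic weight:

measured = 8.0  # grams of oxygen per gram of hydrogen in water (modern ratio)
for formula, h_atoms_per_o in [("HO", 1), ("H2O", 2)]:
    weight = measured * h_atoms_per_o  # each hydrogen atom counts as 1
    print(f"assuming {formula}: oxygen's atomic weight = {weight:.0f}")
# HO gives 8 (the contemporaries' figure); H2O gives the modern 16.
# Dalton's cruder measured ratio of 5.5 gave him O = 5.5 under HO.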

Avogadro

The flaw in Dalton's theory was corrected in principle in 1811 by Amedeo Avogadro. Avogadro had proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies).[10] Avogadro's law allowed him to deduce the diatomic nature of numerous gases by studying the volumes at which they reacted. For instance: since two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature), it meant a single oxygen molecule splits in two in order to form two particles of water. Thus, Avogadro was able to offer more accurate estimates of the atomic mass of oxygen and various other elements, and made a clear distinction between molecules and atoms.

Brownian Motion

In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Albert Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a hypothetical mathematical model to describe it.[11] This model was validated experimentally in 1908 by French physicist Jean Perrin, thus providing additional validation for particle theory (and by extension atomic theory).

Discovery of subatomic particles

Atoms were thought to be the smallest possible division of matter until 1897 when J.J. Thomson discovered the electron through his work on cathode rays.[12]
A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. Through experimentation, Thomson discovered that the rays could be deflected by an electric field (in addition to magnetic fields, which was already known). He concluded that these rays, rather than being a form of light, were composed of very light negatively charged particles he called "corpuscles" (they would later be renamed electrons by other scientists). He measured the mass-to-charge ratio and discovered it was 1800 times smaller than that of hydrogen, the smallest atom. These corpuscles were particles unlike any previously known.
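
A check against modern constants (my addition) confirms the scale of Thomson's result:

m_e, m_p = 9.109e-31, 1.673e-27   # electron and proton mass, kg
e = 1.602e-19                     # elementary charge, C
# The charge magnitudes cancel, so the m/q ratio comparison reduces
# to the mass ratio itself.
print(f"(m/q) electron : (m/q) hydrogen = 1 : {(m_p / e) / (m_e / e):.0f}")
# ~1 : 1836, consistent with the ~1800x figure in the text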

Thomson suggested that atoms were divisible, and that the corpuscles were their building blocks.[13] To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge; this was the plum pudding model[14] as the electrons were embedded in the positive charge like plums in a plum pudding (although in Thomson's model they were not stationary).

Discovery of the nucleus

The Geiger-Marsden experiment
Left: Expected results: alpha particles passing through the plum pudding model of the atom with negligible deflection.
Right: Observed results: a small portion of the particles were deflected by the concentrated positive charge of the nucleus.

Thomson's plum pudding model was disproved in 1909 by one of his former students, Ernest Rutherford, who discovered that most of the mass and positive charge of an atom is concentrated in a very small fraction of its volume, which he assumed to be at the very center.

In the Geiger–Marsden experiment, Hans Geiger and Ernest Marsden (colleagues of Rutherford working at his behest) shot alpha particles at thin sheets of metal and measured their deflection through the use of a fluorescent screen.[15] Given the very small mass of the electrons, the high momentum of the alpha particles, and the low concentration of the positive charge of the plum pudding model, the experimenters expected all the alpha particles to pass through the metal foil without significant deflection. To their astonishment, a small fraction of the alpha particles experienced heavy deflection. Rutherford concluded that the positive charge of the atom must be concentrated in a very tiny volume to produce an electric field sufficiently intense to deflect the alpha particles so strongly.

This led Rutherford to propose a planetary model in which a cloud of electrons surrounded a small, compact nucleus of positive charge. Only such a concentration of charge could produce the electric field strong enough to cause the heavy deflection.[16]

First steps toward a quantum physical model of the atom

The planetary model of the atom had two significant shortcomings. The first is that, unlike planets orbiting a sun, electrons are charged particles. An accelerating electric charge is known to emit electromagnetic waves according to the Larmor formula in classical electromagnetism. An orbiting charge should steadily lose energy and spiral toward the nucleus, colliding with it in a small fraction of a second. The second problem was that the planetary model could not explain the highly peaked emission and absorption spectra of atoms that were observed.
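
A rough estimate of the first shortcoming (my addition, using the standard classical-collapse formula derived from the Larmor result, with modern constants):

a0 = 5.29e-11    # Bohr radius, m
r_e = 2.82e-15   # classical electron radius, m
c = 3.0e8        # speed of light, m/s
# Classical spiral-in time from the Bohr radius: t = a0^3 / (4 * r_e^2 * c)
t = a0**3 / (4 * r_e**2 * c)
print(f"classical collapse time ≈ {t:.1e} s")  # ≈ 1.6e-11 s
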
The Bohr model of the atom

Quantum theory revolutionized physics at the beginning of the 20th century, when Max Planck and Albert Einstein postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). In 1913, Niels Bohr incorporated this idea into his Bohr model of the atom, in which an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., its orbital radius) being determined by its energy.[17] Under this model an electron could not spiral into the nucleus because it could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels.[17] When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra).[17]
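
A short Python sketch of the quantized energies, E_n = -13.6 eV / n^2, and one such "quantum leap" (standard textbook values, added here for illustration):

h_c_over_e = 1239.8  # photon wavelength in nm = 1239.8 / (energy in eV)

def energy(n):
    return -13.6 / n**2  # hydrogen energy levels, in eV

delta_E = energy(3) - energy(2)   # energy released dropping from n=3 to n=2
wavelength = h_c_over_e / delta_E
print(f"emitted photon: {delta_E:.2f} eV, wavelength ≈ {wavelength:.0f} nm")
# ≈ 656 nm: the red H-alpha line of hydrogen's Balmer series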

Bohr's model was not perfect. It could only predict the spectral lines of hydrogen; it couldn't predict those of multielectron atoms. Worse still, as spectrographic technology improved, additional spectral lines in hydrogen were observed which Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms.

Discovery of isotopes

While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one element at each position on the periodic table.[18] The term isotope was coined by Margaret Todd as a suitable name for these elements.
That same year, J.J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass.[19] The nature of this differing mass would later be explained by the discovery of neutrons in 1932.

Discovery of nuclear particles

In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen nuclei being emitted from the gas (Rutherford recognized these, because he had previously obtained them by bombarding hydrogen with alpha particles and observing hydrogen nuclei in the products). Rutherford concluded that the hydrogen nuclei emerged from the nuclei of the nitrogen atoms themselves (in effect, he had split a nitrogen atom).[20]

From his own work and the work of his students Bohr and Henry Moseley, Rutherford knew that the positive charge of any atom could always be equated to that of an integer number of hydrogen nuclei. This, coupled with the atomic mass of many elements being roughly equivalent to an integer number of hydrogen atoms (then assumed to be the lightest particles), led him to conclude that hydrogen nuclei were singular particles and a basic constituent of all atomic nuclei. He named such particles protons. Further experimentation by Rutherford found that the nuclear mass of most atoms exceeded the combined mass of the protons it possessed; he speculated that this surplus mass was composed of previously unknown neutrally charged particles, which were tentatively dubbed "neutrons".

In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick now claimed these particles as Rutherford's neutrons.[21] For his discovery of the neutron, Chadwick received the Nobel Prize in 1935.

Quantum physical models of the atom

The five filled atomic orbitals of a neon atom separated and arranged in order of increasing energy from left to right, with the last three orbitals being equal in energy. Each orbital holds up to two electrons, which most probably exist in the zones represented by the colored bubbles. Each electron is equally present in both orbital zones, shown here by color only to highlight the different wave phase.

In 1924, Louis de Broglie proposed that all moving particles—particularly subatomic particles such as electrons—exhibit a degree of wave-like behavior. Erwin Schrödinger, fascinated by this idea, explored whether or not the movement of an electron in an atom could be better explained as a wave rather than as a particle. Schrödinger's equation, published in 1926,[22] describes an electron as a wavefunction instead of as a point particle. This approach elegantly predicted many of the spectral phenomena that Bohr's model failed to explain. Although this concept was mathematically convenient, it was difficult to visualize, and faced opposition.[23] One of its critics, Max Born, proposed instead that Schrödinger's wavefunction described not the electron but rather all its possible states, and thus could be used to calculate the probability of finding an electron at any given location around the nucleus.[24] This reconciled the two opposing theories of particle versus wave electrons and the idea of wave–particle duality was introduced. This theory stated that the electron may exhibit the properties of both a wave and a particle. For example, it can be refracted like a wave, and has mass like a particle.[25]

A consequence of describing electrons as waveforms is that it is mathematically impossible to simultaneously derive the position and momentum of an electron. This became known as the Heisenberg uncertainty principle after the theoretical physicist Werner Heisenberg, who first described it and published it in 1927.[26] This invalidated Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes (sphere, dumbbell, torus, etc.), with the nucleus in the middle.[27]
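
As a small illustration of this probabilistic picture (a standard result, added here, not part of the original article), the radial probability density of hydrogen's 1s orbital peaks exactly at the Bohr radius, recovering Bohr's orbit as the most likely radius:

import math

a0 = 1.0  # work in units of the Bohr radius

def radial_probability(r):
    # P(r) = 4 * r^2 * exp(-2r/a0) / a0^3 for the 1s state
    return 4.0 * r**2 * math.exp(-2.0 * r / a0) / a0**3

radii = [i * 0.01 for i in range(1, 501)]
best = max(radii, key=radial_probability)
print(f"most probable radius ≈ {best:.2f} a0")  # ≈ 1.00 a0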
