
Thursday, February 15, 2018

Thermodynamic free energy

From Wikipedia, the free encyclopedia

The thermodynamic free energy is the amount of work that a thermodynamic system can perform. The concept is useful in the thermodynamics of chemical or thermal processes in engineering and science. The free energy is the internal energy of a system minus the amount of energy that cannot be used to perform work. This unusable energy is given by the entropy of a system multiplied by the temperature of the system.

Like the internal energy, the free energy is a thermodynamic state function. Energy is the more general concept: the free energy is only that part of a system's energy that is available to do work.

Overview

Free energy is that portion of any first-law energy that is available to perform thermodynamic work; i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work.[1] Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy that can perform work within finite amounts of time. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transformations of the internal energy. For processes involving a system at constant pressure p and temperature T, the Gibbs free energy is the most useful because, in addition to subsuming any entropy change due merely to heat, it does the same for the p dV work needed to "make space for additional molecules" produced by various processes. (Hence its utility to solution-phase chemists, including biochemists.) The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore p dV work.)
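
To make the connection to the partition function concrete, here is a minimal numerical sketch in Python of A = −kB T ln Z for a hypothetical two-level system; the temperature and level spacing are assumed values chosen purely for illustration:

    import math

    # Helmholtz free energy from the canonical partition function, A = -kB*T*ln(Z),
    # sketched for a hypothetical two-level system with energies 0 and eps.
    kB = 1.380649e-23    # Boltzmann constant, J/K
    T = 300.0            # temperature, K (assumed)
    eps = 4.0e-21        # energy of the upper level, J (assumed)

    Z = 1.0 + math.exp(-eps / (kB * T))   # partition function: sum over the two states
    A = -kB * T * math.log(Z)             # Helmholtz free energy per system

    print(f"Z = {Z:.4f}, A = {A:.3e} J")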

The historically earlier Helmholtz free energy is defined as A = U − TS, where U is the internal energy, T is the absolute temperature, and S is the entropy. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A from Arbeit, the German word for work. Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system, and it can increase at most by the amount of work done on a system.

The Gibbs free energy is given by G = H − TS, where H is the enthalpy. (H = U + pV, where p is the pressure and V is the volume.)
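
To make the two definitions concrete, they can be evaluated directly from the formulas above; in the minimal Python sketch below, all state values (U, S, T, p, V) are assumed numbers for illustration, not data for any real substance:

    # Helmholtz (A = U - T*S) and Gibbs (G = H - T*S, with H = U + p*V)
    # free energies evaluated for one mole of a hypothetical substance.
    U = 3700.0      # internal energy, J (assumed)
    S = 10.0        # entropy, J/K (assumed)
    T = 298.0       # absolute temperature, K
    p = 101325.0    # pressure, Pa (1 atm)
    V = 0.0245      # volume, m^3 (assumed)

    H = U + p * V   # enthalpy
    A = U - T * S   # Helmholtz free energy ("work content")
    G = H - T * S   # Gibbs free energy

    print(f"A = {A:.1f} J, G = {G:.1f} J")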

Historically, these energy terms have been used inconsistently. In physics, free energy most often refers to the Helmholtz free energy, denoted by A, while in chemistry, free energy most often refers to the Gibbs free energy. Since both fields use both functions, a compromise has been suggested, using A to denote the Helmholtz function and G for the Gibbs function. While A is preferred by IUPAC, G is sometimes still in use, and the correct free energy function is often implicit in manuscripts and presentations.

Meaning of "free"

The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few meters forward, that person exerts energy in the form of mechanical energy, also known as work, on the box over a distance of a few meters. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved (Work = Force × Distance). Because the person changed the stationary position of the box, that person performed work on the box.

Because energy is neither created nor destroyed, but conserved (first law of thermodynamics), it is constantly being converted from one form into another. In the case of the person pushing the box, energy in the form of internal (or potential) energy obtained through metabolism was converted into work in order to push the box. This energy conversion, however, is not complete: some internal energy went into pushing the box, which is the "useful energy", whereas some was lost in the form of heat (thermal energy).

The useful energy of a body is the difference between its internal energy U and the energy that cannot be used to perform work, the latter usually appearing as heat and given by the product of the absolute temperature T and the entropy S of the body (entropy is a measure of disorder in a system; more specifically, a measure of the thermal energy not available to perform work). In thermodynamics, this difference is what is known as "free energy". In other words, free energy is a measure of the work (useful energy) a system can perform. Mathematically, free energy is expressed as:
free energy = U − TS

This expression means that the free energy (the energy of a system available to perform work) is the difference between the total internal energy of the system and the energy that is not available to perform work, the latter given by the entropy of the system multiplied by its absolute temperature.

In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as “free heat”, “combined heat”, “radiant heat”, specific heat, heat capacity, “absolute heat”, “latent caloric”, “free” or “perceptible” caloric (calorique sensible), among others.

In 1780, for example, Laplace and Lavoisier stated: “In general, one can change the first hypothesis into the second by changing the words ‘free heat, combined heat, and heat released’ into ‘vis viva, loss of vis viva, and increase of vis viva.’” In this manner, the total mass of caloric in a body, called absolute heat, was regarded as a mixture of two components; the free or perceptible caloric could affect a thermometer, whereas the other component, the latent caloric, could not.[2] The use of the words “latent heat” implied a similarity to latent heat in the more usual sense; it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the absolute heat remained constant but the observed rise in temperature implied that some latent caloric had become “free” or perceptible.

During the early 19th century, the concept of perceptible or free caloric began to be referred to as “free heat” or heat set free. In 1824, for example, the French physicist Sadi Carnot, in his famous “Reflections on the Motive Power of Fire”, speaks of quantities of heat ‘absorbed or set free’ in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase ‘free energy’ for the expression E − TS, in which the change in F (or G) determines the amount of energy ‘free’ for work under the given conditions.[3]:235

Thus, in traditional use, the term “free” was attached to Gibbs free energy, i.e., for systems at constant pressure and temperature, or to Helmholtz free energy, i.e., for systems at constant volume and temperature, to mean ‘available in the form of useful work.’[4] With reference to the Gibbs free energy, we add the qualification that it is the energy free for non-volume work.[5]:77–79

An increasing number of books and journal articles do not include the attachment “free”, referring to G as simply Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective ‘free’ was supposedly banished.[6][7][8] This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive ‘free’.[citation needed]

Application

Just as with the general concept of energy, free energy has several definitions, depending on the conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume V, pressure p, etc.). Scientists have defined several free energy functions, each with certain parameters held constant. When temperature and volume are kept constant, the relevant function is the Helmholtz free energy A. The mathematical expression of the Helmholtz free energy is:

A = U − TS

This definition of free energy is useful in physics for describing the behavior of closed systems held at constant volume and temperature. In chemistry, on the other hand, most reactions are carried out at constant pressure. Under this condition, the heat q released or absorbed by the reaction equals the enthalpy change ΔH of the reaction (q_p = ΔH), where H = U + pV. If the reaction is instead run at constant volume, as in a bomb calorimeter, the measured heat equals the change in internal energy (q_V = ΔU), and the enthalpy change must be recovered from H = U + pV. Thus, under constant pressure and temperature, the relevant free energy function is the Gibbs free energy G:

G = H − TS
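
The difference between the two measured heats can be sketched numerically, assuming ideal-gas behavior so that ΔH = ΔU + Δn_gas·R·T; the reaction values below are invented for illustration:

    # Converting a constant-volume heat of reaction (q_V = dU) into the
    # constant-pressure heat (q_p = dH), assuming ideal-gas behavior:
    # dH = dU + dn_gas * R * T. All reaction values are illustrative.
    R = 8.314        # gas constant, J/(mol K)
    T = 298.15       # temperature, K
    dU = -2000.0e3   # heat measured in a bomb calorimeter, J/mol (assumed)
    dn_gas = -2      # change in moles of gas over the reaction (assumed)

    dH = dU + dn_gas * R * T
    print(f"dH = {dH / 1000:.1f} kJ/mol")   # -> dH = -2005.0 kJ/mol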

The experimental usefulness of these functions is restricted to conditions where certain variables (T, and V or external p) are held constant, although they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors.
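
For instance, the maximum non-pV (electrical) work obtainable from a reversible electrochemical cell is related to the Gibbs energy by ΔG = −nFE. A minimal sketch; the electron count and emf below are illustrative (roughly those of a Daniell cell):

    # Maximum electrical work from a reversible cell: w_max = -dG = n*F*E.
    F = 96485.0   # Faraday constant, C/mol
    n = 2         # moles of electrons per mole of reaction (assumed)
    E = 1.10      # cell emf, V (illustrative; roughly a Daniell cell)

    dG = -n * F * E   # Gibbs energy change, J per mole of reaction
    print(f"dG = {dG / 1000:.1f} kJ/mol")   # -> about -212.3 kJ/mol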

In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.

Name                    Symbol   Formula        Natural variables
Helmholtz free energy   F        U − TS         T, V, {Ni}
Gibbs free energy       G        U + pV − TS    T, p, {Ni}

Ni is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. The differentials for reversible processes are (assuming only pV work):
dF = −p dV − S dT + Σi μi dNi
dG = V dp − S dT + Σi μi dNi
where μi is the chemical potential for the ith component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally, and which approximately characterize living creatures.
(dG)T,p = Σi μi dNi
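
A minimal sketch of evaluating this sum for a hypothetical two-component system; the chemical potentials below are stand-in constants (standard Gibbs energies of formation used only as plausible magnitudes), and the composition changes are invented:

    # (dG)T,p = sum_i mu_i * dN_i for a hypothetical two-component mixture.
    mu = {"water": -237.1e3, "ethanol": -174.8e3}  # chemical potentials, J/mol (stand-ins)
    dN = {"water": 0.010, "ethanol": -0.010}       # small composition changes, mol

    dG = sum(mu[i] * dN[i] for i in mu)            # Gibbs energy change at constant T, p
    print(f"dG = {dG:.1f} J")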
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings; otherwise it is simply dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings.
An example is surface free energy, the increase in free energy per unit increase in surface area.

The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles.

History

The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which chemists of earlier centuries used to describe the force that caused chemical reactions. The term affinity, as used in chemical relations, dates back to at least the time of Albertus Magnus in 1250.[citation needed]

From the 1998 textbook Modern Thermodynamics[9] by Nobel Laureate and chemistry professor Ilya Prigogine we find: "As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition."

During the entire 18th century, the dominant view with regard to heat and light was that put forth by Isaac Newton, called the Newtonian hypothesis, which states that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity.

In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat.

In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction. They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 Swiss chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in a one-step process or in a number of stages. This is known as Hess' law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy.

Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, dW = dQ. This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics.

By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction in a coal furnace to boil water into steam, and then use the enhanced high-pressure energy of the steam to push a piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i.e., the water molecules in the cylinder, do on each other as they pass or transform from one step or state of the engine cycle to the next, e.g., from (P1,V1) to (P2,V2). Clausius originally called this the “transformation content” of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e.g., to push the piston. Clausius defined this transformation heat as dQ = T dS.

In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so as to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words, to summarize his results in 1873, Gibbs states:

If we wish to express in a single equation the necessary and sufficient condition of thermodynamic equilibrium for a substance when surrounded by a medium of constant pressure p and temperature T, this equation may be written:
δ(ε − Tη + pν) = 0
when δ refers to the variation produced by any variations in the state of the parts of the body, and (when different parts of the body are in different states) in the proportion in which the body is divided between the different states. The condition of stable equilibrium is that the value of the expression in the parenthesis shall be a minimum.

In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.

Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomsen's hypothesis that chemical affinity is a measure of the heat of reaction, as based on the principle of maximum work, that affinity is not the heat given out in the formation of a compound but rather the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at constant T and p, or Helmholtz free energy F at constant T and V), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy “free” for work under the given conditions.

Up until this point, the general view had been such that: “all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish”. Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term “affinity” by the term “free energy” in much of the English-speaking world.

Scientific modelling

From Wikipedia, the free encyclopedia

Example of scientific modelling. A schematic of chemical and transport processes related to atmospheric composition.

Scientific modelling is a scientific activity, the aim of which is to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate by referencing it to existing and usually commonly accepted knowledge. It requires selecting and identifying relevant aspects of a situation in the real world and then using different types of models for different aims, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, and graphical models to visualize the subject. Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling.[1][2]

There is also increasing attention to scientific modelling[3] in fields such as science education, philosophy of science, systems theory, and knowledge visualization. There is a growing collection of methods, techniques, and meta-theory about all kinds of specialized scientific modelling.

Overview


A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful.[4] Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.[5]

Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true.[6][7]

For the scientist, a model is also a way in which the human thought processes can be amplified.[8] For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture).[9]

Basics of scientific modelling

Modelling as a substitute for direct measurement and experimentation

Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modelled estimates of outcomes.

Within modelling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints.[10] It is task-driven, because a model is captured with a certain question or task in mind. Simplification leaves out all of the known and observed entities and their relations that are not important for the task. Abstraction aggregates information that is important, but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already a model in itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints which limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations in formal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation. This requires more choices, such as numerical approximations or the use of heuristics.[11] Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation.[12]

Simulation

A simulation is the implementation of a model. A steady state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation brings a model to life and shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models.[13]
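
As a minimal illustration of a dynamic simulation (a Python sketch; the cooling law, constants, and step size are all assumed), consider Newton's law of cooling stepped forward in time. The trajectory is the dynamic result, and its late-time value approximates the steady state:

    # Dynamic simulation of cooling, dT/dt = -k*(T - T_env), via explicit Euler steps.
    T_env, T0 = 20.0, 90.0   # ambient and initial temperatures, deg C (assumed)
    k, dt = 0.1, 0.5         # cooling constant (1/min) and time step (min), assumed

    T = T0
    for step in range(200):          # 100 minutes of simulated time
        T += -k * (T - T_env) * dt   # the state evolves continuously in time
    print(f"T after 100 min = {T:.2f} deg C (steady state: {T_env})")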

Structure

Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake, to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.[14]

Systems

A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone.[15] The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete, in which the variables change instantaneously at separate points in time, and 2) continuous, where the state variables change continuously with respect to time.[16]
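
The two types can be sketched in a few lines of Python; both toy models, the event times, and the decay law are invented for illustration:

    # Discrete system model: the state (a counter) jumps only at event times.
    arrivals = [1.2, 3.5, 3.9, 7.0]   # event times, arbitrary units (assumed)
    queue = 0
    for t in arrivals:
        queue += 1                    # state changes instantaneously at each event

    # Continuous system model: exponential decay dx/dt = -x, advanced in small steps.
    x, dt = 1.0, 0.01
    for _ in range(int(5 / dt)):      # integrate over 5 time units
        x += -x * dt                  # state changes continuously with time

    print(queue, round(x, 4))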

Generating a model

Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components.

Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modellers and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeller's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use.

Building a model requires abstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).

The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian sumere and Spanish sumir. In the OED, "assume" has the senses of (i) “investing oneself with (an attribute), ” (ii) “to undertake” (especially in Law), (iii) “to take to oneself in appearance only, to pretend to possess,” and (iv) “to suppose a thing to be.” Thus, "assumption" connotes other associations than the contemporary standard sense of “that which is assumed or taken for granted; a supposition, postulate,” and deserves a broader analysis in the philosophy of science.[citation needed]

Evaluating a model

A model is evaluated first and foremost by its consistency with empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Other factors important in evaluating a model include:[citation needed]
  • Ability to explain past observations
  • Ability to predict future observations
  • Cost of use, especially in combination with other models
  • Refutability, enabling estimation of the degree of confidence in the model
  • Simplicity, or even aesthetic appeal
People may attempt to quantify the evaluation of a model using a utility function.
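
One possible form of such a utility function is a weighted sum over the criteria listed above; the weights and scores below are invented purely for illustration:

    # A toy utility function for model evaluation: a weighted sum of
    # per-criterion scores in [0, 1]. Criteria follow the list above;
    # the weights are assumed, not prescribed by any standard.
    weights = {"explains past": 0.3, "predicts future": 0.3, "low cost": 0.1,
               "refutable": 0.2, "simple": 0.1}

    def utility(scores):
        return sum(weights[c] * scores[c] for c in weights)

    model_a = {"explains past": 0.9, "predicts future": 0.7, "low cost": 0.4,
               "refutable": 0.8, "simple": 0.5}
    model_b = {"explains past": 0.8, "predicts future": 0.8, "low cost": 0.9,
               "refutable": 0.6, "simple": 0.9}

    print(f"A: {utility(model_a):.2f}, B: {utility(model_b):.2f}")  # higher is preferred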

Visualization

Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.

Space mapping

Space mapping refers to a methodology that employs a "quasi-global" modeling formulation to link companion "coarse" (ideal or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model).
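
A toy sketch of input space mapping under strong simplifying assumptions: both models are invented quadratics, and the mapping is a single shift p(x) = x + s fitted to a handful of fine-model evaluations. Real space mapping uses richer mappings and real solvers; this only shows the shape of the loop:

    # Toy input space mapping: align a cheap "coarse" model to an expensive
    # "fine" model via a shift, then optimize the surrogate coarse(x + s).
    def fine(x):   return (x - 2.1) ** 2   # stands in for an expensive model
    def coarse(x): return (x - 1.8) ** 2   # cheap, low-fidelity companion

    xs = [1.0, 2.0, 3.0]                   # a few fine-model evaluations
    fs = [fine(x) for x in xs]

    # Parameter extraction: grid-search the shift s so that coarse(x + s)
    # best matches the fine responses (a real code would use an optimizer).
    best_s, best_err = 0.0, float("inf")
    for i in range(-1000, 1001):
        s = i * 0.001
        err = sum((coarse(x + s) - f) ** 2 for x, f in zip(xs, fs))
        if err < best_err:
            best_s, best_err = s, err

    x_opt = 1.8 - best_s   # minimizer of the surrogate coarse(x + best_s)
    print(f"shift = {best_s:.3f}, surrogate optimum x = {x_opt:.3f} (fine optimum: 2.1)")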

Types of scientific modelling

Applications

Modelling and simulation

One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools.
Example of the integrated use of Modelling and Simulation in Defence life cycle management. The modelling and simulation in this image is represented in the center of the image with the three containers.[13]

The figure shows how Modelling and Simulation is used as a central part of an integrated program in a Defence capability development process.[13]

Model-based learning in education

Flowchart Describing One Style of Model-based Learning
Model–based learning in education, particularly in relation to learning science, involves students creating models for scientific concepts in order to:[17]
  • Gain insight of the scientific idea(s)
  • Acquire deeper understanding of the subject through visualization of the model
  • Improve student engagement in the course
Different types of model based learning techniques include:[17]
  • Physical macrocosms
  • Representational systems
  • Syntactic models
  • Emergent models
Model–making in education is an iterative exercise with students refining, developing and evaluating their models over time. This shifts learning from the rigidity and monotony of traditional curriculum to an exercise of students' creativity and curiosity. This approach utilizes the constructive strategy of social collaboration and learning scaffold theory. Model based learning includes cognitive reasoning skills where existing models can be improved upon by construction of newer models using the old models as a basis.[18]

"Model–based learning entails determining target models and a learning pathway that provide realistic chances of understanding." [19] Model making can also incorporate blended learning strategies by using web based tools and simulators, thereby allowing students to:
  • Familiarize themselves with on-line or digital resources
  • Create different models with various virtual materials at little or no cost
  • Practice model making activity any time and any place
  • Refine existing models
"A well-designed simulation simplifies a real world system while heightening awareness of the complexity of the system. Students can participate in the simplified system and learn how the real system operates without spending days, weeks or years it would take to undergo this experience in the real world." [20]

The teacher's role in the overall teaching and learning process is primarily that of a facilitator and arranger of the learning experience. He or she would assign the students a model-making activity for a particular concept and provide relevant information or support for the activity. For virtual model-making activities, the teacher can also provide information on the usage of the digital tool and offer troubleshooting support in case of glitches. The teacher can also arrange the group discussion activity between the students and provide the platform necessary for students to share their observations and knowledge extracted from the model-making activity.

Model–based learning evaluation could include the use of rubrics that assess the ingenuity and creativity of the student in the model construction and also the overall classroom participation of the student vis-a-vis the knowledge constructed through the activity.

It is important, however, to give due consideration to the following for successful model–based learning to occur:
  • Use of the right tool at the right time for a particular concept
  • Provision within the educational setup for model–making activity: e.g., computer room with internet facility or software installed to access simulator or digital tool

Wednesday, February 7, 2018

Paradigm shift

A paradigm shift (also radical theory change),[1] a concept identified by the American physicist and philosopher Thomas Kuhn (1922–1996), is a fundamental change in the basic concepts and experimental practices of a scientific discipline. Kuhn contrasted these shifts, which characterize a scientific revolution, to the activity of normal science, which he described as scientific work done within a prevailing framework (or paradigm). In this context, the word "paradigm" is used in its original Greek meaning, as "example".
The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to his Critique of Pure Reason (1781). He referred to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars. It was against this active background that Kuhn published his work.

Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962). As one commentator summarizes:
Kuhn acknowledges having used the term "paradigm" in two different meanings. In the first one, "paradigm" designates what the members of a certain scientific community have in common, that is to say, the whole of techniques, patents and values shared by the members of the community. In the second sense, the paradigm is a single element of a whole, say for instance Newton’s Principia, which, acting as a common model or an example... stands for the explicit rules and thus defines a coherent tradition of investigation. Thus the question is for Kuhn to investigate by means of the paradigm what makes possible the constitution of what he calls "normal science". That is to say, the science which can decide if a certain problem will be considered scientific or not. Normal science does not mean at all a science guided by a coherent system of rules, on the contrary, the rules can be derived from the paradigms, but the paradigms can guide the investigation also in the absence of rules. This is precisely the second meaning of the term "paradigm", which Kuhn considered the most new and profound, though it is in truth the oldest.[2]
Since the 1960s, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events, even though Kuhn himself restricted the use of the term to the physical sciences.

Kuhnian paradigm shifts

Kuhn used the duck-rabbit optical illusion, made famous by Wittgenstein, to demonstrate the way in which a paradigm shift could cause one to see the same information in an entirely different way.[3]

An epistemological paradigm shift was called a "scientific revolution" by epistemologist and historian of science Thomas Kuhn in his book The Structure of Scientific Revolutions.

A scientific revolution occurs, according to Kuhn, when scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has thereto been made. The paradigm, in Kuhn's view, is not simply the current theory, but the entire worldview in which it exists, and all of the implications which come with it. This is based on features of the landscape of knowledge that scientists can identify around them.

There are anomalies for all paradigms, Kuhn maintained, that are brushed away as acceptable levels of error, or simply ignored and not dealt with (a principal argument Kuhn uses to reject Karl Popper's model of falsifiability as the key force involved in scientific change). Rather, according to Kuhn, anomalies have various levels of significance to the practitioners of science at the time. To put it in the context of early 20th century physics, some scientists found the problems with calculating Mercury's perihelion more troubling than the Michelson-Morley experiment results, and some the other way around. Kuhn's model of scientific change differs here, and in many places, from that of the logical positivists in that it puts an enhanced emphasis on the individual humans involved as scientists, rather than abstracting science into a purely logical or philosophical venture.

When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis, according to Kuhn. During this crisis, new ideas, perhaps ones previously discarded, are tried. Eventually a new paradigm is formed, which gains its own new followers, and an intellectual "battle" takes place between the followers of the new paradigm and the hold-outs of the old paradigm. Again, for early 20th century physics, the transition between the Maxwellian electromagnetic worldview and the Einsteinian relativistic worldview was neither instantaneous nor calm, and instead involved a protracted set of "attacks," both with empirical data as well as rhetorical or philosophical arguments, by both sides, with the Einsteinian theory winning out in the long run. Again, the weighing of evidence and importance of new data was fit through the human sieve: some scientists found the simplicity of Einstein's equations to be most compelling, while some found them more complicated than the notion of Maxwell's aether which they banished. Some found Arthur Eddington's photographs of light bending around the sun to be compelling, while some questioned their accuracy and meaning. Sometimes the convincing force is just time itself and the human toll it takes, Kuhn said, using a quote from Max Planck: "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."[4]

After a given discipline has changed from one paradigm to another, this is called, in Kuhn's terminology, a scientific revolution or a paradigm shift. It is often this final conclusion, the result of the long process, that is meant when the term paradigm shift is used colloquially: simply the (often radical) change of worldview, without reference to the specificities of Kuhn's historical argument.

In a 2015 retrospective on Kuhn,[5] the philosopher Martin Cohen describes the notion of the "Paradigm Shift" as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Thomas Kuhn himself had only a very hazy idea of what it might mean and, in line with the American philosopher of science, Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions, whose popularity is transitory and far from conclusive.

Science and paradigm shift

A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism:[6] the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation[7] and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.

These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published a highly regarded essay in 1974, "On the Very Idea of a Conceptual Scheme" (Proceedings and Addresses of the American Philosophical Association, Vol. 47, (1973–1974), pp. 5–20) arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour (see for example John Hassard, Sociology and Organization Theory: Positivism, Paradigm and Postmodernity. Cambridge University Press, 1993, ISBN 0521350344.)

Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In 1900, Lord Kelvin famously told an assemblage of physicists at the British Association for the Advancement of Science, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement."[8][veracity of this quote challenged in Lord Kelvin article] Five years later, Albert Einstein published his paper on special relativity, which challenged the very simple set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years.

In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science." (p. 12) Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift, since in the social sciences people can still use earlier ideas to discuss the history of science.

Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.[citation needed]

Examples of paradigm shifts

Natural sciences

Some of the "classical cases" of Kuhnian paradigm shifts in science are:

Social sciences

In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals."[19] Others have applied Kuhn's concept of paradigm shift to the social sciences.

Applied sciences

More recently, paradigm shifts are also recognisable in applied sciences:
  • In medicine, the transition from "clinical judgment" to evidence-based medicine
  • In software engineering, the transition from the Rational Paradigm to the Empirical Paradigm [25]

Marketing

In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication.[26] In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books[27][28] as abused and overused to the point of becoming meaningless.

Other uses

The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought-pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
  • M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances which precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.[citation needed]
  • The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.[29]
  • Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement which gained great prominence in the years immediately following distribution of those images.[30][31]
  • Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology[32] and Theology for the Third Millennium: An Ecumenical View.[33]

Tuesday, February 6, 2018

Thomas Kuhn

From Wikipedia, the free encyclopedia

Thomas Kuhn
Born: Thomas Samuel Kuhn, July 18, 1922, Cincinnati, Ohio, U.S.
Died: June 17, 1996 (aged 73), Cambridge, Massachusetts, U.S.
Alma mater: Harvard University
Era: 20th-century philosophy
Region: Western philosophy
School: Analytic, historical turn[1]
Main interests: Philosophy of science
Notable ideas: Paradigm shift, normal science, incommensurability
Thomas Samuel Kuhn (/kuːn/; July 18, 1922 – June 17, 1996) was an American physicist, historian and philosopher of science whose controversial 1962 book The Structure of Scientific Revolutions was influential in both academic and popular circles, introducing the term paradigm shift, which has since become an English-language idiom.

Kuhn made several notable claims concerning the progress of scientific knowledge: that scientific fields undergo periodic "paradigm shifts" rather than solely progressing in a linear and continuous way, and that these paradigm shifts open up new approaches to understanding what scientists would never have considered valid before; and that the notion of scientific truth, at any given moment, cannot be established solely by objective criteria but is defined by a consensus of a scientific community. Competing paradigms are frequently incommensurable; that is, they are competing and irreconcilable accounts of reality. Thus, our comprehension of science can never rely wholly upon "objectivity" alone. Science must account for subjective perspectives as well, since all objective conclusions are ultimately founded upon the subjective conditioning/worldview of its researchers and participants.

Life

Kuhn was born in Cincinnati, Ohio, to Samuel L. Kuhn, an industrial engineer, and Minette Stroock Kuhn, both Jewish. He graduated from The Taft School in Watertown, CT, in 1940, where he became aware of his serious interest in mathematics and physics. He obtained his BS degree in physics from Harvard University in 1943, where he also obtained MS and PhD degrees in physics in 1946 and 1949, respectively, under the supervision of John Van Vleck.[12] As he states in the first few pages of the preface to the second edition of The Structure of Scientific Revolutions, his three years of total academic freedom as a Harvard Junior Fellow were crucial in allowing him to switch from physics to the history and philosophy of science. He later taught a course in the history of science at Harvard from 1948 until 1956, at the suggestion of university president James Conant. After leaving Harvard, Kuhn taught at the University of California, Berkeley, in both the philosophy department and the history department, being named Professor of the History of Science in 1961. Kuhn interviewed and tape-recorded Danish physicist Niels Bohr the day before Bohr's death.[13] At Berkeley, he wrote and published (in 1962) his best known and most influential work:[14] The Structure of Scientific Revolutions. In 1964, he joined Princeton University as the M. Taylor Pyne Professor of Philosophy and History of Science. He served as the president of the History of Science Society from 1969 to 1970.[15] In 1979 he joined the Massachusetts Institute of Technology (MIT) as the Laurance S. Rockefeller Professor of Philosophy, remaining there until 1991. In 1994 Kuhn was diagnosed with lung cancer. He died in 1996.

Thomas Kuhn was married twice, first to Kathryn Muhs with whom he had three children, then to Jehane Barton Burns (Jehane R. Kuhn).

The Structure of Scientific Revolutions

The Structure of Scientific Revolutions (SSR) was originally printed as an article in the International Encyclopedia of Unified Science, published by the logical positivists of the Vienna Circle. In this book, Kuhn argued that science does not progress via a linear accumulation of new knowledge, but undergoes periodic revolutions, also called "paradigm shifts" (although he did not coin the phrase),[16] in which the nature of scientific inquiry within a particular field is abruptly transformed. In general, science is broken up into three distinct stages. Prescience, which lacks a central paradigm, comes first. This is followed by "normal science", when scientists attempt to enlarge the central paradigm by "puzzle-solving". Guided by the paradigm, normal science is extremely productive: "when the paradigm is successful, the profession will have solved problems that its members could scarcely have imagined and would never have undertaken without commitment to the paradigm".[17]

In regard to experimentation and collection of data with a view toward solving problems through the commitment to a paradigm, Kuhn states: “The operations and measurements that a scientist undertakes in the laboratory are not ‘the given’ of experience but rather ‘the collected with difficulty.’ They are not what the scientist sees—at least not before his research is well advanced and his attention focused. Rather, they are concrete indices to the content of more elementary perceptions, and as such they are selected for the close scrutiny of normal research only because they promise opportunity for the fruitful elaboration of an accepted paradigm. Far more clearly than the immediate experience from which they in part derive, operations and measurements are paradigm-determined. Science does not deal in all possible laboratory manipulations. Instead, it selects those relevant to the juxtaposition of a paradigm with the immediate experience that that paradigm has partially determined. As a result, scientists with different paradigms engage in different concrete laboratory manipulations.”[18]

During the period of normal science, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher, contra Popper's falsifiability criterion. As anomalous results build up, science reaches a crisis, at which point a new paradigm, which subsumes the old results along with the anomalous results into one framework, is accepted. This is termed revolutionary science.

In SSR, Kuhn also argues that rival paradigms are incommensurable—that is, it is not possible to understand one paradigm through the conceptual framework and terminology of another rival paradigm. For many critics, for example David Stove (Popper and After, 1982), this thesis seemed to entail that theory choice is fundamentally irrational: if rival theories cannot be directly compared, then one cannot make a rational choice as to which one is better. Whether Kuhn's views had such relativistic consequences is the subject of much debate; Kuhn himself denied the accusation of relativism in the third edition of SSR, and sought to clarify his views to avoid further misinterpretation. Freeman Dyson has quoted Kuhn as saying "I am not a Kuhnian!",[19] referring to the relativism that some philosophers have developed based on his work.

The enormous impact of Kuhn's work can be measured in the changes it brought about in the vocabulary of the philosophy of science: besides "paradigm shift", Kuhn popularized the word "paradigm" itself from a term used in certain forms of linguistics and the work of Georg Lichtenberg to its current broader meaning, coined the term "normal science" to refer to the relatively routine, day-to-day work of scientists working within a paradigm, and was largely responsible for the use of the term "scientific revolutions" in the plural, taking place at widely different periods of time and in different disciplines, as opposed to a single scientific revolution in the late Renaissance. The frequent use of the phrase "paradigm shift" has made scientists more aware of and in many cases more receptive to paradigm changes, so that Kuhn's analysis of the evolution of scientific views has by itself influenced that evolution.[citation needed]

Kuhn's work has been extensively used in social science; for instance, in the post-positivist/positivist debate within International Relations. Kuhn is credited as a foundational force behind the post-Mertonian sociology of scientific knowledge. Kuhn's work has also been used in the Arts and Humanities, such as by Matthew Edward Harris to distinguish between scientific and historical communities (such as political or religious groups): 'political-religious beliefs and opinions are not epistemologically the same as those pertaining to scientific theories'.[20] This is because would-be scientists' worldviews are changed through rigorous training, through the engagement between what Kuhn calls 'exemplars' and the Global Paradigm. Kuhn's notions of paradigms and paradigm shifts have been influential in understanding the history of economic thought, for example the Keynesian revolution,[21] and in debates in political science.[22]

A defense Kuhn gives against the objection that his account of science from The Structure of Scientific Revolutions results in relativism can be found in an essay by Kuhn called "Objectivity, Value Judgment, and Theory Choice."[23] In this essay, he reiterates five criteria from the penultimate chapter of SSR that determine (or help determine, more properly) theory choice:
  1. Accurate – empirically adequate with experimentation and observation
  2. Consistent – internally consistent, but also externally consistent with other theories
  3. Broad Scope – a theory's consequences should extend beyond that which it was initially designed to explain
  4. Simple – the simplest explanation, principally similar to Occam's razor
  5. Fruitful – a theory should disclose new phenomena or new relationships among phenomena
He then goes on to show how, although these criteria admittedly determine theory choice, they are imprecise in practice and relative to individual scientists. According to Kuhn, "When scientists must choose between competing theories, two men fully committed to the same list of criteria for choice may nevertheless reach different conclusions."[23] For this reason, the criteria still are not "objective" in the usual sense of the word because individual scientists reach different conclusions with the same criteria due to valuing one criterion over another or even adding additional criteria for selfish or other subjective reasons. Kuhn then goes on to say, "I am suggesting, of course, that the criteria of choice with which I began function not as rules, which determine choice, but as values, which influence it."[23] Because Kuhn utilizes the history of science in his account of science, his criteria or values for theory choice are often understood as descriptive normative rules (or more properly, values) of theory choice for the scientific community rather than prescriptive normative rules in the usual sense of the word "criteria", although there are many varied interpretations of Kuhn's account of science.

Polanyi–Kuhn debate

Although they used different terminologies, both Kuhn and Michael Polanyi believed that scientists' subjective experiences made science a relativized discipline. Polanyi lectured on this topic for decades before Kuhn published The Structure of Scientific Revolutions.

Supporters of Polanyi charged Kuhn with plagiarism, as it was known that Kuhn attended several of Polanyi's lectures, and that the two men had debated endlessly over epistemology before either had achieved fame. The charge of plagiarism is peculiar, for Kuhn had generously acknowledged Polanyi in the first edition of The Structure of Scientific Revolutions.[5] Despite this intellectual alliance, Polanyi's work was constantly interpreted by others within the framework of Kuhn's paradigm shifts, much to Polanyi's (and Kuhn's) dismay.[24]

Thomas Kuhn Paradigm Shift Award

In honor of his legacy, the "Thomas Kuhn Paradigm Shift Award" is awarded by the American Chemical Society to speakers who present original views that are at odds with mainstream scientific understanding. The winner is selected based on the novelty of the viewpoint and its potential impact if it were to be widely accepted.[25]

Honors

Kuhn was named a Guggenheim Fellow in 1954, and in 1982 was awarded the George Sarton Medal by the History of Science Society. He also received numerous honorary doctorates.
