
Saturday, November 10, 2018

Transition state theory

From Wikipedia, the free encyclopedia
 
Figure 1: Reaction coordinate diagram for the bimolecular nucleophilic substitution (SN2) reaction between bromomethane and the hydroxide anion

Transition state theory (TST) explains the reaction rates of elementary chemical reactions. The theory assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated transition state complexes.

TST is used primarily to understand qualitatively how chemical reactions take place. TST has been less successful in its original goal of calculating absolute reaction rate constants, because the calculation of absolute reaction rates requires precise knowledge of potential energy surfaces, but it has been successful in calculating the standard enthalpy of activation (ΔHɵ), the standard entropy of activation (ΔSɵ), and the standard Gibbs energy of activation (ΔGɵ) for a particular reaction if its rate constant has been experimentally determined. (The notation refers to the value of interest at the transition state.)

This theory was developed simultaneously in 1935 by Henry Eyring, then at Princeton University, and by Meredith Gwynne Evans and Michael Polanyi of the University of Manchester. TST is also referred to as "activated-complex theory," "absolute-rate theory," and "theory of absolute reaction rates."

Before the development of TST, the Arrhenius rate law was widely used to determine energies for the reaction barrier. The Arrhenius equation derives from empirical observations and ignores any mechanistic considerations, such as whether one or more reactive intermediates are involved in the conversion of a reactant to a product. Therefore, further development was necessary to understand the two parameters associated with this law, the pre-exponential factor (A) and the activation energy (Ea). TST, which led to the Eyring equation, successfully addresses these two issues; however, 46 years elapsed between the publication of the Arrhenius rate law, in 1889, and the Eyring equation derived from TST, in 1935. During that period, many scientists and researchers contributed significantly to the development of the theory.

Kramers' theory of reaction kinetics improves on TST by treating barrier crossing as a diffusive (Brownian) motion and accounting for friction from the surrounding medium.

Theory

The basic ideas behind transition state theory are as follows:
  1. Rates of reaction can be studied by examining activated complexes near the saddle point of a potential energy surface. The details of how these complexes are formed are not important. The saddle point itself is called the transition state.
  2. The activated complexes are in a special equilibrium (quasi-equilibrium) with the reactant molecules.
  3. The activated complexes can convert into products, and kinetic theory can be used to calculate the rate of this conversion.

Development

In the development of TST, three approaches were taken, as summarized below.

Thermodynamic treatment

In 1884, Jacobus van 't Hoff proposed the Van 't Hoff equation describing the temperature dependence of the equilibrium constant K for a reversible reaction A ⇌ B:

d(ln K)/dT = ΔU/(RT²)

where ΔU is the change in internal energy, K is the equilibrium constant of the reaction, R is the universal gas constant, and T is the thermodynamic temperature. Based on experimental work, in 1889, Svante Arrhenius proposed a similar expression for the rate constant k of a reaction, given as follows:

d(ln k)/dT = Ea/(RT²)
Integration of this expression leads to the Arrhenius equation

k = A e^(−Ea/RT)

where k is the rate constant. A was referred to as the frequency factor (now called the pre-exponential coefficient), and Ea is regarded as the activation energy. By the early 20th century many had accepted the Arrhenius equation, but the physical interpretation of A and Ea remained vague. This led many researchers in chemical kinetics to offer different theories of how chemical reactions occur, in an attempt to relate A and Ea to the molecular dynamics directly responsible for chemical reactions.
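
For reference, the integrated Arrhenius expression can be linearized by taking logarithms; this standard rearrangement (added here for clarity) shows how A and Ea are obtained from measured rate constants via an Arrhenius plot of ln k against 1/T:

$$ \ln k = \ln A - \frac{E_a}{RT} $$

so the slope of the plot is −Ea/R and the intercept is ln A.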

In 1910, the French chemist René Marcelin introduced the concept of the standard Gibbs energy of activation, writing the rate constant as proportional to e^(−ΔGɵ/RT).
At about the same time as Marcelin was working on his formulation, the Dutch chemists Philip Abraham Kohnstamm, Frans Eppo Cornelis Scheffer, and Wiedold Frans Brandsma introduced the standard entropy of activation and the standard enthalpy of activation, and proposed a rate constant equation containing these quantities.
However, the nature of the pre-exponential constant in this equation was still unclear.

Kinetic-theory treatment

In the early 20th century, Max Trautz and William Lewis studied the rates of reactions using collision theory, based on the kinetic theory of gases. Collision theory treats reacting molecules as hard spheres colliding with one another; it neglects entropy changes, since it assumes that collisions between molecules are completely elastic.
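
For reference, the rate constant that simple hard-sphere collision theory gives for a bimolecular reaction between A and B is usually written as follows (a standard textbook form, quoted here for concreteness and written without a steric factor):

$$ k = N_A \, \sigma_{AB} \sqrt{\frac{8 k_B T}{\pi \mu}} \; e^{-E_a/RT} $$

where σAB is the collision cross-section, μ is the reduced mass of the colliding pair, and NA is the Avogadro constant. Because the hard-sphere picture ignores molecular orientation, a steric factor is often added, which is one way of seeing the entropy effects the simple theory leaves out.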

Lewis applied his treatment to the following reaction and obtained good agreement with experimental results.

2HI → H2 + I2

However, later when the same treatment was applied to other reactions, there were large discrepancies between theoretical and experimental results.

Statistical-mechanical treatment

Statistical mechanics played a significant role in the development of TST. However, the application of statistical mechanics to TST developed very slowly, despite the fact that in the mid-19th century James Clerk Maxwell, Ludwig Boltzmann, and Leopold Pfaundler had already published several papers discussing reaction equilibrium and rates in terms of molecular motions and the statistical distribution of molecular speeds.

It was not until 1912 that the French chemist A. Berthoud used the Maxwell–Boltzmann distribution law to obtain an expression for the rate constant, written in terms of two constants, a and b, related to energy terms.

Two years later, René Marcelin made an essential contribution by treating the progress of a chemical reaction as a motion of a point in phase space. He then applied Gibbs' statistical-mechanical procedures and obtained an expression similar to the one he had obtained earlier from thermodynamic consideration.

In 1915, another important contribution came from British physicist James Rice. Based on his statistical analysis, he concluded that the rate constant is proportional to the "critical increment". His ideas were further developed by Richard Chace Tolman. In 1919, Austrian physicist Karl Ferdinand Herzfeld applied statistical mechanics to the equilibrium constant and kinetic theory to the rate constant of the reverse reaction, k−1, for the reversible dissociation of a diatomic molecule.
He obtained the following equation for the rate constant of the forward reaction:

k1 = (kBT/h) (1 − e^(−hν/kBT)) e^(−E0/kBT)

where E0 is the dissociation energy at absolute zero, kB is the Boltzmann constant, h is the Planck constant, T is the thermodynamic temperature, and ν is the vibrational frequency of the bond. This expression is very important, since it is the first time that the factor kBT/h, which is a critical component of TST, appeared in a rate equation.
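
To give a sense of scale for this factor (a standard numerical evaluation, added here for illustration), at T = 298 K

$$ \frac{k_B T}{h} = \frac{(1.381\times10^{-23}\ \mathrm{J\,K^{-1}})(298\ \mathrm{K})}{6.626\times10^{-34}\ \mathrm{J\,s}} \approx 6.2\times10^{12}\ \mathrm{s^{-1}} $$

i.e. on the order of 10^13 per second, comparable to the frequency of a molecular vibration.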

In 1920, the American chemist Richard Chace Tolman further developed Rice's idea of the critical increment. He concluded that critical increment (now referred to as activation energy) of a reaction is equal to the average energy of all molecules undergoing reaction minus the average energy of all reactant molecules.

Potential energy surfaces

The concept of the potential energy surface was very important in the development of TST. The foundation of this concept was laid by René Marcelin in 1913. He theorized that the progress of a chemical reaction could be described as the motion of a point on a potential energy surface with coordinates in atomic momenta and distances.

In 1931, Henry Eyring and Michael Polanyi constructed a potential energy surface for the reaction below. This surface is a three-dimensional diagram based on quantum-mechanical principles as well as experimental data on vibrational frequencies and energies of dissociation.

H + H2 → H2 + H

A year after the Eyring and Polanyi construction, Hans Pelzer and Eugene Wigner made an important contribution by following the progress of a reaction on a potential energy surface. The importance of this work was that it was the first time the concept of a col, or saddle point, on the potential energy surface was discussed. They concluded that the rate of a reaction is determined by the motion of the system through that col.

It has been typically assumed that the rate-limiting or lowest saddle point is located on the same energy surface as the initial ground state. However, it was recently found that this could be incorrect for processes occurring in semiconductors and insulators, where an initial excited state could go through a saddle point lower than the one on the surface of the initial ground state.

Derivation of the Eyring equation

One of the most important features introduced by Eyring, Polanyi and Evans was the notion that activated complexes are in quasi-equilibrium with the reactants. The rate is then directly proportional to the concentration of these complexes multiplied by the frequency (kBT/h) with which they are converted into products.

Quasi-equilibrium assumption

Figure 2: Potential energy diagram

Quasi-equilibrium is different from classical chemical equilibrium, but it can be described using the same thermodynamic treatment. Consider the reaction below,

A + B ⇌ [AB]‡ → P

where complete equilibrium is achieved between all the species in the system, including the activated complexes [AB]‡. Using statistical mechanics, the concentration of [AB]‡ can be calculated in terms of the concentrations of A and B.

TST assumes that even when the reactants and products are not in equilibrium with each other, the activated complexes are in quasi-equilibrium with the reactants. As illustrated in Figure 2, at any instant of time there are a few activated complexes. Some were reactant molecules in the immediate past; these are designated [AB]‡l (since they are moving from left to right). The remainder were product molecules in the immediate past, designated [AB]‡r. Since the system is in complete equilibrium, the concentrations of [AB]‡l and [AB]‡r are equal, so each is equal to one-half of the total concentration of activated complexes:

[AB]‡l = [AB]‡r = ½ [AB]‡

In TST, it is assumed that the fluxes of activated complexes in the two directions are independent of each other. That is, if all the product molecules were suddenly removed from the reaction system, the flow of [AB]‡r would stop, but there would still be a flow from left to right. Hence, to be technically correct, the reactants are in equilibrium only with [AB]‡l, the activated complexes that were reactants in the immediate past.

The activated complexes do not follow a Boltzmann distribution of energies, but an "equilibrium constant" can still be derived from the distribution they do follow. The equilibrium constant K‡ for the quasi-equilibrium can be written as

K‡ = [AB]‡ / ([A][B])

So the concentration of the transition state [AB]‡ is

[AB]‡ = K‡ [A][B]

Therefore, the rate equation for the production of product is

d[P]/dt = k‡ [AB]‡ = k‡ K‡ [A][B] = k [A][B]

where the rate constant k is given by

k = k‡ K‡

Here k‡ is directly proportional to the frequency of the vibrational mode responsible for converting the activated complex into product; the frequency of this vibrational mode is ν. Not every vibration necessarily leads to the formation of product, so a proportionality constant κ, referred to as the transmission coefficient, is introduced to account for this effect, and k‡ can be rewritten as

k‡ = κν

For the equilibrium constant K‡, statistical mechanics leads to a temperature-dependent expression, given as

K‡ = (kBT/hν) K‡ɵ

where

K‡ɵ = e^(−ΔGɵ/RT)

Combining the new expressions for k‡ and K‡, a new rate constant expression can be written, which is given as

k = κν K‡ = κ (kBT/h) K‡ɵ = κ (kBT/h) e^(−ΔGɵ/RT)

Since, by definition, ΔG = ΔH − TΔS, the rate constant expression can be expanded to give an alternative form of the Eyring equation:

k = κ (kBT/h) e^(ΔSɵ/R) e^(−ΔHɵ/RT)
TST's rate constant expression can be used to calculate ΔGɵ, ΔHɵ, ΔSɵ, and even ΔV (the volume of activation) using experimental rate data.
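
As an illustration of that last point, the short sketch below fits the linearized Eyring equation, ln(k/T) = ln(kB/h) + ΔSɵ/R − ΔHɵ/(RT), to rate constants measured at several temperatures. The numerical data are purely hypothetical and the transmission coefficient is assumed to be 1; the sketch only shows how the activation parameters fall out of a linear fit.

import numpy as np

# Hypothetical first-order rate constants (s^-1) at several temperatures (K).
# These numbers are illustrative only, not data from the article.
T = np.array([290.0, 300.0, 310.0, 320.0, 330.0])
k = np.array([1.2e-4, 3.5e-4, 9.5e-4, 2.4e-3, 5.8e-3])

R  = 8.314462618       # gas constant, J mol^-1 K^-1
kB = 1.380649e-23      # Boltzmann constant, J K^-1
h  = 6.62607015e-34    # Planck constant, J s

# Linearized Eyring equation (transmission coefficient taken as 1):
#   ln(k/T) = ln(kB/h) + dS/R - dH/(R*T)
slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)

dH = -slope * R                        # enthalpy of activation, J mol^-1
dS = (intercept - np.log(kB / h)) * R  # entropy of activation, J mol^-1 K^-1
dG298 = dH - 298.15 * dS               # Gibbs energy of activation at 298.15 K

print(dH / 1000.0, dS, dG298 / 1000.0)  # kJ/mol, J/(mol K), kJ/mol

The slope of ln(k/T) against 1/T gives −ΔHɵ/R, and the intercept gives ln(kB/h) + ΔSɵ/R.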

Given the relationship between the equilibrium constant and the forward and reverse rate constants, K = k1/k−1, the Eyring equation implies that the Gibbs energy of reaction equals the difference between the Gibbs energies of activation of the forward and reverse reactions, i.e. K = e^(−ΔGreaction/RT).

Limitations of transition state theory

In general, TST has provided researchers with a conceptual foundation for understanding how chemical reactions take place. Even though the theory is widely applicable, it does have limitations. For example, when applied to each elementary step of a multi-step reaction, the theory assumes that each intermediate is long-lived enough to reach a Boltzmann distribution of energies before continuing to the next step. When the intermediates are very short-lived, TST fails. In such cases, the momentum of the reaction trajectory from the reactants to the intermediate can carry forward to affect product selectivity (an example of such a reaction is the thermal decomposition of diazabicyclopentanes, presented by Anslyn and Dougherty).

Transition state theory is also based on the assumption that atomic nuclei behave according to classical mechanics. It is assumed that the reaction does not occur unless atoms or molecules collide with enough energy to form the transition structure. However, according to quantum mechanics, for any barrier with a finite amount of energy, there is a possibility that particles can still tunnel across the barrier. With respect to chemical reactions, this means that there is a chance that molecules will react even if they do not collide with enough energy to traverse the energy barrier. While this effect is negligible for reactions with large activation energies, it becomes an important phenomenon for reactions with relatively low energy barriers, since the tunneling probability increases with decreasing barrier height.
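
A standard one-dimensional estimate (not part of the original text) makes the trend explicit. For a particle of mass m and energy E meeting a roughly rectangular barrier of height V0 and width L, the tunneling probability is approximately

$$ P \approx \exp\!\left(-\frac{2L}{\hbar}\sqrt{2m\,(V_0 - E)}\right) $$

so the probability grows rapidly as the barrier becomes lower or thinner, and it is largest for light particles such as electrons, protons, and hydrogen atoms.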

Transition state theory fails for some reactions at high temperature. The theory assumes the reaction system will pass over the lowest energy saddle point on the potential energy surface. While this description is consistent for reactions occurring at relatively low temperatures, at high temperatures, molecules populate higher energy vibrational modes; their motion becomes more complex and collisions may lead to transition states far away from the lowest energy saddle point. This deviation from transition state theory is observed even in the simple exchange reaction between diatomic hydrogen and a hydrogen radical.

Given these limitations, several alternatives to transition state theory have been proposed. A brief discussion of these theories follows.

Generalized transition state theory

Any form of TST, such as microcanonical variational TST, canonical variational TST, and improved canonical variational TST, in which the transition state is not necessarily located at the saddle point, is referred to as generalized transition state theory.

Microcanonical variational TST

A fundamental flaw of transition state theory is that it counts any crossing of the transition state as a reaction from reactants to products or vice versa. In reality, a molecule may cross this "dividing surface" and turn around, or cross multiple times and only truly react once. As such, unadjusted TST is said to provide an upper bound for the rate coefficients. To correct for this, variational transition state theory varies the location of the dividing surface that defines a successful reaction in order to minimize the rate for each fixed energy. The rate expressions obtained in this microcanonical treatment can be integrated over the energy, taking into account the statistical distribution over energy states, so as to give the canonical, or thermal rates.
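
In the usual statistical-mechanical notation (a standard formulation, included here for concreteness rather than taken from the original text), the microcanonical rate coefficient and its thermal average can be written as

$$ k(E) = \frac{N^{\ddagger}(E)}{h\,\rho_R(E)}, \qquad k(T) = \frac{1}{h\,Q_R(T)} \int_0^{\infty} N^{\ddagger}(E)\, e^{-E/k_B T}\, dE $$

where N‡(E) is the number of states of the dividing surface up to energy E, ρR(E) is the density of reactant states, and QR(T) is the reactant partition function. Microcanonical variational TST chooses, at each energy, the dividing surface that minimizes N‡(E).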

Canonical variational TST

A development of transition state theory in which the position of the dividing surface is varied so as to minimize the rate constant at a given temperature.
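
In compact form (standard notation, added here for clarity), the canonical variational rate constant is obtained by minimizing the generalized transition state rate constant over the position s of the dividing surface along the reaction path:

$$ k^{\mathrm{CVT}}(T) = \min_{s}\, k^{\mathrm{GT}}(T, s) $$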

Improved canonical variational TST

A modification of canonical variational transition state theory in which, for energies below the threshold energy, the position of the dividing surface is taken to be that of the microcanonical threshold energy. This forces the contributions to rate constants to be zero if they are below the threshold energy. A compromise dividing surface is then chosen so as to minimize the contributions to the rate constant made by reactants having higher energies.

Nonadiabatic TST

An extension of TST to reactions in which two spin states are involved simultaneously is called nonadiabatic transition state theory (NA-TST).

Applications of TST: enzymatic reactions

Enzymes catalyze chemical reactions at rates that are astounding relative to the uncatalyzed chemistry under the same reaction conditions. Each catalytic event requires a minimum of three, and often more, steps, all of which occur within the few milliseconds that characterize typical enzymatic reactions. According to transition state theory, the smallest fraction of the catalytic cycle is spent in the most important step, that of the transition state. The original proposals of absolute reaction rate theory for chemical reactions defined the transition state as a distinct species in the reaction coordinate that determined the absolute reaction rate. Soon thereafter, Linus Pauling proposed that the powerful catalytic action of enzymes could be explained by specific tight binding to the transition state species. Because reaction rate is proportional to the fraction of the reactant in the transition state complex, the enzyme was proposed to increase the concentration of the reactive species.

This proposal was formalized by Wolfenden and coworkers at the University of North Carolina at Chapel Hill, who hypothesized that the rate increase imposed by enzymes is proportional to the affinity of the enzyme for the transition state structure relative to the Michaelis complex. Because enzymes typically increase the non-catalyzed reaction rate by factors of 10^10 to 10^15, and Michaelis complexes often have dissociation constants in the range of 10^−3 to 10^−6 M, it is proposed that transition state complexes are bound with dissociation constants in the range of 10^−14 to 10^−23 M. As substrate progresses from the Michaelis complex to product, chemistry occurs by enzyme-induced changes in electron distribution in the substrate.
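
The order of magnitude of those dissociation constants follows from a simple thermodynamic-cycle estimate (the numbers below are illustrative, not taken from the article). If an enzyme accelerates a reaction by a factor kcat/knon and binds its substrate with dissociation constant KM, the dissociation constant of the enzyme–transition state complex is roughly

$$ K_{TX} \approx K_M \cdot \frac{k_{\mathrm{non}}}{k_{\mathrm{cat}}}, \qquad \text{e.g.}\; 10^{-6}\ \mathrm{M} \times 10^{-15} = 10^{-21}\ \mathrm{M} $$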

Enzymes alter the electronic structure by protonation, proton abstraction, electron transfer, geometric distortion, hydrophobic partitioning, and interaction with Lewis acids and bases. These changes are accomplished by sequential protein and substrate conformational changes. When a combination of individually weak forces is brought to bear on the substrate, the summation of the individual energies results in large forces capable of relocating bonding electrons to cause bond-breaking and bond-making. Analogs that resemble the transition state structures should therefore provide the most powerful noncovalent inhibitors known, even if only a small fraction of the transition state energy is captured.

All chemical transformations pass through an unstable structure called the transition state, which is poised between the chemical structures of the substrates and products. The transition states for chemical reactions are proposed to have lifetimes near 10^−13 seconds, on the order of the time of a single bond vibration. No physical or spectroscopic method is available to directly observe the structure of the transition state for enzymatic reactions, yet transition state structure is central to understanding enzyme catalysis, since enzymes work by lowering the activation energy of a chemical transformation.
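
That time scale is simply the period of a bond vibration (a back-of-the-envelope check, added for illustration): for a typical vibrational frequency ν of about 10^13 s^−1,

$$ \tau = \frac{1}{\nu} \approx \frac{1}{10^{13}\ \mathrm{s^{-1}}} = 10^{-13}\ \mathrm{s} $$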

It is now accepted that enzymes function to stabilize transition states lying between reactants and products, and that they would therefore be expected to bind strongly any inhibitor that closely resembles such a transition state. Substrates and products often participate in several enzyme reactions, whereas the transition state tends to be characteristic of one particular enzyme, so that such an inhibitor tends to be specific for that particular enzyme. The identification of numerous transition state inhibitors supports the transition state stabilization hypothesis for enzymatic catalysis.
A large number of enzymes are now known to interact with transition state analogs, most of which have been designed with the intention of inhibiting the target enzyme. Examples include HIV-1 protease, racemases, β-lactamases, metalloproteinases, cyclooxygenases, and many others.

Scientific theory

From Wikipedia, the free encyclopedia

A scientific theory is an explanation of an aspect of the natural world that can be repeatedly tested and verified in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.

The meaning of the term scientific theory (often contracted to theory for brevity) as used in the disciplines of science is significantly different from the common vernacular usage of theory. In everyday speech, theory can imply an explanation that represents an unsubstantiated and speculative guess, whereas in science it describes an explanation that has been tested and widely accepted as valid. These different usages are comparable to the opposing usages of prediction in science versus common speech, where it denotes a mere hope.

The strength of a scientific theory is related to the diversity of phenomena it can explain and to its simplicity. As additional scientific evidence is gathered, a scientific theory may be modified and ultimately rejected if it cannot be made to fit the new findings; in such circumstances, a more accurate theory is then required. This does not mean that all theories are liable to be fundamentally changed; well-established foundational theories such as evolution, heliocentric theory, cell theory, and the theory of plate tectonics are unlikely to be overturned. In certain cases, a less-accurate, unmodified scientific theory can still be treated as a theory if it is useful (due to its sheer simplicity) as an approximation under specific conditions. A case in point is Newton's laws of motion, which serve as an approximation to special relativity at velocities that are small relative to the speed of light.
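
As a concrete illustration of that last point (a standard expansion, added here for clarity), the relativistic kinetic energy reduces to the Newtonian expression when v is much smaller than c:

$$ E_k = (\gamma - 1)mc^2 = \tfrac{1}{2}mv^2 + \tfrac{3}{8}\frac{mv^4}{c^2} + \cdots \;\approx\; \tfrac{1}{2}mv^2 \quad (v \ll c) $$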

Scientific theories are testable and make falsifiable predictions. They describe the causes of a particular natural phenomenon and are used to explain and predict aspects of the physical universe or specific areas of inquiry (for example, electricity, chemistry, and astronomy). Scientists use theories to further scientific knowledge, as well as to facilitate advances in technology or medicine.

As with other forms of scientific knowledge, scientific theories are both deductive and inductive, aiming for predictive and explanatory power.

The paleontologist Stephen Jay Gould wrote that "...facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts."

Types

Albert Einstein described two types of scientific theories: "constructive theories" and "principle theories". Constructive theories are constructive models of phenomena: for example, the kinetic theory of gases. Principle theories are empirical generalisations, such as Newton's laws of motion.

Characteristics

Essential criteria

Typically, for any theory to be accepted within most academia, there is one essential criterion: what the theory describes must be observable, and its tests must be repeatable. This criterion is essential to prevent fraud and to sustain science itself.

The tectonic plates of the world were mapped in the second half of the 20th century. Plate tectonic theory successfully explains numerous observations about the Earth, including the distribution of earthquakes, mountains, continents, and oceans.

The defining characteristic of all scientific knowledge, including theories, is the ability to make falsifiable or testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a scientific theory at all. Predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term "theory" is not applicable.

A body of descriptions of knowledge can be called a theory if it fulfills the following criteria:
  • It makes falsifiable predictions with consistent accuracy across a broad area of scientific inquiry (such as mechanics).
  • It is well-supported by many independent strands of evidence, rather than a single foundation.
  • It is consistent with preexisting experimental results and at least as accurate in its predictions as are any preexisting theories.
These qualities are certainly true of such established theories as special and general relativity, quantum mechanics, plate tectonics, the modern evolutionary synthesis, etc.

Other criteria

In addition, scientists prefer to work with a theory that meets the following qualities:
  • It can be subjected to minor adaptations to account for new data that do not fit it perfectly, as they are discovered, thus increasing its predictive capability over time.
  • It is among the most parsimonious explanations, economical in the use of proposed entities or explanatory steps as per Occam's razor. This is because for each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

Definitions from scientific organizations

The United States National Academy of Sciences defines scientific theories as follows:
The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics)...One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.
From the American Association for the Advancement of Science:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory". It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.
Note that the term theory would not be appropriate for describing untested but intricate hypotheses or even scientific models.

Formation

The first observation of cells, by Robert Hooke, using an early microscope. This led to the development of cell theory.

The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis. When enough experimental results have been gathered in a particular area of inquiry, scientists may propose an explanatory framework that accounts for as many of these as possible. This explanation is also tested, and if it fulfills the necessary criteria (see above), then the explanation becomes a theory. This can take many years, as it can be difficult or complicated to gather sufficient evidence.

Once all of the criteria have been met, it will be widely accepted by scientists (see scientific consensus) as the best available explanation of at least some phenomena. It will have made predictions of phenomena that previous theories could not explain or could not predict accurately, and it will have resisted attempts at falsification. The strength of the evidence is evaluated by the scientific community, and the most important experiments will have been replicated by multiple independent groups.

Theories do not have to be perfectly accurate to be scientifically useful. For example, the predictions made by classical mechanics are known to be inaccurate in the relativistic realm, but they are almost exactly correct at the comparatively low velocities of common human experience. In chemistry, there are many acid-base theories providing highly divergent explanations of the underlying nature of acidic and basic compounds, but they are very useful for predicting their chemical behavior. Like all knowledge in science, no theory can ever be completely certain, since it is possible that future experiments might conflict with the theory's predictions. However, theories supported by the scientific consensus have the highest level of certainty of any scientific knowledge; for example, that all objects are subject to gravity or that life on Earth evolved from a common ancestor.

Acceptance of a theory does not require that all of its major predictions be tested, if it is already supported by sufficiently strong evidence. For example, certain tests may be unfeasible or technically difficult. As a result, theories may make predictions that have not yet been confirmed or proven incorrect; in this case, the predicted results may be described informally with the term "theoretical". These predictions can be tested at a later time, and if they are incorrect, this may lead to the revision or rejection of the theory.

Modification and improvement

If experimental results contrary to a theory's predictions are observed, scientists first evaluate whether the experimental design was sound, and if so they confirm the results by independent replication. A search for potential improvements to the theory then begins. Solutions may require minor or major changes to the theory, or none at all if a satisfactory explanation is found within the theory's existing framework. Over time, as successive modifications build on top of each other, theories consistently improve and greater predictive accuracy is achieved. Since each new version of a theory (or a completely new theory) must have more predictive and explanatory power than the last, scientific knowledge consistently becomes more accurate over time.

If modifications to the theory or other explanations seem to be insufficient to account for the new results, then a new theory may be required. Since scientific knowledge is usually durable, this occurs much less commonly than modification. Furthermore, until such a theory is proposed and accepted, the previous theory will be retained. This is because it is still the best available explanation for many other phenomena, as verified by its predictive power in other contexts. For example, it has been known since 1859 that the observed perihelion precession of Mercury violates Newtonian mechanics, but the theory remained the best explanation available until relativity was supported by sufficient evidence. Also, while new theories may be proposed by a single person or by many, the cycle of modifications eventually incorporates contributions from many different scientists.

After the changes, the accepted theory will explain more phenomena and have greater predictive power (if it did not, the changes would not be adopted); this new explanation will then be open to further replacement or modification. If a theory does not require modification despite repeated tests, this implies that the theory is very accurate. This also means that accepted theories continue to accumulate evidence over time, and the length of time that a theory (or any of its principles) remains accepted often indicates the strength of its supporting evidence.

Unification

In quantum mechanics, the electrons of an atom occupy orbitals around the nucleus. This image shows the orbitals of a hydrogen atom (s, p, d) at three different energy levels (1, 2, 3). Brighter areas correspond to higher probability density.

In some cases, two or more theories may be replaced by a single theory that explains the previous theories as approximations or special cases, analogous to the way a theory is a unifying explanation for many confirmed hypotheses; this is referred to as unification of theories. For example, electricity and magnetism are now known to be two aspects of the same phenomenon, referred to as electromagnetism.

When the predictions of different theories appear to contradict each other, this is also resolved by either further evidence or unification. For example, physical theories in the 19th century implied that the Sun could not have been burning long enough to allow certain geological changes as well as the evolution of life. This was resolved by the discovery of nuclear fusion, the main energy source of the Sun. Contradictions can also be explained as the result of theories approximating more fundamental (non-contradictory) phenomena. For example, atomic theory is an approximation of quantum mechanics. Current theories describe three separate fundamental phenomena of which all other theories are approximations; the potential unification of these is sometimes called the Theory of Everything.

Example: Relativity

In 1905, Albert Einstein published the principle of special relativity, which soon became a theory. Special relativity predicted the alignment of the Newtonian principle of Galilean invariance, also termed Galilean relativity, with the electromagnetic field. By omitting the luminiferous aether from special relativity, Einstein held that time dilation and length contraction apply to an object in relative inertial motion, that is, an object moving at constant velocity (speed with direction) as measured by its observer. He thereby reproduced the Lorentz transformation and the Lorentz contraction, which had been hypothesized to resolve experimental riddles and had been inserted into electrodynamic theory as dynamical consequences of the aether's properties. An elegant theory, special relativity yielded its own consequences, such as the equivalence of mass and energy, which transform into one another, and the resolution of the paradox that an excitation of the electromagnetic field could be viewed in one reference frame as electricity, but in another as magnetism.

Einstein sought to generalize the invariance principle to all reference frames, whether inertial or accelerating. Rejecting Newtonian gravitation, a central force acting instantly at a distance, Einstein presumed a gravitational field. In 1907, Einstein's equivalence principle implied that a free fall within a uniform gravitational field is equivalent to inertial motion. By extending special relativity's effects into three dimensions, general relativity extended length contraction into space contraction, conceiving of 4D space-time as the gravitational field that alters geometrically and sets all local objects' pathways. Even massless energy exerts gravitational motion on local objects by "curving" the geometrical "surface" of 4D space-time. Yet unless the energy is vast, its relativistic effects of contracting space and slowing time are negligible when merely predicting motion. Although general relativity is embraced as the more explanatory theory via scientific realism, Newton's theory remains successful as merely a predictive theory via instrumentalism. To calculate trajectories, engineers and NASA still use Newton's equations, which are simpler to operate.

Theories and laws

Both scientific laws and scientific theories are produced from the scientific method through the formation and testing of hypotheses, and can predict the behavior of the natural world. Both are typically well-supported by observations and/or experimental evidence. However, scientific laws are descriptive accounts of how nature will behave under certain conditions. Scientific theories are broader in scope, and give overarching explanations of how nature works and why it exhibits certain characteristics. Theories are supported by evidence from many different sources, and may contain one or several laws.

A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence have been accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. A theory will always remain a theory; a law will always remain a law. Both theories and laws could potentially be falsified by countervailing evidence.

Theories and laws are also distinct from hypotheses. Unlike hypotheses, theories and laws may be simply referred to as scientific fact. However, in science, theories are different from facts even when they are well supported. For example, evolution is both a theory and a fact.

About theories

Theories as axioms

The logical positivists thought of scientific theories as statements in a formal language. First-order logic is an example of a formal language. The logical positivists envisaged a similar scientific language. In addition to scientific theories, the language also included observation sentences ("the sun rises in the east"), definitions, and mathematical statements. The phenomena explained by the theories, if they could not be directly observed by the senses (for example, atoms and radio waves), were treated as theoretical concepts. In this view, theories function as axioms: predicted observations are derived from the theories much like theorems are derived in Euclidean geometry. However, the predictions are then tested against reality to verify the theories, and the "axioms" can be revised as a direct result.

The phrase "the received view of theories" is used to describe this approach. Terms commonly associated with it are "linguistic" (because theories are components of a language) and "syntactic" (because a language has rules about how symbols can be strung together). Problems in defining this kind of language precisely, e.g., are objects seen in microscopes observed or are they theoretical objects, led to the effective demise of logical positivism in the 1970s.

Theories as models

The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country.

Precession of the perihelion of Mercury (exaggerated). The deviation in Mercury's position from the Newtonian prediction is about 43 arc-seconds (about two-thirds of 1/60 of a degree) per century.
 
In this approach, theories are a specific category of models that fulfill the necessary criteria (see above). One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the solar system, for example, might consist of abstract objects that represent the sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses. The model parameters, e.g., Newton's Law of Gravitation, determine how the positions and velocities change with time. This model can then be tested to see whether it accurately predicts future observations; astronomers can verify that the positions of the model's objects over time match the actual positions of the planets. For most planets, the Newtonian model's predictions are accurate; for Mercury, it is slightly inaccurate and the model of general relativity must be used instead.
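
A minimal sketch of such a model in code may make the idea concrete. The two-body setup, the integrator, and the step size below are illustrative choices, not part of the original text; the point is that the model consists of abstract objects with properties (mass, position, velocity) plus a parameter, Newton's law of gravitation, that determines how the state evolves, so that its predicted positions can be compared with observations.

import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Abstract objects with properties: mass (kg), position (m), velocity (m/s).
bodies = {
    "sun":   {"m": 1.989e30, "r": np.zeros(2),               "v": np.zeros(2)},
    "earth": {"m": 5.972e24, "r": np.array([1.496e11, 0.0]), "v": np.array([0.0, 2.978e4])},
}

def accelerations(bodies):
    """Newtonian gravitational acceleration on each body due to all the others."""
    acc = {name: np.zeros(2) for name in bodies}
    for a in bodies:
        for b in bodies:
            if a != b:
                d = bodies[b]["r"] - bodies[a]["r"]
                acc[a] += G * bodies[b]["m"] * d / np.linalg.norm(d) ** 3
    return acc

def step(bodies, dt=3600.0):
    """Advance the model state by one time step using semi-implicit Euler integration."""
    acc = accelerations(bodies)
    for name, body in bodies.items():
        body["v"] = body["v"] + acc[name] * dt
        body["r"] = body["r"] + body["v"] * dt

for _ in range(24 * 365):    # integrate roughly one year in hourly steps
    step(bodies)
print(bodies["earth"]["r"])  # predicted position, to be compared with observation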

The word "semantic" refers to the way that a model represents the real world. The representation (literally, "re-presentation") describes particular aspects of a phenomenon or the manner of interaction among a set of phenomena. For instance, a scale model of a house or of a solar system is clearly not an actual house or an actual solar system; the aspects of an actual house or an actual solar system represented in a scale model are, only in certain limited ways, representative of the actual entity. A scale model of a house is not a house; but to someone who wants to learn about houses, analogous to a scientist who wants to understand reality, a sufficiently detailed scale model may suffice.

Differences between theory and model

Several commentators have stated that the distinguishing characteristic of theories is that they are explanatory as well as descriptive, while models are only descriptive (although still predictive in a more limited sense). Philosopher Stephen Pepper also distinguished between theories and models, and said in 1948 that general models and theories are predicated on a "root" metaphor that constrains how scientists theorize and model a phenomenon and thus arrive at testable hypotheses.
Engineering practice makes a distinction between "mathematical models" and "physical models"; the cost of fabricating a physical model can be minimized by first creating a mathematical model using a computer software package, such as a computer-aided design tool. The component parts are each themselves modelled, and the fabrication tolerances are specified. An exploded-view drawing is used to lay out the fabrication sequence. Simulation packages for displaying each of the subassemblies allow the parts to be rotated and magnified in realistic detail. Software packages for creating the bill of materials for construction allow subcontractors to specialize in assembly processes, which spreads the cost of manufacturing machinery among multiple customers. See: Computer-aided engineering, Computer-aided manufacturing, and 3D printing.

Assumptions in formulating theories

An assumption (or axiom) is a statement that is accepted without evidence. For example, assumptions can be used as premises in a logical argument. Isaac Asimov described assumptions as follows:
...it is incorrect to speak of an assumption as either true or false, since there is no way of proving it to be either (If there were, it would no longer be an assumption). It is better to consider assumptions as either useful or useless, depending on whether deductions made from them corresponded to reality...Since we must start somewhere, we must have assumptions, but at least let us have as few assumptions as possible.
Certain assumptions are necessary for all empirical claims (e.g. the assumption that reality exists). However, theories do not generally make assumptions in the conventional sense (statements accepted without evidence). While assumptions are often incorporated during the formation of new theories, these are either supported by evidence (such as from previously existing theories) or the evidence is produced in the course of validating the theory. This may be as simple as observing that the theory makes accurate predictions, which is evidence that any assumptions made at the outset are correct or approximately correct under the conditions tested.

Conventional assumptions, without evidence, may be used if the theory is only intended to apply when the assumption is valid (or approximately valid). For example, the special theory of relativity assumes an inertial frame of reference. The theory makes accurate predictions when the assumption is valid, and does not make accurate predictions when the assumption is not valid. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).

The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian assumere and Spanish sumir. The first sense of "assume" in the OED is "to take unto (oneself), receive, accept, adopt". The term was originally employed in religious contexts as in "to receive up into heaven", especially "the reception of the Virgin Mary into heaven, with body preserved from corruption", (1297 CE) but it was also simply used to refer to "receive into association" or "adopt into partnership". Moreover, other senses of assumere included (i) "investing oneself with (an attribute)", (ii) "to undertake" (especially in Law), (iii) "to take to oneself in appearance only, to pretend to possess", and (iv) "to suppose a thing to be" (all senses from OED entry on "assume"; the OED entry for "assumption" is almost perfectly symmetrical in senses). Thus, "assumption" connotes other associations than the contemporary standard sense of "that which is assumed or taken for granted; a supposition, postulate" (only the 11th of 12 senses of "assumption", and the 10th of 11 senses of "assume").

Descriptions

From philosophers of science

Karl Popper described the characteristics of a scientific theory as follows:
  1. It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations.
  2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory.
  3. Every "good" scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
  4. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
  5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
  6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of "corroborating evidence".)
  7. Some genuinely testable theories, when found to be false, might still be upheld by their admirers—for example by introducing post hoc (after the fact) some auxiliary hypothesis or assumption, or by reinterpreting the theory post hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status, by tampering with evidence. The temptation to tamper can be minimized by first taking the time to write down the testing protocol before embarking on the scientific work.
Popper summarized these statements by saying that the central criterion of the scientific status of a theory is its "falsifiability, or refutability, or testability". Echoing this, Stephen Hawking states, "A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations." He also discusses the "unprovable but falsifiable" nature of theories, which is a necessary consequence of inductive logic, and that "you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory".

Several philosophers and historians of science have, however, argued that Popper's definition of theory as a set of falsifiable statements is wrong because, as Philip Kitcher has pointed out, if one took a strictly Popperian view of "theory", observations of Uranus when first discovered in 1781 would have "falsified" Newton's celestial mechanics. Rather, people suggested that another planet influenced Uranus' orbit—and this prediction was indeed eventually confirmed.

Kitcher agrees with Popper that "There is surely something right in the idea that a science can succeed only if it can fail." He also says that scientific theories include statements that cannot be falsified, and that good theories must also be creative. He insists we view scientific theories as an "elaborate collection of statements", some of which are not falsifiable, while others—those he calls "auxiliary hypotheses", are.

According to Kitcher, good scientific theories must have three features:
  1. Unity: "A science should be unified…. Good theories consist of just one problem-solving strategy, or a small family of problem-solving strategies, that can be applied to a wide range of problems."
  2. Fecundity: "A great scientific theory, like Newton's, opens up new areas of research…. Because a theory presents a new way of looking at the world, it can lead us to ask new questions, and so to embark on new and fruitful lines of inquiry…. Typically, a flourishing science is incomplete. At any time, it raises more questions than it can currently answer. But incompleteness is not vice. On the contrary, incompleteness is the mother of fecundity…. A good theory should be productive; it should raise new questions and presume those questions can be answered without giving up its problem-solving strategies."
  3. Auxiliary hypotheses that are independently testable: "An auxiliary hypothesis ought to be testable independently of the particular problem it is introduced to solve, independently of the theory it is designed to save." (For example, the evidence for the existence of Neptune is independent of the anomalies in Uranus's orbit.)
Like other definitions of theories, including Popper's, Kitcher makes it clear that a theory must include statements that have observational consequences. But, like the observation of irregularities in the orbit of Uranus, falsification is only one possible consequence of observation. The production of new hypotheses is another possible and equally important result.

Analogies and metaphors

The concept of a scientific theory has also been described using analogies and metaphors. For instance, the logical empiricist Carl Gustav Hempel likened the structure of a scientific theory to a "complex spatial network:"
Its terms are represented by the knots, while the threads connecting the latter correspond, in part, to the definitions and, in part, to the fundamental and derivative hypotheses included in the theory. The whole system floats, as it were, above the plane of observation and is anchored to it by the rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of these interpretive connections, the network can function as a scientific theory: From certain observational data, we may ascend, via an interpretive string, to some point in the theoretical network, thence proceed, via definitions and hypotheses, to other points, from which another interpretive string permits a descent to the plane of observation.
Michael Polanyi made an analogy between a theory and a map:
A theory is something other than myself. It may be set out on paper as a system of rules, and it is the more truly a theory the more completely it can be put down in such terms. Mathematical theory reaches the highest perfection in this respect. But even a geographical map fully embodies in itself a set of strict rules for finding one's way through a region of otherwise uncharted experience. Indeed, all theory may be regarded as a kind of map extended over space and time.
A scientific theory can also be thought of as a book that captures the fundamental information about the world, a book that must be researched, written, and shared. In 1623, Galileo Galilei wrote:
Philosophy [i.e. physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.
The book metaphor could also be applied in the following passage, by the contemporary philosopher of science Ian Hacking:
I myself prefer an Argentine fantasy. God did not write a Book of Nature of the sort that the old Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. No book is redundant. For every book there is some humanly accessible bit of Nature such that that book, and no other, makes possible the comprehension, prediction and influencing of what is going on…Leibniz said that God chose a world which maximized the variety of phenomena while choosing the simplest laws. Exactly so: but the best way to maximize phenomena and have simplest laws is to have the laws inconsistent with each other, each applying to this or that but none applying to all.

In physics

In physics, the term theory is generally used for a mathematical framework, derived from a small set of basic postulates (usually symmetries, such as equality of locations in space or in time, or identity of electrons), that is capable of producing experimental predictions for a given category of physical systems. A good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in the form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism," reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered to be adequately tested, with new ones always in the making and perhaps untested. An example of the latter might be the radiation reaction force. As of 2009, its effects on the periodic motion of charges are detectable in synchrotrons, but only as averaged effects over time. Some researchers are now considering experiments that could observe these effects at the instantaneous level (i.e. not averaged over time).
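
For concreteness, these are the microscopic Maxwell equations in SI form (standard textbook content, added here for reference):

$$ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} $$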

Examples

Note that many fields of inquiry do not have specific named theories, e.g. developmental biology. Scientific knowledge outside a named theory can still have a high level of certainty, depending on the amount of evidence supporting it. Also note that since theories draw evidence from many different fields, the categorization is not absolute.

Health psychology

From Wikipedia, the free encyclopedia

Health psychology is the study of psychological and behavioral processes in health, illness, and healthcare. It is concerned with understanding how psychological, behavioral, and cultural factors contribute to physical health and illness. Psychological factors can affect health directly. For example, chronically occurring environmental stressors affecting the hypothalamic–pituitary–adrenal axis, cumulatively, can harm health. Behavioral factors can also affect a person's health. For example, certain behaviors can, over time, harm (smoking or consuming excessive amounts of alcohol) or enhance health (engaging in exercise). Health psychologists take a biopsychosocial approach. In other words, health psychologists understand health to be the product not only of biological processes (e.g., a virus, tumor, etc.) but also of psychological (e.g., thoughts and beliefs), behavioral (e.g., habits), and social processes (e.g., socioeconomic status and ethnicity).

By understanding psychological factors that influence health, and constructively applying that knowledge, health psychologists can improve health by working directly with individual patients or indirectly in large-scale public health programs. In addition, health psychologists can help train other healthcare professionals (e.g., physicians and nurses) to take advantage of the knowledge the discipline has generated, when treating patients. Health psychologists work in a variety of settings: alongside other medical professionals in hospitals and clinics, in public health departments working on large-scale behavior change and health promotion programs, and in universities and medical schools where they teach and conduct research.

Although its early beginnings can be traced to the field of clinical psychology, four different divisions within health psychology and one related field, occupational health psychology (OHP), have developed over time. The four divisions include clinical health psychology, public health psychology, community health psychology, and critical health psychology. Professional organizations for the field of health psychology include Division 38 of the American Psychological Association (APA), the Division of Health Psychology of the British Psychological Society (BPS), and the European Health Psychology Society. Advanced credentialing in the US as a clinical health psychologist is provided through the American Board of Professional Psychology.

Overview

Recent advances in psychological, medical, and physiological research have led to a new way of thinking about health and illness. This conceptualization, which has been labeled the biopsychosocial model, views health and illness as the product of a combination of factors including biological characteristics (e.g., genetic predisposition), behavioral factors (e.g., lifestyle, stress, health beliefs), and social conditions (e.g., cultural influences, family relationships, social support).

Psychologists who strive to understand how biological, behavioral, and social factors influence health and illness are called health psychologists. Health psychologists use their knowledge of psychology and health to promote general well-being and understand physical illness. They are specially trained to help people deal with the psychological and emotional aspects of health and illness. Health psychologists work with many different health care professionals (e.g., physicians, dentists, nurses, physician assistants, dietitians, social workers, pharmacists, physical and occupational therapists, and chaplains) to conduct research and provide clinical assessments and treatment services. Many health psychologists focus on prevention research and interventions designed to promote healthier lifestyles, and they try to find ways to encourage people to improve their health. For example, they may help people to lose weight or stop smoking. Health psychologists also use their skills to try to improve the healthcare system; for example, they may advise doctors about better ways to communicate with their patients.

Health psychologists work in many different settings, including the UK's National Health Service (NHS), private practice, universities, communities, schools, and organizations. While many health psychologists provide clinical services as part of their duties, others function in non-clinical roles, primarily involving teaching and research. Leading journals include Health Psychology, the Journal of Health Psychology, the British Journal of Health Psychology, and Applied Psychology: Health and Well-Being. Health psychologists can work with people on a one-to-one basis, in groups, as a family, or at a larger population level.
Clinical health psychology (ClHP)
ClHP is the application of scientific knowledge, derived from the field of health psychology, to clinical questions that may arise across the spectrum of health care. ClHP is one of many specialty practice areas for clinical psychologists. It is also a major contributor to the prevention-focused field of behavioral health and the treatment-oriented field of behavioral medicine. Clinical practice includes education, the techniques of behavior change, and psychotherapy. In some countries, a clinical health psychologist, with additional training, can become a medical psychologist and, thereby, obtain prescription privileges.
Public health psychology (PHP)
PHP is population oriented. A major aim of PHP is to investigate potential causal links between psychosocial factors and health at the population level. Public health psychologists present research results to educators, policy makers, and health care providers in order to promote better public health. PHP is allied to other public health disciplines including epidemiology, nutrition, genetics and biostatistics. Some PHP interventions are targeted toward at-risk population groups (e.g., undereducated, single pregnant women who smoke) and not the population as a whole (e.g., all pregnant women).
Community health psychology (CoHP)
CoHP investigates community factors that contribute to the health and well-being of individuals who live in communities. CoHP also develops community-level interventions that are designed to combat disease and promote physical and mental health. The community often serves as the level of analysis, and is frequently sought as a partner in health-related interventions.
Critical health psychology (CrHP)
CrHP is concerned with the distribution of power and the impact of power differentials on health experience and behavior, health care systems, and health policy. CrHP prioritizes social justice and the universal right to health for people of all races, genders, ages, and socioeconomic positions. A major concern is health inequalities. The critical health psychologist is an agent of change, not simply an analyst or cataloger. A leading organization in this area is the International Society of Critical Health Psychology.
Health psychology, like other areas of applied psychology, is both a theoretical and applied field. Health psychologists employ diverse research methods, including randomized controlled experiments, quasi-experiments, longitudinal studies, time-series designs, cross-sectional studies, case-control studies, qualitative research, and action research. Health psychologists study a broad range of variables, including cardiovascular disease (cardiac psychology), smoking habits, the relation of religious beliefs to health, alcohol use, social support, living conditions, emotional state, social class, and more. Some health psychologists treat individuals with sleep problems, headaches, alcohol problems, etc. Other health psychologists work to empower community members by helping them gain control over their health and improve the quality of life of entire communities.

Origins and development

Psychological factors in health had been studied since the early 20th century by disciplines such as psychosomatic medicine and later behavioral medicine, but these were primarily branches of medicine, not psychology. Health psychology began to emerge as a distinct discipline of psychology in the United States in the 1970s. In the mid-20th century there was a growing understanding in medicine of the effect of behavior on health. For example, the Alameda County Study, which began in the 1960s, showed that people who ate regular meals (e.g., breakfast), maintained a healthy weight, received adequate sleep, did not smoke, drank little alcohol, and exercised regularly were in better health and lived longer. In addition, psychologists and other scientists were discovering relationships between psychological processes and physiological ones. These discoveries include a better understanding of the impact of psychosocial stress on the cardiovascular and immune systems, and the early finding that the functioning of the immune system could be altered by learning.

Psychologists have been working in medical settings for many years (in the UK sometimes the field was termed medical psychology). Medical psychology, however, was a relatively small field, primarily aimed at helping patients adjust to illness. In 1969, William Schofield prepared a report for the APA entitled The Role of Psychology in the Delivery of Health Services. While there were exceptions, he found that the psychological research of the time frequently regarded mental health and physical health as separate, and devoted very little attention to psychology's impact upon physical health. One of the few psychologists working in this area at the time, Schofield proposed new forms of education and training for future psychologists. The APA, responding to his proposal, in 1973 established a task force to consider how psychologists could (a) help people to manage their health-related behaviors, (b) help patients manage their physical health problems, and (c) train healthcare staff to work more effectively with patients.

In 1977, led by Joseph Matarazzo, the APA added a division devoted to health psychology. At the first divisional conference, Matarazzo delivered a speech that played an important role in defining health psychology. He defined the new field in this way: "Health psychology is the aggregate of the specific educational, scientific and professional contributions of the discipline of psychology to the promotion and maintenance of health, the prevention and treatment of illness, the identification of diagnostic and etiologic correlates of health, illness and related dysfunction, and the analysis and improvement of the healthcare system and health policy formation." In the 1980s, similar organizations were established elsewhere. In 1986, the BPS established its Health Psychology Section (later the Division of Health Psychology), and the European Health Psychology Society was founded in the same year. Similar organizations were established in other countries, including Australia and Japan. Universities began to develop doctoral-level training programs in health psychology. In the US, post-doctoral-level health psychology training programs were established for individuals who had completed a doctoral degree in clinical psychology.
A number of relevant trends coincided with the emergence of health psychology, including:
  • Epidemiological evidence linking behavior and health.
  • The addition of behavioral science to medical school curricula, with courses often taught by psychologists.
  • The training of health professionals in communication skills, with the aim of improving patient satisfaction and adherence to medical treatment.
  • Increasing numbers of interventions based on psychological theory (e.g., behavior modification).
  • An increased understanding of the interaction between psychological and physiological factors leading to the emergence of psychophysiology and psychoneuroimmunology (PNI).
  • The emergence of the health domain as a target of research by social psychologists interested in testing theoretical models linking beliefs, attitudes, and behavior.
  • The emergence of AIDS/HIV, and the increase in funding for behavioral research the epidemic provoked.
In the UK, the BPS’s reconsideration of the role of the Medical Section prompted the emergence of health psychology as a distinct field. Marie Johnston and John Weinman argued in a letter to the BPS Bulletin that there was a great need for a Health Psychology Section. In December 1986 the section was established at the BPS London Conference, with Marie Johnston as chair. At the Annual BPS Conference in 1993 a review of "Current Trends in Health Psychology" was organized, and a definition of health psychology as "the study of psychological and behavioural processes in health, illness and healthcare" was proposed. The Health Psychology Section became a Special Group in 1993 and was awarded divisional status within the UK in 1997. The awarding of divisional status meant that the individual training needs and professional practice of health psychologists were recognized, and members were able to obtain chartered status with the BPS. The BPS went on to regulate training and practice in health psychology until the regulation of professional standards and qualifications was taken over by statutory registration with the Health Professions Council in 2010.

Objectives

Understanding behavioral and contextual factors

Health psychologists conduct research to identify behaviors and experiences that promote health, give rise to illness, and influence the effectiveness of health care. They also recommend ways to improve health care policy. Health psychologists have worked on developing ways to reduce smoking and improve daily nutrition in order to promote health and prevent illness. They have also studied the association between illness and individual characteristics. For example, health psychology has found a relation between the personality characteristics of thrill seeking, impulsiveness, hostility/anger, emotional instability, and depression, on one hand, and high-risk driving, on the other.

Health psychology is also concerned with contextual factors, including economic, cultural, community, social, and lifestyle factors that influence health. The biopsychosocial model can help in understanding the relation between contextual factors and biology in affecting health. Physical addiction impedes smoking cessation. Some research suggests that seductive advertising also contributes to psychological dependency on tobacco, although other research has found no relationship between media exposure and smoking in youth. OHP research indicates that people in jobs that combine little decision latitude with a high psychological workload are at increased risk for cardiovascular disease. Other OHP research reveals a relation between unemployment and elevations in blood pressure. Epidemiologic research documents a relation between social class and cardiovascular disease.

Health psychologists also aim to change health behaviors for the dual purpose of helping people stay healthy and helping patients adhere to disease treatment regimens. Health psychologists employ cognitive behavior therapy and applied behavior analysis for that purpose.

Preventing illness

Health psychologists promote health through behavioral change, as mentioned above; however, they attempt to prevent illness in other ways as well. Health psychologists try to help people lead healthy lives by developing and running programs that help people make changes in their lives, such as stopping smoking, reducing the amount of alcohol they consume, eating more healthily, and exercising regularly. Campaigns informed by health psychology have targeted tobacco use. Those least able to afford tobacco products consume them most; tobacco provides a way of controlling the aversive emotional states accompanying the daily stress that characterizes the lives of deprived and vulnerable individuals. Practitioners emphasize education and effective communication as part of illness prevention, because many people do not recognize, or minimize, the risk of illness present in their lives. Moreover, many individuals are often unable to apply their knowledge of health practices owing to everyday pressures and stresses. Anti-smoking campaigns are a common example of population-based attempts to motivate the smoking public to reduce its dependence on cigarettes.

Health psychologists help to promote health and well-being by preventing illness. Some illnesses can be more effectively treated if caught early. Health psychologists have worked to understand why some people do not seek early screenings or immunizations, and have used that knowledge to develop ways to encourage people to have early health checks for illnesses such as cancer and heart disease. Health psychologists are also finding ways to help people to avoid risky behaviors (e.g., engaging in unprotected sex) and encourage health-enhancing behaviors (e.g., regular tooth brushing or hand washing).

Health psychologists also aim to educate health professionals, including physicians and nurses, to communicate effectively with patients in ways that overcome barriers to understanding and remembering, and that help patients implement effective strategies for reducing exposure to risk factors and making health-enhancing behavior changes.

There is also evidence from OHP that stress-reduction interventions in the workplace can be effective. For example, Kompier and his colleagues have shown that a number of interventions aimed at reducing stress in bus drivers have had beneficial effects for employees and bus companies.

The effects of disease

Health psychologists investigate how disease affects individuals' psychological well-being. An individual who becomes seriously ill or injured faces many different practical stressors. These stressors include problems meeting medical and other bills, problems obtaining proper care when home from the hospital, obstacles to caring for dependents, the experience of having one's sense of self-reliance compromised, gaining a new, unwanted identity as that of a sick person, and so on. These stressors can lead to depression, reduced self-esteem, etc.

Health psychology also concerns itself with bettering the lives of individuals with terminal illness. When there is little hope of recovery, health psychologists can improve the quality of life of the patient by helping the patient recover at least some of his or her psychological well-being. Health psychologists are also concerned with providing therapeutic services for the bereaved.

Critical analysis of health policy

Critical health psychologists explore how health policy can influence inequities, inequalities and social injustice. These avenues of research expand the scope of health psychology beyond the level of individual health to an examination of the social and economic determinants of health both within and between regions and nations. The individualism of mainstream health psychology has been critiqued and deconstructed by critical health psychologists using qualitative methods that zero in on the health experience.

Conducting research

Like psychologists in the other main psychology disciplines, health psychologists have advanced knowledge of research methods. Health psychologists apply this knowledge to conduct research on a variety of questions. For example, health psychologists carry out research to answer questions such as:
  • What influences healthy eating?
  • How is stress linked to heart disease?
  • What are the emotional effects of genetic testing?
  • How can we change people’s health behavior to improve their health?

Teaching and communication

Health psychologists can also be responsible for training other health professionals to deliver interventions that promote healthy eating, smoking cessation, weight loss, etc. Health psychologists also train other health professionals in communication skills, such as how to break bad news or support behavior change, for the purpose of improving adherence to treatment.

Applications

Improving doctor–patient communication

Health psychologists aid the process of communication between physicians and patients during medical consultations. There are many problems in this process, with patients showing a considerable lack of understanding of many medical terms, particularly anatomical terms (e.g., intestines). One area of research on this topic involves "doctor-centered" versus "patient-centered" consultations. Doctor-centered consultations are generally directive, with the patient answering questions and playing less of a role in decision-making. Although this style is preferred by elderly people and some others, many people dislike the sense of hierarchy or ignorance that it inspires. They prefer patient-centered consultations, which focus on the patient's needs, involve the doctor listening to the patient completely before making a decision, and involve the patient in the process of choosing treatment and reaching a diagnosis.

Improving adherence to medical advice

Health psychologists engage in research and practice aimed at getting people to follow medical advice and adhere to their treatment regimens. Patients often forget to take their pills or consciously opt not to take their prescribed medications because of side effects. Failing to take prescribed medication is costly and wastes millions of usable medicines that could otherwise help other people. Adherence rates are difficult to measure (see below); there is, however, evidence that adherence can be improved by tailoring treatment programs to individuals' daily lives. Additionally, traditional cognitive-behavioral therapies have been adapted for people suffering from chronic illnesses and comorbid psychological distress, to include modules that encourage, support, and reinforce adherence to medical advice as part of the larger treatment approach.

Ways of measuring adherence

Health psychologists have identified a number of ways of measuring patients' adherence to medical regimens:
  • Counting the number of pills in the medicine bottle (a simple calculation based on such pill counts is sketched after this list)
  • Using self-reports
  • Using "Trackcap" bottles, which track the number of times the bottle is opened.

Managing pain

Health psychology attempts to find treatments to reduce or eliminate pain, as well as understand pain anomalies such as episodic analgesia, causalgia, neuralgia, and phantom limb pain. Although the task of measuring and describing pain has been problematic, the development of the McGill Pain Questionnaire has helped make progress in this area. Treatments for pain involve patient-administered analgesia, acupuncture (found to be effective in reducing pain for osteoarthritis of the knee), biofeedback, and cognitive behavior therapy.

Health psychologist roles

Below are some examples of the types of positions held by health psychologists within applied settings such as the UK's NHS and private practice.
  • Consultant health psychologist: A consultant health psychologist will take a lead for health psychology within public health, including managing tobacco control and smoking cessation services and providing professional leadership in the management of health trainers.
  • Principal health psychologist: A principal health psychologist could, for example, lead the health psychology service within one of the UK’s leading heart and lung hospitals, providing a clinical service to patients and advising all members of the multidisciplinary team.
  • Health psychologist: An example of a health psychologist's role would be to provide health psychology input to a center for weight management, including psychological assessment and treatment, the development and delivery of a tailored weight management program, and advice on approaches to improve adherence to health advice and medical treatment.
  • Research psychologist: Research health psychologists carry out health psychology research, for example, exploring the psychological impact of receiving a diagnosis of dementia, or evaluating ways of providing psychological support for people with burn injuries. Research can also be in the area of health promotion, for example investigating the determinants of healthy eating or physical activity or understanding why people misuse substances.
  • Health psychologist in training/assistant health psychologist: As an assistant/in training, a health psychologist will gain experience assessing patients, delivering psychological interventions to change health behaviors, and conducting research, whilst being supervised by a qualified health psychologist.

Training

In the UK, health psychologists are registered by the Health Professions Council (HPC) and have trained to a level that makes them eligible for full membership of the Division of Health Psychology within the BPS. Registered health psychologists who are chartered with the BPS will have undertaken a minimum of six years of training and will have specialized in health psychology for a minimum of three years. Health psychologists in training must have completed BPS Stage 1 training and be registered on the BPS Stage 2 training route or with a BPS-accredited university doctoral health psychology program. Once qualified, health psychologists can work in a range of settings, for example the NHS, universities, schools, private healthcare, and research and charitable organizations. A health psychologist in training might be working within applied settings while working towards registration and chartered status. A health psychologist will have demonstrated competencies in all of the following areas:
  • professional skills (including implementing ethical and legal standards, communication, and teamwork),
  • research skills (including designing, conducting, and analyzing psychological research in numerous areas),
  • consultancy skills (including planning and evaluation),
  • teaching and training skills (including knowledge of designing, delivering, and evaluating large- and small-scale training programs),
  • intervention skills (including delivery and evaluation of behavior change interventions).
All qualified health psychologists must also engage in and record their continuing professional development (CPD) for psychology each year throughout their career.

Representation of a Lie group

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Representation_of_a_Lie_group...