
Saturday, May 25, 2019

Mathematical model

From Wikipedia, the free encyclopedia

A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in the social sciences (such as economics, psychology, sociology, political science).

A model may help to explain a system, to study the effects of its different components, and to make predictions about behavior.

Elements of a mathematical model

Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. 

In the physical sciences, a traditional mathematical model contains most of the following elements:
  1. Governing equations
  2. Supplementary sub-models
    1. Defining equations
    2. Constitutive equations
  3. Assumptions and constraints
    1. Initial and boundary conditions
    2. Classical constraints and kinematic equations

Classifications

Mathematical models are usually composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, functions, differential operators, etc. Variables are abstractions of system parameters of interest, that can be quantified. Several classification criteria can be used for mathematical models according to their structure:
  • Linear vs. nonlinear: If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model.
    Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
  • Static vs. dynamic: A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations.
  • Explicit vs. implicit: If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters that are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method; in such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.
  • Discrete vs. continuous: A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model, while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, the temperatures and stresses in a solid, and the electric field that applies continuously over the entire model due to a point charge.
  • Deterministic vs. probabilistic (stochastic): A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
  • Deductive, inductive, or floating: A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model.
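As a sketch of the linearity distinction above: a statistical model can be nonlinear in the predictor x yet linear in its parameters, so ordinary least squares still applies. The data and parameter values below are purely illustrative.

```python
import numpy as np

# Model y = a + b*x + c*x**2: nonlinear in the predictor x,
# but linear in the parameters (a, b, c).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.01, x.size)

# Because the model is linear in (a, b, c), the fit reduces to a
# linear least-squares problem with design matrix columns [1, x, x^2].
X = np.column_stack([np.ones_like(x), x, x**2])
params, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b, c = params   # recovered parameters, close to (1, 2, 3)
```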

Significance in the natural sciences

Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. 

Throughout history, increasingly accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits the theory of relativity and quantum mechanics must be used; even these theories cannot model or explain all phenomena, such as black holes, whether separately or together. It is possible to obtain the less accurate models in appropriate limits: for example, relativistic mechanics reduces to Newtonian mechanics at speeds much less than the speed of light, and quantum mechanics reduces to classical physics when the quantum numbers are high. For example, the de Broglie wavelength of a tennis ball is insignificantly small, so classical physics is a good approximation in this case.
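The tennis-ball remark can be checked with a one-line calculation; the mass and speed below are assumed, representative values.

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34   # Planck constant, J*s
m = 0.057       # assumed tennis-ball mass, kg
v = 50.0        # assumed speed, m/s

wavelength = h / (m * v)
# The result is on the order of 1e-34 m, vastly smaller than the ball
# itself, so classical mechanics is an excellent approximation here.
```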

It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and are therefore modeled approximately on a computer: a model that is computationally feasible is built from the basic laws or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.

Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.

Some applications

Since prehistoric times, simple models such as maps and diagrams have been used.

Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.

A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types: real or integer numbers, boolean values, or strings, for example. The variables represent some properties of the system, for example, measured system outputs often in the form of signals, timing data, counters, and event occurrence (yes/no). The actual model is the set of functions that describe the relations between the different variables.

Building blocks

In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables.
Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables). 

Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. 

For example, economists often apply linear algebra when using input-output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
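The input-output idea can be sketched with a hypothetical two-sector Leontief model: with technical-coefficient matrix A and final-demand vector d, gross output x satisfies x = Ax + d, a linear system solved directly. All numbers are illustrative.

```python
import numpy as np

# Hypothetical technical coefficients: entry A[i, j] is the amount of
# sector i's output needed to produce one unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
d = np.array([100.0, 50.0])   # final demand for each sector

# Solve (I - A) x = d for the gross output vector x.
x = np.linalg.solve(np.eye(2) - A, d)
```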

A priori information

To analyse something with a typical "black box" approach, only the stimulus/response behavior is accounted for, in order to infer the (unknown) contents of the box. The usual representation of this black-box system is a data flow diagram centered on the box.
 
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take. 

Usually it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, white-box models are usually considered easier: if the information has been used correctly, the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function. But we are still left with several unknown parameters: how rapidly does the amount of medicine decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely white-box model; these parameters have to be estimated through some means before one can use the model.
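A minimal sketch of the medicine example, assuming the exponential model C(t) = C0·exp(−k·t) and two hypothetical blood measurements; taking logarithms makes the two unknown parameters easy to recover.

```python
import math

# Two hypothetical measurements of drug concentration in the blood.
t1, c1 = 1.0, 8.0   # hours, mg/L
t2, c2 = 5.0, 2.0

# Assumed model: C(t) = c0 * exp(-k * t). Solving the two equations:
k = (math.log(c1) - math.log(c2)) / (t2 - t1)   # decay rate, 1/hour
c0 = c1 * math.exp(k * t1)                      # initial amount, mg/L
```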

In black-box models one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information, we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is the neural network, which usually does not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification, can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.

Subjective information

Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data. 

An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
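One standard way to formalize the bent-coin example is a Beta prior on the heads probability, updated by conjugacy after the single toss. The prior parameters below are an assumed, purely illustrative choice.

```python
# Subjective Beta prior on p = P(heads); parameters chosen by
# (hypothetically) inspecting the bend and guessing the coin favors tails.
alpha, beta = 2.0, 4.0

prior_mean = alpha / (alpha + beta)

# One toss observed: heads. By Beta-Bernoulli conjugacy the posterior
# is Beta(alpha + 1, beta).
alpha_post, beta_post = alpha + 1.0, beta
posterior_mean = alpha_post / (alpha_post + beta_post)
```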

Complexity

In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.

For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only.

Training and tuning

Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
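As a sketch of parameter determination by curve fitting: for a straight-line model y = a + b·x, the least-squares parameters have a closed form. The measurements below are hypothetical.

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.0, 8.8]   # hypothetical measurements

# Closed-form least squares for y = a + b*x.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
a = mean_y - b * mean_x
```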

Model evaluation

A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.

Fit to empirical data

Usually the easiest part of model evaluation is checking whether a model fits experimental measurements or other empirical data. In models with parameters, a common approach to test this fit is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics. 
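The split can be sketched with a toy model whose single parameter is the training-set mean; the data are hypothetical.

```python
data = [2.0, 2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0]   # hypothetical measurements
train, verify = data[:4], data[4:]                # disjoint subsets

# "Training": the model's one parameter is estimated from training data only.
prediction = sum(train) / len(train)

# "Verification": mean squared error on data not used to set the parameter.
verification_error = sum((v - prediction) ** 2 for v in verify) / len(verify)
```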

Defining a metric to measure distances between observed and predicted data is a useful tool of assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. 

While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from non-parametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form.

Scope of the model

Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. 

The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation.

As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles travelling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics.

Philosophical considerations

Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied.

An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology.

Examples

  • One of the popular examples in computer science is the mathematical modeling of various machines; an example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to its deterministic nature, is implementable in hardware and software for solving various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contain an even number of 0s.
The state diagram for M

M = (Q, Σ, δ, q0, F), where
  • Q = {S1, S2},
  • Σ = {0, 1},
  • q0 = S1,
  • F = {S1}, and
  • δ is defined by the following state-transition table:

          0     1
    S1    S2    S1
    S2    S1    S2
The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted. 

The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star, e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
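A direct simulation of M is straightforward; the state-transition table becomes a dictionary (a sketch, not tied to any particular automata library).

```python
# Transition function delta of the DFA M: (state, symbol) -> next state.
delta = {
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}

def accepts(s: str) -> bool:
    """Return True if the binary string s contains an even number of 0s."""
    state = "S1"                  # start state q0
    for symbol in s:
        state = delta[(state, symbol)]
    return state == "S1"          # accepting states F = {S1}
```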
  • Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.
  • Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
  • Population growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and widely used population growth model is the logistic function, and its extensions.
  • Model of a particle in a potential field. In this model we consider a particle as a point of mass m which describes a trajectory in space; the trajectory is modeled by a function x : R → R³ giving its coordinates in space as a function of time. The potential field is given by a function V : R³ → R and the trajectory x is the solution of the differential equation:

    m d²x(t)/dt² = −∇V(x(t))

that can also be written as:

    m ẍ(t) = F(x(t)), where F = −∇V is the force field.

Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
  • Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of n commodities labeled 1, 2, ..., n, each with a market price p1, p2, ..., pn. The consumer is assumed to have an ordinal utility function U (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities x1, x2, ..., xn consumed. The model further assumes that the consumer has a budget M which is used to purchase a vector x1, x2, ..., xn in such a way as to maximize U(x1, x2, ..., xn). The problem of rational behavior in this model then becomes an optimization problem, that is:

    maximize U(x1, x2, ..., xn)

subject to:

    p1·x1 + p2·x2 + ... + pn·xn ≤ M,
    xi ≥ 0 for all i = 1, 2, ..., n.

This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria.
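The population-growth bullet above can be sketched numerically: Malthusian growth is unbounded exponential growth, while the logistic model saturates at a carrying capacity K. The parameter values are illustrative.

```python
import math

P0, r, K = 10.0, 0.5, 1000.0   # illustrative initial size, rate, capacity

def malthusian(t):
    # Malthusian model: P(t) = P0 * exp(r * t), grows without bound.
    return P0 * math.exp(r * t)

def logistic(t):
    # Logistic model: growth slows as the population approaches K.
    return K / (1.0 + (K / P0 - 1.0) * math.exp(-r * t))
```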

Numeracy

From Wikipedia, the free encyclopedia

Children in Laos have fun as they improve numeracy with "Number Bingo." They roll three dice, construct an equation from the numbers to produce a new number, then cover that number on the board, trying to get 4 in a row.
 
Numeracy is the ability to reason and to apply simple numerical concepts. Basic numeracy skills consist of comprehending fundamental arithmetic operations such as addition, subtraction, multiplication, and division. For example, if one can understand simple mathematical equations such as 2 + 2 = 4, then one would be considered to possess at least basic numeric knowledge. Substantial aspects of numeracy also include number sense, operation sense, computation, measurement, geometry, probability and statistics. A numerically literate person can manage and respond to the mathematical demands of life.

By contrast, innumeracy (the lack of numeracy) can have a negative impact. Numeracy influences career decisions and risk perception, for example in health decisions, where innumeracy distorts risk perception and may negatively affect economic choices. "Greater numeracy has been associated with reduced susceptibility to framing effects, less influence of nonnumerical information such as mood states, and greater sensitivity to different levels of numerical risk".

Representation of numbers

Humans have evolved to mentally represent numbers in two major ways from observation (not formal math). These representations are often thought to be innate, to be shared across human cultures, to be common to multiple species, and not to be the result of individual learning or cultural transmission. They are:
  1. Approximate representation of numerical magnitude, and
  2. Precise representation of the quantity of individual items.
Approximate representations of numerical magnitude imply that one can only roughly estimate and compare amounts when the numbers are large. For example, one experiment showed children and adults arrays of many dots. After briefly observing them, both groups could accurately estimate the approximate number of dots. However, distinguishing differences between large numbers of dots proved to be more challenging.

Precise representations of distinct individuals demonstrate that people are more accurate in estimating amounts and distinguishing differences when the numbers are relatively small. For example, in one experiment, an experimenter presented an infant with two piles of crackers, one with two crackers the other with three. The experimenter then covered each pile with a cup. When allowed to choose a cup, the infant always chose the cup with more crackers because the infant could distinguish the difference.

Both systems (approximate representation of magnitude and precise representation of the quantity of individual items) have limited power. For example, neither allows representations of fractions or negative numbers. More complex representations require education. However, achievement in school mathematics correlates with an individual's unlearned approximate number sense.

Definitions and assessment

Fundamental (or rudimentary) numeracy skills include understanding of the real number line, time, measurement, and estimation. Fundamental skills include basic skills (the ability to identify and understand numbers) and computational skills (the ability to perform simple arithmetical operations and compare numerical magnitudes). 

More sophisticated numeracy skills include understanding of ratio concepts (notably fractions, proportions, percentages, and probabilities), and knowing when and how to perform multistep operations. Two categories of skills are included at the higher levels: the analytical skills (the ability to understand numerical information, such as required to interpret graphs and charts) and the statistical skills (the ability to apply higher probabilistic and statistical computation, such as conditional probabilities). 

A variety of tests have been developed for assessing numeracy and health numeracy.

Childhood influences

The first couple of years of childhood are considered to be a vital part of life for the development of numeracy and literacy. There are many components that play key roles in the development of numeracy at a young age, such as Socioeconomic Status (SES), parenting, Home Learning Environment (HLE), and age.

Socioeconomic status

Children who are brought up in families with high SES tend to be more engaged in developmentally enhancing activities. These children are more likely to develop the necessary abilities to learn and to become more motivated to learn. More specifically, a mother's education level is considered to have an effect on the child's ability to achieve in numeracy. That is, mothers with a high level of education will tend to have children who succeed more in numeracy.

A number of studies have, moreover, shown that a mother's education level is strongly correlated with her average age at marriage. More precisely, females who married later tended to have greater autonomy, better chances of a skills premium, and a higher level of education (i.e. numeracy), and hence were more likely to pass this experience on to their children.

Parenting

Parents are encouraged to collaborate with their child in simple learning exercises, such as reading a book, painting, drawing, and playing with numbers. Using complex language, being more responsive towards the child, and establishing warm interactions are also recommended to parents, as these behaviors have been linked with positive numeracy outcomes. When discussing beneficial parenting behaviors, a feedback loop is formed, because pleased parents are more willing to interact with their child, which in turn promotes better development in the child.

Home-learning environment

Along with parenting and SES, a strong home-learning environment increases the likelihood of the child being prepared for comprehending complex mathematical schooling. For example, if a child is influenced by many learning activities in the household, such as puzzles, coloring books, mazes, or books with picture riddles, then they will be more prepared to face school activities.

Age

Age is taken into account when discussing the development of numeracy in children. Children under the age of 5 have the best opportunity to absorb basic numeracy skills; after the age of 7, achievement of basic numeracy skills becomes less influential. For example, a study compared the reading and mathematics abilities of children aged 5 and 7, each in three different mental-capacity groups (underachieving, average, and overachieving). The differences in the amount of knowledge retained were greater between the three groups at age 5 than at age 7, suggesting that younger children have a greater opportunity to retain information such as numeracy.

Literacy

There seems to be a relationship between literacy and numeracy, which can be seen in young children. Depending on the level of literacy or numeracy at a young age, one can predict the growth of literacy and/or numeracy skills in future development. There is some evidence that humans may have an inborn sense of number. In one study, for example, five-month-old infants were shown two dolls, which were then hidden with a screen. The babies saw the experimenter pull one doll from behind the screen. Without the child's knowledge, a second experimenter could remove or add dolls, unseen behind the screen. When the screen was removed, the infants showed more surprise at an unexpected number (for example, if there were still two dolls). Some researchers have concluded that the babies were able to count, although others doubt this and claim the infants noticed surface area rather than number.

Employment

Numeracy has a huge impact on employment. In a work environment, numeracy can be a controlling factor affecting career achievements and failures. Many professions require individuals to have a well-developed sense of numeracy, for example: mathematician, physicist, accountant, actuary, risk analyst, financial analyst, engineer, and architect. Even outside these specialized areas, the lack of proper numeracy skills can reduce employment opportunities and promotions, resulting in unskilled manual careers, low-paying jobs, and even unemployment. For example, carpenters and interior designers need to be able to measure, use fractions, and handle budgets. Another example of numeracy influencing employment was demonstrated at the Poynter Institute, which has included numeracy as one of the skills required of competent journalists. Max Frankel, former executive editor of the New York Times, argues that "deploying numbers skillfully is as important to communication as deploying verbs". Unfortunately, journalists often show poor numeracy skills. In a study by the Society of Professional Journalists, 58% of job applicants interviewed by broadcast news directors lacked an adequate understanding of statistical materials.

When assessing applicants for an employment position, psychometric numerical reasoning tests, created by occupational psychologists involved in the study of numeracy, are used to assess an applicant's ability to comprehend and apply numbers. These tests are sometimes administered with a time limit, requiring the test-taker to think quickly and concisely. Research has shown that these tests are useful in evaluating potential applicants because, unlike interview questions, they do not allow the applicants to prepare, suggesting that an applicant's results are reliable and accurate.

These psychometric numerical reasoning tests first became prevalent during the 1980s, following the pioneering work of psychologists such as P. Kline. In 1986, Kline published A Handbook of Test Construction: Introduction to Psychometric Design, which argued that psychometric testing could provide reliable and objective results; these findings could then be used to assess a candidate's abilities in numeracy. Psychometric numerical reasoning tests continue to be used in employment assessments to differentiate and evaluate applicants fairly and accurately.

Innumeracy and dyscalculia

Innumeracy is a neologism coined by analogy with illiteracy; it refers to a lack of ability to reason with numbers. The term was coined by cognitive scientist Douglas Hofstadter and popularized in 1989 by mathematician John Allen Paulos in his book Innumeracy: Mathematical Illiteracy and its Consequences.

Developmental dyscalculia refers to a persistent and specific impairment of basic numerical-arithmetical skills learning in the context of normal intelligence.

Patterns and differences

The root causes of innumeracy vary. Innumeracy has been observed in those who suffered poor education and childhood deprivation of numeracy. It can also appear in children during the transition from the numerical skills acquired before schooling to the new skills taught in school, because of limits on their capacity to comprehend the material. Patterns of innumeracy have also been observed depending on age, gender, and race. Older adults tend to have lower numeracy skills than younger adults, and men have been found to have higher numeracy skills than women. Some studies seem to indicate that young people of African heritage tend to have lower numeracy skills. In the Trends in International Mathematics and Science Study (TIMSS), children at fourth grade (average age 10 to 11 years) and eighth grade (average age 14 to 15 years) from 49 countries were tested on mathematical comprehension. The assessment included tests of number, algebra (called patterns and relationships at fourth grade), measurement, geometry, and data. The 2003 study found that children from Singapore had the highest performance at both grade levels; Hong Kong SAR, Japan, and Taiwan also showed high levels of numeracy, while the lowest scores were found in countries such as South Africa, Ghana, and Saudi Arabia. The study also showed a noticeable difference between boys and girls, with some exceptions: for example, girls performed significantly better in Singapore, and boys performed significantly better in the United States.

Theory

There is a theory that innumeracy is more common than illiteracy when dividing cognitive abilities into two separate categories. David C. Geary, a notable cognitive developmental and evolutionary psychologist from the University of Missouri, created the terms "biological primary abilities" and "biological secondary abilities". Biological primary abilities evolve over time and are necessary for survival. Such abilities include speaking a common language or knowledge of simple mathematics. Biological secondary abilities are attained through personal experiences and cultural customs, such as reading or high level mathematics learned through schooling. Literacy and numeracy are similar in the sense that they are both important skills used in life. However, they differ in the sorts of mental demands each makes. Literacy consists of acquiring vocabulary and grammatical sophistication, which seem to be more closely related to memorization, whereas numeracy involves manipulating concepts, such as in calculus or geometry, and builds from basic numeracy skills. This could be a potential explanation of the challenge of being numerate.

Innumeracy and risk perception in health decision-making

Health numeracy has been defined as "the degree to which individuals have the capacity to access, process, interpret, communicate, and act on numerical, quantitative, graphical, biostatistical, and probabilistic health information needed to make effective health decisions". The concept of health numeracy is a component of the concept of health literacy. Health numeracy and health literacy can be thought of as the combination of skills needed for understanding risk and making good choices in health-related behavior. 

Health numeracy requires basic numeracy but also more advanced analytical and statistical skills. For instance, health numeracy requires the ability to understand probabilities or relative frequencies in various numerical and graphical formats, and to engage in Bayesian inference while avoiding the errors sometimes associated with Bayesian reasoning. Health numeracy also requires understanding terms with definitions specific to the medical context. For instance, although "survival" and "mortality" are complementary in common usage, these terms are not complementary in medicine (see five-year survival rate). Innumeracy is also a very common problem in risk perception in health-related behavior; it is associated with patients, physicians, journalists, and policymakers. Those who lack or have limited health numeracy skills run the risk of making poor health-related decisions because of an inaccurate perception of information. For example, if a patient has been diagnosed with breast cancer, being innumerate may hinder her ability to comprehend her physician's recommendations, or even the severity of the health concern. One study found that innumerate people tended to overestimate their chances of survival and even to choose lower-quality hospitals. Innumeracy also makes it difficult or impossible for some patients to read medical graphs correctly, and some authors have accordingly distinguished graph literacy from numeracy. Indeed, many doctors exhibit innumeracy when attempting to explain a graph or statistics to a patient. A misunderstanding between a doctor and patient, due to either or both being unable to comprehend numbers effectively, could result in serious health consequences.
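The kind of Bayesian reasoning described above can be made concrete with a short sketch. The numbers below (base rate, sensitivity, specificity) are hypothetical illustrations, not figures from any study; the point is that a positive result on an accurate test for a rare condition still implies a low probability of disease, a calculation that innumerate patients and even clinicians often get wrong.

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = base_rate * sensitivity          # diseased and test positive
    false_pos = (1 - base_rate) * (1 - specificity)  # healthy but test positive
    return true_pos / (true_pos + false_pos)

# A rare condition (1% base rate) with a fairly accurate test
# (90% sensitivity, 91% specificity):
ppv = positive_predictive_value(0.01, 0.90, 0.91)
print(round(ppv, 3))  # about 0.092: roughly a 9% chance, not 90%
```

Presenting the same calculation as natural frequencies (out of 10,000 people, 90 true positives versus 891 false positives) is one of the formats mentioned below that helps low-numeracy readers.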

Different presentation formats of numerical information, for instance natural frequencies and icon arrays, have been evaluated to assist both low-numeracy and high-numeracy individuals.

Evolution of numeracy

In the field of economic history, numeracy is often used to assess human capital in periods for which no data on schooling or other educational measures exist. Using a method called age-heaping, researchers such as Professor Baten study the development and inequality of numeracy over time and across regions. For example, Baten and Hippe find a numeracy gap between regions in western and central Europe and the rest of Europe for the period 1790–1880. At the same time, their data analysis reveals that these differences, as well as within-country inequality, decreased over time. Taking a similar approach, Baten and Fourie find overall high levels of numeracy for people in the Cape Colony (late 17th to early 19th century).

In contrast to these studies comparing numeracy across countries or regions, it is also possible to analyze numeracy within countries. For example, Baten, Crayen and Voth look at the effects of war on numeracy in England, and Baten and Priwitzer find a "military bias" in what is today western Hungary: people opting for a military career had, on average, better numeracy indicators (1st century BCE to 3rd century CE).
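The age-heaping method behind these studies rests on a simple calculation: people with weak numeracy tend to round their reported ages to multiples of five. The sketch below uses the Whipple index, a standard demographic measure of age heaping (not necessarily the exact indicator used by Baten and colleagues), applied to hypothetical age lists.

```python
def whipple_index(ages):
    """Whipple index over reported ages 23-62.

    100 means no heaping (ages ending in 0 or 5 occur exactly 1/5 of
    the time, as expected); 500 means every reported age is heaped.
    """
    in_range = [a for a in ages if 23 <= a <= 62]
    heaped = sum(1 for a in in_range if a % 5 == 0)
    return 100 * heaped / (len(in_range) / 5)

accurate = list(range(23, 63))      # each age reported once: no heaping
print(whipple_index(accurate))      # 100.0

rounded = [25] * 10 + [24] * 10     # half the reports heaped on a multiple of 5
print(whipple_index(rounded))       # 250.0: strong heaping
```

A higher index in a historical census or muster roll is then read as a proxy for lower numeracy in that population.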

Psychopathology

Psychopathology is the scientific study of mental disorders, including efforts to understand their genetic, biological, psychological, and social causes; develop classification schemes (nosology) which can improve treatment planning and treatment outcomes; understand the course of psychiatric illnesses across all stages of development; more fully understand the manifestations of mental disorders; and investigate potentially effective treatments.
 
At least conceptually, psychopathology is a subset of pathology, which is the "... scientific study of the nature of disease and its causes, processes, development, and consequences." Psychopathology is distinct from psychiatry by virtue of being a theoretical field of scientific research rather than a specialty of medical practice.

History

Early explanations for mental illnesses were influenced by religious belief and superstition. Psychological conditions that are now classified as mental disorders were initially attributed to possession by evil spirits, demons, and the devil. This idea was widely accepted up until the sixteenth and seventeenth centuries. Individuals who suffered from these so-called "possessions" were tortured as treatment; doctors used this technique in the hope of bringing their patients back to sanity. Those who failed to return to sanity after torture were executed.

The Greek physician Hippocrates was one of the first to reject the idea that mental disorders were caused by possession of demons or the devil. He firmly believed the symptoms of mental disorders were due to diseases originating in the brain. Hippocrates suspected that these states of insanity were due to imbalances of fluids in the body. He identified these fluids to be four in particular: blood, black bile, yellow bile, and phlegm.

Furthermore, around the time of Hippocrates, the philosopher Plato argued that the mind, body, and spirit work as a unit, and that any imbalance among these aspects of the individual could bring distress or lack of harmony. This philosophical idea remained influential until the seventeenth century.

During the eighteenth-century Romantic movement, the notion that healthy parent-child relationships provided sanity became prominent, and the philosopher Jean-Jacques Rousseau introduced the idea that trauma in childhood could have negative implications later in adulthood.

In the nineteenth century, greatly influenced by Rousseau's ideas and philosophy, the Austrian physician Sigmund Freud introduced psychotherapy and became the father of psychoanalysis, a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst. Talk therapy originated in his ideas about the individual's experiences and the natural human effort to make sense of the world and life.

As the study of psychiatric disorders

The scientific discipline of psychopathology was founded by Karl Jaspers in 1913. It was referred to as "static understanding" and its purpose was to graphically recreate the "mental phenomenon" experienced by the client.

The study of psychopathology is interdisciplinary, with contributions coming from clinical, social, and developmental psychology, as well as neuropsychology and other psychology subdisciplines; psychiatry; neuroscience generally; criminology; social work; sociology; epidemiology; statistics; and more. Practitioners in clinical and academic fields are referred to as psychopathologists.

How do scientists (and people in general) distinguish between unusual or odd behavior on one hand, and a mental disorder on the other? One strategy is to assess a person along four dimensions: deviance, distress, dysfunction, and danger, known collectively as the four Ds.

The four Ds

A description of the four Ds when defining abnormality:
  1. Deviance: this term describes the idea that specific thoughts, behaviours, and emotions are considered deviant when they are unacceptable or uncommon in society. Clinicians must remember, however, that minority groups are not deviant simply because they differ from other groups. An individual's actions are therefore defined as deviant or abnormal when their behaviour is deemed unacceptable by the culture they belong to. Because many disorders share related patterns of deviance, they need to be evaluated within a differential diagnostic model.
  2. Distress: this term accounts for the negative feelings of the individual with the disorder, who may feel deeply troubled and affected by the illness. Behaviours and feelings that cause distress to the individual or to those around them are considered abnormal. Distress is related to dysfunction but the two do not always coincide: an individual can be highly dysfunctional while experiencing minimal distress. The defining characteristic of distress is not the dysfunction itself but the degree to which the individual is distressed by an issue.
  3. Dysfunction: this term involves maladaptive behaviour that impairs the individual's ability to perform normal daily functions, such as getting ready for work in the morning or driving a car. The maladaptive behaviour must be problematic enough to warrant a diagnosis, so clinicians look for dysfunction across the individual's life, both where it is plainly observable and where it is less likely to appear. Such maladaptive behaviours prevent the individual from living a normal, healthy lifestyle. However, dysfunctional behaviour is not always caused by a disorder; it may be voluntary, such as engaging in a hunger strike.
  4. Danger: this term involves dangerous or violent behaviour directed at the individual or at others in the environment. The two important forms are danger to self and danger to others. Some degree of danger accompanies every diagnosis, along a continuum of severity. An example of dangerous behaviour that may suggest a psychological disorder is engaging in suicidal activity. Behaviours and feelings that are potentially harmful to an individual or to those around them are seen as abnormal.

The p factor

Instead of conceptualizing psychopathology as consisting of several discrete categories of mental disorders, groups of psychological and psychiatric scientists have proposed a "general psychopathology" construct, named the p factor because of its conceptual similarity with the g factor of general intelligence. Although researchers initially conceived a tripartite (three-factor) explanation for psychopathology generally, subsequent study provided more evidence for a unitary factor that is sequentially comorbid, recurrent/chronic, and exists on a continuum of severity and chronicity. Thus, the p factor is a dimensional, as opposed to a categorical, construct.

Higher scores on the p factor dimension have been found to be correlated with higher levels of functional impairment, greater incidence of problems in developmental history, and more diminished early-life brain function. In addition, those with higher levels of the p factor are more likely to have inherited a genetic predisposition to mental illness. The existence of the p factor may explain why it has been "... challenging to find causes, consequences, biomarkers, and treatments with specificity to individual mental disorders."

The p factor has been likened to the g factor of general intelligence, which is also a dimensional system by which overall cognitive ability can be defined. As psychopathology has typically been studied and implemented as a categorical system, like the Diagnostic and Statistical Manual system developed for clinicians, the dimensional system of the p factor provides an alternative conceptualization of mental disorders that might improve our understanding of psychopathology in general; lead to more precise diagnoses; and facilitate more effective treatment approaches.

Benjamin Lahey and colleagues first proposed a general psychopathology factor in 2012.

As mental symptoms

The term psychopathology may also be used to denote behaviors or experiences which are indicative of mental illness, even if they do not constitute a formal diagnosis. For example, the presence of a hallucination may be considered as a psychopathological sign, even if there are not enough symptoms present to fulfill the criteria for one of the disorders listed in the DSM or ICD.

In a more general sense, any behaviour or experience which causes impairment, distress or disability, particularly if it is thought to arise from a functional breakdown in either the cognitive or neurocognitive systems in the brain, may be classified as psychopathology. It remains unclear how strong the distinction between maladaptive traits and mental disorders actually is; for example, neuroticism is often described as an individual's general level of minor psychiatric symptoms.

Diagnostic and Statistical Manual of Mental Disorders

The Diagnostic and Statistical Manual of Mental Disorders (DSM) is a guideline for the diagnosis and understanding of mental disorders. It serves as a reference for a range of professionals in medicine and mental health, particularly in the United States. These professionals include psychologists, counselors, physicians, social workers, psychiatric nurses and nurse practitioners, marriage and family therapists, and more.

Examples of mental disorders classified within the DSM include:
  • Major depressive disorder is a mood disorder defined by symptoms of loss of motivation, decreased mood, lack of energy and thoughts of suicide.
  • Bipolar disorders are mood disorders characterized by depressive and manic episodes of varying lengths and degrees.
  • Dysthymia is a mood disorder similar to depression. Characterized by a persistent low mood, dysthymia is a less debilitating form of depression with no break in ordinary functioning.
  • Schizophrenia is characterized by altered perception of reality, including delusional thoughts, hallucinations, and disorganized speech and behaviour. Most cases arise in patients in their late teens or early adulthood, but can also appear later on in life.
  • Borderline personality disorder occurs in early adulthood for most patients; specific symptoms include patterns of unstable and intense relationships, chronic feelings of emptiness, emotional instability, paranoid thoughts, intense episodes of anger, and suicidal behavior.
  • Bulimia nervosa ("binge and purge") is an eating disorder characterized by recurring episodes of uncontrollable binge eating followed by a compulsion to vomit, take laxatives, or exercise excessively. It usually begins in adolescence, but most individuals do not seek help until later in life, when their eating habits are harder to change.
  • Phobias are found in people of all ages and are characterized by an abnormal response to fear or danger. Persons diagnosed with phobias suffer from terror and uncontrollable fear, with exaggerated reactions to dangers that are not in reality life-threatening, usually accompanied by physical reactions of extreme fear: panic, rapid heartbeat, and/or shortened breathing.
  • Pyromania is indicated by fascination, curiosity, or attraction to deliberately setting fires. Pyromaniacs find pleasure and/or relief in watching things burn. It can occur due to delusional thinking, impaired judgement caused by other mental disorders, or simply as aggressive behaviour expressing anger.

DSM/RDoc debate

Some scholars have argued that the field should switch from the DSM's categorical approach to mental disorders to the Research Domain Criteria (RDoC) dimensional approach, although the consensus at present is to retain the DSM for treatment, insurance, and related purposes, while emphasizing RDoC conceptualizations for planning and funding psychiatric research.

Cognitive bias

A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own "subjective social reality" from their perception of the input. An individual's construction of social reality, not the objective input, may dictate their behaviour in the social world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.

Some cognitive biases are presumably adaptive. Cognitive biases may lead to more effective actions in a given context. Furthermore, cognitive biases enable faster decisions, which can be desirable when timeliness is more valuable than accuracy, as illustrated by heuristics. Other cognitive biases are a "by-product" of human processing limitations, resulting from a lack of appropriate mental mechanisms (bounded rationality) or simply from a limited capacity for information processing.

A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics. Kahneman and Tversky (1996) argue that cognitive biases have practical implications for areas including clinical judgment, entrepreneurship, finance, and management.

Overview

Bias arises from various processes that are sometimes difficult to distinguish. These include
  • information-processing shortcuts (heuristics)
  • noisy information processing (distortions in the process of storage in and retrieval from memory)
  • the brain's limited information processing capacity
  • emotional and moral motivations
  • social influence
The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972 and grew out of their experience of people's innumeracy, or inability to reason intuitively with the greater orders of magnitude. Tversky, Kahneman and colleagues demonstrated several replicable ways in which human judgments and decisions differ from rational choice theory. Tversky and Kahneman explained human differences in judgement and decision making in terms of heuristics. Heuristics involve mental shortcuts which provide swift estimates about the possibility of uncertain occurrences. Heuristics are simple for the brain to compute but sometimes introduce "severe and systematic errors."

For example, the representativeness heuristic is defined as the tendency to "judge the frequency or likelihood" of an occurrence by the extent to which the event "resembles the typical case". The "Linda problem" illustrates the representativeness heuristic (Tversky & Kahneman, 1983). Participants were given a description of "Linda" that suggests Linda might well be a feminist (e.g., she is said to be concerned about discrimination and social justice issues). They were then asked whether they thought Linda was more likely to be a "(a) bank teller" or a "(b) bank teller and active in the feminist movement". A majority chose answer (b). This error (mathematically, answer (b) cannot be more likely than answer (a)) is an example of the "conjunction fallacy"; Tversky and Kahneman argued that respondents chose (b) because it seemed more "representative" or typical of persons who might fit the description of Linda. The representativeness heuristic may lead to errors such as activating stereotypes and inaccurate judgments of others (Haselton et al., 2005, p. 726).
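The claim that answer (b) cannot be more likely than answer (a) is simple probability arithmetic: for any events A and B, P(A and B) = P(A) × P(B | A) ≤ P(A), because P(B | A) ≤ 1. The numbers below are hypothetical, chosen only to illustrate the inequality:

```python
# Conjunction rule: P(teller AND feminist) can never exceed P(teller),
# no matter how "representative" the conjunction feels.
p_teller = 0.05                  # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.90   # even if she is very likely a feminist
p_both = p_teller * p_feminist_given_teller

assert p_both <= p_teller        # holds for ANY choice of probabilities
print(round(p_both, 3), "<=", p_teller)
```

Choosing (b) therefore violates the conjunction rule regardless of how the individual probabilities are estimated.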

Alternatively, critics of Kahneman and Tversky such as Gerd Gigerenzer argue that heuristics should not lead us to conceive of human thinking as riddled with irrational cognitive biases, but rather to conceive rationality as an adaptive tool that is not identical to the rules of formal logic or the probability calculus. Nevertheless, experiments such as the "Linda problem" grew into heuristics and biases research programs, which spread beyond academic psychology into other disciplines including medicine and political science.

Types

Biases can be distinguished on a number of dimensions. For example,
  • there are biases specific to groups (such as the risky shift) as well as biases at the individual level.
  • Some biases affect decision-making, where the desirability of options has to be considered (e.g., sunk costs fallacy).
  • Others such as illusory correlation affect judgment of how likely something is, or of whether one thing is the cause of another.
  • A distinctive class of biases affect memory, such as consistency bias (remembering one's past attitudes and behavior as more similar to one's present attitudes).
Some biases reflect a subject's motivation, for example, the desire for a positive self-image leading to egocentric bias and the avoidance of unpleasant cognitive dissonance. Other biases are due to the particular way the brain perceives, forms memories, and makes judgments. This distinction is sometimes described as "hot cognition" versus "cold cognition", as motivated reasoning can involve a state of arousal.

Among the "cold" biases,
  • some are due to ignoring relevant information (e.g., neglect of probability).
  • some involve a decision or judgement being affected by irrelevant information (for example the framing effect where the same problem receives different responses depending on how it is described; or the distinction bias where choices presented together have different outcomes than those presented separately).
  • others give excessive weight to an unimportant but salient feature of the problem (e.g., anchoring).
The fact that some biases reflect motivation, and in particular the motivation to have positive attitudes to oneself accounts for the fact that many biases are self-serving or self-directed (e.g., illusion of asymmetric insight, self-serving bias). There are also biases in how subjects evaluate in-groups or out-groups; evaluating in-groups as more diverse and "better" in many respects, even when those groups are arbitrarily-defined (ingroup bias, outgroup homogeneity bias). 

Some cognitive biases belong to the subgroup of attentional biases, which refer to paying increased attention to certain stimuli. It has been shown, for example, that people addicted to alcohol and other drugs pay more attention to drug-related stimuli. Common psychological tests to measure those biases are the Stroop task and the dot probe task.

Individuals' susceptibility to some types of cognitive biases can be measured by the Cognitive Reflection Test (CRT) developed by Frederick (2005).

List

The following is a list of the more commonly studied cognitive biases:

  • Fundamental attribution error (FAE): also known as the correspondence bias, the tendency for people to over-emphasize personality-based explanations for behaviours observed in others while under-emphasizing the role and power of situational influences on the same behaviour. Jones and Harris's (1967) classic study illustrates the FAE: despite being made aware that the target's speech direction (pro-Castro/anti-Castro) was assigned to the writer, participants ignored the situational pressures and attributed pro-Castro attitudes to the writer when the speech represented such attitudes.
  • Priming bias: the tendency to be influenced by what someone else has said, creating a preconceived idea.
  • Confirmation bias: the tendency to search for or interpret information in a way that confirms one's preconceptions. In addition, individuals may discredit information that does not support their views. Confirmation bias is related to the concept of cognitive dissonance, whereby individuals may reduce inconsistency by searching for information that re-confirms their views (Jermias, 2001, p. 146).
  • Affinity bias: the tendency to be biased toward people like ourselves.
  • Self-serving bias: the tendency to claim more responsibility for successes than for failures. It may also manifest as a tendency to evaluate ambiguous information in a way beneficial to one's interests.
  • Belief bias: when one's evaluation of the logical strength of an argument is biased by belief in the truth or falsity of the conclusion.
  • Framing: using a too-narrow approach and description of the situation or issue.
  • Hindsight bias: sometimes called the "I-knew-it-all-along" effect, the inclination to see past events as having been predictable.

A 2012 Psychological Bulletin article suggests that at least eight seemingly unrelated biases can be produced by the same information-theoretic generative mechanism. It shows that noisy deviations in the memory-based information processes that convert objective evidence (observations) into subjective estimates (decisions) can produce regressive conservatism, belief-revision conservatism (Bayesian conservatism), illusory correlations, illusory superiority (the better-than-average effect) and the worse-than-average effect, the subadditivity effect, exaggerated expectation, overconfidence, and the hard–easy effect.

Practical significance

Many social institutions rely on individuals to make rational judgments. 

The securities regulation regime largely assumes that all investors act as perfectly rational persons. In truth, actual investors face cognitive limitations from biases, heuristics, and framing effects.

A fair jury trial, for example, requires that the jury ignore irrelevant features of the case, weigh the relevant features appropriately, consider different possibilities open-mindedly, and resist fallacies such as appeal to emotion. The various biases demonstrated in these psychological experiments suggest that people will frequently fail to do all these things; however, they fail in systematic, directional ways that are predictable.

Cognitive biases are also related to the persistence of superstition and to large social issues such as prejudice, and they hinder public acceptance of non-intuitive scientific knowledge.

In some academic disciplines, however, the study of bias is very popular. In entrepreneurship research, for instance, bias is a widespread and well-studied phenomenon, because most of the decisions that concern the minds and hearts of entrepreneurs are computationally intractable.

Reducing

Because they cause systematic errors, cognitive biases cannot be compensated for using a wisdom-of-the-crowd technique of averaging answers from several people. Debiasing is the reduction of biases in judgment and decision making through incentives, nudges, and training. Cognitive bias mitigation and cognitive bias modification are forms of debiasing specifically applicable to cognitive biases and their effects. Reference class forecasting is a method for systematically debiasing estimates and decisions, based on what Daniel Kahneman has dubbed the outside view.
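Why averaging cancels random error but not a shared systematic bias can be shown with a small simulation. The quantities below (a true value of 100, a shared bias of 20, Gaussian individual noise) are illustrative assumptions, not figures from any study:

```python
# Wisdom of the crowd: averaging removes independent noise, but a bias
# shared by every judge survives in the average unchanged.
import random

random.seed(0)
true_value = 100.0
shared_bias = 20.0  # e.g., everyone anchors 20 units too high

unbiased = [true_value + random.gauss(0, 10) for _ in range(10_000)]
biased = [true_value + shared_bias + random.gauss(0, 10) for _ in range(10_000)]

print(round(sum(unbiased) / len(unbiased), 1))  # close to 100: noise averages out
print(round(sum(biased) / len(biased), 1))      # close to 120: the bias remains
```

This is the sense in which systematic errors require debiasing interventions rather than simple aggregation.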

Similar to Gigerenzer (1996), Haselton et al. (2005) state that the content and direction of cognitive biases are not "arbitrary" (p. 730). Moreover, cognitive biases can be controlled. One debiasing technique aims to decrease biases by encouraging individuals to use controlled rather than automatic processing. In relation to reducing the FAE, monetary incentives and informing participants that they will be held accountable for their attributions have been linked to an increase in accurate attributions. Training has also been shown to reduce cognitive bias. Morewedge and colleagues (2015) found that research participants exposed to one-shot training interventions, such as educational videos and debiasing games that taught mitigating strategies, exhibited significant reductions in their commission of six cognitive biases immediately and up to three months later.

Cognitive bias modification refers to the process of modifying cognitive biases in healthy people, and also to a growing area of psychological (non-pharmaceutical) therapies for anxiety, depression, and addiction called cognitive bias modification therapy (CBMT). CBMT is a sub-group of therapies within a growing area of psychological therapies based on modifying cognitive processes, with or without accompanying medication and talk therapy, sometimes referred to as applied cognitive processing therapies (ACPT). Although cognitive bias modification can refer to modifying cognitive processes in healthy individuals, CBMT is a growing area of evidence-based psychological therapy in which cognitive processes are modified to relieve suffering from serious depression, anxiety, and addiction. CBMT techniques are technology-assisted therapies delivered via a computer, with or without clinician support. CBM combines evidence and theory from the cognitive model of anxiety, cognitive neuroscience, and attentional models.

Common theoretical causes of some cognitive biases

A 2012 Psychological Bulletin article suggested that at least eight seemingly unrelated biases can be produced by the same information-theoretic generative mechanism that assumes noisy information processing during storage and retrieval of information in human memory.
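As a rough illustration of how noise alone can produce a directional bias, the toy model below (a sketch, not the article's actual model) stores each value with random encoding noise and, at retrieval, blends the noisy trace with a prior expectation. The blending weight and noise level are made-up parameters; the outcome is a central-tendency bias in which small values are systematically overestimated and large values underestimated.

```python
import random

random.seed(0)

PRIOR_MEAN = 50.0  # hypothetical expected value for this domain

def recall(true_value, noise_sd=20.0, prior_weight=0.3):
    # Encoding: the memory trace is the true value plus random noise.
    trace = true_value + random.gauss(0.0, noise_sd)
    # Retrieval: blend the unreliable trace with the prior expectation,
    # which is a reasonable response to a noisy channel.
    return (1 - prior_weight) * trace + prior_weight * PRIOR_MEAN

# Average recalled value over many trials for a small and a large input:
small = sum(recall(10) for _ in range(10_000)) / 10_000
large = sum(recall(90) for _ in range(10_000)) / 10_000
print(f"true 10 recalled as ~{small:.1f}")  # pulled upward
print(f"true 90 recalled as ~{large:.1f}")  # pulled downward
```

Even though each step is unbiased or locally rational, the combination yields a systematic, directional distortion, which is the flavor of result the information-theoretic account generalizes across several named biases.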

Individual differences in decision making biases

People do appear to have stable individual differences in their susceptibility to decision biases such as overconfidence, temporal discounting, and the bias blind spot. That said, it is possible to change these stable levels of bias within individuals. Participants in experiments who watched training videos and played debiasing games showed medium to large reductions, both immediately and up to three months later, in their susceptibility to six cognitive biases: anchoring, bias blind spot, confirmation bias, fundamental attribution error, projection bias, and representativeness.

Criticisms

Theories of cognitive biases have been criticized on the grounds that both sides in a debate often claim each other's thoughts to be the product of human nature and cognitive bias, while claiming their own viewpoint to be the correct way to "overcome" cognitive bias. This is not simply debate misconduct but a more fundamental problem stemming from psychology's proliferation of multiple opposed cognitive bias theories, which can be used non-falsifiably to explain away any viewpoint.
