
Wednesday, November 23, 2022

Cognitive model

From Wikipedia, the free encyclopedia

A cognitive model is an approximation of one or more cognitive processes in humans or other animals for the purposes of comprehension and prediction. There are many types of cognitive models, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard).

Relationship to cognitive architectures

Cognitive models can be developed within or without a cognitive architecture, though the two are not always easily distinguishable. In contrast to cognitive architectures, cognitive models tend to be focused on a single cognitive phenomenon or process (e.g., list learning), how two or more processes interact (e.g., visual search and decision making), or making behavioral predictions for a specific task or tool (e.g., how instituting a new software package will affect productivity). Cognitive architectures tend to be focused on the structural properties of the modeled system, and help constrain the development of cognitive models within the architecture. Likewise, model development helps to inform limitations and shortcomings of the architecture. Some of the most popular architectures for cognitive modeling include ACT-R, Clarion, LIDA, and Soar.

History

Cognitive modeling historically developed within cognitive psychology/cognitive science (including human factors), and has received contributions from the fields of machine learning and artificial intelligence among others.

Box-and-arrow models

A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Typically, they are used by speech pathologists while treating a child patient. The input signal is the speech signal heard by the child, usually assumed to come from an adult speaker. The output signal is the utterance produced by the child. The unseen psychological events that occur between the arrival of an input signal and the production of speech are the focus of psycholinguistic models. Events that process the input signal are referred to as input processes, whereas events that process the production of speech are referred to as output processes. Some aspects of speech processing are thought to happen online—that is, they occur during the actual perception or production of speech and thus require a share of the attentional resources dedicated to the speech task. Other processes, thought to happen offline, take place as part of the child's background mental processing rather than during the time dedicated to the speech task. In this sense, online processing is sometimes defined as occurring in real-time, whereas offline processing is said to be time-free (Hewlett, 1990).

In box-and-arrow psycholinguistic models, each hypothesized level of representation or processing can be represented in a diagram by a “box,” and the relationships between them by “arrows,” hence the name. Sometimes (as in the models of Smith, 1973, and Menn, 1978, described later in this paper) the arrows represent processes additional to those shown in boxes. Such models make explicit the hypothesized information-processing activities carried out in a particular cognitive function (such as language), in a manner analogous to computer flowcharts that depict the processes and decisions carried out by a computer program. Box-and-arrow models differ widely in the number of unseen psychological processes they describe and thus in the number of boxes they contain.
Some have only one or two boxes between the input and output signals (e.g., Menn, 1978; Smith, 1973), whereas others have multiple boxes representing complex relationships between a number of different information-processing events (e.g., Hewlett, 1990; Hewlett, Gibbon, & Cohen-McKenzie, 1998; Stackhouse & Wells, 1997). The most important box, however, and the source of much ongoing debate, is that representing the underlying representation (or UR). In essence, an underlying representation captures information stored in a child's mind about a word he or she knows and uses. As the following description of several models will illustrate, the nature of this information and thus the type(s) of representation present in the child's knowledge base have captured the attention of researchers for some time. (Elise Baker et al. Psycholinguistic Models of Speech Development and Their Application to Clinical Practice. Journal of Speech, Language, and Hearing Research. June 2001. 44. pp. 685–702.)

Computational models

A computational model is a mathematical model in computational science that requires extensive computational resources to study the behavior of a complex system by computer simulation. The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by changing the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Theories of operation of the model can be derived/deduced from these computational experiments. Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, and neural network models.
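This experiment-by-simulation workflow can be sketched with the logistic map, a one-line nonlinear model with no convenient closed-form solution for its long-run behavior; sweeping its growth parameter and recording the outcomes is a miniature computational experiment (the function and parameter names below are illustrative, not standard):

```python
# Probe the long-run behavior of the logistic map x' = r * x * (1 - x)
# for different growth rates r, by iterating past a transient ("burn-in")
# and then collecting the distinct states the system visits.

def long_run(r, x=0.5, burn_in=500, samples=100):
    for _ in range(burn_in):        # discard the transient
        x = r * x * (1 - x)
    seen = set()
    for _ in range(samples):        # record the settled behavior
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

# r = 2.5: a single fixed point; r = 3.2: a period-2 cycle; r = 3.9: chaos.
regimes = {r: len(long_run(r)) for r in (2.5, 3.2, 3.9)}
```

Changing only the parameter r moves the system from a fixed point through a periodic cycle into chaos, which is exactly the kind of qualitative difference such computational experiments are designed to reveal.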

Symbolic

A symbolic model is expressed in characters, usually non-numeric ones, that require translation before they can be used.

Subsymbolic

A cognitive model is subsymbolic if it is made by constituent entities that are not representations in their turn, e.g., pixels, sound images as perceived by the ear, signal samples; subsymbolic units in neural networks can be considered particular cases of this category.

Hybrid

Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. See more details at hybrid intelligent system.

Dynamical systems

In the traditional computational approach, representations are viewed as static structures of discrete symbols. Cognition takes place by transforming static symbol structures in discrete, sequential steps. Sensory information is transformed into symbolic inputs, which produce symbolic outputs that get transformed into motor outputs. The entire system operates in an ongoing cycle.

What is missing from this traditional view is that human cognition happens continuously and in real time. Breaking down the processes into discrete time steps may not fully capture this behavior. An alternative approach is to define a system with (1) a state of the system at any given time, (2) a behavior, defined as the change over time in overall state, and (3) a state set or state space, representing the totality of overall states the system could be in. The system is distinguished by the fact that a change in any aspect of the system state depends on other aspects of the same or other system states.

A typical dynamical model is formalized by several differential equations that describe how the system's state changes over time. On this view, explanatory force is carried by the form of the space of possible trajectories and by the internal and external forces that shape the specific trajectory that unfolds over time, rather than by the physical nature of the underlying mechanisms that produce these dynamics. In the dynamical view, parametric inputs alter the system's intrinsic dynamics, rather than specifying an internal state that describes some external state of affairs.
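A minimal sketch of such a model, under illustrative assumptions (the parameter names and values are placeholders, not from any particular cognitive theory): a damped oscillator whose state couples position and velocity, Euler-integrated so that the trajectory through state space can be traced toward its attractor.

```python
# State of the system: (position, velocity). Each Euler step changes
# every state variable as a function of the others, as the dynamical
# view requires.

def step(state, dt=0.01, damping=0.5, stiffness=4.0):
    """One Euler step of the differential equations
    dx/dt = v,  dv/dt = -stiffness * x - damping * v."""
    pos, vel = state
    acc = -stiffness * pos - damping * vel
    return (pos + vel * dt, vel + acc * dt)

def trajectory(state, n_steps=1000):
    """Iterate the map to obtain the system's trajectory over time."""
    states = [state]
    for _ in range(n_steps):
        state = step(state)
        states.append(state)
    return states

traj = trajectory((1.0, 0.0))
# The trajectory spirals toward the attractor at (0, 0): the shape of
# this path, not the physical substrate, is what carries explanation.
```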

Early dynamical systems

Associative memory

Early work in the application of dynamical systems to cognition can be found in the model of Hopfield networks. These networks were proposed as a model for associative memory. They represent the neural level of memory, modeling systems of around 30 neurons which can be in either an on or off state. By letting the network learn on its own, structure and computational properties naturally arise. Unlike previous models, “memories” can be formed and recalled by inputting a small portion of the entire memory. Time ordering of memories can also be encoded. The behavior of the system is modeled with vectors which can change values, representing different states of the system. This early model was a major step toward a dynamical systems view of human cognition, though many details had yet to be added and more phenomena accounted for.
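The associative-recall idea can be sketched in a toy Hopfield-style network (the sizes and patterns here are illustrative; Hopfield's own model used systems of around 30 units and its own update schedule):

```python
# Binary (+1/-1) neurons, Hebbian outer-product weights, and recall by
# repeatedly updating units until the state settles into the stored
# attractor: a small portion of a memory retrieves the whole of it.

def train(patterns):
    """Hebbian learning: w[i][j] accumulates the correlation of units i and j."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, n_sweeps=5):
    """Asynchronous updates: each unit aligns with its local field."""
    state = list(cue)
    for _ in range(n_sweeps):
        for i in range(len(state)):
            field = sum(w[i][j] * s for j, s in enumerate(state))
            state[i] = 1 if field >= 0 else -1
    return state

memory = [1, -1, 1, 1, -1, -1, 1, -1]
w = train([memory])
cue = list(memory)
cue[0], cue[3] = -cue[0], -cue[3]   # corrupt two of the eight units
restored = recall(w, cue)           # the partial cue retrieves the memory
```

The stored pattern acts as an attractor of the network's dynamics: states near it flow back to it, which is what makes recall from a partial cue possible.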

Language acquisition

By taking into account the evolutionary development of the human nervous system and the similarity of the brain to other organs, Elman proposed that language and cognition should be treated as a dynamical system rather than a digital symbol processor. Neural networks of the type Elman implemented have come to be known as Elman networks. Instead of treating language as a collection of static lexical items and grammar rules that are learned and then used according to fixed rules, the dynamical systems view defines the lexicon as regions of state space within a dynamical system. Grammar is made up of attractors and repellers that constrain movement in the state space. This means that representations are sensitive to context, with mental representations viewed as trajectories through mental space instead of objects that are constructed and remain static. Elman networks were trained with simple sentences to represent grammar as a dynamical system. Once a basic grammar had been learned, the networks could then parse complex sentences by predicting which words would appear next according to the dynamical model.

Cognitive development

A classic developmental error has been investigated in the context of dynamical systems: The A-not-B error is proposed to be not a distinct error occurring at a specific age (8 to 10 months), but a feature of a dynamic learning process that is also present in older children. Children 2 years old were found to make an error similar to the A-not-B error when searching for toys hidden in a sandbox. After observing the toy being hidden in location A and repeatedly searching for it there, the 2-year-olds were shown a toy hidden in a new location B. When they looked for the toy, they searched in locations that were biased toward location A. This suggests that there is an ongoing representation of the toy's location that changes over time. The child's past behavior influences its model of locations of the sandbox, and so an account of behavior and learning must take into account how the system of the sandbox and the child's past actions is changing over time.

Locomotion

One proposed mechanism of a dynamical system comes from analysis of continuous-time recurrent neural networks (CTRNNs). By focusing on the output of the neural networks rather than their states and examining fully interconnected networks, a three-neuron central pattern generator (CPG) can be used to represent systems such as leg movements during walking. This CPG contains three motor neurons to control the foot, backward swing, and forward swing effectors of the leg. Outputs of the network represent whether the foot is up or down and how much force is being applied to generate torque in the leg joint. One feature of this pattern is that neuron outputs are either off or on most of the time. Another feature is that the states are quasi-stable, meaning that they will eventually transition to other states. A simple pattern generator circuit like this is proposed to be a building block for a dynamical system. Sets of neurons that simultaneously transition from one quasi-stable state to another are defined as a dynamic module. These modules can in theory be combined to create larger circuits that comprise a complete dynamical system. However, the details of how this combination could occur are not fully worked out.
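The CTRNN state equation underlying such circuits can be sketched as follows. The weights, biases, and time constants below are illustrative placeholders, not values tuned to produce a walking rhythm; with these untuned parameters the network simply relaxes to a stable state, whereas a tuned CPG cycles through quasi-stable on/off patterns.

```python
import math

# Euler integration of the CTRNN equation:
#   tau_i * dy_i/dt = -y_i + sum_j w[j][i] * sigma(y_j + theta_j)
# with sigma the logistic function, so outputs lie in (0, 1).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, w, theta, tau, dt=0.01):
    out = [sigmoid(yj + tj) for yj, tj in zip(y, theta)]
    return [
        yi + (dt / tau[i]) * (-yi + sum(w[j][i] * out[j] for j in range(len(y))))
        for i, yi in enumerate(y)
    ]

# Three fully interconnected neurons standing in for the foot,
# backward-swing, and forward-swing effectors (placeholder parameters).
w = [[4.0, -2.0, -2.0], [-2.0, 4.0, -2.0], [-2.0, -2.0, 4.0]]
theta = [-2.0, -2.0, -2.0]
tau = [0.5, 0.5, 0.5]

y = [0.1, 0.0, -0.1]
for _ in range(2000):
    y = ctrnn_step(y, w, theta, tau)
outputs = [sigmoid(yi + ti) for yi, ti in zip(y, theta)]
```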

Modern dynamical systems

Behavioral dynamics

Modern formalizations of dynamical systems applied to the study of cognition vary. One such formalization, referred to as “behavioral dynamics”, treats the agent and the environment as a pair of coupled dynamical systems based on classical dynamical systems theory. In this formalization, the information from the environment informs the agent's behavior and the agent's actions modify the environment. In the specific case of perception-action cycles, the coupling of the environment and the agent is formalized by two functions. The first transforms the representation of the agent's action into specific patterns of muscle activation that in turn produce forces in the environment. The second function transforms the information from the environment (i.e., patterns of stimulation at the agent's receptors that reflect the environment's current state) into a representation that is useful for controlling the agent's actions. Other similar dynamical systems have been proposed (although not developed into a formal framework) in which the agent's nervous systems, the agent's body, and the environment are coupled together.
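A toy sketch of such a perception-action coupling, for an agent steering toward a goal (the function names, gains, and the steering rule itself are illustrative simplifications, not the formal framework's equations):

```python
import math

def information(pos, heading, goal):
    """Environment -> agent: the goal's bearing relative to the heading."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    bearing = math.atan2(dy, dx) - heading
    return math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]

def act(pos, heading, bearing, speed=1.0, turn_gain=2.0, dt=0.05):
    """Agent -> environment: turn toward the goal, then move forward."""
    heading = heading + turn_gain * bearing * dt
    new_pos = (pos[0] + speed * math.cos(heading) * dt,
               pos[1] + speed * math.sin(heading) * dt)
    return new_pos, heading

# Coupled loop: the environment structures the information the agent
# receives; the agent's action changes its situation in the environment.
pos, heading, goal = (0.0, 0.0), math.pi / 2, (5.0, 0.0)
for _ in range(400):
    pos, heading = act(pos, heading, information(pos, heading, goal))
# The goal behaves as an attractor of the coupled agent-environment
# system: the trajectory converges on it and stays nearby.
```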

Adaptive behaviors

Behavioral dynamics have been applied to locomotive behavior. Modeling locomotion with behavioral dynamics demonstrates that adaptive behaviors could arise from the interactions of an agent and the environment. According to this framework, adaptive behaviors can be captured by two levels of analysis. At the first level of perception and action, an agent and an environment can be conceptualized as a pair of dynamical systems coupled together by the forces the agent applies to the environment and by the structured information provided by the environment. Thus, behavioral dynamics emerge from the agent-environment interaction. At the second level of time evolution, behavior can be expressed as a dynamical system represented as a vector field. In this vector field, attractors reflect stable behavioral solutions, whereas bifurcations reflect changes in behavior. In contrast to previous work on central pattern generators, this framework suggests that stable behavioral patterns are an emergent, self-organizing property of the agent-environment system rather than determined by the structure of either the agent or the environment.

Open dynamical systems

In an extension of classical dynamical systems theory, rather than coupling the environment's and the agent's dynamical systems to each other, an “open dynamical system” defines a “total system”, an “agent system”, and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent's total system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization of the classical formalization: the classically defined agent corresponds to the agent system of an open dynamical system, and the agent coupled to its environment corresponds to the total system.

Embodied cognition

In the context of dynamical systems and embodied cognition, representations can be conceptualized as indicators or mediators. In the indicator view, internal states carry information about the existence of an object in the environment, where the state of a system during exposure to an object is the representation of that object. In the mediator view, internal states carry information about the environment which is used by the system in obtaining its goals. In this more complex account, the states of the system carry information that mediates between the information the agent takes in from the environment and the force exerted on the environment by the agent's behavior. The application of open dynamical systems has been discussed for four types of classical embodied cognition examples:

  1. Instances where the environment and agent must work together to achieve a goal, referred to as "intimacy". A classic example of intimacy is the behavior of simple agents working to achieve a goal (e.g., insects traversing the environment). The successful completion of the goal relies fully on the coupling of the agent to the environment.
  2. Instances where the use of external artifacts improves the performance of tasks relative to performance without these artifacts. The process is referred to as "offloading". A classic example of offloading is the behavior of Scrabble players; people are able to create more words when playing Scrabble if they have the tiles in front of them and are allowed to physically manipulate their arrangement. In this example, the Scrabble tiles allow the agent to offload working memory demands on to the tiles themselves.
  3. Instances where a functionally equivalent external artifact replaces functions that are normally performed internally by the agent, which is a special case of offloading. One famous example is that of human (specifically the agents Otto and Inga) navigation in a complex environment with or without assistance of an artifact.
  4. Instances where there is not a single agent. The individual agent is part of larger system that contains multiple agents and multiple artifacts. One famous example, formulated by Ed Hutchins in his book Cognition in the Wild, is that of navigating a naval ship.

The interpretations of these examples rely on the following logic: (1) the total system captures embodiment; (2) one or more agent systems capture the intrinsic dynamics of individual agents; (3) the complete behavior of an agent can be understood as a change to the agent's intrinsic dynamics in relation to its situation in the environment; and (4) the paths of an open dynamical system can be interpreted as representational processes. These embodied cognition examples show the importance of studying the emergent dynamics of agent-environment systems, as well as the intrinsic dynamics of agent systems. Rather than being at odds with traditional cognitive science approaches, dynamical systems are a natural extension of these methods and should be studied in parallel rather than in competition.

Hypothesis

From Wikipedia, the free encyclopedia
 
The hypothesis of Andreas Cellarius, showing the planetary motions in eccentric and epicyclical orbits.

A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon. For a hypothesis to be a scientific hypothesis, the scientific method requires that one can test it. Scientists generally base scientific hypotheses on previous observations that cannot satisfactorily be explained with the available scientific theories. Even though the words "hypothesis" and "theory" are often used interchangeably, a scientific hypothesis is not the same as a scientific theory. A working hypothesis is a provisionally accepted hypothesis proposed for further research in a process beginning with an educated guess or thought.

A different meaning of the term hypothesis is used in formal logic, to denote the antecedent of a proposition; thus in the proposition "If P, then Q", P denotes the hypothesis (or antecedent); Q can be called a consequent. P is the assumption in a (possibly counterfactual) What If question.

The adjective hypothetical, meaning "having the nature of a hypothesis", or "being assumed to exist as an immediate consequence of a hypothesis", can refer to any of these meanings of the term "hypothesis".

Uses

In its ancient usage, hypothesis referred to a summary of the plot of a classical drama. The English word hypothesis comes from the ancient Greek word ὑπόθεσις hypothesis whose literal or etymological sense is "putting or placing under" and hence in extended use has many other meanings including "supposition".

In Plato's Meno (86e–87b), Socrates dissects virtue with a method used by mathematicians, that of "investigating from a hypothesis." In this sense, 'hypothesis' refers to a clever idea or to a convenient mathematical approach that simplifies cumbersome calculations. Cardinal Bellarmine gave a famous example of this usage in the warning issued to Galileo in the early 17th century: that he must not treat the motion of the Earth as a reality, but merely as a hypothesis.

In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. Sometimes, but not always, one can also formulate them as existential statements, stating that some particular instance of the phenomenon under examination has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.

In entrepreneurial science, a hypothesis is used to formulate provisional ideas within a business setting. The formulated hypothesis is then evaluated where either the hypothesis is proven to be "true" or "false" through a verifiability- or falsifiability-oriented experiment.

Any useful hypothesis will enable predictions by reasoning (including deductive reasoning). It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction may also invoke statistics and only talk about probabilities. Karl Popper, following others, has argued that a hypothesis must be falsifiable, and that one cannot regard a proposition or theory as scientific if it does not admit the possibility of being shown false. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability (e.g., verificationism) or coherence (e.g., confirmation holism). The scientific method involves experimentation to test the ability of some hypothesis to adequately answer the question under investigation. In contrast, unfettered observation is not as likely to raise unexplained issues or open questions in science as would the formulation of a crucial experiment to test the hypothesis. A thought experiment might also be used to test the hypothesis.

In framing a hypothesis, the investigator must not yet know the outcome of a test, or the outcome must remain reasonably under continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis. If the researcher already knows the outcome, it counts as a "consequence" — and the researcher should have already considered this while formulating the hypothesis. If one cannot assess the predictions by observation or by experience, the hypothesis needs to be tested by others providing observations. For example, a new technology or theory might make the necessary experiments feasible.

Scientific hypothesis

People refer to a trial solution to a problem as a hypothesis, often called an "educated guess" because it provides a suggested outcome based on the evidence. However, some scientists reject the term "educated guess" as incorrect. Experimenters may test and reject several hypotheses before solving the problem.

According to Schick and Vaughn, researchers weighing up alternative hypotheses may take into consideration:

  • Testability (compare falsifiability as discussed above)
  • Parsimony (as in the application of "Occam's razor", discouraging the postulation of excessive numbers of entities)
  • Scope – the apparent application of the hypothesis to multiple cases of phenomena
  • Fruitfulness – the prospect that a hypothesis may explain further phenomena in the future
  • Conservatism – the degree of "fit" with existing recognized knowledge-systems.

Working hypothesis

A working hypothesis is a hypothesis that is provisionally accepted as a basis for further research in the hope that a tenable theory will be produced, even if the hypothesis ultimately fails. Like all hypotheses, a working hypothesis is constructed as a statement of expectations, which can be linked to the exploratory research purpose in empirical investigation. Working hypotheses are often used as a conceptual framework in qualitative research.

The provisional nature of working hypotheses makes them useful as an organizing device in applied research. Here they act as a useful guide to address problems that are still in a formative phase.

In recent years, philosophers of science have tried to integrate the various approaches to evaluating hypotheses, and the scientific method in general, to form a more complete system that integrates the individual concerns of each approach. Notably, Imre Lakatos and Paul Feyerabend, Karl Popper's colleague and student, respectively, have produced novel attempts at such a synthesis.

Hypotheses, concepts and measurement

Concepts in Hempel's deductive-nomological model play a key role in the development and testing of hypotheses. Most formal hypotheses connect concepts by specifying the expected relationships between propositions. When a set of hypotheses is grouped together, it becomes a type of conceptual framework. When a conceptual framework is complex and incorporates causality or explanation, it is generally referred to as a theory. According to noted philosopher of science Carl Gustav Hempel, "An adequate empirical interpretation turns a theoretical system into a testable theory: The hypotheses whose constituent terms have been interpreted become capable of test by reference to observable phenomena. Frequently the interpreted hypothesis will be derivative hypotheses of the theory; but their confirmation or disconfirmation by empirical data will then immediately strengthen or weaken also the primitive hypotheses from which they were derived."

Hempel provides a useful metaphor that describes the relationship between a conceptual framework and the framework as it is observed and perhaps tested (interpreted framework). "The whole system floats, as it were, above the plane of observation and is anchored to it by rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of those interpretative connections, the network can function as a scientific theory." Hypotheses with concepts anchored in the plane of observation are ready to be tested. In "actual scientific practice the process of framing a theoretical structure and of interpreting it are not always sharply separated, since the intended interpretation usually guides the construction of the theoretician." It is, however, "possible and indeed desirable, for the purposes of logical clarification, to separate the two steps conceptually."

Statistical hypothesis testing

When a possible correlation or similar relation between phenomena is investigated, such as whether a proposed remedy is effective in treating a disease, the hypothesis that a relation exists cannot be examined the same way one might examine a proposed new law of nature. In such an investigation, if the tested remedy shows no effect in a few cases, these do not necessarily falsify the hypothesis. Instead, statistical tests are used to determine how likely it is that the overall effect would be observed if the hypothesized relation does not exist. If that likelihood is sufficiently small (e.g., less than 1%), the existence of a relation may be assumed. Otherwise, any observed effect may be due to pure chance.

In statistical hypothesis testing, two hypotheses are compared. These are called the null hypothesis and the alternative hypothesis. The null hypothesis is the hypothesis that states that there is no relation between the phenomena whose relation is under investigation, or at least not of the form given by the alternative hypothesis. The alternative hypothesis, as the name suggests, is the alternative to the null hypothesis: it states that there is some kind of relation. The alternative hypothesis may take several forms, depending on the nature of the hypothesized relation; in particular, it can be two-sided (for example: there is some effect, in a yet unknown direction) or one-sided (the direction of the hypothesized relation, positive or negative, is fixed in advance).

Conventional significance levels for testing hypotheses (acceptable probabilities of wrongly rejecting a true null hypothesis) are .10, .05, and .01. The significance level for deciding whether the null hypothesis is rejected and the alternative hypothesis is accepted must be determined in advance, before the observations are collected or inspected. If these criteria are determined later, when the data to be tested are already known, the test is invalid.
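A small worked example of this procedure, in pure Python with an illustrative coin-flip scenario (the function name is made up for this sketch): an exact two-sided binomial test of the null hypothesis that a coin is fair, with the significance level fixed before looking at the data.

```python
from math import comb

# Null hypothesis H0: the coin is fair, P(heads) = 0.5.
# Two-sided alternative H1: P(heads) != 0.5.
# The p-value is the probability, under H0, of an outcome at least
# as extreme (in either direction) as the one observed.

def two_sided_binomial_p(heads, flips):
    """Exact two-sided p-value under a fair-coin null."""
    extreme = abs(heads - flips / 2)
    return sum(
        comb(flips, k) * 0.5 ** flips
        for k in range(flips + 1)
        if abs(k - flips / 2) >= extreme
    )

alpha = 0.05                               # fixed before inspecting the data
p_modest = two_sided_binomial_p(58, 100)   # 58 heads: p > alpha, retain H0
p_extreme = two_sided_binomial_p(70, 100)  # 70 heads: p < alpha, reject H0
```

Note that 58 heads in 100 flips does not refute fairness, while 70 heads does: a handful of heads-heavy flips no more falsifies the null than a few unresponsive patients falsifies a remedy; only the overall improbability under the null does.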

The above procedure is actually dependent on the number of the participants (units or sample size) that are included in the study. For instance, to avoid having the sample size be too small to reject a null hypothesis, it is recommended that one specify a sufficient sample size from the beginning. It is advisable to define a small, medium and large effect size for each of a number of important statistical tests which are used to test the hypotheses.

Scientific theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Scientific_theory

A scientific theory is an explanation of an aspect of the natural world and universe that has been repeatedly tested and corroborated in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.

A scientific theory differs from a scientific fact or scientific law in that a theory explains "why" or "how": a fact is a simple, basic observation, whereas a law is a statement (often a mathematical equation) about a relationship between facts. For example, Newton's Law of Gravity is a mathematical equation that can be used to predict the attraction between bodies, but it is not a theory to explain how gravity works. Stephen Jay Gould wrote that "...facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts."

The meaning of the term scientific theory (often contracted to theory for brevity) as used in science is significantly different from the common vernacular usage of theory. In everyday speech, theory can imply an explanation that represents an unsubstantiated and speculative guess, whereas in science it describes an explanation that has been tested and is widely accepted as valid.

The strength of a scientific theory is related to the diversity of phenomena it can explain and its simplicity. As additional scientific evidence is gathered, a scientific theory may be modified and ultimately rejected if it cannot be made to fit the new findings; in such circumstances, a more accurate theory is then required. Some theories are so well-established that they are unlikely ever to be fundamentally changed (for example, scientific theories such as evolution, heliocentric theory, cell theory, theory of plate tectonics, germ theory of disease, etc.). In certain cases, a scientific theory or scientific law that fails to fit all data can still be useful (due to its simplicity) as an approximation under specific conditions. An example is Newton's laws of motion, which are a highly accurate approximation to special relativity at velocities that are small relative to the speed of light.

Scientific theories are testable and make falsifiable predictions. They describe the causes of a particular natural phenomenon and are used to explain and predict aspects of the physical universe or specific areas of inquiry (for example, electricity, chemistry, and astronomy). As with other forms of scientific knowledge, scientific theories are both deductive and inductive, aiming for predictive and explanatory power. Scientists use theories to further scientific knowledge, as well as to facilitate advances in technology or medicine.

Types

Albert Einstein described two different types of scientific theories: "constructive theories" and "principle theories". Constructive theories are constructive models for phenomena: for example, kinetic theory. Principle theories are empirical generalisations, one such example being Newton's laws of motion.

Characteristics

Essential criteria

For a theory to be accepted within most of academia there is usually one simple criterion: the theory's predictions must be observable and repeatable. This criterion is essential to prevent fraud and to perpetuate science itself.

The tectonic plates of the world were mapped in the second half of the 20th century. Plate tectonic theory successfully explains numerous observations about the Earth, including the distribution of earthquakes, mountains, continents, and oceans.

The defining characteristic of all scientific knowledge, including theories, is the ability to make falsifiable or testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a scientific theory at all. Predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term "theory" is not applicable.

A body of descriptions of knowledge can be called a theory if it fulfills the following criteria:

  • It makes falsifiable predictions with consistent accuracy across a broad area of scientific inquiry (such as mechanics).
  • It is well-supported by many independent strands of evidence, rather than a single foundation.
  • It is consistent with preexisting experimental results and at least as accurate in its predictions as are any preexisting theories.

These qualities are certainly true of such established theories as special and general relativity, quantum mechanics, plate tectonics, the modern evolutionary synthesis, etc.

Other criteria

In addition, most scientists prefer to work with a theory that meets the following qualities:

  • It can be subjected to minor adaptations to account for new data that do not fit it perfectly, as they are discovered, thus increasing its predictive capability over time.
  • It is among the most parsimonious explanations, economical in the use of proposed entities or explanatory steps as per Occam's razor. This is because for each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

Definitions from scientific organizations

The United States National Academy of Sciences defines scientific theories as follows:

The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the Sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics)... One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.

From the American Association for the Advancement of Science:

A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory". It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.

Note that the term theory would not be appropriate for describing untested but intricate hypotheses or even scientific models.

Formation

The first observation of cells, by Robert Hooke, using an early microscope. This led to the development of cell theory.

The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis. When enough experimental results have been gathered in a particular area of inquiry, scientists may propose an explanatory framework that accounts for as many of these as possible. This explanation is also tested, and if it fulfills the necessary criteria (see above), then the explanation becomes a theory. This can take many years, as it can be difficult or complicated to gather sufficient evidence.

Once all of the criteria have been met, a theory will be widely accepted by scientists (see scientific consensus) as the best available explanation of at least some phenomena. It will have made predictions of phenomena that previous theories could not explain or could not predict accurately, and it will have resisted attempts at falsification. The strength of the evidence is evaluated by the scientific community, and the most important experiments will have been replicated by multiple independent groups.

Theories do not have to be perfectly accurate to be scientifically useful. For example, the predictions made by classical mechanics are known to be inaccurate in the relativistic realm, but they are almost exactly correct at the comparatively low velocities of common human experience. In chemistry, there are many acid-base theories providing highly divergent explanations of the underlying nature of acidic and basic compounds, but they are very useful for predicting their chemical behavior. Like all knowledge in science, no theory can ever be completely certain, since it is possible that future experiments might conflict with the theory's predictions. However, theories supported by the scientific consensus have the highest level of certainty of any scientific knowledge; for example, that all objects are subject to gravity or that life on Earth evolved from a common ancestor.
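The classical-mechanics example above can be made concrete with a small numerical sketch (not part of the original article; the values are illustrative). The relativistic correction to Newtonian kinematics is the Lorentz factor, which is indistinguishable from 1 at everyday speeds but grows large near the speed of light:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact by definition)

def lorentz_gamma(v):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2).

    Classical mechanics corresponds to the approximation gamma ~= 1.
    """
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A passenger jet at ~250 m/s: the correction is about 3 parts in 10^13,
# so Newtonian predictions are "almost exactly correct".
print(lorentz_gamma(250.0))

# At half the speed of light the correction is roughly 15%,
# and classical mechanics is known to be inaccurate.
print(lorentz_gamma(0.5 * C))
```

This is the quantitative sense in which Newton's laws remain scientifically useful: the error they introduce at the velocities of common human experience is far below any practical measurement threshold.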

Acceptance of a theory does not require that all of its major predictions be tested, if it is already supported by sufficiently strong evidence. For example, certain tests may be unfeasible or technically difficult. As a result, theories may make predictions that have not yet been confirmed or proven incorrect; in this case, the predicted results may be described informally with the term "theoretical". These predictions can be tested at a later time, and if they are incorrect, this may lead to the revision or rejection of the theory.

Modification and improvement

If experimental results contrary to a theory's predictions are observed, scientists first evaluate whether the experimental design was sound, and if so they confirm the results by independent replication. A search for potential improvements to the theory then begins. Solutions may require minor or major changes to the theory, or none at all if a satisfactory explanation is found within the theory's existing framework. Over time, as successive modifications build on top of each other, theories consistently improve and greater predictive accuracy is achieved. Since each new version of a theory (or a completely new theory) must have more predictive and explanatory power than the last, scientific knowledge consistently becomes more accurate over time.

If modifications to the theory or other explanations seem to be insufficient to account for the new results, then a new theory may be required. Since scientific knowledge is usually durable, this occurs much less commonly than modification. Furthermore, until such a theory is proposed and accepted, the previous theory will be retained. This is because it is still the best available explanation for many other phenomena, as verified by its predictive power in other contexts. For example, it has been known since 1859 that the observed perihelion precession of Mercury violates Newtonian mechanics, but the theory remained the best explanation available until relativity was supported by sufficient evidence. Also, while new theories may be proposed by a single person or by many, the cycle of modifications eventually incorporates contributions from many different scientists.

After the changes, the accepted theory will explain more phenomena and have greater predictive power (if it did not, the changes would not be adopted); this new explanation will then be open to further replacement or modification. If a theory does not require modification despite repeated tests, this implies that the theory is very accurate. This also means that accepted theories continue to accumulate evidence over time, and the length of time that a theory (or any of its principles) remains accepted often indicates the strength of its supporting evidence.

Unification

In quantum mechanics, the electrons of an atom occupy orbitals around the nucleus. This image shows the orbitals of a hydrogen atom (s, p, d) at three different energy levels (1, 2, 3). Brighter areas correspond to higher probability density.

In some cases, two or more theories may be replaced by a single theory that explains the previous theories as approximations or special cases, analogous to the way a theory is a unifying explanation for many confirmed hypotheses; this is referred to as unification of theories. For example, electricity and magnetism are now known to be two aspects of the same phenomenon, referred to as electromagnetism.

When the predictions of different theories appear to contradict each other, this is also resolved by either further evidence or unification. For example, physical theories in the 19th century implied that the Sun could not have been burning long enough to allow certain geological changes as well as the evolution of life. This was resolved by the discovery of nuclear fusion, the main energy source of the Sun. Contradictions can also be explained as the result of theories approximating more fundamental (non-contradictory) phenomena. For example, atomic theory is an approximation of quantum mechanics. Current theories describe three separate fundamental phenomena of which all other theories are approximations; the potential unification of these is sometimes called the Theory of Everything.

Example: Relativity

In 1905, Albert Einstein published the principle of special relativity, which soon became a theory. Special relativity predicted the alignment of the Newtonian principle of Galilean invariance, also termed Galilean relativity, with the electromagnetic field. By omitting the luminiferous aether from special relativity, Einstein held that time dilation and length contraction apply to an object whose relative motion is inertial—that is, the object exhibits constant velocity (speed with direction) as measured by its observer. He thereby duplicated the Lorentz transformation and the Lorentz contraction that had been hypothesized to resolve experimental riddles and inserted into electrodynamic theory as dynamical consequences of the aether's properties. An elegant theory, special relativity yielded its own consequences, such as the equivalence of mass and energy transforming into one another and the resolution of the paradox that an excitation of the electromagnetic field could be viewed in one reference frame as electricity, but in another as magnetism.

Einstein sought to generalize the invariance principle to all reference frames, whether inertial or accelerating. Rejecting Newtonian gravitation—a central force acting instantly at a distance—Einstein presumed a gravitational field. In 1907, Einstein's equivalence principle implied that a free fall within a uniform gravitational field is equivalent to inertial motion. By extending special relativity's effects into three dimensions, general relativity extended length contraction into space contraction, conceiving of 4D space-time as the gravitational field that alters geometrically and sets all local objects' pathways. Even massless energy exerts gravitational motion on local objects by "curving" the geometrical "surface" of 4D space-time. Yet unless the energy is vast, its relativistic effects of contracting space and slowing time are negligible when merely predicting motion. Although general relativity is embraced as the more explanatory theory via scientific realism, Newton's theory remains successful as merely a predictive theory via instrumentalism. To calculate trajectories, engineers and NASA still use Newton's equations, which are simpler to operate.

Theories and laws

Both scientific laws and scientific theories are produced from the scientific method through the formation and testing of hypotheses, and can predict the behavior of the natural world. Both are also typically well-supported by observations and/or experimental evidence. However, scientific laws are descriptive accounts of how nature will behave under certain conditions. Scientific theories are broader in scope, and give overarching explanations of how nature works and why it exhibits certain characteristics. Theories are supported by evidence from many different sources, and may contain one or several laws.

A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence have been accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. A theory will always remain a theory; a law will always remain a law. Both theories and laws could potentially be falsified by countervailing evidence.

Theories and laws are also distinct from hypotheses. Unlike hypotheses, theories and laws may be simply referred to as scientific fact. However, in science, theories are different from facts even when they are well supported. For example, evolution is both a theory and a fact.

About theories

Theories as axioms

The logical positivists thought of scientific theories as statements in a formal language. First-order logic is an example of a formal language. The logical positivists envisaged a similar scientific language. In addition to scientific theories, the language also included observation sentences ("the sun rises in the east"), definitions, and mathematical statements. The phenomena explained by the theories, if they could not be directly observed by the senses (for example, atoms and radio waves), were treated as theoretical concepts. In this view, theories function as axioms: predicted observations are derived from the theories much like theorems are derived in Euclidean geometry. However, the predictions are then tested against reality to verify the predictions, and the "axioms" can be revised as a direct result.

The phrase "the received view of theories" is used to describe this approach. Terms commonly associated with it are "linguistic" (because theories are components of a language) and "syntactic" (because a language has rules about how symbols can be strung together). Problems in defining this kind of language precisely (e.g., are objects seen in microscopes observed, or are they theoretical objects?) led to the effective demise of logical positivism in the 1970s.

Theories as models

The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country.

Precession of the perihelion of Mercury (exaggerated). The deviation in Mercury's position from the Newtonian prediction is about 43 arc-seconds (about two-thirds of 1/60 of a degree) per century.

In this approach, theories are a specific category of models that fulfill the necessary criteria (see above). One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the solar system, for example, might consist of abstract objects that represent the sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses. The model parameters, e.g., Newton's Law of Gravitation, determine how the positions and velocities change with time. This model can then be tested to see whether it accurately predicts future observations; astronomers can verify that the positions of the model's objects over time match the actual positions of the planets. For most planets, the Newtonian model's predictions are accurate; for Mercury, it is slightly inaccurate and the model of general relativity must be used instead.
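The solar-system model described above can be sketched as a short program. This toy Python simulation (not from the article; illustrative constants, semi-implicit Euler integration, nothing astronomical-grade) represents one planet as an object with position and velocity, lets Newton's Law of Gravitation determine how those change with time, and then tests the model by checking that the planet returns near its starting point after roughly one year:

```python
import math

# Illustrative model parameters for the Sun and one Earth-like planet.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def step(x, y, vx, vy, dt):
    """Advance the planet one time step under Newtonian gravity
    (semi-implicit Euler: update velocity first, then position)."""
    r = math.hypot(x, y)
    a = -G * M_SUN / r**3   # acceleration per unit displacement toward the Sun
    vx += a * x * dt
    vy += a * y * dt
    return x + vx * dt, y + vy * dt, vx, vy

# Start at 1 AU with the circular-orbit speed sqrt(G*M/r).
x, y = AU, 0.0
vx, vy = 0.0, math.sqrt(G * M_SUN / AU)

dt = 3600.0                      # one-hour steps
for _ in range(24 * 365):        # integrate for about one year
    x, y, vx, vy = step(x, y, vx, vy, dt)

# The model "predicts future observations": after one period the planet
# should be back near its starting position (a small fraction of 1 AU away).
print(math.hypot(x - AU, y) / AU)
```

Comparing the computed positions against observed planetary positions is exactly the test the passage describes; for Mercury, this Newtonian model's small systematic error is what general relativity corrects.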

The word "semantic" refers to the way that a model represents the real world. The representation (literally, "re-presentation") describes particular aspects of a phenomenon or the manner of interaction among a set of phenomena. For instance, a scale model of a house or of a solar system is clearly not an actual house or an actual solar system; the aspects of an actual house or an actual solar system represented in a scale model are, only in certain limited ways, representative of the actual entity. A scale model of a house is not a house; but to someone who wants to learn about houses, analogous to a scientist who wants to understand reality, a sufficiently detailed scale model may suffice.

Differences between theory and model

Several commentators have stated that the distinguishing characteristic of theories is that they are explanatory as well as descriptive, while models are only descriptive (although still predictive in a more limited sense). Philosopher Stephen Pepper also distinguished between theories and models, and said in 1948 that general models and theories are predicated on a "root" metaphor that constrains how scientists theorize and model a phenomenon and thus arrive at testable hypotheses.

Engineering practice makes a distinction between "mathematical models" and "physical models"; the cost of fabricating a physical model can be minimized by first creating a mathematical model using a computer software package, such as a computer aided design tool. The component parts are each themselves modelled, and the fabrication tolerances are specified. An exploded view drawing is used to lay out the fabrication sequence. Simulation packages for displaying each of the subassemblies allow the parts to be rotated and magnified in realistic detail. Software packages for creating the bill of materials for construction allow subcontractors to specialize in assembly processes, which spreads the cost of manufacturing machinery among multiple customers. See: Computer-aided engineering, Computer-aided manufacturing, and 3D printing

Assumptions in formulating theories

An assumption (or axiom) is a statement that is accepted without evidence. For example, assumptions can be used as premises in a logical argument. Isaac Asimov described assumptions as follows:

...it is incorrect to speak of an assumption as either true or false, since there is no way of proving it to be either (If there were, it would no longer be an assumption). It is better to consider assumptions as either useful or useless, depending on whether deductions made from them corresponded to reality...Since we must start somewhere, we must have assumptions, but at least let us have as few assumptions as possible.

Certain assumptions are necessary for all empirical claims (e.g. the assumption that reality exists). However, theories do not generally make assumptions in the conventional sense (statements accepted without evidence). While assumptions are often incorporated during the formation of new theories, these are either supported by evidence (such as from previously existing theories) or the evidence is produced in the course of validating the theory. This may be as simple as observing that the theory makes accurate predictions, which is evidence that any assumptions made at the outset are correct or approximately correct under the conditions tested.

Conventional assumptions, without evidence, may be used if the theory is only intended to apply when the assumption is valid (or approximately valid). For example, the special theory of relativity assumes an inertial frame of reference. The theory makes accurate predictions when the assumption is valid, and does not make accurate predictions when the assumption is not valid. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).

The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian assumere and Spanish sumir. The first sense of "assume" in the OED is "to take unto (oneself), receive, accept, adopt". The term was originally employed in religious contexts as in "to receive up into heaven", especially "the reception of the Virgin Mary into heaven, with body preserved from corruption", (1297 CE) but it was also simply used to refer to "receive into association" or "adopt into partnership". Moreover, other senses of assumere included (i) "investing oneself with (an attribute)", (ii) "to undertake" (especially in Law), (iii) "to take to oneself in appearance only, to pretend to possess", and (iv) "to suppose a thing to be" (all senses from OED entry on "assume"; the OED entry for "assumption" is almost perfectly symmetrical in senses). Thus, "assumption" connotes other associations than the contemporary standard sense of "that which is assumed or taken for granted; a supposition, postulate" (only the 11th of 12 senses of "assumption", and the 10th of 11 senses of "assume").

Descriptions

From philosophers of science

Karl Popper described the characteristics of a scientific theory as follows:

  1. It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations.
  2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory.
  3. Every "good" scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
  4. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
  5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
  6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of "corroborating evidence".)
  7. Some genuinely testable theories, when found to be false, might still be upheld by their admirers—for example by introducing post hoc (after the fact) some auxiliary hypothesis or assumption, or by reinterpreting the theory post hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status, by tampering with evidence. The temptation to tamper can be minimized by first taking the time to write down the testing protocol before embarking on the scientific work.

Popper summarized these statements by saying that the central criterion of the scientific status of a theory is its "falsifiability, or refutability, or testability". Echoing this, Stephen Hawking states, "A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations." He also discusses the "unprovable but falsifiable" nature of theories, which is a necessary consequence of inductive logic, and that "you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory".

Several philosophers and historians of science have, however, argued that Popper's definition of theory as a set of falsifiable statements is wrong because, as Philip Kitcher has pointed out, if one took a strictly Popperian view of "theory", observations of Uranus when first discovered in 1781 would have "falsified" Newton's celestial mechanics. Rather, people suggested that another planet influenced Uranus' orbit—and this prediction was indeed eventually confirmed.

Kitcher agrees with Popper that "There is surely something right in the idea that a science can succeed only if it can fail." He also says that scientific theories include statements that cannot be falsified, and that good theories must also be creative. He insists we view scientific theories as an "elaborate collection of statements", some of which are not falsifiable, while others—those he calls "auxiliary hypotheses"—are.

According to Kitcher, good scientific theories must have three features:

  1. Unity: "A science should be unified.... Good theories consist of just one problem-solving strategy, or a small family of problem-solving strategies, that can be applied to a wide range of problems."
  2. Fecundity: "A great scientific theory, like Newton's, opens up new areas of research.... Because a theory presents a new way of looking at the world, it can lead us to ask new questions, and so to embark on new and fruitful lines of inquiry.... Typically, a flourishing science is incomplete. At any time, it raises more questions than it can currently answer. But incompleteness is not vice. On the contrary, incompleteness is the mother of fecundity.... A good theory should be productive; it should raise new questions and presume those questions can be answered without giving up its problem-solving strategies."
  3. Auxiliary hypotheses that are independently testable: "An auxiliary hypothesis ought to be testable independently of the particular problem it is introduced to solve, independently of the theory it is designed to save." (For example, the evidence for the existence of Neptune is independent of the anomalies in Uranus's orbit.)

Like other definitions of theories, including Popper's, Kitcher makes it clear that a theory must include statements that have observational consequences. But, like the observation of irregularities in the orbit of Uranus, falsification is only one possible consequence of observation. The production of new hypotheses is another possible and equally important result.

Analogies and metaphors

The concept of a scientific theory has also been described using analogies and metaphors. For example, the logical empiricist Carl Gustav Hempel likened the structure of a scientific theory to a "complex spatial network:"

Its terms are represented by the knots, while the threads connecting the latter correspond, in part, to the definitions and, in part, to the fundamental and derivative hypotheses included in the theory. The whole system floats, as it were, above the plane of observation and is anchored to it by the rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of these interpretive connections, the network can function as a scientific theory: From certain observational data, we may ascend, via an interpretive string, to some point in the theoretical network, thence proceed, via definitions and hypotheses, to other points, from which another interpretive string permits a descent to the plane of observation.

Michael Polanyi made an analogy between a theory and a map:

A theory is something other than myself. It may be set out on paper as a system of rules, and it is the more truly a theory the more completely it can be put down in such terms. Mathematical theory reaches the highest perfection in this respect. But even a geographical map fully embodies in itself a set of strict rules for finding one's way through a region of otherwise uncharted experience. Indeed, all theory may be regarded as a kind of map extended over space and time.

A scientific theory can also be thought of as a book that captures the fundamental information about the world, a book that must be researched, written, and shared. In 1623, Galileo Galilei wrote:

Philosophy [i.e. physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.

The book metaphor could also be applied in the following passage, by the contemporary philosopher of science Ian Hacking:

I myself prefer an Argentine fantasy. God did not write a Book of Nature of the sort that the old Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. No book is redundant. For every book there is some humanly accessible bit of Nature such that that book, and no other, makes possible the comprehension, prediction and influencing of what is going on...Leibniz said that God chose a world which maximized the variety of phenomena while choosing the simplest laws. Exactly so: but the best way to maximize phenomena and have simplest laws is to have the laws inconsistent with each other, each applying to this or that but none applying to all.

In physics

In physics, the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries—like equality of locations in space or in time, or identity of electrons, etc.)—that is capable of producing experimental predictions for a given category of physical systems. A good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in the form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered to be adequately tested, with new ones always in the making and perhaps untested. An example of the latter might be the radiation reaction force. As of 2009, its effects on the periodic motion of charges are detectable in synchrotrons, but only as averaged effects over time. Some researchers are now considering experiments that could observe these effects at the instantaneous level (i.e. not averaged over time).
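The "few equations" in question are Maxwell's equations, which in SI differential form read:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Each is a "law of electromagnetism" in the sense used here: a compact, well-evidenced statement of the relationships among the electric field \(\mathbf{E}\), the magnetic field \(\mathbf{B}\), the charge density \(\rho\), and the current density \(\mathbf{J}\).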

Examples

Note that many fields of inquiry do not have specific named theories, e.g. developmental biology. Scientific knowledge outside a named theory can still have a high level of certainty, depending on the amount of evidence supporting it. Also note that since theories draw evidence from many fields, the categorization is not absolute.
