
Friday, May 31, 2019

Model checking

From Wikipedia, the free encyclopedia

In computer science, model checking or property checking is, for a given model of a system, exhaustively and automatically checking whether this model meets a given specification. Typically, one has hardware or software systems in mind, whereas the specification contains safety requirements such as the absence of deadlocks and similar critical states that can cause the system to crash. Model checking is a technique for automatically verifying correctness properties of finite-state systems.
 
In order to solve such a problem algorithmically, both the model of the system and the specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a given structure satisfies a given logical formula. This general concept applies to many kinds of logics and suitable structures. A simple model checking problem is verifying whether a given formula in the propositional logic is satisfied by a given structure.
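
As an illustration of that simple case, the following is a minimal sketch that checks whether a propositional formula is satisfied by a given structure (here, a truth assignment); the nested-tuple encoding of formulas is an assumption made for the example.

```python
# Minimal sketch: checking whether a truth assignment (the "structure")
# satisfies a propositional formula.  Formulas are nested tuples such as
# ("and", ("var", "p"), ("not", ("var", "q"))); the encoding is illustrative.

def satisfies(assignment, formula):
    """Return True if the assignment (dict var -> bool) satisfies the formula."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not satisfies(assignment, formula[1])
    if op == "and":
        return satisfies(assignment, formula[1]) and satisfies(assignment, formula[2])
    if op == "or":
        return satisfies(assignment, formula[1]) or satisfies(assignment, formula[2])
    raise ValueError("unknown operator: " + op)

# Example: the structure {p: True, q: False} satisfies p AND (NOT q).
print(satisfies({"p": True, "q": False},
                ("and", ("var", "p"), ("not", ("var", "q")))))  # True
```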

Overview

Property checking is used for verification instead of equivalence checking when the two descriptions are not functionally equivalent. In particular, during refinement the specification is enriched with details that were unnecessary at the higher level of abstraction. There is no need to verify these newly introduced properties against the original specification, and indeed it is not possible. Therefore, the strict bidirectional equivalence check is relaxed to one-way property checking. The implementation or design is regarded as a model of the circuit, whereas the specification is a set of properties that the model must satisfy.

An important class of model checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing Award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke and E. A. Emerson and of J. P. Queille and J. Sifakis. Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking.

Model checking is most often applied to hardware designs. For software, because of undecidability the approach cannot be fully algorithmic; in general it may fail to prove or disprove a given property. In embedded-systems hardware design it is possible to validate (verify against some specified requirements) a specification delivered, e.g., by means of UML activity diagrams or control-interpreted Petri nets.

The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements hold the value one. The nodes represent states of a system, the edges represent possible transitions which may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution.

Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula p, and a structure M with initial state s, decide whether M, s ⊨ p. If M is finite, as it is in hardware, model checking reduces to a graph search.
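
For illustration, here is a minimal sketch of that reduction for a simple invariant property ("no reachable state is labelled bad"); the Kripke structure, the labels, and the function names are illustrative, not taken from any particular tool.

```python
from collections import deque

# Minimal sketch of explicit-state model checking as graph search: the model is a
# finite structure (states, transitions, atomic-proposition labelling), and the
# property checked is the invariant "no reachable state is labelled 'bad'".

def check_invariant(initial, transitions, labels, bad="bad"):
    """Breadth-first search over reachable states; return (holds, counterexample)."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if bad in labels.get(state, set()):
            # Reconstruct a counterexample path back to the initial state.
            path, s = [], state
            while s is not None:
                path.append(s)
                s = parent[s]
            return False, list(reversed(path))
        for succ in transitions.get(state, []):
            if succ not in parent:
                parent[succ] = state
                queue.append(succ)
    return True, None

# Example: state s2 violates the invariant and is reachable via s0 -> s1 -> s2.
transitions = {"s0": ["s1"], "s1": ["s2", "s0"], "s2": []}
labels = {"s2": {"bad"}}
print(check_invariant("s0", transitions, labels))  # (False, ['s0', 's1', 's2'])
```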

Algorithms

Explicit-state model checking

Symbolic model checking

Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state space traversal is based on representations of sets of states and transition relations as logical formulas, binary decision diagrams (BDDs), or other related data structures, the model-checking method is symbolic.

Historically, the first symbolic methods used BDDs. After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for the Linear Temporal Logic LTL (the planning problem corresponds to model-checking for safety properties). This method is known as bounded model checking. The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking.

Techniques

Model checking tools face a combinatorial blow up of the state-space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem.
  1. Symbolic algorithms avoid ever explicitly constructing the graph for the finite state machines (FSM); instead, they represent the graph implicitly using a formula in quantified propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan and development of open-source BDD manipulation libraries such as CUDD and BuDDy.
  2. Bounded model checking algorithms unroll the FSM for a fixed number of steps, k, and check whether a property violation can occur in k or fewer steps. This typically involves encoding the restricted model as an instance of SAT. The process can be repeated with larger and larger values of k until all possible violations have been ruled out (cf. iterative deepening depth-first search). A minimal sketch of this unrolling appears after this list.
  3. Partial order reduction can be used (on explicitly represented graphs) to reduce the number of independent interleavings of concurrent processes that need to be considered. The basic idea is that if it does not matter, for the kind of things one intends to prove, whether A or B is executed first, then it is a waste of time to consider both the AB and the BA interleavings.
  4. Abstraction attempts to prove properties on a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one so that a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, most often, the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is, on a program, to ignore the values of non-boolean variables and to only consider boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may in fact be sufficient to prove e.g. properties of mutual exclusion.
  5. Counterexample guided abstraction refinement (CEGAR) begins checking with a coarse (imprecise) abstraction and iteratively refines it. When a violation (counterexample) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). If the violation is feasible, it is reported to the user; if it is not, the proof of infeasibility is used to refine the abstraction and checking begins again.
Model checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems.
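
As promised above, here is a minimal, illustrative sketch of bounded model checking: the transition relation of a two-bit counter is unrolled for k steps and the question "can the property 'the counter never reaches 3' be violated within k steps?" is answered. Real tools encode the unrolled formula as a SAT instance and call a solver; the brute-force search below merely stands in for the solver, and the counter, the property, and the names are all illustrative.

```python
from itertools import product

def init(s):                 # initial states: counter == 0
    return s == (False, False)

def trans(s, t):             # transition relation: t = s + 1 (mod 4)
    value = (s[1] << 1) | s[0]
    succ = (value + 1) % 4
    return t == (bool(succ & 1), bool(succ & 2))

def bad(s):                  # property violation: counter == 3
    return s == (True, True)

def bmc(k):
    """Return a violating path of length <= k, or None if none exists within the bound."""
    states = list(product([False, True], repeat=2))
    for length in range(k + 1):
        for path in product(states, repeat=length + 1):
            if init(path[0]) and bad(path[-1]) \
                    and all(trans(path[i], path[i + 1]) for i in range(length)):
                return path
    return None

print(bmc(2))   # None: 3 is not reachable within 2 steps
print(bmc(3))   # ((False, False), (True, False), (False, True), (True, True))
```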

Computational complexity

First-order logic

The field of computational complexity theory has studied the model checking problem. Specifically, we fix a first-order logical formula without free variables and consider the following decision problem: given a finite interpretation, for instance one described as a relational database, decide whether the interpretation is a model of the formula. This problem is in the circuit class AC0. The problem is tractable when imposing some restrictions on the input structure: for instance requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures. These results have been extended to the task of enumerating all solutions to a first-order formula with free variables.
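
A minimal sketch of this decision problem follows, under the assumption that the finite interpretation is given as a Python dictionary of relations (much like a relational database); the nested-tuple formula encoding is an assumption made for the example.

```python
# Minimal sketch of first-order model checking over a finite interpretation:
# the structure is a finite domain plus relations given as sets of tuples.

def holds(structure, formula, env=None):
    env = env or {}
    domain, relations = structure
    op = formula[0]
    if op == "rel":                      # ("rel", "E", "x", "y")
        _, name, *vars_ = formula
        return tuple(env[v] for v in vars_) in relations[name]
    if op == "not":
        return not holds(structure, formula[1], env)
    if op == "implies":
        return (not holds(structure, formula[1], env)) or holds(structure, formula[2], env)
    if op == "forall":                   # ("forall", "x", subformula)
        _, var, body = formula
        return all(holds(structure, body, {**env, var: d}) for d in domain)
    if op == "exists":
        _, var, body = formula
        return any(holds(structure, body, {**env, var: d}) for d in domain)
    raise ValueError("unknown operator: " + op)

# Example: is the edge relation E symmetric?  forall x forall y (E(x,y) -> E(y,x))
structure = ({1, 2, 3}, {"E": {(1, 2), (2, 1), (2, 3)}})
symmetric = ("forall", "x", ("forall", "y",
             ("implies", ("rel", "E", "x", "y"), ("rel", "E", "y", "x"))))
print(holds(structure, symmetric))  # False, because (2, 3) has no reverse edge
```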

Tools

Here is a partial list of model checking tools that have a Wikipedia page:
  • Alloy (Alloy Analyzer)
  • BLAST (Berkeley Lazy Abstraction Software Verification Tool)
  • CADP (Construction and Analysis of Distributed Processes), a toolbox for the design of communication protocols and distributed systems
  • CPAchecker, an open-source software model checker for C programs, based on the CPA framework
  • ECLAIR, a platform for the automatic analysis, verification, testing and transformation of C and C++ programs
  • FDR2, a refinement checker for verifying systems modelled and specified as CSP processes
  • ISP, a code-level verifier for MPI programs
  • Java Pathfinder – open source model checker for Java programs
  • mCRL2 toolset, Boost Software License, based on ACP
  • NuSMV, a new symbolic model checker
  • PAT – an enhanced simulator, model checker and refinement checker for concurrent and real-time systems
  • Prism, a probabilistic symbolic model checker
  • Roméo, an integrated tool environment for modeling, simulation and verification of real-time systems modeled as parametric, time and stopwatch Petri nets
  • SPIN, a general tool for verifying the correctness of distributed software models in a rigorous and mostly automated fashion
  • TAPAs, a tool for the analysis of process algebras
  • TAPAAL, an integrated tool environment for modeling, validation and verification of Timed-Arc Petri Nets
  • TLC, the model checker for Leslie Lamport's TLA+ specification language
  • UPPAAL, an integrated tool environment for modeling, validation and verification of real-time systems modeled as networks of timed automata
  • Zing – experimental tool from Microsoft to validate state models of software at various levels: high-level protocol descriptions, work-flow specifications, web services, device drivers, and protocols in the core of the operating system. Zing is currently being used for developing drivers for Windows.

Formal methods

From Wikipedia, the free encyclopedia

In computer science, specifically software engineering and hardware engineering, formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.
 
Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, discrete event dynamic system and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.

Background

Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer the task of completing the semantics to a later stage, which is then done either by human interpretation or by software such as code generators or test-case generators.

Taxonomy

Formal methods can be used at a number of levels: 

Level 0: Formal specification may be undertaken and then a program developed from this informally. This has been dubbed formal methods lite. This may be the most cost-effective option in many cases. 

Level 1: Formal development and formal verification may be used to produce a program in a more formal manner. For example, proofs of properties or refinement from the specification to a program may be undertaken. This may be most appropriate in high-integrity systems involving safety or security.

Level 2: Theorem provers may be used to undertake fully formal machine-checked proofs. This can be very expensive and is only practically worthwhile if the cost of mistakes is extremely high (e.g., in critical parts of microprocessor design).

As with programming language semantics, styles of formal methods may be roughly classified as follows:
  • Denotational semantics, in which the meaning of a system is expressed in the mathematical theory of domains. Proponents of such methods rely on the well-understood nature of domains to give meaning to the system; critics point out that not every system may be intuitively or naturally viewed as a function.
  • Operational semantics, in which the meaning of a system is expressed as a sequence of actions of a (presumably) simpler computational model. Proponents of such methods point to the simplicity of their models as a means to expressive clarity; critics counter that the problem of semantics has just been delayed (who defines the semantics of the simpler model?).
  • Axiomatic semantics, in which the meaning of the system is expressed in terms of preconditions and postconditions which are true before and after the system performs a task, respectively. Proponents note the connection to classical logic; critics note that such semantics never really describe what a system does (merely what is true before and afterwards).

Lightweight formal methods

Some practitioners believe that the formal methods community has overemphasized full formalization of a specification or design. They contend that the expressiveness of the languages involved, as well as the complexity of the systems being modelled, make full formalization a difficult and expensive task. As an alternative, various lightweight formal methods, which emphasize partial specification and focused application, have been proposed. Examples of this lightweight approach to formal methods include the Alloy object modelling notation, Denney's synthesis of some aspects of the Z notation with use case driven development, and the CSK VDM Tools.

Uses

Formal methods can be applied at various points through the development process.

Specification

Formal methods may be used to give a description of the system to be developed, at whatever level(s) of detail desired. This formal description can be used to guide further development activities (see following sections); additionally, it can be used to verify that the requirements for the system being developed have been completely and accurately specified, or to formalise system requirements by expressing them in a formal language with a precise and unambiguously defined syntax and semantics.

The need for formal specification systems has been noted for years. In the ALGOL 58 report, John Backus presented a formal notation for describing programming language syntax, later named Backus normal form then renamed Backus–Naur form (BNF). Backus also wrote that a formal description of the meaning of syntactically valid ALGOL programs wasn't completed in time for inclusion in the report. "Therefore the formal treatment of the semantics of legal programs will be included in a subsequent paper." It never appeared.

Development

Formal development is the use of formal methods as an integrated part of a tool-supported system development process.

Once a formal specification has been produced, the specification may be used as a guide while the concrete system is developed during the design process (i.e., realized typically in software, but also potentially in hardware). For example:
  • If the formal specification is in an operational semantics, the observed behavior of the concrete system can be compared with the behavior of the specification (which itself should be executable or simulateable). Additionally, the operational commands of the specification may be amenable to direct translation into executable code.
  • If the formal specification is in an axiomatic semantics, the preconditions and postconditions of the specification may become assertions in the executable code.
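
As a minimal illustration of the second point, the sketch below shows how preconditions and postconditions from an axiomatic specification can become assertions in executable code; the integer-square-root function and its contract are illustrative, not taken from any particular specification.

```python
# Suppose the axiomatic specification of integer square root is:
#   precondition:  n >= 0
#   postcondition: r*r <= n < (r+1)*(r+1)
# The assertions below embed that contract directly in the implementation.

def isqrt(n: int) -> int:
    assert n >= 0, "precondition violated: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(isqrt(10))  # 3
```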

Verification

Formal verification is the use of a software tool to prove properties of a formal specification, or to prove that a formal model of a system implementation satisfies its specification.

Once a formal specification has been developed, the specification may be used as the basis for proving properties of the specification (and hopefully by inference the developed system).

Sign-off verification

A formal verification tool for sign-off verification is a tool that is highly trusted such that it can replace traditional verification methods (the tool may even be certified).

Human-directed proof

Sometimes, the motivation for proving the correctness of a system is not the obvious need for reassurance of the correctness of the system, but a desire to understand the system better. Consequently, some proofs of correctness are produced in the style of mathematical proof: handwritten (or typeset) using natural language, using a level of informality common to such proofs. A "good" proof is one which is readable and understandable by other human readers. 

Critics of such approaches point out that the ambiguity inherent in natural language allows errors to be undetected in such proofs; often, subtle errors can be present in the low-level details typically overlooked by such proofs. Additionally, the work involved in producing such a good proof requires a high level of mathematical sophistication and expertise.

Automated proof

In contrast, there is increasing interest in producing proofs of correctness of such systems by automated means. Automated techniques fall into three general categories:
  • Automated theorem proving, in which a system attempts to produce a formal proof from scratch, given a description of the system, a set of logical axioms, and a set of inference rules.
  • Model checking, in which a system verifies certain properties by means of an exhaustive search of all possible states that a system could enter during its execution.
  • Abstract interpretation, in which a system verifies an over-approximation of a behavioural property of the program, using a fixpoint computation over a (possibly complete) lattice representing it.
Some automated theorem provers require guidance as to which properties are "interesting" enough to pursue, while others work without human intervention. Model checkers can quickly get bogged down in checking millions of uninteresting states if not given a sufficiently abstract model.

Proponents of such systems argue that the results have greater mathematical certainty than human-produced proofs, since all the tedious details have been algorithmically verified. The training required to use such systems is also less than that required to produce good mathematical proofs by hand, making the techniques accessible to a wider variety of practitioners. 

Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give no explanation of that truth. There is also the problem of "verifying the verifier"; if the program which aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced results. Some modern model checking tools produce a "proof log" detailing each step in their proof, making it possible to perform, given suitable tools, independent verification. 

The main feature of the abstract interpretation approach is that it provides a sound analysis, i.e. no false negatives are returned. Moreover, it is efficiently scalable, by tuning the abstract domain representing the property to be analyzed, and by applying widening operators to get fast convergence.
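
As a minimal illustration of these ideas, the sketch below runs an interval-domain abstract interpretation of the loop `i = 0; while i < 100: i = i + 1`, using a widening operator to force convergence of the fixpoint computation; the analysed program, the domain operations, and the names are all illustrative.

```python
INF = float("inf")

def join(a, b):                       # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                      # widening: jump unstable bounds to infinity
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def analyse_loop():
    at_head = (0, 0)                  # abstract value of i at the loop head
    while True:
        in_body = (at_head[0], min(at_head[1], 99))        # filter by i < 100
        after_incr = (in_body[0] + 1, in_body[1] + 1)      # effect of i = i + 1
        new_head = widen(at_head, join(at_head, after_incr))
        if new_head == at_head:                            # fixpoint reached
            break
        at_head = new_head
    exit_interval = (max(at_head[0], 100), at_head[1])     # filter by i >= 100
    return at_head, exit_interval

print(analyse_loop())  # ((0, inf), (100, inf)): sound but imprecise without narrowing
```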

Applications

Formal methods are applied in different areas of hardware and software, including routers, Ethernet switches, routing protocols, and security applications. There are several examples in which they have been used to verify the functionality of the hardware and software used in DCs. IBM used ACL2, a theorem prover, in the AMD x86 processor development process. Intel uses such methods to verify its hardware and firmware (permanent software programmed into a read-only memory). Dansk Datamatik Center used formal methods in the 1980s to develop a compiler system for the Ada programming language that went on to become a long-lived commercial product.

There are several other projects of NASA in which formal methods are applied, such as Next Generation Air Transportation System, Unmanned Aircraft System integration in National Airspace System, and Airborne Coordinated Conflict Resolution and Detection (ACCoRD). B-Method with AtelierB, is used to develop safety automatisms for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics.

Formal verification has been frequently used in hardware by most of the well-known hardware vendors, such as IBM, Intel, and AMD. Intel has used formal methods to verify the working of its products in many areas, such as parameterized verification of a cache coherence protocol, validation of the Intel Core i7 processor execution engine (using theorem proving, BDDs, and symbolic evaluation), optimization of the Intel IA-64 architecture using the HOL Light theorem prover, and verification of a high-performance dual-port gigabit Ethernet controller with support for the PCI Express protocol and Intel advanced management technology using Cadence. Similarly, IBM has used formal methods in the verification of power gates, registers, and the functional verification of the IBM Power7 microprocessor.

In software development

In software development, formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards, such as DO-178C, allow the usage of formal methods through supplementation, and Common Criteria mandates formal methods at the highest levels of categorization.

For sequential software, examples of formal methods include the B-Method, the specification languages used in automated theorem proving, RAISE, and the Z notation.

In functional programming, property-based testing has allowed the mathematical specification and testing (if not exhaustive testing) of the expected behaviour of individual functions. 
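
As a minimal illustration of property-based testing (the original text has functional languages such as Haskell and QuickCheck in mind; the Python `hypothesis` library is used here as an analogous stand-in), the sketch below specifies the expected behaviour of sorting as properties checked against many generated inputs rather than a few hand-picked cases.

```python
from collections import Counter
from hypothesis import given
import hypothesis.strategies as st

@given(st.lists(st.integers()))
def test_sorting_specification(xs):
    ys = sorted(xs)
    # Property 1: the output is ordered.
    assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    # Property 2: the output is a permutation of the input.
    assert Counter(ys) == Counter(xs)

# Calling the decorated test makes hypothesis generate many random lists and
# report a shrunk counterexample if either property ever fails.
test_sorting_specification()
```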

The Object Constraint Language (and specializations such as Java Modeling Language) has allowed object-oriented systems to be formally specified, if not necessarily formally verified. 

For concurrent software and systems, Petri nets, process algebra, and finite state machines (which are based on automata theory - see also virtual finite state machine or event driven finite state machine) allow executable software specification and can be used to build up and validate application behavior. 

Another approach to formal methods in software development is to write a specification in some form of logic—usually a variation of first-order logic (FOL)—and then to directly execute the logic as though it were a program. The OWL language, based on Description Logic (DL), is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.

Formal methods and notations

There are a variety of formal methods and notations available.

Specification languages

Model checkers

  • SPIN
  • PAT is a powerful free model checker, simulator and refinement checker for concurrent systems and CSP extensions (e.g. shared variables, arrays, fairness).
  • MALPAS Software Static Analysis Toolset is an industrial strength model checker used for formal proof of safety-critical systems
  • UPPAAL
  • ESBMC

Formal system

From Wikipedia, the free encyclopedia

A formal system is used to infer theorems from axioms according to a set of rules. These rules, which are used to carry out the inference of theorems from axioms, are known as the logical calculus of the formal system. A formal system is essentially an "axiomatic system". In 1921, David Hilbert proposed to use such a system as the foundation for knowledge in mathematics. A formal system may represent a well-defined system of abstract thought. Spinoza's Ethics imitates the form of Euclid's Elements. Spinoza employed Euclidean elements such as "axioms" or "primitive truths", rules of inference, etc., so that a calculus could be built using these.
 
The term formalism is sometimes used as a rough synonym for formal system, but is also used to refer to a given style of notation, for example, Paul Dirac's bra–ket notation.

Background

Each formal system uses primitive symbols (which collectively form an alphabet) to finitely construct a formal language from a set of axioms through inferential rules of formation.

The system thus consists of valid formulas built up through finite combinations of the primitive symbols—combinations that are formed from the axioms in accordance with the stated rules.

More formally, this can be expressed as the following:
  1. A finite set of symbols, known as the alphabet, which are concatenated to form formulas, so that a formula is just a finite string of symbols taken from the alphabet.
  2. A grammar consisting of rules to form formulas from simpler formulas. A formula is said to be well-formed if it can be formed using the rules of the formal grammar. It is often required that there be a decision procedure for deciding whether a formula is well-formed.
  3. A set of axioms, or axiom schemata, consisting of well-formed formulas.
  4. A set of inference rules. A well-formed formula that can be inferred from the axioms is known as a theorem of the formal system.
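
As a toy illustration of these four components, the sketch below encodes Hofstadter's well-known MIU system: the alphabet {M, I, U}, well-formed strings over that alphabet, the single axiom "MI", and four rewriting rules as inference rules. The function names and the length bound are illustrative.

```python
ALPHABET = set("MIU")
AXIOMS = {"MI"}

def well_formed(s):                       # the "grammar": any finite string over the alphabet
    return all(ch in ALPHABET for ch in s)

def apply_rules(s):
    """Yield every string derivable from s by one application of an inference rule."""
    if s.endswith("I"):
        yield s + "U"                                     # rule 1: xI  -> xIU
    if s.startswith("M"):
        yield s + s[1:]                                   # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]                 # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]                       # rule 4: UU  -> (nothing)

def theorems(max_len=8):
    """Enumerate theorems (strings derivable from the axioms) up to a length bound."""
    known, frontier = set(AXIOMS), set(AXIOMS)
    while frontier:
        frontier = {t for s in frontier for t in apply_rules(s)
                    if len(t) <= max_len and t not in known}
        known |= frontier
    return known

print(sorted(theorems(6), key=len))  # theorems of length <= 6, e.g. 'MI', 'MII', 'MIU', 'MUI', ...
```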

Recursive

A formal system is said to be recursive (i.e. effective) or recursively enumerable if the set of axioms and the set of inference rules are decidable sets or semidecidable sets, respectively.

Inference and entailment

The entailment of the system by its logical foundation is what distinguishes a formal system from others which may have some basis in an abstract model. Often the formal system will be the basis for, or even identified with, a larger theory or field (e.g. Euclidean geometry), consistent with the usage in modern mathematics such as model theory.

Formal language

A formal language is a language that is defined by a formal system. Like languages in linguistics, formal languages generally have two aspects:
  • the syntax of a language is what the language looks like (more formally: the set of possible expressions that are valid utterances in the language) studied in formal language theory
  • the semantics of a language are what the utterances of the language mean (which is formalized in various ways, depending on the type of language in question)
In computer science and linguistics, a formal grammar is a precise description of a formal language: a set of strings. The two main categories of formal grammar are that of generative grammars, which are sets of rules for how strings in a language can be generated, and that of analytic grammars (or reductive grammars), which are sets of rules for how a string can be analyzed to determine whether it is a member of the language. In short, an analytic grammar describes how to recognize when strings are members of the set, whereas a generative grammar describes how to write only those strings in the set.
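
A minimal sketch of this generative/analytic distinction, using the toy language { aⁿbⁿ : n ≥ 0 } (single rule S -> aSb | empty string); everything here is illustrative.

```python
def generate(max_n):
    """Generative view: produce the strings of the language (up to a bound)."""
    return ["a" * n + "b" * n for n in range(max_n + 1)]

def recognize(s):
    """Analytic view: decide whether a given string is a member of the language."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

print(generate(3))            # ['', 'ab', 'aabb', 'aaabbb']
print(recognize("aaabbb"))    # True
print(recognize("aabbb"))     # False
```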

Logical system

A logical system or, for short, a logic, is a formal system together with its semantics. According to model-theoretic interpretation, the semantics of a logical system describe whether a well-formed formula is satisfied by a given structure. A structure that satisfies all the axioms of the formal system is known as a model of the logical system. A logical system is sound if each well-formed formula that can be inferred from the axioms is satisfied by every model of the logical system. Conversely, a logic system is complete if each well-formed formula that is satisfied by every model of the logical system can be inferred from the axioms.

Deductive inference

A deductive system, also called a deductive apparatus, consists of the axioms (or axiom schemata) and rules of inference that can be used to derive theorems of the system.

Such deductive systems preserve deductive qualities in the formulas that are expressed in the system. Usually the quality we are concerned with is truth as opposed to falsehood. However, other modalities, such as justification or belief may be preserved instead.

In order to sustain its deductive integrity, a deductive apparatus must be definable without reference to any intended interpretation of the language. The aim is to ensure that each line of a derivation is merely a syntactic consequence of the lines that precede it. There should be no element of any interpretation of the language that gets involved with the deductive nature of the system.

History

Early logic systems include the Indian logic of Pāṇini, the syllogistic logic of Aristotle, the propositional logic of Stoicism, and the Chinese logic of Gongsun Long (c. 325–250 BCE). In recent times, contributors include George Boole, Augustus De Morgan, and Gottlob Frege. Mathematical logic was developed in 19th-century Europe.

Formalism

Hilbert's program

David Hilbert instigated a formalist movement that was eventually tempered by Gödel's incompleteness theorems.

QED manifesto

The QED manifesto represented a subsequent unsuccessful effort at formalization of known mathematics.

Examples

Examples of formal systems include the propositional calculus, first-order predicate logic, and the lambda calculus.

Variants

The following systems are variations of formal systems.

Proof system

Formal proofs are sequences of well-formed formulas (or wffs for short). For a wff to qualify as part of a proof, it must either be an axiom or be the product of applying an inference rule on previous wffs in the proof sequence. The last wff in the sequence is recognized as a theorem.

The point of view that generating formal proofs is all there is to mathematics is often called formalism. David Hilbert founded metamathematics as a discipline for discussing formal systems. Any language that one uses to talk about a formal system is called a metalanguage. The metalanguage may be a natural language, or it may be partially formalized itself, but it is generally less completely formalized than the formal language component of the formal system under examination, which is then called the object language, that is, the object of the discussion in question.
Once a formal system is given, one can define the set of theorems which can be proved inside the formal system. This set consists of all wffs for which there is a proof. Thus all axioms are considered theorems. Unlike the grammar for wffs, there is no guarantee that there will be a decision procedure for deciding whether a given wff is a theorem or not. The notion of theorem just defined should not be confused with theorems about the formal system, which, in order to avoid confusion, are usually called metatheorems.

Branches of science

From Wikipedia, the free encyclopedia

The scale of the Universe mapped to branches of science and the hierarchy of the sciences.
 
The branches of science, also referred to as sciences, "scientific fields", or "scientific disciplines", are commonly divided into three major groups: the formal sciences, the natural sciences, and the social sciences.
Natural and social sciences are empirical sciences, meaning that the knowledge must be based on observable phenomena and must be capable of being verified by other researchers working under the same conditions. This verifiability may well vary even within a scientific discipline.

Natural, social, and formal science make up the fundamental sciences, which form the basis of interdisciplinary and applied sciences such as engineering and medicine. Specialized scientific disciplines that exist in multiple categories may include parts of other scientific disciplines but often possess their own terminologies and expertise.

Formal sciences

The formal sciences are the branches of science that are concerned with formal systems, such as logic, mathematics, theoretical computer science, information theory, systems theory, decision theory, statistics, and theoretical linguistics.

Unlike other sciences, the formal sciences are not concerned with the validity of theories based on observations in the real world (empirical knowledge), but rather with the properties of formal systems based on definitions and rules. Methods of the formal sciences are, however, essential to the construction and testing of scientific models dealing with observable reality, and major advances in formal sciences have often enabled major advances in the empirical sciences.

Decision theory

Decision theory in economics, psychology, philosophy, mathematics, and statistics is concerned with identifying the values, uncertainties and other issues relevant in a given decision, its rationality, and the resulting optimal decision. It is very closely related to the field of game theory.

Logic

Logic (from the Greek λογική logikē) is the formal systematic study of the principles of valid inference and correct reasoning. Logic is used in most intellectual activities, but is studied primarily in the disciplines of philosophy, mathematics, semantics, and computer science. Logic examines general forms which arguments may take, which forms are valid, and which are fallacies. In philosophy, the study of logic figures in most major areas: epistemology, ethics, metaphysics. In mathematics and computer science, it is the study of valid inferences within some formal language. Logic is also studied in argumentation theory.

Mathematics

Mathematics, first known as the science of numbers (classically divided into arithmetic and algebra), is classified as a formal science. It has both similarities and differences with the empirical sciences (the natural and social sciences). It is similar to empirical sciences in that it involves an objective, careful and systematic study of an area of knowledge; it is different because of its method of verifying its knowledge, using a priori rather than empirical methods.

Statistics

Statistics is the study of the collection, organization, and interpretation of data. It deals with all aspects of this, including the planning of data collection in terms of the design of surveys and experiments.

A statistician is someone who is particularly well versed in the ways of thinking necessary for the successful application of statistical analysis. Such people have often gained this experience through working in any of a wide number of fields. There is also a discipline called mathematical statistics, which is concerned with the theoretical basis of the subject.

The word statistics, when referring to the scientific discipline, is singular, as in "Statistics is an art." This should not be confused with the word statistic, referring to a quantity (such as mean or median) calculated from a set of data, whose plural is statistics ("this statistic seems wrong" or "these statistics are misleading").

Systems theory

Systems theory is the interdisciplinary study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. The term does not yet have a well-established, precise meaning, but systems theory can reasonably be considered a specialization of systems thinking and a generalization of systems science. The term originates from Ludwig von Bertalanffy's General System Theory (GST) and is used in later efforts in other fields, such as the action theory of Talcott Parsons and the social systems theory of Niklas Luhmann.

In this context the word systems is used to refer specifically to self-regulating systems, i.e. systems that are self-correcting through feedback. Self-regulating systems are found in nature, including the physiological systems of our body, in local and global ecosystems, and in climate.

Theoretical computer science

Theoretical computer science (TCS) is a division or subset of general computer science and focuses on more abstract or mathematical aspects of computing. 

These divisions and subsets include analysis of algorithms and formal semantics of programming languages, though there are many more besides these two. Each subfield has its own leading figures, and there are many associations, professional societies, and publications of distinction.

Natural/Pure Science

Natural science is a branch of science that seeks to elucidate the rules that govern the natural world by applying an empirical and scientific method to the study of the universe. The term natural sciences is used to distinguish it from the social sciences, which apply the scientific method to study human behavior and social patterns; the humanities, which use a critical, or analytical approach to the study of the human condition; and the formal sciences.

Physical science

Physical science is an encompassing term for the branches of natural science that study non-living systems, in contrast to the life sciences. However, the term "physical" creates an unintended, somewhat arbitrary distinction, since many branches of physical science also study biological phenomena. There is a difference between physical science and physics.

Physics

Physics (from Ancient Greek: φύσις, romanized: physis, lit. 'nature') is a natural science that involves the study of matter and its motion through spacetime, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves.

Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Over the last two millennia, physics was a part of natural philosophy along with chemistry, certain branches of mathematics, and biology, but during the Scientific Revolution in the 16th century, the natural sciences emerged as unique research programs in their own right. Certain research areas are interdisciplinary, such as biophysics and quantum chemistry, which means that the boundaries of physics are not rigidly defined. In the nineteenth and twentieth centuries physicalism emerged as a major unifying feature of the philosophy of science as physics provides fundamental explanations for every observed natural phenomenon. New ideas in physics often explain the fundamental mechanisms of other sciences, while opening to new research areas in mathematics and philosophy.

Chemistry

Chemistry (the etymology of the word has been much disputed) is the science of matter and the changes it undergoes. The science of matter is also addressed by physics, but while physics takes a more general and fundamental approach, chemistry is more specialized, being concerned with the composition, behavior (or reaction), structure, and properties of matter, as well as the changes it undergoes during chemical reactions. It is a physical science whose subdisciplines include organic chemistry, the study of substances (especially carbon-based matter); biochemistry, the study of substances found in biological organisms; physical chemistry, the study of chemical processes using physical concepts such as thermodynamics and quantum mechanics; and analytical chemistry, the analysis of material samples to gain an understanding of their chemical composition and structure. Many more specialized disciplines have emerged in recent years, e.g. neurochemistry, the chemical study of the nervous system (see subdisciplines).

Earth science

Earth science (also known as geoscience, the geosciences or the Earth sciences) is an all-embracing term for the sciences related to the planet Earth. It is arguably a special case in planetary science, the Earth being the only known life-bearing planet. There are both reductionist and holistic approaches to Earth sciences. The formal discipline of Earth sciences may include the study of the atmosphere, hydrosphere, oceans and biosphere, as well as the solid earth. Typically Earth scientists will use tools from physics, chemistry, biology, geography, chronology and mathematics to build a quantitative understanding of how the Earth system works, and how it evolved to its current state.
Ecology
Ecology (from Greek: οἶκος, "house"; -λογία, "study of") is the scientific study of the relationships that living organisms have with each other and with their abiotic environment. Topics of interest to ecologists include the composition, distribution, amount (biomass), number, and changing states of organisms within and among ecosystems.
Oceanography
Oceanography, or marine science, is the branch of Earth science that studies the ocean. It covers a wide range of topics, including marine organisms and ecosystem dynamics; ocean currents, waves, and geophysical fluid dynamics; plate tectonics and the geology of the sea floor; and fluxes of various chemical substances and physical properties within the ocean and across its boundaries. These diverse topics reflect multiple disciplines that oceanographers blend to further knowledge of the world ocean and understanding of processes within it: biology, chemistry, geology, meteorology, and physics as well as geography.
Geology
Geology (from the Greek γῆ, gê, "earth" and λόγος, logos, "study") is the science comprising the study of solid Earth, the rocks of which it is composed, and the processes by which they change.
Meteorology
Meteorology is the interdisciplinary scientific study of the atmosphere. Studies in the field stretch back millennia, though significant progress in meteorology did not occur until the 17th century. The 19th century saw breakthroughs occur after observing networks developed across several countries. After the development of the computer in the latter half of the 20th century, breakthroughs in weather forecasting were achieved.

Space Science or Astronomy

Space science or Astronomy is the study of everything in outer space. This has sometimes been called astronomy, but recently astronomy has come to be regarded as a division of broader space science, which has grown to include other related fields, such as studying issues related to space travel and space exploration (including space medicine), space archaeology and science performed in outer space.

Science of living things

The science of living things comprises the branches of science that involve the scientific study of living organisms, like plants, animals, and human beings. However, the study of behavior of organisms, such as practiced in ethology and psychology, is only included in as much as it involves a clearly biological aspect. While biology remains the centerpiece of the science of living things, technological advances in molecular biology and biotechnology have led to a burgeoning of specializations and new, often interdisciplinary, fields.

Biology

Biology is the branch of natural science concerned with the study of life and living organisms, including their structure, function, growth, origin, evolution, distribution, and taxonomy. Biology is a vast subject containing many subdivisions, topics, and disciplines.
Zoology
Zoology, occasionally spelled zoölogy, is the branch of science that relates to the animal kingdom, including the structure, embryology, evolution, classification, habits, and distribution of all animals, both living and extinct. The term is derived from Ancient Greek ζῷον (zōon, "animal") + λόγος (logos, "knowledge"). Some branches of zoology include: anthrozoology, arachnology, archaeozoology, cetology, embryology, entomology, helminthology, herpetology, histology, ichthyology, malacology, mammalogy, morphology, nematology, ornithology, palaeozoology, pathology, primatology, protozoology, taxonomy, and zoogeography.
Human biology
Human biology is an interdisciplinary academic field of biology, biological anthropology, nutrition and medicine which focuses on humans; it is closely related to primate biology, and a number of other fields. 

Botany
Botany, plant science, or plant biology is a branch of biology that involves the scientific study of plant life. Botany covers a wide range of scientific disciplines including structure, growth, reproduction, metabolism, development, diseases, chemical properties, and evolutionary relationships among taxonomic groups. Botany began with early human efforts to identify edible, medicinal and poisonous plants, making it one of the oldest sciences. Today botanists study over 550,000 species of living organisms. The term "botany" comes from Greek βοτάνη, meaning "pasture, grass, fodder", perhaps via the idea of a livestock keeper needing to know which plants are safe for livestock to eat.

Social sciences

The social sciences are the fields of scholarship that study society. "Social science" is commonly used as an umbrella term for empirical fields outside of the natural sciences. These include: anthropology, archaeology, criminology, economics, linguistics, international relations, political science (aka government), public health, sociology, some branches of psychology (results of which can not be replicated or validated easily - e.g. social psychology), and certain aspects of business administration, communication, education, geography, history, and law.

Applied sciences

Applied science is the application of scientific knowledge transferred into a physical environment. Examples include testing a theoretical model through the use of formal science or solving a practical problem through the use of natural science. 

Applied science differs from fundamental science, which seeks to describe the most basic objects and forces, with less emphasis on practical applications. Applied science can draw on both the biological and the physical sciences.

Example fields of applied science include the fields of engineering, which are closely related to the applied sciences. Applied science is important for technology development. Its use in industrial settings is usually referred to as research and development (R&D).

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...