Friday, May 31, 2019

Space research

From Wikipedia, the free encyclopedia

The breadth of space research is exemplified by the Mars Opportunity rover mission
 
The first major scientific discovery made from space was of the hazardous Van Allen radiation belts
 
Space research is scientific study carried out in outer space, and by studying outer space. From the use of space technology to the observable universe, space research is a wide field. Earth science, materials science, biology, medicine, and physics all apply to the space research environment. The term includes scientific payloads at any altitude, from deep space to low Earth orbit, and extends to sounding rocket research in the upper atmosphere and high-altitude balloon flights.
Space exploration is also a form of space research.

History

The first US satellite was Explorer 1, seen here launching on February 1, 1958
 
First image of the far side of the Moon, sent back to Earth by the Luna 3 mission

Rockets

The Chinese had used rockets in ceremony and as weaponry since the 13th century, but no rocket would overcome Earth's gravity until the latter half of the 20th century. Space-capable rocketry appeared almost simultaneously in the work of three scientists in three separate countries: Konstantin Tsiolkovsky in Russia, Robert Goddard in the United States, and Hermann Oberth in Germany.

The United States and the Soviet Union created their own missile programs. The space research field evolved as scientific investigation based on advancing rocket technology. 

In 1948–1949, detectors on V-2 rocket flights detected X-rays from the Sun. Sounding rockets helped reveal the structure of the upper atmosphere. As higher altitudes were reached, space physics emerged as a field of research, with studies of Earth's aurorae, ionosphere and magnetosphere.

Artificial satellites

The first artificial satellite, the Russian Sputnik 1, launched on October 4, 1957, four months before the United States' first, Explorer 1. The first major discovery of satellite research came in 1958, when Explorer 1 detected the Van Allen radiation belts. Planetology reached a new stage with the Russian Luna programme between 1959 and 1976, a series of lunar probes that returned evidence of the Moon's chemical composition, gravity and temperature, collected soil samples, sent back the first photographs of the far side of the Moon (Luna 3), and landed the first remotely controlled robots (Lunokhod) on another planetary body.

Yuri Gagarin was the first human to fly in space, aboard Vostok 1, the first flight of the Vostok programme

Astronauts

The early space researchers obtained an important international forum with the establishment of the Committee on Space Research (COSPAR) in 1958, which achieved an exchange of scientific information between East and West during the Cold War, despite the military origin of the rocket technology underlying the research field.

On April 12, 1961, Russian lieutenant Yuri Gagarin became the first human to orbit Earth, aboard Vostok 1. On May 5, 1961, US astronaut Alan Shepard became the first American in space. And on July 20, 1969, astronaut Neil Armstrong became the first human on the Moon.

On April 19, 1971, the Soviet Union launched Salyut 1, the first space station. Its one successful occupation, a 23-day mission, ended in tragedy when the crew died during reentry. On May 14, 1973, Skylab, the first American space station, was launched on a modified Saturn V rocket. Skylab was occupied for a total of 24 weeks.

Interstellar

"Ultima Thule" (2014 MU69) is the farthest object visited by human spacecraft
 
"Ultima Thule" is the nickname of the farthest and most primitive object visited by human spacecraft. Originally named "1110113Y" when detected by Hubble in 2014, the planetessimal was reached by the New Horizons probe on 1st January 2019 after a week long manoeuvering phase. New Horizons detected Ultima Thule from 107 million miles and performed a total 9 days of manoeuvres to pass within 3,500 miles of the 19 mile long contact binary. Ultima Thule has an orbital period around 298 years, is 4.1 billion miles from Earth, and over 1 billion miles beyond Pluto

Two Pioneer probes and the New Horizons probe are also expected to enter the interstellar medium, but all three are expected to have exhausted their available power before then, so their points of exit cannot be confirmed precisely. Predicting a probe's speed is imprecise as it passes through the variable heliosphere. As of 2019, Pioneer 10 is roughly at the outer edge of the heliosphere; New Horizons should reach it by about 2040, and Pioneer 11 by about 2060.

The Voyager 1 probe, launched on September 5, 1977, flew beyond the edge of our solar system into the interstellar medium in August 2012. It is the farthest human-made object from Earth; predictions for its future include collisions, a passage through the Oort cloud, and a destiny "perhaps eternally—to wander the Milky Way."

Voyager 2, launched on August 20, 1977, travels more slowly than Voyager 1 and reached the interstellar medium by the end of 2018. Voyager 2 is the only Earth probe to have visited the ice giants Uranus and Neptune.
 
Neither Voyager is aimed at a particular visible object. Both continue to send research data to NASA's Deep Space Network as of 2019.

Two Voyager probes have thus reached the interstellar medium, and three other probes are expected to join that list.

Research fields

Space research spans many fields of science, including Earth science, materials science, biology, medicine, and physics.

Space research from artificial satellites

Upper Atmosphere Research Satellite

The Upper Atmosphere Research Satellite (UARS) was a NASA-led mission launched on September 12, 1991. The 5,900 kg (13,000 lb) satellite was deployed from the Space Shuttle Discovery during the STS-48 mission on September 15, 1991. It was the first multi-instrumented satellite to study various aspects of the Earth's atmosphere and to build a better understanding of its photochemistry. After 14 years of service, UARS ended its scientific career in 2005.

Great Observatories program

Great Observatories program telescopes are combined for enhanced detail in this image of the Crab Nebula
 
The Great Observatories program is NASA's flagship telescope program. It advances our understanding of the universe through detailed observation of the sky across the gamma-ray, X-ray, ultraviolet, visible, and infrared parts of the spectrum. The four main telescopes of the Great Observatories program are the Hubble Space Telescope (visible, ultraviolet), launched 1990; the Compton Gamma Ray Observatory (gamma ray), launched 1991; the Chandra X-ray Observatory (X-ray), launched 1999; and the Spitzer Space Telescope (infrared), launched 2003.

The origins of Hubble, named after American astronomer Edwin Hubble, go back as far as 1946. Today, Hubble is used to identify exoplanets and to give detailed accounts of events in our own solar system. Hubble's visible-light observations are combined with those of the other Great Observatories to produce some of the most detailed images of the visible universe.

International Gamma-Ray Astrophysics Laboratory

INTEGRAL is one of the most powerful gamma-ray observatories. Launched by the European Space Agency in 2002, it continues to operate (as of March 2019). INTEGRAL provides insight into the most energetic formations in space, including black holes, neutron stars, and supernovae, and plays an important role in researching gamma rays, among the most exotic and energetic phenomena in space.

Gravity and Extreme Magnetism Small Explorer

The NASA-led GEMS mission was scheduled to launch in November 2014. The spacecraft would have used an X-ray telescope to measure the polarization of X-rays coming from black holes and neutron stars, and would have studied the remnants of supernovae, stars that have exploded. Few experiments in X-ray polarization have been conducted since the 1970s, and scientists anticipated that GEMS would break new ground. Understanding X-ray polarization would improve scientists' knowledge of black holes, in particular whether the matter around a black hole is confined to a flat disk, a puffed-up disk, or a squirting jet. The GEMS project was cancelled in June 2012 after it was projected to exceed its schedule and budget limits. The scientific purpose of the GEMS mission remains relevant (as of 2019).

Space research on space stations

The Russian station Mir was the first long-term inhabited station

Salyut 1

Salyut 1 was the first space station ever built. It was launched on April 19, 1971 by the Soviet Union. The first crew was unable to enter the station. The second crew spent twenty-three days aboard, but this achievement was quickly overshadowed when the crew died during reentry to Earth. Salyut 1 was intentionally deorbited after six months in orbit, as it was prematurely running out of fuel.

Skylab

Skylab was the first American space station. It was four times larger than Salyut 1. Skylab was launched on May 14, 1973. It rotated through three crews of three during its operational life. Skylab's experiments confirmed the existence of coronal holes and photographed eight solar flares.

Mir

The International Space Station of today is a modern research facility
 
The Russian station Mir, in service from 1986 to 2001, was the first long-term inhabited space station. Occupied in low Earth orbit for twelve and a half years, Mir served as a permanent microgravity laboratory. Crews experimented with biology, human biology, physics, astronomy, meteorology and spacecraft systems. Goals included developing technologies for the permanent occupation of space.

International Space Station

The International Space Station received its first crew, Expedition 1, in November 2000, an internationally cooperative mission with almost 20 participating nations. As of May 2019, the station has been continuously occupied for 18 years and 202 days, exceeding the previous record of almost ten years held by the Russian station Mir. The ISS provides a platform for research in microgravity and on exposure to the local space environment. Crew members conduct experiments in biology, physics, astronomy, and other fields. Even studying the experience and health of the crew advances space research.

Computer algebra

From Wikipedia, the free encyclopedia

In computational mathematics, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols.

Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications. These include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, like simplification of expressions, differentiation using the chain rule, polynomial factorization, and indefinite integration.
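
As a concrete illustration, here is a minimal session with SymPy, an open-source computer algebra system for Python (the choice of library is ours; the article names no specific system for this), exercising the kinds of routines just listed on arbitrary example expressions:

    import sympy

    x = sympy.symbols('x')

    # Exact symbolic results, not floating point approximations
    expr = (x**2 - 1) / (x - 1)
    print(sympy.simplify(expr))              # x + 1
    print(sympy.diff(sympy.sin(x**2), x))    # 2*x*cos(x**2)  (chain rule)
    print(sympy.factor(x**3 - x))            # x*(x - 1)*(x + 1)
    print(sympy.integrate(x**2, x))          # x**3/3  (indefinite integration)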

Computer algebra is widely used to experiment in mathematics and to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, as in public key cryptography, or for some non-linear problems.

Terminology

Some authors distinguish computer algebra from symbolic computation, using the latter name to refer to kinds of symbolic computation other than computation with mathematical formulas. Some authors use symbolic computation for the computer science aspect of the subject and "computer algebra" for the mathematical aspect. In some languages the name of the field is not a direct translation of its English name. Typically, it is called calcul formel in French, which means "formal computation". This name reflects the ties this field has with formal methods.

Symbolic computation has also been referred to, in the past, as symbolic manipulation, algebraic manipulation, symbolic processing, symbolic mathematics, or symbolic algebra, but these terms, which also refer to non-computational manipulation, are no longer used in reference to computer algebra.

Scientific community

There is no learned society that is specific to computer algebra, but this function is assumed by the special interest group of the Association for Computing Machinery named SIGSAM (Special Interest Group on Symbolic and Algebraic Manipulation).

There are several annual conferences on computer algebra, the premier being ISSAC (International Symposium on Symbolic and Algebraic Computation), which is regularly sponsored by SIGSAM.

There are several journals specializing in computer algebra, the top one being Journal of Symbolic Computation founded in 1985 by Bruno Buchberger. There are also several other journals that regularly publish articles in computer algebra.

Computer science aspects

Data representation

As numerical software is highly efficient for approximate numerical computation, it is common, in computer algebra, to emphasize exact computation with exactly represented data. Such an exact representation implies that, even when the size of the output is small, the intermediate data generated during a computation may grow in an unpredictable way. This behavior is called expression swell. To obviate this problem, various methods are used in the representation of the data, as well as in the algorithms that manipulate them.
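
A simple way to see this growth with exact arithmetic is to sum a series of exact rational numbers: the value stays small, but the exact fraction representing it grows with every term. A minimal Python sketch (an illustrative example, not from the article):

    from fractions import Fraction

    # Harmonic sum with exact rational arithmetic: the value stays near 4,
    # but the exact representation grows with every term.
    total = Fraction(0)
    for k in range(1, 31):
        total += Fraction(1, k)

    print(float(total))   # approximately 3.995
    print(total)          # a ratio of two very large integers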

Numbers

The usual number systems used in numerical computation are floating point numbers and integers of a fixed bounded size. Neither is convenient for computer algebra, due to expression swell.

Therefore, the basic numbers used in computer algebra are the integers of the mathematicians, commonly represented by an unbounded signed sequence of digits in some base of numeration, usually the largest base allowed by the machine word. These integers allow one to define the rational numbers, which are irreducible fractions of two integers.

Programming an efficient implementation of the arithmetic operations is a hard task. Therefore, most free computer algebra systems, and some commercial ones such as Maple, use the GMP library, which is thus a de facto standard.
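
Python's built-in int is itself an arbitrary-precision integer type, which makes the idea easy to demonstrate (an illustrative sketch; the gmpy2 package mentioned in the comment is one common Python binding to GMP, an aside not taken from the article):

    from math import factorial

    # Arbitrary-precision integers: no overflow, the representation just grows.
    n = factorial(100)
    print(len(str(n)))    # 158 decimal digits, far beyond any machine word
    print(str(n)[:10])    # 9332621544...

    # For performance-critical work, bindings to GMP (e.g. the gmpy2
    # package) provide the same unbounded semantics with faster arithmetic.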

Expressions

Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer algebra software, expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, and a matrix may be represented as an expression with "matrix" as an operator and its rows as operands.
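
SymPy exposes this operator-and-operands representation directly: every expression has a head operator (func) and a tuple of operands (args). A small illustration (SymPy being an example system, not one named in the article):

    import sympy

    x, y = sympy.symbols('x y')
    expr = x + 2*y

    print(expr.func)          # <class 'sympy.core.add.Add'> -- the operator
    print(expr.args)          # (x, 2*y)                     -- the operands
    print(sympy.srepr(expr))  # Add(Symbol('x'), Mul(Integer(2), Symbol('y')))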

Even programs may be considered and represented as expressions with operator "procedure" and, at least, two operands, the list of parameters and the body, which is itself an expression with "body" as an operator and a sequence of instructions as operands. Conversely, any mathematical expression may be viewed as a program. For example, the expression a + b may be viewed as a program for the addition, with a and b as parameters. Executing this program consists of evaluating the expression for given values of a and b; if they do not have any value (that is, they are indeterminates), the result of the evaluation is simply its input.

This process of delayed evaluation is fundamental in computer algebra. For example, the operator "=" of equations is also, in most computer algebra systems, the name of the program for the equality test: normally, the evaluation of an equation results in an equation, but when an equality test is needed (either explicitly requested by the user through an "evaluation to a Boolean" command, or automatically started by the system in the case of a test inside a program), then an evaluation to a Boolean 0 or 1 is executed.
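
SymPy illustrates this distinction: constructing an equation keeps it symbolic, while an explicit request triggers the semantic test (a minimal sketch with arbitrary expressions):

    import sympy

    x = sympy.symbols('x')
    eq = sympy.Eq(x**2 - 1, (x - 1)*(x + 1))

    print(eq)   # Eq(x**2 - 1, (x - 1)*(x + 1)) -- stays a symbolic equation
    print((x**2 - 1).equals((x - 1)*(x + 1)))   # True -- explicit evaluation to a Boolean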

As the size of the operands of an expression is unpredictable and may change during a working session, the sequence of operands is usually represented as a sequence of either pointers (as in Macsyma) or entries in a hash table (as in Maple).

Simplification

The raw application of the basic rules of differentiation with respect to x on the expression a^x gives the result x⋅a^(x−1)⋅0 + a^x⋅(1⋅log a + x⋅(0/a)). Such a complicated expression is clearly not acceptable, and a procedure of simplification is needed as soon as one works with general expressions.
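
The same effect is easy to reproduce: SymPy's derivative of a quotient comes back in raw quotient-rule form until a simplification pass is requested (the expressions here are our own illustrative choices):

    import sympy

    x = sympy.symbols('x')
    raw = sympy.diff((x**2 - 1)/(x - 1), x)

    print(raw)                  # 2*x/(x - 1) - (x**2 - 1)/(x - 1)**2
    print(sympy.simplify(raw))  # 1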

This simplification is normally done through rewriting rules. There are several classes of rewriting rules to consider. The simplest are the rules that always reduce the size of the expression, like E − E → 0 or sin(0) → 0. They are systematically applied in computer algebra systems.

The first difficulty occurs with associative operations like addition and multiplication. The standard way to deal with associativity is to consider that addition and multiplication have an arbitrary number of operands, that is, that a + b + c is represented as "+"(a, b, c). Thus a + (b + c) and (a + b) + c are both simplified to "+"(a, b, c), which is displayed as a + b + c. What about a − b + c? The simplest way to deal with this problem is to rewrite systematically −E, E − F, and E/F as, respectively, (−1)⋅E, E + (−1)⋅F, and E⋅F^(−1). In other words, in the internal representation of the expressions, there is no subtraction, division, or unary minus outside the representation of the numbers.
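
This is exactly how SymPy stores expressions internally; printing the internal form of a − b + c shows the flattened n-ary addition, with the subtraction rewritten as a (−1) multiple (a small illustrative check):

    import sympy

    a, b, c = sympy.symbols('a b c')
    expr = a - b + c

    # Flattened n-ary addition, with subtraction stored as (-1)*b
    print(sympy.srepr(expr))
    # Add(Symbol('a'), Mul(Integer(-1), Symbol('b')), Symbol('c'))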

A second difficulty occurs with the commutativity of addition and multiplication. The problem is to recognize quickly the like terms, in order to combine or cancel them. Testing every pair of terms is too costly to be practicable with very long sums and products. To solve this problem, Macsyma sorts the operands of sums and products with a comparison function designed so that like terms end up in consecutive places, where they are easily detected. In Maple, the hash function is designed to generate collisions when like terms are entered, allowing them to be combined as soon as they are introduced. This design of the hash function also allows expressions or subexpressions that appear several times in a computation to be recognized immediately and stored only once. This not only saves memory but also speeds up computation, by avoiding the repetition of the same operations on identical expressions.
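
The storage-sharing idea, often called hash consing, can be sketched in a few lines of Python: a table keyed by the operator and operands guarantees that structurally identical subexpressions are built only once (a minimal sketch, not Maple's actual implementation):

    # Minimal hash-consing sketch: identical (operator, operands) pairs
    # are constructed once and shared thereafter.
    _table = {}

    def make(op, *args):
        key = (op, args)
        if key not in _table:
            _table[key] = ("expr", op, args)
        return _table[key]

    e1 = make("+", "x", make("*", 2, "y"))
    e2 = make("+", "x", make("*", 2, "y"))
    print(e1 is e2)   # True: the same object, stored only once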

Some rewriting rules sometimes increase and sometimes decrease the size of the expressions to which they are applied. This is the case for distributivity and trigonometric identities. For example, the distributivity law allows rewriting (x + 1)(x − 1) → x² − 1 and (x − 1)(x⁴ + x³ + x² + x + 1) → x⁵ − 1. As there is no way to make a good general choice of whether or not to apply such a rewriting rule, such rewritings are done only when explicitly asked for by the user. For distributivity, the computer function that applies this rewriting rule is generally called "expand". The reverse rewriting rule, called "factor", requires a non-trivial algorithm, which is thus a key function in computer algebra systems.
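
Both directions are standard library calls; the pair below mirrors the two rewrites above (using SymPy for illustration):

    import sympy

    x = sympy.symbols('x')

    print(sympy.expand((x - 1)*(x**4 + x**3 + x**2 + x + 1)))  # x**5 - 1
    print(sympy.factor(x**5 - 1))  # (x - 1)*(x**4 + x**3 + x**2 + x + 1)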

Mathematical aspects

In this section we consider some fundamental mathematical questions that arise as soon as one wants to manipulate mathematical expressions in a computer. We consider mainly the case of multivariate rational fractions. This is not a real restriction, because, as soon as the irrational functions appearing in an expression are simplified, they are usually considered as new indeterminates. For example, an expression such as sin(x) + √(x² + 1) is viewed as a polynomial in sin(x) and √(x² + 1).

Equality

There are two notions of equality for mathematical expressions. Syntactic equality is the equality of the expressions themselves, meaning that they are written (or represented in a computer) in the same way. Being trivial, syntactic equality is rarely considered by mathematicians, although it is the only equality that is easy to test with a program. Semantic equality is when two expressions represent the same mathematical object, as in (x + 1)² = x² + 2x + 1.
It is known from Richardson's theorem that there may not exist an algorithm that decides whether two expressions representing numbers are semantically equal, if exponentials and logarithms are allowed in the expressions. Therefore, (semantic) equality may be tested only on some classes of expressions, such as polynomials and rational fractions.

To test the equality of two expressions, instead of designing specific algorithms, it is usual to put expressions in some canonical form or to put their difference in a normal form, and to test the syntactic equality of the result. 
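
In SymPy terms, putting the difference into a normal form is the usual test: expand the difference (for polynomials, a canonical form) and compare with zero (an illustrative sketch, not the article's own example):

    import sympy

    x = sympy.symbols('x')
    e1 = (x + 1)**2
    e2 = x**2 + 2*x + 1

    print(e1 == e2)                    # False: syntactic equality fails
    print(sympy.expand(e1 - e2) == 0)  # True: the difference reduces to zero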

Unlike in usual mathematics, "canonical form" and "normal form" are not synonymous in computer algebra. A canonical form is such that two expressions in canonical form are semantically equal if and only if they are syntactically equal, while a normal form is such that an expression in normal form is semantically zero only if it is syntactically zero. In other words, zero has a unique representation by expressions in normal form. 

Normal forms are usually preferred in computer algebra for several reasons. Firstly, canonical forms may be more costly to compute than normal forms. For example, to put a polynomial in canonical form, one has to expand every product by distributivity, while this is not necessary with a normal form. Secondly, it may be the case, as for expressions involving radicals, that a canonical form, if it exists, depends on some arbitrary choices, and that these choices may differ for two expressions that have been computed independently. This may make the use of a canonical form impracticable.

History

At the beginning of computer algebra, circa 1970, when the long-known algorithms were first put on computers, they turned out to be highly inefficient. Therefore, a large part of the work of the researchers in the field consisted in revisiting classical algebra in order to make it effective and to discover efficient algorithms to implement this effectiveness. A typical example of this kind of work is the computation of polynomial greatest common divisors, which is required to simplify fractions. Surprisingly, the classical Euclid's algorithm turned out to be inefficient for polynomials over infinite fields, and thus new algorithms needed to be developed. The same was also true for the classical algorithms from linear algebra.
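
The phenomenon is easy to reproduce. Running the textbook Euclidean algorithm on a classic pair of coprime integer polynomials (the example pair popularized in Knuth's discussion of this problem) produces remainders whose rational coefficients explode, even though the final answer is a constant; a short SymPy sketch:

    import sympy

    x = sympy.symbols('x')
    # A classic example pair: coprime, yet the naive remainder sequence
    # over the rationals develops very large coefficients.
    f = x**8 + x**6 - 3*x**4 - 3*x**3 + 8*x**2 + 2*x - 5
    g = 3*x**6 + 5*x**4 - 4*x**2 - 9*x + 21

    while g != 0:
        print(g)                    # watch the coefficients grow
        f, g = g, sympy.rem(f, g, x)

    print(f)   # a rational constant: the gcd is 1 up to a unit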

Automated theorem proving

From Wikipedia, the free encyclopedia

Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.

Logical foundations

While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automatisation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems.

In 1929, Mojżesz Presburger showed that the theory of natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system there are true statements which cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples for undecidable questions.

First implementations

Shortly after World War II, the first general purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum tube computer at the Princeton Institute for Advanced Study. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even". More ambitious was the Logic Theory Machine in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theory Machine constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia.

The "heuristic" approach of the Logic Theory Machine tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious.

Decidability of the problem

Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so identifying valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized.

The above applies to first-order theories, such as Peano arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any theory whose proper axioms are true for the natural numbers cannot prove all first-order statements true for the natural numbers, even if the list of proper axioms is allowed to be infinite and enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first-order theory (such as the integers).

Related problems

A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable. 
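
A sketch of why verification is decidable: if every step must be an instance of a known rule, checking a candidate proof is a simple scan. Below, a deliberately minimal toy checker that accepts only modus ponens steps over formulas represented as strings and tuples (our own illustration, not any real system's format):

    def check_proof(axioms, steps):
        """Each step must be an axiom or follow by modus ponens from
        two earlier lines: from A and ('->', A, B), conclude B."""
        proved = list(axioms)
        for s in steps:
            ok = s in proved or any(
                imp == ('->', a, s) for imp in proved for a in proved)
            if not ok:
                return False
            proved.append(s)
        return True

    # From p and p -> q, derive q; then q -> r gives r.
    axioms = ['p', ('->', 'p', 'q'), ('->', 'q', 'r')]
    print(check_proof(axioms, ['q', 'r']))  # True
    print(check_proof(axioms, ['r', 'q']))  # False: r used before q is derived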

Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed. 

Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture. However, these successes are sporadic, and work on hard problems usually requires a proficient user. 

Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force). 

There are hybrid theorem proving systems which use model checking as an inference rule. There are also programs which were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof which was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player.

Industrial uses

Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors.

First-order theorem proving

In the late 1960s, agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was program verification, whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, and Java. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover, also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published.

First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. More expressive logics, such as Higher-order logics, allow the convenient expression of a wider range of problems than first order logic, but theorem proving for these logics is less well developed.
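
The resolution principle mentioned above is, at the propositional level, a single clause-combining rule: from a clause containing a literal and another containing its negation, derive the clause of the remaining literals. A tiny sketch (clauses as frozensets of signed atoms; full first-order resolution additionally requires unification, omitted here):

    def resolve(c1, c2):
        """Return all resolvents of two clauses.

        A literal is a pair (atom, polarity); clauses are frozensets.
        """
        resolvents = []
        for atom, polarity in c1:
            if (atom, not polarity) in c2:
                rest = (c1 - {(atom, polarity)}) | (c2 - {(atom, not polarity)})
                resolvents.append(frozenset(rest))
        return resolvents

    # From {p, q} and {not p, r}, resolution on p yields {q, r}.
    c1 = frozenset({('p', True), ('q', True)})
    c2 = frozenset({('p', False), ('r', True)})
    print(resolve(c1, c2))   # [frozenset({('q', True), ('r', True)})]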

Benchmarks, competitions, and sources

The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples — the Thousands of Problems for Theorem Provers (TPTP) Problem Library — as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems. 

A number of important systems have each won at least one division of the CASC competition.
The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural and scientific artefacts. It holds the sources of many of these systems.

Popular techniques

Software systems

Free software

Proprietary software

Notable people
