In computational mathematics, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols.
Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications, which include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, such as simplification of expressions, differentiation using the chain rule, polynomial factorization, and indefinite integration.
Computer algebra is widely used to experiment in mathematics and to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, as in public key cryptography, or for some non-linear problems.
Terminology
Some authors distinguish computer algebra from symbolic computation, using the latter name to refer to kinds of symbolic computation other than computation with mathematical formulas. Some authors use "symbolic computation" for the computer science aspect of the subject and "computer algebra" for the mathematical aspect. In some languages the name of the field is not a direct translation of its English name. Typically, it is called calcul formel in French, which means "formal computation". This name reflects the ties this field has with formal methods.
Symbolic computation has also been referred to, in the past, as symbolic manipulation, algebraic manipulation, symbolic processing, symbolic mathematics, or symbolic algebra, but these terms, which also refer to non-computational manipulation, are no longer used in reference to computer algebra.
Scientific community
There is no learned society that is specific to computer algebra; this role is filled by the special interest group of the Association for Computing Machinery named SIGSAM (Special Interest Group on Symbolic and Algebraic Manipulation).
There are several annual conferences on computer algebra, the premier being ISSAC (International Symposium on Symbolic and Algebraic Computation), which is regularly sponsored by SIGSAM.
There are several journals specializing in computer algebra, the premier one being the Journal of Symbolic Computation, founded in 1985 by Bruno Buchberger. There are also several other journals that regularly publish articles in computer algebra.
Computer science aspects
Data representation
As numerical software is highly efficient for approximate numerical computation, it is common, in computer algebra, to emphasize exact
computation with exactly represented data. Such an exact representation
implies that, even when the size of the output is small, the
intermediate data generated during a computation may grow in an
unpredictable way. This behavior is called expression swell. To
obviate this problem, various methods are used in the representation of
the data, as well as in the algorithms that manipulate them.
Numbers
The usual number systems used in numerical computation are floating point numbers and integers of a fixed bounded size. Neither of these is convenient for computer algebra, due to expression swell.
Therefore, the basic numbers used in computer algebra are the
integers of the mathematicians, commonly represented by an unbounded
signed sequence of digits in some base of numeration, usually the largest base allowed by the machine word. These integers make it possible to define the rational numbers, which are irreducible fractions of two integers.
Programming an efficient implementation of the arithmetic operations is a hard task. Therefore, most free computer algebra systems, and some commercial ones such as Maple, use the GMP library, which is thus a de facto standard.
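As a minimal illustration in Python (standing in for the GMP-based machinery of actual systems), arbitrary-precision integers and irreducible fractions give exact results where floating point only approximates:

```python
from fractions import Fraction

# Python's built-in int is an arbitrary-precision integer: it never
# overflows, its digit sequence simply grows as needed.
print(3**100)

# Exact rational arithmetic: a Fraction is stored as an irreducible fraction.
print(Fraction(1, 10) + Fraction(2, 10))   # 3/10, exactly

# Floating-point arithmetic only approximates the same computation.
print(0.1 + 0.2)                           # 0.30000000000000004
```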
Expressions
Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence
of operands. In computer algebra software, the expressions are usually
represented in this way. This representation is very flexible, and many things that do not seem to be mathematical expressions at first glance may be represented and manipulated as such. For example, an equation is an expression with “=” as its operator, and a matrix may be represented as an expression with “matrix” as its operator and its rows as operands.
Even programs may be considered and represented as expressions
with the operator “procedure” and at least two operands: the list of
parameters and the body, which is itself an expression with “body” as an
operator and a sequence of instructions as operands. Conversely, any
mathematical expression may be viewed as a program. For example, the
expression a + b may be viewed as a program for the addition, with a and b as parameters. Executing this program consists in evaluating the expression for given values of a and b; if they do not have any value, that is, if they are indeterminates, the result of the evaluation is simply the expression itself.
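A minimal sketch of this representation, using nested Python tuples of the form (operator, operand, ...); the function name evaluate and the encoding are illustrative choices, not those of any particular system:

```python
# An expression is a number, a string naming an indeterminate,
# or a tuple ("operator", operand1, operand2, ...).
expr = ("+", "a", "b")   # represents a + b

def evaluate(e, env):
    """Evaluate e with the values in env; unbound names stay symbolic."""
    if isinstance(e, tuple):
        op, *args = e
        args = [evaluate(a, env) for a in args]
        if op == "+" and all(isinstance(a, (int, float)) for a in args):
            return sum(args)
        return (op, *args)            # cannot reduce further: stay symbolic
    return env.get(e, e) if isinstance(e, str) else e

print(evaluate(expr, {"a": 2, "b": 3}))   # 5
print(evaluate(expr, {"a": 2}))           # ('+', 2, 'b') -- delayed evaluation
```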
This process of delayed evaluation is fundamental in computer
algebra. For example, the operator “=” of equations is also, in most computer algebra systems, the name of the program for the equality test: normally, the evaluation of an equation results in an equation, but, when an equality test is needed (either explicitly asked for by the user through an “evaluation to a Boolean” command, or automatically started by the system in the case of a test inside a program), the evaluation to a Boolean 0 or 1 is executed.
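The following toy sketch mirrors this double role of “=”; the names make_eq and to_boolean are invented for the illustration, and the test here is purely syntactic:

```python
def make_eq(lhs, rhs):
    # Normal evaluation: an equation simply evaluates to an equation.
    return ("=", lhs, rhs)

def to_boolean(eq):
    # "Evaluation to a Boolean": only now is the equality actually tested
    # (here by a syntactic comparison of the two sides).
    _, lhs, rhs = eq
    return 1 if lhs == rhs else 0

equation = make_eq(("+", "x", 1), ("+", "x", 1))
print(equation)              # ('=', ('+', 'x', 1), ('+', 'x', 1))
print(to_boolean(equation))  # 1
```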
As the size of the operands of an expression is unpredictable and
may change during a working session, the sequence of the operands is
usually represented as a sequence of either pointers (as in Macsyma) or entries in a hash table (as in Maple).
Simplification
The raw application of the basic rules of differentiation with respect to x to the expression a^x gives the result x⋅a^(x−1)⋅0 + a^x⋅(1⋅log(a) + x⋅(0/a)).
Such a complicated expression is clearly not acceptable, and a
procedure of simplification is needed as soon as one works with general
expressions.
This simplification is normally done through rewriting rules.
There are several classes of rewriting rules that have to be
considered. The simplest class consists of rewriting rules that always
reduce the size of the expression, like E − E → 0 or sin(0) → 0. They are systematically applied in computer algebra systems.
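A minimal sketch, under the same tuple encoding of expressions as above, of how such size-reducing rules can be applied systematically; the rule set is deliberately tiny:

```python
def rewrite(e):
    """Apply the rules E - E -> 0 and sin(0) -> 0 bottom-up."""
    if not isinstance(e, tuple):
        return e
    op, *args = e
    args = [rewrite(a) for a in args]
    if op == "-" and len(args) == 2 and args[0] == args[1]:
        return 0                      # E - E -> 0
    if op == "sin" and args == [0]:
        return 0                      # sin(0) -> 0
    return (op, *args)

print(rewrite(("sin", ("-", "x", "x"))))   # 0
```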
The first difficulty occurs with associative operations
like addition and multiplication. The standard way to deal with
associativity is to consider that addition and multiplication have an
arbitrary number of operands, that is, that a + b + c is represented as "+"(a, b, c). Thus a + (b + c) and (a + b) + c are both simplified to "+"(a, b, c), which is displayed as a + b + c. What about a − b + c? To deal with this problem, the simplest way is to rewrite systematically −E, E − F, E/F as, respectively, (−1)⋅E, E + (−1)⋅F, E⋅F^(−1).
In other words, in the internal representation of the expressions,
there is no subtraction, division, or unary minus, outside the
representation of the numbers.
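A sketch of this normalization on the same tuple encoding: nested "+" and "*" are flattened, and "-", "/" and unary minus are eliminated as described (names and encoding are illustrative only):

```python
def normalize(e):
    """Flatten associative operators and remove -, / and unary minus."""
    if not isinstance(e, tuple):
        return e
    op, *args = e
    args = [normalize(a) for a in args]
    if op == "-" and len(args) == 1:          # -E     ->  (-1)*E
        return ("*", -1, args[0])
    if op == "-" and len(args) == 2:          # E - F  ->  E + (-1)*F
        return normalize(("+", args[0], ("*", -1, args[1])))
    if op == "/":                             # E / F  ->  E * F^(-1)
        return ("*", args[0], ("^", args[1], -1))
    if op in ("+", "*"):                      # flatten associativity
        flat = []
        for a in args:
            flat.extend(a[1:] if isinstance(a, tuple) and a[0] == op else [a])
        return (op, *flat)
    return (op, *args)

print(normalize(("+", ("-", "a", "b"), "c")))
# ('+', 'a', ('*', -1, 'b'), 'c')
```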
A second difficulty occurs with the commutativity of addition and multiplication. The problem is to recognize quickly the like terms
in order to combine or cancel them. In fact, the method of finding like terms by testing every pair of terms is too costly to be practicable with very long sums and products. To solve this problem, Macsyma
sorts the operands of sums and products with a comparison function designed so that like terms end up in consecutive places, where they are easily detected. In Maple, the hash function is designed to generate collisions when like terms are entered, so that they can be combined as soon as they are introduced. This design of the hash function also makes it possible to recognize immediately the expressions or subexpressions that appear several times in a computation, and to store them only once. This not only saves some memory space, but also speeds up computation by avoiding repetition of the same operations on several identical expressions.
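A toy version of the sorting approach: the operands of a flattened sum are sorted with a key that brings like terms together, and each run of equal terms is merged into a coefficient; the key repr is an arbitrary stand-in for the carefully designed comparison functions of real systems:

```python
from itertools import groupby

def collect_sum(operands):
    """Combine like terms of a sum given as a list of operands."""
    result = []
    # Sorting puts syntactically equal terms in consecutive places.
    for term, run in groupby(sorted(operands, key=repr)):
        n = len(list(run))
        result.append(term if n == 1 else ("*", n, term))
    return ("+", *result)

print(collect_sum(["x", "y", "x", "x"]))   # ('+', ('*', 3, 'x'), 'y')
```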
Some rewriting rules sometimes increase and sometimes decrease
the size of the expressions to which they are applied. This is the case
of distributivity or trigonometric identities. For example, the distributivity law allows rewriting (x + 1)^4 → x^4 + 4x^3 + 6x^2 + 4x + 1 and (x − 1)(x^4 + x^3 + x^2 + x + 1) → x^5 − 1.
As there is no way to make a good general choice of whether or not to apply such a rewriting rule, such rewritings are done only when explicitly asked for by the user. For the distributivity, the computer function
asked for by the user. For the distributivity, the computer function
that applies this rewriting rule is generally called "expand". The
reverse rewriting rule, called "factor", requires a non-trivial
algorithm, which is thus a key function in computer algebra systems.
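For instance, the open-source SymPy system exposes these two rewritings under exactly these names:

```python
from sympy import symbols, expand, factor

x = symbols('x')

# "expand" applies distributivity; here it increases the size of the expression.
print(expand((x + 1)**4))   # x**4 + 4*x**3 + 6*x**2 + 4*x + 1

# "factor" is the reverse rewriting; here it decreases the size.
print(factor(x**5 - 1))     # (x - 1)*(x**4 + x**3 + x**2 + x + 1)
```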
Mathematical aspects
In this section we consider some fundamental mathematical questions that arise as soon as one wants to manipulate mathematical expressions in a computer. We consider mainly the case of multivariate rational fractions. This is not a real restriction, because, as soon as the irrational functions appearing in an expression are simplified, they are usually considered as new indeterminates. For example, (sin(x^2 + 1)⋅e^(1−x))^3 is viewed as a polynomial in sin(x^2 + 1) and e^(1−x).
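The following SymPy sketch makes this concrete on an expression of my own choosing: the irrational subexpressions are replaced by fresh indeterminates s and c, after which the result is an ordinary polynomial:

```python
from sympy import symbols, sin, cos, Poly

x, s, c = symbols('x s c')

# An expression built from irrational functions of x ...
expr = sin(x**2 + 1)**3 + 2*sin(x**2 + 1)*cos(x) + cos(x)**2

# ... is treated as a polynomial in new indeterminates standing for
# sin(x**2 + 1) and cos(x).
poly_form = expr.subs({sin(x**2 + 1): s, cos(x): c})
print(poly_form)              # s**3 + 2*c*s + c**2 (up to term order)
print(Poly(poly_form, s, c))
```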
Equality
There are two notions of equality for mathematical expressions. The syntactic equality
is the equality of expressions, which means that they are written
(or represented in a computer) in the same way. Being trivial, the
syntactic equality is rarely considered by mathematicians, although it
is the only equality that is easy to test with a program. The semantic equality is when two expressions represent the same mathematical object, as in (x + y)^2 = x^2 + 2xy + y^2.
It is known from Richardson's theorem
that there cannot exist an algorithm that decides whether two expressions representing numbers are semantically equal if exponentials and logarithms are allowed in the expressions. Therefore, (semantic) equality may be tested only on some classes of expressions such as polynomials and rational fractions.
To test the equality of two expressions, instead of designing specific algorithms, it is usual to put expressions in some canonical form or to put their difference in a normal form, and to test the syntactic equality of the result.
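As a concrete illustration with SymPy, where structural comparison with == is syntactic, and expansion or cancellation brings the difference of two expressions to a normal form that can be compared with zero:

```python
from sympy import symbols, expand, cancel

x, y = symbols('x y')

lhs = (x + y)**2
rhs = x**2 + 2*x*y + y**2

# Syntactic equality fails: the two expressions are written differently.
print(lhs == rhs)                                      # False

# Semantic equality: put the difference in a normal form and test for zero.
print(expand(lhs - rhs) == 0)                          # True
print(cancel((x**2 - y**2)/(x - y) - (x + y)) == 0)    # True
```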
Unlike in usual mathematics, "canonical form" and "normal form" are not synonymous in computer algebra. A canonical form is such that two expressions in canonical form are semantically equal if and only if they are syntactically equal, while a normal form
is such that an expression in normal form is semantically zero only if
it is syntactically zero. In other words, zero has a unique
representation by expressions in normal form.
Normal forms are usually preferred in computer algebra for
several reasons. Firstly, canonical forms may be more costly to compute
than normal forms. For example, to put a polynomial in canonical form,
one has to expand by distributivity
every product, while this is not necessary with a normal form (see
below). Secondly, it may be the case, as for expressions involving
radicals, that a canonical form, if it exists, depends on some arbitrary
choices and that these choices may be different for two expressions
that have been computed independently. This may make the use of a canonical form impracticable.
History
At the beginning of computer algebra, circa 1970, when the long-known algorithms were first put on computers, they turned out to be highly inefficient. Therefore, a large part of the work of the researchers in the field consisted in revisiting classical algebra in order to make it effective and to discover efficient algorithms to implement this effectiveness. A typical example of this kind of work is the computation of polynomial greatest common divisors, which is required to simplify fractions. Surprisingly, Euclid's classical algorithm
turned out to be inefficient for polynomials over infinite fields, and
thus new algorithms needed to be developed. The same was also true for
the classical algorithms from linear algebra.