In computer programming, operators are constructs defined within programming languages which behave generally like functions, but which differ syntactically or semantically.
Common simple examples include arithmetic (e.g. addition with +), comparison (e.g. "greater than" with >), and logical operations (e.g. AND, also written && in some languages). More involved examples include assignment (usually = or :=), field access in a record or object (usually .), and the scope resolution operator (often :: or .).
Languages usually define a set of built-in operators, and in some cases
allow users to add new meanings to existing operators or even define
completely new operators.
Syntax
Syntactically, operators usually contrast with functions. In most languages, functions may be seen as a special form of prefix operator with a fixed precedence level and associativity, often with compulsory parentheses, e.g. Func(a) (or (Func a) in Lisp).
Most languages support programmer-defined functions, but cannot really
claim to support programmer-defined operators, unless they have more
than prefix notation and more than a single precedence level.
Semantically, operators can be seen as a special form of function with a different calling notation and a limited number of parameters (usually one or two).
The position of the operator with respect to its operands may be prefix, infix or postfix (suffix), and the syntax of an expression involving an operator depends on its arity (number of operands), precedence, and (if applicable) associativity. Most programming languages support binary operators and a few unary operators, with a few supporting more operands, such as the ?: operator in C, which is ternary. There are prefix unary operators, such as unary minus -x, and postfix unary operators, such as post-increment x++; binary operations are infix, such as x + y or x = y. Infix operations of higher arity require additional symbols, such as the ternary operator ?: in C, written as a ? b : c; indeed, since this is the only common example, it is often referred to as the ternary operator. Prefix and postfix operations can support any desired arity, however, such as 1 2 3 4 +.
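A brief C++ sketch of these positions and arities (the variable names are chosen only for illustration):

#include <iostream>

int main() {
    int x = 5, y = 3;
    int neg    = -x;                // prefix unary operator (unary minus)
    int post   = x++;               // postfix unary operator (post-increment): post is 5, x becomes 6
    int sum    = x + y;             // infix binary operator
    int larger = (x > y) ? x : y;   // the ternary conditional operator ?:
    std::cout << neg << ' ' << post << ' ' << sum << ' ' << larger << '\n';
}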
Occasionally parts of a language may be described as "matchfix" or "circumfix" or "bifix" operators, either to simplify the language's description or implementation. A circumfix operator consists of two or more parts which enclose its operands. Circumfix operators have the highest precedence, with their contents being evaluated and the resulting value used in the surrounding expression. The most familiar circumfix operators are the parentheses mentioned above, used to indicate which parts of an expression are to be evaluated before others. Another example from physics is the inner product notation of Dirac's bra–ket notation. Circumfix operators are especially useful to denote operations that involve many or varying numbers of operands.
The specification of a language will specify the syntax of the operators it supports, while languages such as Prolog, which support programmer-defined operators, require that the syntax be defined by the programmer.
Semantics
The semantics of operators particularly depends on value, evaluation strategy, and argument passing mode (such as Boolean short-circuiting). Simply, an expression involving an operator is evaluated in some way, and the resulting value may be just a value (an r-value), or may be an object allowing assignment (an l-value).
In simple cases this is identical to usual function calls; for example, addition x + y is generally equivalent to a function call add(x, y) and less-than comparison x < y to lt(x, y), meaning that the arguments are evaluated in their usual way, then some function is evaluated and the result is returned as a value. However, the semantics can be significantly different. For example, in assignment a = b the target a is not evaluated, but instead its location (address) is used to store the value of b, corresponding to call-by-reference semantics. Further, an assignment may be a statement (no value), or may be an expression (value), with the value itself either an r-value (just a value) or an l-value (able to be assigned to). As another example, the scope resolution operator :: and the element access operator . (as in Foo::Bar or a.b) operate not on values, but on names, essentially call-by-name semantics, and their value is a name.
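A minimal C++ sketch of that distinction (the names Foo, Bar, S, a and b are purely illustrative):

namespace Foo { int Bar = 1; }   // Foo::Bar resolves the name Bar within the scope Foo
struct S { int b; };

int main() {
    S a{2};
    int x = Foo::Bar;   // :: operates on a name, not on a runtime value
    int y = a.b;        // . selects the member named b of the object a
    return x + y;
}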
Use of l-values as operator operands is particularly notable in unary increment and decrement operators. In C, for instance, the following statement is legal and well-defined, and depends on the fact that array indexing returns an l-value:
x = ++a[i];
An important use is when a left-associative binary operator modifies its left argument (or produces a side effect) and then evaluates to that argument as an l-value. This allows a sequence of operators all affecting the original argument, allowing a fluent interface, similar to method cascading. A common example is the << operator in the C++ iostream library, which allows fluent output, as follows:
cout << "Hello" << " " << "world!" << endl;
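The chaining works because each application of << returns a reference to the stream, which then becomes the left operand of the next <<. A minimal sketch of the same idea for a hypothetical Logger type (the type and its member are assumptions made for this illustration):

#include <iostream>
#include <string>

struct Logger {
    std::string buffer;
};

// Returning the Logger by reference lets each << feed the next one.
Logger& operator<<(Logger& log, const std::string& text) {
    log.buffer += text;
    return log;
}

int main() {
    Logger log;
    log << "Hello" << " " << "world!";   // each << evaluates to log itself (an l-value)
    std::cout << log.buffer << '\n';
}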
User-defined operators
A language may contain a fixed number of built-in operators (e.g. +, -, *, <, <=, !, =, etc. in C and C++, PHP), or it may allow the creation of programmer-defined operators (e.g. Prolog, Seed7, F#, OCaml, Haskell). Some programming languages restrict operator symbols to special characters like + or :=, while others also allow names like div (e.g. Pascal).
Most languages have a built-in set of operators, but do not allow user-defined operators, as this significantly complicates parsing. Many languages only allow operators to be used for built-in types, but others allow existing operators to be used for user-defined types; this is known as operator overloading. Some languages allow new operators to be defined, however, either at compile time or at run time. This may involve meta-programming (specifying the operators in a separate language), or within the language itself. Definition of new operators, particularly runtime definition, often makes correct static analysis of programs impossible, since the syntax of the language may be Turing-complete, so even constructing the syntax tree may require solving the halting problem, which is impossible. This occurs for Perl, for example, and some dialects of Lisp.
Examples
Common examples that differ from functions syntactically are relational operators, e.g. ">" for "greater than", with names often outside the language's set of identifiers for functions, and called with a syntax different from the language's syntax for calling functions. As a function, "greater than" would generally be named by an identifier, such as gt or greater_than, and called as a function, as gt(x, y). Instead, the operation uses the special character > (which is tokenized separately during lexical analysis) and infix notation, as x > y.
Common examples that differ semantically (by argument passing mode) are Boolean operations, which frequently feature short-circuit evaluation: e.g. a short-circuiting conjunction (X AND Y) that only evaluates later arguments if earlier ones are not false, in a language with strict call-by-value functions. This behaves instead similarly to if/then/else.
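A short C++ sketch of short-circuit evaluation; the helper function is_positive exists only for this illustration:

#include <iostream>

bool is_positive(const int* p) {
    return *p > 0;   // dereferences p; undefined behaviour if p is null
}

int main() {
    int* p = nullptr;
    // With the short-circuiting && operator, the right operand is never evaluated
    // when the left operand is false, so the null pointer is never dereferenced.
    if (p != nullptr && is_positive(p)) {
        std::cout << "positive\n";
    }
    // A strict call-by-value function taking both conditions as arguments would
    // evaluate is_positive(p) before the call and dereference the null pointer.
    return 0;
}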
Less common operators include:
- Comma operator: e, f
- Dereference operator: *p and address-of operator: &x
- ?: or ternary operator: number = spell_out_numbers ? "forty-two" : 42
- Elvis operator: x ?: y
- Null coalescing operator: x ?? y
- Spaceship operator (for three-way comparison): x <=> y
- Compound operators combining two or more atomic operations into one to simplify expressions, ease compiler optimizations depending on the underlying hardware implementation, or improve performance for speed or size. An example is the set of compound assignment operators (aka augmented assignments) in C/C++: +=, -=, *=, /=, %=, <<=, >>=, &=, ^=, |= (see the sketch after this list).
Similarly, some digital signal processors provide special opcodes for fused operations like multiply–accumulate (MAC/MAD) or fused multiply–add (FMA) and some high-performance software libraries support functions like cis x = cos x + i sin x to boost processing speed or reduce code size.
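As a small C++ sketch of a compound assignment and its expanded equivalent (variable names are illustrative only):

#include <iostream>

int main() {
    int total = 10;
    total += 5;          // compound assignment: equivalent to total = total + 5,
                         // except that total is evaluated only once
    total <<= 1;         // compound shift: equivalent to total = total << 1
    std::cout << total << '\n';   // prints 30
}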
Compilation
A compiler can implement operators and functions with subroutine calls or with inline code. Some built-in operators supported by a language have a direct mapping to a small number of instructions commonly found on central processing units, though others (e.g. '+' used to express string concatenation) may have complicated implementations.
Operator overloading
In some programming languages an operator may be ad hoc polymorphic, that is, have definitions for more than one kind of data (such as in Java, where the + operator is used both for the addition of numbers and for the concatenation of strings). Such an operator is said to be overloaded. In languages that support operator overloading by the programmer (such as C++) but have a limited set of operators, operator overloading is often used to define customized uses for operators.
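A minimal C++ sketch of programmer-defined overloading, using an illustrative Vec2 type invented for this example:

#include <iostream>

struct Vec2 {
    double x, y;
};

// Overloads the built-in + symbol for a user-defined type.
Vec2 operator+(const Vec2& a, const Vec2& b) {
    return Vec2{a.x + b.x, a.y + b.y};
}

int main() {
    Vec2 u{1.0, 2.0}, v{3.0, 4.0};
    Vec2 w = u + v;   // resolved to operator+(u, v)
    std::cout << w.x << ", " << w.y << '\n';   // prints 4, 6
}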
In the example IF ORDER_DATE > "12/31/2011" AND ORDER_DATE < "01/01/2013" THEN CONTINUE ELSE STOP, the operators are > (greater than), AND, and < (less than).
Operand coercion
Some languages also allow the operands of an operator to be implicitly converted, or coerced, to suitable data types for the operation to occur. For example, in Perl coercion rules lead to 12 + "3.14" producing the result of 15.14. The text "3.14" is converted to the number 3.14 before addition can take place. Further, 12 is an integer and 3.14 is either a floating or fixed-point number (a number that has a decimal place in it), so the integer is then converted to a floating point or fixed-point number respectively.
JavaScript follows the opposite rules: given the same expression, it converts the integer 12 into a string "12", then concatenates the two operands to form "123.14".
In the presence of coercions in a language, the programmer must be aware of the specific rules regarding operand types and the operation result type to avoid subtle programming mistakes.
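In C and C++ a comparable numeric coercion happens through the usual arithmetic conversions, as in this brief sketch (variable names are illustrative only):

#include <iostream>

int main() {
    int count = 12;
    double measured = 3.14;
    // The int operand is implicitly converted to double before the addition,
    // so the result has type double.
    double total = count + measured;
    std::cout << total << '\n';   // prints 15.14
}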