
Saturday, June 23, 2018

Linear equation

From Wikipedia, the free encyclopedia
 
Sample graphs of linear equations in two variables.

In mathematics, a linear equation is an equation that may be put in the form
a_{1}x_{1}+\cdots +a_{n}x_{n}+c=0,
where x_{1},\ldots ,x_{n} are the variables or unknowns, and c,a_{1},\ldots ,a_{n} are the coefficients, which are often real numbers but may be parameters, or even any expression that does not contain the unknowns. In other words, a linear equation is obtained by equating a linear polynomial to zero.

The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true.

The case of one unknown is of particular importance, and the term linear equation often refers implicitly to this case, that is, to an equation that may be written in the form
ax+b=0.
If a ≠ 0, this linear equation has the unique solution
x=-{\frac {b}{a}}.
The solutions of a linear equation in two variables form a line in the Euclidean plane, and every line may be defined as the solutions of a linear equation. This is the origin of the term linear for qualifying this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (of dimension n − 1) in the Euclidean space of dimension n.

Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations.

This article considers the case of a single equation with real coefficients, for which one studies the real solutions. All its content applies for complex solutions and, more generally for linear equations with coefficient and solutions in any field. For the case of several simultaneous linear equations, see System of linear equations.

One variable

A linear equation in one unknown x may always be rewritten as
ax=b.
If a ≠ 0, there is a unique solution
x={\frac {b}{a}}.
If a = 0, then, if b = 0, every number is a solution of the equation, and, if b ≠ 0, there are no solutions (and the equation is said to be inconsistent).
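The three cases above can be sketched as a small function (a minimal illustration; the function name and return convention are my own, not from the source):

```python
def solve_linear(a, b):
    """Solve a*x = b over the reals.

    Returns ("unique", x), ("all reals", None), or ("inconsistent", None).
    """
    if a != 0:
        return ("unique", b / a)      # unique solution x = b/a
    if b == 0:
        return ("all reals", None)    # 0*x = 0: every number is a solution
    return ("inconsistent", None)     # 0*x = b with b != 0: no solution

print(solve_linear(2, 6))   # ('unique', 3.0)
print(solve_linear(0, 0))   # ('all reals', None)
print(solve_linear(0, 5))   # ('inconsistent', None)
```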

Two variables

A common linear equation in two variables x and y is the relation that links the argument and the value of a linear function:
y=mx+y_{0},
where m and y_{0} are real numbers. The graph of such a linear function is thus the set of the solutions of this linear equation, which is a line in the Euclidean plane of slope m and y-intercept y_{0}.

Every linear equation in x and y may be rewritten as
ax+by+c=0,
where a and b are not both zero. The set of solutions forms a line in the Euclidean plane, which is the graph of a linear function if and only if b ≠ 0.

Using the laws of elementary algebra, linear equations in two variables may be rewritten in several standard forms that are described below, which are often referred to as "equations of a line". In what follows, x, y, t, and θ are variables; other letters represent constants (fixed numbers).

General (or standard) form

In the general (or standard[1]) form the linear equation is written as:
Ax+By=C,\,
where A and B are not both equal to zero. The equation is usually written so that A ≥ 0, by convention. The graph of the equation is a straight line, and every straight line can be represented by an equation in the above form. If A is nonzero, then the x-intercept, that is, the x-coordinate of the point where the graph crosses the x-axis (where y is zero), is C/A. If B is nonzero, then the y-intercept, that is, the y-coordinate of the point where the graph crosses the y-axis (where x is zero), is C/B, and the slope of the line is −A/B. The general form is sometimes written as:
ax+by+c=0,\,
where a and b are not both equal to zero. The two versions can be converted from one to the other by moving the constant term to the other side of the equal sign.
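As a sketch of how the standard form Ax + By = C encodes the intercepts and slope described above (the function name and dictionary keys are illustrative, not from the source):

```python
def line_properties(A, B, C):
    """Intercepts and slope of the line Ax + By = C (A, B not both zero)."""
    props = {}
    if A != 0:
        props["x_intercept"] = C / A   # where y = 0
    if B != 0:
        props["y_intercept"] = C / B   # where x = 0
        props["slope"] = -A / B        # vertical lines (B = 0) have no slope
    return props

# 2x + 3y = 6: crosses the x-axis at 3, the y-axis at 2, slope -2/3
print(line_properties(2, 3, 6))
```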

Slope–intercept form

y=mx+b,
where m is the slope of the line and b is the y intercept, which is the y coordinate of the location where the line crosses the y axis. This can be seen by letting x = 0, which immediately gives y = b. It may be helpful to think about this in terms of y = b + mx; where the line passes through the point (0, b) and extends to the left and right at a slope of m. Vertical lines, having undefined slope, cannot be represented by this form.

A corresponding form exists for the x intercept, though it is less-used, since y is conventionally a function of x:
x=ny+a.
Analogously, horizontal lines cannot be represented in this form. If a line is neither horizontal nor vertical, it can be expressed in both these forms, with m\cdot n=1, so m=1/n. Expressing y as a function of x gives the form:
y=m(x-a),
which is equivalent to the polynomial factorization of the y intercept form. This is useful when the x intercept is of more interest than the y intercept. Expanding both forms shows that b=-ma, so a=-b/m, expressing the x intercept in terms of the y intercept and slope, or conversely.

Point–slope form

y-y_{1}=m(x-x_{1}),\,
where m is the slope of the line and (x1,y1) is any point on the line.

The point-slope form expresses the fact that the difference in the y coordinate between two points on a line (that is, y − y1) is proportional to the difference in the x coordinate (that is, x − x1). The proportionality constant is m (the slope of the line).

Two-point form

y-y_{1}={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}(x-x_{1}),\,
where (x1, y1) and (x2, y2) are two points on the line with x2 ≠ x1. This is equivalent to the point-slope form above, where the slope is explicitly given as (y2 − y1)/(x2 − x1).

Multiplying both sides of this equation by (x2 − x1) yields a form of the line generally referred to as the symmetric form:
(x_{2}-x_{1})(y-y_{1})=(y_{2}-y_{1})(x-x_{1}).\,
Expanding the products and regrouping the terms leads to the general form:
x\,(y_{2}-y_{1})-y\,(x_{2}-x_{1})=x_{1}y_{2}-x_{2}y_{1}
Using a determinant, one gets a determinant form, easy to remember:
{\begin{vmatrix}x&y&1\\x_{1}&y_{1}&1\\x_{2}&y_{2}&1\end{vmatrix}}=0\,.
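The general form obtained from two points, via the identity x(y2 − y1) − y(x2 − x1) = x1y2 − x2y1 above, can be sketched as follows (the helper name is hypothetical):

```python
def line_through(p1, p2):
    """Coefficients (a, b, c) with a*x + b*y = c for the line through
    distinct points p1 and p2, read off from
    x*(y2 - y1) - y*(x2 - x1) = x1*y2 - x2*y1."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1, -(x2 - x1), x1 * y2 - x2 * y1)

a, b, c = line_through((1, 1), (3, 5))
# Both defining points satisfy the resulting equation:
assert a * 1 + b * 1 == c
assert a * 3 + b * 5 == c
```

Unlike the two-point form, this sketch also handles vertical lines (x1 = x2), since no division by x2 − x1 occurs.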

Intercept form

{\frac {x}{a}}+{\frac {y}{b}}=1,\,
where a and b must be nonzero. The graph of the equation has x-intercept a and y-intercept b. The intercept form is in standard form with A/C = 1/a and B/C = 1/b. Lines that pass through the origin or which are horizontal or vertical violate the nonzero condition on a or b and cannot be represented in this form.

Matrix form

Using the order of the standard form
Ax+By=C,\,
one can rewrite the equation in matrix form:
{\begin{pmatrix}A&B\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}C\end{pmatrix}}.
Further, this representation extends to systems of linear equations.
A_{1}x+B_{1}y=C_{1},\,
A_{2}x+B_{2}y=C_{2},\,
becomes:
{\begin{pmatrix}A_{1}&B_{1}\\A_{2}&B_{2}\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}C_{1}\\C_{2}\end{pmatrix}}.
Since this extends easily to higher dimensions, it is a common representation in linear algebra and in computer programming. There are standard methods for solving systems of linear equations, such as Gauss–Jordan elimination, which can be expressed as a sequence of elementary row operations on the matrix.
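A minimal sketch of Gauss–Jordan elimination on an augmented matrix, assuming the system has a unique solution (the function name and the partial-pivoting detail are illustrative choices, not from the source):

```python
def gauss_jordan(M):
    """Reduce an augmented matrix M (n rows, n+1 columns) to reduced
    row-echelon form in place and return the solution column.
    Assumes a unique solution exists (nonzero pivots after pivoting)."""
    n = len(M)
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n] for row in M]

# x + 2y = 5, 3x - y = 1  ->  x = 1, y = 2
print(gauss_jordan([[1, 2, 5], [3, -1, 1]]))
```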

Parametric form

x=Tt+U\,
and
y=Vt+W.\,
These are two simultaneous equations in terms of a variable parameter t, with slope m = V / T, x-intercept (VU - WT) / V and y-intercept (WT - VU) / T. This can also be related to the two-point form, where T = p - h, U = h, V = q - k, and W = k:
x=(p-h)t+h\,
and
y=(q-k)t+k.\,
In this case t varies from 0 at point (h,k) to 1 at point (p,q), with values of t between 0 and 1 providing interpolation and other values of t providing extrapolation.
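The interpolation behaviour of the parameter t can be sketched as follows (an illustrative helper, not from the source):

```python
def parametric_point(h, k, p, q, t):
    """Point on the line through (h, k) and (p, q) at parameter t:
    t = 0 gives (h, k), t = 1 gives (p, q), values of t in between
    interpolate, and values outside [0, 1] extrapolate."""
    return ((p - h) * t + h, (q - k) * t + k)

print(parametric_point(0, 0, 4, 2, 0.5))  # midpoint (2.0, 1.0)
print(parametric_point(0, 0, 4, 2, 2))    # extrapolation past (4, 2)
```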

2D vector determinant form

The equation of a line can also be written as the determinant of two vectors. If P_{1} and P_{2} are distinct points on the line, then P will also be a point on the line if the following is true:
\det({\overrightarrow {P_{1}P}},{\overrightarrow {P_{1}P_{2}}})=0.
One way to understand this formula is to use the fact that the determinant of two vectors on the plane gives the area of the parallelogram they form. Therefore, if the determinant equals zero, the parallelogram has no area, which happens exactly when the two vectors are parallel, that is, when P lies on the line through P_{1} and P_{2}.

Writing P_{1}=(x_{1},\,y_{1}), P_{2}=(x_{2},\,y_{2}), and P=(x,\,y), we have {\overrightarrow {P_{1}P}}=(x-x_{1},\,y-y_{1}) and {\overrightarrow {P_{1}P_{2}}}=(x_{2}-x_{1},\,y_{2}-y_{1}), so the above equation becomes:
\det {\begin{pmatrix}x-x_{1}&y-y_{1}\\x_{2}-x_{1}&y_{2}-y_{1}\end{pmatrix}}=0.
Thus,
(x-x_{1})(y_{2}-y_{1})-(y-y_{1})(x_{2}-x_{1})=0.
Ergo,
(x-x_{1})(y_{2}-y_{1})=(y-y_{1})(x_{2}-x_{1}).
Dividing both sides by (x_{2}-x_{1}) recovers the “Two-point form” shown above, but leaving the equation in this form keeps it valid even when x_{1}=x_{2} (a vertical line).
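The determinant condition gives a simple collinearity test; note that it covers the vertical-line case the two-point form cannot (the helper name is my own):

```python
def collinear(p1, p2, p):
    """True if p lies on the line through p1 and p2, via the 2x2
    determinant det(P1P, P1P2) = 0; works even for vertical lines."""
    (x1, y1), (x2, y2), (x, y) = p1, p2, p
    return (x - x1) * (y2 - y1) - (y - y1) * (x2 - x1) == 0

print(collinear((0, 0), (2, 2), (5, 5)))  # True
print(collinear((1, 0), (1, 3), (1, 7)))  # True (vertical line)
print(collinear((0, 0), (2, 2), (1, 2)))  # False
```

With floating-point coordinates, the exact zero test should be replaced by a comparison against a small tolerance.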

Special cases

y=b\,
Horizontal Line y = b
This is a special case of the standard form where A = 0 and B = 1, or of the slope-intercept form where the slope m = 0. The graph is a horizontal line with y-intercept equal to b. There is no x-intercept, unless b = 0, in which case the graph of the line is the x-axis, and so every real number is an x-intercept.
x=a\,
Vertical Line x = a
This is a special case of the standard form where A = 1 and B = 0. The graph is a vertical line with x-intercept equal to a. The slope is undefined. There is no y-intercept, unless a = 0, in which case the graph of the line is the y-axis, and every real number is a y-intercept. This is the only type of straight line which is not the graph of a function (it obviously fails the vertical line test).

Connection with linear functions

A linear equation written in the form y = f(x), whose graph passes through the origin (x,y) = (0,0), that is, whose y-intercept is 0, has the following properties:
  • Additivity: f(x_{1}+x_{2})=f(x_{1})+f(x_{2}),
  • Homogeneity of degree 1: f(ax)=af(x),\,
where a is any scalar. A function which satisfies these properties is called a linear function (or linear operator, or more generally a linear map). However, linear equations that have non-zero y-intercepts, when written in this manner, produce functions which will have neither property above and hence are not linear functions in this sense. They are known as affine functions.

Example

An everyday example of the use of different forms of linear equations is computation of tax with tax brackets. This is commonly done with a progressive tax computation, using either point–slope form or slope–intercept form.

More than two variables

A linear equation can involve more than two variables. Every linear equation in n unknowns may be rewritten as
a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}=b,
where a1, a2, ..., an represent numbers, called the coefficients; x1, x2, ..., xn are the unknowns; and b is called the constant term. When dealing with three or fewer variables, it is common to use x, y and z instead of x1, x2 and x3.

If all the coefficients are zero, then either b ≠ 0 and the equation does not have any solution, or b = 0 and every set of values for the unknowns is a solution.

If at least one coefficient is nonzero, a permutation of the subscripts allows one to suppose a1 ≠ 0 and rewrite the equation as
x_{1}={\frac {b}{a_{1}}}-{\frac {a_{2}}{a_{1}}}x_{2}-\cdots -{\frac {a_{n}}{a_{1}}}x_{n}.
In other words, if ai ≠ 0, one may choose arbitrary values for all the unknowns except xi, and express xi in terms of these values.
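Expressing one unknown in terms of the others, as in the formula above, can be sketched as follows (the helper and its argument convention are illustrative):

```python
def solve_for(i, coeffs, b, values):
    """Given sum_j coeffs[j] * x[j] = b with coeffs[i] != 0, and chosen
    values for every unknown except x[i] (values[i] is ignored),
    return the forced value of x[i]."""
    rest = sum(c * v for j, (c, v) in enumerate(zip(coeffs, values)) if j != i)
    return (b - rest) / coeffs[i]

# 2x + 3y - z = 7 with y = 1, z = 1  ->  x = (7 - 3 + 1) / 2 = 2.5
print(solve_for(0, [2, 3, -1], 7, [None, 1, 1]))
```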

If n = 3, the set of the solutions is a plane in three-dimensional space. More generally, the set of the solutions is an (n − 1)-dimensional hyperplane in an n-dimensional Euclidean space (or affine space if the coefficients are complex numbers or belong to any field).

Differential equation



Visualization of heat transfer in a pump casing, created by solving the heat equation. Heat is being generated internally in the casing and being cooled at the boundary, providing a steady state temperature distribution.

A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.

If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.

History

Differential equations first came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his 1671 work "Methodus fluxionum et Serierum Infinitarum",[1] Isaac Newton listed three kinds of differential equations:
{\frac {dy}{dx}}=f(x)
{\frac {dy}{dx}}=f(x,y)
x_{1}{\frac {\partial y}{\partial x_{1}}}+x_{2}{\frac {\partial y}{\partial x_{2}}}=y
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.

Jacob Bernoulli proposed the Bernoulli differential equation in 1695.[2] This is an ordinary differential equation of the form
y'+P(x)y=Q(x)y^{n}\,
for which the following year Leibniz obtained solutions by simplifying it.[3]

Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange.[4][5][6][7] In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.[8]

The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.

Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.

In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat),[9] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now taught to every student of mathematical physics.

Example

For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.

In some cases, this differential equation (called an equation of motion) may be solved explicitly.

An example of modelling a real world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the acceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
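The falling-ball model above amounts to the differential equation dv/dt = g − kv, which has the exact solution v(t) = (g/k)(1 − e^(−kt)). A rough numerical sketch using Euler's method, checked against that exact solution (parameter values and the function name are illustrative assumptions):

```python
import math

def falling_velocity(g=9.81, k=0.5, t_end=10.0, dt=0.001):
    """Approximate v(t_end) for dv/dt = g - k*v with v(0) = 0
    (gravity minus air resistance proportional to velocity),
    using Euler's method with step dt."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (g - k * v) * dt   # dv = (net acceleration) * dt
        t += dt
    return v

approx = falling_velocity()
exact = (9.81 / 0.5) * (1 - math.exp(-0.5 * 10.0))
print(approx, exact)  # both close to the terminal velocity g/k = 19.62
```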

Types

Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is: Ordinary/Partial, Linear/Non-linear, and Homogeneous/Inhomogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.

Ordinary differential equations

An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.

Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and, in many cases, one may express their solutions in terms of integrals.

Most ODEs that are encountered in physics are linear, and, therefore, most special functions may be defined as solutions of linear differential equations.

As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.

Partial differential equations

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.

PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

Non-linear differential equations

Non-linear differential equations are those that are not linear in the unknown function and its derivatives: products of the unknown function and its derivatives may appear, or they may enter with degree greater than one. There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs, are hard problems, and their resolution in special cases is considered a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[10]

Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.
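This small-amplitude approximation can be illustrated numerically: integrating the pendulum equation θ'' = −(g/L) sin θ and its linearisation θ'' = −(g/L) θ from the same initial angle, the two stay close for small amplitudes and diverge for large ones. A rough sketch with g/L = 1 (the function name, step size, and integration scheme are my own choices):

```python
import math

def pendulum_angle(theta0, t_end, dt=1e-4, linear=False):
    """Angle at time t_end for theta'' = -sin(theta) (or -theta when
    linear=True), starting at rest at theta0, via semi-implicit Euler."""
    theta, omega, t = theta0, 0.0, 0.0
    while t < t_end:
        acc = -(theta if linear else math.sin(theta))
        omega += acc * dt     # update angular velocity first
        theta += omega * dt   # then position, using the new velocity
        t += dt
    return theta

# Small amplitude: the linearised equation tracks the pendulum closely.
small = abs(pendulum_angle(0.1, 2.0) - pendulum_angle(0.1, 2.0, linear=True))
# Large amplitude: the approximation degrades noticeably.
large = abs(pendulum_angle(2.0, 2.0) - pendulum_angle(2.0, 2.0, linear=True))
print(small, large)
```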

Equation order

Differential equations are described by their order, determined by the term with the highest derivatives. An equation containing only first derivatives is a first-order differential equation, an equation containing the second derivative is a second-order differential equation, and so on.[11][12] Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin film equation, which is a fourth order partial differential equation.

Examples

In the first group of examples, let u be an unknown function of x, and let c and ω be known constants. Note that both ordinary and partial differential equations are broadly classified as linear and nonlinear.
  • Inhomogeneous first-order linear constant coefficient ordinary differential equation:
{\frac {du}{dx}}=cu+x^{2}.
  • Homogeneous second-order linear ordinary differential equation:
{\frac {d^{2}u}{dx^{2}}}-x{\frac {du}{dx}}+u=0.
  • Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:
{\frac {d^{2}u}{dx^{2}}}+\omega ^{2}u=0.
  • Inhomogeneous first-order nonlinear ordinary differential equation:
{\frac {du}{dx}}=u^{2}+4.
  • Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L:
L{\frac {d^{2}u}{dx^{2}}}+g\sin u=0.
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
  • Homogeneous first-order linear partial differential equation:
{\frac {\partial u}{\partial t}}+t{\frac {\partial u}{\partial x}}=0.
  • Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:
{\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.
  • Third-order nonlinear partial differential equation, the Korteweg–de Vries equation:
{\frac {\partial u}{\partial t}}=6u{\frac {\partial u}{\partial x}}-{\frac {\partial ^{3}u}{\partial x^{3}}}.

Existence of solutions

Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest.

For first-order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point (a,b) in the xy-plane, define some rectangular region Z such that Z=[l,m]\times [n,p] and (a,b) is in the interior of Z. If we are given a differential equation {\frac {\mathrm {d} y}{\mathrm {d} x}}=g(x,y) and the condition that y=b when x=a, then there is locally a solution to this problem provided g(x,y) is continuous on Z. This solution exists on some interval with its center at a. The solution need not be unique; uniqueness requires stronger hypotheses on g, such as a Lipschitz condition in y (the Picard–Lindelöf theorem). (See Ordinary differential equation for other results.)

However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:
f_{n}(x){\frac {\mathrm {d} ^{n}y}{\mathrm {d} x^{n}}}+\cdots +f_{1}(x){\frac {\mathrm {d} y}{\mathrm {d} x}}+f_{0}(x)y=g(x)
such that
y(x_{0})=y_{0},y'(x_{0})=y'_{0},y''(x_{0})=y''_{0},\cdots
For any nonzero f_{n}(x), if f_{0},f_{1},\cdots ,f_{n} and g are continuous on some interval containing x_{0}, then a unique solution y exists on that interval.[13]

Related concepts

Connection to difference equations

The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.

Applications

The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modelling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.

Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.

Physics

Classical mechanics

So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.

Electrodynamics

Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862.

General relativity

The Einstein field equations (EFE; also known as "Einstein's equations") are a set of ten partial differential equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy.[14] First published by Einstein in 1915[15] as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).[16]

Quantum mechanics

In quantum mechanics, the analogue of Newton's law is Schrödinger's equation (a partial differential equation) for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").[17]

Biology

Predator-prey equations

The Lotka–Volterra equations, also known as the predator–prey equations, are a pair of first-order, non-linear, differential equations frequently used to describe the population dynamics of two species that interact, one as a predator and the other as prey.

Chemistry

The rate law or rate equation for a chemical reaction is a differential equation that links the reaction rate with concentrations or pressures of reactants and constant parameters (normally rate coefficients and partial reaction orders).[18] To determine the rate equation for a particular system one combines the reaction rate with a mass balance for the system.[19] In addition, a range of differential equations are present in the study of thermodynamics and quantum mechanics.

Economics
