
Monday, June 11, 2018

Complex number

From Wikipedia, the free encyclopedia

A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i satisfies i² = −1.

A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers, and i is a solution of the equation x² = −1. Because no real number satisfies this equation, i is called an imaginary number. For the complex number a + bi, a is called the real part, and b is called the imaginary part. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers, and are fundamental in many aspects of the scientific description of the natural world.[1][2]

The complex number system can be defined as the algebraic extension of the ordinary real numbers by an imaginary number i.[3] This means that complex numbers can be added, subtracted, and multiplied, as polynomials in the variable i, with the rule i² = −1 imposed. Furthermore, complex numbers can also be divided by nonzero complex numbers. Overall, the complex number system is a field.

Most importantly, the complex numbers give rise to the fundamental theorem of algebra: every non-constant polynomial equation with complex coefficients has a complex solution. This property holds for the complex numbers but not for the reals. The 16th-century Italian mathematician Gerolamo Cardano is credited with introducing complex numbers in his attempts to find solutions to cubic equations.[4]

Geometrically, complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary; the points for these numbers lie on the vertical axis of the complex plane. A complex number whose imaginary part is zero can be viewed as a real number; its point lies on the horizontal axis of the complex plane. Complex numbers can also be represented in polar form, which associates each complex number with its distance from the origin (its magnitude) and with a particular angle known as the argument of this complex number.

Overview

Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation
(x+1)^{2}=-9\,
has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with an indeterminate i (sometimes called the imaginary unit) that is taken to satisfy the relation i² = −1, so that solutions to equations like the preceding one can be found. In this case the solutions are −1 + 3i and −1 − 3i, as can be verified using the fact that i² = −1:
((-1+3i)+1)^{2}=(3i)^{2}=(3^{2})(i^{2})=9(-1)=-9,
((-1-3i)+1)^{2}=(-3i)^{2}=(-3)^{2}(i^{2})=9(-1)=-9.
According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers.
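The verification above is easy to reproduce numerically. The following sketch uses Python's built-in complex type (the variable names are illustrative, not part of the article):

```python
# Check that -1 + 3i and -1 - 3i both solve (x + 1)^2 = -9.
# Python writes the imaginary unit as j, so -1 + 3i is complex(-1, 3).
roots = [complex(-1, 3), complex(-1, -3)]
for x in roots:
    # (x + 1)^2 collapses to (±3i)^2 = 9 * i^2 = -9
    assert (x + 1) ** 2 == -9
```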

Definition


An illustration of the complex plane. The real part of a complex number z = x + iy is x, and its imaginary part is y.

A complex number is a number of the form a + bi, where a and b are real numbers and i is an indeterminate satisfying i² = −1. For example, 2 + 3i is a complex number.[5]

A complex number may therefore be defined as a polynomial in the single indeterminate i, with the relation i² + 1 = 0 imposed. From this definition, complex numbers can be added or multiplied, using the addition and multiplication for polynomials. Formally, the set of complex numbers is the quotient ring of the polynomial ring in the indeterminate i, by the ideal generated by the polynomial i² + 1 (see below).[6] The set of all complex numbers is denoted by \mathbf {C} (upright bold) or \mathbb {C} (blackboard bold).

The real number a is called the real part of the complex number a + bi; the real number b is called the imaginary part of a + bi. By this convention, the imaginary part does not include a factor of i: hence b, not bi, is the imaginary part.[7][8] The real part of a complex number z is denoted by Re(z) or ℜ(z); the imaginary part of a complex number z is denoted by Im(z) or ℑ(z). For example,
{\displaystyle {\begin{aligned}\operatorname {Re} (2+3i)&=2\\\operatorname {Im} (2+3i)&=3.\end{aligned}}}
A real number a can be regarded as a complex number a + 0i whose imaginary part is 0. A purely imaginary number bi is a complex number 0 + bi whose real part is zero. It is common to write a for a + 0i and bi for 0 + bi. Moreover, when the imaginary part is negative, it is common to write abi with b > 0 instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i.

Cartesian form and definition via ordered pairs

A complex number can thus be identified with an ordered pair (Re(z),Im(z)) in the Cartesian plane, an identification sometimes known as the Cartesian form of z. In fact, a complex number can be defined as an ordered pair (a,b), but then rules for addition and multiplication must also be included as part of the definition (see below).[9] William Rowan Hamilton introduced this approach to define the complex number system.[10]

Complex plane


Figure 1: A complex number z, plotted as a point (red) and position vector (blue) on an Argand diagram; a+bi is its rectangular expression.

A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and imaginary part as vertical (see Figure 1). These two values used to identify a given complex number are therefore called its Cartesian, rectangular, or algebraic form.

A position vector may also be defined in terms of its magnitude and direction relative to the origin. These are emphasized in a complex number's polar form. Using the polar form of the complex number in calculations may lead to a more intuitive interpretation of mathematical results. Notably, the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x-axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating the position vector counterclockwise by a quarter turn (90°) about the origin: (a + bi)i = ai + bi² = −b + ai.
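The quarter-turn effect of multiplying by i can be seen directly with Python's complex arithmetic (a small illustrative check):

```python
# Multiplying by i rotates a point 90° counterclockwise about the origin:
# (a + bi) * i = -b + ai.
z = 3 + 1j            # the point (3, 1)
w = z * 1j            # rotated: should be the point (-1, 3)
assert w == -1 + 3j
# The rotation preserves the distance from the origin.
assert abs(abs(w) - abs(z)) < 1e-12
```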

History in brief

The solution in radicals (without trigonometric functions) of a general cubic equation contains the square roots of negative numbers when all three roots are real numbers, a situation that cannot be rectified by factoring aided by the rational root test if the cubic is irreducible (the so-called casus irreducibilis). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545,[11] though his understanding was rudimentary.

Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root.

Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[12] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.

Notation

Because it is a polynomial in the indeterminate i, a + ib may be written instead of a + bi, which is often expedient when b is a radical.[13] In some disciplines, in particular electromagnetism and electrical engineering, j is used instead of i,[14] since i is frequently used for electric current. In these cases complex numbers are written as a + bj or a + jb.
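Python happens to follow the engineering convention described here: its complex literals use j for the imaginary unit, as this short sketch shows:

```python
# Complex literals in Python use the engineering notation a + bj.
z = 3 + 4j
assert z == complex(3, 4)
assert z.real == 3.0 and z.imag == 4.0
# The unit itself must be written 1j (a bare j is an ordinary identifier).
assert 1j * 1j == -1
```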

Equality and order relations

Two complex numbers are equal if and only if both their real and imaginary parts are equal. That is, complex numbers z_{1} and z_{2} are equal if and only if {\displaystyle \operatorname {Re} (z_{1})=\operatorname {Re} (z_{2})} and {\displaystyle \operatorname {Im} (z_{1})=\operatorname {Im} (z_{2})}. If the complex numbers are written in polar form, they are equal if and only if they have the same argument and the same magnitude.

Because complex numbers are naturally thought of as existing on a two-dimensional plane, there is no natural linear ordering on the set of complex numbers. Furthermore, there is no linear ordering on the complex numbers that is compatible with addition and multiplication – the complex numbers cannot have the structure of an ordered field. This is because any square in an ordered field is at least 0, but i² = −1.

Elementary operations

Conjugate


Geometric representation of z and its conjugate {\bar {z}} in the complex plane

The complex conjugate of the complex number z = x + yi is defined to be xyi. It is denoted by either {\overline {z}} or z*.[15]

Geometrically, {\bar {z}} is the "reflection" of z about the real axis. Conjugating twice gives the original complex number: {\bar {\bar {z}}}=z.

The real and imaginary parts of a complex number z can be extracted using the conjugate:
{\displaystyle \operatorname {Re} (z)={\dfrac {z+{\overline {z}}}{2}},\,}
{\displaystyle \operatorname {Im} (z)={\dfrac {z-{\overline {z}}}{2i}}.\,}
Moreover, a complex number is real if and only if it equals its own conjugate.

Conjugation distributes over the standard arithmetic operations:
{\displaystyle {\overline {z+w}}={\overline {z}}+{\overline {w}},\,}
{\displaystyle {\overline {z-w}}={\overline {z}}-{\overline {w}},\,}
{\displaystyle {\overline {z\cdot w}}={\overline {z}}\cdot {\overline {w}},\,}
{\displaystyle {\overline {z/w}}={\overline {z}}/{\overline {w}}.\,}
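These distribution rules can be spot-checked in Python, whose complex type provides a conjugate() method:

```python
# Conjugation distributes over addition, subtraction, multiplication, division.
z, w = 2 + 3j, 1 - 4j
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z - w).conjugate() == z.conjugate() - w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
assert abs((z / w).conjugate() - z.conjugate() / w.conjugate()) < 1e-12
```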

Addition and subtraction


Addition of two complex numbers can be done geometrically by constructing a parallelogram.

Complex numbers are added by separately adding the real and imaginary parts of the summands. That is to say:
(a+bi)+(c+di)=(a+c)+(b+d)i.\
Similarly, subtraction is defined by
(a+bi)-(c+di)=(a-c)+(b-d)i.\
Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram, three of whose vertices are O, A and B. Equivalently, X is the point such that the triangles with vertices O, A, B, and X, B, A, are congruent.

Multiplication and division

The multiplication of two complex numbers is defined by the following formula:
(a+bi)(c+di)=(ac-bd)+(bc+ad)i.\
In particular, the square of i is −1:
i^{2}=i\times i=-1.\
The preceding definition of multiplication of general complex numbers follows naturally from this fundamental property of i. Indeed, if i is treated as a number so that di means d times i, the above multiplication rule is identical to the usual rule for multiplying two sums of two terms.
(a+bi)(c+di)=ac+bci+adi+bidi (distributive property)
=ac+bidi+bci+adi (commutative property of addition—the order of the summands can be changed)
=ac+bdi^{2}+(bc+ad)i (commutative and distributive properties)
{\displaystyle =(ac-bd)+(bc+ad)i} (fundamental property of i).
The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division. When at least one of c and d is non-zero, we have
{\displaystyle {\frac {a+bi}{c+di}}=\left({ac+bd \over c^{2}+d^{2}}\right)+\left({bc-ad \over c^{2}+d^{2}}\right)i.}
Division can be defined in this way because of the following observation:
{\displaystyle {\frac {a+bi}{c+di}}={\frac {\left(a+bi\right)\cdot \left(c-di\right)}{\left(c+di\right)\cdot \left(c-di\right)}}=\left({ac+bd \over c^{2}+d^{2}}\right)+\left({bc-ad \over c^{2}+d^{2}}\right)i.}
As shown earlier, c − di is the complex conjugate of the denominator c + di. At least one of the real part c and the imaginary part d of the denominator must be nonzero for division to be defined. This is called "rationalization" of the denominator (although the denominator in the final expression might be an irrational real number).
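The rationalization procedure can be written out as a small Python function (the name divide is an illustrative choice) and compared against the built-in operator:

```python
def divide(z, w):
    """(a+bi)/(c+di) via multiplying by the conjugate of the denominator.

    Assumes w != 0, i.e. at least one of c, d is nonzero.
    """
    num = z * w.conjugate()              # (a+bi)(c-di)
    den = (w * w.conjugate()).real       # c^2 + d^2, a positive real number
    return complex(num.real / den, num.imag / den)

z, w = 20 + 30j, 3 + 4j
assert abs(divide(z, w) - z / w) < 1e-12          # agrees with the built-in
assert abs(divide(z, w) - (7.2 + 0.4j)) < 1e-12   # (60+120)/25 + (90-80)/25 i
```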

Reciprocal

The reciprocal of a nonzero complex number z = x + yi is given by
{\displaystyle {\frac {1}{z}}={\frac {\bar {z}}{z{\bar {z}}}}={\frac {\bar {z}}{x^{2}+y^{2}}}={\frac {x}{x^{2}+y^{2}}}-{\frac {y}{x^{2}+y^{2}}}i.}
This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying reflections more general than ones about a line, can also be expressed in terms of complex numbers. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is used.

Square root

The square roots of a + bi (with b ≠ 0) are \pm (\gamma +\delta i), where
\gamma ={\sqrt {\frac {a+{\sqrt {a^{2}+b^{2}}}}{2}}}
and
\delta =\operatorname {sgn}(b){\sqrt {\frac {-a+{\sqrt {a^{2}+b^{2}}}}{2}}},
where sgn is the signum function. This can be seen by squaring \pm (\gamma +\delta i) to obtain a + bi.[16][17] Here {\sqrt {a^{2}+b^{2}}} is called the modulus of a + bi, and the square root sign indicates the square root with non-negative real part, called the principal square root; also {\sqrt {a^{2}+b^{2}}}={\sqrt {z{\bar {z}}}}, where z=a+bi.[18]
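The γ, δ formula translates directly into Python; comparing against cmath.sqrt (the function name sqrt_rect is an illustrative choice) confirms it returns the principal square root:

```python
import cmath
import math

def sqrt_rect(a, b):
    """Principal square root of a + bi with b != 0, via the formula above."""
    m = math.hypot(a, b)                               # modulus sqrt(a^2 + b^2)
    gamma = math.sqrt((a + m) / 2)
    delta = math.copysign(math.sqrt((-a + m) / 2), b)  # sgn(b) * sqrt(...)
    return complex(gamma, delta)

r = sqrt_rect(3, 4)
assert abs(r - (2 + 1j)) < 1e-12            # modulus 5, so gamma = 2, delta = 1
assert abs(r * r - (3 + 4j)) < 1e-12        # squares back to 3 + 4i
assert abs(r - cmath.sqrt(3 + 4j)) < 1e-12  # matches the library routine
```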

Polar form


Figure 2: The argument φ and modulus r locate a point on an Argand diagram; r(\cos \varphi +i\sin \varphi ) or re^{i\varphi } are polar expressions of the point.

Absolute value and argument

An alternative way of defining a point P in the complex plane, other than using the x- and y-coordinates, is to use the distance of the point from O, the point whose coordinates are (0, 0) (the origin), together with the angle subtended between the positive real axis and the line segment OP in a counterclockwise direction. This idea leads to the polar form of complex numbers.

The absolute value (or modulus or magnitude) of a complex number z = x + yi is[19]
\textstyle r=|z|={\sqrt {x^{2}+y^{2}}}.\,
If z is a real number (that is, if y = 0), then r = | x |. That is, the absolute value of a real number equals its absolute value as a complex number.

By Pythagoras' theorem, the absolute value of a complex number is the distance to the origin of the point representing the complex number in the complex plane.

The square of the absolute value is
\textstyle |z|^{2}=z{\bar {z}}=x^{2}+y^{2},\,
where {\bar {z}} is the complex conjugate of z.

The argument of z (in many applications referred to as the "phase") is the angle of the radius OP with the positive real axis, and is written as \arg(z). As with the modulus, the argument can be found from the rectangular form x+yi:[20]
{\displaystyle \varphi =\arg(z)={\begin{cases}\arctan \left({\dfrac {y}{x}}\right)&{\text{if }}x>0\\\arctan \left({\dfrac {y}{x}}\right)+\pi &{\text{if }}x<0{\text{ and }}y\geq 0\\\arctan \left({\dfrac {y}{x}}\right)-\pi &{\text{if }}x<0{\text{ and }}y<0\\{\dfrac {\pi }{2}}&{\text{if }}x=0{\text{ and }}y>0\\-{\dfrac {\pi }{2}}&{\text{if }}x=0{\text{ and }}y<0\\{\text{indeterminate }}&{\text{if }}x=0{\text{ and }}y=0.\end{cases}}}

Visualisation of the square to sixth roots of a complex number z, in polar form re^{iφ} where φ = arg z and r = |z| – if z is real, φ = 0 or π. Principal roots are in black.

Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The value of φ is expressed in radians in this article. It can increase by any integer multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. The polar angle for the complex number 0 is indeterminate, but the arbitrary choice of the angle 0 is common.

The value of φ equals the result of atan2:
{\displaystyle \varphi =\operatorname {atan2} \left(\operatorname {Im} (z),\operatorname {Re} (z)\right).}
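Python exposes atan2 directly, and cmath.phase computes the same quantity; a quick check in the second quadrant, where a naive arctan(y/x) would be off by π:

```python
import cmath
import math

z = -1 + 1j                               # second quadrant, argument 3*pi/4
phi = math.atan2(z.imag, z.real)          # atan2(Im z, Re z)
assert abs(phi - 3 * math.pi / 4) < 1e-12
assert abs(cmath.phase(z) - phi) < 1e-12  # cmath.phase computes the same value
# The naive arctan(y/x) lands in the wrong quadrant, off by pi:
assert abs(math.atan(z.imag / z.real) - (phi - math.pi)) < 1e-12
```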
Together, r and φ give another way of representing complex numbers, the polar form, as the combination of modulus and argument fully specifies the position of a point on the plane. Recovering the original rectangular coordinates from the polar form is done using the trigonometric form
z=r(\cos \varphi +i\sin \varphi ).\,
Using Euler's formula this can be written as
z=re^{i\varphi }.\,
Using the cis function, this is sometimes abbreviated to
z=r\operatorname {cis} \varphi .\,
In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as[21]
z=r\angle \varphi .\,
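These conversions between rectangular and polar form are provided by Python's cmath module (polar and rect), as a brief sketch:

```python
import cmath
import math

z = 1 + 1j
r, phi = cmath.polar(z)                  # modulus and argument
assert abs(r - math.sqrt(2)) < 1e-12
assert abs(phi - math.pi / 4) < 1e-12
# Recover the rectangular form: z = r(cos phi + i sin phi) = r e^{i phi}.
assert abs(cmath.rect(r, phi) - z) < 1e-12
assert abs(r * cmath.exp(1j * phi) - z) < 1e-12
```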

Multiplication and division in polar form


Multiplication of 2 + i (blue triangle) and 3 + i (red triangle). The red triangle is rotated to match the vertex of the blue one and stretched by 5, the length of the hypotenuse of the blue triangle.

Formulas for multiplication, division and exponentiation are simpler in polar form than the corresponding formulas in Cartesian coordinates. Given two complex numbers z1 = r1(cos φ1 + i sin φ1) and z2 = r2(cos φ2 + i sin φ2), because of the well-known trigonometric identities
\cos(a)\cos(b)-\sin(a)\sin(b)=\cos(a+b)
\cos(a)\sin(b)+\sin(a)\cos(b)=\sin(a+b)
we may derive
z_{1}z_{2}=r_{1}r_{2}(\cos(\varphi _{1}+\varphi _{2})+i\sin(\varphi _{1}+\varphi _{2})).\,
In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-turn counter-clockwise, which gives back i² = −1. The picture at the right illustrates the multiplication of
(2+i)(3+i)=5+5i.\,
Since the real and imaginary parts of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula
{\frac {\pi }{4}}=\arctan {\frac {1}{2}}+\arctan {\frac {1}{3}}
holds. As the arctan function can be approximated highly efficiently, formulas like this—known as Machin-like formulas—are used for high-precision approximations of π.

Similarly, division is given by
{\frac {z_{1}}{z_{2}}}={\frac {r_{1}}{r_{2}}}\left(\cos(\varphi _{1}-\varphi _{2})+i\sin(\varphi _{1}-\varphi _{2})\right).
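The multiply-the-moduli, add-the-arguments rule can be verified on the worked example above, (2 + i)(3 + i) = 5 + 5i, with a few lines of Python:

```python
import cmath

z1, z2 = 2 + 1j, 3 + 1j
prod = z1 * z2
assert abs(prod - (5 + 5j)) < 1e-12
# Moduli multiply ...
assert abs(abs(prod) - abs(z1) * abs(z2)) < 1e-12
# ... and arguments add: arctan(1/2) + arctan(1/3) = pi/4.
assert abs(cmath.phase(prod) - (cmath.phase(z1) + cmath.phase(z2))) < 1e-12
```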

Exponentiation

Euler's formula

Euler's formula states that, for any real number x,
e^{ix}=\cos x+i\sin x\ ,
where e is the base of the natural logarithm. This can be proved through induction by observing that
{\begin{aligned}i^{0}&{}=1,\quad &i^{1}&{}=i,\quad &i^{2}&{}=-1,\quad &i^{3}&{}=-i,\\i^{4}&={}1,\quad &i^{5}&={}i,\quad &i^{6}&{}=-1,\quad &i^{7}&{}=-i,\end{aligned}}
and so on, and by considering the Taylor series expansions of e^{ix}, cos x and sin x:
{\begin{aligned}e^{ix}&{}=1+ix+{\frac {(ix)^{2}}{2!}}+{\frac {(ix)^{3}}{3!}}+{\frac {(ix)^{4}}{4!}}+{\frac {(ix)^{5}}{5!}}+{\frac {(ix)^{6}}{6!}}+{\frac {(ix)^{7}}{7!}}+{\frac {(ix)^{8}}{8!}}+\cdots \\[8pt]&{}=1+ix-{\frac {x^{2}}{2!}}-{\frac {ix^{3}}{3!}}+{\frac {x^{4}}{4!}}+{\frac {ix^{5}}{5!}}-{\frac {x^{6}}{6!}}-{\frac {ix^{7}}{7!}}+{\frac {x^{8}}{8!}}+\cdots \\[8pt]&{}=\left(1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+{\frac {x^{8}}{8!}}-\cdots \right)+i\left(x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \right)\\[8pt]&{}=\cos x+i\sin x\ .\end{aligned}}
The rearrangement of terms is justified because each series is absolutely convergent.
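The series manipulation above can be mimicked numerically: summing the Taylor series of e^{ix} term by term and comparing with cos x + i sin x (the partial-sum function name is illustrative):

```python
import cmath

def exp_series(z, terms=30):
    """Partial sum 1 + z + z^2/2! + ... of the exponential series."""
    total, term = 0j, 1 + 0j
    for n in range(1, terms + 1):
        total += term
        term *= z / n          # builds up z^n / n! incrementally
    return total

x = 1.2
approx = exp_series(1j * x)
# The series converges to cos x + i sin x, i.e. Euler's formula.
assert abs(approx - (cmath.cos(x) + 1j * cmath.sin(x))) < 1e-12
assert abs(approx - cmath.exp(1j * x)) < 1e-12
```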

Natural logarithm

It follows from Euler's formula that, for any complex number z written in polar form,
{\displaystyle z=r(\cos \varphi +i\sin \varphi )}
where r is a non-negative real number, one possible value for the complex logarithm of z is
{\displaystyle \ln(z)=\ln(r)+\varphi i.}
Because cosine and sine are periodic functions, other possible values may be obtained. For example, {\displaystyle e^{i\pi }=e^{3i\pi }=-1}, so {\displaystyle i\pi } and {\displaystyle 3i\pi } are both possible values for the natural logarithm of −1.

To deal with the existence of more than one possible value for a given input, the complex logarithm may be considered a multi-valued function, with
{\displaystyle \ln(z)=\left\{\ln(r)+(\varphi +2\pi k)i\;|\;k\in \mathbb {Z} \right\}.}
Alternatively, a branch cut can be used to define a single-valued "branch" of the complex logarithm.
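Python's cmath.log returns the principal branch; the other values differ by integer multiples of 2πi, as a short check illustrates:

```python
import cmath
import math

w = -1 + 0j
principal = cmath.log(w)
assert abs(principal - 1j * math.pi) < 1e-12   # ln(-1) = i*pi, principal branch
# 3*i*pi is another valid logarithm of -1, since exp is 2*pi*i-periodic:
assert abs(cmath.exp(3j * math.pi) - w) < 1e-12
assert abs(cmath.exp(principal + 2j * math.pi) - w) < 1e-12
```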

Integer and fractional exponents

We may use the identity
\ln(a^{b})=b\ln(a)
to define complex exponentiation, which is likewise multi-valued:
{\displaystyle {\begin{aligned}\ln(z^{n})&=\ln((r(\cos \varphi +i\sin \varphi ))^{n})\\&=n\ln(r(\cos \varphi +i\sin \varphi ))\\&=\{n(\ln(r)+(\varphi +k2\pi )i)\ |\ k\in \mathbb {Z} \}\\&=\{n\ln(r)+n\varphi i+nk2\pi i\ |\ k\in \mathbb {Z} \}.\end{aligned}}}
When n is an integer, this simplifies to de Moivre's formula:
z^{n}=(r(\cos \varphi +i\sin \varphi ))^{n}=r^{n}\,(\cos n\varphi +i\sin n\varphi ).
The nth roots of z are given by
{\sqrt[{n}]{z}}={\sqrt[{n}]{r}}\left(\cos \left({\frac {\varphi +2k\pi }{n}}\right)+i\sin \left({\frac {\varphi +2k\pi }{n}}\right)\right)
for any integer k satisfying 0 ≤ k ≤ n − 1. Here {\sqrt[{n}]{r}} is the usual (positive) nth root of the positive real number r. While the nth root of a positive real number r is chosen to be the positive real number c satisfying cⁿ = r, there is no natural way of distinguishing one particular complex nth root of a complex number. Therefore, the nth root of z is considered as a multivalued function (in z), as opposed to a usual function f, for which f(z) is a uniquely defined number. Formulas such as
{\sqrt[{n}]{z^{n}}}=z
(which hold for positive real numbers) do not, in general, hold for complex numbers.
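The k-indexed root formula is straightforward to implement in Python (nth_roots is an illustrative name); for example, the three cube roots of −8:

```python
import cmath
import math

def nth_roots(z, n):
    """All n complex nth roots of z; k = 0 gives the principal root."""
    r, phi = cmath.polar(z)
    return [r ** (1 / n) * cmath.exp(1j * (phi + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-8, 3)
assert all(abs(w ** 3 - (-8)) < 1e-9 for w in roots)  # each cubes to -8
assert any(abs(w - (-2)) < 1e-9 for w in roots)       # the real root -2 appears
# ... yet the principal root (k = 0) is 1 + sqrt(3) i, not -2:
assert abs(roots[0] - (1 + math.sqrt(3) * 1j)) < 1e-9
```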

Properties

Field structure

The set C of complex numbers is a field.[22] Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any complex number z, its additive inverse −z is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for example the law of commutativity of addition and multiplication for any two complex numbers z1 and z2:
z_{1}+z_{2}=z_{2}+z_{1},
z_{1}z_{2}=z_{2}z_{1}.
These two laws and the other requirements on a field can be proven by the formulas given above, using the fact that the real numbers themselves form a field.

Unlike the reals, C is not an ordered field; that is to say, it is not possible to define a relation z1 < z2 that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any nonzero element is necessarily positive, so i² = −1 precludes the existence of an ordering on C.[23]

When the underlying field for a mathematical topic or construct is the field of complex numbers, the topic's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra.

Solutions of polynomial equations

Given any complex numbers (called coefficients) a0, …, an, the equation
a_{n}z^{n}+\dotsb +a_{1}z+a_{0}=0
has at least one complex solution z, provided that at least one of the higher coefficients a1, …, an is nonzero.[24] This is the statement of the fundamental theorem of algebra, due to Carl Friedrich Gauss and Jean le Rond d'Alembert. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x² − 2 does not have a rational root, since √2 is not a rational number) nor for the real numbers R (the polynomial x² + a does not have a real root for a > 0, since the square of any real number x is nonnegative, so x² + a > 0).

There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one real root.

Because of this fact, theorems that hold for any algebraically closed field apply to C. For example, any non-empty complex square matrix has at least one (complex) eigenvalue.

Algebraic characterization

The field C has the following three properties: first, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree over Q, the prime field of C, is the cardinality of the continuum. Third, it is algebraically closed (see above). It can be shown that any field having these properties is isomorphic (as a field) to C. For example, the algebraic closure of Qp also satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields).[25] Also, C is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that C contains many proper subfields that are isomorphic to C.

Characterization as a topological field

The preceding characterization of C describes only the algebraic aspects of C. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of C as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. C contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
  • P is closed under addition, multiplication and taking inverses.
  • If x and y are distinct elements of P, then either xy or yx is in P.
  • If S is any nonempty subset of P, then S + P = x + P for some x in C.
Moreover, C has a nontrivial involutive automorphism xx* (namely the complex conjugation), such that x x* is in P for any nonzero x in C.

Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = { y | p − (yx)(yx)* ∈ P }  as a base, where x ranges over the field and p ranges over P. With this topology F is isomorphic as a topological field to C.

The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex numbers are connected, while the nonzero real numbers are not.[26]

Formal construction

Construction as ordered pairs

The set C of complex numbers can be defined as the set R2 of ordered pairs (a, b) of real numbers, in which the following rules for addition and multiplication are imposed:[27]
{\begin{aligned}(a,b)+(c,d)&=(a+c,b+d)\\(a,b)\cdot (c,d)&=(ac-bd,bc+ad).\end{aligned}}
It is then just a matter of notation to express (a, b) as a + bi.
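Hamilton's ordered-pair definition can be sketched as a tiny Python class (the class name Pair is illustrative); the pair (0, 1) then behaves exactly like i:

```python
class Pair:
    """Ordered pair (a, b) with the addition and multiplication rules above."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        return Pair(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b) * (c, d) = (ac - bd, bc + ad)
        return Pair(self.a * other.a - self.b * other.b,
                    self.b * other.a + self.a * other.b)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

i = Pair(0, 1)
assert i * i == Pair(-1, 0)                      # (0,1) squared is (-1,0): i^2 = -1
assert Pair(2, 3) * Pair(1, -4) == Pair(14, -5)  # matches (2+3i)(1-4i) = 14 - 5i
```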

Construction as a quotient field

Though this low-level construction does accurately describe the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of C more immediately. This characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations that behave as is familiar from, say, rational numbers. For example, the distributive law
(x+y)z=xz+yz
must hold for any three elements x, y and z of a field. The set R of real numbers does form a field. A polynomial p(X) with real coefficients is an expression of the form
a_{n}X^{n}+\dotsb +a_{1}X+a_{0},
where the a0, ..., an are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure. This ring is called the polynomial ring over the real numbers.

The set of complex numbers is defined as the quotient ring R[X]/(X² + 1).[28] This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X² + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. The quotient ring is a field, because (X² + 1) is a prime ideal in R[X], a principal ideal domain, and therefore is a maximal ideal.

The formulas for addition and multiplication in the ring R[X], modulo the relation X² + 1 = 0, correspond to the formulas for addition and multiplication of complex numbers defined as ordered pairs. So the two definitions of the field C are isomorphic (as fields).

Accepting that C is algebraically closed, because it is an algebraic extension of R in this approach, C is therefore the algebraic closure of R.

Matrix representation of complex numbers

Complex numbers a + bi can also be represented by 2 × 2 matrices that have the following form:
{\displaystyle {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}}
Here the entries a and b are real numbers. The sum and product of two such matrices are again of this form, and the sum and product of complex numbers correspond to the sum and product of such matrices, the product being:
{\displaystyle {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}{\begin{pmatrix}c&-d\\d&\;\;c\end{pmatrix}}={\begin{pmatrix}ac-bd&-ad-bc\\bc+ad&\;\;-bd+ac\end{pmatrix}}}
The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix:
{\displaystyle |z|^{2}={\begin{vmatrix}a&-b\\b&a\end{vmatrix}}=a^{2}+b^{2}.}
The conjugate {\overline {z}} corresponds to the transpose of the matrix.

Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than {\bigl (}{\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}}{\bigr )} that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.
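The correspondence can be checked with plain Python lists standing in for 2 × 2 matrices (the helper names to_matrix and matmul are illustrative):

```python
def to_matrix(z):
    """Represent a + bi as the 2x2 matrix [[a, -b], [b, a]]."""
    return [[z.real, -z.imag], [z.imag, z.real]]

def matmul(m, n):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, 1 - 4j
# The matrix product mirrors complex multiplication ...
assert matmul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)
# ... and the determinant equals |z|^2 = a^2 + b^2.
m = to_matrix(z)
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
assert det == z.real ** 2 + z.imag ** 2 == 13.0
```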

Complex analysis


Color wheel graph of sin(1/z). Black parts inside refer to numbers having large absolute values.

The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

Complex exponential and related functions

The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by that of complex numbers. From a more abstract point of view, C, endowed with the metric
\operatorname {d} (z_{1},z_{2})=|z_{1}-z_{2}|\,
is a complete metric space, which notably includes the triangle inequality
|z_{1}+z_{2}|\leq |z_{1}|+|z_{2}|
for any two complex numbers z1 and z2.

Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp(z), also written ez, is defined as the infinite series
\exp(z):=1+z+{\frac {z^{2}}{2\cdot 1}}+{\frac {z^{3}}{3\cdot 2\cdot 1}}+\cdots =\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.
The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change. For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and exponential, or, equivalently, by using the method of analytic continuation.
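As a sanity check, partial sums of these defining series can be compared against Python's built-in `cmath` functions for a complex argument (the number of terms and the test value are arbitrary choices):

```python
import cmath
from math import factorial

# Partial sums of the defining series converge to exp for complex arguments.
def exp_series(z, terms=40):
    return sum(z ** n / factorial(n) for n in range(terms))

z = 1 + 2j
assert abs(exp_series(z) - cmath.exp(z)) < 1e-12

# The sine series carries over to complex arguments in the same way.
def sin_series(z, terms=40):
    return sum((-1) ** n * z ** (2 * n + 1) / factorial(2 * n + 1)
               for n in range(terms))

assert abs(sin_series(z) - cmath.sin(z)) < 1e-12
```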
Euler's formula states:
\exp(i\varphi )=\cos(\varphi )+i\sin(\varphi )
for any real number φ; in particular,
\exp(i\pi )=-1.
Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation
\exp(z)=w
for any complex number w ≠ 0. It can be shown that any such solution z, called a complex logarithm of w, satisfies
\log(w)=\ln |w|+i\arg(w),
where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π,π].
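This multivaluedness can be observed directly with `cmath.log`, which returns the principal value; adding any integer multiple of 2πi gives another valid logarithm (the test value w is arbitrary):

```python
import cmath

# cmath.log returns the principal value: imaginary part in (-pi, pi].
w = -1 + 1j
principal = cmath.log(w)
assert -cmath.pi < principal.imag <= cmath.pi

# log(w) + 2*pi*i*k is a logarithm of w for every integer k.
for k in (-2, -1, 0, 1, 2):
    assert abs(cmath.exp(principal + 2j * cmath.pi * k) - w) < 1e-12
```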

Complex exponentiation zω is defined as
z^{\omega }=\exp(\omega \log z),
and is multi-valued, except when ω is an integer. For ω = 1 / n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above.

Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy
a^{bc}=(a^{b})^{c}.
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
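A concrete principal-value counterexample, using Python's complex power operator (the choice a = −1, b = 2, c = 1/2 is ours):

```python
# Principal-value counterexample to a^(bc) = (a^b)^c.
a = -1 + 0j
b, c = 2, 0.5

lhs = a ** (b * c)        # principal value of a^(bc):  (-1)^1   = -1
rhs = (a ** b) ** c       # principal value of (a^b)^c: (1)^(1/2) = 1

assert abs(lhs - (-1)) < 1e-12
assert abs(rhs - 1) < 1e-12
```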

Holomorphic functions

A function f : CC is called holomorphic if it satisfies the Cauchy–Riemann equations. For example, any R-linear map CC can be written in the form
f(z)=az+b{\overline {z}}
with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand b{\overline {z}} is real-differentiable, but does not satisfy the Cauchy–Riemann equations.
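This directional dependence can be seen numerically: for f(z) = az + b·conj(z) the difference quotient (f(z+h) − f(z))/h equals a + b·conj(h)/h, which depends on the direction of h unless b = 0. A small sketch (the step size and sample values are arbitrary):

```python
def quotient(a, b, z, h):
    """Difference quotient of f(z) = a*z + b*conj(z) at z with step h."""
    f = lambda w: a * w + b * w.conjugate()
    return (f(z + h) - f(z)) / h

z = 1 + 1j
eps = 1e-3

# b = 0: the quotient is the same along the real and imaginary directions.
assert abs(quotient(2 + 1j, 0, z, eps) - quotient(2 + 1j, 0, z, eps * 1j)) < 1e-9

# b != 0: the two directions give different answers (a + b vs a - b),
# so f is not complex-differentiable.
d_real = quotient(2 + 1j, 1, z, eps)       # equals a + b
d_imag = quotient(2 + 1j, 1, z, eps * 1j)  # equals a - b
assert abs(d_real - d_imag) > 1
```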

Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions f and g that agree on an arbitrarily small open subset of C necessarily agree everywhere. Meromorphic functions, functions that can locally be written as f(z)/(zz0)n with a holomorphic function f, still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0.

Applications

Complex numbers have essential concrete applications in a variety of scientific and related areas such as signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some applications of complex numbers are:

Control theory

In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane.

In the root locus method, it is important whether zeros and poles are in the left or right half planes, i.e. have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are
  • in the right half plane, it will be unstable,
  • all in the left half plane, it will be stable,
  • on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.
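The stability criterion reduces to checking the real parts of the poles. A minimal sketch for a hypothetical third-order transfer function H(s) = 1/(s³ + 2s² + 2s + 1), whose denominator factors as (s + 1)(s² + s + 1):

```python
import cmath

def quadratic_roots(b, c):
    """Roots of s^2 + b*s + c = 0."""
    d = cmath.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

# Poles of H(s) = 1 / ((s + 1)(s^2 + s + 1))
poles = [-1 + 0j, *quadratic_roots(1, 1)]

# All poles lie strictly in the left half plane, so this system is stable.
assert all(p.real < 0 for p in poles)
```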

Improper integrals

In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.

Fluid dynamics

In fluid dynamics, complex functions are used to describe potential flow in two dimensions.

Dynamic equations

In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = ert. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form f(t) = rt.

Electromagnetism and electrical engineering

In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus.
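For example, a series RLC circuit has impedance Z = R + j(ωL − 1/(ωC)), a single complex number combining all three elements. A sketch with illustrative, assumed component values:

```python
import math, cmath

# Series RLC impedance Z = R + j*(wL - 1/(wC)); the values are illustrative.
R, L, C = 50.0, 1e-3, 1e-6       # ohms, henries, farads
w = 2 * math.pi * 1e3            # angular frequency for f = 1 kHz

Z = R + 1j * (w * L - 1 / (w * C))

# |Z| is the magnitude; the phase shows whether the circuit is net
# inductive (positive) or net capacitive (negative) at this frequency.
magnitude = abs(Z)
phase = cmath.phase(Z)
assert phase < 0   # here 1/(wC) > wL, so the circuit is net capacitive
```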

In electrical engineering, the imaginary unit is denoted by j to avoid confusion with I, which is generally used to denote electric current, or, more particularly, i, which is generally used to denote instantaneous electric current.

Since the voltage in an AC circuit is oscillating, it can be represented as
V(t)=V_{0}e^{j\omega t}=V_{0}\left(\cos \omega t+j\sin \omega t\right).
To obtain the measurable quantity, the real part is taken:
v(t)=\mathrm {Re} (V)=\mathrm {Re} \left[V_{0}e^{j\omega t}\right]=V_{0}\cos \omega t.
The complex-valued signal V(t) is called the analytic representation of the real-valued, measurable signal v(t).[29]

Signal analysis

Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value | z | of the corresponding z is the amplitude and the argument arg(z) is the phase.

If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex valued functions of the form
x(t)=\operatorname {Re} \{X(t)\}
and
X(t)=Ae^{i\omega t}=ae^{i\phi }e^{i\omega t}=ae^{i(\omega t+\phi )}
where ω represents the angular frequency and the complex number A encodes the phase and amplitude as explained above.

This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.

Another example, relevant to the two side bands of amplitude modulation of AM radio, is:
{\begin{aligned}\cos((\omega +\alpha )t)+\cos \left((\omega -\alpha )t\right)&=\operatorname {Re} \left(e^{i(\omega +\alpha )t}+e^{i(\omega -\alpha )t}\right)\\&=\operatorname {Re} \left((e^{i\alpha t}+e^{-i\alpha t})\cdot e^{i\omega t}\right)\\&=\operatorname {Re} \left(2\cos(\alpha t)\cdot e^{i\omega t}\right)\\&=2\cos(\alpha t)\cdot \operatorname {Re} \left(e^{i\omega t}\right)\\&=2\cos(\alpha t)\cdot \cos \left(\omega t\right)\,.\end{aligned}}
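The chain of equalities above can be checked numerically at a few sample points; the frequencies and times in this sketch are arbitrary:

```python
import math, cmath

# Check cos((w+a)t) + cos((w-a)t) = 2*cos(a*t)*cos(w*t) at sample points.
w, a = 5.0, 0.7
for t in (0.0, 0.3, 1.1, 2.9):
    lhs = math.cos((w + a) * t) + math.cos((w - a) * t)
    rhs = 2 * math.cos(a * t) * math.cos(w * t)
    # The same sum, written as the real part of complex exponentials:
    exp_form = (cmath.exp(1j * (w + a) * t) + cmath.exp(1j * (w - a) * t)).real
    assert abs(lhs - rhs) < 1e-12 and abs(lhs - exp_form) < 1e-12
```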

Quantum mechanics

The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics—the Schrödinger equation and Heisenberg's matrix mechanics—make use of complex numbers.

Relativity

In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.

Geometry

Fractals

Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.

Triangles

Every triangle has a unique Steiner inellipse—an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem:[30][31] Denote the triangle's vertices in the complex plane as a = xA + yAi, b = xB + yBi, and c = xC + yCi. Write the cubic equation (x-a)(x-b)(x-c)=0, take its derivative, and equate the (quadratic) derivative to zero. Marden's Theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.
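The recipe above is a few lines of Python: the cubic (x−a)(x−b)(x−c) has derivative 3x² − 2(a+b+c)x + (ab+bc+ca), a quadratic solvable with `cmath`. The triangle vertices here are an arbitrary choice for illustration:

```python
import cmath

# Foci of the Steiner inellipse via Marden's theorem, for a sample triangle.
a, b, c = 0 + 0j, 4 + 0j, 1 + 3j

# p(x) = (x-a)(x-b)(x-c) has derivative p'(x) = 3x^2 + B*x + C with:
B = -2 * (a + b + c)
C = a * b + b * c + c * a

disc = cmath.sqrt(B * B - 12 * C)
foci = ((-B + disc) / 6, (-B - disc) / 6)

# The foci are the roots of p', and their midpoint is the triangle's
# centroid, which is the center of the Steiner inellipse.
p_prime = lambda x: 3 * x * x + B * x + C
assert all(abs(p_prime(f)) < 1e-9 for f in foci)
assert abs((foci[0] + foci[1]) / 2 - (a + b + c) / 3) < 1e-12
```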

Algebraic number theory


Construction of a regular pentagon using straightedge and compass.

As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to Q̄, the algebraic closure of Q, which also contains all algebraic numbers, C has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem.

Another example is the Gaussian integers, that is, numbers of the form x + iy, where x and y are integers; they can be used to classify sums of squares.

Analytic number theory

Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function ζ(s) is related to the distribution of prime numbers.

History

The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term {\sqrt {81-144}}=3i{\sqrt {7}} in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced the value by its positive ({\sqrt {144-81}}=3{\sqrt {7}}).[32]

The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's formula for a cubic equation of the form x^{3}=px+q[33] gives the solution to the equation x3 = x as
{\frac {1}{\sqrt {3}}}\left(({\sqrt {-1}})^{1/3}+{\frac {1}{({\sqrt {-1}})^{1/3}}}\right).
At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation z3 = i has solutions −i, {\frac {\sqrt {3}}{2}}+{\frac {1}{2}}i and -{\frac {\sqrt {3}}{2}}+{\frac {1}{2}}i. Substituting these in turn for {\sqrt {-1}}^{1/3} in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x3 − x = 0. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations, and he developed the rules for complex arithmetic in trying to resolve these issues.
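These formal calculations are easy to replay in Python's `cmath`: the three cube roots of i are −i, √3/2 + i/2 and −√3/2 + i/2, and substituting each as (√−1)^(1/3) in Tartaglia's expression (1/√3)(r + 1/r) recovers the real roots 0, 1, −1 (the rounding tolerance is arbitrary):

```python
import cmath

# The three cube roots of i.
roots_of_i = [-1j, cmath.sqrt(3) / 2 + 0.5j, -cmath.sqrt(3) / 2 + 0.5j]
assert all(abs(r ** 3 - 1j) < 1e-12 for r in roots_of_i)

# Tartaglia's expression (1/sqrt(3)) * (r + 1/r) for each cube root r
# yields the three real roots of x^3 = x.
values = sorted(round((1 / 3 ** 0.5 * (r + 1 / r)).real, 9) for r in roots_of_i)
assert values == [-1.0, 0.0, 1.0]
```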

The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature[34]
[...] sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine.
([...] quelquefois seulement imaginaires c’est-à-dire que l’on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu’il n’y a quelquefois aucune quantité qui corresponde à celle qu’on imagine.)
A further source of confusion was that the equation {\sqrt {-1}}^{2}={\sqrt {-1}}{\sqrt {-1}}=-1 seemed to be capriciously inconsistent with the algebraic identity {\sqrt {a}}{\sqrt {b}}={\sqrt {ab}}, which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity {\frac {1}{\sqrt {a}}}={\sqrt {\frac {1}{a}}}) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of {\sqrt {-1}} to guard against this mistake.[citation needed] Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra textbook, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout.

In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula:
(\cos \theta +i\sin \theta )^{n}=\cos n\theta +i\sin n\theta .
In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis:
\cos \theta +i\sin \theta =e^{i\theta }
by formally manipulating complex power series and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.

The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's De Algebra tractatus.

Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Carl Friedrich Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology. In the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, Bellavitis.[35]

The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in 'a really confident and scientific way' although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.[36] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.

The common terms used in the theory are chiefly due to the founders. Argand called \cos \phi +i\sin \phi the direction factor, and r={\sqrt {a^{2}+b^{2}}} the modulus; Cauchy (1828) called \cos \phi +i\sin \phi the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for {\sqrt {-1}}, introduced the term complex number for a + bi, and called a2 + b2 the norm. The expression direction coefficient, often used for \cos \phi +i\sin \phi , is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.

Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.

Generalizations and related notions

The process of extending the field R of reals to C is known as the Cayley–Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O which (as a real vector space) are of dimension 4 and 8, respectively. In this context the complex numbers have been called the binarions.[37]

However, just as applying the construction to the reals loses the property of ordering, more properties familiar from real and complex numbers vanish with increasing dimension. The quaternions are not commutative: x·y ≠ y·x for some quaternions x, y. The multiplication of octonions, in addition to not being commutative, fails to be associative: (x·y)·z ≠ x·(y·z) for some octonions x, y, z.

Reals, complex numbers, quaternions and octonions are all normed division algebras over R. However, by Hurwitz's theorem they are the only ones. The next step in the Cayley–Dickson construction, the sedenions, in fact fails to have this structure.

The Cayley–Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis (1, i). This means the following: the R-linear map
\mathbb {C} \rightarrow \mathbb {C} ,z\mapsto wz
for some fixed complex number w can be represented by a 2 × 2 matrix (once a basis has been chosen). With respect to the basis (1, i), this matrix is
{\begin{pmatrix}\operatorname {Re} (w)&-\operatorname {Im} (w)\\\operatorname {Im} (w)&\;\;\operatorname {Re} (w)\end{pmatrix}}
i.e., the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C in the 2 × 2 real matrices, it is not the only one. Any matrix
J={\begin{pmatrix}p&q\\r&-p\end{pmatrix}},\quad p^{2}+qr+1=0
has the property that its square is the negative of the identity matrix: J2 = −I. Then
\{z=aI+bJ:a,b\in \mathbb {R} \}
is also isomorphic to the field C, and gives an alternative complex structure on R2. This is generalized by the notion of a linear complex structure.
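Any such J can be verified directly; a small sketch with an arbitrary choice of p and q (the helper `mat_mul` is ours):

```python
def mat_mul(m, n):
    """Multiply two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p, q = 2.0, 1.0
r = -(p * p + 1) / q          # chosen so that p^2 + q*r + 1 = 0
J = [[p, q], [r, -p]]

# J squares to the negative of the identity matrix, like i does.
J2 = mat_mul(J, J)
neg_identity = [[-1.0, 0.0], [0.0, -1.0]]
assert all(abs(J2[i][j] - neg_identity[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```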

Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x2 − 1) (as opposed to R[x]/(x2 + 1)). In this ring, the equation a2 = 1 has four solutions.

The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields Qp of p-adic numbers (for any prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Qp, by Ostrowski's theorem. The algebraic closures {\overline {\mathbf {Q} _{p}}} of Qp still carry a norm, but (unlike C) are not complete with respect to it. The completion \mathbf {C} _{p} of {\overline {\mathbf {Q} _{p}}} turns out to be algebraically closed. This field is called p-adic complex numbers by analogy.

The fields R and Qp and their finite field extensions, including C, are local fields.

Orthogonality

From Wikipedia, the free encyclopedia

The line segments AB and CD are orthogonal to each other.

In mathematics, orthogonality is the generalization of the notion of perpendicularity to the linear algebra of bilinear forms. Two elements u and v of a vector space with bilinear form B are orthogonal when B(u, v) = 0. Depending on the bilinear form, the vector space may contain nonzero self-orthogonal vectors. In the case of function spaces, families of orthogonal functions are used to form a basis.

By extension, orthogonality is also used to refer to the separation of specific features of a system. The term also has specialized meanings in other fields including art and chemistry.

Etymology

The word comes from the Greek ὀρθός (orthos), meaning "upright", and γωνία (gonia), meaning "angle". The ancient Greek ὀρθογώνιον orthogōnion (< ὀρθός orthos 'upright'[1] + γωνία gōnia 'angle'[2]) and classical Latin orthogonium originally denoted a rectangle.[3] Later, they came to mean a right triangle. In the 12th century, the post-classical Latin word orthogonalis came to mean a right angle or something related to a right angle.[4]

Mathematics and physics

Orthogonality and rotation of coordinate systems compared between left: Euclidean space through circular angle ϕ, right: in Minkowski spacetime through hyperbolic angle ϕ (red lines labelled c denote the worldlines of a light signal, a vector is orthogonal to itself if it lies on this line).[5]

Definitions

  • In geometry, two Euclidean vectors are orthogonal if they are perpendicular, i.e., they form a right angle.
  • Two vectors, x and y, in an inner product space, V, are orthogonal if their inner product \langle x, y \rangle is zero.[6] This relationship is denoted x\perp y.
  • Two vector subspaces, A and B, of an inner product space V, are called orthogonal subspaces if each vector in A is orthogonal to each vector in B. The largest subspace of V that is orthogonal to a given subspace is its orthogonal complement.
  • Given a module M and its dual M∗, an element m′ of M∗ and an element m of M are orthogonal if their natural pairing is zero, i.e. ⟨m′, m⟩ = 0. Two sets S′ ⊆ M∗ and S ⊆ M are orthogonal if each element of S′ is orthogonal to each element of S.[7]
  • A term rewriting system is said to be orthogonal if it is left-linear and is non-ambiguous. Orthogonal term rewriting systems are confluent.
A set of vectors in an inner product space is called pairwise orthogonal if each pairing of them is orthogonal. Such a set is called an orthogonal set.

In certain cases, the word normal is used to mean orthogonal, particularly in the geometric sense as in the normal to a surface. For example, the y-axis is normal to the curve y = x2 at the origin. However, normal may also refer to the magnitude of a vector. In particular, a set is called orthonormal (orthogonal plus normal) if it is an orthogonal set of unit vectors. As a result, use of the term normal to mean "orthogonal" is often avoided. The word "normal" also has a different meaning in probability and statistics.

A vector space with a bilinear form generalizes the case of an inner product. When the bilinear form applied to two vectors results in zero, then they are orthogonal. The case of a pseudo-Euclidean plane uses the term hyperbolic orthogonality. In the diagram, axes x′ and t′ are hyperbolic-orthogonal for any given ϕ.

Euclidean vector spaces

In Euclidean space, two vectors are orthogonal if and only if their dot product is zero, i.e. they make an angle of 90° (π/2 radians), or one of the vectors is zero.[8] Hence orthogonality of vectors is an extension of the concept of perpendicular vectors to spaces of any dimension.

The orthogonal complement of a subspace is the space of all vectors that are orthogonal to every vector in the subspace. In a three-dimensional Euclidean vector space, the orthogonal complement of a line through the origin is the plane through the origin perpendicular to it, and vice versa.[9]

Note that the geometric concept of two planes being perpendicular does not correspond to the orthogonal complement, since in three dimensions a pair of vectors, one from each of a pair of perpendicular planes, might meet at any angle.

In four-dimensional Euclidean space, the orthogonal complement of a line is a hyperplane and vice versa, and that of a plane is a plane.[9]

Orthogonal functions

In integral calculus, it is common to define the inner product of two functions f and g with respect to a nonnegative weight function w over an interval [a, b] by
\langle f, g\rangle_w = \int_a^b f(x)g(x)w(x)\,dx.
In simple cases, w(x) = 1.

We say that functions f and g are orthogonal if their inner product (equivalently, the value of this integral) is zero:
\langle f,g\rangle _{w}=0.
Orthogonality of two functions with respect to one inner product does not imply orthogonality with respect to another inner product.

We write the norm with respect to this inner product as
\|f\|_w = \sqrt{\langle f, f\rangle_w}.
The members of a set of functions {fi : i = 1, 2, 3, ...} are orthogonal with respect to w on the interval [a, b] if
\langle f_{i},f_{j}\rangle _{w}=0\quad {\text{for }}i\neq j.
The members of such a set of functions are orthonormal with respect to w on the interval [a, b] if
\langle f_{i},f_{j}\rangle _{w}=\delta _{i,j},
where
\delta _{i,j}={\begin{cases}1,&i=j\\0,&i\neq j\end{cases}}
is the Kronecker delta. In other words, every pair of them (excluding pairing of a function with itself) is orthogonal, and the norm of each is 1. See in particular the orthogonal polynomials.

Examples

  • The vectors (1, 3, 2)T, (3, −1, 0)T, (1, 3, −5)T are orthogonal to each other, since (1)(3) + (3)(−1) + (2)(0) = 0, (3)(1) + (−1)(3) + (0)(−5) = 0, and (1)(1) + (3)(3) + (2)(−5) = 0.
  • The vectors (1, 0, 1, 0, ...)T and (0, 1, 0, 1, ...)T are orthogonal to each other. The dot product of these vectors is 0. We can then make the generalization to consider the vectors in Z2n:
\mathbf{v}_k = \sum_{i \,:\, ai+k < n} \mathbf{e}_{ai+k}
for some positive integer a, and for 0 ≤ ka − 1, these vectors are orthogonal; for example, (1, 0, 0, 1, 0, 0, 1, 0)T, (0, 1, 0, 0, 1, 0, 0, 1)T, (0, 0, 1, 0, 0, 1, 0, 0)T are orthogonal.
  • The functions 2t + 3 and 45t2 + 9t − 17 are orthogonal with respect to a unit weight function on the interval from −1 to 1:

    \int_{-1}^1 \left(2t+3\right)\left(45t^2+9t-17\right)\,dt = 0
  • The functions 1, sin(nx), cos(nx) : n = 1, 2, 3, ... are orthogonal with respect to Riemann integration on the intervals [0, 2π], [−π, π], or any other closed interval of length 2π. This fact is a central one in Fourier series.
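The polynomial example above can be verified numerically with a simple quadrature rule; this sketch uses composite Simpson's rule (the step count and tolerances are arbitrary):

```python
import math

def simpson(f, a, b, n=200):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

# <2t+3, 45t^2+9t-17> with unit weight on [-1, 1] vanishes.
inner = simpson(lambda t: (2 * t + 3) * (45 * t ** 2 + 9 * t - 17), -1.0, 1.0)
assert abs(inner) < 1e-9

# sin and cos are likewise orthogonal over one full period [0, 2*pi].
inner2 = simpson(lambda t: math.sin(t) * math.cos(t), 0.0, 2 * math.pi)
assert abs(inner2) < 1e-9
```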

Orthogonal polynomials

Orthogonal states in quantum mechanics

  • In quantum mechanics, a sufficient (but not necessary) condition that two eigenstates of a Hermitian operator,  \psi_m and \psi _{n}, are orthogonal is that they correspond to different eigenvalues. This means, in Dirac notation, that  \langle \psi_m | \psi_n \rangle = 0 if  \psi_m and \psi _{n} correspond to different eigenvalues. This follows from the fact that Schrödinger's equation is a Sturm–Liouville equation (in Schrödinger's formulation) or that observables are given by hermitian operators (in Heisenberg's formulation).[citation needed]

Art

In art, the perspective (imaginary) lines pointing to the vanishing point are referred to as "orthogonal lines".

The term "orthogonal line" often has a quite different meaning in the literature of modern art criticism. Many works by painters such as Piet Mondrian and Burgoyne Diller are noted for their exclusive use of "orthogonal lines" — not, however, with reference to perspective, but rather referring to lines that are straight and exclusively horizontal or vertical, forming right angles where they intersect. For example, an essay at the Web site of the Thyssen-Bornemisza Museum states that "Mondrian ... dedicated his entire oeuvre to the investigation of the balance between orthogonal lines and primary colours." [1]

Computer science

Orthogonality in programming language design is the ability to use various language features in arbitrary combinations with consistent results.[10] This usage was introduced by Van Wijngaarden in the design of Algol 68:
The number of independent primitive concepts has been minimized in order that the language be easy to describe, to learn, and to implement. On the other hand, these concepts have been applied “orthogonally” in order to maximize the expressive power of the language while trying to avoid deleterious superfluities.[11]
Orthogonality is a system design property which guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system. Typically this is achieved through the separation of concerns and encapsulation, and it is essential for feasible and compact designs of complex systems. The emergent behavior of a system consisting of components should be controlled strictly by formal definitions of its logic and not by side effects resulting from poor integration, i.e., non-orthogonal design of modules and interfaces. Orthogonality reduces testing and development time because it is easier to verify designs that neither cause side effects nor depend on them.

An instruction set is said to be orthogonal if it lacks redundancy (i.e., there is only a single instruction that can be used to accomplish a given task)[12] and is designed such that instructions can use any register in any addressing mode. This terminology results from considering an instruction as a vector whose components are the instruction fields. One field identifies the registers to be operated upon and another specifies the addressing mode. An orthogonal instruction set uniquely encodes all combinations of registers and addressing modes.[citation needed]

Communications

In communications, multiple-access schemes are orthogonal when an ideal receiver can completely reject arbitrarily strong unwanted signals from the desired signal using different basis functions. One such scheme is TDMA, where the orthogonal basis functions are nonoverlapping rectangular pulses ("time slots").

Another scheme is orthogonal frequency-division multiplexing (OFDM), which refers to the use, by a single transmitter, of a set of frequency multiplexed signals with the exact minimum frequency spacing needed to make them orthogonal so that they do not interfere with each other. Well known examples include (a, g, and n) versions of 802.11 Wi-Fi; WiMAX; ITU-T G.hn, DVB-T, the terrestrial digital TV broadcast system used in most of the world outside North America; and DMT (Discrete Multi Tone), the standard form of ADSL.

In OFDM, the subcarrier frequencies are chosen so that the subcarriers are orthogonal to each other, meaning that crosstalk between the subchannels is eliminated and intercarrier guard bands are not required. This greatly simplifies the design of both the transmitter and the receiver. In conventional FDM, a separate filter for each subchannel is required.

Statistics, econometrics, and economics

When performing statistical analysis, independent variables that affect a particular dependent variable are said to be orthogonal if they are uncorrelated,[13] since the covariance forms an inner product. In this case the same results are obtained for the effect of any of the independent variables upon the dependent variable, regardless of whether one models the effects of the variables individually with simple regression or simultaneously with multiple regression. If correlation is present, the factors are not orthogonal and different results are obtained by the two methods. This usage arises from the fact that if centered by subtracting the expected value (the mean), uncorrelated variables are orthogonal in the geometric sense discussed above, both as observed data (i.e., vectors) and as random variables (i.e., density functions). One econometric formalism that is alternative to the maximum likelihood framework, the Generalized Method of Moments, relies on orthogonality conditions. In particular, the Ordinary Least Squares estimator may be easily derived from an orthogonality condition between the explanatory variables and model residuals.

Taxonomy

In taxonomy, an orthogonal classification is one in which no item is a member of more than one group, that is, the classifications are mutually exclusive.

Combinatorics

In combinatorics, two n×n Latin squares are said to be orthogonal if their superimposition yields all possible n² combinations of entries.[14]
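This condition is easy to check by machine. In the sketch below, the two 3×3 squares are a classic orthogonal pair, and the helper function `orthogonal` (a name chosen here) tests whether superimposition produces all n² ordered pairs:

```python
# Two 3x3 Latin squares: A[i][j] = (i + j) mod 3, B[i][j] = (j - i) mod 3.
A = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
B = [[0, 1, 2],
     [2, 0, 1],
     [1, 2, 0]]

def orthogonal(A, B):
    """True iff superimposing A and B yields all n^2 ordered pairs."""
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

print(orthogonal(A, B))   # True: all 9 pairs occur exactly once
print(orthogonal(A, A))   # False: a square is never orthogonal to itself,
                          # since only the n pairs (k, k) appear
```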

Chemistry and biochemistry

In synthetic organic chemistry, orthogonal protection is a strategy allowing the deprotection of functional groups independently of each other. In chemistry and biochemistry, an orthogonal interaction occurs when there are two pairs of substances and each substance can interact with its respective partner, but does not interact with either substance of the other pair. For example, DNA has two orthogonal pairs: cytosine and guanine form a base pair, and adenine and thymine form another base pair, but other base-pair combinations are strongly disfavored. As a chemical example, tetrazine reacts with trans-cyclooctene and azide reacts with cyclooctyne without any cross-reaction, so these are mutually orthogonal reactions and can be performed simultaneously and selectively.[15] Bioorthogonal chemistry refers to chemical reactions occurring inside living systems without reacting with naturally present cellular components. In supramolecular chemistry, the notion of orthogonality refers to the possibility of two or more supramolecular, often non-covalent, interactions being compatible, each reversibly forming without interference from the others.

In analytical chemistry, analyses are "orthogonal" if they make a measurement or identification in completely different ways, thus increasing the reliability of the measurement. This is often required as a part of a new drug application.

System reliability

In the field of system reliability, orthogonal redundancy is the form of redundancy in which the backup device or method is completely different from the error-prone device or method it protects. The failure mode of an orthogonally redundant backup does not intersect with, and is completely different from, the failure mode of the device or method in need of redundancy, safeguarding the total system against catastrophic failure.

Neuroscience

In neuroscience, a sensory map in the brain which has overlapping stimulus coding (e.g. location and quality) is called an orthogonal map.

Gaming

In board games such as chess, which feature a grid of squares, 'orthogonal' is used to mean "in the same row/'rank' or column/'file'". This is the counterpart to squares which are "diagonally adjacent".[16] In the ancient Chinese board game Go, a player can capture an opponent's stones by occupying all orthogonally adjacent points.

Other examples

Stereo vinyl records encode both the left and right stereo channels in a single groove. The V-shaped groove in the vinyl has walls that are 90 degrees to each other, with variations in each wall separately encoding one of the two analogue channels that make up the stereo signal. The cartridge senses the motion of the stylus following the groove in two orthogonal directions: 45 degrees from vertical to either side.[17] A pure horizontal motion corresponds to a mono signal, equivalent to a stereo signal in which both channels carry identical (in-phase) signals.
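The 45/45 decoding can be sketched numerically (the axis vectors and motions below are illustrative assumptions, not a cartridge specification): projecting the two-dimensional stylus motion onto two orthogonal sensing directions, each 45 degrees from vertical, separates the channels, and pure horizontal motion yields identical, in-phase outputs.

```python
import numpy as np

# Two orthogonal sensing axes, each 45 degrees from vertical, one per wall.
axis_l = np.array([1.0,  1.0]) / np.sqrt(2)
axis_r = np.array([1.0, -1.0]) / np.sqrt(2)
assert np.isclose(axis_l @ axis_r, 0)   # the axes are orthogonal

def decode(motion):
    """Project stylus motion (horizontal, vertical) onto the two axes."""
    return motion @ axis_l, motion @ axis_r

l, r = decode(np.array([1.0, 0.0]))     # pure horizontal (lateral) motion
print(np.isclose(l, r))                 # True: identical channels, i.e. mono
l2, r2 = decode(np.array([0.0, 1.0]))   # pure vertical motion
print(np.isclose(l2, -r2))              # True: out-of-phase channels
```

Because the axes are orthogonal, each channel's projection is unaffected by motion along the other axis, which is exactly why the two walls can carry independent signals in a single groove.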

Alpha Centauri system could have favorable conditions for life


X-ray radiation poses no threat to planets orbiting these two nearby Sun-like stars.
 
[Image: Alpha Centauri is the closest star system to Earth, and it happens to house Sun-like stars. Sitting only 4 light-years, or 25 trillion miles (40 trillion kilometers), away, Chandra found that two of its stars could have favorable conditions for habitable exoplanets. Credit: X-ray: NASA/CXC/University of Colorado/T. Ayres; Optical: Zdeněk Bardon/ESO]
 
The search for habitable exoplanets spans far and wide, pushing the limits of what our modern telescopes are capable of. But rest assured that we aren’t ignoring what’s in our own backyard. Researchers have kept diligent eyes on Alpha Centauri, the closest system to Earth that happens to house Sun-like stars. And now, a comprehensive study published in Research Notes of the AAS clears Alpha Centauri’s two brightest stars of a crucial habitability factor: dangerous X-ray radiation.

In the study, NASA’s Chandra X-ray Observatory has observed the three stars of Alpha Centauri, which sits just 4 light-years from Earth, twice a year since 2005. In an effort to determine the habitability of any planets within their orbits, Chandra monitored the amount of X-ray radiation that each star emitted into its habitable zone. An excess of X-ray radiation can wreak havoc on a planet by eroding its atmosphere, causing harmful effects for potential residents, and creating destructive space weather that could interfere with any technology in use. But thankfully, the potential planets orbiting two of the three stars don’t have to worry about any of that. In fact, these stars might actually create better planetary conditions than our own Sun.

"Because it is relatively close, the Alpha Centauri system is seen by many as the best candidate to explore for signs of life," said the study’s author, Tom Ayres of the University of Colorado Boulder, in a press release. "The question is, will we find planets in an environment conducive to life as we know it?"

The three stars that make up Alpha Centauri aren’t exactly created equal, with some more hospitable to life than others. The two brightest stars in the system are a pair known as Alpha Cen A and Alpha Cen B (AB for short), which orbit each other so closely that Chandra is the only observatory precise enough to differentiate their X-rays. Farther out in the system is Alpha Cen C, known as Proxima, a small red dwarf and the closest star to Earth aside from the Sun. The AB pair are both remarkably similar to our Sun: Alpha Cen A is almost identical in size, brightness, and age, and Alpha Cen B is only slightly smaller and dimmer.
 
[Image: Alpha Cen A and Alpha Cen B might look distinct in this image captured by NASA’s Hubble Space Telescope, but without high-precision instruments, the two Sun-like stars appear as a single bright object in the sky. Credit: ESA/NASA]
 
In regard to X-ray radiation, Alpha Cen A actually provides a safer planetary environment than the Sun, emitting lower doses of X-rays into its habitable zone. Alpha Cen B creates an environment only marginally worse than the Sun’s, releasing only about five times as much X-ray radiation.

"This is very good news for Alpha Cen AB in terms of the ability of possible life on any of their planets to survive radiation bouts from the stars," Ayres said. "Chandra shows us that life should have a fighting chance on planets around either of these stars."

Proxima is a different story, though. It’s a significantly smaller red dwarf that emits about 500 times more X-ray radiation into its habitable zone than Earth receives from the Sun, and it can radiate 50,000 times more during the massive X-ray flares it’s known to hurl into space. While the AB duo’s X-ray radiation isn’t a threat to life, the massive dose expelled by Proxima definitely is.

And as luck would have it, the only exoplanet that’s been identified in Alpha Centauri is orbiting uninhabitable Proxima. Researchers haven’t given up hope, though. They continue to search for exoplanets around the AB pair, although their tight orbit makes it difficult to spot anything in between the two. But even if the search continues to turn up empty, Chandra’s extensive investigation will help researchers study the X-ray radiation patterns of stars similar to our Sun, allowing us to pinpoint any potential threats to Earth. And if we do come across planets orbiting these two stars, we might just find signs of life in our own backyard.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...