Tuesday, March 18, 2025

Gaussian elimination

From Wikipedia, the free encyclopedia
Animation of Gaussian elimination. Red row eliminates the following rows, green rows change their order.

In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:

  • Swapping two rows,
  • Multiplying a row by a nonzero number,
  • Adding a multiple of one row to another row.

Using these operations, a matrix can always be transformed into an upper triangular matrix (possibly bordered by rows or columns of zeros), and in fact one that is in row echelon form. Once all of the leading coefficients (the leftmost nonzero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where two elementary operations on different rows are done at the first and third steps), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form.

Using row operations to convert a matrix into reduced row echelon form is sometimes called Gauss–Jordan elimination. In this case, the term Gaussian elimination refers to the process until it has reached its upper triangular, or (unreduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced.
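
As a minimal sketch (assuming NumPy is available; the matrix and the particular operations are arbitrary choices for illustration), the three elementary row operations listed above correspond to simple array manipulations:

import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])

# Type 1: swap two rows (here rows 0 and 1)
A[[0, 1]] = A[[1, 0]]

# Type 2: multiply a row by a nonzero number
A[0] *= -1.0 / 3.0

# Type 3: add a multiple of one row to another row
A[2] += 2.0 * A[0]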

Definitions and example of algorithm

The process of row reduction makes use of elementary row operations, and can be divided into two parts. The first part (sometimes called forward elimination) reduces a given system to row echelon form, from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called back substitution) continues to use row operations until the solution is found; in other words, it puts the matrix into reduced row echelon form.

Another point of view, which turns out to be very useful to analyze the algorithm, is that row reduction produces a matrix decomposition of the original matrix. The elementary row operations may be viewed as the multiplication on the left of the original matrix by elementary matrices. Alternatively, a sequence of elementary operations that reduces a single row may be viewed as multiplication by a Frobenius matrix. Then the first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.
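
The following sketch (assuming NumPy, no pivoting, and nonzero pivots throughout) illustrates the decomposition viewpoint: recording the multipliers used during forward elimination in a lower triangular matrix L gives a factorization A = L·U, where U is the resulting upper triangular (echelon) matrix.

import numpy as np

def lu_no_pivot(A):
    """Forward elimination that records the multipliers,
    yielding A = L @ U (assumes no zero pivot is encountered)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier used to eliminate U[i, k]
            U[i, k:] -= L[i, k] * U[k, k:]   # type-3 row operation
    return L, U

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))   # True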

Row operations

There are three types of elementary row operations which may be performed on the rows of a matrix:

  1. Interchanging two rows.
  2. Multiplying a row by a non-zero scalar.
  3. Adding a scalar multiple of one row to another.

If the matrix is associated to a system of linear equations, then these operations do not change the solution set. Therefore, if one's goal is to solve a system of linear equations, then using these row operations could make the problem easier.

Echelon form

For each row in a matrix, if the row does not consist of only zeros, then the leftmost nonzero entry is called the leading coefficient (or pivot) of that row. So if two leading coefficients are in the same column, then a row operation of type 3 could be used to make one of those coefficients zero. Then by using the row swapping operation, one can always order the rows so that for every non-zero row, the leading coefficient is to the right of the leading coefficient of the row above. If this is the case, then the matrix is said to be in row echelon form. So the lower left part of the matrix contains only zeros, and all of the zero rows are below the non-zero rows. The word "echelon" is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom.

For example, the following matrix is in row echelon form, with leading coefficients 2 (first row) and 3 (second row):

    [ 0  2  1  -1 ]
    [ 0  0  3   1 ]
    [ 0  0  0   0 ]

It is in echelon form because the zero row is at the bottom, and the leading coefficient of the second row (in the third column) is to the right of the leading coefficient of the first row (in the second column).

A matrix is said to be in reduced row echelon form if furthermore all of the leading coefficients are equal to 1 (which can be achieved by using the elementary row operation of type 2), and in every column containing a leading coefficient, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3).
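
As an illustration (assuming SymPy is available; the matrix is an arbitrary example), the rref method returns the reduced row echelon form together with the indices of the pivot columns:

from sympy import Matrix

M = Matrix([[1,  3,  1,  9],
            [1,  1, -1,  1],
            [3, 11,  5, 35]])

R, pivot_cols = M.rref()   # reduced row echelon form and pivot column indices
print(R)           # Matrix([[1, 0, -2, -3], [0, 1, 1, 4], [0, 0, 0, 0]])
print(pivot_cols)  # (0, 1)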

Example of the algorithm

Suppose the goal is to find and describe the set of solutions to the following system of linear equations:

The table below is the row reduction process applied simultaneously to the system of equations and its associated augmented matrix. In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

Table columns: System of equations | Row operations | Augmented matrix

The matrix is now in echelon form (also called triangular form)

The second column describes which row operations have just been performed. So in the first step, x is eliminated from L2 by adding (3/2)L1 to L2. Next, x is eliminated from L3 by adding L1 to L3. These row operations are labelled in the table as L2 + (3/2)L1 → L2 and L3 + L1 → L3.

Once y is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve for the variables in reverse order, a process known as back-substitution. One sees that the solution is z = −1, y = 3, and x = 2. So there is a unique solution to the original system of equations.
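
Since the equations and the table are not reproduced here, the following sketch (assuming NumPy) uses a system chosen to be consistent with the row operations and the solution stated above; it performs forward elimination followed by back-substitution:

import numpy as np

# Augmented matrix for a system consistent with the steps described above
# (adding (3/2)L1 to L2, adding L1 to L3, solution x = 2, y = 3, z = -1):
#    2x +  y -  z =   8
#   -3x -  y + 2z = -11
#   -2x +  y + 2z =  -3
M = np.array([[ 2.0,  1.0, -1.0,   8.0],
              [-3.0, -1.0,  2.0, -11.0],
              [-2.0,  1.0,  2.0,  -3.0]])

n = 3
# Forward elimination: clear the entries below each pivot
for k in range(n):
    for i in range(k + 1, n):
        M[i] -= (M[i, k] / M[k, k]) * M[k]

# Back substitution: solve for the unknowns in reverse order
x = np.zeros(n)
for i in reversed(range(n)):
    x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]

print(x)   # [ 2.  3. -1.]

The printed result matches the solution stated in the text (x = 2, y = 3, z = −1).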

Instead of stopping once the matrix is in echelon form, one could continue until the matrix is in reduced row echelon form, as it is done in the table. The process of row reducing until the matrix is reduced is sometimes referred to as Gauss–Jordan elimination, to distinguish it from stopping after reaching echelon form.

History

The method of Gaussian elimination appears – albeit without proof – in the Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC. It was commented on by Liu Hui in the 3rd century.

According to Grcar, solution of linear equations by elimination was invented independently in several cultures across Eurasia, starting in antiquity, and in Europe definite examples of the procedure were published already by the late Renaissance (in the 1550s). It is quite possible that even then the procedure was considered by mathematicians elementary and in no need of explanation for professionals, so we may never learn its detailed history except that by that time it was practiced in at least several places in Europe.

The method in Europe stems from the notes of Isaac Newton. In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707 long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s as a result of confusion over the history of the subject.

Some authors use the term Gaussian elimination to refer only to the procedure until the matrix is in echelon form, and use the term Gauss–Jordan elimination to refer to the procedure which ends in reduced echelon form. The name is used because it is a variation of Gaussian elimination as described by Wilhelm Jordan in 1888. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.

Applications

Historically, the first application of the row reduction method is for solving systems of linear equations. Below are some other important applications of the algorithm.

Computing determinants

To explain how Gaussian elimination allows the computation of the determinant of a square matrix, we have to recall how the elementary row operations change the determinant:

  • Swapping two rows multiplies the determinant by −1
  • Multiplying a row by a nonzero scalar multiplies the determinant by the same scalar
  • Adding to one row a scalar multiple of another does not change the determinant.

If Gaussian elimination applied to a square matrix A produces a row echelon matrix B, let d be the product of the scalars by which the determinant has been multiplied, using the above rules. Then the determinant of A is the quotient by d of the product of the elements of the diagonal of B:

    det(A) = (b1,1 · b2,2 ⋯ bn,n) / d,

where bi,i denotes the i-th diagonal entry of B.

Computationally, for an n × n matrix, this method needs only O(n^3) arithmetic operations, while using the Leibniz formula for determinants requires O(n · n!) operations (number of summands in the formula times the number of multiplications in each summand), and recursive Laplace expansion requires O(n · 2^n) operations if the sub-determinants are memoized so that each is computed only once (number of operations in a linear combination times the number of sub-determinants to compute, which are determined by their columns). Even on the fastest computers, these two methods are impractical, or almost so, for n above 20.
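
A minimal sketch of this determinant computation (assuming NumPy; partial pivoting is used, so the only scaling of the determinant here comes from row swaps, each contributing a factor of −1, which is what the quotient by d amounts to in this case):

import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting:
    row swaps flip the sign, type-3 operations leave the determinant
    unchanged, and the result is the signed product of the diagonal."""
    U = A.astype(float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # partial pivoting
        if U[p, k] == 0.0:
            return 0.0                        # singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign                      # each swap multiplies det by -1
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return sign * np.prod(np.diag(U))

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
print(det_by_elimination(A))   # -1.0, matching np.linalg.det(A)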

Finding the inverse of a matrix

A variant of Gaussian elimination called Gauss–Jordan elimination can be used for finding the inverse of a matrix, if it exists. If A is an n × n square matrix, then one can use row reduction to compute its inverse as follows. First, the n × n identity matrix is augmented to the right of A, forming an n × 2n block matrix [A | I]. Now through application of elementary row operations, find the reduced echelon form of this n × 2n matrix. The matrix A is invertible if and only if the left block can be reduced to the identity matrix I; in this case the right block of the final matrix is A−1. If the algorithm is unable to reduce the left block to I, then A is not invertible.

For example, consider the following matrix:

To find the inverse of this matrix, one takes the following matrix augmented by the identity and row-reduces it as a 3 × 6 matrix:

By performing row operations, one can check that the reduced row echelon form of this augmented matrix is

One can think of each row operation as the left product by an elementary matrix. Denoting by B the product of these elementary matrices, we showed, on the left, that BA = I, and therefore, B = A−1. On the right, we kept a record of BI = B, which we know is the inverse desired. This procedure for finding the inverse works for square matrices of any size.
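
A hedged sketch of the [A | I] procedure (assuming NumPy; the 3 × 3 matrix is chosen only for illustration, since the article's own example matrix is not reproduced above):

import numpy as np

def inverse_by_gauss_jordan(A):
    """Row-reduce the augmented block [A | I]; if the left block becomes I,
    the right block is the inverse. Raises if a zero pivot column is met,
    i.e. A is singular."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))      # partial pivoting
        if M[p, k] == 0.0:
            raise ValueError("matrix is not invertible")
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                          # make the pivot equal to 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]           # clear the rest of the column
    return M[:, n:]

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
print(inverse_by_gauss_jordan(A) @ A)   # approximately the identity matrix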

Computing ranks and bases

The Gaussian elimination algorithm can be applied to any m × n matrix A. In this way, for example, some 6 × 9 matrices can be transformed to a matrix that has a row echelon form like

    [ a  *  *  *  *  *  *  *  * ]
    [ 0  0  b  *  *  *  *  *  * ]
    [ 0  0  0  c  *  *  *  *  * ]
    [ 0  0  0  0  0  0  d  *  * ]
    [ 0  0  0  0  0  0  0  0  e ]
    [ 0  0  0  0  0  0  0  0  0 ]

where the stars are arbitrary entries, and a, b, c, d, e are nonzero entries. This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 nonzero rows in T; the vector space spanned by the columns of A has a basis consisting of its columns 1, 3, 4, 7 and 9 (the columns with a, b, c, d, e in T), and the stars show how the other columns of A can be written as linear combinations of the basis columns.

All of this applies also to the reduced row echelon form, which is a particular row echelon form.
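
As a short illustration (assuming SymPy; the matrix is an arbitrary example), the rank and the pivot columns can be read off from the reduced row echelon form:

from sympy import Matrix

A = Matrix([[1, 2, 0,  3],
            [2, 4, 1,  8],
            [3, 6, 1, 11]])

R, pivots = A.rref()
print(A.rank())   # 2: the number of nonzero rows in the echelon form
print(pivots)     # (0, 2): the 1st and 3rd columns of A form a basis of its column space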

Computational efficiency

The number of arithmetic operations required to perform row reduction is one way of measuring the algorithm's computational efficiency. For example, to solve a system of n equations for n unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires n(n + 1)/2 divisions, (2n^3 + 3n^2 − 5n)/6 multiplications, and (2n^3 + 3n^2 − 5n)/6 subtractions, for a total of approximately 2n^3/3 operations. Thus it has an arithmetic complexity (time complexity, where each arithmetic operation takes a unit of time, independently of the size of the inputs) of O(n^3).

This complexity is a good measure of the time needed for the whole computation when the time for each arithmetic operation is approximately constant. This is the case when the coefficients are represented by floating-point numbers or when they belong to a finite field. If the coefficients are integers or rational numbers exactly represented, the intermediate entries can grow exponentially large, so the bit complexity is exponential. However, Bareiss' algorithm is a variant of Gaussian elimination that avoids this exponential growth of the intermediate entries; with the same arithmetic complexity of O(n^3), it has a bit complexity of O(n^5), and has therefore a strongly-polynomial time complexity.

Gaussian elimination and its variants can be used on computers for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods. Specific methods exist for systems whose coefficients follow a regular pattern (see system of linear equations).

Bareiss algorithm

The first strongly-polynomial time algorithm for Gaussian elimination was published by Jack Edmonds in 1967. Independently, and almost simultaneously, Erwin Bareiss discovered another algorithm, based on the following remark, which applies to a division-free variant of Gaussian elimination.

In standard Gaussian elimination, one subtracts from each row R_i below the pivot row R_p the multiple (r_ip / r_pp) · R_p, where r_ip and r_pp are the entries in the pivot column of R_i and R_p, respectively.

Bareiss' variant consists, instead, of replacing R_i with (r_pp · R_i − r_ip · R_p) / q, where q is the pivot entry produced at the previous step (taken to be 1 at the first step). This produces a row echelon form that has the same zero entries as with the standard Gaussian elimination.

Bareiss' main remark is that each matrix entry generated by this variant is the determinant of a submatrix of the original matrix.

In particular, if one starts with integer entries, the divisions occurring in the algorithm are exact divisions resulting in integers. So, all intermediate entries and final entries are integers. Moreover, Hadamard's inequality provides an upper bound on the absolute values of the intermediate and final entries, and thus a bit complexity of Õ(n^5), using soft O notation.

Moreover, as an upper bound on the size of the final entries is known, a complexity of Õ(n^4) can be obtained with modular computation followed either by Chinese remaindering or Hensel lifting.

As a corollary, the following problems can be solved in strongly polynomial time with the same bit complexity:

  • Testing whether m given rational vectors are linearly independent
  • Computing the determinant of a rational matrix
  • Computing a solution of a rational equation system Ax = b
  • Computing the inverse matrix of a nonsingular rational matrix
  • Computing the rank of a rational matrix
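
A minimal sketch of the fraction-free update described above (plain Python, integer matrices only, pivoting omitted; the example matrix is an arbitrary choice):

def bareiss_echelon(A):
    """Fraction-free (Bareiss) elimination on an integer matrix: each row R_i
    below the pivot row R_p is replaced by (r_pp * R_i - r_ip * R_p) / q,
    where q is the previous pivot, and every division is exact."""
    M = [row[:] for row in A]
    n, m = len(M), len(M[0])
    prev_pivot = 1
    for p in range(min(n, m)):
        if M[p][p] == 0:
            continue                   # pivoting is omitted in this sketch
        for i in range(p + 1, n):
            r_ip = M[i][p]
            for j in range(m):
                M[i][j] = (M[p][p] * M[i][j] - r_ip * M[p][j]) // prev_pivot
        prev_pivot = M[p][p]
    return M

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
print(bareiss_echelon(A))   # [[2, 1, -1], [0, 1, 1], [0, 0, -1]]

All intermediate values stay integers, and for a full-rank square matrix handled without pivoting the last pivot produced equals the determinant of A (here −1), reflecting Bareiss' remark that every generated entry is a determinant of a submatrix of the original matrix.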

Numeric instability

One possible problem is numerical instability, caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix, one would need to divide by that number. This means that any error which existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable, when using partial pivoting, even though there are examples of stable matrices for which it is unstable.
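
The following sketch (assuming NumPy; the 2 × 2 system is a standard illustrative choice, not taken from the text) shows how a pivot close to zero destroys the computed solution unless partial pivoting is used:

import numpy as np

def solve_2x2(aug, pivot=True):
    """Tiny Gaussian-elimination solver for a 2x2 augmented system,
    with or without partial pivoting, to illustrate the instability."""
    M = aug.astype(float).copy()
    for k in range(2):
        if pivot:
            p = k + np.argmax(np.abs(M[k:, k]))
            M[[k, p]] = M[[p, k]]
        for i in range(k + 1, 2):
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    x = np.zeros(2)
    for i in (1, 0):
        x[i] = (M[i, 2] - M[i, i + 1:2] @ x[i + 1:]) / M[i, i]
    return x

aug = np.array([[1e-20, 1.0, 1.0],
                [1.0,   1.0, 2.0]])    # exact solution is very close to (1, 1)
print(solve_2x2(aug, pivot=False))   # [0. 1.]  -- the tiny pivot amplifies rounding error
print(solve_2x2(aug, pivot=True))    # [1. 1.]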

Generalizations

Gaussian elimination can be performed over any field, not just the real numbers.

Buchberger's algorithm is a generalization of Gaussian elimination to systems of polynomial equations. This generalization depends heavily on the notion of a monomial order. The choice of an ordering on the variables is already implicit in Gaussian elimination, manifesting as the choice to work from left to right when selecting pivot positions.

Computing the rank of a tensor of order greater than 2 is NP-hard. Therefore, if P ≠ NP, there cannot be a polynomial time analog of Gaussian elimination for higher-order tensors (matrices are array representations of order-2 tensors).

Pseudocode

As explained above, Gaussian elimination transforms a given m × n matrix A into a matrix in row-echelon form.

In the following pseudocode, A[i, j] denotes the entry of the matrix A in row i and column j with the indices starting from 1. The transformation is performed in place, meaning that the original matrix is lost, being eventually replaced by its row-echelon form.

h := 1 /* Initialization of the pivot row */
k := 1 /* Initialization of the pivot column */

while h ≤ m and k ≤ n
    /* Find the k-th pivot: */
    i_max := argmax (i = h ... m, abs(A[i, k]))
    if A[i_max, k] = 0
        /* No pivot in this column, pass to next column */
        k := k + 1
    else
        swap rows(h, i_max)
        /* Do for all rows below pivot: */
        for i = h + 1 ... m:
            f := A[i, k] / A[h, k]
            /* Fill with zeros the lower part of pivot column: */
            A[i, k] := 0
            /* Do for all remaining elements in current row: */
            for j = k + 1 ... n:
                A[i, j] := A[i, j] - A[h, j] * f
        /* Increase pivot row and column */
        h := h + 1
        k := k + 1

This algorithm differs slightly from the one discussed earlier, by choosing a pivot with largest absolute value. Such a partial pivoting may be required if, at the pivot place, the entry of the matrix is zero. In any case, choosing the largest possible absolute value of the pivot improves the numerical stability of the algorithm, when floating point is used for representing numbers.

Upon completion of this procedure the matrix will be in row echelon form and the corresponding system may be solved by back substitution.
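
A possible transcription of the pseudocode into Python (assuming NumPy and a floating-point matrix; variable names follow the pseudocode, with 0-based indices):

import numpy as np

def row_echelon_in_place(A):
    """Direct transcription of the pseudocode above: partial pivoting by
    largest absolute value, then elimination below the pivot."""
    m, n = A.shape
    h, k = 0, 0                                   # pivot row and pivot column
    while h < m and k < n:
        i_max = h + np.argmax(np.abs(A[h:, k]))   # find the k-th pivot
        if A[i_max, k] == 0:
            k += 1                                # no pivot in this column, pass to next column
        else:
            A[[h, i_max]] = A[[i_max, h]]         # swap rows
            for i in range(h + 1, m):             # do for all rows below pivot
                f = A[i, k] / A[h, k]
                A[i, k] = 0                       # fill lower part of pivot column with zeros
                A[i, k + 1:] -= f * A[h, k + 1:]  # update remaining elements in current row
            h += 1                                # increase pivot row and column
            k += 1
    return A

A = np.array([[ 2.0,  1.0, -1.0,   8.0],
              [-3.0, -1.0,  2.0, -11.0],
              [-2.0,  1.0,  2.0,  -3.0]])
print(row_echelon_in_place(A))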

Parallel postulate

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Parallel_postulate
If the sum of the interior angles α and β is less than 180°, the two straight lines, produced indefinitely, meet on that side.

In geometry, the parallel postulate is the fifth postulate in Euclid's Elements and a distinctive axiom in Euclidean geometry. It states that, in two-dimensional geometry:

If a line segment intersects two straight lines forming two interior angles on the same side that are less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.

This postulate does not specifically talk about parallel lines; it is only a postulate related to parallelism. Euclid gave the definition of parallel lines in Book I, Definition 23 just before the five postulates.

Euclidean geometry is the study of geometry that satisfies all of Euclid's axioms, including the parallel postulate.

The postulate was long considered to be obvious or inevitable, but proofs were elusive. Eventually, it was discovered that inverting the postulate gave valid, albeit different geometries. A geometry where the parallel postulate does not hold is known as a non-Euclidean geometry. Geometry that is independent of Euclid's fifth postulate (i.e., only assumes the modern equivalent of the first four postulates) is known as absolute geometry (or sometimes "neutral geometry").

Equivalent properties

Probably the best-known equivalent of Euclid's parallel postulate, contingent on his other postulates, is Playfair's axiom, named after the Scottish mathematician John Playfair, which states:

In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.

This axiom by itself is not logically equivalent to the Euclidean parallel postulate since there are geometries in which one is true and the other is not. However, in the presence of the remaining axioms which give Euclidean geometry, one can be used to prove the other, so they are equivalent in the context of absolute geometry.

Many other statements equivalent to the parallel postulate have been suggested, some of them appearing at first to be unrelated to parallelism, and some seeming so self-evident that they were unconsciously assumed by people who claimed to have proven the parallel postulate from Euclid's other postulates. These equivalent statements include:

  1. There is at most one line that can be drawn parallel to another given one through an external point. (Playfair's axiom)
  2. The sum of the angles in every triangle is 180° (triangle postulate).
  3. There exists a triangle whose angles add up to 180°.
  4. The sum of the angles is the same for every triangle.
  5. There exists a pair of similar, but not congruent, triangles.
  6. Every triangle can be circumscribed.
  7. If three angles of a quadrilateral are right angles, then the fourth angle is also a right angle.
  8. There exists a quadrilateral in which all angles are right angles, that is, a rectangle.
  9. There exists a pair of straight lines that are at constant distance from each other.
  10. Two lines that are parallel to the same line are also parallel to each other.
  11. In a right-angled triangle, the square of the hypotenuse equals the sum of the squares of the other two sides (Pythagoras' theorem).
  12. The law of cosines, a generalization of Pythagoras' theorem.
  13. There is no upper limit to the area of a triangle. (Wallis axiom)
  14. The summit angles of the Saccheri quadrilateral are 90°.
  15. If a line intersects one of two parallel lines, both of which are coplanar with the original line, then it also intersects the other. (Proclus' axiom)

However, the alternatives which employ the word "parallel" cease appearing so simple when one is obliged to explain which of the four common definitions of "parallel" is meant – constant separation, never meeting, same angles where crossed by some third line, or same angles where crossed by any third line – since the equivalence of these four is itself one of the unconsciously obvious assumptions equivalent to Euclid's fifth postulate. In the list above, it is always taken to refer to non-intersecting lines. For example, if the word "parallel" in Playfair's axiom is taken to mean 'constant separation' or 'same angles where crossed by any third line', then it is no longer equivalent to Euclid's fifth postulate, and is provable from the first four (the axiom says 'There is at most one line...', which is consistent with there being no such lines). However, if the definition is taken so that parallel lines are lines that do not intersect, or that have some line intersecting them in the same angles, Playfair's axiom is contextually equivalent to Euclid's fifth postulate and is thus logically independent of the first four postulates. Note that the latter two definitions are not equivalent, because in hyperbolic geometry the second definition holds only for ultraparallel lines.

History

From the beginning, the postulate came under attack as being provable, and therefore not a postulate, and for more than two thousand years, many attempts were made to prove (derive) the parallel postulate using Euclid's first four postulates. The main reason that such a proof was so highly sought after was that, unlike the first four postulates, the parallel postulate is not self-evident. If the order in which the postulates were listed in the Elements is significant, it indicates that Euclid included this postulate only when he realised he could not prove it or proceed without it. Many attempts were made to prove the fifth postulate from the other four, many of them being accepted as proofs for long periods until the mistake was found. Invariably the mistake was assuming some 'obvious' property which turned out to be equivalent to the fifth postulate (Playfair's axiom). Although known from the time of Proclus, this became known as Playfair's Axiom after John Playfair wrote a famous commentary on Euclid in 1795 in which he proposed replacing Euclid's fifth postulate by his own axiom. Today, over two thousand two hundred years later, Euclid's fifth postulate remains a postulate.

Proclus (410–485) wrote a commentary on The Elements where he comments on attempted proofs to deduce the fifth postulate from the other four; in particular, he notes that Ptolemy had produced a false 'proof'. Proclus then goes on to give a false proof of his own. However, he did give a postulate which is equivalent to the fifth postulate.

Ibn al-Haytham (Alhazen) (965–1039), an Arab mathematician, made an attempt at proving the parallel postulate using a proof by contradiction, in the course of which he introduced the concept of motion and transformation into geometry. He formulated the Lambert quadrilateral, which Boris Abramovich Rozenfeld names the "Ibn al-Haytham–Lambert quadrilateral", and his attempted proof contains elements similar to those found in Lambert quadrilaterals and Playfair's axiom.

The Persian mathematician, astronomer, philosopher, and poet Omar Khayyám (1050–1123) attempted to prove the fifth postulate from another explicitly given postulate (based on the fourth of the five principles due to the Philosopher, Aristotle), namely, "Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge." He derived some of the earlier results belonging to elliptical geometry and hyperbolic geometry, though his postulate excluded the latter possibility. The Saccheri quadrilateral was also first considered by Omar Khayyám in the late 11th century in Book I of Explanations of the Difficulties in the Postulates of Euclid. Unlike many commentators on Euclid before and after him (including Giovanni Girolamo Saccheri), Khayyám was not trying to prove the parallel postulate as such but to derive it from his equivalent postulate. He recognized that three possibilities arose from omitting Euclid's fifth postulate: if two perpendiculars to one line cross another line, judicious choice of the last can make the internal angles where it meets the two perpendiculars equal (it is then parallel to the first line). If those equal internal angles are right angles, we get Euclid's fifth postulate; otherwise, they must be either acute or obtuse. He showed that the acute and obtuse cases led to contradictions using his postulate, but his postulate is now known to be equivalent to the fifth postulate.

Nasir al-Din al-Tusi (1201–1274), in his Al-risala al-shafiya'an al-shakk fi'l-khutut al-mutawaziya (Discussion Which Removes Doubt about Parallel Lines) (1250), wrote detailed critiques of the parallel postulate and of Khayyám's attempted proof a century earlier. Nasir al-Din attempted to derive a proof by contradiction of the parallel postulate. He also considered the cases of what are now known as elliptical and hyperbolic geometry, though he ruled out both of them.

Euclidean, elliptical and hyperbolic geometry. The Parallel Postulate is satisfied only for models of Euclidean geometry.

Nasir al-Din's son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), wrote a book on the subject in 1298, based on his father's later thoughts, which presented one of the earliest arguments for a non-Euclidean hypothesis equivalent to the parallel postulate. "He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from the Elements." His work was published in Rome in 1594 and was studied by European geometers. This work marked the starting point for Saccheri's work on the subject which opened with a criticism of Sadr al-Din's work and the work of Wallis.

Giordano Vitale (1633–1711), in his book Euclide restituo (1680, 1686), used the Khayyam-Saccheri quadrilateral to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant. Girolamo Saccheri (1667–1733) pursued the same line of reasoning more thoroughly, correctly obtaining absurdity from the obtuse case (proceeding, like Euclid, from the implicit assumption that lines can be extended indefinitely and have infinite length), but failing to refute the acute case (although he managed to wrongly persuade himself that he had).

In 1766 Johann Lambert wrote, but did not publish, Theorie der Parallellinien in which he attempted, as Saccheri did, to prove the fifth postulate. He worked with a figure that today we call a Lambert quadrilateral, a quadrilateral with three right angles (can be considered half of a Saccheri quadrilateral). He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyám, and then proceeded to prove many theorems under the assumption of an acute angle. Unlike Saccheri, he never felt that he had reached a contradiction with this assumption. He had proved the non-Euclidean result that the sum of the angles in a triangle increases as the area of the triangle decreases, and this led him to speculate on the possibility of a model of the acute case on a sphere of imaginary radius. He did not carry this idea any further.

Where Khayyám and Saccheri had attempted to prove Euclid's fifth by disproving the only possible alternatives, the nineteenth century finally saw mathematicians exploring those alternatives and discovering the logically consistent geometries that result. In 1829, Nikolai Ivanovich Lobachevsky published an account of acute geometry in an obscure Russian journal (later re-published in 1840 in German). In 1831, János Bolyai included, in a book by his father, an appendix describing acute geometry, which, doubtlessly, he had developed independently of Lobachevsky. Carl Friedrich Gauss had also studied the problem, but he did not publish any of his results. Upon hearing of Bolyai's results in a letter from Bolyai's father, Farkas Bolyai, Gauss stated:

If I commenced by saying that I am unable to praise this work, you would certainly be surprised for a moment. But I cannot say otherwise. To praise it would be to praise myself. Indeed the whole contents of the work, the path taken by your son, the results to which he is led, coincide almost entirely with my meditations, which have occupied my mind partly for the last thirty or thirty-five years.

The resulting geometries were later developed by Lobachevsky, Riemann and Poincaré into hyperbolic geometry (the acute case) and elliptic geometry (the obtuse case). The independence of the parallel postulate from Euclid's other axioms was finally demonstrated by Eugenio Beltrami in 1868.

Converse of Euclid's parallel postulate

The converse of the parallel postulate: If the sum of the two interior angles equals 180°, then the lines are parallel and will never intersect.

Euclid did not postulate the converse of his fifth postulate, which is one way to distinguish Euclidean geometry from elliptic geometry. The Elements contains the proof of an equivalent statement (Book I, Proposition 27): If a straight line falling on two straight lines make the alternate angles equal to one another, the straight lines will be parallel to one another. As De Morgan pointed out, this is logically equivalent to the exterior angle theorem (Book I, Proposition 16). These results do not depend upon the fifth postulate, but they do require the second postulate, which is violated in elliptic geometry.

Criticism

Attempts to logically prove the parallel postulate, rather than the eighth axiom, were criticized by Arthur Schopenhauer in The World as Will and Idea. However, the argument used by Schopenhauer was that the postulate is evident by perception, not that it was not a logical consequence of the other axioms.[26]

Decomposition of the parallel postulate

The parallel postulate is equivalent to the conjunction of the Lotschnittaxiom and of Aristotle's axiom. The former states that the perpendiculars to the sides of a right angle intersect, while the latter states that there is no upper bound for the lengths of the distances from the leg of an angle to the other leg. As has been shown, the parallel postulate is equivalent to the conjunction of the following incidence-geometric forms of the Lotschnittaxiom and of Aristotle's axiom:

Given three parallel lines, there is a line that intersects all three of them.

Given a line a and two distinct intersecting lines m and n, each different from a, there exists a line g which intersects a and m, but not n.

The splitting of the parallel postulate into the conjunction of these incidence-geometric axioms is possible only in the presence of absolute geometry.

Wolf–Rayet star

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Wolf%E2%80%93Rayet_star
James Webb Spa...