
Group (mathematics)

From Wikipedia, the free encyclopedia

A Rubik's cube with one side rotated
The manipulations of the Rubik's Cube form the Rubik's Cube group.

In mathematics, a group is a set and an operation that combines any two elements of the set to produce a third element of the set, in such a way that the operation is associative, an identity element exists and every element has an inverse. These three axioms hold for number systems and many other mathematical structures. For example, the integers together with the addition operation form a group. The concept of a group and the axioms that define it were elaborated for handling, in a unified way, essential structural properties of very different mathematical entities such as numbers, geometric shapes and polynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics.

In geometry groups arise naturally in the study of symmetries and geometric transformations: The symmetries of an object form a group, called the symmetry group of the object, and the transformations of a given type form a general group. Lie groups appear in symmetry groups in geometry, and also in the Standard Model of particle physics. The Poincaré group is a Lie group consisting of the symmetries of spacetime in special relativity. Point groups describe symmetry in molecular chemistry.

The concept of a group arose in the study of polynomial equations, starting with Évariste Galois in the 1830s, who introduced the term group (French: groupe) for the symmetry group of the roots of an equation, now called a Galois group. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory.

Definition and illustration

First example: the integers

One of the more familiar groups is the set of integers

ℤ = {..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ...}

together with addition. For any two integers a and b, the sum a + b is also an integer; this closure property says that + is a binary operation on ℤ. The following properties of integer addition serve as a model for the group axioms in the definition below.

  • For all integers a, b and c, one has (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c gives the same final result as adding a to the sum of b and c. This property is known as associativity.
  • If a is any integer, then 0 + a = a and a + 0 = a. Zero is called the identity element of addition because adding it to any integer returns the same integer.
  • For every integer a, there is an integer b such that a + b = 0 and b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a.

The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following definition is developed.
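The three properties above can be spot-checked mechanically. The following Python sketch tests them on random integers; this illustrates, but of course does not prove, that they hold for all integers:

```python
import random

# Spot-check the three group axioms for (Z, +) on random samples.
random.seed(0)
for _ in range(1000):
    a, b, c = (random.randint(-10**6, 10**6) for _ in range(3))
    assert (a + b) + c == a + (b + c)   # associativity
    assert 0 + a == a and a + 0 == a    # 0 is the identity element
    assert a + (-a) == 0 == (-a) + a    # -a is the inverse of a
```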

Definition

The axioms for a group are short and natural... Yet somehow hidden behind these axioms is the monster simple group, a huge and extraordinary mathematical object, which appears to rely on numerous bizarre coincidences to exist. The axioms for groups give no obvious hint that anything like this exists.

Richard Borcherds in Mathematicians: An Outer View of the Inner World

A group is a set G together with a binary operation on G, here denoted "⋅", that combines any two elements a and b of G to form an element of G, denoted a ⋅ b, such that the following three requirements, known as group axioms, are satisfied:

Associativity
For all a, b, c in G, one has (a ⋅ b) ⋅ c = a ⋅ (b ⋅ c).
Identity element
There exists an element e in G such that, for every a in G, one has e ⋅ a = a and a ⋅ e = a.
Such an element is unique (see below). It is called the identity element of the group.
Inverse element
For each a in G, there exists an element b in G such that a ⋅ b = e and b ⋅ a = e, where e is the identity element.
For each a, the element b is unique (see below); it is called the inverse of a and is commonly denoted a⁻¹.
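For a finite set with an explicitly given operation, these axioms can be verified exhaustively. The sketch below is a minimal, illustrative checker (the function is_group and the sample operations are choices made for this example, not a standard API):

```python
from itertools import product

def is_group(elements, op):
    """Exhaustively check the group axioms for a finite set with operation op."""
    elements = list(elements)
    # closure: the operation must land back in the set
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # associativity: (a . b) . c == a . (b . c)
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # identity element
    ids = [e for e in elements
           if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not ids:
        return False
    e = ids[0]
    # every element has an inverse
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)

# the integers modulo 4 form a group under addition, but not under multiplication
assert is_group(range(4), lambda a, b: (a + b) % 4)
assert not is_group(range(4), lambda a, b: (a * b) % 4)   # 0 has no inverse
```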

Notation and terminology

Formally, the group is the ordered pair (G, ⋅) of a set and a binary operation on this set that satisfies the group axioms. The set G is called the underlying set of the group, and the operation ⋅ is called the group operation or the group law.

A group and its underlying set are thus two different mathematical objects. To avoid cumbersome notation, it is common to abuse notation by using the same symbol to denote both. This reflects also an informal way of thinking: that the group is the same as the set except that it has been enriched by additional structure provided by the operation.

For example, consider the set of real numbers ℝ, which has the operations of addition a + b and multiplication ab. Formally, ℝ is a set, (ℝ, +) is a group, and (ℝ, +, ⋅) is a field. But it is common to write ℝ to denote any of these three objects.

The additive group of the field ℝ is the group whose underlying set is ℝ and whose operation is addition. The multiplicative group of the field ℝ is the group whose underlying set is the set of nonzero real numbers ℝ ∖ {0} and whose operation is multiplication.

More generally, one speaks of an additive group whenever the group operation is notated as addition; in this case, the identity is typically denoted 0, and the inverse of an element a is denoted −a. Similarly, one speaks of a multiplicative group whenever the group operation is notated as multiplication; in this case, the identity is typically denoted 1, and the inverse of an element a is denoted a⁻¹. In a multiplicative group, the operation symbol is usually omitted entirely, so that the operation is denoted by juxtaposition, ab instead of a ⋅ b.

The definition of a group does not require that a ⋅ b = b ⋅ a for all elements a and b in G. If this additional condition holds, then the operation is said to be commutative, and the group is called an abelian group. It is a common convention that for an abelian group either additive or multiplicative notation may be used, but for a nonabelian group only multiplicative notation is used.

Several other notations are commonly used for groups whose elements are not numbers. For a group whose elements are functions, the operation is often function composition f ∘ g; then the identity may be denoted id. In the more specific cases of geometric transformation groups, symmetry groups, permutation groups, and automorphism groups, the symbol ∘ is often omitted, as for multiplicative groups. Many other variants of notation may be encountered.

Second example: a symmetry group

Two figures in the plane are congruent if one can be changed into the other using a combination of rotations, reflections, and translations. Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are called symmetries. A square has eight symmetries. These are:

The elements of the symmetry group of the square, D4. Vertices are identified by color or number:
id (keeping the square as it is), r1 (rotation by 90° clockwise), r2 (rotation by 180°), r3 (rotation by 270° clockwise), fv (vertical reflection), fh (horizontal reflection), fd (diagonal reflection), fc (counter-diagonal reflection); in each case the corners are enumerated accordingly.

  • the identity operation leaving everything unchanged, denoted id;
  • rotations of the square around its center by 90°, 180°, and 270° clockwise, denoted by r1, r2 and r3, respectively;
  • reflections about the horizontal and vertical middle line (fh and fv), or through the two diagonals (fd and fc).

These symmetries are functions. Each sends a point in the square to the corresponding point under the symmetry. For example, r1 sends a point to its rotation 90° clockwise around the square's center, and fv sends a point to its reflection across the square's vertical middle line. Composing two of these symmetries gives another symmetry. These symmetries determine a group called the dihedral group of degree four, denoted D4. The underlying set of the group is the above set of symmetries, and the group operation is function composition. Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing first a and then b is written symbolically from right to left as b ∘ a ("apply the symmetry b after performing the symmetry a"). This is the usual notation for composition of functions.

The group table lists the results of all such compositions possible. For example, rotating by 270° clockwise (r3) and then reflecting horizontally (fh) is the same as performing a reflection along the diagonal (fd). Using the above symbols, highlighted in blue in the group table: fh ∘ r3 = fd.

Group table of D4
The elements id, r1, r2, and r3 form a subgroup R, whose group table is highlighted in   red (upper left region). A left and right coset of this subgroup are highlighted in   green (in the last row) and   yellow (last column), respectively. The result of the composition fh ∘ r3, the symmetry fd, is highlighted in   blue (below table center).

Given this set of symmetries and the described operation, the group axioms can be understood as follows.

Binary operation: Composition is a binary operation. That is, a ∘ b is a symmetry for any two symmetries a and b. For example,

r3 ∘ fh = fc,

that is, rotating 270° clockwise after reflecting horizontally equals reflecting along the counter-diagonal (fc). Indeed, every other combination of two symmetries still gives a symmetry, as can be checked using the group table.

Associativity: The associativity axiom deals with composing more than two symmetries: Starting with three elements a, b and c of D4, there are two possible ways of using these three symmetries in this order to determine a symmetry of the square. One of these ways is to first compose a and b into a single symmetry, then to compose that symmetry with c. The other way is to first compose b and c, then to compose the resulting symmetry with a. These two ways must give always the same result, that is,

(a ∘ b) ∘ c = a ∘ (b ∘ c).

For example, (fd ∘ fv) ∘ r2 = fd ∘ (fv ∘ r2) can be checked using the group table:

Identity element: The identity element is id, as it does not change any symmetry a when composed with it either on the left or on the right.

Inverse element: Each symmetry has an inverse: id, the reflections fh, fv, fd, fc and the 180° rotation r2 are their own inverses, because performing them twice brings the square back to its original orientation. The rotations r3 and r1 are each other's inverses, because rotating by 90° and then rotating by 270° (or vice versa) yields a rotation over 360°, which leaves the square unchanged. This is easily verified on the table.

In contrast to the group of integers above, where the order of the operation is immaterial, it does matter in D4, as, for example, fh ∘ r1 = fc but r1 ∘ fh = fd. In other words, D4 is not abelian.
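One convenient concrete model represents each symmetry as a permutation of the four corner positions, so that composition of symmetries becomes composition of permutations. The labeling below is one plausible choice; the exact correspondence between tuples and the names r1, fv, etc. depends on conventions for numbering the corners:

```python
# Each symmetry is a tuple s, where s[i] is the position to which the corner
# currently at position i is sent (corners numbered 0..3 around the square).
ID = (0, 1, 2, 3)
R1 = (1, 2, 3, 0)                      # rotation by 90° clockwise

def compose(b, a):
    """Apply a first, then b: the composition b ∘ a."""
    return tuple(b[a[i]] for i in range(4))

R2 = compose(R1, R1)                   # rotation by 180°
R3 = compose(R2, R1)                   # rotation by 270° clockwise
FV = (1, 0, 3, 2)                      # vertical reflection
FH = (3, 2, 1, 0)                      # horizontal reflection
FD = compose(R1, FH)                   # one diagonal reflection
FC = compose(FH, R1)                   # the other diagonal reflection

D4 = {ID, R1, R2, R3, FV, FH, FD, FC}
assert len(D4) == 8                    # the eight symmetries are distinct
# closure: composing any two symmetries gives a symmetry
assert all(compose(a, b) in D4 for a in D4 for b in D4)
# D4 is not abelian: fh ∘ r1 differs from r1 ∘ fh
assert compose(FH, R1) != compose(R1, FH)
```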

History

The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois's ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation θ^n = 1 (1854) gives the first abstract definition of a finite group.

Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884.

The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss's number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers.

The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870). Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. As of the 20th century, groups gained wide recognition by the pioneering work of Ferdinand Georg Frobenius and William Burnside, who worked on representation theory of finite groups, Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups was studied by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by the work of Armand Borel and Jacques Tits.

The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research concerning this classification proof is ongoing. Group theory remains a highly active mathematical branch, impacting many other fields, as the examples below illustrate.

Elementary consequences of the group axioms

Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of

a ⋅ b ⋅ c = (a ⋅ b) ⋅ c = a ⋅ (b ⋅ c)

generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted.

Individual axioms may be "weakened" to assert only the existence of a left identity and left inverses. From these one-sided axioms, one can prove that the left identity is also a right identity and a left inverse is also a right inverse for the same element. Since they define exactly the same structures as groups, collectively the axioms are no weaker.

Uniqueness of identity element

The group axioms imply that the identity element is unique: If e and f are identity elements of a group, then e = e ⋅ f = f. Therefore, it is customary to speak of the identity.

Uniqueness of inverses

The group axioms also imply that the inverse of each element is unique: If a group element a has both b and c as inverses, then

b = b ⋅ e            since e is the identity element
  = b ⋅ (a ⋅ c)      since c is an inverse of a, so e = a ⋅ c
  = (b ⋅ a) ⋅ c      by associativity, which allows rearranging the parentheses
  = e ⋅ c            since b is an inverse of a, so b ⋅ a = e
  = c                since e is the identity element.

Therefore, it is customary to speak of the inverse of an element.

Division

Given elements a and b of a group G, there is a unique solution x in G to the equation a ⋅ x = b, namely a⁻¹ ⋅ b. (One usually avoids using fraction notation b/a unless G is abelian, because of the ambiguity of whether it means a⁻¹ ⋅ b or b ⋅ a⁻¹.) It follows that for each a in G, the function G → G that maps each x to a ⋅ x is a bijection; it is called left multiplication by a or left translation by a.

Similarly, given a and b, the unique solution to x ⋅ a = b is b ⋅ a⁻¹. For each a, the function G → G that maps each x to x ⋅ a is a bijection called right multiplication by a or right translation by a.
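In a small concrete group this bijectivity is easy to observe. The sketch below, using the group of integers modulo 12 under addition, checks that each left translation permutes the underlying set and that a ⋅ x = b has the stated solution:

```python
n = 12
G = list(range(n))                      # the underlying set of (Z/12Z, +)
op = lambda a, b: (a + b) % n

b = 7
for a in G:
    image = [op(a, x) for x in G]       # left translation x -> a + x
    assert sorted(image) == G           # a bijection: every element hit exactly once
    x = op(n - a, b)                    # the candidate solution x = (-a) + b
    assert op(a, x) == b                # it indeed solves a + x = b
```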

Basic concepts

When studying sets, one uses concepts such as subset, function, and quotient by an equivalence relation. When studying groups, one uses instead subgroups, homomorphisms, and quotient groups. These are the analogues that take the group structure into account.

Group homomorphisms

Group homomorphisms are functions that respect group structure; they may be used to relate two groups. A homomorphism from a group (G, ⋅) to a group (H, ∗) is a function φ : G → H such that

φ(a ⋅ b) = φ(a) ∗ φ(b)

for all elements a and b in G.

It would be natural to require also that φ respect identities, φ(1_G) = 1_H, and inverses, φ(a⁻¹) = φ(a)⁻¹ for all a in G. However, these additional requirements need not be included in the definition of homomorphisms, because they are already implied by the requirement of respecting the group operation.

The identity homomorphism of a group G is the homomorphism id_G : G → G that maps each element of G to itself. An inverse homomorphism of a homomorphism φ : G → H is a homomorphism ψ : H → G such that ψ ∘ φ = id_G and φ ∘ ψ = id_H, that is, such that ψ(φ(g)) = g for all g in G and such that φ(ψ(h)) = h for all h in H. An isomorphism is a homomorphism that has an inverse homomorphism; equivalently, it is a bijective homomorphism. Groups G and H are called isomorphic if there exists an isomorphism φ : G → H. In this case, H can be obtained from G simply by renaming its elements according to the function φ; then any statement true for G is true for H, provided that any specific elements mentioned in the statement are also renamed.
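As a concrete illustration, reduction modulo 12 is a homomorphism from the integers under addition to the integers modulo 12 under addition. The sketch below checks on random samples that it respects the operation, and also that it automatically respects the identity and inverses:

```python
import random

# phi(a) = a mod 12 is a homomorphism from (Z, +) to (Z/12Z, +)
phi = lambda a: a % 12

random.seed(1)
for _ in range(1000):
    a = random.randint(-10**6, 10**6)
    b = random.randint(-10**6, 10**6)
    assert phi(a + b) == (phi(a) + phi(b)) % 12   # respects the operation
assert phi(0) == 0                                # identity maps to identity
assert all(phi(-a) == (-phi(a)) % 12 for a in range(-50, 50))   # and inverses
```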

The collection of all groups, together with the homomorphisms between them, form a category, the category of groups.

Subgroups

Informally, a subgroup is a group H contained within a bigger one, G: it has a subset of the elements of G, with the same operation. Concretely, this means that the identity element of G must be contained in H, and whenever h1 and h2 are both in H, then so are h1 ⋅ h2 and h1⁻¹, so the elements of H, equipped with the group operation on G restricted to H, indeed form a group. In this case, the inclusion map H → G is a homomorphism.

In the example of symmetries of a square, the identity and the rotations constitute a subgroup R = {id, r1, r2, r3}, highlighted in red in the group table of the example: any two rotations composed are still a rotation, and a rotation can be undone by (i.e., is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270°. The subgroup test provides a necessary and sufficient condition for a nonempty subset H of a group G to be a subgroup: it is sufficient to check that g⁻¹ ⋅ h ∈ H for all elements g and h in H. Knowing a group's subgroups is important in understanding the group as a whole.

Given any subset S of a group G, the subgroup generated by S consists of all products of elements of S and their inverses. It is the smallest subgroup of G containing S. In the example of symmetries of a square, the subgroup generated by r2 and fv consists of these two elements, the identity element id, and the element fh = fv ∘ r2. Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup.
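In a finite group, the subgroup generated by a subset can be computed by repeatedly closing under the operation and inverses. A minimal sketch (the helper generated_subgroup is written for this example), using the additive group of integers modulo 12:

```python
def generated_subgroup(gens, op, inv):
    """Smallest subset containing gens that is closed under op and inverses."""
    H = set(gens)
    frontier = set(gens)
    while frontier:
        new = set()
        for h in frontier:
            new.add(inv(h))
            for k in H:
                new.add(op(h, k))
                new.add(op(k, h))
        frontier = new - H
        H |= new
    return H

# in (Z/12Z, +), the element 8 generates the subgroup {0, 4, 8}
n = 12
H = generated_subgroup({8}, lambda a, b: (a + b) % n, lambda a: (-a) % n)
assert H == {0, 4, 8}
```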

An injective homomorphism φ : G′ → G factors canonically as an isomorphism followed by an inclusion, G′ ≅ H → G, for some subgroup H of G. Injective homomorphisms are the monomorphisms in the category of groups.

Cosets

In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in the symmetry group of a square, once any reflection is performed, rotations alone cannot return the square to its original position, so one can think of the reflected positions of the square as all being equivalent to each other, and as inequivalent to the unreflected positions; the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroup H determines left and right cosets, which can be thought of as translations of H by an arbitrary group element g. In symbolic terms, the left and right cosets of H, containing an element g, are

gH = {g ⋅ h : h ∈ H} and Hg = {h ⋅ g : h ∈ H}, respectively.

The left cosets of any subgroup H form a partition of G; that is, the union of all left cosets is equal to G and two left cosets are either equal or have an empty intersection. The first case happens precisely when g1⁻¹ ⋅ g2 ∈ H, i.e., when the two elements differ by an element of H. Similar considerations apply to the right cosets of H. The left cosets of H may or may not be the same as its right cosets. If they are (that is, if all g in G satisfy gH = Hg), then H is said to be a normal subgroup.

In D4, the group of symmetries of a square, with its subgroup R of rotations, the left cosets gR are either equal to R, if g is an element of R itself, or otherwise equal to U = fcR = {fc, fd, fv, fh} (highlighted in green in the group table of D4). The subgroup R is normal, because fcR = U = Rfc and similarly for the other elements of the group. (In fact, in the case of D4, the cosets generated by reflections are all equal: fhR = fvR = fdR = fcR.)
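The partition property of cosets is easy to verify in a small example. Below, the left cosets of the subgroup {0, 4, 8} in the additive group of integers modulo 12 are computed and checked to partition the group:

```python
n = 12
G = set(range(n))
H = {0, 4, 8}                                   # a subgroup of (Z/12Z, +)

# the left coset g + H for every g in G
cosets = {frozenset((g + h) % n for h in H) for g in G}

assert len(cosets) == 4                         # four distinct cosets
assert set().union(*cosets) == G                # their union is all of G
assert sum(len(c) for c in cosets) == n         # pairwise disjoint: sizes add up
```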

Quotient groups

Suppose that N is a normal subgroup of a group G, and

G/N = {gN : g ∈ G}

denotes its set of cosets. Then there is a unique group law on G/N for which the map G → G/N sending each element g to gN is a homomorphism. Explicitly, the product of two cosets gN and hN is (gh)N, the coset eN = N serves as the identity of G/N, and the inverse of gN in the quotient group is (gN)⁻¹ = (g⁻¹)N. The group G/N, read as "G modulo N", is called a quotient group or factor group. The quotient group can alternatively be characterized by a universal property.

Group table of the quotient group D4/R

The elements of the quotient group D4/R are R and U = fvR. The group operation on the quotient is shown in the table. For example, U ⋅ U = fvR ⋅ fvR = (fv ⋅ fv)R = R. Both the subgroup R = {id, r1, r2, r3} and the quotient D4/R are abelian, but D4 is not. Sometimes a group can be reconstructed from a subgroup and quotient (plus some additional data), by the semidirect product construction; D4 is an example.
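The key point that the coset product does not depend on the chosen representatives can be checked exhaustively in a small abelian example (where every subgroup is normal), again with the integers modulo 12:

```python
n = 12
N = {0, 4, 8}                           # a (normal) subgroup of (Z/12Z, +)
coset = lambda g: frozenset((g + h) % n for h in N)

# the product (g1 + N) + (g2 + N) = (g1 + g2) + N is well defined:
# changing the representatives g1, g2 by elements of N gives the same coset
for g1 in range(n):
    for g2 in range(n):
        for h1 in N:
            for h2 in N:
                assert coset((g1 + h1) + (g2 + h2)) == coset(g1 + g2)

# the quotient has 12 / 3 = 4 elements; it is isomorphic to (Z/4Z, +)
assert len({coset(g) for g in range(n)}) == 4
```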

The first isomorphism theorem implies that any surjective homomorphism factors canonically as a quotient homomorphism followed by an isomorphism: . Surjective homomorphisms are the epimorphisms in the category of groups.

Presentations

Every group is isomorphic to a quotient of a free group, in many ways.

For example, the dihedral group D4 is generated by the right rotation r1 and the reflection fv in a vertical line (every element of D4 is a finite product of copies of these and their inverses). Hence there is a surjective homomorphism φ from the free group ⟨r, f⟩ on two generators to D4 sending r to r1 and f to fv. Elements in ker φ are called relations; examples include r⁴, f², (r ⋅ f)². In fact, it turns out that ker φ is the smallest normal subgroup of ⟨r, f⟩ containing these three elements; in other words, all relations are consequences of these three. The quotient of the free group by this normal subgroup is denoted ⟨r, f ∣ r⁴ = f² = (r ⋅ f)² = 1⟩. This is called a presentation of D4 by generators and relations, because the first isomorphism theorem for φ yields an isomorphism ⟨r, f ∣ r⁴ = f² = (r ⋅ f)² = 1⟩ → D4.

A presentation of a group can be used to construct the Cayley graph, a graphical depiction of a discrete group.

Examples and applications

A periodic wallpaper
A periodic wallpaper pattern gives rise to a wallpaper group.
 
A circle is shrunk to a point, another one does not completely shrink because a hole inside prevents this.
The fundamental group of a plane minus a point (bold) consists of loops around the missing point. This group is isomorphic to the integers.

Examples and applications of groups abound. A starting point is the group of integers ℤ with addition as group operation, introduced above. If multiplication instead of addition is considered, one obtains multiplicative groups. These groups are predecessors of important constructions in abstract algebra.

Groups are also applied in many other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity translate into properties of groups. For example, elements of the fundamental group are represented by loops. The second image shows some loops in a plane minus a point. The blue loop is considered null-homotopic (and thus irrelevant), because it can be continuously shrunk to a point. The presence of the hole prevents the orange loop from being shrunk to a point. The fundamental group of the plane with a point deleted turns out to be infinite cyclic, generated by the orange loop (or any other loop winding once around the hole). This way, the fundamental group detects the hole.

In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background. In a similar vein, geometric group theory employs geometric concepts, for example in the study of hyperbolic groups. Further branches crucially applying groups include algebraic geometry and number theory.

In addition to the above theoretical applications, many practical applications of groups exist. Cryptography relies on the combination of the abstract group theory approach together with algorithmical knowledge obtained in computational group theory, in particular when implemented for finite groups. Applications of group theory are not restricted to mathematics; sciences such as physics, chemistry and computer science benefit from the concept.

Numbers

Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups.

Integers

The group of integers under addition, denoted (ℤ, +), has been described above. The integers, with the operation of multiplication instead of addition, do not form a group. The associativity and identity axioms are satisfied, but inverses do not exist: for example, a = 2 is an integer, but the only solution to the equation a ⋅ b = 1 in this case is b = 1/2, which is a rational number, but not an integer. Hence not every element of ℤ has a (multiplicative) inverse.

Rationals

The desire for the existence of multiplicative inverses suggests considering fractions

a/b.

Fractions of integers (with b nonzero) are known as rational numbers. The set of all such irreducible fractions is commonly denoted ℚ. There is still a minor obstacle for (ℚ, ⋅), the rationals with multiplication, being a group: because zero does not have a multiplicative inverse (i.e., there is no x such that x ⋅ 0 = 1), (ℚ, ⋅) is still not a group.

However, the set of all nonzero rational numbers ℚ ∖ {0} = {q ∈ ℚ : q ≠ 0} does form an abelian group under multiplication, also denoted ℚ^×. Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse of a/b is b/a, therefore the axiom of the inverse element is satisfied.

The rational numbers (including zero) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and – if division by other than zero is possible, such as in ℚ – fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities.

Modular arithmetic

The clock hand points to 9 o'clock; 4 hours later it is at 1 o'clock.
The hours on a clock form a group that uses addition modulo 12. Here, 9 + 4 ≡ 1.

Modular arithmetic for a modulus n defines any two elements a and b that differ by a multiple of n to be equivalent, denoted by a ≡ b (mod n). Every integer is equivalent to one of the integers from 0 to n − 1, and the operations of modular arithmetic modify normal arithmetic by replacing the result of any operation by its equivalent representative. Modular addition, defined in this way for the integers from 0 to n − 1, forms a group, denoted ℤ_n or (ℤ/nℤ, +), with 0 as the identity element and n − a as the inverse element of a.

A familiar example is addition of hours on the face of a clock, where 12 rather than 0 is chosen as the representative of the identity. If the hour hand is on 9 and is advanced 4 hours, it ends up on 1, as shown in the illustration. This is expressed by saying that 9 + 4 is congruent to 1 "modulo 12" or, in symbols, 9 + 4 ≡ 1 (mod 12).

For any prime number p, there is also the multiplicative group of integers modulo p. Its elements can be represented by 1 to p − 1. The group operation, multiplication modulo p, replaces the usual product by its representative, the remainder of division by p. For example, for p = 5, the four group elements can be represented by 1, 2, 3, 4. In this group, 4 ⋅ 4 ≡ 1 mod 5, because the usual product 16 is equivalent to 1: when divided by 5 it yields a remainder of 1. The primality of p ensures that the usual product of two representatives is not divisible by p, and therefore that the modular product is nonzero. The identity element is represented by 1, and associativity follows from the corresponding property of the integers. Finally, the inverse element axiom requires that given an integer a not divisible by p, there exists an integer b such that

a ⋅ b ≡ 1 (mod p),

that is, such that p evenly divides a ⋅ b − 1. The inverse b can be found by using Bézout's identity and the fact that the greatest common divisor gcd(a, p) equals 1. In the case p = 5 above, the inverse of the element represented by 4 is that represented by 4, and the inverse of the element represented by 3 is represented by 2, as 3 ⋅ 2 = 6 ≡ 1 (mod 5). Hence all group axioms are fulfilled. This example is similar to (ℚ ∖ {0}, ⋅) above: it consists of exactly those elements in the ring ℤ/pℤ that have a multiplicative inverse. These groups, denoted (ℤ/pℤ)^×, are crucial to public-key cryptography.
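The modular inverse described above can be computed with the extended Euclidean algorithm, which produces the Bézout coefficients. A sketch follows (the helper inverse_mod is written for this example; Python 3.8+ also offers the built-in pow(a, -1, p)):

```python
def inverse_mod(a, p):
    """Multiplicative inverse of a modulo p via the extended Euclidean
    algorithm: gcd(a, p) = 1 gives Bezout coefficients x, y with x*a + y*p = 1,
    so x is the inverse of a modulo p."""
    old_r, r = a % p, p
    old_x, x = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    assert old_r == 1, "a must not be divisible by the prime p"
    return old_x % p

# in the multiplicative group modulo 5: 4 is its own inverse, and 3 * 2 = 6 = 1 mod 5
assert inverse_mod(4, 5) == 4
assert inverse_mod(3, 5) == 2
assert all(a * inverse_mod(a, 5) % 5 == 1 for a in range(1, 5))
```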

Cyclic groups

A hexagon whose corners are located regularly on a circle
The 6th complex roots of unity form a cyclic group. z is a primitive element, but z² is not, because the odd powers of z are not a power of z².

A cyclic group is a group all of whose elements are powers of a particular element a. In multiplicative notation, the elements of the group are

..., a⁻³, a⁻², a⁻¹, a⁰, a, a², a³, ...,

where a² means a ⋅ a, and a⁻³ stands for a⁻¹ ⋅ a⁻¹ ⋅ a⁻¹ = (a ⋅ a ⋅ a)⁻¹, etc. Such an element a is called a generator or a primitive element of the group. In additive notation, the requirement for an element a to be primitive is that each element of the group can be written as

..., (−a) + (−a), −a, 0, a, a + a, ....

In the groups (ℤ/nℤ, +) introduced above, the element 1 is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are 1. Any cyclic group with n elements is isomorphic to this group. A second example for cyclic groups is the group of nth complex roots of unity, given by complex numbers z satisfying z^n = 1. These numbers can be visualized as the vertices on a regular n-gon, as shown in blue in the image for n = 6. The group operation is multiplication of complex numbers. In the picture, multiplying with z corresponds to a counter-clockwise rotation by 60°. From field theory, the group (ℤ/pℤ)^× is cyclic for prime p: for example, if p = 5, 3 is a generator since 3¹ = 3, 3² = 9 ≡ 4, 3³ ≡ 2, and 3⁴ ≡ 1.
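Whether a given element generates the whole multiplicative group modulo p can be tested by listing its powers. A small sketch for p = 5, matching the example above:

```python
p = 5
group = set(range(1, p))               # the multiplicative group modulo 5

def powers(a):
    """All distinct powers of a modulo p."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * a) % p
    return seen

# 3 generates the whole group: its powers are 3, 9 = 4, 27 = 2, 81 = 1 modulo 5
assert powers(3) == group
# 4 does not: its powers cycle through {1, 4} only
assert powers(4) == {1, 4}
```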

Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element a, all the powers of a are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to (Z, +), the group of integers under addition introduced above. As these two prototypes are both abelian, so are all cyclic groups.

The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian.

Symmetry groups

The (2,3,7) triangle group, a hyperbolic reflection group, acts on this tiling of the hyperbolic plane

Symmetry groups are groups consisting of symmetries of given mathematical objects, principally geometric entities, such as the symmetry group of the square given as an introductory example above, although they also arise in algebra such as the symmetries among the roots of polynomial equations dealt with in Galois theory (see below). Conceptually, group theory can be thought of as the study of symmetry. Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. A group is said to act on another mathematical object X if every group element can be associated to some operation on X and the composition of these operations follows the group law. For example, an element of the (2,3,7) triangle group acts on a triangular tiling of the hyperbolic plane by permuting the triangles. By a group action, the group pattern is connected to the structure of the object being acted on.

In chemical fields, such as crystallography, space groups and point groups describe molecular symmetries and crystal symmetries. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification of quantum mechanical analysis of these properties. For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved.

Group theory helps predict the changes in physical properties that occur when a material undergoes a phase transition, for example, from a cubic to a tetrahedral crystalline form. An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the transition.

Such spontaneous symmetry breaking has found further application in elementary particle physics, where its occurrence is related to the appearance of Goldstone bosons.

Buckminsterfullerene displays icosahedral symmetry. Ammonia, NH3, has a symmetry group of order 6, generated by a 120° rotation and a reflection. Cubane C8H8 features octahedral symmetry. The tetrachloroplatinate(II) ion, [PtCl4]2−, exhibits square-planar geometry.

Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions of certain differential equations are well-behaved. Geometric properties that remain stable under group actions are investigated in (geometric) invariant theory.

General linear group and representation theory

Two vectors have the same length and span a 90° angle. Furthermore, they are rotated by 90°, then one vector is stretched to twice its length.
Two vectors (the left illustration) multiplied by matrices (the middle and right illustrations). The middle illustration represents a clockwise rotation by 90°, while the right-most one stretches the x-coordinate by factor 2.

Matrix groups consist of matrices together with matrix multiplication. The general linear group GL(n, R) consists of all invertible n-by-n matrices with real entries. Its subgroups are referred to as matrix groups or linear groups. The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is the special orthogonal group SO(n). It describes all possible rotations in n dimensions. Rotation matrices in this group are used in computer graphics.
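As a small sketch in plain Python (the helper names rotation and matmul are ours), the group law of the rotation group in two dimensions is visible numerically: composing two rotation matrices gives the rotation by the sum of the angles.

```python
import math

def rotation(theta):
    # A 2-by-2 rotation matrix, stored as nested lists.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    # Ordinary 2-by-2 matrix multiplication: the group operation.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Composing a 60° rotation with a 30° rotation yields a 90° rotation.
R = matmul(rotation(math.pi / 3), rotation(math.pi / 6))
E = rotation(math.pi / 2)
print(all(abs(R[i][j] - E[i][j]) < 1e-12 for i in range(2) for j in range(2)))  # True
```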

Representation theory is both an application of the group concept and important for a deeper understanding of groups. It studies the group by its group actions on other spaces. A broad class of group representations are linear representations in which the group acts on a vector space, such as the three-dimensional Euclidean space R³. A representation of a group G on an n-dimensional real vector space is simply a group homomorphism ρ: G → GL(n, R) from the group to the general linear group. This way, the group operation, which may be abstractly given, translates to the multiplication of matrices, making it accessible to explicit computations.

A group action gives further means to study the object being acted on. On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups and topological groups, especially (locally) compact groups.

Galois groups

Galois groups were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation ax² + bx + c = 0 (with a ≠ 0) are given by

    x = (−b ± √(b² − 4ac)) / (2a).

Each solution can be obtained by replacing the ± sign by + or −; analogous formulae are known for cubic and quartic equations, but do not exist in general for degree 5 and higher. In the quadratic formula, changing the sign (permuting the resulting two solutions) can be viewed as a (very simple) group operation. Analogous Galois groups act on the solutions of higher-degree polynomials and are closely related to the existence of formulas for their solution. Abstract properties of these groups (in particular their solvability) give a criterion for the ability to express the solutions of these polynomials using solely addition, multiplication, and roots similar to the formula above.
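A few lines of Python (sample coefficients ours, monic case with leading coefficient 1) make the symmetry concrete: swapping the sign permutes the two roots, while the symmetric combinations that recover the coefficients are unchanged by the swap.

```python
import math

# Roots of x^2 + bx + c = 0 for the sample values b = -5, c = 6.
b, c = -5.0, 6.0
d = math.sqrt(b * b - 4 * c)
r_plus, r_minus = (-b + d) / 2, (-b - d) / 2   # the two solutions, 3.0 and 2.0

# Exchanging r_plus and r_minus is the group operation; the expressions
# below are symmetric in the roots, so the coefficients cannot tell
# the two solutions apart:
assert r_plus + r_minus == -b   # sum of roots
assert r_plus * r_minus == c    # product of roots
```

This invariance of the coefficients under permutation of the roots is the basic phenomenon that Galois groups formalize.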

Modern Galois theory generalizes the above type of Galois groups by shifting to field theory and considering field extensions formed as the splitting field of a polynomial. This theory establishes—via the fundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics.

Finite groups

A group is called finite if it has a finite number of elements. The number of elements is called the order of the group. An important class is the symmetric groups S_N, the groups of permutations of N objects. For example, the symmetric group on 3 letters S₃ is the group of all possible reorderings of the objects. The three letters ABC can be reordered into ABC, ACB, BAC, BCA, CAB, CBA, forming in total 6 (factorial of 3) elements. The group operation is composition of these reorderings, and the identity element is the reordering operation that leaves the order unchanged. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group S_N for a suitable integer N, according to Cayley's theorem. Parallel to the group of symmetries of the square above, S₃ can also be interpreted as the group of symmetries of an equilateral triangle.
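A brief Python sketch (the tuple encoding of permutations is our choice) exhibits the six elements of the symmetric group on 3 objects, its composition law, and the fact that it is not abelian:

```python
from itertools import permutations

# Encode a permutation of 3 objects as a tuple of images of (0, 1, 2).
S3 = list(permutations(range(3)))                        # 3! = 6 elements
compose = lambda f, g: tuple(f[g[i]] for i in range(3))  # apply g, then f
identity = (0, 1, 2)

# Group axioms: closure and identity.
assert all(compose(f, g) in S3 for f in S3 for g in S3)
assert all(compose(f, identity) == f for f in S3)

# S3 is not abelian: two transpositions that fail to commute.
f, g = (1, 0, 2), (0, 2, 1)
print(compose(f, g) != compose(g, f))  # True
```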

The order of an element a in a group G is the least positive integer n such that aⁿ = e, where aⁿ represents

    a · a · ⋯ · a  (n factors),

that is, application of the operation "·" to n copies of a. (If "·" represents multiplication, then aⁿ corresponds to the nth power of a.) In infinite groups, such an n may not exist, in which case the order of a is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element.

More sophisticated counting techniques, for example, counting cosets, yield more precise statements about finite groups: Lagrange's Theorem states that for a finite group G the order of any finite subgroup H divides the order of G. The Sylow theorems give a partial converse.

The dihedral group D₄ of symmetries of a square is a finite group of order 8. In this group, the order of r₁ is 4, as is the order of the subgroup R that this element generates. The order of the reflection elements f_v etc. is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groups F_p^× of multiplication modulo a prime p have order p − 1.
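For instance, a Python sketch of ours with the prime p = 7 computes the order of every element of the multiplicative group modulo p and checks that each divides the group order p − 1 = 6, as Lagrange's theorem predicts:

```python
def order(a, p):
    # Order of a in the multiplicative group modulo the prime p:
    # the least n >= 1 with a^n = 1 (mod p).
    n, x = 1, a % p
    while x != 1:
        x, n = (x * a) % p, n + 1
    return n

p = 7
orders = {a: order(a, p) for a in range(1, p)}
print(orders)  # {1: 1, 2: 3, 3: 6, 4: 3, 5: 6, 6: 2}

# Lagrange's theorem: every element order divides the group order p - 1.
assert all((p - 1) % n == 0 for n in orders.values())
```

The elements of order 6 (here 3 and 5) are exactly the generators, confirming again that this group is cyclic.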

Finite abelian groups

Any finite abelian group is isomorphic to a product of finite cyclic groups; this statement is part of the fundamental theorem of finitely generated abelian groups.

Any group of prime order p is isomorphic to the cyclic group Z_p (a consequence of Lagrange's theorem). Any group of order p² is abelian, isomorphic to Z_{p²} or Z_p × Z_p. But there exist nonabelian groups of order p³; the dihedral group D₄ of order 8 = 2³ above is an example.

Simple groups

When a group G has a normal subgroup N other than {1} and G itself, questions about G can sometimes be reduced to questions about N and G/N. A nontrivial group is called simple if it has no such normal subgroup. Finite simple groups are to finite groups as prime numbers are to positive integers: they serve as building blocks, in a sense made precise by the Jordan–Hölder theorem.

Classification of finite simple groups

Computer algebra systems have been used to list all groups of order up to 2000. But classifying all finite groups is a problem considered too hard to be solved.

The classification of all finite simple groups was a major achievement in contemporary group theory. There are several infinite families of such groups, as well as 26 "sporadic groups" that do not belong to any of the families. The largest sporadic group is called the monster group. The monstrous moonshine conjectures, proved by Richard Borcherds, relate the monster group to certain modular functions.

The gap between the classification of simple groups and the classification of all groups lies in the extension problem.

Groups with additional structure

An equivalent definition of group consists of replacing the "there exist" part of the group axioms by operations whose result is the element that must exist. So, a group is a set equipped with a binary operation (the group operation), a unary operation (which provides the inverse) and a nullary operation, which has no operand and results in the identity element. Otherwise, the group axioms are exactly the same. This variant of the definition avoids existential quantifiers and is used in computing with groups and for computer-aided proofs.
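This operational definition can be sketched directly in Python (the class and helper names Group and is_group are ours, not standard): the three operations are stored alongside the carrier set, and every axiom becomes a finite, quantifier-free check.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Group:
    carrier: frozenset
    op: Callable    # binary operation: the group law
    inv: Callable   # unary operation: inverses
    e: object       # nullary operation: the identity element

def is_group(G):
    # Each axiom is an equation to verify, with no "there exists" left.
    C = G.carrier
    return (all(G.op(a, b) in C for a in C for b in C)
        and all(G.op(G.op(a, b), c) == G.op(a, G.op(b, c))
                for a in C for b in C for c in C)
        and all(G.op(G.e, a) == a == G.op(a, G.e) for a in C)
        and all(G.op(a, G.inv(a)) == G.e for a in C))

# Z/4Z under addition passes all four checks:
Z4 = Group(frozenset(range(4)), lambda a, b: (a + b) % 4,
           lambda a: (-a) % 4, 0)
print(is_group(Z4))  # True
```

Computer algebra systems verify group axioms in essentially this equational form, which is what makes the variant convenient for computer-aided proofs.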

This way of defining groups lends itself to generalizations such as the notion of group object in a category. Briefly, this is an object with morphisms that mimic the group axioms.

Topological groups

A part of a circle (highlighted) is projected onto a line.
The unit circle in the complex plane under complex multiplication is a Lie group and, therefore, a topological group. It is topological since complex multiplication and division are continuous. It is a manifold and thus a Lie group, because every small piece, such as the red arc in the figure, looks like a part of the real line (shown at the bottom).
 

Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions; informally, g · h and g⁻¹ must not vary wildly if g and h vary only a little. Such groups are called topological groups, and they are the group objects in the category of topological spaces. The most basic examples are the group of real numbers under addition and the group of nonzero real numbers under multiplication. Similar examples can be formed from any other topological field, such as the field of complex numbers or the field of p-adic numbers. These examples are locally compact, so they have Haar measures and can be studied via harmonic analysis. Other locally compact topological groups include the group of points of an algebraic group over a local field or adele ring; these are basic to number theory. Galois groups of infinite algebraic field extensions are equipped with the Krull topology, which plays a role in infinite Galois theory. A generalization used in algebraic geometry is the étale fundamental group.

Lie groups

A Lie group is a group that also has the structure of a differentiable manifold; informally, this means that it looks locally like a Euclidean space of some fixed dimension. Again, the definition requires the additional structure, here the manifold structure, to be compatible: the multiplication and inverse maps are required to be smooth.

A standard example is the general linear group introduced above: it is an open subset of the space of all n-by-n matrices, because it is given by the inequality

    det(A) ≠ 0,

where A denotes an n-by-n matrix.
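A quick numeric sketch in Python (2-by-2 case, helper name ours): because the determinant depends continuously on the matrix entries, a sufficiently small perturbation of an invertible matrix cannot make the determinant vanish, which is what openness means here.

```python
import random

def det2(A):
    # Determinant of a 2-by-2 matrix [[a, b], [c, d]]: ad - bc.
    (a, b), (c, d) = A
    return a * d - b * c

# An invertible matrix (det = 1) stays invertible under any
# perturbation of its entries that is small enough.
A = [[1.0, 2.0], [0.0, 1.0]]
eps = 1e-6
perturbed = [[x + random.uniform(-eps, eps) for x in row] for row in A]
print(det2(perturbed) != 0)  # True
```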

Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved quantities. Rotations, as well as translations in space and time, are basic symmetries of the laws of mechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description. Another example is the group of Lorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model of spacetime in special relativity. The full symmetry group of Minkowski space, i.e., including translations, is known as the Poincaré group. By the above, it plays a pivotal role in special relativity and, by implication, for quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions with the help of gauge theory. An important example of a gauge theory is the Standard Model, which describes three of the four known fundamental forces and classifies all known elementary particles.

Generalizations

Group-like structures

Structure  Totality  Associativity  Identity  Division  Commutativity
Semigroupoid Unneeded Required Unneeded Unneeded Unneeded
Small category Unneeded Required Required Unneeded Unneeded
Groupoid Unneeded Required Required Required Unneeded
Magma Required Unneeded Unneeded Unneeded Unneeded
Quasigroup Required Unneeded Unneeded Required Unneeded
Unital magma Required Unneeded Required Unneeded Unneeded
Semigroup Required Required Unneeded Unneeded Unneeded
Loop Required Unneeded Required Required Unneeded
Monoid Required Required Required Unneeded Unneeded
Group Required Required Required Required Unneeded
Commutative monoid Required Required Required Unneeded Required
Abelian group Required Required Required Required Required
The closure axiom, used by many sources and defined differently, is equivalent to totality.

In abstract algebra, more general structures are defined by relaxing some of the axioms defining a group. For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers N (including zero) under addition form a monoid, as do the nonzero integers under multiplication (Z∖{0}, ·), see above. There is a general method to formally add inverses to elements of any (abelian) monoid, much the same way as (Q∖{0}, ·) is derived from (Z∖{0}, ·), known as the Grothendieck group. Groupoids are similar to groups except that the composition a · b need not be defined for all a and b. They arise in the study of more complicated forms of symmetry, often in topological and analytical structures, such as the fundamental groupoid or stacks. Finally, it is possible to generalize any of these concepts by replacing the binary operation with an arbitrary n-ary one (i.e., an operation taking n arguments). With the proper generalization of the group axioms this gives rise to an n-ary group. The table gives a list of several structures generalizing groups.

Saturday, September 3, 2022

Cognitive distortion

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cognitive_distortion

A cognitive distortion is an exaggerated or irrational thought pattern involved in the onset or perpetuation of psychopathological states, such as depression and anxiety.

Cognitive distortions are thoughts that cause individuals to perceive reality inaccurately. According to Aaron Beck's cognitive model, a negative outlook on reality, sometimes called negative schemas (or schemata), is a factor in symptoms of emotional dysfunction and poorer subjective well-being. Specifically, negative thinking patterns reinforce negative emotions and thoughts. During difficult circumstances, these distorted thoughts can contribute to an overall negative outlook on the world and a depressive or anxious mental state. According to hopelessness theory and Beck's theory, the meaning or interpretation that people give to their experience importantly influences whether they will become depressed and whether they will experience severe, repeated, or long-duration episodes of depression.

Challenging and changing cognitive distortions is a key element of cognitive behavioral therapy (CBT).

Definition

Cognitive comes from the Medieval Latin cognitīvus, equivalent to Latin cognit(us), 'known'. Distortion means the act of twisting or altering something out of its true, natural, or original state.

History

In 1957, American psychologist Albert Ellis, though he did not know it yet, laid groundwork that would aid cognitive therapy in correcting cognitive distortions and would indirectly help David D. Burns in writing The Feeling Good Handbook. Ellis created what he called the ABC Technique of rational beliefs. The ABC stands for the activating event, the beliefs that are irrational, and the consequences that come from those beliefs. Ellis wanted to show that the activating event is not what causes the emotional behavior or the consequences; rather, it is the beliefs, and how the person irrationally perceives the event, that produce the consequences. With this model, Ellis attempted to use rational emotive behavior therapy (REBT) with his patients, in order to help them "reframe" or reinterpret the experience in a more rational manner. In this model Ellis explains it all for his clients, while Beck helps his clients figure this out on their own. Beck first started to notice these automatic distorted thought processes when practicing psychoanalysis, while his patients followed the rule of saying anything that comes to mind. He realized that his patients had irrational fears, thoughts, and perceptions that were automatic. Beck began noticing the automatic thought processes that he knew his patients had but did not report. Most of the time the thoughts were biased against the patients themselves and very erroneous.

Beck believed that negative schemas develop and manifest themselves in a person's perspective and behavior. The distorted thought processes lead to focusing on degrading the self, amplifying minor external setbacks, and experiencing others' harmless comments as ill-intended, while simultaneously seeing the self as inferior. Inevitably these cognitions are reflected in behavior: a reduced desire to care for oneself and to seek pleasure, and a tendency to give up. These exaggerated perceptions, due to cognition, feel real and accurate because the schemas, after being reinforced through the behavior, tend to become automatic and do not allow time for reflection. This cycle is also known as Beck's cognitive triad, focused on the theory that the person's negative schema is applied to the self, the future, and the environment.

In 1972, psychiatrist, psychoanalyst, and cognitive therapy scholar Aaron T. Beck published Depression: Causes and Treatment. He was dissatisfied with the conventional Freudian treatment of depression, because there was no empirical evidence for the success of Freudian psychoanalysis. Beck's book provided a comprehensive and empirically-supported theoretical model for depression—its potential causes, symptoms, and treatments. In Chapter 2, titled "Symptomatology of Depression", he described "cognitive manifestations" of depression, including low self-evaluation, negative expectations, self-blame and self-criticism, indecisiveness, and distortion of the body image.

Beck's student David D. Burns continued research on the topic. In his book Feeling Good: The New Mood Therapy, Burns described personal and professional anecdotes related to cognitive distortions and their elimination. When Burns published Feeling Good: The New Mood Therapy, it made Beck's approach to distorted thinking widely known and popularized. Burns sold over four million copies of the book in the United States alone. It was a book commonly "prescribed" for patients who have cognitive distortions that have led to depression. Beck approved of the book, saying that it would help others alter their depressed moods by simplifying the extensive study and research that had taken place since shortly after Beck had started as a student and practitioner of psychoanalytic psychiatry. Nine years later, The Feeling Good Handbook was published, which was also built on Beck's work and includes a list of ten specific cognitive distortions that will be discussed throughout this article.

Main types

Examples of some common cognitive distortions seen in depressed and anxious individuals. People may be taught how to identify and alter these distortions as part of cognitive behavioural therapy.

John C. Gibbs and Granville Bud Potter propose four categories for cognitive distortions: self-centered, blaming others, minimizing-mislabeling, and assuming the worst. The cognitive distortions listed below are categories of automatic thinking, and are to be distinguished from logical fallacies.

All-or-nothing thinking

The "all-or-nothing thinking distortion" is also referred to as "splitting", "black-and-white thinking", and "polarized thinking." Someone with the all-or-nothing thinking distortion looks at life in black and white categories. Either they are a success or a failure; either they are good or bad; there is no in-between. According to one article, "Because there is always someone who is willing to criticize, this tends to collapse into a tendency for polarized people to view themselves as a total failure. Polarized thinkers have difficulty with the notion of being 'good enough' or a partial success."

  • Example (from The Feeling Good Handbook): A woman eats a spoonful of ice cream. She thinks she is a complete failure for breaking her diet. She becomes so depressed that she ends up eating the whole quart of ice cream.

This example captures the polarized nature of this distortion—the person believes they are totally inadequate if they fall short of perfection. In order to combat this distortion, Burns suggests thinking of the world in terms of shades of gray. Rather than viewing herself as a complete failure for eating a spoonful of ice cream, the woman in the example could still recognize her overall effort to diet as at least a partial success.

This distortion is commonly found in perfectionists.

Jumping to conclusions

Reaching preliminary conclusions (usually negative) with little (if any) evidence. Three specific subtypes are identified:

Mind reading

Inferring a person's possible or probable (usually negative) thoughts from their behavior and nonverbal communication; taking precautions against the worst suspected case without asking the person.

  • Example 1: A student assumes that the readers of their paper have already made up their minds concerning its topic, and, therefore, writing the paper is a pointless exercise.
  • Example 2: Kevin assumes that because he sits alone at lunch, everyone else must think he is a loser. (This can encourage self-fulfilling prophecy; Kevin may not initiate social contact because of his fear that those around him already perceive him negatively).

Fortune-telling

Predicting outcomes (usually negative) of events.

  • Example: A depressed person tells themselves they will never improve; they will continue to be depressed for their whole life.

One way to combat this distortion is to ask, "If this is true, does it say more about me or them?"

Labeling

Labeling occurs when someone overgeneralizes characteristics of other people. For example, someone might use an unfavorable term to describe a complex person or event.

Emotional reasoning

In the emotional reasoning distortion, it is assumed that feelings expose the true nature of things, and reality is experienced as a reflection of emotionally linked thoughts; something is believed true solely because it feels true.

  • Examples: "I feel stupid, therefore I must be stupid". Feeling fear of flying in planes, and then concluding that planes must be a dangerous way to travel. Feeling overwhelmed by the prospect of cleaning one's house, therefore concluding that it's hopeless to even start cleaning.

Should/shouldn't and must/mustn't statements

Making "must" or "should" statements was included by Albert Ellis in his rational emotive behavior therapy (REBT), an early form of CBT; he termed it "musturbation". Michael C. Graham called it "expecting the world to be different than it is". It can be seen as demanding particular achievements or behaviors regardless of the realistic circumstances of the situation.

  • Example: After a performance, a concert pianist believes he or she should not have made so many mistakes.
  • In Feeling Good: The New Mood Therapy, David Burns clearly distinguished between pathological "should statements", moral imperatives, and social norms.

A related cognitive distortion, also present in Ellis' REBT, is a tendency to "awfulize"; to say a future scenario will be awful, rather than to realistically appraise the various negative and positive characteristics of that scenario. According to Burns, "must" and "should" statements are negative because they cause the person to feel guilty and upset at themselves. Some people also direct this distortion at other people, which can cause feelings of anger and frustration when that other person does not do what they should have done. He also mentions how this type of thinking can lead to rebellious thoughts. In other words, trying to whip oneself into doing something with "shoulds" may cause one to desire just the opposite.

Gratitude traps

A gratitude trap is a type of cognitive distortion that typically arises from misunderstandings regarding the nature or practice of gratitude. The term can refer to one of two related but distinct thought patterns:

  • A self-oriented thought process involving feelings of guilt, shame, or frustration related to one's expectations of how things "should" be
  • An "elusive ugliness in many relationships, a deceptive 'kindness,' the main purpose of which is to make others feel indebted", as defined by psychologist Ellen Kenner

Blaming others

Personalization and blaming

Personalization is assigning personal blame disproportionate to the level of control a person realistically has in a given situation.

  • Example 1: A foster child assumes that he/she has not been adopted because he/she is not "loveable enough".
  • Example 2: A child has bad grades. His/her mother believes it is because she is not a good enough parent.

Blaming is the opposite of personalization. In the blaming distortion, the disproportionate level of blame is placed upon other people, rather than oneself. In this way, the person avoids taking personal responsibility, making way for a "victim mentality".

  • Example: Placing blame for marital problems entirely on one's spouse.

Always being right

In this cognitive distortion, being wrong is unthinkable. This distortion is characterized by actively trying to prove one's actions or thoughts to be correct, and sometimes prioritizing self-interest over the feelings of another person. One's own views of one's surroundings are treated as facts that are always right, while other people's opinions and perspectives are seen as wrong.

Fallacy of change

Relying on social control to obtain cooperative actions from another person. The underlying assumption of this thinking style is that one's happiness depends on the actions of others. The fallacy of change also assumes that other people should change to suit one's own interests automatically and/or that it is fair to pressure them to change. It may be present in most abusive relationships in which partners' "visions" of each other are tied into the belief that happiness, love, trust, and perfection would just occur once they or the other person change aspects of their beings.

Minimizing-mislabeling

Magnification and minimization

Giving proportionally greater weight to a perceived failure, weakness or threat, or lesser weight to a perceived success, strength or opportunity, so that the weight differs from that assigned by others, such as "making a mountain out of a molehill". In depressed clients, often the positive characteristics of other people are exaggerated and their negative characteristics are understated.

  • Catastrophizing – Giving greater weight to the worst possible outcome, however unlikely, or experiencing a situation as unbearable or impossible when it is just uncomfortable.

Labeling and mislabeling

A form of overgeneralization; attributing a person's actions to their character instead of to an attribute. Rather than assuming the behaviour to be accidental or otherwise extrinsic, one assigns a label to someone or something that is based on the inferred character of that person or thing.

Assuming the worst

Overgeneralizing

Someone who overgeneralizes makes faulty generalizations from insufficient evidence, such as seeing a "single negative event" as a "never-ending pattern of defeat", drawing a very broad conclusion from a single incident or a single piece of evidence. Even if something bad happens only once, it is expected to happen over and over again.

  • Example 1: A young woman is asked out on a first date, but not a second one. She is distraught as she tells her friend, "This always happens to me! I'll never find love!"
  • Example 2: A woman is lonely and often spends most of her time at home. Her friends sometimes ask her to dinner and to meet new people. She feels it is useless to even try. No one really could like her. And anyway, all people are the same; petty and selfish.

One suggestion to combat this distortion is to "examine the evidence" by performing an accurate analysis of one's situation. This aids in avoiding exaggerating one's circumstances.

Disqualifying the positive

Disqualifying the positive refers to rejecting positive experiences by insisting they "don't count" for some reason or other. Negative belief is maintained despite contradiction by everyday experiences. Disqualifying the positive may be the most common fallacy in the cognitive distortion range; it is often analyzed alongside "always being right", a type of distortion in which a person engages in all-or-nothing self-judgment. People in this situation show signs of depression. Examples include:

  • "I will never be as good as Jane"
  • "Anyone could have done as well"
  • "They are just congratulating me to be nice"

Mental filtering

Filtering distortions occur when an individual dwells only on the negative details of a situation and filters out the positive aspects.

  • Example: Andy gets mostly compliments and positive feedback about a presentation he has done at work, but he also has received a small piece of criticism. For several days following his presentation, Andy dwells on this one negative reaction, forgetting all of the positive reactions that he had also been given.

The Feeling Good Handbook notes that filtering is like a "drop of ink that discolors a beaker of water". One suggestion to combat filtering is a cost–benefit analysis. A person with this distortion may find it helpful to sit down and assess whether filtering out the positive and focusing on the negative is helping or hurting them in the long run.

Conceptualization

In a series of publications, philosopher Paul Franceschi has proposed a unified conceptual framework for cognitive distortions designed to clarify their relationships and define new ones. This conceptual framework is based on three notions: (i) the reference class (a set of phenomena or objects, e.g. events in the patient's life); (ii) dualities (positive/negative, qualitative/quantitative, ...); (iii) the taxon system (degrees that allow attributing properties, according to a given duality, to the elements of a reference class). In this model, "dichotomous reasoning", "minimization", "maximization" and "arbitrary focus" constitute general cognitive distortions (applying to any duality), whereas "disqualification of the positive" and "catastrophism" are specific cognitive distortions, applying to the positive/negative duality. This conceptual framework posits two additional cognitive distortion classifications: the "omission of the neutral" and the "requalification in the other pole".

Cognitive restructuring

Cognitive restructuring (CR) is a popular form of therapy used to identify and reject maladaptive cognitive distortions, and is typically used with individuals diagnosed with depression. In CR, the therapist and client first examine a stressful event or situation reported by the client. For example, a depressed male college student who experiences difficulty in dating might believe that his "worthlessness" causes women to reject him. Together, therapist and client might then create a more realistic cognition, e.g., "It is within my control to ask girls on dates. However, even though there are some things I can do to influence their decisions, whether or not they say yes is largely out of my control. Thus, I am not responsible if they decline my invitation." CR therapies are designed to eliminate "automatic thoughts" that include clients' dysfunctional or negative views. According to Beck, doing so reduces feelings of worthlessness, anxiety, and anhedonia that are symptomatic of several forms of mental illness. CR is the main component of Beck's and Burns's CBT.

Narcissistic defense

Those diagnosed with narcissistic personality disorder tend, unrealistically, to view themselves as superior, overemphasizing their strengths and understating their weaknesses. Narcissists use exaggeration and minimization this way to shield themselves against psychological pain.

Decatastrophizing

In cognitive therapy, decatastrophizing or decatastrophization is a cognitive restructuring technique that may be used to treat cognitive distortions, such as magnification and catastrophizing, commonly seen in psychological disorders like anxiety and psychosis. Major features of these disorders are the subjective report of being overwhelmed by life circumstances and the incapability of affecting them.

The goal of CR is to help the client change their perceptions to render the felt experience as less significant.

Criticism

Common criticisms of the diagnosis of cognitive distortion relate to epistemology and the theoretical basis. If the perceptions of the patient differ from those of the therapist, it may not be because of intellectual malfunctions but because the patient has different experiences. In some cases, depressed subjects appear to be "sadder but wiser".

Lateral computing

From Wikipedia, the free encyclopedia

Lateral computing is a lateral thinking approach to solving computing problems. Lateral thinking, popularized by Edward de Bono, is a thinking technique applied to generate creative ideas and solve problems. Similarly, by applying lateral-computing techniques to a problem, it can become much easier to arrive at a computationally inexpensive, easy-to-implement, efficient, innovative or unconventional solution.

The traditional or conventional approach to solving computing problems is to either build mathematical models or use an IF-THEN-ELSE structure. For example, a brute-force search is used in many chess engines, but this approach is computationally expensive and sometimes arrives at poor solutions. It is for problems like this that lateral computing can be useful.

A simple problem of truck backup can be used to illustrate lateral computing. This is one of the difficult tasks for traditional computing techniques, and it has been solved efficiently by the use of fuzzy logic (which is a lateral-computing technique). Lateral computing sometimes arrives at a novel solution to a particular computing problem by modelling how living beings such as humans, ants, and honeybees solve a problem; how pure crystals are formed by annealing; how living beings evolve; or how quantum systems behave.

From lateral-thinking to lateral-computing

Lateral thinking is a technique for creative thinking and problem solving. The brain, as the center of thinking, has a self-organizing information system: it tends to create patterns, and the traditional thinking process uses them to solve problems. The lateral-thinking technique proposes escaping from this patterning to arrive at better solutions through new ideas. Provocative use of information processing is the basic underlying principle of lateral thinking.

The provocative operator (PO) is something which characterizes lateral thinking. Its function is to generate new ideas by provocation and to provide an escape route from old ideas. It creates a provisional arrangement of information.

Water logic contrasts with traditional or "rock" logic. Water logic has boundaries that depend on circumstances and conditions, while rock logic has hard boundaries. Water logic, in some ways, resembles fuzzy logic.

Transition to lateral-computing

Lateral computing makes provocative use of information processing, similar to lateral thinking. This can be explained with evolutionary computing, a very useful lateral-computing technique. Evolution proceeds by change and selection: while random mutation provides the change, selection occurs through survival of the fittest. Random mutation works as provocative information processing and provides a new avenue for generating better solutions to the computing problem. The term "lateral computing" was first proposed by Prof. C. R. Suthikshn Kumar, and the First World Congress on Lateral Computing (WCLC 2004) was organized with international participants in December 2004.

Lateral computing takes the analogies from real-world examples such as:

  • How slow cooling of the hot gaseous state results in pure crystals (Annealing)
  • How the neural networks in the brain solve such problems as face and speech recognition
  • How simple insects such as ants and honeybees solve some sophisticated problems
  • How the evolution of human beings from molecular life forms is mimicked by evolutionary computing
  • How living organisms defend themselves against diseases and heal their wounds
  • How electricity is distributed by grids

Differentiating factors of "lateral computing":

  • Does not directly approach the problem through mathematical means.
  • Uses indirect models or looks for analogies to solve the problem.
  • Radically different from what is in vogue, such as using photons for computing in optical computing. This is rare, as most conventional computers use electrons to carry signals.
  • Sometimes lateral-computing techniques are surprisingly simple and deliver high-performance solutions to very complex problems.
  • Some lateral-computing techniques use "unexplained jumps" that may not look logical; an example is the "mutation" operator in genetic algorithms.

Convention – lateral

It is very hard to draw a clear boundary between conventional and lateral computing. Over time, some unconventional computing techniques become an integral part of mainstream computing, so there will always be an overlap between conventional and lateral computing. It would be a tough task to classify a computing technique as conventional or lateral, as shown in the figure: the boundaries are fuzzy, and one may approach them with fuzzy sets.

Formal definition

Lateral computing is a fuzzy set of all computing techniques that use an unconventional computing approach. Hence, lateral computing includes techniques that use semi-conventional or hybrid computing. The degree of membership of lateral-computing techniques is greater than 0 in the fuzzy set of unconventional computing techniques.

The following brings out some important differentiators for lateral computing.

Conventional computing

  • The problem and technique are directly correlated.
  • Treats the problem with rigorous mathematical analysis.
  • Creates mathematical models.
  • The computing technique can be analyzed mathematically.
Lateral computing

  • The problem may hardly have any relation to the computing technique used.
  • Approaches problems by analogies such as human information processing model, annealing, etc.
  • Sometimes the computing technique cannot be mathematically analyzed.

Lateral computing and parallel computing

Parallel computing focuses on improving the performance of computers and algorithms through the use of several computing elements (such as processing elements). The computing speed is improved by using several computing elements. Parallel computing is an extension of conventional sequential computing. In lateral computing, however, the problem is solved using unconventional information processing, whether with sequential or parallel computing.

A review of lateral-computing techniques

There are several computing techniques which fit the Lateral computing paradigm. Here is a brief description of some of the Lateral Computing techniques:

Swarm intelligence

Swarm intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents, interacting locally with their environment, cause coherent functional global patterns to emerge. SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model.

One interesting swarm-intelligence technique is the ant colony algorithm:

  • Ants are behaviorally unsophisticated, yet collectively they perform complex tasks; they have highly developed, sophisticated sign-based communication.
  • Ants communicate using pheromones; trails are laid that can be followed by other ants.
  • In routing problems, ants drop different pheromones that are used to compute the "shortest" path from source to destination(s).
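The pheromone mechanism described above can be sketched in a few lines of Python. This is a deliberately minimal toy, not a full ant colony optimization implementation: the two routes, their lengths, and all parameter values are invented for illustration. Pheromone evaporates each round, and each ant reinforces its chosen route in inverse proportion to the route's length, so the shorter route gradually dominates.

```python
import random

def ant_colony_two_routes(lengths=(1.0, 3.0), n_ants=20, n_rounds=100,
                          evaporation=0.5, seed=0):
    """Toy ant-colony sketch: ants choose between two routes by pheromone."""
    rng = random.Random(seed)
    pheromone = [1.0, 1.0]
    for _ in range(n_rounds):
        deposits = [0.0, 0.0]
        for _ in range(n_ants):
            # each ant picks a route with probability proportional to pheromone
            total = pheromone[0] + pheromone[1]
            route = 0 if rng.random() < pheromone[0] / total else 1
            deposits[route] += 1.0 / lengths[route]  # shorter route: more pheromone
        # evaporation plus this round's deposits
        pheromone = [(1 - evaporation) * p + d
                     for p, d in zip(pheromone, deposits)]
    return pheromone

p = ant_colony_two_routes()
# the shorter route (index 0) accumulates far more pheromone than the longer one
```

The positive feedback between route choice and pheromone deposit is what lets the colony converge on the "shortest" path without any central control.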

Agent-based systems

Agents are encapsulated computer systems that are situated in some environment and are capable of flexible, autonomous action in that environment in order to meet their design objectives. Agents are considered to be autonomous (independent, not-controllable), reactive (responding to events), pro-active (initiating actions of their own volition), and social (communicative). Agents vary in their abilities: they can be static or mobile, or may or may not be intelligent. Each agent may have its own task and/or role. Agents, and multi-agent systems, are used as a metaphor to model complex distributed processes. Such agents invariably need to interact with one another in order to manage their inter-dependencies. These interactions involve agents cooperating, negotiating and coordinating with one another.

Agent-based systems are computer programs that try to simulate various complex phenomena via virtual "agents" that represent the components of a business system. The behaviors of these agents are programmed with rules that realistically depict how business is conducted. As widely varied individual agents interact in the model, the simulation shows how their collective behaviors govern the performance of the entire system, for instance the emergence of a successful product or an optimal schedule. These simulations are powerful strategic tools for "what-if" scenario analysis: as managers change agent characteristics or "rules", the impact of the change can be easily seen in the model output.

Grid computing

By analogy with the electrical power grid, a computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. The applications of grid computing are in:

  • Chip design, cryptographic problems, medical instrumentation, and supercomputing.
  • Distributed supercomputing applications use grids to aggregate substantial computational resources in order to tackle problems that cannot be solved on a single system.

Autonomic computing

The autonomic nervous system governs our heart rate and body temperature, thus freeing our conscious brain from the burden of dealing with these and many other low-level, yet vital, functions. The essence of autonomic computing is self-management, the intent of which is to free system administrators from the details of system operation and maintenance.

Four aspects of autonomic computing are:

  • Self-configuration
  • Self-optimization
  • Self-healing
  • Self-protection

This is a grand challenge promoted by IBM.

Optical computing

Optical computing uses photons rather than conventional electrons for computing. There are quite a few instances of optical computers and of their successful use. Conventional logic gates use semiconductors, in which electrons transport the signals. In optical computers, the photons in a light beam are used to do computation.

There are numerous advantages of using optical devices for computing such as immunity to electromagnetic interference, large bandwidth, etc.

DNA computing

DNA computing uses strands of DNA to encode an instance of a problem and manipulates them, using techniques commonly available in any molecular biology laboratory, to simulate operations that select the solution of the problem if it exists.

Since the DNA molecule is also a code, made up of a sequence of four bases that pair up in a predictable manner, many scientists have thought about the possibility of creating a molecular computer. These computers rely on the massively parallel reactions of DNA nucleotides binding with their complements, a brute-force method that holds enormous potential for creating a new generation of computers that would be 100 billion times faster than today's fastest PC. DNA computing has been heralded as the "first example of true nanotechnology", and even the "start of a new era", which forges an unprecedented link between computer science and life science.

Example applications of DNA computing include solving the Hamiltonian path problem, a known NP-complete problem: the number of required lab operations grows only linearly with the number of vertices of the graph. Molecular algorithms have also been reported that solve certain cryptographic problems in a polynomial number of steps; factoring large numbers, for instance, is a relevant problem in many cryptographic applications.

Quantum computing

In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary. This property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical value 0 or 1, as a classical bit can, but also in states corresponding to a blend, or quantum superposition, of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a complex coefficient (amplitude) for each state whose squared magnitude gives that state's probability. A quantum computer manipulates qubits by executing a series of quantum gates, each a unitary transformation acting on a single qubit or a pair of qubits. By applying these gates in succession, a quantum computer can perform a complicated unitary transformation on a set of qubits in some initial state.
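The amplitude picture above can be illustrated with a toy single-qubit simulation, run (of course) on a classical machine rather than real quantum hardware: a qubit state is a pair of complex amplitudes, and a gate is a 2x2 unitary matrix applied to that pair. The Hadamard gate below is the standard example that turns |0> into an equal superposition.

```python
import math

def apply_gate(gate, state):
    """Apply a 2x2 unitary gate to a qubit state (a pair of amplitudes)."""
    (g00, g01), (g10, g11) = gate
    a, b = state
    return (g00 * a + g01 * b, g10 * a + g11 * b)

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>
H = ((1 / math.sqrt(2), 1 / math.sqrt(2)),
     (1 / math.sqrt(2), -1 / math.sqrt(2)))

state = (1 + 0j, 0 + 0j)      # the classical state |0>
state = apply_gate(H, state)  # now a superposition of |0> and |1>

# squared magnitudes of the amplitudes give measurement probabilities
probs = (abs(state[0]) ** 2, abs(state[1]) ** 2)
# probs is approximately (0.5, 0.5): equally likely to measure 0 or 1
```

Measuring the qubit would yield 0 or 1 with these probabilities, which is exactly the "simultaneously both 0 and 1" behavior the text describes.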

Reconfigurable computing

Field-programmable gate arrays (FPGAs) are making it possible to build truly reconfigurable computers: the computer architecture is transformed by on-the-fly reconfiguration of the FPGA circuitry. The optimal matching between architecture and algorithm improves the performance of the reconfigurable computer. The key features are hardware performance and software flexibility.

For several applications such as fingerprint matching, DNA sequence comparison, etc., reconfigurable computers have been shown to perform several orders of magnitude better than conventional computers.

Simulated annealing

The simulated annealing algorithm is designed by observing how pure crystals form from a heated gaseous state while the system is cooled slowly. The computing problem is recast as a simulated annealing exercise, and the solutions are arrived at accordingly. The working principle of simulated annealing is borrowed from metallurgy: a piece of metal is heated (the atoms are given thermal agitation), and then the metal is left to cool slowly. The slow and regular cooling of the metal allows the atoms to slide progressively into their most stable ("minimal energy") positions. (Rapid cooling would have "frozen" them in whatever position they happened to be in at that time.) The resulting structure of the metal is stronger and more stable. By simulating the process of annealing inside a computer program, it is possible to find answers to difficult and very complex problems. Instead of minimizing the energy of a block of metal or maximizing its strength, the program minimizes or maximizes some objective relevant to the problem at hand.
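A minimal sketch of the idea in Python: the objective function, cooling schedule, and parameter values below are all illustrative choices, not part of any canonical algorithm. A candidate move that worsens the objective is still accepted with probability exp(-delta/T), which lets the search escape local minima while the "temperature" T is high.

```python
import math
import random

def simulated_annealing(f, x0=5.0, t_start=10.0, t_end=1e-3,
                        cooling=0.99, seed=1):
    """Minimize f by random perturbation with a slowly cooled temperature."""
    rng = random.Random(seed)
    x, t = x0, t_start
    best_x = x
    while t > t_end:
        candidate = x + rng.uniform(-1.0, 1.0)   # random local move
        delta = f(candidate) - f(x)
        # always accept improvements; accept worse moves with prob exp(-delta/t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
            if f(x) < f(best_x):
                best_x = x
        t *= cooling   # slow cooling, as in metallurgical annealing
    return best_x

# an illustrative bumpy objective with many local minima
f = lambda x: x * x + 10 * math.sin(3 * x)
x = simulated_annealing(f)   # a much better point than the start x0 = 5.0
```

The geometric cooling schedule (`t *= cooling`) mirrors the "slow and regular cooling" of the metal; cooling too fast would freeze the search in a poor local minimum, just as rapid cooling freezes atoms in place.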

Soft computing

One of the main components of lateral computing is soft computing, which approaches problems with a human-like information-processing model. Soft computing comprises fuzzy logic, neuro-computing, evolutionary computing, machine learning, and probabilistic-chaotic computing.

Neuro computing

Instead of solving a problem by creating a non-linear equation model of it, the biological neural network analogy is used for solving the problem. The neural network is trained like a human brain to solve a given problem. This approach has become highly successful in solving some of the pattern recognition problems.

Evolutionary computing

The genetic algorithm (GA) resembles natural evolution and provides a universal approach to optimization. Genetic algorithms start with a population of chromosomes representing candidate solutions. The solutions are evaluated using a fitness function, and a selection process determines which solutions are used in the competition process. New solutions are created using evolutionary operators such as mutation and crossover. These algorithms are highly successful in solving search and optimization problems.
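The ingredients listed above (a population, a fitness function, selection, crossover, and mutation) can be sketched on the toy "OneMax" problem, where a chromosome is a bit string and fitness is simply the number of 1s. All sizes and rates below are illustrative choices.

```python
import random

def genetic_algorithm(length=20, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=2):
    """Toy GA on OneMax: evolve bit strings toward all 1s."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a chromosome = number of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # tournament selection: the fitter of two random chromosomes wins
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # one-point crossover
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            # mutation: flip each bit with small probability
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = genetic_algorithm()   # a bit string close (or equal) to all 1s
```

The mutation operator here is exactly the kind of "unexplained jump" mentioned earlier: it injects random change, and selection keeps whatever happens to work.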

Fuzzy logic

Fuzzy logic is based on the fuzzy-set concepts proposed by Lotfi Zadeh. The degree-of-membership concept is central to fuzzy sets: fuzzy sets differ from crisp sets in that they allow an element to belong to a set to a degree (its degree of membership). This approach finds good applications in control problems. Fuzzy logic has found enormous application and already has a big market presence in consumer electronics such as washing machines, microwaves, mobile phones, televisions, camcorders, etc.
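The degree-of-membership idea can be made concrete with a toy fuzzy set; the temperature thresholds below are invented purely for illustration. Unlike a crisp set, where a temperature either is or is not "hot", the fuzzy set assigns every temperature a membership degree between 0 and 1.

```python
def hot_membership(temp_c):
    """Degree to which temp_c belongs to the fuzzy set 'hot' (0.0 to 1.0)."""
    if temp_c <= 20:
        return 0.0                     # definitely not hot
    if temp_c >= 35:
        return 1.0                     # fully hot
    return (temp_c - 20) / 15.0        # linear ramp between 20 and 35 C

# standard fuzzy connectives (Zadeh's min/max operators)
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

m = hot_membership(27.5)   # 0.5: 27.5 C is "half hot"
```

A fuzzy controller (as in the washing machines mentioned above) combines such membership degrees with rules like "IF temperature is hot AND load is large THEN ...", then defuzzifies the result into a crisp control signal.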

Probabilistic/chaotic computing

Probabilistic computing engines use, for example, probabilistic graphical models such as Bayesian networks. Such computational techniques are referred to as randomization and yield probabilistic algorithms. When interpreted as a physical phenomenon through classical statistical thermodynamics, such techniques lead to energy savings proportional to the probability p with which each primitive computational step is guaranteed to be correct (or, equivalently, to the probability of error, 1 - p). Chaotic computing is based on chaos theory.

Fractals

Fractals are objects displaying self-similarity at different scales. Fractal generation involves small iterative algorithms. Fractals have dimensions greater than their topological dimensions: the length of a fractal is infinite, and its size cannot be measured. A fractal is described by an iterative algorithm, unlike a Euclidean shape, which is given by a simple formula. There are several types of fractals, and the Mandelbrot set is very popular.

Fractals have found applications in image processing, image compression, music generation, computer games, etc. The Mandelbrot set is a fractal named after its creator, Benoit Mandelbrot. Unlike many other fractals, even though the Mandelbrot set is self-similar at magnified scales, its small-scale details are not identical to the whole; that is, the Mandelbrot set is infinitely complex. Yet the process of generating it is based on an extremely simple equation. The Mandelbrot set M is a collection of complex numbers: a complex constant C belongs to M if the iteration of the Mandelbrot equation, starting from Z_0 = 0, remains bounded. Mandelbrot equation: Z_{n+1} = Z_n^2 + C.
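The Mandelbrot iteration Z_{n+1} = Z_n^2 + C, starting from Z_0 = 0, gives a simple membership test that can be sketched in a few lines. The escape radius of 2 and a fixed iteration cap are the conventional practical choices: if |Z| ever exceeds 2 the iteration is known to diverge, and if it stays bounded for the whole budget the point is treated as a member.

```python
def in_mandelbrot(c, max_iter=100):
    """Approximate membership test: does z -> z*z + c stay bounded from z = 0?"""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False   # escaped: c is definitely not in the set
    return True            # stayed bounded for the whole budget: treat as member

# c = 0 stays at 0 forever, so it is in the set;
# c = 1 iterates 1, 2, 5, ... and escapes quickly, so it is not
```

Coloring each pixel of the complex plane by how quickly it escapes is what produces the familiar Mandelbrot images, which is why such an "infinitely complex" object comes from such a simple rule.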

Randomized algorithm

A randomized algorithm makes random choices during its execution. This allows a savings in execution time at the beginning of a program. The disadvantage of this method is the possibility that an incorrect solution will occur. A well-designed randomized algorithm will have a very high probability of returning a correct answer. The two categories of randomized algorithms are Las Vegas algorithms, which always return a correct answer but whose running time varies randomly, and Monte Carlo algorithms, which run in bounded time but may return an incorrect answer with some small probability.

Consider an algorithm to find the kth element of an array. A deterministic approach would be to choose a pivot element near the median of the list and partition the list around that element. The randomized approach to this problem would be to choose a pivot at random, thus saving time at the beginning of the process. Like approximation algorithms, randomized algorithms can be used to attack tough NP-complete problems more quickly. An advantage over approximation algorithms, however, is that a randomized algorithm will eventually yield an exact answer if executed enough times.
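The random-pivot idea just described is the classic randomized quickselect, sketched below. It is a Las Vegas-style algorithm: the running time varies with the random pivot choices, but the returned answer is always exact (here k counts from 0, so k = 2 means the third smallest element).

```python
import random

def quickselect(items, k, rng=random.Random(3)):
    """Find the k-th smallest element (k from 0) with a random pivot."""
    items = list(items)
    while True:
        pivot = rng.choice(items)            # random pivot, no median-finding
        lows = [x for x in items if x < pivot]
        pivots = [x for x in items if x == pivot]
        highs = [x for x in items if x > pivot]
        if k < len(lows):
            items = lows                     # answer is among the smaller items
        elif k < len(lows) + len(pivots):
            return pivot                     # the pivot itself is the answer
        else:
            k -= len(lows) + len(pivots)     # answer is among the larger items
            items = highs

# quickselect([7, 2, 9, 4, 1], 2) returns the third smallest element, 4
```

On average the random pivot splits the list well enough for linear expected time, without the up-front cost of deterministically locating a near-median pivot.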

Machine learning

Human beings and animals learn new skills, languages, and concepts. Similarly, machine learning algorithms provide the capability to generalize from training data. There are two classes of machine learning (ML):

  • Supervised ML
  • Unsupervised ML

One well-known machine learning technique is the backpropagation algorithm, which mimics how humans learn from examples. The training patterns are repeatedly presented to the network; the error is propagated backwards, and the network weights are adjusted using gradient descent. The network converges through several hundred iterative computations.
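The training loop just described can be sketched with a tiny 2-2-1 sigmoid network. To keep convergence reliable in a short, self-contained example, it is trained here on the logical OR function rather than a real pattern-recognition task; the network size, learning rate, and epoch count are all illustrative choices.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_or(epochs=5000, lr=0.5, seed=4):
    """Train a 2-2-1 sigmoid network on OR via backpropagation."""
    rng = random.Random(seed)
    # hidden layer: 2 units, each with 2 input weights + bias
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    # output unit: 2 hidden weights + bias
    w_o = [rng.uniform(-1, 1) for _ in range(3)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR table
    for _ in range(epochs):
        for (x1, x2), target in data:
            # forward pass
            h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
            out = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
            # backward pass: output delta, then hidden deltas
            d_out = (out - target) * out * (1 - out)
            d_h = [d_out * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
            # gradient-descent weight updates
            for i in range(2):
                w_o[i] -= lr * d_out * h[i]
                for j, inp in enumerate((x1, x2, 1.0)):
                    w_h[i][j] -= lr * d_h[i] * inp
            w_o[2] -= lr * d_out
    def predict(x1, x2):
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        return sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return predict

predict = train_or()   # predict(0, 0) is near 0; the other inputs near 1
```

The two comments marked "backward pass" and "weight updates" are exactly the "error is propagated backwards" and "adjusted using gradient descent" steps from the text, just written out for the smallest network that has a hidden layer.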

Support vector machines

This is another class of highly successful machine learning techniques, applied to tasks such as text classification, speaker recognition, image recognition, etc.

Example applications

There are several successful applications of lateral-computing techniques. Here is a small set of applications that illustrates lateral computing:

  • Bubble sorting: Here the computing problem of sorting is approached with the analogy of bubbles rising in water: the numbers are treated as bubbles and floated to their natural positions.
  • Truck backup problem: This is an interesting problem of reversing a truck and parking it at a particular location. Traditional computing techniques have found it difficult to solve, but it has been solved successfully by a fuzzy system.
  • Balancing an inverted pendulum: This problem involves balancing an inverted pendulum. It has been solved efficiently by neural networks and fuzzy systems.
  • Smart volume control for mobile phones: The volume control in mobile phones depends on background noise levels, noise classes, the hearing profile of the user, and other parameters. Measurements of noise level and loudness involve imprecision and subjective factors. The authors have demonstrated the successful use of a fuzzy logic system for volume control in mobile handsets.
  • Optimization using genetic algorithms and simulated annealing: Problems such as the traveling salesman problem have been shown to be NP-complete. Such problems are solved using algorithms which benefit from heuristics. Some of the applications are in VLSI routing, partitioning, etc. Genetic algorithms and simulated annealing have been successful in solving such optimization problems.
  • Programming The Unprogrammable (PTU) involving the automatic creation of computer programs for unconventional computing devices such as cellular automata, multi-agent systems, parallel systems, field-programmable gate arrays, field-programmable analog arrays, ant colonies, swarm intelligence, distributed systems, and the like.
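The first item in the list above, bubble sorting, is simple enough to show in full: on each pass, adjacent out-of-order elements are swapped, so large values "float" toward their natural position like bubbles rising in water. The early-exit flag is a common optimization, not part of the bare analogy.

```python
def bubble_sort(items):
    """Sort by repeatedly 'floating' larger values past smaller neighbors."""
    items = list(items)           # work on a copy
    n = len(items)
    for i in range(n):
        swapped = False
        # after pass i, the i largest values have bubbled into place,
        # so the inner scan can stop i positions early
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:           # no swaps means the list is already sorted
            break
    return items

# bubble_sort([5, 1, 4, 2]) returns [1, 2, 4, 5]
```

This also illustrates the earlier point that lateral-computing analogies can be "surprisingly simple": the physical picture of bubbles maps directly onto a pair-swapping loop.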

Summary

The above is a review of lateral-computing techniques. Lateral computing is based on the lateral-thinking approach and applies unconventional techniques to solve computing problems. While most problems can be solved with conventional techniques, there are problems which require lateral computing. For several problems, lateral computing provides the advantages of computational efficiency, low cost of implementation, and better solutions than conventional computing. It successfully tackles a class of problems by exploiting tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and low solution cost. Lateral-computing techniques that use human-like information-processing models have been classified as "soft computing" in the literature.

Lateral computing is valuable when solving numerous computing problems whose mathematical models are unavailable. It provides a way of developing innovative solutions, resulting in smart systems with Very High Machine IQ (VHMIQ). This article has traced the transition from lateral thinking to lateral computing, described several lateral-computing techniques, and outlined their applications. Lateral computing is for building a new generation of artificial intelligence based on unconventional processing.
