
Friday, November 28, 2025

Anima mundi

From Wikipedia, the free encyclopedia
Illustration of the correspondences between all parts of the created cosmos, with its soul depicted as a woman, from Robert Fludd's Utriusque Cosmi Maioris Scilicet et Minoris Metaphysica, Physica atque Technica Historia

The concept of the anima mundi (Latin), world soul (Ancient Greek: ψυχὴ κόσμου, psychḕ kósmou), or soul of the world (ψυχὴ τοῦ κόσμου, psychḕ toû kósmou) posits an intrinsic connection between all living beings, suggesting that the world is animated by a soul much like the human body. Rooted in ancient Greek and Roman philosophy, the idea holds that the world soul infuses the cosmos with life and intelligence. This notion has been influential across various systems of thought, including Stoicism, Gnosticism, Neoplatonism, and Hermeticism, shaping metaphysical and cosmological frameworks throughout history.

In ancient philosophy, Plato's dialogue Timaeus introduces the universe as a living creature endowed with a soul and reason, constructed by the demiurge according to a rational pattern expressed through mathematical principles. Plato describes the world soul as a mixture of sameness and difference, forming a unified, harmonious entity that permeates the cosmos. This soul animates the universe, ensuring its rational structure and function according to a divine plan, with the motions of the seven classical planets reflecting the deep connection between mathematics and reality in Platonic thought.

Stoicism and Gnosticism are two significant philosophical systems that elaborated on this concept. Stoicism, founded by Zeno of Citium in the early 3rd century BCE, posited that the universe is a single, living entity permeated by the divine rational principle known as the logos, which organizes and animates the cosmos, functioning as its soul. Gnosticism, emerging in the early centuries of the Common Era, often associates the world soul with Sophia, who embodies divine wisdom and the descent into the material world. Gnostics believed that, through esoteric knowledge, individuals could transcend the material world and reunite with the divine.

Neoplatonism and Hermeticism also incorporated the concept of the world soul into their cosmologies. Neoplatonism, flourishing in the 3rd century CE through philosophers like Plotinus and Proclus, proposed a hierarchical structure of existence with the World Soul acting as an intermediary between the intelligible realm and the material world, animating and organizing the cosmos. Hermeticism, based on writings attributed to Hermes Trismegistus, views the world soul as a vital force uniting the cosmos. Hermetic texts describe the cosmos as a living being imbued with a divine spirit, emphasizing the unity and interconnection of all things. Aligning oneself with the world soul is seen as a path to spiritual enlightenment and union with the divine, a belief that experienced a resurgence during the Renaissance, when Hermeticism was revived and integrated into the intellectual and spiritual movements of the time.

Ancient philosophy

Plato described the universe as a living being in his dialogue Timaeus (30b–d):

Thus, then, in accordance with the likely account, we must declare that this Cosmos has verily come into existence as a Living Creature endowed with soul and reason [...] a Living Creature, one and visible, containing within itself all the living creatures which are by nature akin to itself.

Plato's Timaeus describes this living cosmos as being built by the demiurge, constructed to be self-identical and intelligible to reason, according to a rational pattern expressed in mathematical principles and Pythagorean ratios describing the structure of the cosmos, and particularly the motions of the seven classical planets. The living universe is also a god titled Ouranos and Kosmos, which shows, as scholars have argued, that Plato mediates between the poetic and presocratic traditions.

In Timaeus, Plato presents the cosmos as a single, living organism that possesses a soul and intelligence. The demiurge, a divine craftsman, creates the universe by imposing order on pre-existing chaotic matter. This creation is not ex nihilo but rather a process of organizing the cosmos according to the eternal Forms, which are perfect, immutable archetypes of all things.

Plato explains that the world soul is a mixture of the same and the different, woven together to form a unified, harmonious entity. This soul permeates the entire cosmos, animating it and endowing it with life and intelligence. The world soul is responsible for the rational structure of the universe, ensuring that everything functions according to a divine plan.

The rational pattern of the cosmos is expressed through mathematical principles and Pythagorean ratios, reflecting the deep connection between mathematics and the structure of reality in Platonic thought. The motions of the seven classical planets (the Moon, Mercury, Venus, the Sun, Mars, Jupiter, and Saturn) are particularly significant, as they embody the harmony and order of the universe.

Plato's identification of the cosmos as a god, titled Ouranos and Kosmos, reveals his synthesis of different philosophical traditions. The name Ouranos connects the world soul to the ancient Greek personification of the sky, while Kosmos signifies order and beauty. By mediating between poetic and presocratic traditions, Plato integrates mythological and philosophical elements into a coherent cosmological vision.

Stoicism

The Stoic school of philosophy, founded by Zeno of Citium in the early 3rd century BCE, significantly contributed to the development of the concept of the world soul. Stoicism posits that the universe is a single, living entity permeated by a divine rational principle known as the logos. This principle organizes and animates the cosmos, functioning as its soul.

Central to Stoic cosmology is the belief that the logos operates as the rational structure underlying all existence. This rational principle is equated with God, nature, and the soul of the universe, making the cosmos a living, rational organism. The Stoics identified the world soul with the concept of pneuma, a life-giving force that pervades and sustains all things. Pneuma is a mixture of air and fire, elements considered active and capable of bestowing life and motion.

The Stoic philosopher Cleanthes described the world soul in his "Hymn to Zeus", where he praises Zeus (a personification of the logos) for harmonizing the cosmos and ensuring its rational order. Chrysippus, another prominent Stoic, further developed the idea of the world soul, arguing that it is the animating principle that ensures the coherence and unity of the cosmos.

The Stoic view of the world soul differs from Plato's in that it emphasizes the materiality of the pneuma. For the Stoics, the soul of the universe is not an abstract, separate entity but a physical presence that interpenetrates the cosmos, providing it with structure and purpose. This physicalist interpretation reflects the Stoic commitment to the idea that only bodies can act and be acted upon.

The Stoic concept of the world soul also has ethical implications. Since the logos governs the cosmos rationally, living in accordance with nature means aligning one's life with this rational order. The Stoics believed that by understanding and accepting the world's rational structure, individuals could achieve a state of tranquility and virtue.

Gnosticism and Neoplatonism

Gnosticism

Gnosticism, a diverse and syncretic religious movement that emerged in the early centuries of the Common Era, also incorporated the concept of the world soul into its cosmological and theological framework. Gnostic systems generally posited a dualistic worldview, contrasting the material world with a higher, spiritual reality. In this context, the world soul often played a crucial role in bridging the divine and material realms.

In Gnostic thought, the world soul is often associated with the figure of Sophia (Wisdom), who embodies both the divine wisdom and the tragic descent into the material world. Sophia's fall and subsequent redemption are central themes in many Gnostic texts. According to the Apocryphon of John, a key Gnostic scripture, Sophia's emanation resulted in the creation of the material world, which is seen as flawed and distant from the divine pleroma (fullness).

In Gnostic systems, the concept of the world soul often carries significant ethical and soteriological implications. Gnostics believed that by acquiring esoteric knowledge and understanding their divine origin, individuals could transcend the material world and reunite with the divine. This process of gnosis involved recognizing the world soul's entrapment in the material realm and working towards its liberation.

Manichaeism

In Manichaeism, a major Gnostic religion founded by the prophet Mani in the 3rd century CE, the world soul was also called the Light Soul and the Living Soul (Middle Persian: grīw zīndag), contrasting it with matter, which was associated with lifelessness and death and within which the world soul was imprisoned. The world soul was personified as the Suffering Jesus (Jesus patibilis) who, like the historical Jesus, was depicted as being crucified in the world. This mystica crucifixio was present in all parts of the world, including the skies, soil, and trees, as expressed in the Coptic Manichaean psalms.

Mandaeism

Mandaeism, another Gnostic tradition that has survived to the present day, also incorporates a concept akin to the world soul. In Mandaean cosmology, the soul's journey through the material world and its eventual return to the World of Light is a central narrative. The soul's purification and ascent are facilitated by esoteric knowledge and ritual practices.

Neoplatonism

The concept of the world soul continued to influence later philosophical thought, particularly in the development of Neoplatonism. Neoplatonists such as Plotinus and Proclus expanded on Plato's ideas, emphasizing the unity and divinity of the cosmos and its connection to the One, the ultimate source of all existence.

Neoplatonism, which flourished in the 3rd century CE, is a philosophical system that builds upon the teachings of Plato and incorporates metaphysical elements. Plotinus, the founder of Neoplatonism, articulated a vision of reality that centers on a hierarchical structure of existence. At the pinnacle of this hierarchy is the One, an ineffable and transcendent principle from which all reality emanates. The One generates the Nous (Divine Mind), which in turn produces the World Soul.

The World Soul in Neoplatonism functions as an intermediary between the intelligible realm (the realm of the Forms) and the sensible world (the material universe). Plotinus describes the World Soul as the vital force that animates and organizes the cosmos, imbuing it with life and intelligence. It is both one and many, maintaining unity while simultaneously generating individual souls and entities within the cosmos.

Proclus, a prominent later Neoplatonist, further developed these ideas. He posited a more elaborate structure, with the World Soul divided into a higher, more divine aspect and a lower, more material aspect. This dual nature allows the World Soul to mediate between the purely intellectual and the physical realms, ensuring the coherence and order of the universe.

The Neoplatonists viewed the World Soul not only as a metaphysical principle but also as a means to achieve personal and cosmic harmony. By aligning one's soul with the World Soul, individuals could attain a higher state of being and participate in the divine order of the cosmos. This process involves philosophical contemplation, ethical living, and the cultivation of virtues that reflect the harmonious nature of the World Soul.

The influence of Neoplatonism extended beyond the classical period, significantly impacting early Christian, Islamic, and Renaissance thought. The integration of Platonic and Neoplatonic ideas into Christian theology, particularly through the works of Augustine and Pseudo-Dionysius, demonstrates the enduring legacy of the concept of the World Soul.

Medieval and Renaissance thought

Scholasticism

During the 12th-century Renaissance of the High Middle Ages, the analysis of Plato's Timaeus by members of the School of Chartres such as William of Conches and Bernardus Silvestris led them to interpret the world soul as possibly or certainly the same as the Christian Holy Spirit under the covering (integumentum) of another name. When, or immediately after, Peter Abelard was condemned by Bernard of Clairvaux and the 1141 Council of Sens for doctrines that came similarly close to pantheism, William repudiated his own writings on the subject and revised his De Philosophia Mundi to avoid discussing it.

Hermeticism

Hermeticism, a spiritual, philosophical, and esoteric tradition based primarily on writings attributed to Hermes Trismegistus, integrates the concept of the world soul into its cosmological framework. The Hermetic tradition, which flourished in the Hellenistic period and saw a revival during the Renaissance, views the world soul as a vital, animating force that permeates and unites the cosmos.

Hermetic writings, particularly the Corpus Hermeticum and the Asclepius, emphasize the unity and interconnection of all things in the universe. These texts describe the cosmos as a living being imbued with a divine spirit or soul. The world soul is seen as the intermediary between the divine intellect (Nous) and the material world, ensuring the harmonious functioning of the cosmos.

In the Corpus Hermeticum, the world soul is often depicted as an emanation of the divine that sustains all creation. This soul is responsible for the life, order, and movement within the universe, acting in accordance with the divine will. The Hermetic worldview is deeply rooted in the idea that understanding and aligning oneself with the world soul can lead to spiritual enlightenment and union with the divine.

Paracelsus

The Renaissance alchemist and physician Paracelsus significantly contributed to the Hermetic tradition by integrating the concept of the world soul into his medical and alchemical theories. Paracelsus believed that the world soul, which he referred to as the Archeus, was the vital force that governed the processes of nature and the human body. He posited that health and disease were influenced by the balance and interaction of this vital force within individuals.

Paracelsus' view of the world soul extended to his understanding of the macrocosm and microcosm, where the human body (microcosm) is a reflection of the larger universe (macrocosm). By studying the world soul's manifestations in nature, Paracelsus believed that alchemists and physicians could uncover the secrets of health and transformation.

Giordano Bruno

Giordano Bruno, a 16th-century Italian philosopher, theologian, and occultist, significantly contributed to the Renaissance revival of the Hermetic tradition. His work is known for its bold integration of Hermeticism, Copernican heliocentrism, and an infinite universe theory, which brought the concept of the world soul into a new, expansive context.

Bruno's cosmology was groundbreaking in that it proposed an infinite universe populated by innumerable worlds. Central to this vision was the idea of the world soul, or anima mundi, which Bruno described as an immanent and animating force pervading the entire cosmos. He argued that the world soul is the source of all motion, life, and intelligence in the universe, linking all parts of the cosmos into a single, living entity.

In his work De la causa, principio et uno (On Cause, Principle, and Unity), Bruno articulated his belief in the unity of the universe and the presence of a single, universal spirit. This spirit, akin to the world soul, ensures the cohesion and harmony of the cosmos, reflecting the Hermetic principle of the interconnectedness of all things.

Bruno was deeply influenced by the Hermetic texts, particularly the Corpus Hermeticum, which he saw as containing profound truths about the nature of the universe and the divine. His philosophy integrated the Hermetic concept of the world soul with the revolutionary scientific ideas of his time, leading to a vision of the cosmos that was both mystical and rational.

Bruno's emphasis on the world soul can also be seen in his metaphysical poetry and dialogues, where he often depicted the universe as a divine, living organism animated by an internal spirit. This perspective was revolutionary, challenging the Aristotelian view of a finite, hierarchical cosmos and aligning more closely with the Hermetic and Neoplatonic traditions.

Bruno's radical ideas, including his support for the Copernican model and his concept of an infinite universe with a pervasive world soul, led to his persecution by the Roman Catholic Church. He was tried for heresy and ultimately burned at the stake in 1600. Despite his tragic end, Bruno's ideas significantly influenced later thinkers and contributed to the development of modern cosmology and metaphysics.

Robert Fludd

Another key figure in Hermeticism, Robert Fludd, elaborated on the concept of the world soul in his extensive writings on cosmology and metaphysics. Fludd's works depict the world soul as the divine anima mundi that connects all levels of existence, from the highest spiritual realms to the material world. He emphasized the idea of cosmic harmony, where the world soul orchestrates the symphony of creation, maintaining balance and order.

Fludd's illustrations and writings highlight the Hermetic belief in the interconnection of all things, with the world soul as the binding principle that ensures the unity of the cosmos. His work reflects the Hermetic conviction that by attuning oneself to the world soul, one can achieve deeper knowledge and spiritual enlightenment.

Later European philosophers

Although the concept of a world soul originated in classical antiquity, similar ideas can be found in the thought of later European philosophers such as Baruch Spinoza, Gottfried Leibniz, Immanuel Kant, Friedrich Schelling, and Georg Wilhelm Friedrich Hegel (particularly in his concept of the Weltgeist).

Modern relevance

The concept of Anima Mundi, or the World Soul, continues to resonate in contemporary philosophical, ecological, and spiritual discourse. Modern interpretations often explore the interconnectedness of life and the universe, reflecting ancient notions through new lenses.

Ecological perspectives

In contemporary environmental philosophy, the idea of Anima Mundi is often invoked to emphasize the intrinsic value of nature and the interconnectedness of all living things. Ecologists and environmentalists draw parallels between the ancient concept and modern holistic approaches to ecology. James Lovelock's Gaia hypothesis posits that the Earth functions as a self-regulating system, echoing the idea of the World Soul animating and organizing the cosmos. This holistic view suggests that recognizing the Earth as a living entity can foster a deeper environmental ethic and a sense of stewardship for the planet.

Philosophical and scientific discourse

Philosophers like David Abram have explored the phenomenological aspects of Anima Mundi in the context of sensory experience and perception. Abram's work emphasizes the animate qualities of the natural world, suggesting that recognizing the Earth's sentience can foster a deeper ecological awareness and a sense of kinship with all forms of life. Additionally, systems thinking and complexity theory in science reflect a renewed interest in holistic and integrative approaches that resonate with the concept of the World Soul, highlighting the interconnection and interdependence of various components within ecological and social systems.

Spiritual and New Age movements

The Anima Mundi also finds relevance in modern spiritual and New Age movements, where it is often associated with the idea of a living, conscious Earth. Practices such as Earth-centered spirituality, animism, and certain strands of neopaganism embrace the notion of the World Soul as a guiding principle for living in harmony with nature. These movements emphasize rituals, meditations, and practices aimed at connecting with the spirit of the Earth and recognizing the sacredness of all life.

Literature and the arts

The influence of the Anima Mundi extends into contemporary literature and the arts, serving as a metaphor for exploring themes of unity, interconnection, and the mystery of existence. Authors and artists draw on the symbolism of the World Soul to convey a sense of wonder and reverence for the natural world. This is evident in the works of poets like Mary Oliver, who often evoke the living essence of nature in their writings, and in the visual arts, where the interplay of life and the cosmos is a recurring theme.

Thursday, November 27, 2025

History of supercomputing

From Wikipedia, the free encyclopedia
A Cray-1 supercomputer preserved at the Deutsches Museum

The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the IBM NORC (1954) in the 1950s and, in the early 1960s, the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.

While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.

By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through the teraFLOPS computational barrier.

Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaFLOPS performance levels.

Beginnings: 1950s and 1960s

The term "Super Computing" was first used in the New York World in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University.

There were several lines of second-generation computers that were substantially faster than most contemporary mainframes.

The second generation saw the introduction of features intended to support multiprogramming and multiprocessor configurations, including master/slave (supervisor/problem) mode, storage protection keys, limit registers, protection associated with address translation, and atomic instructions.

In 1957, a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, Minnesota. Seymour Cray left Sperry a year later to join his colleagues at CDC. In 1960, Cray completed the CDC 1604, one of the first generation of commercially successful transistorized computers and, at the time of its release, the fastest computer in the world. However, the fully transistorized Harwell CADET had been operational since 1955, and IBM delivered its commercially successful transistorized IBM 7090 in 1959.

The CDC 6600 with the system console

Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation along with Jim Thornton, Dean Roush, and about 30 other engineers, Cray completed the CDC 6600 in 1964. Cray switched from germanium to silicon transistors, built by Fairchild Semiconductor, that used the planar process. These did not have the drawbacks of the mesa silicon transistors. He ran them very fast, and the speed-of-light restriction forced a very compact design with severe overheating problems, which were solved by introducing refrigeration, designed by Dean Roush. The 6600 outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, it was dubbed a supercomputer and defined the supercomputing market when two hundred computers were sold at $9 million each.

The 6600 gained speed by "farming out" work to peripheral computing elements, freeing the CPU (Central Processing Unit) to process actual data. The Minnesota FORTRAN compiler for the machine was developed by Liddiard and Mundstock at the University of Minnesota, and with it the 6600 could sustain 500 kiloflops on standard mathematical operations. In 1968, Cray completed the CDC 7600, again the fastest computer in the world. At 36 MHz, the 7600 had 3.6 times the clock speed of the 6600, but ran significantly faster due to other technical innovations. They sold only about 50 of the 7600s, not quite a failure. Cray left CDC in 1972 to form his own company. Two years after his departure CDC delivered the STAR-100, which at 100 megaflops was three times the speed of the 7600. Along with the Texas Instruments ASC, the STAR-100 was one of the first machines to use vector processing, the idea having been inspired around 1964 by the APL programming language.

The University of Manchester Atlas in January 1963.

In 1956, a team at Manchester University in the United Kingdom began development of MUSE (a name derived from microsecond engine) with the aim of eventually building a computer that could operate at processing speeds approaching one microsecond per instruction, about one million instructions per second. (Mu, the name of the Greek letter μ, is a prefix in the SI and other systems of units denoting a factor of 10⁻⁶, one millionth.)

At the end of 1958, Ferranti agreed to collaborate with Manchester University on the project, and the computer was shortly afterwards renamed Atlas, with the joint venture under the control of Tom Kilburn. The first Atlas was officially commissioned on 7 December 1962—nearly three years before the Cray CDC 6600 supercomputer was introduced—as one of the world's first supercomputers. It was considered at the time of its commissioning to be the most powerful computer in the world, equivalent to four IBM 7094s. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost. The Atlas pioneered virtual memory and paging as a way to extend its working memory by combining its 16,384 words of primary core memory with an additional 96K words of secondary drum memory. Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognizable modern operating system".

The Cray era: mid-1970s and 1980s

A Fluorinert-cooled Cray-2 supercomputer

Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became the most successful supercomputer in history. The Cray-1, which used integrated circuits with two gates per chip, was a vector processor. It introduced a number of innovations, such as chaining, in which scalar and vector registers generate interim results that can be used immediately, without additional memory references which would otherwise reduce computational speed. The Cray X-MP (designed by Steve Chen) was released in 1982 as a 105 MHz shared-memory parallel vector processor with better chaining support and multiple memory pipelines. All three floating-point pipelines on the X-MP could operate simultaneously. By 1983 Cray and Control Data were supercomputer leaders; despite its lead in the overall computer market, IBM was unable to produce a profitable competitor.

The Cray-2, released in 1985, was a four-processor liquid cooled computer totally immersed in a tank of Fluorinert, which bubbled as it operated. It reached 1.9 gigaflops and was the world's fastest supercomputer, and the first to break the gigaflop barrier. The Cray-2 was a totally new design. It did not use chaining and had a high memory latency, but used much pipelining and was ideal for problems that required large amounts of memory. The software costs in developing a supercomputer should not be underestimated, as evidenced by the fact that in the 1980s the cost for software development at Cray came to equal what was spent on hardware. That trend was partly responsible for a move away from the in-house, Cray Operating System to UNICOS based on Unix.

The Cray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eight vector processors at 167 MHz with a peak performance of 333 megaflops per processor. In the late 1980s, Cray's experiment on the use of gallium arsenide semiconductors in the Cray-3 did not succeed. Seymour Cray began to work on a massively parallel computer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.

Massive processing: the 1990s

The Cray-2, which set the frontiers of supercomputing in the mid-to-late 1980s, had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.

During the first half of the Strategic Computing Initiative, some massively parallel architectures were proven to work, such as the WARP systolic array, message-passing MIMD like the Cosmic Cube hypercube, SIMD like the Connection Machine, etc. In 1987, a TeraOPS Computing Technology Program was proposed, with a goal of achieving 1 teraOPS (a trillion operations per second) by 1992, which was considered achievable by scaling up any of the previously proven architectures.

Rear of the Paragon cabinet showing the bus bars and mesh routers

The SX-3/44R was announced by NEC Corporation in 1989 and a year later earned the fastest-in-the-world title with a four-processor model. However, Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994. It had a peak speed of 1.7 gigaflops per processor. The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2,048 processors connected via a fast three-dimensional crossbar network.

In the same timeframe the Intel Paragon could have 1,000 to 4,000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface. By 1995, Cray was also shipping massively parallel systems, e.g. the Cray T3E with over 2,000 processors, using a three-dimensional torus interconnect.

The Paragon architecture soon led to the Intel ASCI Red supercomputer in the United States, which held the top supercomputing spot to the end of the 20th century as part of the Advanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but used off-the-shelf Pentium Pro processors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpack benchmark in 1996, eventually reaching 2 teraflops.

Petascale computing in the 21st century

A Blue Gene/P supercomputer at Argonne National Laboratory

Significant progress was made in the first decade of the 21st century. The efficiency of supercomputers continued to increase, but not dramatically so. The Cray C90 used 500 kilowatts of power in 1991, while by 2003 the ASCI Q used 3,000 kW but was 2,000 times faster, increasing the performance per watt roughly 300-fold.

In 2004, the Earth Simulator supercomputer built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.9 teraflops, using 640 nodes, each with eight proprietary vector processors.

The IBM Blue Gene supercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on the TOP500 list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption so that a larger number of processors can be used at air cooled temperatures. It can use over 60,000 processors, with 2048 processors "per rack", and connects them via a three-dimensional torus interconnect.

Progress in China has been rapid, in that China placed 51st on the TOP500 list in June 2003; this was followed by 14th in November 2003, 10th in June 2004, then 5th during 2005, before gaining the top spot in 2010 with the 2.5 petaflop Tianhe-I supercomputer.

In July 2011, the 8.1 petaflop Japanese K computer became the fastest in the world, using over 60,000 SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that the K computer was over 60 times faster than the Earth Simulator, and that the Earth Simulator ranked as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide. By 2014, the Earth Simulator had dropped off the list, and by 2018 the K computer had dropped out of the top 10. By 2018, Summit had become the world's most powerful supercomputer, at 200 petaFLOPS. In 2020, the Japanese once again took the top spot with the Fugaku supercomputer, capable of 442 petaFLOPS. Finally, since 2022 and up to the present (as of December 2023), the world's fastest supercomputer has been the Hewlett Packard Enterprise Frontier, also known as OLCF-5 and hosted at the Oak Ridge Leadership Computing Facility (OLCF) in Tennessee, United States. Frontier is based on the Cray EX architecture, is the world's first exascale supercomputer, and uses only AMD CPUs and GPUs; it achieved an Rmax of 1.102 exaFLOPS, or 1.102 quintillion floating-point operations per second.

Algorithm

From Wikipedia, the free encyclopedia
In a loop, subtract the smaller number from the larger number. Halt the loop when a further subtraction would make a number negative. Then check whether one of the two numbers is equal to zero: if so, take the other number as the greatest common divisor; if not, send the two numbers through the subtraction loop again.
Flowchart of using successive subtractions to find the greatest common divisor of numbers r and s
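The procedure in the flowchart can be sketched in Python roughly as follows (an illustrative sketch; the function name and sample values are my own, not part of the figure):

    def gcd_by_subtraction(r, s):
        # Repeatedly subtract the smaller number from the larger until
        # one of them reaches zero; the other is the greatest common divisor.
        while r != 0 and s != 0:
            if r > s:
                r -= s
            else:
                s -= r
        return r if r != 0 else s

    print(gcd_by_subtraction(1599, 650))  # prints 13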

In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/ ) is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).

In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.

As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

Etymology

Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath. Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi".

The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225. By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus. By 1596, this form of the word was used in English, as algorithm, by Thomas Hood.

Definition

One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure or cook-book recipe. In general, a program is an algorithm only if it stops eventually—even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols.

Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.

History

Ancient algorithms

Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes in Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD).

The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC describes the earliest division algorithm. During the Hammurabi dynasty c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.

Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus c. 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.

The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.

Computers

Weight-driven clocks

Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanism producing the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".

Electromechanical relay

Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape (c. 1870s) was in use, as were Hollerith cards (c. 1890). Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape.

The electromechanical relay was invented in 1835, and telephone-switching networks built from such relays led to the invention of the digital adding device by George Stibitz in 1937. While working at Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".

Formalization

Ada Lovelace's diagram from "Note G", the first published computer algorithm

In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.

Modern algorithms

Algorithms have evolved and improved in many ways over time. Common uses of algorithms today include the recommendation systems of social media apps like Instagram and YouTube, which analyze what users engage with and surface more similar content to them. Quantum computing uses quantum algorithms to solve certain problems faster. More recently, in 2024, NIST finalized its first post-quantum encryption standards, which include new encryption algorithms designed to resist attacks by quantum computers.

Representations

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms.

Turing machines

There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description. A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.

Flowchart representation

The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.

Algorithmic analysis

It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1); otherwise O(n) is required.
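As a hedged illustration of the summation example above (not part of the original article), the constant-space, linear-time accumulation might look like this in Python:

    def sum_list(numbers):
        # O(n) time: one pass over the input.
        # O(1) extra space: only the running total is stored,
        # plus the loop's implicit position in the list.
        total = 0
        for x in numbers:
            total += x
        return total

    print(sum_list([2, 7, 1, 8, 2, 8]))  # prints 28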

Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
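For instance, the two lookup strategies compared above could be sketched as follows (illustrative Python; the binary search assumes its input list is already sorted):

    def sequential_search(items, target):
        # O(n): inspect each element in turn.
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    def binary_search(sorted_items, target):
        # O(log n): repeatedly halve the search interval of a sorted list.
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    data = [3, 7, 11, 15, 22, 31, 40]
    print(sequential_search(data, 22), binary_search(data, 22))  # prints 4 4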

Formal versus empirical

The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines in that it focuses on the algorithm's properties rather than any particular implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems, but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.

Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.
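A minimal before/after style benchmark of the kind described above can be run with Python's standard timeit module; the sketch below (sizes and repetition counts are arbitrary choices) compares a linear membership test with a binary search from the bisect module on the same sorted data:

    import timeit

    # Compare a linear membership test with a binary search on sorted data.
    setup = "import bisect; data = list(range(0, 200000, 2)); target = 199998"

    linear = timeit.timeit("target in data", setup=setup, number=100)
    binary = timeit.timeit("bisect.bisect_left(data, target)", setup=setup, number=100)
    print(f"linear scan:   {linear:.4f} s")
    print(f"binary search: {binary:.4f} s")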

Execution efficiency

To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.

Best case and worst case

The best case of an algorithm refers to the scenario or input for which the algorithm or data structure takes the least time and resources to complete its tasks. The worst case of an algorithm is the case that causes the algorithm or data structure to consume the maximum period of time and computational resources.
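A linear search gives a concrete illustration of both extremes (a sketch of my own, not taken from the article):

    def linear_search(items, target):
        # Scan left to right and return the index of target, or -1.
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    data = [5, 3, 9, 1, 7]
    print(linear_search(data, 5))   # best case: found at index 0 after one comparison
    print(linear_search(data, 8))   # worst case: every element checked, prints -1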

Design

Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; big O notation is used to describe, for example, an algorithm's run-time growth as the size of its input increases.

Structured programming

Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.
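As a small illustration (my own example, not drawn from Tausworthe or Kemeny and Kurtz), the three canonical structures are enough to express an ordinary computation without any GOTO:

    def count_even_and_odd(numbers):
        # SEQUENCE: straight-line statements executed in order.
        evens, odds, i = 0, 0, 0
        # WHILE-DO: repeat the body while the condition holds.
        while i < len(numbers):
            # IF-THEN-ELSE: choose between two branches.
            if numbers[i] % 2 == 0:
                evens += 1
            else:
                odds += 1
            i += 1
        return evens, odds

    print(count_even_and_odd([4, 7, 10, 3, 8]))  # prints (3, 2)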

By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).

Classification

By implementation

Recursion
A recursive algorithm invokes itself repeatedly until meeting a termination condition and is a common functional programming method. Iterative algorithms use repetition constructs such as loops, or data structures like stacks, to solve problems. Some problems are better suited to one implementation than the other. The Tower of Hanoi is a puzzle commonly solved with a recursive implementation (a recursive and an equivalent iterative sketch follow after this list). Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decisions at every step; whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. For example, in the Knapsack problem, there is a set of items, each with some weight and some value, and the goal is to pack the knapsack to obtain the maximum total value. The total weight that can be carried is no more than some fixed number X, so the solution must consider the weights of items as well as their value.
Quantum algorithm
Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of Quantum computing such as quantum superposition or quantum entanglement.
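The recursive and iterative sketches referred to under "Recursion" above might look as follows for the Tower of Hanoi (illustrative Python; the function names are my own):

    def hanoi_recursive(n, source, target, spare, moves):
        # The natural recursive solution: move n-1 disks aside, move the
        # largest disk, then move the n-1 disks back on top of it.
        if n > 0:
            hanoi_recursive(n - 1, source, spare, target, moves)
            moves.append((source, target))
            hanoi_recursive(n - 1, spare, target, source, moves)

    def hanoi_iterative(n, source, target, spare):
        # Equivalent iterative version: an explicit stack of pending work
        # items replaces the call stack.
        moves = []
        stack = [("solve", n, source, target, spare)]
        while stack:
            frame = stack.pop()
            if frame[0] == "move":
                moves.append((frame[1], frame[2]))
            else:
                _, k, src, dst, tmp = frame
                if k > 0:
                    # Pushed in reverse so the work happens in the same
                    # order as the recursive calls above.
                    stack.append(("solve", k - 1, tmp, dst, src))
                    stack.append(("move", src, dst))
                    stack.append(("solve", k - 1, src, tmp, dst))
        return moves

    recursive_moves = []
    hanoi_recursive(3, "A", "C", "B", recursive_moves)
    print(recursive_moves == hanoi_iterative(3, "A", "C", "B"))  # prints True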

By design paradigm

Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are:

Brute-force or exhaustive search
Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords.
Divide and conquer
A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sort is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged (a sketch follows after this list). A simpler variant of divide and conquer, called prune and search or decrease and conquer, solves a single smaller instance of itself and does not require a merge step. An example of a prune-and-search algorithm is binary search.
Search and enumeration
Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
Randomized algorithm
Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems, the fastest approximations must involve some randomness. Whether randomized polynomial-time algorithms can solve problems that no deterministic polynomial-time algorithm can is an open question (the P versus BPP problem). There are two large classes of such algorithms:
  1. Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.
  2. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP.
Reduction of complexity
This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
Backtracking
In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
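The merge-sort sketch referred to under "Divide and conquer" above could look like this (an illustrative Python version, not the article's own code):

    def merge_sort(items):
        # Divide: split the list in half and sort each half recursively.
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Conquer: merge the two sorted halves into one sorted list.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # prints [3, 9, 10, 27, 38, 43, 82]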

Optimization problems

For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:

Linear programming
When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
Dynamic programming
When a problem shows optimal substructure, meaning the optimal solution can be constructed from optimal solutions to subproblems, and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together (a memoization sketch follows after this list). Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial.
The greedy method
Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems they always find the optimal solution, but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimum spanning trees of graphs without negative cycles; Kruskal's, Prim's, and Sollin's algorithms are greedy algorithms that solve this optimization problem, and Huffman coding is another well-known greedy method.
The heuristic method
In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
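The memoization sketch referred to under "Dynamic programming" above, using the Fibonacci recurrence as a stand-in for a problem with overlapping subproblems (my own example, not one cited in the article):

    from functools import lru_cache

    def fib_naive(n):
        # Plain recursion: overlapping subproblems are recomputed,
        # giving exponential running time.
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):
        # The same recurrence with memoization: each subproblem is solved
        # once and cached, so the running time drops to O(n).
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    print(fib_memo(90))  # prints 2880067194370816120 almost instantly
    # fib_naive(90) would take an impractically long time because of recomputation.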

Celestial mechanics

From Wikipedia, the free encyclopedia ...