
Multiverse

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Multiverse

The multiverse is the hypothetical set of all universes. Together, these universes are presumed to comprise everything that exists: the entirety of space, time, matter, energy, information, and the physical laws and constants that describe them. The different universes within the multiverse are called "parallel universes", "flat universes", "other universes", "alternate universes", "multiple universes", "plane universes", "parent and child universes", "many universes", or "many worlds". One common assumption is that the multiverse is a "patchwork quilt of separate universes all bound by the same laws of physics."

The concept of multiple universes, or a multiverse, has been discussed throughout history. It has evolved and has been debated in various fields, including cosmology, physics, and philosophy. Some physicists have argued that the multiverse is a philosophical notion rather than a scientific hypothesis, as it cannot be empirically falsified. In recent years, there have been proponents and skeptics of multiverse theories within the physics community. Although some scientists have analyzed data in search of evidence for other universes, no statistically significant evidence has been found. Critics argue that the multiverse concept lacks testability and falsifiability, which are essential for scientific inquiry, and that it raises unresolved metaphysical issues.

Max Tegmark and Brian Greene have proposed different classification schemes for multiverses and universes. Tegmark's four-level classification consists of Level I: an extension of our universe, Level II: universes with different physical constants, Level III: many-worlds interpretation of quantum mechanics, and Level IV: ultimate ensemble. Brian Greene's nine types of multiverses include quilted, inflationary, brane, cyclic, landscape, quantum, holographic, simulated, and ultimate. The ideas explore various dimensions of space, physical laws, and mathematical structures to explain the existence and interactions of multiple universes. Some other multiverse concepts include twin-world models, cyclic theories, M-theory, and black-hole cosmology.

The anthropic principle suggests that the existence of a multitude of universes, each with different physical laws, could explain the asserted appearance of fine-tuning of our own universe for conscious life. The weak anthropic principle posits that we exist in one of the few universes that support life. Debates around Occam's razor and the simplicity of the multiverse versus a single universe arise, with proponents like Max Tegmark arguing that the multiverse is simpler and more elegant. The many-worlds interpretation of quantum mechanics and modal realism, the belief that all possible worlds exist and are as real as our world, are also subjects of debate in the context of the anthropic principle.

History of the concept

According to some, the idea of infinite worlds was first suggested by the pre-Socratic Greek philosopher Anaximander in the sixth century BCE. However, there is debate as to whether he believed in multiple worlds, and if he did, whether those worlds were co-existent or successive.

The first figures to whom historians can definitively attribute the concept of innumerable worlds are the Ancient Greek Atomists, beginning with Leucippus and Democritus in the 5th century BCE, followed by Epicurus (341–270 BCE) and the Roman Epicurean Lucretius (1st century BCE). In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages. In the Renaissance, Giordano Bruno (1548–1600) expressed the concept of infinite worlds.

The American philosopher and psychologist William James used the term "multiverse" in 1895, but in a different context.

The concept first appeared in the modern scientific context in the course of the debate between Boltzmann and Zermelo in 1895.

In Dublin in 1952, Erwin Schrödinger gave a lecture in which he jocularly warned his audience that what he was about to say might "seem lunatic". He said that when his equations seemed to describe several different histories, these were "not alternatives, but all really happen simultaneously". This sort of duality is called "superposition".

Search for evidence

In the 1990s, after works of fiction about the concept gained popularity, scientific discussions of the multiverse and journal articles about it gained prominence.

Around 2010, scientists such as Stephen M. Feeney analyzed Wilkinson Microwave Anisotropy Probe (WMAP) data and claimed to find evidence suggesting that this universe collided with other (parallel) universes in the distant past. However, a more thorough analysis of data from the WMAP and from the Planck satellite, which has a resolution three times higher than WMAP, did not reveal any statistically significant evidence of such a bubble universe collision. In addition, there was no evidence of any gravitational pull of other universes on ours.

In 2015, an astrophysicist reported what may be evidence of alternate or parallel universes, obtained by looking back to the period immediately after the Big Bang, although the claim is still a matter of debate among physicists. Dr. Ranga-Ram Chary, after analyzing the cosmic radiation spectrum, found a signal 4,500 times brighter than it should have been, based on the number of protons and electrons scientists believe existed in the very early universe. This signal, an emission line that arose from the formation of atoms during the era of recombination, is more consistent with a universe whose ratio of matter particles to photons is about 65 times greater than our own. There is a 30% chance that the signal is merely noise; however, it is also possible that it exists because a parallel universe dumped some of its matter particles into our universe. If additional protons and electrons had been added to our universe during recombination, more atoms would have formed, more photons would have been emitted during their formation, and the signature line arising from all of these emissions would be greatly enhanced. Chary said:

Many other regions beyond our observable universe would exist with each such region governed by a different set of physical parameters than the ones we have measured for our universe.

— Ranga-Ram Chary, USA Today

Chary also noted:

Unusual claims like evidence for alternate universes require a very high burden of proof.

— Ranga-Ram Chary, Universe Today

The signature that Chary has isolated may be a consequence of incoming light from distant galaxies, or even from clouds of dust surrounding our own galaxy.

Proponents and skeptics

Modern proponents of one or more of the multiverse hypotheses include Lee Smolin, Don Page, Brian Greene, Max Tegmark, Alan Guth, Andrei Linde, Michio Kaku, David Deutsch, Leonard Susskind, Alexander Vilenkin, Yasunori Nomura, Raj Pathria, Laura Mersini-Houghton, Neil deGrasse Tyson, Sean Carroll and Stephen Hawking.

Scientists who are generally skeptical of the concept of a multiverse or popular multiverse hypotheses include Sabine Hossenfelder, David Gross, Paul Steinhardt, Anna Ijjas, Abraham Loeb, David Spergel, Neil Turok, Viatcheslav Mukhanov, Michael S. Turner, Roger Penrose, George Ellis, Joe Silk, Carlo Rovelli, Adam Frank, Marcelo Gleiser, Jim Baggott and Paul Davies.

Arguments against multiverse hypotheses

In his 2003 New York Times opinion piece, "A Brief History of the Multiverse", author and cosmologist Paul Davies offered a variety of arguments that multiverse hypotheses are non-scientific:

For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there is an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification. Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. The multiverse theory may be dressed up in scientific language, but in essence, it requires the same leap of faith.

— Paul Davies, "A Brief History of the Multiverse", The New York Times

George Ellis, writing in August 2011, provided a criticism of the multiverse, pointing out that it is not a traditional scientific theory. He accepted that the multiverse is thought to exist far beyond the cosmological horizon, and emphasized that it is theorized to be so far away that it is unlikely any evidence for it will ever be found. Ellis also noted that some theorists do not regard the lack of empirical testability and falsifiability as a major concern, but he opposed that line of thinking:

Many physicists who talk about the multiverse, especially advocates of the string landscape, do not care much about parallel universes per se. For them, objections to the multiverse as a concept are unimportant. Their theories live or die based on internal consistency and, one hopes, eventual laboratory testing.

Ellis says that scientists have proposed the idea of the multiverse as a way of explaining the nature of existence. He points out that it ultimately leaves those questions unresolved because it is a metaphysical issue that cannot be resolved by empirical science. He argues that observational testing is at the core of science and should not be abandoned:

As skeptical as I am, I think the contemplation of the multiverse is an excellent opportunity to reflect on the nature of science and on the ultimate nature of existence: why we are here. ... In looking at this concept, we need an open mind, though not too open. It is a delicate path to tread. Parallel universes may or may not exist; the case is unproved. We are going to have to live with that uncertainty. Nothing is wrong with scientifically based philosophical speculation, which is what multiverse proposals are. But we should name it for what it is.

— George Ellis, "Does the Multiverse Really Exist?", Scientific American

Philosopher Philip Goff argues that the inference of a multiverse to explain the apparent fine-tuning of the universe is an example of Inverse Gambler's Fallacy.

Stoeger, Ellis, and Kircher note that in a true multiverse theory, "the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support".

In May 2020, astrophysicist Ethan Siegel argued in a Forbes blog post that, based on the scientific evidence available to us, parallel universes must remain a science-fiction dream for the time being.

Scientific American contributor John Horgan also argues against the idea of a multiverse, claiming that multiverse theories are "bad for science."

Types

Max Tegmark and Brian Greene have devised classification schemes for the various theoretical types of multiverses and universes that they might comprise.

Max Tegmark's four levels

Cosmologist Max Tegmark has provided a taxonomy of universes beyond the familiar observable universe. The four levels of Tegmark's classification are arranged such that subsequent levels can be understood to encompass and expand upon previous levels. They are briefly described below.

Level I: An extension of our universe

A prediction of cosmic inflation is the existence of an infinite ergodic universe, which, being infinite, must contain Hubble volumes realizing all initial conditions.

Accordingly, an infinite universe will contain an infinite number of Hubble volumes, all having the same physical laws and physical constants. In regard to configurations such as the distribution of matter, almost all will differ from our Hubble volume. However, because there are infinitely many, far beyond the cosmological horizon, there will eventually be Hubble volumes with similar, and even identical, configurations. Tegmark estimates that an identical volume to ours should be about 10^(10^115) meters away from us.

Given infinite space, there would be an infinite number of Hubble volumes identical to ours in the universe. This follows directly from the cosmological principle, wherein it is assumed that our Hubble volume is not special or unique.

Level II: Universes with different physical constants

In the eternal inflation theory, which is a variant of the cosmic inflation theory, the multiverse or space as a whole is stretching and will continue doing so forever, but some regions of space stop stretching and form distinct bubbles (like gas pockets in a loaf of rising bread). Such bubbles are embryonic level I multiverses.

Different bubbles may experience different spontaneous symmetry breaking, which results in different properties, such as different physical constants.

Level II also includes John Archibald Wheeler's oscillatory universe theory and Lee Smolin's fecund universes theory.

Level III: Many-worlds interpretation of quantum mechanics

Schrödinger's cat in the many-worlds interpretation, where a branching of the universe occurs through a superposition of two quantum mechanical states

Hugh Everett III's many-worlds interpretation (MWI) is one of several mainstream interpretations of quantum mechanics.

In brief, one aspect of quantum mechanics is that certain observations cannot be predicted absolutely. Instead, there is a range of possible observations, each with a different probability. According to the MWI, each of these possible observations corresponds to a different "world" within the Universal wavefunction, with each world as real as ours. Suppose a six-sided die is thrown and that the result of the throw corresponds to observable quantum mechanics. All six possible ways the die can fall correspond to six different worlds. In the case of the Schrödinger's cat thought experiment, both outcomes would be "real" in at least one "world".
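The branching described by the die example can be mimicked with a toy bookkeeping sketch. This is purely illustrative (no amplitudes, no physics): each "measurement" with k possible outcomes splits every existing world into k successor worlds, one per outcome.

```python
def branch(worlds, outcomes):
    """Split every world into one successor per possible outcome,
    recording the outcome in that world's history."""
    return [history + (o,) for history in worlds for o in outcomes]

die = tuple(range(1, 7))       # the six possible results of a die throw

worlds = [()]                  # one initial world with an empty history
worlds = branch(worlds, die)   # first throw: 6 worlds
worlds = branch(worlds, die)   # second throw: 36 worlds

print(len(worlds))             # 36 worlds, one per possible pair of results
print(worlds[0])               # (1, 1): the world where both throws gave 1
```

Two successive throws thus yield 36 equally "real" histories, which is exactly the sense in which all six ways the die can fall correspond to six different worlds.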

Tegmark argues that a Level III multiverse does not contain more possibilities in the Hubble volume than a Level I or Level II multiverse. In effect, all the different worlds created by "splits" in a Level III multiverse with the same physical constants can be found in some Hubble volume in a Level I multiverse. Tegmark writes that, "The only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert space."

Similarly, all Level II bubble universes with different physical constants can, in effect, be found as "worlds" created by "splits" at the moment of spontaneous symmetry breaking in a Level III multiverse. According to Yasunori Nomura, Raphael Bousso, and Leonard Susskind, this is because global spacetime appearing in the (eternally) inflating multiverse is a redundant concept. This implies that the multiverses of Levels I, II, and III are, in fact, the same thing. This hypothesis is referred to as "Multiverse = Quantum Many Worlds". According to Yasunori Nomura, this quantum multiverse is static, and time is a simple illusion.

Another version of the many-worlds idea is H. Dieter Zeh's many-minds interpretation.

Level IV: Ultimate ensemble

The ultimate ensemble, or mathematical universe hypothesis, is Tegmark's own proposal.

This level considers as equally real all universes that can be described by different mathematical structures.

Tegmark writes:

Abstract mathematics is so general that any Theory Of Everything (TOE) which is definable in purely formal terms (independent of vague human terminology) is also a mathematical structure. For instance, a TOE involving a set of different types of entities (denoted by words, say) and relations between them (denoted by additional words) is nothing but what mathematicians call a set-theoretical model, and one can generally find a formal system that it is a model of.

He argues that this "implies that any conceivable parallel universe theory can be described at Level IV" and "subsumes all other ensembles, therefore brings closure to the hierarchy of multiverses, and there cannot be, say, a Level V."

Jürgen Schmidhuber, however, says that the set of mathematical structures is not even well-defined, and admits only universe representations describable by constructive mathematics, that is, by computer programs.

Schmidhuber explicitly includes universe representations describable by non-halting programs whose output bits converge after a finite time, although the convergence time itself may not be predictable by a halting program, due to the undecidability of the halting problem. He also explicitly discusses the more restricted ensemble of quickly computable universes.

Brian Greene's nine types

The American theoretical physicist and string theorist Brian Greene discussed nine types of multiverses:

Quilted
The quilted multiverse works only in an infinite universe. With an infinite amount of space, every possible event will occur an infinite number of times. However, the speed of light prevents us from being aware of these other identical areas.
Inflationary
The inflationary multiverse is composed of various pockets in which inflation fields collapse and form new universes.
Brane
The brane multiverse version postulates that our entire universe exists on a membrane (brane) which floats in a higher dimension or "bulk". In this bulk, there are other membranes with their own universes. These universes can interact with one another, and when they collide, the violence and energy produced are more than enough to give rise to a Big Bang. The branes float or drift near each other in the bulk, and every few trillion years, attracted by gravity or some other force we do not understand, they collide and bang into each other. This repeated contact gives rise to multiple or "cyclic" Big Bangs. This particular hypothesis falls under the string theory umbrella as it requires extra spatial dimensions.
Cyclic
The cyclic multiverse has multiple branes that have collided, causing Big Bangs. The universes bounce back and pass through time until they are pulled back together and again collide, destroying the old contents and creating them anew.
Landscape
The landscape multiverse relies on string theory's Calabi–Yau spaces. Quantum fluctuations drop the shapes to a lower energy level, creating a pocket with a set of laws different from that of the surrounding space.
Quantum
The quantum multiverse creates a new universe when a diversion in events occurs, as in the real-worlds variant of the many-worlds interpretation of quantum mechanics.
Holographic
The holographic multiverse is derived from the theory that the surface area of a space can encode the contents of the volume of the region.
Simulated
The simulated multiverse exists on complex computer systems that simulate entire universes. A related hypothesis, put forward as a possibility by astronomer Avi Loeb, is that universes may be creatable in laboratories of advanced technological civilizations that possess a theory of everything. Other related hypotheses include brain-in-a-vat-type scenarios where the perceived universe is either simulated in a low-resource way or not perceived directly by the virtual/simulated inhabitant species.
Ultimate
The ultimate multiverse contains every mathematically possible universe under different laws of physics.

Twin-world models

Concept of a twin universe, with the beginning of time in the middle

There are models of two related universes that attempt, for example, to explain the baryon asymmetry – why there was more matter than antimatter at the beginning – with a mirror anti-universe. One two-universe cosmological model could explain the Hubble constant (H0) tension via interactions between the two worlds. The "mirror world" would contain copies of all existing fundamental particles. Another twin/pair-world or "bi-world" cosmology has been shown to be theoretically able to solve the cosmological constant (Λ) problem, which is closely related to dark energy: two interacting worlds, each with a large Λ, could result in a small shared effective Λ.

Cyclic theories

In several theories, there is a series of self-sustaining cycles, in some cases infinite, typically a series of Big Crunches (or Big Bounces). However, the respective universes do not exist at once but form and follow one another in sequence, with key natural constituents potentially varying between universes (see § Anthropic principle).

M-theory

A multiverse of a somewhat different kind has been envisaged within string theory and its higher-dimensional extension, M-theory.

These theories require the presence of 10 or 11 spacetime dimensions, respectively. The extra six or seven dimensions may either be compactified on a very small scale, or our universe may simply be localized on a dynamical (3+1)-dimensional object, a D3-brane. This opens up the possibility that there are other branes which could support other universes.

Black-hole cosmology

Black-hole cosmology is a cosmological model in which the observable universe is the interior of a black hole existing as one of possibly many universes inside a larger universe. This includes the theory of white holes, which are on the opposite side of space-time.

Anthropic principle

The concept of other universes has been proposed to explain how our own universe appears to be fine-tuned for conscious life as we experience it.

If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), then some of these universes (even if very few) would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve.

The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developed consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand life), those conditions do not require intelligent design as an explanation for the conditions in the Universe that promote our existence in it.

An early form of this reasoning is evident in Arthur Schopenhauer's 1844 work "Von der Nichtigkeit und dem Leiden des Lebens" ("On the Vanity and Suffering of Life"), where he argues that our world must be the worst of all possible worlds, because if it were significantly worse in any respect it could not continue to exist.

Occam's razor

Proponents and critics disagree about how to apply Occam's razor. Critics argue that to postulate an almost infinite number of unobservable universes, just to explain our own universe, is contrary to Occam's razor. However, proponents argue that in terms of Kolmogorov complexity the proposed multiverse is simpler than a single idiosyncratic universe.

For example, multiverse proponent Max Tegmark argues:

[A]n entire ensemble is often much simpler than one of its members. This principle can be stated more formally using the notion of algorithmic information content. The algorithmic information content in a number is, roughly speaking, the length of the shortest computer program that will produce that number as output. For example, consider the set of all integers. Which is simpler, the whole set or just one number? Naively, you might think that a single number is simpler, but the entire set can be generated by quite a trivial computer program, whereas a single number can be hugely long. Therefore, the whole set is actually simpler... (Similarly), the higher-level multiverses are simpler. Going from our universe to the Level I multiverse eliminates the need to specify initial conditions, upgrading to Level II eliminates the need to specify physical constants, and the Level IV multiverse eliminates the need to specify anything at all... A common feature of all four multiverse levels is that the simplest and arguably most elegant theory involves parallel universes by default. To deny the existence of those universes, one needs to complicate the theory by adding experimentally unsupported processes and ad hoc postulates: finite space, wave function collapse and ontological asymmetry. Our judgment therefore comes down to which we find more wasteful and inelegant: many worlds or many words. Perhaps we will gradually get used to the weird ways of our cosmos and find its strangeness to be part of its charm.

— Max Tegmark
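Tegmark's point about algorithmic information content can be made concrete with a rough sketch, using string length as a crude stand-in for Kolmogorov complexity (a simplification, since true Kolmogorov complexity is uncomputable): the program that generates every non-negative integer is a few dozen characters long, while specifying one arbitrary 100-digit number generally takes about as many characters as the number has digits.

```python
import random

# A complete "description" of the set of all non-negative integers:
# a generator program a few dozen characters long.
program_for_all_integers = "n=0\nwhile True:\n    print(n)\n    n+=1"

# A typical 100-digit number has no description much shorter than its
# own digits (almost all numbers are incompressible in this sense).
one_number = "".join(random.choice("0123456789") for _ in range(100))

print(len(program_for_all_integers))   # a few dozen characters
print(len(one_number))                 # 100 characters
```

In this sense the whole ensemble (every integer) is cheaper to specify than one arbitrary member, which is the intuition behind "many worlds or many words".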

Possible worlds and real worlds

In any given set of possible universes – e.g. in terms of histories or variables of nature – not all may ever be realized, and some may be realized many times. For example, over infinite time there could, in some potential theories, be infinite universes, but only a small or relatively small number of real universes in which humanity could exist, and only one in which it ever does exist (with a unique history). It has been suggested that a universe that "contains life, in the form it has on Earth, is in a certain sense radically non-ergodic, in that the vast majority of possible organisms will never be realized". On the other hand, some scientists, theories and popular works conceive of a multiverse in which the universes are so similar that humanity exists in many equally real separate universes, but with varying histories.

There is a debate about whether the other worlds are real in the many-worlds interpretation (MWI) of quantum mechanics. In quantum Darwinism, one does not need to adopt an MWI in which all of the branches are equally real.

Possible worlds are a way of explaining probability and hypothetical statements. Some philosophers, such as David Lewis, posit that all possible worlds exist and that they are just as real as the world we live in. This position is known as modal realism.

Cellularization

From Wikipedia, the free encyclopedia

In evolutionary biology, the term cellularization (also spelled cellularisation) has been used in theories to explain the evolution of cells from non-cellular components, for instance in the pre-cell theory, dealing with the evolution of the first cells on Earth, and in the syncytial theory, which attempts to explain the origins of multicellular Metazoa from simpler unicellular organisms.

Processes of cell development in multinucleate cells (known as syncytia) of animals and plants are also termed cellularization, often called syncytium cellularization.

Early diversification of life with Kandler's pre-cell theory.

Key:
1. Reductive formation of organic compounds from CO or CO2 by methyl-sulfur coordination chemistry.
2. Tapping of various redox energy sources and formation of primitive enzymes and templates.
3. Elements of a transcription and translation apparatus and loose associations.
4. Formation of pre-cells.
5. Stabilised circular or linear genomes.
6. Cytoplasmic membranes.
7. Rigid murein cell walls.
8. Various rigid non-murein cell walls.
9. Glycoproteinaceous cell envelope or glycocalyx.
10. Cytoskeleton.
11. Complex chromosomes and nuclear membrane.
12. Cell organelles via endosymbiosis.

The pre-cell theory

According to Otto Kandler's pre-cell theory, the early evolution of life and primordial metabolism (see the iron–sulfur world hypothesis, Wächtershäuser's metabolism-first scenario) led to the early diversification of life through the evolution of a "multiphenotypical population of pre-cells", from which the three founder groups A, B, and C, and then, from them, the precursor cells (here named proto-cells) of the three domains of life emerged successively.

In this scenario, the three domains of life did not originate from an ancestral, nearly complete "first cell", nor from a cellular organism often defined as the last universal common ancestor (LUCA), but from a population of evolving pre-cells. Kandler introduced the term cellularization for his concept of a successive evolution of cells by a process of evolutionary improvements.

His concept may explain the quasi-random distribution of evolutionarily important features among the three domains and, at the same time, the existence of the most basic biochemical features (the genetic code, the set of protein amino acids, etc.) in all three domains (unity of life), as well as the close relationship between the Archaea and the Eukarya. Kandler’s pre-cell theory is supported by Wächtershäuser.

According to Kandler, the protection of fragile primordial life forms from their environment by the invention of envelopes (i.e. cell membranes and cell walls) was an essential improvement. For instance, the emergence of rigid cell walls by the invention and elaboration of peptidoglycan in domain Bacteria may have been a prerequisite for their successful survival, radiation, and colonization of virtually all habitats of the geosphere and hydrosphere.

A coevolution of the biosphere and the geosphere is suggested: “The evolving life could venture into a larger variety of habitats, even into microaerobic habitats in shallow, illuminated surface waters. The continuous changes in the physical environment on the aging and cooling Earth led to further diversification of habitats and favored opportunistic radiation of primitive life into numerous phenotypes on the basis of each of the different chemolithoautotrophies. Concomitantly, with the accumulation of organic matter derived from chemolithoautotrophic life, opportunistic and obligate heterotrophic life may also have developed”.

The details of Kandler's proposal for the early diversification of life are represented in a scheme, where numbers indicate evolutionary improvements.

The syncytial theory or ciliate-acoel theory

This theory, also known as the cellularization theory, seeks to explain the origin of the Metazoa. The idea was proposed by Hadži (1953) and Hanson (1977).

This cellularization (syncytial) theory states that metazoans evolved from a unicellular ciliate with multiple nuclei that went through cellularization. Firstly, the ciliate developed a ventral mouth for feeding and all nuclei moved to one side of the cell. Secondly, an epithelium was created by membranes forming barriers between the nuclei. In this way, a multicellular organism was created from one multinucleate cell (syncytium).

Example and criticism

Turbellarian flatworms

According to the syncytial theory, the ciliate ancestor, by several cellularization processes, evolved into the currently known turbellarian flatworms, which are therefore the most primitive metazoans. The theory of cellularization is based on the large similarities between ciliates and flatworms: both have cilia, are bilaterally symmetric, and are syncytial. Therefore, the theory assumes that bilateral symmetry is more primitive than radial symmetry. However, current biological evidence shows that the most primitive forms of metazoans show radial symmetry, and thus radially symmetrical animals like cnidaria cannot be derived from bilateral flatworms.

By concluding that the first multicellular animals were flatworms, it is also suggested that simpler organisms such as sponges, ctenophores and cnidarians would have derived from more complex animals. However, most current molecular research has shown that sponges are the most primitive metazoans.

Germ layers are formed simultaneously

The syncytial theory rejects the theory of germ layers. During the development of the turbellaria (Acoela), three regions are formed without the formation of germ layers. From this, it was concluded that the germ layers are simultaneously formed during the cellularization process. This is in contrast to germ layer theory in which ectoderm, endoderm and mesoderm (in more complex animals) build up the embryo.

The macro and micronucleus of ciliates

There is a lot of evidence against ciliates being the metazoan ancestor. Ciliates have two types of nuclei: a micronucleus, which is used as a germline nucleus, and a macronucleus, which regulates vegetative growth. This division of nuclei is a unique feature of the ciliates and is not found in any other members of the animal kingdom. Therefore, it would be unlikely that ciliates are indeed the ancestors of the metazoans. This is confirmed by molecular phylogenetic research: ciliates were never found close to animals in any molecular phylogeny.

Flagellated sperm

Furthermore, the syncytial theory cannot explain the flagellated sperm of metazoans. Since the ciliate ancestor does not have any flagella and it is unlikely that flagella arose as a de novo trait in metazoans, the syncytial theory makes it almost impossible to explain the origin of flagellated sperm.

Due to the lack of both molecular and morphological evidence for this theory, the alternative colonial theory of Haeckel is currently gaining widespread acceptance.

For more theories see main article Multicellular organisms.

Cellularization in a syncytium (syncytium cellularization)

The development of cells in a syncytium (multinucleate cells) is termed syncytium cellularization. Syncytia are quite frequent in animals and plants. Syncytium cellularization occurs, for instance, in the embryonic development of animals and in endosperm development of plants. Here are two examples:

Drosophila melanogaster development

In the embryonic development of Drosophila melanogaster, first 13 nuclear divisions take place forming a syncytial blastoderm consisting of approximately 6000 nuclei. During the later gastrulation stage, membranes are formed between the nuclei, and cellularization is completed.

Syncytium cellularization in plants

The term syncytium cellularization is used for instance for a process of cell development in the endosperm of the Poaceae, e.g. barley (Hordeum vulgare), rice (Oryza sativa).

Distributed operating system

From Wikipedia, the free encyclopedia

A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. They handle jobs which are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. Second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.

The microkernel and the management components collection work together. They support the system's goal of integrating multiple resources and processing functionality into an efficient and stable system. This seamless integration of individual nodes into a global system is referred to as transparency, or single system image, describing the illusion that the global system appears to users as a single computational entity.

Description

Structure of monolithic kernel, microkernel and hybrid kernel-based operating systems

A distributed OS provides the essential services and functionality required of an OS but adds attributes and particular configurations to allow it to support additional requirements such as increased scale and availability. To a user, a distributed OS works in a manner similar to a single-node, monolithic operating system. That is, although it consists of multiple nodes, it appears to users and applications as a single node.

Separating minimal system-level functionality from additional user-level modular services provides a "separation of mechanism and policy". Mechanism and policy can be simply interpreted as "how something is done" versus "what should be done," respectively. This separation increases flexibility and scalability.
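The separation can be sketched as a dispatch mechanism that delegates the choice of what to run to a pluggable policy. This is an illustrative sketch only; the class and policy names are assumptions, not part of any particular distributed OS.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                      # lower value = more urgent
    name: str = field(compare=False)   # excluded from ordering

class Scheduler:
    """Mechanism: knows HOW to queue and dispatch tasks; WHAT runs
    next is delegated to a pluggable policy function."""
    def __init__(self, policy):
        self.policy = policy           # policy: list of tasks -> chosen task
        self.tasks = []

    def submit(self, task):
        self.tasks.append(task)

    def dispatch(self):
        chosen = self.policy(self.tasks)   # the policy decides what
        self.tasks.remove(chosen)          # the mechanism carries out how
        return chosen

# Two interchangeable policies over the same mechanism
highest_priority = min                 # Task instances order by priority
fifo = lambda tasks: tasks[0]

s = Scheduler(highest_priority)
s.submit(Task(2, "log-rotate"))
s.submit(Task(1, "page-fault"))
chosen = s.dispatch()
print(chosen.name)  # -> page-fault
```

Swapping `highest_priority` for `fifo` changes system behavior without touching the dispatch mechanism, which is the point of the separation.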

Overview

The kernel

At each locale (typically a node), the kernel provides a minimally complete set of node-level utilities necessary for operating a node's underlying hardware and resources. These mechanisms include allocation, management, and disposition of a node's resources, processes, communication, and input/output management support functions. Within the kernel, the communications sub-system is of foremost importance for a distributed OS.

In a distributed OS, the kernel often supports a minimal set of functions, including low-level address space management, thread management, and inter-process communication (IPC). A kernel of this design is referred to as a microkernel. Its modular nature enhances reliability and security, essential features for a distributed OS.

General overview of system management components that reside above the microkernel.
System management components overview

System management

System management components are software processes that define the node's policies. These components are the part of the OS outside the kernel. These components provide higher-level communication, process and resource management, reliability, performance and security. The components match the functions of a single-entity system, adding the transparency required in a distributed environment.

The distributed nature of the OS requires additional services to support a node's responsibilities to the global system. In addition, the system management components accept the "defensive" responsibilities of reliability, availability, and persistence. These responsibilities can conflict with each other. A consistent approach, balanced perspective, and a deep understanding of the overall system can assist in identifying diminishing returns. Separation of policy and mechanism mitigates such conflicts.

Working together as an operating system

The architecture and design of a distributed operating system must realize both individual node and global system goals. Architecture and design must be approached in a manner consistent with separating policy and mechanism. In doing so, a distributed operating system attempts to provide an efficient and reliable distributed computing framework allowing for an absolute minimal user awareness of the underlying command and control efforts.

The multi-level collaboration between a kernel and the system management components, and in turn between the distinct nodes in a distributed operating system is the functional challenge of the distributed operating system. This is the point in the system that must maintain a perfect harmony of purpose, and simultaneously maintain a complete disconnect of intent from implementation. This challenge is the distributed operating system's opportunity to produce the foundation and framework for a reliable, efficient, available, robust, extensible, and scalable system. However, this opportunity comes at a very high cost in complexity.

The price of complexity

In a distributed operating system, the exceptional degree of inherent complexity could easily render the entire system an anathema to any user. As such, the logical price of realizing a distributed operating system must be calculated in terms of overcoming vast amounts of complexity in many areas, and on many levels. This calculation includes the depth, breadth, and range of design investment and architectural planning required in achieving even the most modest implementation.

These design and development considerations are critical and unforgiving. For instance, a deep understanding of a distributed operating system's overall architectural and design detail is required at an exceptionally early point. An exhausting array of design considerations are inherent in the development of a distributed operating system. Each of these design considerations can potentially affect many of the others to a significant degree. This leads to a massive effort in balanced approach, in terms of the individual design considerations, and many of their permutations. As an aid in this effort, most rely on documented experience and research in distributed computing.

History

Research and experimentation efforts began in earnest in the 1970s and continued through the 1990s, with focused interest peaking in the late 1980s. A number of distributed operating systems were introduced during this period; however, very few of these implementations achieved even modest commercial success.

Fundamental and pioneering implementations of primitive distributed operating system component concepts date to the early 1950s. Some of these individual steps were not focused directly on distributed computing, and at the time, many may not have realized their important impact. These pioneering efforts laid important groundwork, and inspired continued research in areas related to distributed computing.

In the mid-1970s, research produced important advances in distributed computing. These breakthroughs provided a solid, stable foundation for efforts that continued through the 1990s.

The accelerating proliferation of multi-processor and multi-core processor systems research led to a resurgence of the distributed OS concept.

The DYSEAC

One of the first efforts was the DYSEAC, a general-purpose synchronous computer. In one of the earliest publications of the Association for Computing Machinery, in April 1954, a researcher at the National Bureau of Standards – now the National Institute of Standards and Technology (NIST) – presented a detailed specification of the DYSEAC. The introduction focused upon the requirements of the intended applications, including flexible communications, but also mentioned other computers:

Finally, the external devices could even include other full-scale computers employing the same digital language as the DYSEAC. For example, the SEAC or other computers similar to it could be harnessed to the DYSEAC and by use of coordinated programs could be made to work together in mutual cooperation on a common task… Consequently[,] the computer can be used to coordinate the diverse activities of all the external devices into an effective ensemble operation.

— ALAN L. LEINER, System Specifications for the DYSEAC

The specification discussed the architecture of multi-computer systems, preferring peer-to-peer rather than master-slave.

Each member of such an interconnected group of separate computers is free at any time to initiate and dispatch special control orders to any of its partners in the system. As a consequence, the supervisory control over the common task may initially be loosely distributed throughout the system and then temporarily concentrated in one computer, or even passed rapidly from one machine to the other as the need arises. …the various interruption facilities which have been described are based on mutual cooperation between the computer and the external devices subsidiary to it, and do not reflect merely a simple master-slave relationship.

— ALAN L. LEINER, System Specifications for the DYSEAC

This is one of the earliest examples of a computer with distributed control. Reports by the Dept. of the Army certified it reliable and stated that it passed all acceptance tests in April 1954. It was completed and delivered on time, in May 1954. This was a "portable computer", housed in a tractor-trailer, with 2 attendant vehicles and 6 tons of refrigeration capacity.

Lincoln TX-2

Described as an experimental input-output system, the Lincoln TX-2 emphasized flexible, simultaneously operational input-output devices, i.e., multiprogramming. The design of the TX-2 was modular, supporting a high degree of modification and expansion.

The system employed The Multiple-Sequence Program Technique. This technique allowed multiple program counters to each associate with one of 32 possible sequences of program code. These explicitly prioritized sequences could be interleaved and executed concurrently, affecting not only the computation in process, but also the control flow of sequences and switching of devices as well. Much discussion related to device sequencing.

Similar to the DYSEAC, the TX-2's separately programmed devices could operate simultaneously, increasing throughput. The full power of the central unit was available to any device. The TX-2 was another example of a system exhibiting distributed control, its central unit not having dedicated control.

Intercommunicating Cells

One early effort at abstracting memory access was Intercommunicating Cells, where a cell was composed of a collection of memory elements. A memory element was basically a binary electronic flip-flop or relay. Within a cell there were two types of elements, symbol and cell. Each cell structure stores data in a string of symbols, consisting of a name and a set of parameters. Information is linked through cell associations.

The theory contended that addressing is a wasteful and non-valuable level of indirection. Information was accessed in two ways, direct and cross-retrieval. Direct retrieval accepts a name and returns a parameter set. Cross-retrieval projects through parameter sets and returns a set of names containing the given subset of parameters. This was similar to a modified hash table data structure that allowed multiple values (parameters) for each key (name).
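The two retrieval modes can be mimicked with a small multimap. This is a toy model only: the real cellular memory searched all cells in parallel in roughly constant time, whereas the dictionary and set comprehension below are ordinary sequential Python, and all names are illustrative.

```python
class CellMemory:
    """Toy model of Lee's cell memory: each 'cell' stores a name
    together with a set of parameters (a multimap of name -> params)."""
    def __init__(self):
        self.cells = {}  # name -> set of parameters

    def store(self, name, params):
        self.cells.setdefault(name, set()).update(params)

    def direct(self, name):
        """Direct retrieval: accept a name, return its parameter set."""
        return self.cells.get(name, set())

    def cross(self, subset):
        """Cross-retrieval: accept a parameter subset, return every
        name whose parameter set contains it."""
        subset = set(subset)
        return {n for n, p in self.cells.items() if subset <= p}

m = CellMemory()
m.store("alpha", {"red", "fast"})
m.store("beta", {"red", "slow"})
print(m.direct("alpha"))   # -> {'red', 'fast'}
print(m.cross({"red"}))    # -> {'alpha', 'beta'}
```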

Cellular memory would have many advantages:

A major portion of a system's logic is distributed within the associations of information stored in the cells,

This flow of information association is somewhat guided by the act of storing and retrieving,

The time required for storage and retrieval is mostly constant and completely unrelated to the size and fill-factor of the memory

Cells are logically indistinguishable, making them both flexible to use and relatively simple to extend in size

This configuration was ideal for distributed systems. The constant-time projection through memory for storing and retrieval was inherently atomic and exclusive. The cellular memory's intrinsic distributed characteristics would be invaluable. The impact on the user, hardware/device, or application programming interfaces was indirect. The authors were considering distributed systems, stating:

We wanted to present here the basic ideas of a distributed logic system with... the macroscopic concept of logical design, away from scanning, from searching, from addressing, and from counting, is equally important. We must, at all cost, free ourselves from the burdens of detailed local problems which only befit a machine low on the evolutionary scale of machines.

— Chung-Yeol (C. Y.) Lee, Intercommunicating Cells, Basis for a Distributed Logic Computer

Foundational work

Coherent memory abstraction

  Algorithms for scalable synchronization on shared-memory multiprocessors

File System abstraction

 Measurements of a distributed file system
 Memory coherence in shared virtual memory systems

Transaction abstraction

 Transactions
  Sagas

 Transactional Memory
 Composable memory transactions
 Transactional memory: architectural support for lock-free data structures
 Software transactional memory for dynamic-sized data structures
 Software transactional memory

Persistence abstraction

 OceanStore: an architecture for global-scale persistent storage

Coordinator abstraction

  Weighted voting for replicated data
  Consensus in the presence of partial synchrony

Reliability abstraction

 Sanity checks
 The Byzantine Generals Problem
 Fail-stop processors: an approach to designing fault-tolerant computing systems

 Recoverability
 Distributed snapshots: determining global states of distributed systems
 Optimistic recovery in distributed systems

Distributed computing models

Three basic distributions

To better illustrate this point, examine three system architectures: centralized, decentralized, and distributed. In this examination, consider three structural aspects: organization, connection, and control. Organization describes a system's physical arrangement characteristics. Connection covers the communication pathways among nodes. Control manages the operation of the earlier two considerations.

Organization

A centralized system has one level of structure, where all constituent elements directly depend upon a single control element. A decentralized system is hierarchical. The bottom level unites subsets of a system's entities. These entity subsets in turn combine at higher levels, ultimately culminating at a central master element. A distributed system is a collection of autonomous elements with no concept of levels.

Connection

Centralized systems connect constituents directly to a central master entity in a hub-and-spoke fashion. A decentralized system (aka network system) incorporates direct and indirect paths between constituent elements and the central entity. Typically this is configured as a hierarchy with only one shortest path between any two elements. Finally, the distributed operating system requires no pattern; direct and indirect connections are possible between any two elements. Consider the 1970s phenomenon of "string art" or a spirograph drawing as a fully connected system, and the spider's web or the Interstate Highway System between U.S. cities as examples of partially connected systems.

Control

Centralized and decentralized systems have directed flows of connection to and from the central entity, while distributed systems communicate along arbitrary paths. This is the pivotal notion of the third consideration. Control involves allocating tasks and data to system elements balancing efficiency, responsiveness, and complexity.

Centralized and decentralized systems offer more control, potentially easing administration by limiting options. Distributed systems are more difficult to control explicitly, but scale better horizontally and offer fewer points of system-wide failure. The associations conform to the needs imposed by the system's design, not to organizational chaos.

Design considerations

Transparency

Transparency or single-system image refers to the ability of an application to treat the system on which it operates without regard to whether it is distributed and without regard to hardware or other implementation details. Many areas of a system can benefit from transparency, including access, location, performance, naming, and migration. The consideration of transparency directly affects decision making in every aspect of design of a distributed operating system. Transparency can impose certain requirements and/or restrictions on other design considerations.

Systems can optionally violate transparency to varying degrees to meet specific application requirements. For example, a distributed operating system may present a hard drive on one computer as "C:" and a drive on another computer as "G:". The user does not require any knowledge of device drivers or the drive's location; both devices work the same way, from the application's perspective. A less transparent interface might require the application to know which computer hosts the drive. Transparency domains:

  • Location transparency – Location transparency comprises two distinct aspects of transparency, naming transparency and user mobility. Naming transparency requires that nothing in the physical or logical references to any system entity should expose any indication of the entity's location, or its local or remote relationship to the user or application. User mobility requires the consistent referencing of system entities, regardless of the system location from which the reference originates.
  • Access transparency – Local and remote system entities must remain indistinguishable when viewed through the user interface. The distributed operating system maintains this perception through the exposure of a single access mechanism for a system entity, regardless of that entity being local or remote to the user. Transparency dictates that any differences in methods of accessing any particular system entity—either local or remote—must be both invisible to, and undetectable by the user.
  • Migration transparency – Resources and activities migrate from one element to another controlled solely by the system and without user/application knowledge or action.
  • Replication transparency – The process or fact that a resource has been duplicated on another element occurs under system control and without user/application knowledge or intervention.
  • Concurrency transparency – Users/applications are unaware of and unaffected by the presence/activities of other users.
  • Failure transparency – The system is responsible for detection and remediation of system failures. No user knowledge/action is involved other than waiting for the system to resolve the problem.
  • Performance transparency – The system is responsible for the detection and remediation of local or global performance shortfalls. Note that system policies may prefer some users/user classes/tasks over others. No user knowledge or interaction is involved.
  • Size/Scale transparency – The system is responsible for managing its geographic reach, number of nodes, and level of node capability without any required user knowledge or interaction.
  • Revision transparency – The system is responsible for upgrades and revisions and changes to system infrastructure without user knowledge or action.
  • Control transparency – The system is responsible for providing all system information, constants, properties, configuration settings, etc. in a consistent appearance, connotation, and denotation to all users and applications.
  • Data transparency – The system is responsible for providing data to applications without user knowledge or action relating to where the system stores it.
  • Parallelism transparency – The system is responsible for exploiting any ability to parallelize task execution without user knowledge or interaction. Arguably the most difficult aspect of transparency, and described by Tanenbaum as the "Holy grail" for distributed system designers.
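Location and migration transparency together can be sketched as a name service that keeps the mapping from logical names to physical locations private, so callers are unaffected when a resource moves. All names below (`NameService`, `fs-07`, the paths) are hypothetical illustrations, not a real API.

```python
class NameService:
    """Hypothetical name service: applications refer to resources
    by logical name only; the physical location stays internal."""
    def __init__(self):
        self._where = {}  # logical name -> (node, local path), private

    def register(self, name, node, path):
        self._where[name] = (node, path)

    def migrate(self, name, node, path):
        # The system moves the resource; applications keep using
        # the same logical name and never notice (migration transparency).
        self._where[name] = (node, path)

    def resolve(self, name):
        # In a real system this would dispatch a local or remote
        # operation; the caller supplies only the logical name.
        return self._where[name]

ns = NameService()
ns.register("/docs/report", "fs-07", "/mnt/d3/report")
ns.migrate("/docs/report", "fs-02", "/mnt/a1/report")  # invisible to callers
location = ns.resolve("/docs/report")
```

The caller's code is identical before and after the migration, which is the property the transparency domains above describe.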

Inter-process communication

Inter-process communication (IPC) is the implementation of general communication, process interaction, and dataflow between threads and/or processes both within a node, and between nodes in a distributed OS. The intra-node and inter-node communication requirements drive low-level IPC design, which is the typical approach to implementing communication functions that support transparency. In this sense, inter-process communication is the greatest underlying concept in the low-level design considerations of a distributed operating system.
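A minimal message-passing sketch, using a local socket pair and threads as stand-ins for communicating processes (the function and variable names are illustrative, not from any particular distributed OS):

```python
import socket
import threading

def echo_server(sock):
    """Receives one request over the IPC channel and sends a reply."""
    request = sock.recv(1024)
    sock.sendall(request.upper())
    sock.close()

# socketpair() yields two connected endpoints: a bidirectional local channel
client, server = socket.socketpair()
t = threading.Thread(target=echo_server, args=(server,))
t.start()

client.sendall(b"ping")      # pure message passing: no shared memory involved
reply = client.recv(1024)
t.join()
print(reply.decode())        # -> PING
```

The same send/receive pattern generalizes to inter-node IPC by replacing the socket pair with a network socket, which is why a uniform IPC layer can hide whether a peer is local or remote.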

Process management

Process management provides policies and mechanisms for effective and efficient sharing of resources between distributed processes. These policies and mechanisms support operations involving the allocation and de-allocation of processes and ports to processors, as well as mechanisms to run, suspend, migrate, halt, or resume process execution. While these resources and operations can be either local or remote with respect to each other, the distributed OS maintains state and synchronization over all processes in the system.

As an example, load balancing is a common process management function. Load balancing monitors node performance and is responsible for shifting activity across nodes when the system is out of balance. One load balancing function is picking a process to move. The kernel may employ several selection mechanisms, including priority-based choice. This mechanism chooses a process based on a policy such as 'newest request'. The system implements the policy.
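The mechanism/policy split in load balancing might look like the sketch below. The record shapes, threshold, and policy names are assumptions made for illustration, not a real kernel interface.

```python
def pick_process(processes, policy="newest"):
    """Selection mechanism: applies whichever policy is configured.
    Each process is a dict with 'pid' and 'arrival' (assumed shape)."""
    if policy == "newest":   # 'newest request' policy from the text
        return max(processes, key=lambda p: p["arrival"])
    if policy == "oldest":
        return min(processes, key=lambda p: p["arrival"])
    raise ValueError(f"unknown policy: {policy}")

def rebalance(nodes, threshold=2):
    """Migrate one process from the busiest node to the idlest node
    whenever their load difference reaches the threshold."""
    busiest = max(nodes, key=lambda n: len(n["procs"]))
    idlest = min(nodes, key=lambda n: len(n["procs"]))
    if len(busiest["procs"]) - len(idlest["procs"]) >= threshold:
        victim = pick_process(busiest["procs"], policy="newest")
        busiest["procs"].remove(victim)   # mechanism performs the migration
        idlest["procs"].append(victim)
    return nodes

nodes = [
    {"name": "n1", "procs": [{"pid": 1, "arrival": 10},
                             {"pid": 2, "arrival": 20},
                             {"pid": 3, "arrival": 30}]},
    {"name": "n2", "procs": []},
]
rebalance(nodes)
print([len(n["procs"]) for n in nodes])  # -> [2, 1]
```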

Resource management

System resources such as memory, files, and devices are distributed throughout a system, and at any given moment any of these nodes may have light to idle workloads. Load sharing and load balancing require many policy-oriented decisions, ranging from finding idle CPUs, to deciding when to move a process, to choosing which process to move. Many algorithms exist to aid in these decisions; however, this calls for a second level of decision-making policy in choosing the algorithm best suited to the scenario, and the conditions surrounding it.

Reliability

Distributed OS can provide the necessary resources and services to achieve high levels of reliability, or the ability to prevent and/or recover from errors. Faults are physical or logical defects that can cause errors in the system. For a system to be reliable, it must somehow overcome the adverse effects of faults.

The primary methods for dealing with faults include fault avoidance, fault tolerance, and fault detection and recovery. Fault avoidance covers proactive measures taken to minimize the occurrence of faults; these measures can take the form of transactions, replication, and backups. Fault tolerance is the ability of a system to continue operation in the presence of a fault. In the event of a fault, the system should detect it and recover full functionality. In any event, any actions taken should make every effort to preserve the single system image.

Availability

Availability is the fraction of time during which the system can respond to requests.
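One common way to quantify this fraction (a standard reliability convention, not specific to any one system) is steady-state availability, MTTF / (MTTF + MTTR):

```python
def availability(mttf, mttr):
    """Steady-state availability: mean time to failure divided by the
    full failure/repair cycle (MTTF + mean time to repair)."""
    return mttf / (mttf + mttr)

# e.g. a node running 990 hours between failures, 10 hours to repair
print(availability(990.0, 10.0))  # -> 0.99
```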

Performance

Many benchmark metrics quantify performance: throughput, response time, job completions per unit time, system utilization, etc. With respect to a distributed OS, performance most often distills to a balance between process parallelism and IPC. Managing the task granularity of parallelism in a sensible relation to the messages required for support is extremely effective. Also, identifying when it is more beneficial to migrate a process to its data, rather than copy the data, is effective as well.

Synchronization

Cooperating concurrent processes have an inherent need for synchronization, which ensures that changes happen in a correct and predictable fashion. Three basic situations define the scope of this need:

  • one or more processes must synchronize at a given point for one or more other processes to continue,
  • one or more processes must wait for an asynchronous condition in order to continue,
  • or a process must establish exclusive access to a shared resource.

Improper synchronization can lead to multiple failure modes including loss of atomicity, consistency, isolation and durability, deadlock, livelock and loss of serializability.
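The three situations map onto familiar synchronization primitives. A minimal sketch, using threads as stand-ins for cooperating distributed processes:

```python
import threading

counter = 0
ready = threading.Event()        # 2) wait for an asynchronous condition
barrier = threading.Barrier(2)   # 1) synchronize at a given point
lock = threading.Lock()          # 3) exclusive access to a shared resource

def worker():
    global counter
    ready.wait()     # blocked until the asynchronous condition is signaled
    barrier.wait()   # rendezvous: neither worker continues until both arrive
    for _ in range(10000):
        with lock:   # mutual exclusion keeps each increment atomic
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
ready.set()          # the asynchronous condition occurs
for t in threads:
    t.join()
print(counter)       # -> 20000 (no lost updates)
```

Removing the `with lock:` block reintroduces exactly the failure modes listed above: concurrent `counter += 1` updates can interleave and lose increments.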

Flexibility

Flexibility in a distributed operating system is enhanced through the modular characteristics of the distributed OS, and by providing a richer set of higher-level services. The completeness and quality of the kernel/microkernel simplify the implementation of such services, and potentially give users a greater choice of providers for such services.

Research

Replicated model extended to a component object model

 Architectural Design of E1 Distributed Operating System
 The Cronus distributed operating system
 Design and development of MINIX distributed operating system

Complexity/Trust exposure through accepted responsibility

Scale and performance in the Denali isolation kernel.

Multi/Many-core focused systems

The multikernel: a new OS architecture for scalable multicore systems.
Corey: an Operating System for Many Cores.
Almos: Advanced Locality Management Operating System for cc-NUMA Many-Cores.

Distributed processing over extremes in heterogeneity

Helios: heterogeneous multiprocessing with satellite kernels.

Effective and stable in multiple levels of complexity

Tessellation: Space-Time Partitioning in a Manycore Client OS.

Preterism

From Wikipedia, the free encyclopedia ...