
Friday, July 11, 2025

Philosophy of space and time

Philosophy of space and time is a branch of philosophy concerned with questions about the nature of space and time and about our knowledge of them. Such questions have been central to philosophy from its inception.

The philosophy of space and time was both an inspiration for and a central aspect of early analytic philosophy. The subject focuses on a number of basic issues, including whether time and space exist independently of the mind, whether they exist independently of one another, what accounts for time's apparently unidirectional flow, whether times other than the present moment exist, and questions about the nature of identity (particularly the nature of identity over time).

Ancient and medieval views

The earliest recorded philosophy of time was expounded by the ancient Egyptian thinker Ptahhotep (c. 2650–2600 BC), who said:

Follow your desire as long as you live, and do not perform more than is ordered, do not lessen the time of the following desire, for the wasting of time is an abomination to the spirit...

— 11th maxim of Ptahhotep

The Vedas, the earliest texts on Indian philosophy and Hindu philosophy, dating back to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction, and rebirth, with each cycle lasting 4,320,000,000 years. Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time.

The Incas regarded space and time as a single concept, named pacha (Quechua and Aymara: pacha).

Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies, and space as that in which things come to be. Aristotle, in Book IV of his Physics, defined time as the number of changes with respect to before and after, and the place of an object as the innermost motionless boundary of that which surrounds it.

In Book 11 of St. Augustine's Confessions, he reflects on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one who asks, I know not." He goes on to comment on the difficulty of thinking about time, pointing out the inaccuracy of common speech: "For but few things are there of which we speak properly; of most things we speak improperly, still, the things intended are understood." But Augustine presented the first philosophical argument for the reality of Creation (against Aristotle) in the context of his discussion of time, saying that knowledge of time depends on the knowledge of the movement of things, and therefore time cannot be where there are no creatures to measure its passing (Confessions Book XI ¶30; City of God Book XI ch.6).

In contrast to ancient Greek philosophers who believed that the universe had an infinite past with no beginning, medieval philosophers and theologians developed the concept of the universe having a finite past with a beginning, now known as temporal finitism. John Philoponus presented early arguments, adopted by later Christian philosophers and theologians, of the form "argument from the impossibility of the existence of an actual infinite", which states:

"An actual infinite cannot exist."
"An infinite temporal regress of events is an actual infinite."
"∴ An infinite temporal regress of events cannot exist."

In the early 11th century, the Muslim physicist Ibn al-Haytham (Alhacen or Alhazen) discussed space perception and its epistemological implications in his Book of Optics (1021). He also rejected Aristotle's definition of topos (Physics IV) by way of geometric demonstrations and defined place as a mathematical spatial extension. His experimental disproof of the extramission hypothesis of vision led to changes in the understanding of the visual perception of space, contrary to the previous emission theory of vision supported by Euclid and Ptolemy. In "tying the visual perception of space to prior bodily experience, Alhacen unequivocally rejected the intuitiveness of spatial perception and, therefore, the autonomy of vision. Without tangible notions of distance and size for correlation, sight can tell us next to nothing about such things."

Realism and anti-realism

A traditional realist position in ontology is that time and space have existence apart from the human mind. Idealists, by contrast, deny or doubt the existence of objects independent of the mind. Some anti-realists, whose ontological position is that objects outside the mind do exist, nevertheless doubt the independent existence of time and space.

In 1781, Immanuel Kant published the Critique of Pure Reason, one of the most influential works in the history of the philosophy of space and time. He describes time as an a priori notion that, together with other a priori notions such as space, allows us to comprehend sense experience. Kant holds that neither space nor time is a substance, an entity in itself, or learned by experience; he holds, rather, that both are elements of a systematic framework we use to structure our experience. Spatial measurements are used to quantify how far apart objects are, and temporal measurements are used to quantitatively compare the interval between (or duration of) events. Although space and time are held to be transcendentally ideal in this sense—that is, mind-dependent—they are also empirically real—that is, according to Kant's definitions, a priori features of experience, and therefore not simply "subjective," variable, or accidental perceptions in a given consciousness.

Some idealist writers, such as J. M. E. McTaggart in The Unreality of Time, have argued that time is an illusion (see also: § Flow of time, below).

The writers discussed here are for the most part realists in this regard; for instance, Gottfried Leibniz held that his monads existed, at least independently of the mind of the observer.

Absolutism and relationalism

Leibniz and Newton

The great debate between defining notions of space and time as real objects themselves (absolute), or mere orderings upon actual objects (relational), began between physicists Isaac Newton (via his spokesman, Samuel Clarke) and Gottfried Leibniz in the papers of the Leibniz–Clarke correspondence.

Arguing against the absolutist position, Leibniz offers a number of thought experiments with the purpose of showing that there is contradiction in assuming the existence of facts such as absolute location and velocity. These arguments trade heavily on two principles central to his philosophy: the principle of sufficient reason and the identity of indiscernibles. The principle of sufficient reason holds that for every fact, there is a reason that is sufficient to explain what and why it is the way it is and not otherwise. The identity of indiscernibles states that if there is no way of telling two entities apart, then they are one and the same thing.

The example Leibniz uses involves two proposed universes situated in absolute space. The only discernible difference between them is that the latter is positioned five feet to the left of the first. The example is only possible if such a thing as absolute space exists. Such a situation, however, is not possible, according to Leibniz, for if it were, a universe's position in absolute space would have no sufficient reason, as it might very well have been anywhere else. Therefore, it contradicts the principle of sufficient reason, and there could exist two distinct universes that were in all ways indiscernible, thus contradicting the identity of indiscernibles.

Standing out in Clarke's (and Newton's) response to Leibniz's arguments is the bucket argument: Water in a bucket, hung from a rope and set to spin, will start with a flat surface. As the water begins to spin in the bucket, the surface of the water will become concave. If the bucket is stopped, the water will continue to spin, and while the spin continues, the surface will remain concave. The concave surface is apparently not the result of the interaction of the bucket and the water, since the surface is flat when the bucket first starts to spin; the surface of the water becomes concave as the water itself, influenced by the spinning motion of the bucket, also begins to spin, and the surface remains concave as the bucket stops.

In this response, Clarke argues for the necessity of the existence of absolute space to account for phenomena like rotation and acceleration that cannot be accounted for on a purely relationalist account. Clarke argues that since the curvature of the water occurs in the rotating bucket as well as in the stationary bucket containing spinning water, it can only be explained by stating that the water is rotating in relation to the presence of some third thing—absolute space.

Leibniz describes a space that exists only as a relation between objects, and which has no existence apart from the existence of those objects. Motion exists only as a relation between those objects. Newtonian space provided the absolute frame of reference within which objects can have motion. In Newton's system, the frame of reference exists independently of the objects contained within it. These objects can be described as moving in relation to space itself. For almost two centuries, the evidence of a concave water surface held authority.

Mach

Another important figure in this debate is 19th-century physicist Ernst Mach. While he did not deny the existence of phenomena like that seen in the bucket argument, he still denied the absolutist conclusion by offering a different answer as to what the bucket was rotating in relation to: the fixed stars.

Mach suggested that thought experiments like the bucket argument are problematic. If we were to imagine a universe that only contains a bucket, on Newton's account, this bucket could be set to spin relative to absolute space, and the water it contained would form the characteristic concave surface. But in the absence of anything else in the universe, it would be difficult to confirm that the bucket was indeed spinning. It seems equally possible that the surface of the water in the bucket would remain flat.

Mach argued that, in effect, the water in the bucket experiment, performed in an otherwise empty universe, would remain flat. But if another object were introduced into this universe, perhaps a distant star, there would now be something relative to which the bucket could be seen as rotating, and the water inside the bucket could possibly have a slight curve. On this view, each further object added to the universe increases the curvature of the water, until, with the distribution of matter we actually observe, the familiar concave surface results. Mach argued that the momentum of an object, whether angular or linear, exists as a result of the sum of the effects of other objects in the universe (Mach's Principle).

Einstein

Albert Einstein proposed that the laws of physics should be based on the principle of relativity. This principle holds that the rules of physics must be the same for all observers, regardless of the frame of reference that is used, and that light propagates at the same speed in all reference frames. This theory was motivated by Maxwell's equations, which show that electromagnetic waves propagate in a vacuum at the speed of light. However, Maxwell's equations give no indication of what this speed is relative to. Prior to Einstein, it was thought that this speed was relative to a fixed medium, called the luminiferous ether. In contrast, the theory of special relativity postulates that light propagates at the same speed c in all inertial frames, and examines the implications of this postulate.

All attempts to measure any speed relative to this ether failed, which can be seen as a confirmation of Einstein's postulate that light propagates at the same speed in all reference frames. Special relativity is a formalization of the principle of relativity that does not contain a privileged inertial frame of reference, such as the luminiferous ether or absolute space, from which Einstein inferred that no such frame exists.

Einstein generalized relativity to frames of reference that were non-inertial. He achieved this by positing the Equivalence Principle, which states that the force felt by an observer in a given gravitational field and that felt by an observer in an accelerating frame of reference are indistinguishable. This led to the conclusion that the mass of an object warps the geometry of the space-time surrounding it, as described in Einstein's field equations.
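
For reference (the equation is standard and not quoted in the passage above), the field equations relate the curvature of space-time to its matter and energy content:

G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}

where G_{\mu\nu} is the Einstein curvature tensor, g_{\mu\nu} the metric, \Lambda the cosmological constant, T_{\mu\nu} the stress-energy tensor, G Newton's gravitational constant, and c the speed of light.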

In classical physics, an inertial reference frame is one in which an object that experiences no forces does not accelerate. In general relativity, an inertial frame of reference is one that is following a geodesic of space-time. An object that moves against a geodesic experiences a force. An object in free fall does not experience a force, because it is following a geodesic. An object standing on the earth, however, will experience a force, as it is being held against the geodesic by the surface of the planet.
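
To make "following a geodesic" precise (a standard formula, offered here only as an illustration), the worldline x^\mu(\tau) of a freely falling object satisfies the geodesic equation

\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0,

where \tau is proper time and the Christoffel symbols \Gamma^\mu_{\alpha\beta} are determined by the space-time metric. An object held off its geodesic, such as one standing on the earth, instead obeys the same equation with a non-zero force term per unit mass on the right-hand side.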

Einstein partially advocates Mach's principle in that distant stars explain inertia because they provide the gravitational field against which acceleration and inertia occur. But contrary to Leibniz's account, this warped space-time is as integral a part of an object as are its other defining characteristics, such as volume and mass. If one holds, contrary to idealist beliefs, that objects exist independently of the mind, it seems that relativity commits one to also hold that space and time have exactly the same type of independent existence.

Conventionalism

The position of conventionalism states that there is no fact of the matter as to the geometry of space and time, but that it is decided by convention. The first proponent of such a view, Henri Poincaré, reacting to the creation of the new non-Euclidean geometry, argued that which geometry applied to a space was decided by convention, since different geometries will describe a set of objects equally well, based on considerations from his sphere-world.

This view was developed and updated to include considerations from relativistic physics by Hans Reichenbach. Reichenbach's conventionalism, as applied to space and time, centers on the idea of coordinative definition.

Coordinative definition has two major features. The first has to do with coordinating units of length with certain physical objects. This is motivated by the fact that we can never directly apprehend length. Instead we must choose some physical object, say the Standard Metre at the Bureau International des Poids et Mesures (International Bureau of Weights and Measures), or the wavelength of cadmium, to stand in as our unit of length. The second feature deals with separated objects. Although we can, presumably, directly test the equality of length of two measuring rods when they are next to one another, we cannot do the same for two rods distant from one another. Even supposing that two rods, whenever brought near to one another, are seen to be equal in length, we are not justified in stating that they are always equal in length. This impossibility undermines our ability to decide the equality of length of two distant objects. Sameness of length must instead be set by definition.

On Reichenbach's conventionalism, such a coordinative definition is in effect in the general theory of relativity, where light is assumed, not discovered, to mark out equal distances in equal times. Once this coordinative definition is set, however, the geometry of spacetime is fixed.

As in the absolutism/relationalism debate, contemporary philosophy is still in disagreement as to the correctness of the conventionalist doctrine.

Structure of space-time

Building on a mix of insights from the historical debates over absolutism and conventionalism, as well as on the import of the technical apparatus of the General Theory of Relativity, discussion of the structure of space-time has made up a large proportion of work within the philosophy of space and time, as well as the philosophy of physics. The following is a short list of topics.

Relativity of simultaneity

According to special relativity each point in the universe can have a different set of events that compose its present instant. This has been used in the Rietdijk–Putnam argument to demonstrate that relativity predicts a block universe in which events are fixed in four dimensions.
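
A short calculation, included here only as an illustration of the standard relativistic result, shows why different inertial frames disagree about which events are present. Under the Lorentz transformation, an event at position x and time t is assigned the time coordinate

t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},

in a frame moving with velocity v. Two events with the same t but different x therefore have different times t' in the moving frame: simultaneity, and hence the set of events composing "the present", is frame-relative, which is the premise of the Rietdijk–Putnam argument.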

Historical frameworks

A further application of modern mathematical methods, in league with the idea of invariance and covariance groups, is the attempt to interpret historical views of space and time in modern, mathematical language.

In these translations, a theory of space and time is seen as a manifold paired with vector spaces; the more vector spaces, the more facts there are about objects in that theory. The historical development of spacetime theories is generally seen to start from a position in which many facts about objects are incorporated in the theory, with more and more structure being removed as history progresses.

For example, Aristotelian space and time has both absolute position and special places, such as the center of the cosmos and the circumference. Newtonian space and time has absolute position and is Galilean invariant, but does not have special positions.

Direction of time

The problem of the direction of time arises directly from two contradictory facts. Firstly, the fundamental physical laws are time-reversal invariant; if a cinematographic film were taken of any process describable by means of the aforementioned laws and then played backwards, it would still portray a physically possible process. Secondly, our experience of time, at the macroscopic level, is not time-reversal invariant. Glasses can fall and break, but shards of glass cannot reassemble and fly up onto tables. We have memories of the past, and none of the future. We feel we cannot change the past but can influence the future. There is no future without our past.
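
A minimal illustration of what time-reversal invariance of the fundamental laws means (an elementary textbook example, not drawn from the passage above): for a force depending only on position, Newton's second law

m \frac{d^2 x}{dt^2} = F(x)

is unchanged by the substitution t \to -t, since velocities reverse sign but accelerations do not. The time-reversed motion is therefore also a solution of the law, which is why the film played backwards still depicts a physically possible process.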

Causation solution

One solution to this problem takes a metaphysical view, in which the direction of time follows from an asymmetry of causation. We know more about the past because the elements of the past are causes for the effects that compose our perception. We cannot affect the past, but we can affect the outcomes of the future.

There are two main objections to this view. First is the problem of distinguishing the cause from the effect in a non-arbitrary way. The use of causation in constructing a temporal ordering could easily become circular. The second problem with this view is its explanatory power. While the causation account, if successful, may account for some time-asymmetric phenomena like perception and action, it does not account for many others.

However, asymmetry of causation can be observed in a non-arbitrary way which is not metaphysical in the case of a human hand dropping a cup of water which smashes into fragments on a hard floor, spilling the liquid. In this order, the causes of the resultant pattern of cup fragments and water spill are easily attributable in terms of the trajectory of the cup, irregularities in its structure, the angle of its impact on the floor, etc. However, considering the same event in reverse, it is difficult to explain why the various pieces of the cup should fly up into the human hand and reassemble precisely into the shape of a cup, or why the water should position itself entirely within the cup. The causes of the resultant structure and shape of the cup, and of the encapsulation of the water by the hand within the cup, are not easily attributable, as neither hand nor floor can achieve such formations of the cup or water. This asymmetry is perceivable on account of two features: i) the relationship between the agent capacities of the human hand (i.e., what it is and is not capable of and what it is for) and non-animal agency (i.e., what floors are and are not capable of and what they are for) and ii) the fact that the pieces of cup came to possess exactly the nature and number of those of a cup before assembling. In short, such asymmetry is attributable to the relationship between i) temporal direction and ii) the implications of form and functional capacity.

The application of these ideas of form and functional capacity only dictates temporal direction in relation to complex scenarios involving specific, non-metaphysical agency which is not merely dependent on human perception of time. However, this last observation in itself is not sufficient to invalidate the implications of the example for the progressive nature of time in general.

Thermodynamics solution

The second major family of solutions to this problem, and by far the one that has generated the most literature, locates the direction of time in the nature of thermodynamics.

The answer from classical thermodynamics states that while our basic physical theory is, in fact, time-reversal symmetric, thermodynamics is not. In particular, the second law of thermodynamics states that the net entropy of a closed system never decreases, and this explains why we often see glass breaking, but not coming back together.

But in statistical mechanics things become more complicated. On one hand, statistical mechanics is far superior to classical thermodynamics, in that thermodynamic behavior, such as glass breaking, can be explained by the fundamental laws of physics paired with a statistical postulate. But statistical mechanics, unlike classical thermodynamics, is time-reversal symmetric. The second law of thermodynamics, as it arises in statistical mechanics, merely states that it is overwhelmingly likely that net entropy will increase, but it is not an absolute law.
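
The statistical character of the second law can be made explicit with Boltzmann's entropy formula (standard statistical mechanics, included here only to sharpen the point above):

S = k_B \ln \Omega,

where \Omega is the number of microstates compatible with a system's macrostate and k_B is Boltzmann's constant. Higher-entropy macrostates correspond to overwhelmingly more microstates, so a system exploring its microstates is overwhelmingly likely, but not strictly certain, to evolve toward higher entropy; for macroscopic systems the probability of a spontaneous entropy decrease is negligible but never exactly zero.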

Current thermodynamic solutions to the problem of the direction of time aim to find some further fact, or feature of the laws of nature to account for this discrepancy.

Laws solution

A third type of solution to the problem of the direction of time, although much less represented, argues that the laws are not time-reversal symmetric. For example, certain processes in quantum mechanics, relating to the weak nuclear force, are not time-reversible, bearing in mind that in quantum mechanics time-reversibility has a more complex definition. But this type of solution is insufficient because 1) the time-asymmetric phenomena in quantum mechanics are too few to account for the uniformity of macroscopic time-asymmetry and 2) it relies on the assumption that quantum mechanics is the final or correct description of physical processes.

One recent proponent of the laws solution is Tim Maudlin, who argues that the fundamental laws of physics are laws of temporal evolution (see Maudlin [2007]). However, elsewhere Maudlin argues: "[the] passage of time is an intrinsic asymmetry in the temporal structure of the world... It is the asymmetry that grounds the distinction between sequences that runs from past to future and sequences which run from future to past" [ibid, 2010 edition, p. 108]. Thus it is arguably difficult to assess whether Maudlin is suggesting that the direction of time is a consequence of the laws or is itself primitive.

Flow of time

The problem of the flow of time, as it has been treated in analytic philosophy, owes its beginning to a paper written by J. M. E. McTaggart, in which he proposes two "temporal series". The first series, which means to account for our intuitions about temporal becoming, or the moving Now, is called the A-series. The A-series orders events according to their being in the past, present or future, simpliciter and in comparison to each other. The B-series eliminates all reference to the present, and the associated temporal modalities of past and future, and orders all events by the temporal relations earlier than and later than. In many ways, the debate between proponents of these two views can be seen as a continuation of the early modern debate between the view that there is absolute time (defended by Isaac Newton) and the view that there is only merely relative time (defended by Gottfried Leibniz).

McTaggart, in his paper "The Unreality of Time", argues that time is unreal since a) the A-series is inconsistent and b) the B-series alone cannot account for the nature of time as the A-series describes an essential feature of it.

Building from this framework, two camps of solution have been offered. The first, the A-theorist solution, takes becoming as the central feature of time, and tries to construct the B-series from the A-series by offering an account of how B-facts come to be out of A-facts. The second camp, the B-theorist solution, takes as decisive McTaggart's arguments against the A-series and tries to construct the A-series out of the B-series, for example, by temporal indexicals.

Presentism and eternalism

According to Presentism, time is an ordering of various realities. At a certain time, some things exist and others do not. This is the only reality we can deal with. We cannot, for example, say that Homer exists because at the present time he does not. An Eternalist, on the other hand, holds that time is a dimension of reality on a par with the three spatial dimensions, and hence that all things—past, present and future—can be said to be just as real as things in the present. According to this theory, then, Homer really does exist, though we must still use special language when talking about somebody who exists at a distant time—just as we would use special language when talking about something far away (the very words near, far, above, below, and such are directly comparable to phrases such as in the past, a minute ago, and so on).

Philosophers such as Vincent Conitzer and Caspar Hare have argued that the philosophy of time is connected to the philosophy of self. Conitzer argues that the metaphysics of the self are connected to the A-theory of time, and that arguments in favor of A-theory are more effective as arguments for the combined position of both A-theory being true and the "I" being metaphysically privileged from other perspectives. Caspar Hare has made similar arguments with the theories of egocentric presentism and perspectival realism, of which several other philosophers have written reviews.

Endurantism and perdurantism

The positions on the persistence of objects are somewhat similar. An endurantist holds that for an object to persist through time is for it to exist completely at different times (each instance of existence we can regard as somehow separate from previous and future instances, though still numerically identical with them). A perdurantist on the other hand holds that for a thing to exist through time is for it to exist as a continuous reality, and that when we consider the thing as a whole we must consider an aggregate of all its "temporal parts" or instances of existing. Endurantism is seen as the conventional view and flows out of our pre-philosophical ideas (when I talk to somebody I think I am talking to that person as a complete object, and not just a part of a cross-temporal being), but perdurantists such as David Lewis have attacked this position. They argue that perdurantism is the superior view for its ability to take account of change in objects.

On the whole, Presentists are also endurantists and Eternalists are also perdurantists (and vice versa), but this is not a necessary relation and it is possible to claim, for instance, that time's passage indicates a series of ordered realities, but that objects within these realities somehow exist outside of the reality as a whole, even though the realities as wholes are not related. However, such positions are rarely adopted.

Quantum suicide and immortality

Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics. Purportedly, it can falsify any interpretation of quantum mechanics other than the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. This concept is sometimes conjectured to be applicable to real-world causes of death as well.

As a thought experiment, quantum suicide is an intellectual exercise in which an abstract setup is followed through to its logical consequences merely to prove a theoretical point. Virtually all physicists and philosophers of science who have described it, especially in popularized treatments, underscore that it relies on contrived, idealized circumstances that may be impossible or exceedingly difficult to realize in real life, and that its theoretical premises are controversial even among supporters of the many-worlds interpretation. Thus, as cosmologist Anthony Aguirre warns, "[...] it would be foolish (and selfish) in the extreme to let this possibility guide one's actions in any life-and-death question."

History

Hugh Everett did not mention quantum suicide or quantum immortality in writing; his work was intended as a solution to the paradoxes of quantum mechanics. Eugene Shikhovtsev's biography of Everett states that "Everett firmly believed that his many-worlds theory guaranteed him immortality: his consciousness, he argued, is bound at each branching to follow whatever path does not lead to death". Peter Byrne, author of a biography of Everett, reports that Everett also privately discussed quantum suicide (such as to play high-stakes Russian roulette and survive in the winning branch), but adds that "[i]t is unlikely, however, that Everett subscribed to this [quantum immortality] view, as the only sure thing it guarantees is that the majority of your copies will die, hardly a rational goal."

Among scientists, the thought experiment was introduced by Euan Squires in 1986. Afterwards, it was published independently by Hans Moravec in 1987 and Bruno Marchal in 1988; it was also described by Huw Price in 1997, who credited it to Dieter Zeh, and independently presented formally by Max Tegmark in 1998. It was later discussed by philosophers Peter J. Lewis in 2000 and David Lewis in 2001.

Thought experiment

The quantum suicide thought experiment involves a similar apparatus to Schrödinger's cat – a box which kills the occupant in a given time frame with probability one-half due to quantum uncertainty. The only difference is to have the experimenter recording observations be the one inside the box. The significance of this thought experiment is that someone whose life or death depends on a qubit could possibly distinguish between interpretations of quantum mechanics. By definition, fixed observers cannot.

At the start of the first iteration, under both interpretations, the probability of surviving the experiment is 50%, as given by the squared norm of the wave function. At the start of the second iteration, assuming a single-world interpretation of quantum mechanics (like the widely-held Copenhagen interpretation) is true, the wave function has already collapsed; thus, if the experimenter is already dead, there is a 0% chance of survival for any further iterations. However, if the many-worlds interpretation is true, a superposition of the live experimenter necessarily exists (as also does the one who dies). Now, barring the possibility of life after death, after every iteration only one of the two experimenter superpositions – the live one – is capable of having any sort of conscious experience. Putting aside the philosophical problems associated with individual identity and its persistence, under the many-worlds interpretation, the experimenter, or at least a version of them, continues to exist through all of their superpositions where the outcome of the experiment is that they live. In other words, a version of the experimenter survives all iterations of the experiment. Since the superpositions where a version of the experimenter lives occur by quantum necessity (under the many-worlds interpretation), it follows that their survival, after any realizable number of iterations, is physically necessary; hence, the notion of quantum immortality.

A version of the experimenter surviving stands in stark contrast to the implications of the Copenhagen interpretation, according to which, although the survival outcome is possible in every iteration, its probability tends towards zero as the number of iterations increases. According to the many-worlds interpretation, the above scenario has the opposite property: the probability of a version of the experimenter living is necessarily one for any number of iterations.
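
The contrast can be stated as a short calculation (an idealized illustration of the probabilities described above). Under a single-world, collapse reading, the probability of surviving n independent iterations is

P(\text{survival after } n \text{ iterations}) = \left(\tfrac{1}{2}\right)^{n} \to 0 \text{ as } n \to \infty,

whereas under the many-worlds reading a branch containing a live experimenter exists after every iteration with certainty, even though the Born weight (squared amplitude) of that all-survival branch is likewise only 2^{-n}.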

In the book Our Mathematical Universe, Max Tegmark lays out three criteria that, in abstract, a quantum suicide experiment must fulfill:

  • The random number generator must be quantum, not deterministic, so that the experimenter enters a state of superposition of being dead and alive.
  • The experimenter must be rendered dead (or at least unconscious) on a time scale shorter than that on which they can become aware of the outcome of the quantum measurement.
  • The experiment must be virtually certain to kill the experimenter, and not merely injure them.

Analysis of real-world feasibility

In response to questions about "subjective immortality" from normal causes of death, Tegmark suggested that the flaw in that reasoning is that dying is not a binary event as in the thought experiment; it is a progressive process, with a continuum of states of decreasing consciousness. He states that in most real causes of death, one experiences such a gradual loss of self-awareness. It is only within the confines of an abstract scenario that an observer finds they defy all odds. Referring to the above criteria, he elaborates as follows: "[m]ost accidents and common causes of death clearly don't satisfy all three criteria, suggesting you won't feel immortal after all. In particular, regarding criterion 2, under normal circumstances dying isn't a binary thing where you're either alive or dead [...] What makes the quantum suicide work is that it forces an abrupt transition."

David Lewis' commentary and subsequent criticism

The philosopher David Lewis explored the possibility of quantum immortality in a 2001 lecture titled "How Many Lives Has Schrödinger's Cat?", his first academic foray into the field of the interpretation of quantum mechanics – and his last, due to his death less than four months afterwards. In the lecture, published posthumously in 2004, Lewis rejected the many-worlds interpretation, allowing that it offers initial theoretical attractions, but also arguing that it suffers from irremediable flaws, mainly regarding probabilities, and came to tentatively endorse the Ghirardi–Rimini–Weber theory instead. Lewis concluded the lecture by stating that the quantum suicide thought experiment, if applied to real-world causes of death, would entail what he deemed a "terrifying corollary": as all causes of death are ultimately quantum-mechanical in nature, if the many-worlds interpretation were true, in Lewis' view an observer should subjectively "expect with certainty to go on forever surviving whatever dangers [he or she] may encounter", as there will always be possibilities of survival, no matter how unlikely; faced with branching events of survival and death, an observer should not "equally expect to experience life and death", as there is no such thing as experiencing death, and should thus divide his or her expectations only among branches where he or she survives. If survival is guaranteed, however, this is not the case for good health or integrity. This would lead to a Tithonus-like deterioration of one's body that continues indefinitely, leaving the subject forever just short of death.

Interviewed for the 2004 book Schrödinger's Rabbits, Tegmark rejected this scenario for the reason that "the fading of consciousness is a continuous process. Although I cannot experience a world line in which I am altogether absent, I can enter one in which my speed of thought is diminishing, my memories and other faculties fading [...] [Tegmark] is confident that even if he cannot die all at once, he can gently fade away." In the same book, philosopher of science and many-worlds proponent David Wallace undermines the case for real-world quantum immortality on the basis that death can be understood as a continuum of decreasing states of consciousness not only in time, as argued by Tegmark, but also in space: "our consciousness is not located at one unique point in the brain, but is presumably a kind of emergent or holistic property of a sufficiently large group of neurons [...] our consciousness might not be able to go out like a light, but it can dwindle exponentially until it is, for all practical purposes, gone."

Directly responding to Lewis' lecture, British philosopher and many-worlds proponent David Papineau, while finding Lewis' other objections to the many-worlds interpretation lacking, strongly denies that any modification to the usual probability rules is warranted in death situations. Assured subjective survival can follow from the quantum suicide idea only if an agent reasons in terms of "what will be experienced next" instead of the more obvious "what will happen next, whether it will be experienced or not". He writes: "[I]t is by no means obvious why Everettians should modify their intensity rule in this way. For it seems perfectly open for them to apply the unmodified intensity rule in life-or-death situations, just as elsewhere. If they do this, then they can expect all futures in proportion to their intensities, whether or not those futures contain any of their live successors. For example, even when you know you are about to be the subject in a fifty-fifty Schrödinger's experiment, you should expect a future branch where you perish, to just the same degree as you expect a future branch where you survive."

On a similar note, quoting Lewis' position that death should not be expected as an experience, philosopher of science Charles Sebens concedes that, in a quantum suicide experiment, "[i]t is tempting to think you should expect survival with certainty." However, he remarks that expectation of survival could follow only if the quantum branching and death were absolutely simultaneous, otherwise normal chances of death apply: "[i]f death is indeed immediate on all branches but one, the thought has some plausibility. But if there is any delay it should be rejected. In such a case, there is a short period of time when there are multiple copies of you, each (effectively) causally isolated from the others and able to assign a credence to being the one who will live. Only one will survive. Surely rationality does not compel you to be maximally optimistic in such a scenario." Sebens also explores the possibility that death might not be simultaneous to branching, but still faster than a human can mentally realize the outcome of the experiment. Again, an agent should expect to die with normal probabilities: "[d]o the copies need to last long enough to have thoughts to cause trouble? I think not. If you survive, you can consider what credences you should have assigned during the short period after splitting when you coexisted with the other copies."

Writing in the journal Ratio, philosopher István Aranyosi, while noting that "[the] tension between the idea of states being both actual and probable is taken as the chief weakness of the many-worlds interpretation of quantum mechanics," summarizes that most of the critical commentary of Lewis' immortality argument has revolved around its premises. But even if, for the sake of argument, one were willing to entirely accept Lewis' assumptions, Aranyosi strongly denies that the "terrifying corollary" would be the correct implication of said premises. Instead, the two scenarios that would most likely follow would be what Aranyosi describes as the "comforting corollary", in which an observer should never expect to get very sick in the first place, or the "momentary life" picture, in which an observer should expect "eternal life, spent almost entirely in an unconscious state", punctuated by extremely brief, amnesiac moments of consciousness. Thus, Aranyosi concludes that while "[w]e can't assess whether one or the other [of the two alternative scenarios] gets the lion's share of the total intensity associated with branches compatible with self-awareness, [...] we can be sure that they together (i.e. their disjunction) do indeed get the lion's share, which is much reassuring."

Analysis by other proponents of the many-worlds interpretation

Physicist David Deutsch, though a proponent of the many-worlds interpretation, states regarding quantum suicide that "that way of applying probabilities does not follow directly from quantum theory, as the usual one does. It requires an additional assumption, namely that when making decisions one should ignore the histories in which the decision-maker is absent....[M]y guess is that the assumption is false."

Tegmark now believes experimenters should only expect a normal probability of survival, not immortality. The experimenter's probability amplitude in the wavefunction decreases significantly, meaning they exist with a much lower measure than they had before. Per the anthropic principle, a person is less likely to find themselves in a world where they are less likely to exist, that is, a world with a lower measure has a lower probability of being observed by them. Therefore, the experimenter will have a lower probability of observing the world in which they survive than the earlier world in which they set up the experiment. This same problem of reduced measure was pointed out by Lev Vaidman in the Stanford Encyclopedia of Philosophy. In the 2001 paper, "Probability and the many-worlds interpretation of quantum theory", Vaidman writes that an agent should not agree to undergo a quantum suicide experiment: "The large 'measures' of the worlds with dead successors is a good reason not to play." Vaidman argues that it is the instantaneity of death that may seem to imply subjective survival of the experimenter, but that normal probabilities nevertheless must apply even in this special case: "[i]ndeed, the instantaneity makes it difficult to establish the probability postulate, but after it has been justified in the wide range of other situations it is natural to apply the postulate for all cases."

In his 2013 book The Emergent Multiverse, Wallace opines that the reasons for expecting subjective survival in the thought experiment "do not really withstand close inspection", although he concedes that it would be "probably fair to say [...] that precisely because death is philosophically complicated, my objections fall short of being a knock-down refutation". Besides re-stating that there appears to be no motive to reason in terms of expectations of experience instead of expectations of what will happen, he suggests that a decision-theoretic analysis shows that "an agent who prefers certain life to certain death is rationally compelled to prefer life in high-weight branches and death in low-weight branches to the opposite."

Physicist Sean M. Carroll, another proponent of the many-worlds interpretation, states regarding quantum suicide that neither experiences nor rewards should be thought of as being shared between future versions of oneself, as they become distinct persons when the world splits. He further states that one cannot pick out some future versions of oneself as "really you" over others, and that quantum suicide still cuts off the existence of some of these future selves, which would be worth objecting to just as if there were a single world.

Analysis by skeptics of the many-worlds interpretation

Cosmologist Anthony Aguirre, while personally skeptical of most accounts of the many-worlds interpretation, in his book Cosmological Koans writes that "[p]erhaps reality actually is this bizarre, and we really do subjectively 'survive' any form of death that is both instantaneous and binary." Aguirre notes, however, that most causes of death do not fulfill these two requirements: "If there are degrees of survival, things are quite different." If loss of consciousness were binary, as in the thought experiment, the quantum suicide effect would prevent an observer from subjectively falling asleep or undergoing anesthesia, conditions in which mental activities are greatly diminished but not altogether abolished. Consequently, upon most causes of death, even outwardly sudden, if the quantum suicide effect holds true an observer is more likely to progressively slip into an attenuated state of consciousness, rather than remain fully awake by some very improbable means. Aguirre further states that quantum suicide as a whole might be characterized as a sort of reductio ad absurdum against the current understanding of both the many-worlds interpretation and theory of mind. He finally hypothesizes that a different understanding of the relationship between the mind and time should remove the bizarre implications of necessary subjective survival.

Physicist and writer Philip Ball, a critic of the many-worlds interpretation, in his book Beyond Weird, describes the quantum suicide experiment as "cognitively unstable" and exemplificatory of the difficulties of the many-worlds theory with probabilities. While he acknowledges Lev Vaidman's argument that an experimenter should subjectively expect outcomes in proportion to the "measure of existence" of the worlds in which they happen, Ball ultimately rejects this explanation. "What this boils down to is the interpretation of probabilities in the MWI. If all outcomes occur with 100% probability, where does that leave the probabilistic character of quantum mechanics?" Furthermore, Ball explains that such arguments highlight what he recognizes as another major problem of the many-worlds interpretation, connected but independent from the issue of probability: the incompatibility with the notion of selfhood. Ball ascribes most attempts at justifying probabilities in the many-worlds interpretation to "saying that quantum probabilities are just what quantum mechanics look like when consciousness is restricted to only one world" but adds that "there is in fact no meaningful way to explain or justify such a restriction." Before performing a quantum measurement, an "Alice before" experimenter "can't use quantum mechanics to predict what will happen to her in a way that can be articulated – because there is no logical way to talk about 'her' at any moment except the conscious present (which, in a frantically splitting universe, doesn't exist). Because it is logically impossible to connect the perceptions of Alice Before to Alice After [the experiment], "Alice" has disappeared. [...] [The MWI] eliminates any coherent notion of what we can experience, or have experienced, or are experiencing right now."

Philosopher of science Peter J. Lewis, a critic of the many-worlds interpretation, considers the whole thought experiment an example of the difficulty of accommodating probability within the many-worlds framework: "Standard quantum mechanics yields probabilities for various future occurrences, and these probabilities can be fed into an appropriate decision theory. But if every physically possible consequence of the current state of affairs is certain to occur, on what basis should I decide what to do? For example, if I point a gun at my head and pull the trigger, it looks like Everett's theory entails that I am certain to survive—and that I am certain to die. This is at least worrying, and perhaps rationally disabling." In his book Quantum Ontology, Lewis explains that for the subjective immortality argument to be drawn out of the many-worlds theory, one has to adopt an understanding of probability – the so-called "branch-counting" approach, in which an observer can meaningfully ask "which post-measurement branch will I end up on?" – that is ruled out by experimental, empirical evidence as it would yield probabilities that do not match with the well-confirmed Born rule. Lewis identifies instead in the Deutsch-Wallace decision-theoretic analysis the most promising (although still, to his judgement, incomplete) way of addressing probabilities in the many-worlds interpretation, in which it is not possible to count branches (and, similarly, the persons that "end up" on each branch). Lewis concludes that "[t]he immortality argument is perhaps best viewed as a dramatic demonstration of the fundamental conflict between branch-counting (or person-counting) intuitions about probability and the decision theoretic approach. The many-worlds theory, to the extent that it is viable, does not entail that you should expect to live forever."

Space travel in science fiction

Rocket on cover of Other Worlds sci-fi magazine, September 1951

Space travel, or space flight (less often, starfaring or star voyaging) is a science fiction theme that has captivated the public and is almost archetypal for science fiction.[4] Space travel, interplanetary or interstellar, is usually performed in space ships, and spacecraft propulsion in various works ranges from the scientifically plausible to the totally fictitious.

While some writers focus on realistic, scientific, and educational aspects of space travel, other writers see this concept as a metaphor for freedom, including "free[ing] mankind from the prison of the solar system". Though the science fiction rocket has been described as a 20th-century icon, according to The Encyclopedia of Science Fiction "The means by which space flight has been achieved in sf – its many and various spaceships – have always been of secondary importance to the mythical impact of the theme". Works related to space travel have popularized such concepts as time dilation, space stations, and space colonization.

While generally associated with science fiction, space travel has also occasionally featured in fantasy, sometimes involving magic or supernatural entities such as angels.

History

Science and Mechanics, November 1931, showing a proposed sub-orbital spaceship that would reach a 700-mile altitude on a one-hour flight from Berlin to New York
Still from Lost in Space TV series premiere (1965), depicting space travelers in suspended animation

A classic, defining trope of the science fiction genre is that the action takes place in space, either aboard a spaceship or on another planet. Early works of science fiction, termed "proto SF" – such as novels by 17th-century writers Francis Godwin and Cyrano de Bergerac, and by astronomer Johannes Kepler – include "lunar romances", much of whose action takes place on the Moon. Science fiction critic George Slusser also pointed to Christopher Marlowe's Doctor Faustus (1604) – in which the main character is able to see the entire Earth from high above – and noted the connections of space travel to earlier dreams of flight and air travel, as far back as the writings of Plato and Socrates. In such a grand view, space travel, and inventions such as various forms of "star drive", can be seen as metaphors for freedom, including "free[ing] mankind from the prison of the solar system".

In the following centuries, while science fiction addressed many aspects of futuristic science as well as space travel, space travel proved the more influential with the genre's writers and readers, evoking their sense of wonder. Most works were mainly intended to amuse readers, but a small number, often by authors with a scholarly background, sought to educate readers about related aspects of science, including astronomy; this was the motive of the influential American editor Hugo Gernsback, who dubbed it "sugar-coated science" and "scientifiction". Science fiction magazines, including Gernsback's Science Wonder Stories, alongside works of pure fiction, discussed the feasibility of space travel; many science fiction writers also published nonfiction works on space travel, such as Willy Ley's articles and David Lasser's book, The Conquest of Space (1931). 

A roadside replica starship atop a stone base
Roadside replica of Star Trek starship Enterprise

From the late 19th and early 20th centuries on, there was a visible distinction between the more "realistic", scientific fiction (which would later evolve into hard sf), whose authors, often scientists like Konstantin Tsiolkovsky and Max Valier, focused on the more plausible concept of interplanetary travel (to the Moon or Mars); and the more grandiose, less realistic stories of "escape from Earth into a Universe filled with worlds", which gave rise to the genre of space opera, pioneered by E. E. Smith and popularized by the television series Star Trek, which debuted in 1966. This trend continues to the present, with some works focusing on "the myth of space flight", and others on "realistic examination of space flight"; the difference can be described as that between the authors' concern with "imaginative horizons rather than hardware".

The successes of 20th-century space programs, such as the Apollo 11 Moon landing, have often been described as "science fiction come true" and have served to further "demystify" the concept of space travel within the Solar System. Henceforth writers who wanted to focus on the "myth of space travel" were increasingly likely to do so through the concept of interstellar travel. Edward James wrote that many science fiction stories have "explored the idea that without the constant expansion of humanity, and the continual extension of scientific knowledge, comes stagnation and decline." While the theme of space travel has generally been seen as optimistic, some stories by revisionist authors, often more pessimistic and disillusioned, juxtapose the two types, contrasting the romantic myth of space travel with a more down-to-Earth reality. George Slusser suggests that "science fiction travel since World War II has mirrored the United States space program: anticipation in the 1950s and early 1960s, euphoria into the 1970s, modulating into skepticism and gradual withdrawal since the 1980s."

On the screen, the 1902 French film A Trip to the Moon, by Georges Méliès, described as the first science fiction film, linked special effects to depictions of spaceflight. With other early films, such as Woman in the Moon (1929) and Things to Come (1936), it contributed to an early recognition of the rocket as the iconic, primary means of space travel, decades before space programs began. Later milestones in film and television include the Star Trek series and films, and the film 2001: A Space Odyssey by Stanley Kubrick (1968), which visually advanced the concept of space travel, allowing it to evolve from the simple rocket toward a more complex space ship. Stanley Kubrick's 1968 epic film featured a lengthy sequence of interstellar travel through a mysterious "star gate". This sequence, noted for its psychedelic special effects conceived by Douglas Trumbull, influenced a number of later cinematic depictions of superluminal and hyperspatial travel, such as Star Trek: The Motion Picture (1979).

Means of travel

Artist rendition of a spaceship entering warp drive

Generic terms for engines enabling science fiction spacecraft propulsion include "space drive" and "star drive". In 1977 The Visual Encyclopedia of Science Fiction listed the following means of space travel: anti-gravity, atomic (nuclear), bloater, cannon one-shot, Dean drive, faster-than-light (FTL), hyperspace, inertialess drive, ion thruster, photon rocket, plasma propulsion engine, Bussard ramjet, R. force, solar sail, spindizzy, and torchship.

The 2007 Brave New Words: The Oxford Dictionary of Science Fiction lists the following terms related to the concept of space drive: gravity drive, hyperdrive, ion drive, jump drive, overdrive, ramscoop (a synonym for ram-jet), reaction drive, stargate, ultradrive, warp drive and torchdrive. Several of these terms are entirely fictitious or are based on "rubber science", while others are based on real scientific theories. Many fictitious means of travelling through space, in particular, faster than light travel, tend to go against the current understanding of physics, in particular, the theory of relativity. Some works sport numerous alternative star drives; for example the Star Trek universe, in addition to its iconic "warp drive", has introduced concepts such as "transwarp", "slipstream" and "spore drive", among others.

Many, particularly early, writers of science fiction did not address means of travel in much detail, and many writings of the "proto-SF" era were disadvantaged by their authors' living in a time when knowledge of space was very limited — in fact, many early works did not even consider the concept of vacuum and instead assumed that an atmosphere of sorts, composed of air or "aether", continued indefinitely. Highly influential in popularizing the science of science fiction was the 19th-century French writer Jules Verne, whose means of space travel in his 1865 novel, From the Earth to the Moon (and its sequel, Around the Moon), was explained mathematically, and whose vehicle — a gun-launched space capsule — has been described as the first such vehicle to be "scientifically conceived" in fiction. Percy Greg's Across the Zodiac (1880) featured a spaceship with a small garden, an early precursor of hydroponics. Another writer who attempted to merge concrete scientific ideas with science fiction was the turn-of-the-century Russian writer and scientist, Konstantin Tsiolkovsky, who popularized the concept of rocketry. George Mann mentions Robert A. Heinlein's Rocket Ship Galileo (1947) and Arthur C. Clarke's Prelude to Space (1951) as early, influential modern works that emphasized the scientific and engineering aspects of space travel. From the 1960s on, growing popular interest in modern technology also led to increasing depictions of interplanetary spaceships based on advanced plausible extensions of real modern technology. The Alien franchise features ships with ion propulsion, a developing technology at the time that would be used years later in the Deep Space 1, Hayabusa 1 and SMART-1 spacecraft.

Interstellar travel

Slower than light

Faster-than-light speeds are generally considered unrealistic for interstellar travel, so more realistic depictions have often focused on the idea of "generation ships" that travel at sub-light speed for many generations before arriving at their destinations. Other scientifically plausible concepts of interstellar travel include suspended animation and, less often, ion drive, solar sail, Bussard ramjet, and time dilation.

Faster than light

Artist's rendition of a ship traveling through a wormhole

Some works discuss Einstein's general theory of relativity and challenges that it faces from quantum mechanics, and include concepts of space travel through wormholes or black holes. Many writers, however, gloss over such problems, introducing entirely fictional concepts such as hyperspace (also subspace, nulspace, overspace, jumpspace, or slipstream) travel using inventions such as hyperdrive, jump drive, warp drive, or space folding. Inventing completely made-up devices to enable space travel has a long tradition: as early as the beginning of the 20th century, Verne criticized H. G. Wells's The First Men in the Moon (1901) for abandoning realistic science (Wells's spaceship relied on an anti-gravity material called "cavorite"). Of the fictitious drives, by the mid-1970s the concept of hyperspace travel was described as having achieved the most popularity, and it would subsequently be further popularized, as hyperdrive, through its use in the Star Wars franchise. While fictitious drives "solve" problems related to physics (the difficulty of faster-than-light travel), some writers introduce new wrinkles; for example, a common trope involves the difficulty of using such drives in close proximity to other objects, in some cases allowing their use only from the outskirts of a planetary system outward.

While usually the means of space travel is just a means to an end, in some works, particularly short stories, it is a central plot device. These works focus on themes such as the mysteries of hyperspace, or the consequences of getting lost after an error or malfunction.

QED vacuum


The QED vacuum or quantum electrodynamic vacuum is the field-theoretic vacuum of quantum electrodynamics. It is the lowest energy state (the ground state) of the electromagnetic field when the fields are quantized. When the Planck constant is hypothetically allowed to approach zero, QED vacuum is converted to classical vacuum, which is to say, the vacuum of classical electromagnetism.

Another field-theoretic vacuum is the QCD vacuum of the Standard Model.

A Feynman diagram (box diagram) for photon-photon scattering: one photon scatters from the transient vacuum charge fluctuations of the other

Fluctuations

The QED vacuum is subject to fluctuations about a dormant zero average-field condition. Here is a description of the quantum vacuum:

The quantum theory asserts that a vacuum, even the most perfect vacuum devoid of any matter, is not really empty. Rather the quantum vacuum can be depicted as a sea of continuously appearing and disappearing [pairs of] particles that manifest themselves in the apparent jostling of particles that is quite distinct from their thermal motions. These particles are ‘virtual’, as opposed to real, particles. ...At any given instant, the vacuum is full of such virtual pairs, which leave their signature behind, by affecting the energy levels of atoms.

— Joseph Silk On the Shores of the Unknown, p. 62

Virtual particles

It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy–time uncertainty principle, ΔE Δt ≥ ħ (where ΔE and Δt are energy and time variations, and ħ the Planck constant divided by 2π), arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.

This interpretation of the energy-time uncertainty relation is not universally accepted, however. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as [q, p] = iħ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle are a continuing subject of study.
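Bearing those caveats in mind, the heuristic itself is easy to evaluate numerically. The following minimal sketch (the choice of an electron-positron pair, and the variable names, are illustrative assumptions, not part of the text above) estimates how long the relation would allow a virtual pair costing ΔE = 2mec² to exist:

# Heuristic sketch only: lifetime the energy-time relation would assign to a
# virtual electron-positron pair (Delta_t ~ hbar / Delta_E, with Delta_E = 2 m_e c^2).
from scipy.constants import hbar, electron_mass, c, electron_volt

delta_E = 2 * electron_mass * c**2        # energy of the pair, in joules
delta_t = hbar / delta_E                  # heuristic lifetime bound, in seconds

print(delta_E / (1e6 * electron_volt))    # ~1.022 (MeV)
print(delta_t)                            # ~6.4e-22 (s)

On this reading, heavier virtual pairs are permitted only correspondingly shorter excursions, consistent with the qualitative picture in the quotation above; the caveats in the preceding paragraph still apply.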

Quantization of the fields

The Heisenberg uncertainty principle does not allow a particle to exist in a state in which the particle is simultaneously at a fixed location, say the origin of coordinates, and has also zero momentum. Instead the particle has a range of momentum and spread in location attributable to quantum fluctuations; if confined, it has a zero-point energy.
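As a numerical illustration of confinement producing a zero-point energy, the short sketch below (an assumed example: an electron in an idealized one-dimensional infinite square well of width 1 Å, which is not a case discussed in the text) evaluates the ground-state energy E1 = π²ħ²/(2mL²), which cannot be reduced to zero however the particle is prepared:

# Zero-point energy of a confined particle: ground state of an infinite square well.
# The well width L = 1 angstrom is an illustrative assumption.
import math
from scipy.constants import hbar, electron_mass, electron_volt

L = 1e-10                                              # confinement length, m
E1 = (math.pi * hbar)**2 / (2 * electron_mass * L**2)  # ground-state (zero-point) energy, J
print(E1 / electron_volt)                              # ~37.6 eV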

An uncertainty principle applies to all quantum mechanical operators that do not commute. In particular, it applies also to the electromagnetic field. A digression follows to flesh out the role of commutators for the electromagnetic field.

The standard approach to the quantization of the electromagnetic field begins by introducing a vector potential A and a scalar potential V to represent the basic electromagnetic electric field E and magnetic field B using the relations B = ∇ × A and E = −∇V − ∂A/∂t. The vector potential is not completely determined by these relations, leaving open a so-called gauge freedom. Resolving this ambiguity using the Coulomb gauge leads to a description of the electromagnetic fields in the absence of charges in terms of the vector potential and the momentum field Π, given by Π = ε0 ∂A/∂t, where ε0 is the electric constant of the SI units. Quantization is achieved by insisting that the momentum field and the vector potential do not commute. That is, the equal-time commutator is [Ai(r, t), Πj(r′, t)] = iħ δij δ(r − r′), where r, r′ are spatial locations, ħ is the reduced Planck constant, δij is the Kronecker delta and δ(r − r′) is the Dirac delta function. The notation [ , ] denotes the commutator.
Quantization can be achieved without introducing the vector potential, in terms of the underlying fields themselves: [Êi(r, t), B̂j(r′, t)] = (iħ/ε0) εijk ∂δ(r − r′)/∂xk, where the circumflex denotes a Schrödinger time-independent field operator, and εijk is the antisymmetric Levi-Civita tensor.

Because of the non-commutation of field variables, the variances of the fields cannot be zero, although their averages are zero. The electromagnetic field has therefore a zero-point energy, and a lowest quantum state. The interaction of an excited atom with this lowest quantum state of the electromagnetic field is what leads to spontaneous emission, the transition of an excited atom to a state of lower energy by emission of a photon even when no external perturbation of the atom is present.
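To make the non-commutation argument concrete, the toy calculation below (a single-mode sketch, not the full multimode field treated above) represents one mode of the field by ladder operators in a truncated Fock basis, checks the canonical commutator, and shows that the lowest state has zero mean field but a nonzero variance, i.e. a zero-point fluctuation:

# Single-mode toy model of a quantized field in a truncated Fock basis.
import numpy as np

N = 12                                        # truncated Fock-space dimension (toy choice)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T                             # creation operator

comm = a @ adag - adag @ a                    # canonical commutator [a, a†]
print(np.allclose(np.diag(comm)[:-1], 1.0))   # True (the last entry is a truncation artifact)

x = (a + adag) / np.sqrt(2)                   # a "position-like" field quadrature
vac = np.zeros(N); vac[0] = 1.0               # the vacuum (lowest) state |0>
print(vac @ x @ vac)                          # mean field in the vacuum: 0
print(vac @ (x @ x) @ vac)                    # variance: 0.5, the zero-point fluctuation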

Electromagnetic properties

The polarization of light observed in the extremely strong magnetic field suggests that the empty space around the neutron star RX J1856.5−3754 is subject to vacuum birefringence.

As a result of quantization, the quantum electrodynamic vacuum can be considered as a material medium. It is capable of vacuum polarization. In particular, the force law between charged particles is affected. The electrical permittivity of the quantum electrodynamic vacuum can be calculated, and it differs slightly from the simple ε0 of the classical vacuum. Likewise, its permeability can be calculated and differs slightly from μ0. This medium is a dielectric with relative dielectric constant > 1, and is diamagnetic, with relative magnetic permeability < 1. Under some extreme circumstances in which the field exceeds the Schwinger limit (for example, in the very high fields found in the exterior regions of pulsars), the quantum electrodynamic vacuum is thought to exhibit nonlinearity in the fields. Calculations also indicate birefringence and dichroism at high fields. Many of the electromagnetic effects of the vacuum are small, and only recently have experiments been designed to enable the observation of nonlinear effects. PVLAS and other teams are working towards the needed sensitivity to detect QED effects.
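For scale, the Schwinger limit mentioned above can be evaluated directly from fundamental constants. The sketch below uses the standard expressions E_S = me²c³/(eħ) and B_S = me²c²/(eħ); the numerical values illustrate why only extreme environments, such as the surroundings of pulsars, approach these fields:

# Schwinger critical fields, above which the QED vacuum is expected to behave nonlinearly.
from scipy.constants import m_e, c, e, hbar

E_S = m_e**2 * c**3 / (e * hbar)   # critical electric field, V/m (~1.3e18)
B_S = m_e**2 * c**2 / (e * hbar)   # critical magnetic flux density, T (~4.4e9)
print(E_S, B_S)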

Attainability

A perfect vacuum is itself only attainable in principle. It is an idealization, like absolute zero for temperature, that can be approached, but never actually realized:

One reason [a vacuum is not empty] is that the walls of a vacuum chamber emit light in the form of black-body radiation...If this soup of photons is in thermodynamic equilibrium with the walls, it can be said to have a particular temperature, as well as a pressure. Another reason that perfect vacuum is impossible is the Heisenberg uncertainty principle which states that no particles can ever have an exact position ...Each atom exists as a probability function of space, which has a certain nonzero value everywhere in a given volume. ...More fundamentally, quantum mechanics predicts ...a correction to the energy called the zero-point energy [that] consists of energies of virtual particles that have a brief existence. This is called vacuum fluctuation.

— Luciano Boi, "Creating the physical world ex nihilo?" p. 55
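As a rough illustration of the thermal-photon point in the quotation above, the sketch below (an assumed scenario: chamber walls in equilibrium at room temperature, T = 300 K) estimates the photon number density and radiation pressure of the black-body photon gas that fills even a chamber emptied of all matter:

# Black-body photon gas inside a vacuum chamber whose walls are at temperature T.
import math
from scipy.constants import k, hbar, c
from scipy.special import zeta

T = 300.0                                                 # wall temperature, K (illustrative)
n = 2 * zeta(3) / math.pi**2 * (k * T / (hbar * c))**3    # photon number density, m^-3
P = math.pi**2 / 45 * (k * T)**4 / (hbar * c)**3          # radiation pressure, Pa
print(n)   # ~5.5e14 photons per cubic metre
print(P)   # ~2e-6 Pa

Even a chamber pumped free of every atom thus still contains hundreds of trillions of thermal photons per cubic metre, with a definite temperature and pressure, unless its walls are cooled toward absolute zero.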

Virtual particles make a perfect vacuum unrealizable, but leave open the question of attainability of a quantum electrodynamic vacuum or QED vacuum. Predictions of QED vacuum such as spontaneous emission, the Casimir effect and the Lamb shift have been experimentally verified, suggesting QED vacuum is a good model for a high quality realizable vacuum. There are competing theoretical models for vacuum, however. For example, quantum chromodynamic vacuum includes many virtual particles not treated in quantum electrodynamics. The vacuum of quantum gravity treats gravitational effects not included in the Standard Model. It remains an open question whether further refinements in experimental technique ultimately will support another model for realizable vacuum.

Born rule

https://en.wikipedia.org/wiki/Born_rule ...