
Tuesday, February 3, 2015

Symmetry (physics)


From Wikipedia, the free encyclopedia


First Brillouin zone of FCC lattice showing symmetry labels

In physics, a symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation.

A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group). Symmetries are frequently amenable to mathematical formulations such as group representations and can be exploited to simplify many problems.

An important example of such symmetry is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations.

Symmetry as invariance

Invariance is specified mathematically by transformations that leave some quantity unchanged. This idea can apply to basic real-world observations. For example, temperature may be constant throughout a room. Since the temperature is independent of position within the room, the temperature is invariant under a shift in the measurer's position.

Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere "looks".

Invariance in force

The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well.

For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the wire has the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it preserves the field. Suppose some configuration of charges (not necessarily stationary) produces an electric field in some direction; then rotating the configuration of charges (without disturbing the internal dynamics that produces the field) leads to a corresponding rotation of the direction of the electric field. These two properties are interconnected through the more general property that rotating any system of charges causes a corresponding rotation of the electric field.

In Newton's theory of mechanics, given two bodies, each with mass m, starting at the origin and moving along the x-axis in opposite directions, one with speed v_1 and the other with speed v_2, the total kinetic energy of the system (as calculated from an observer at the origin) is \tfrac{1}{2}m(v_1^2 + v_2^2) and remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis.

The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v1 and v2 are interchanged.
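This interchange symmetry can be made concrete in a few lines. A minimal sketch (illustrative values for m, v1, and v2; not part of the original article):

```python
# Sketch: total kinetic energy of two equal-mass bodies is invariant
# under interchanging their speeds v1 and v2.

def kinetic_energy(m, v1, v2):
    """Total kinetic energy (1/2) * m * (v1^2 + v2^2) of two bodies of equal mass m."""
    return 0.5 * m * (v1**2 + v2**2)

m, v1, v2 = 2.0, 3.0, 5.0
assert kinetic_energy(m, v1, v2) == kinetic_energy(m, v2, v1)
```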

Local and global symmetries

Symmetries may be broadly classified as global or local. A global symmetry is one that holds at all points of spacetime, whereas a local symmetry is one that has a different symmetry transformation at different points of spacetime; specifically a local symmetry transformation is parameterised by the spacetime co-ordinates. Local symmetries play an important role in physics as they form the basis for gauge theories.
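A standard illustration of the distinction (not taken from the article itself), written for a complex field \psi: a global transformation applies one constant phase everywhere, while a local transformation lets the phase depend on the spacetime point.

```latex
\psi(x) \rightarrow e^{i\alpha}\,\psi(x) \quad \text{(global)},
\qquad
\psi(x) \rightarrow e^{i\alpha(x)}\,\psi(x) \quad \text{(local)}
```

Demanding invariance under the local version is what forces the introduction of gauge fields.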

Continuous symmetries

 The two examples of rotational symmetry described above - spherical and cylindrical - are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by continuous or smooth functions. An important subclass of continuous symmetries in physics are spacetime symmetries.

Spacetime symmetries

Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time.
  • Time translation: A physical system may have the same features over a certain interval of time \delta t; this is expressed mathematically as invariance under the transformation t \, \rightarrow t + a for any real numbers t and a in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy \, mgh when suspended from a height h above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time (in seconds) t_0 and also at t_0 + 3, say, the particle's total gravitational potential energy will be preserved.
  • Spatial translation: These spatial symmetries are represented by transformations of the form \vec{r} \, \rightarrow \vec{r} + \vec{a} and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room.
  • Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry.
  • Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant.
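The time-translation example above can be checked directly: the gravitational potential energy mgh of a suspended particle does not depend on when it is evaluated. A minimal sketch (the values of m, g, and h are illustrative assumptions):

```python
# Time-translation invariance sketch: U(t) = m*g*h does not contain t,
# so U is unchanged under t -> t + a.

def potential_energy(t, m=1.0, g=9.81, h=2.0):
    """Gravitational potential energy of a suspended particle; t does not
    appear on the right-hand side, which is the invariance itself."""
    return m * g * h

t0 = 5.0
assert potential_energy(t0) == potential_energy(t0 + 3.0)
```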
Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system.

Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries.
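In local coordinates, this preservation of the metric is expressed by Killing's equation (a standard result, stated here for reference), where K is the Killing vector field and \nabla the Levi-Civita connection:

```latex
\nabla_\mu K_\nu + \nabla_\nu K_\mu = 0
```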

Discrete symmetries

A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges.
  • Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation t \rightarrow - t. For example, Newton's second law of motion still holds if, in the equation F \, = m \ddot {r}, t is replaced by -t. This may be illustrated by recording the motion of an object thrown vertically upward (neglecting air resistance) and then playing the recording back. The object traces the same height-versus-time parabola whether the recording is played forwards or in reverse. Thus, position is symmetric with respect to the instant at which the object is at its maximum height.
  • Spatial inversion: These are represented by transformations of the form \vec{r} \, \rightarrow - \vec{r} and indicate an invariance property of a system when the coordinates are 'inverted'. Said another way, these are symmetries between a certain object and its mirror image.
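The time-reversal example above can be verified numerically: the height of a vertically thrown object is symmetric about the instant of maximum height. A small sketch (v0 and g are illustrative values):

```python
# Time-reversal sketch: h(t) = v0*t - g*t^2/2 satisfies
# h(t_peak + s) == h(t_peak - s), where t_peak = v0/g.

def height(t, v0=20.0, g=10.0):
    """Height of a particle thrown upward from the origin at t = 0."""
    return v0 * t - 0.5 * g * t**2

t_peak = 20.0 / 10.0  # v0 / g, the instant of maximum height
for s in (0.1, 0.5, 1.0):
    assert abs(height(t_peak + s) - height(t_peak - s)) < 1e-9
```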

C, P, and T symmetries

The Standard Model of particle physics has three related natural near-symmetries. These state that the actual universe about us is indistinguishable from one where:
  • C-symmetry (charge symmetry): every particle is replaced with its antiparticle;
  • P-symmetry (parity symmetry): the universe is reflected as in a mirror;
  • T-symmetry (time-reversal symmetry): the direction of time is reversed.
T-symmetry is counterintuitive (surely the future and the past are not symmetrical), but this is explained by the fact that the Standard Model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the big bang and the resulting low-entropy state in the "future". Since we perceive the "past" ("future") as having lower (higher) entropy than the present (see perception of time), the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past.

These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics.

Supersymmetry

A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the standard model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the standard model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. If superpartners exist they must have masses greater than current particle accelerators can generate.

Mathematics of physical symmetry

The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists.
Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (through any angle) about any axis of a sphere forms a Lie group called the special orthogonal group SO(3). (The 3 refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations forms a group called the Lorentz group (this may be generalised to the Poincaré group).

Discrete symmetries are described by discrete groups. For example, the symmetries of an equilateral triangle are described by the symmetric group S_3.
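Viewing the triangle's symmetries as permutations of its three vertices, the group axioms can be checked by brute force. A small sketch of this (Python, written for this post rather than taken from the article):

```python
from itertools import permutations

# Sketch: the six symmetries of an equilateral triangle, viewed as
# permutations of its three vertices, form a group (here: S_3).

elements = list(permutations(range(3)))  # all 6 vertex permutations

def compose(p, q):
    """Composition of permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)
assert identity in elements
# Closure: composing any two symmetries gives another symmetry.
assert all(compose(p, q) in elements for p in elements for q in elements)
# Inverses: every symmetry can be undone by some other symmetry.
assert all(any(compose(p, q) == identity for q in elements) for p in elements)
```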

An important type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.)

Also, the reduction by symmetry of the energy functional under the action of a group, and spontaneous symmetry breaking of transformations of symmetry groups, appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology).

Conservation laws and symmetry

The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, the isometry of space gives rise to conservation of (linear) momentum, and isometry of time gives rise to conservation of energy.
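In the Lagrangian formulation, the standard statement is: if a Lagrangian L(q, \dot{q}) is invariant under an infinitesimal transformation q \rightarrow q + \epsilon\,\delta q, then the associated Noether charge

```latex
Q = \frac{\partial L}{\partial \dot{q}}\,\delta q, \qquad \frac{dQ}{dt} = 0
```

is conserved along solutions of the equations of motion. For spatial translations this charge is the linear momentum; for time translations one obtains the energy.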
The following table summarizes some fundamental symmetries and the associated conserved quantity.

Class                     Invariance                           Conserved quantity
Proper orthochronous      translation in time (homogeneity)    energy
Lorentz symmetry          translation in space (homogeneity)   linear momentum
                          rotation in space (isotropy)         angular momentum
Discrete symmetry         P, coordinate inversion              spatial parity
                          C, charge conjugation                charge parity
                          T, time reversal                     time parity
                          CPT                                  product of parities
Internal symmetry         U(1) gauge transformation            electric charge
(independent of           U(1) gauge transformation            lepton generation number
spacetime coordinates)    U(1) gauge transformation            hypercharge
                          U(1)_Y gauge transformation          weak hypercharge
                          U(2) [U(1) × SU(2)]                  electroweak force
                          SU(2) gauge transformation           isospin
                          SU(2)_L gauge transformation         weak isospin
                          P × SU(2)                            G-parity
                          SU(3) "winding number"               baryon number
                          SU(3) gauge transformation           quark color
                          SU(3) (approximate)                  quark flavor
                          S(U(2) × U(3))                       Standard Model
                          [U(1) × SU(2) × SU(3)]

Mathematics

A continuous symmetry in physics is specified by showing how a very small (infinitesimal) transformation affects the various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind; hence they form a Lie algebra.
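The closure of commutators can be seen concretely for rotations. A sketch (the matrices below are the standard so(3) generators; this example is added for illustration, not from the article):

```python
# Sketch: infinitesimal rotation generators close into a Lie algebra.
# For so(3), the commutator [Lx, Ly] = Lx*Ly - Ly*Lx equals Lz.

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(a, b):
    """[a, b] = a*b - b*a."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(3)] for i in range(3)]

# Generators of infinitesimal rotations about the x, y, z axes.
Lx = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
Ly = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
Lz = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

assert commutator(Lx, Ly) == Lz  # the algebra closes
```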

A general coordinate transformation (also known as a diffeomorphism) has, for example, the following infinitesimal effect on a scalar, spinor, and vector field:

\delta\phi(x) = h^{\mu}(x)\partial_{\mu}\phi(x)

\delta\psi^\alpha(x) = h^{\mu}(x)\partial_{\mu}\psi^\alpha(x) +  \partial_\mu h_\nu(x) \sigma_{\mu\nu}^{\alpha \beta} \psi^{\beta}(x)

\delta A_\mu(x) = h^{\nu}(x)\partial_{\nu}A_\mu(x) + A_\nu(x)\partial_\nu h_\mu(x)
where h^{\mu}(x) is an arbitrary vector field generating the transformation. Without gravity, only the Poincaré symmetries are preserved, which restricts h(x) to be of the form:

h^{\mu}(x) = M^{\mu \nu}x_\nu + P^\mu
where M is a constant antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a constant vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and a spinor field:

\delta\psi^\alpha(x) = \lambda(x).\tau^{\alpha\beta}\psi^\beta(x)

\delta A_\mu(x) = \partial_\mu \lambda(x)
where \tau are generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types.
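As a consistency check (a standard computation, stated here for reference), the field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu built from the vector field above is unchanged under this gauge transformation, because partial derivatives commute:

```latex
\delta F_{\mu\nu} = \partial_\mu\,\delta A_\nu - \partial_\nu\,\delta A_\mu
= \partial_\mu \partial_\nu \lambda - \partial_\nu \partial_\mu \lambda = 0
```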

Another symmetry, which is part of some theories of physics and not others, is scale invariance, which involves Weyl transformations of the following kind:

\delta \phi(x) = \Omega(x) \phi(x)
If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant as well. This means that in the absence of gravity h(x) would be restricted to the form:

h^{\mu}(x) = M^{\mu \nu}x_\nu + P^\mu + D x_\mu + K^{\mu} |x|^2 - 2 K^\nu x_\nu x_\mu
with D generating scale transformations and K generating special conformal transformations. For example, N=4 super-Yang-Mills theory has this symmetry while general relativity does not, although other theories of gravity, such as conformal gravity, do. The 'action' of a field theory is invariant under all the symmetries of the theory. Much of modern theoretical physics consists of speculating on the various symmetries the Universe may have and finding the invariants with which to construct field theories as models.

In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries of the string world sheet are equivalent to special transformations which mix an infinite number of fields.

Strong interaction


From Wikipedia, the free encyclopedia


The nucleus of a Helium atom. The two protons have the same charge but still stay together due to the residual nuclear force

In particle physics, the strong interaction is the mechanism responsible for the strong nuclear force (also called the strong force, nuclear strong force or colour force), one of the four fundamental interactions of nature, the others being electromagnetism, the weak interaction and gravitation. Effective only at a distance of a femtometre, it is approximately 100 times stronger than electromagnetism, a million times stronger than the weak interaction and many orders of magnitude stronger than gravitation at that range. It ensures the stability of ordinary matter, as it confines the quark elementary particles into hadron particles such as the proton and neutron, which make up most of the mass of ordinary matter. Furthermore, most of the mass-energy of a common proton or neutron is in the form of the strong force field energy; the individual quarks provide only about 1% of the mass-energy of a proton[citation needed].

The strong interaction is observable in two areas: on a larger scale (about 1 to 3 femtometers (fm)), it is the force that binds protons and neutrons (nucleons) together to form the nucleus of an atom. On the smaller scale (less than about 0.8 fm, the radius of a nucleon), it is the force (carried by gluons) that holds quarks together to form protons, neutrons, and other hadron particles. The strong force inherently has so high a strength that the energy of an object bound by the strong force (a hadron) is high enough to produce new massive particles. Thus, if hadrons are struck by high-energy particles, they give rise to new hadrons instead of emitting freely moving radiation (gluons). This property of the strong force is called colour confinement, and it prevents the free "emission" of strong force: instead, in practice, jets of massive particles are observed.

In the context of binding protons and neutrons together to form atoms, the strong interaction is called the nuclear force (or residual strong force). In this case, it is the residuum of the strong interaction between the quarks that make up the protons and neutrons. As such, the residual strong interaction obeys a quite different distance-dependent behaviour between nucleons from that when it acts to bind quarks within nucleons. The binding energy that is partly released on the breakup of a nucleus is related to the residual strong force, and is exploited in nuclear power and fission-type nuclear weapons.[1][2]

The strong interaction is thought to be mediated by massless particles called gluons, which are exchanged between quarks, antiquarks, and other gluons. Gluons, in turn, are thought to interact with quarks and gluons because all of them carry a type of charge called "colour charge". Colour charge is analogous to electromagnetic charge, but it comes in three types rather than one (+/- red, +/- green, +/- blue), which results in a different type of force, with different rules of behaviour. These rules are detailed in the theory of quantum chromodynamics (QCD), which is the theory of quark-gluon interactions.

Just after the Big Bang, and during the electroweak epoch, the electroweak force separated from the strong force. Although it is expected that a Grand Unified Theory exists to describe this, no such theory has been successfully formulated, and the unification remains an unsolved problem in physics.

History

Before the 1970s, physicists were uncertain about the binding mechanism of the atomic nucleus. It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge, while neutrons were electrically neutral. However, these facts seemed to contradict one another. By physical understanding at that time, positive charges would repel one another and the nucleus should therefore fly apart. However, this was never observed. New physics was needed to explain this phenomenon.

A stronger attractive force was postulated to explain how the atomic nucleus was bound together despite the protons' mutual electromagnetic repulsion. This hypothesized force was called the strong force, which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus.

It was later discovered that protons and neutrons were not fundamental particles, but were made up of constituent particles called quarks. The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together in the protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a colour charge, although it has no relation to visible colour.[3] Quarks with unlike colour charge attract one another as a result of the strong interaction, which is mediated by particles called gluons.

Details

The word strong is used since the strong interaction is the "strongest" of the four fundamental forces; its strength is around 10^2 times that of the electromagnetic force, some 10^6 times as great as that of the weak force, and about 10^39 times that of gravitation, at a distance of a femtometer or less.

Behaviour of the strong force

The contemporary understanding of strong force is described by quantum chromodynamics (QCD), a part of the standard model of particle physics. Mathematically, QCD is a non-Abelian gauge theory based on a local (gauge) symmetry group called SU(3).

Quarks and gluons are the only fundamental particles that carry non-vanishing colour charge, and hence participate in strong interactions. The strong force itself acts directly only on elementary quark and gluon particles.

All quarks and gluons in QCD interact with each other through the strong force. The strength of interaction is parametrized by the strong coupling constant. This strength is modified by the gauge colour charge of the particle, a group theoretical property.

The strong force acts between quarks. Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance. After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about 10,000 newtons, no matter how much farther the quarks are separated.[4] In QCD this phenomenon is called colour confinement; it implies that only hadrons, not individual free quarks, can be observed. The explanation is that the amount of work done against a force of 10,000 newtons (about the weight of a one-metric-ton mass on the surface of the Earth) is enough to create particle-antiparticle pairs within a very short distance of an interaction. In simple terms, the very energy applied to pull two quarks apart will create a pair of new quarks that will pair up with the original ones. The failure of all experiments that have searched for free quarks is considered to be evidence for this phenomenon.
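The order of magnitude here is easy to check. A rough arithmetic sketch (not a QCD calculation; the force and distance are the round figures from the text above):

```python
# Rough sketch: work done pulling two quarks apart by ~1 fm against a
# ~10,000 N force, converted to MeV.

FORCE_N = 1.0e4            # ~10,000 N, the figure quoted in the text
DISTANCE_M = 1.0e-15       # 1 femtometre
EV_PER_JOULE = 1.0 / 1.602e-19

work_joules = FORCE_N * DISTANCE_M          # W = F * d
work_mev = work_joules * EV_PER_JOULE / 1.0e6
# ~62 MeV per femtometre of separation, so a few femtometres supplies
# energy comparable to the rest energy of light quark-antiquark pairs.
assert 50 < work_mev < 80
```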

The elementary quark and gluon particles affected are unobservable directly, but they instead emerge as jets of newly created hadrons, whenever energy is deposited into a quark-quark bond, as when a quark in a proton is struck by a very fast quark (in an impacting proton) during a particle accelerator experiment. However, quark–gluon plasmas have been observed.[citation needed]

Not every quark in the universe attracts every other quark in the above distance-independent manner, since colour confinement implies that the strong force acts without diminishing with distance only between pairs of individual quarks, while in collections of bound quarks (i.e., hadrons) the net colour charge of the quarks cancels out, as seen from far away. Collections of quarks (hadrons) therefore appear (nearly) without colour charge, and the strong force is therefore nearly absent between these hadrons (i.e., between baryons or mesons). However, the cancellation is not quite perfect: a small residual force remains, known as the residual strong force (described below). This residual force does diminish rapidly with distance, and is thus very short-range (effectively a few femtometres). It manifests as a force between the "colourless" hadrons, and is therefore sometimes known as the strong nuclear force or simply the nuclear force.

Residual strong force


An animation of the nuclear force (or residual strong force) interaction between a proton and a neutron. The small coloured double circles are gluons, which can be seen binding the proton and neutron together. These gluons also hold the quark-antiquark combination called the pion together, and thus help transmit a residual part of the strong force even between colourless hadrons. Anticolours are shown as per this diagram.

The residual effect of the strong force is called the nuclear force. The nuclear force acts between hadrons, such as mesons or the nucleons in atomic nuclei. This "residual strong force", acting indirectly, transmits gluons that form part of the virtual pi and rho mesons, which, in turn, transmit the nuclear force between nucleons.

The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold the atoms internally together.[5]

Unlike the strong force itself, the nuclear force, or residual strong force, does diminish in strength, and in fact diminishes rapidly with distance. The decrease is approximately as a negative exponential power of distance, though there is no simple expression known for this; see Yukawa potential. This fact, together with the less-rapid decrease of the disruptive electromagnetic force between protons with distance, causes the instability of larger atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead).
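The contrast between this rapid falloff and a bare 1/r force can be sketched with the Yukawa form V(r) ∝ -exp(-r/r0)/r. The range r0 ≈ 1.4 fm (roughly the reduced pion Compton wavelength) is an illustrative assumption, not a value from the text:

```python
import math

# Sketch: Yukawa-type falloff versus a Coulomb-like 1/r potential.

def yukawa(r_fm, r0_fm=1.4):
    """|V(r)| for a Yukawa potential with range r0 (magnitudes only)."""
    return math.exp(-r_fm / r0_fm) / r_fm

def coulomb_like(r_fm):
    """|V(r)| for a bare 1/r potential."""
    return 1.0 / r_fm

# Going from 1 fm to 5 fm, the Yukawa potential drops far faster than 1/r:
ratio_yukawa = yukawa(5.0) / yukawa(1.0)
ratio_coulomb = coulomb_like(5.0) / coulomb_like(1.0)
assert ratio_yukawa < ratio_coulomb / 10
```

The exponential factor is why the residual force is effectively confined to a few femtometres, while the protons' electromagnetic repulsion, falling only as 1/r², still acts across the whole nucleus.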

Simulated reality


From Wikipedia, the free encyclopedia

Simulated reality is the hypothesis that reality could be simulated—for example by computer simulation—to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation.
This is quite different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of actuality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to separate from "true" reality.

There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.

Types of simulation

Brain-computer interface

In brain-computer interface simulations, each participant enters from outside, directly connecting their brain to the simulation computer. The computer transmits sensory data to the participant, reads and responds to their desires and actions in return; in this manner they interact with the simulated world and receive feedback from it. The participant may be induced by any number of possible means to forget, temporarily or otherwise, that they are inside a virtual realm (e.g. "passing through the veil", a term borrowed from Christian tradition, which describes the passage of a soul from an earthly body to an afterlife). While inside the simulation, the participant's consciousness is represented by an avatar, which can look very different from the participant's actual appearance.

Virtual people

In a virtual-people simulation, every inhabitant is a native of the simulated world. They do not have a "real" body in the external reality of the physical world. Instead, each is a fully simulated entity, possessing an appropriate level of consciousness that is implemented using the simulation's own logic (i.e. using its own physics). As such, they could be downloaded from one simulation to another, or even archived and resurrected at a later time. It is also possible that a simulated entity could be moved out of the simulation entirely by means of mind transfer into a synthetic body.

Arguments

Simulation argument

The simulation hypothesis was first published by Hans Moravec.[1][2][3] Later, the philosopher Nick Bostrom developed an expanded argument examining the probability of our reality being a simulacrum.[4] His argument states that at least one of the following statements is very likely to be true:
1. Human civilization is unlikely to reach a level of technological maturity capable of producing simulated realities, or such simulations are physically impossible to construct.
2. A comparable civilization reaching aforementioned technological status will likely not produce a significant number of simulated realities (one that might push the probable existence of digital entities beyond the probable number of "real" entities in a Universe) for any of a number of reasons, such as, diversion of computational processing power for other tasks, ethical considerations of holding entities captive in simulated realities, etc.
3. Any entities with our general set of experiences are almost certainly living in a simulation.
In greater detail, Bostrom is attempting to prove a tripartite disjunction, that at least one of these propositions must be true. His argument rests on the premise that given sufficiently advanced technology, it is possible to represent the populated surface of the Earth without recourse to digital physics; that the qualia experienced by a simulated consciousness is comparable or equivalent to that of a naturally occurring human consciousness; and that one or more levels of simulation within simulations would be feasible given only a modest expenditure of computational resources in the real world.

If one assumes first that humans will not be destroyed or destroy themselves before developing such a technology, and, next, that human descendants will have no overriding legal restrictions or moral compunctions against simulating biospheres or their own historical biosphere, then it would be unreasonable to count ourselves among the small minority of genuine organisms who, sooner or later, will be vastly outnumbered by artificial simulations.

Epistemologically, it is not impossible to tell whether we are living in a simulation. For example, Bostrom suggests that a window could pop up saying: "You are living in a simulation. Click here for more information." However, imperfections in a simulated environment might be difficult for the native inhabitants to identify, and for purposes of authenticity, even the simulated memory of a blatant revelation might be purged programmatically. Nonetheless, should any evidence come to light, either for or against the skeptical hypothesis, it would radically alter the aforementioned probability.

The simulation argument also has implications for existential risks. If we are living in a simulation, then it's possible that our simulation could get shut down. Many futurists have speculated about how we can avoid this outcome. Ray Kurzweil argues in The Singularity is Near that we should be interesting to our simulators, and that bringing about the Singularity is probably the most interesting event that could happen. The philosopher Phil Torres has argued that the simulation argument itself leads to the conclusion that, if we run simulations in the future, then there almost certainly exists a stack of nested simulations, with ours located towards the bottom. Since annihilation is inherited downwards, any terminal event in a simulation "above" ours would be a terminal event for us. If there are many simulations above us, then the risk of an existential catastrophe could be significant.[5]

Relativity of reality

As to the question of whether we are living in a simulated reality or a 'real' one, the answer may be 'indistinguishable', in principle. In a commemorative article dedicated to 'The World Year of Physics 2005', physicist Bin-Guang Ma proposed the theory of 'relativity of reality'.[6][unreliable source?] The notion appears in ancient philosophy, for example in Zhuangzi's 'Butterfly Dream', and in analytical psychology.[7] Without special knowledge of a reference world, one cannot say with absolute skeptical certainty that one is experiencing "reality".

Computationalism

Computationalism is a theory in the philosophy of mind stating that cognition is a form of computation. It is relevant to the simulation hypothesis in that it illustrates how a simulation could contain conscious subjects, as required by a "virtual people" simulation. For example, it is well known that physical systems can be simulated to some degree of accuracy. If computationalism is correct, and if there is no problem in generating artificial consciousness or cognition, it would establish the theoretical possibility of a simulated reality. However, the relationship between cognition and the phenomenal qualia of consciousness is disputed. It is possible that consciousness requires a vital substrate that a computer cannot provide, and that simulated people, while behaving appropriately, would be philosophical zombies. This would undermine Nick Bostrom's simulation argument: we cannot be simulated consciousnesses if consciousness, as we know it, cannot be simulated. The skeptical hypothesis nevertheless remains intact: we could still be envatted brains, existing as conscious beings within a simulated environment, even if consciousness cannot be simulated.
Some theorists[8][9] have argued that if the "consciousness-is-computation" version of computationalism and mathematical realism (or radical mathematical Platonism)[10] are true, then consciousness is computation, which is in principle platform-independent and thus admits of simulation. This argument states that a "Platonic realm" or ultimate ensemble would contain every algorithm, including those which implement consciousness. Hans Moravec has explored the simulation hypothesis and has argued for a kind of mathematical Platonism according to which every object (including, e.g., a stone) can be regarded as implementing every possible computation.[1]

Dreaming

A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result, the "dream hypothesis" cannot be ruled out, although it has been argued that common sense and considerations of simplicity rule against it.[11] One of the first philosophers to question the distinction between reality and dreams was Zhuangzi, a Chinese philosopher from the 4th century BC. He phrased the problem as the well-known "Butterfly Dream," which went as follows:
Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn't know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn't know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things. (2, tr. Burton Watson 1968:49)
The philosophical underpinnings of this argument are also brought up by Descartes, who was one of the first Western philosophers to do so. In Meditations on First Philosophy, he states "... there are no certain indications by which we may clearly distinguish wakefulness from sleep",[12] and goes on to conclude that "It is possible that I am dreaming right now and that all of my perceptions are false".[12]
Chalmers (2003) discusses the dream hypothesis, and notes that this comes in two distinct forms:
  • that he is currently dreaming, in which case many of his beliefs about the world are incorrect;
  • that he has always been dreaming, in which case the objects he perceives actually exist, albeit in his imagination.[13]
Both the dream argument and the simulation hypothesis can be regarded as skeptical hypotheses; however, in raising these doubts, just as Descartes noted that his own thinking led him to be convinced of his own existence, the very existence of the argument is testament to the possibility of its own truth.

Another state of mind in which some argue an individual's perceptions have no physical basis in the real world is psychosis, though psychosis may have a physical basis in the real world, and explanations vary.

Computability of physics

A decisive refutation of any claim that our reality is computer-simulated would be the discovery of some uncomputable physics, because if reality is doing something that no computer can do, it cannot be a computer simulation. (Computability here means computability by a Turing machine; hypercomputation, or super-Turing computation, introduces other possibilities which will be dealt with separately.) In fact, known physics is held to be (Turing) computable,[14] but the statement "physics is computable" needs to be qualified in various ways. In computability theory, a real number (one with an infinite number of digits) is said to be computable if a Turing machine can continue to output its digits endlessly, never reaching a "final digit".[15] This runs counter, however, to the idea of simulating physics in real time (or any plausible kind of time).
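This digit-streaming sense of "computable" can be made concrete with a short sketch: a procedure that emits correct decimal digits of sqrt(2) one at a time, forever, using exact integer arithmetic (Python's math.isqrt) so that no emitted digit ever needs revising and no "final digit" is ever reached.

```python
# A minimal sketch of a computable real in the Turing sense: a procedure
# that emits the decimal digits of sqrt(2) endlessly, using exact integer
# arithmetic so every digit it outputs is final.
from math import isqrt

def sqrt2_digits():
    """Yield the decimal digits of sqrt(2) = 1.41421356... indefinitely."""
    n = 2          # we compute isqrt(2 * 10^(2k)) for growing k
    prev = 0
    while True:
        d = isqrt(n)           # integer sqrt of 2, shifted by 2k decimal places
        yield d - prev * 10    # the newly determined digit
        prev = d
        n *= 100               # expose the next decimal place

gen = sqrt2_digits()
first = [next(gen) for _ in range(8)]
# first == [1, 4, 1, 4, 2, 1, 3, 5]
```

The procedure never terminates, which is exactly the tension with real-time simulation: full precision is only ever approached, not completed.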
 
Known physical laws (including those of quantum mechanics) are very much infused with real numbers and continua, and the universe seems to be able to decide their values on a moment-by-moment basis. As Richard Feynman put it:[16]
"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities".
The objection could be made that the simulation does not have to run in "real time".[17] This misses an important point, though: the shortfall is not linear; rather, it is a matter of performing an infinite number of computational steps in a finite time.[18]

Note that these objections all relate to the idea of reality being exactly simulated. Ordinary computer simulations as used by physicists are always approximations.

These objections do not apply if the hypothetical simulation is being run on a hypercomputer, a hypothetical machine more powerful than a Turing machine.[19] Unfortunately, there is no way of working out whether computers running a simulation are capable of doing things that computers in the simulation cannot do. No one has shown that the laws of physics inside a simulation and those outside it have to be the same, and simulations of different physical laws have been constructed.[20] The problem is that there is no evidence that could conceivably be produced to show that the universe is not any kind of computer, making the simulation hypothesis unfalsifiable and therefore scientifically unacceptable, at least by Popperian standards.[21]

All conventional computers, however, are less than hypercomputational, and the simulated reality hypothesis is usually expressed in terms of conventional computers, i.e. Turing machines.

Roger Penrose, an English mathematical physicist, presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. He sees the collapse of the quantum wavefunction as playing an important role in brain function. (See consciousness causes collapse).

CantGoTu environments

In his book The Fabric of Reality, David Deutsch discusses how the limits to computability imposed by Gödel's Incompleteness Theorem affect the Virtual Reality rendering process.[22][23] In order to do this, Deutsch invents the notion of a CantGoTu environment (named after Cantor, Gödel, and Turing), using Cantor's diagonal argument to construct an 'impossible' Virtual Reality which a physical VR generator would not be able to generate. The way that this works is to imagine that all VR environments renderable by such a generator can be enumerated, and that we label them VR1, VR2, etc. Slicing time up into discrete chunks, we can create an environment which is unlike VR1 in the first timeslice, unlike VR2 in the second timeslice, and so on. This environment is not in the list, and so it cannot be generated by the VR generator. Deutsch then goes on to discuss a universal VR generator, which as a physical device would not be able to render all possible environments, but would be able to render those environments which can be rendered by all other physical VR generators. He argues that 'an environment which can be rendered' corresponds to a set of mathematical questions whose answers can be calculated, and discusses various forms of the Turing Principle, which in its initial form refers to the fact that it is possible to build a universal computer which can be programmed to execute any computation that any other machine can do. Attempts to capture the process of virtual reality rendering provide us with a version which states: "It is possible to build a virtual-reality generator, whose repertoire includes every physically possible environment". In other words, a single, buildable physical object can mimic all the behaviours and responses of any other physically possible process or object. This, it is claimed, is what makes reality comprehensible.
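The construction is a direct instance of Cantor's diagonal argument, and can be sketched in a few lines of Python. The enumeration of environments below is an arbitrary stand-in; any enumeration would do.

```python
# A sketch of Deutsch's diagonal construction. Suppose the environments a
# VR generator can render are enumerated as VR1, VR2, ..., each described
# by what it shows in each discrete timeslice (here just 0 or 1, with an
# arbitrary stand-in enumeration).
def vr(i):
    """Hypothetical enumeration: environment i's content at timeslice t."""
    return lambda t: (i * t) % 2

def cantgotu(t):
    """Differ from VR_t at timeslice t, so the result matches no VR_i."""
    return 1 - vr(t)(t)

# The diagonal environment disagrees with every enumerated one somewhere:
assert all(cantgotu(i) != vr(i)(i) for i in range(100))
```

Because the constructed environment flips VR_t's behaviour at exactly timeslice t, it cannot appear anywhere in the enumeration, and so cannot be rendered by the generator.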
Later in the book, Deutsch argues for a very strong version of the Turing principle, namely: "It is possible to build a virtual reality generator whose repertoire includes every physically possible environment." However, in order to include every physically possible environment, the computer would have to be able to include a recursive simulation of the environment containing itself. Even so, a computer running a simulation need not run every possible physical moment to be plausible to its inhabitants.

Nested simulations

The existence of simulated reality is unprovable in any concrete sense: any "evidence" that is directly observed could be another simulation itself. In other words, there is an infinite regress problem with the argument. Even if we are a simulated reality, there is no way to be sure the beings running the simulation are not themselves a simulation, and the operators of that simulation are not a simulation.[24]

"Recursive simulation involves a simulation, or an entity in the simulation, creating another instance of the same simulation, running it and using its results" (Pooch and Sullivan 2000).[25]

Peer-to-Peer Explanation of Quantum Phenomena

In two recent articles, the philosopher Marcus Arvan has argued that a new version of the simulation hypothesis, the Peer-to-Peer Simulation Hypothesis, provides a unified explanation of a wide variety of quantum phenomena. According to Arvan, peer-to-peer networking (networking involving no central "dedicated server") inherently gives rise to (i) quantum superposition, (ii) quantum indeterminacy, (iii) the quantum measurement problem, (iv) wave-particle duality, (v) quantum wave-function "collapse", (vi) quantum entanglement, (vii) a minimum space-time distance (e.g. the Planck length), and (viii) the relativity of time to observers.[26][27]

In fiction

Simulated reality is a theme that pre-dates science fiction. In Medieval and Renaissance religious theatre, the concept of the "world as theater" is frequent. Simulated reality in fiction has been explored by many authors, game designers, and film directors.

Artificial life


From Wikipedia, the free encyclopedia

This article is about a field of research. For artificially created life forms, see synthetic life.
Artificial life (often abbreviated ALife or A-Life[1]) is a field of study and an associated art form which examine systems related to life, its processes, and its evolution, through the use of simulations with computer models, robotics, and biochemistry.[2] The discipline was named by Christopher Langton, an American computer scientist, in 1986.[3] There are three main kinds of alife,[4] named for their approaches: soft,[5] from software; hard,[6] from hardware; and wet, from biochemistry. Artificial life imitates traditional biology by trying to recreate some aspects of biological phenomena.[7]

A Braitenberg simulation, programmed in breve, an artificial life simulator

Overview

Artificial life studies the logic of living systems in artificial environments in order to gain a deeper understanding of the complex information processing that defines such systems.
Also sometimes included under the umbrella term "artificial life" are agent-based systems, which are used to study the emergent properties of societies of agents.

While life is, by definition, alive, artificial life is generally confined to a digital environment and existence.

Philosophy

The modeling philosophy of alife strongly differs from traditional modeling by studying not only “life-as-we-know-it” but also “life-as-it-might-be”.[8]

A traditional model of a biological system focuses on capturing its most important parameters. In contrast, an alife modeling approach generally seeks to decipher the simplest and most general principles underlying life and implement them in a simulation. The simulation then offers the possibility to analyse new and different lifelike systems.

Vladimir Georgievich Red'ko proposed to generalize this distinction to the modeling of any process, leading to the more general distinction between "processes-as-we-know-them" and "processes-as-they-could-be".[9]

At present, the commonly accepted definition of life does not consider any current alife simulations or software to be alive, and they do not constitute part of the evolutionary process of any ecosystem. However, different opinions about artificial life's potential have arisen:
  • The strong alife (cf. Strong AI) position states that "life is a process which can be abstracted away from any particular medium" (John von Neumann). Notably, Tom Ray declared that his program Tierra is not simulating life in a computer but synthesizing it.[citation needed]
  • The weak alife position denies the possibility of generating a "living process" outside of a chemical solution. Its researchers try instead to simulate life processes to understand the underlying mechanics of biological phenomena.

Software-based - "soft"

Techniques

  • Neural networks are sometimes used to model the brain of an agent. Although traditionally more of an artificial intelligence technique, neural nets can be important for simulating population dynamics of organisms that can learn. The symbiosis between learning and evolution is central to theories about the development of instincts in organisms with higher neurological complexity, as in, for instance, the Baldwin effect.

Notable simulators

This is a list of artificial life/digital organism simulators, organized by the method of creature definition.
Name                          | Driven by           | Started     | Ended
Avida                         | executable DNA      | 1993        | NA
breve                         | executable DNA      | 2006        | NA
Creatures                     | neural net          | mid-1990s   |
Critterding                   | neural net          | 2005        | NA
Darwinbots                    | executable DNA      | 2003        |
DigiHive                      | executable DNA      | 2006        | 2009
DOSE                          | executable DNA      | 2012        | NA
EcoSim                        | Fuzzy Cognitive Map | 2009        | NA
Evolve 4.0                    | executable DNA      | 1996        | 2007
Framsticks                    | executable DNA      | 1996        | NA
Noble Ape                     | neural net          | 1996        | NA
OpenWorm                      | Geppetto            | 2011        | NA
Polyworld                     | neural net          | 1990        | NA
Primordial Life               | executable DNA      | 1994        | 2003
ScriptBots                    | executable DNA      | 2010        | NA
TechnoSphere                  | modules             | 1995        |
Tierra                        | executable DNA      | early 1990s | ?
3D Virtual Creature Evolution | neural net          | 2008        | NA

Program-based

Program-based simulations contain organisms with a complex DNA language, usually Turing-complete. This language is more often in the form of a computer program than actual biological DNA; assembly derivatives are the most common languages used. An organism "lives" when its code is executed, and there are usually various methods allowing self-replication. Mutations are generally implemented as random changes to the code. Use of cellular automata is common but not required. Artificial intelligence and multi-agent systems/programs are further examples of this approach.
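A minimal sketch of the program-based idea, using a made-up four-instruction set for a toy accumulator machine (real systems such as Tierra and Avida use much richer, self-replicating assembly-like languages):

```python
import random

# A toy "program-based" organism: the genome is a list of instructions,
# execution is the organism's "life", and mutation is a random change to
# the code. The instruction set and its semantics are illustrative.
OPS = ("INC", "DEC", "DUP", "NOP")

def run(genome, steps=100):
    """Execute the genome on a one-cell accumulator; return its final value."""
    acc = 0
    for op in genome[:steps]:
        if op == "INC":
            acc += 1
        elif op == "DEC":
            acc -= 1
        elif op == "DUP":
            acc *= 2
    return acc

def mutate(genome, rate=0.1, rng=random):
    """Point mutation: each instruction may be replaced by a random one."""
    return [rng.choice(OPS) if rng.random() < rate else op for op in genome]

parent = ["INC", "INC", "DUP", "INC"]   # evaluates to (1+1)*2+1 = 5
child = mutate(parent)                  # usually similar, occasionally novel
```

Selection is not shown; in a full simulation, genomes that replicate more effectively come to dominate the population.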

Module-based

Individual modules are added to a creature. These modules modify the creature's behaviors and characteristics either directly, by hard coding into the simulation (leg type A increases speed and metabolism), or indirectly, through the emergent interactions between a creature's modules (leg type A moves up and down with a frequency of X, which interacts with other legs to create motion).
Generally these are simulators which emphasize user creation and accessibility over mutation and evolution.

Parameter-based

Organisms are generally constructed with pre-defined and fixed behaviors that are controlled by various parameters that mutate. That is, each organism contains a collection of numbers or other finite parameters. Each parameter controls one or several aspects of an organism in a well-defined way.
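A sketch of the parameter-based scheme, with fixed behavior and a small mutating parameter vector (the parameter names here are illustrative, not drawn from any particular simulator):

```python
import random

# A parameter-based organism: behavior is pre-defined and fixed, and only
# a small vector of numbers mutates between generations.
def make_organism(speed=1.0, sense_range=2.0, metabolism=0.5):
    return {"speed": speed, "sense_range": sense_range, "metabolism": metabolism}

def mutate(org, sigma=0.1, rng=random):
    """Offspring inherit each parameter with small Gaussian noise added."""
    return {k: max(0.0, v + rng.gauss(0.0, sigma)) for k, v in org.items()}

parent = make_organism()
child = mutate(parent)
# child has the same well-defined parameters, slightly perturbed
```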

Neural net–based

These simulations have creatures that learn and grow using neural nets or a close derivative. Emphasis is often, although not always, more on learning than on natural selection.

Hardware-based - "hard"

Hardware-based artificial life mainly consists of robots, that is, automatically guided machines able to perform tasks on their own.

Biochemical-based - "wet"

Biochemical-based life is studied in the field of synthetic biology. It involves e.g. the creation of synthetic DNA. The term "wet" is an extension of the term "wetware".

Related subjects

  1. Artificial intelligence has traditionally used a top down approach, while alife generally works from the bottom up.[10]
  2. Artificial chemistry started as a method within the alife community to abstract the processes of chemical reactions.
  3. Evolutionary algorithms are a practical application of the weak alife principle applied to optimization problems. Many optimization algorithms have been crafted which borrow from or closely mirror alife techniques. The primary difference lies in explicitly defining the fitness of an agent by its ability to solve a problem, instead of its ability to find food, reproduce, or avoid death.[citation needed]
  4. Multi-agent system - A multi-agent system is a computerized system composed of multiple interacting intelligent agents within an environment.
  5. Evolutionary art uses techniques and methods from artificial life to create new forms of art.
  6. Evolutionary music uses similar techniques, but applied to music instead of visual art.
  7. Abiogenesis and the origin of life sometimes employ alife methodologies as well.
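The contrast drawn in item 3 above, fitness defined by problem-solving ability rather than by survival, can be sketched as a minimal evolutionary algorithm (the target string, population size, and mutation rate are illustrative choices):

```python
import random

# A minimal evolutionary algorithm: fitness is defined explicitly by
# problem-solving ability (matching a target string), not by an agent's
# ability to find food, reproduce, or avoid death.
def evolve(target="HELLO", pop_size=50, rate=0.05, rng=random.Random(0)):
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    pop = ["".join(rng.choice(letters) for _ in target) for _ in range(pop_size)]
    for _ in range(1000):
        best = max(pop, key=fitness)
        if best == target:
            return best
        # next generation: mutated copies of the current best individual
        pop = ["".join(rng.choice(letters) if rng.random() < rate else c
                       for c in best) for _ in range(pop_size)]
    return best
```

In an alife setting, the explicit fitness function would be replaced by implicit survival pressures within a simulated environment.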

Criticism

Alife has had a controversial history. John Maynard Smith criticized certain artificial life work in 1994 as "fact-free science".[11] However, the recent publication of artificial life articles in widely read journals such as Science and Nature is evidence that artificial life techniques are becoming more accepted in the mainstream, at least as a method of studying evolution.[12]

Curiosity

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Curiosity...