
Friday, July 20, 2018

de Sitter invariant special relativity

From Wikipedia, the free encyclopedia

In mathematical physics, de Sitter invariant special relativity is the speculative idea that the fundamental symmetry group of spacetime is the indefinite orthogonal group SO(4,1), that of de Sitter space. In the standard theory of general relativity, de Sitter space is a highly symmetrical special vacuum solution, which requires a cosmological constant or the stress–energy of a constant scalar field to sustain.

The idea of de Sitter invariant relativity is to require that the laws of physics are not fundamentally invariant under the Poincaré group of special relativity, but under the symmetry group of de Sitter space instead. With this assumption, empty space automatically has de Sitter symmetry, and what would normally be called the cosmological constant in general relativity becomes a fundamental dimensional parameter describing the symmetry structure of spacetime.

First proposed by Luigi Fantappiè in 1954, the theory remained obscure until it was rediscovered in 1968 by Henri Bacry and Jean-Marc Lévy-Leblond. In 1972, Freeman Dyson popularized it as a hypothetical road by which mathematicians could have guessed part of the structure of general relativity before it was discovered.[1] The discovery of the accelerating expansion of the universe has led to a revival of interest in de Sitter invariant theories, in conjunction with other speculative proposals for new physics, like doubly special relativity.

Introduction

De Sitter suggested that spacetime curvature might not be due solely to gravity,[2] but he did not give any mathematical details of how this could be accomplished. In 1968 Henri Bacry and Jean-Marc Lévy-Leblond showed that the de Sitter group was the most general group compatible with isotropy, homogeneity and boost invariance.[3] Later, Freeman Dyson[1] advocated this as an approach to making the mathematical structure of general relativity more self-evident.
Minkowski's unification of space and time within special relativity replaces the Galilean group of Newtonian mechanics with the Lorentz group. This is called a unification of space and time because the Lorentz group is simple, while the Galilean group is a semi-direct product of rotations and Galilean boosts. This means that the Lorentz group mixes up space and time such that they cannot be disentangled, while the Galilean group treats time as a parameter with different units of measurement than space.

An analogous unification happens with the ordinary rotation group in three dimensions. Imagine a nearly flat world, one in which pancake-like creatures wander around on a pancake-flat landscape. Their conventional unit of height might be the micrometre (μm), since that is how high typical structures are in their world, while their unit of horizontal distance could be the metre, the scale of their bodies' horizontal extent. Such creatures would describe the basic symmetry of their world as SO(2), the known rotations in the horizontal (x–y) plane. Later on, they might discover rotations around the x- and y-axes—and in their everyday experience such rotations might always be by an infinitesimal angle, so that these rotations would effectively commute with each other.

The rotations around the horizontal axes would tilt objects by an infinitesimal amount. The tilt in the x–z plane (the "x-tilt") would be one parameter, and the tilt in the y–z plane (the "y-tilt") another. The symmetry group of this pancake world is then SO(2) semidirect product with R², meaning a two-dimensional rotation plus two extra parameters, the x-tilt and the y-tilt. The reason it is a semidirect product is that, when you rotate, the x-tilt and the y-tilt rotate into each other, since they form a vector and not two scalars. In this world, the difference in height between two objects at the same x, y would be a rotationally invariant quantity unrelated to length and width. The z-coordinate is effectively separate from x and y.

Eventually, experiments at large angles would convince the creatures that the symmetry of the world is SO(3). Then they would understand that z is really the same as x and y, since they can be mixed up by rotations. The SO(2) semidirect product R² limit would be understood as the limit in which the free parameter μ, the ratio of the height range μm to the length range m, approaches 0. The Lorentz group is analogous: it is a simple group that turns into the Galilean group when the time range is made long compared to the space range, that is, where velocities may be regarded as infinitesimal, or equivalently in the limit c → ∞, in which relativistic effects become unobservable.
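
A minimal numerical sketch of this contraction (in Python with NumPy; the script is illustrative and not part of the original article) shows that rotations about the x- and y-axes fail to commute by an amount that shrinks with the square of the angle, so in the small-angle limit the two tilts effectively commute, just as the pancake creatures would observe:

import numpy as np

def rot_x(a):
    # Rotation by angle a about the x-axis
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(b):
    # Rotation by angle b about the y-axis
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [ 0,         1, 0        ],
                     [-np.sin(b), 0, np.cos(b)]])

for angle in (1.0, 0.1, 0.01):
    # The commutator norm scales roughly as angle**2: for tiny tilts the
    # x-tilt and y-tilt commute, recovering the pancake-world symmetry
    # SO(2) semidirect product with R².
    comm = rot_x(angle) @ rot_y(angle) - rot_y(angle) @ rot_x(angle)
    print(angle, np.linalg.norm(comm))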

The symmetry group of special relativity is not entirely simple, because of translations. The Lorentz group is the set of transformations that keep the origin fixed, but translations are not included. The full Poincaré group is the semi-direct product of translations with the Lorentz group. If translations are to be similar to elements of the Lorentz group, then, since boosts are non-commutative, translations would also become non-commutative.

In the pancake world, this would manifest if the creatures were living on an enormous sphere rather than on a plane. In that case, as they wander around their sphere, they would eventually come to realize that translations are not entirely separate from rotations: moving around on the surface of the sphere and returning to the starting point, they find that they have been rotated by the holonomy of parallel transport on the sphere. If the universe is the same everywhere (homogeneous) and there are no preferred directions (isotropic), then there are not many options for the symmetry group: they either live on a flat plane, on a sphere of constant positive curvature, or on a Lobachevsky plane of constant negative curvature. If they are not living on the plane, they can describe positions using dimensionless angles, the same parameters that describe rotations, so that translations and rotations are nominally unified.

In relativity, if translations mix up nontrivially with rotations, but the universe is still homogeneous and isotropic, the only option is that spacetime has a uniform scalar curvature. If the curvature is positive, the analog of the sphere case for the two-dimensional creatures, the spacetime is de Sitter space and its symmetry group is the de Sitter group rather than the Poincaré group.

De Sitter special relativity postulates that empty space has de Sitter symmetry as a fundamental law of nature. This means that spacetime is slightly curved even in the absence of matter or energy. This residual curvature implies a positive cosmological constant Λ, to be determined by observation. Because of the small magnitude of the constant, the Poincaré group of special relativity is indistinguishable from the de Sitter group for most practical purposes.
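
For orientation, the cosmological constant and the de Sitter radius ℓ are tied together, for four-dimensional de Sitter space in units where c = 1, by the standard relation

Λ = 3/ℓ².

Taking the observed value Λ ≈ 1.1 × 10⁻⁵² m⁻² gives ℓ = √(3/Λ) ≈ 1.7 × 10²⁶ m, roughly 18 billion light-years, which is why Poincaré symmetry remains an excellent approximation at laboratory and even galactic scales.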

Modern proponents of this idea, such as S. Cacciatori, V. Gorini and A. Kamenshchik,[4] have reinterpreted this theory as physics, not just mathematics. They postulate that the acceleration of the expansion of the universe is not entirely due to vacuum energy, but at least partly due to the kinematics of the de Sitter group, which would replace the Poincaré group.

A modification of this idea allows Λ to change with time, so that inflation may come from the cosmological constant being larger near the Big Bang than it is today. It can also be viewed as a different approach to the problem of quantum gravity.[5]

High energy

The Poincaré group contracts to the Galilean group for low-velocity kinematics, meaning that when all velocities are small the Poincaré group "morphs" into the Galilean group. (This can be made precise with İnönü and Wigner's concept of group contraction.[6])
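
As a concrete illustration of such a contraction, consider a Lorentz boost along x (a standard textbook limit):

t′ = γ(t − vx/c²),  x′ = γ(x − vt),  γ = 1/√(1 − v²/c²).

Holding v fixed and letting c → ∞ gives γ → 1 and vx/c² → 0, so the boost degenerates into the Galilean transformation t′ = t, x′ = x − vt.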

Similarly, the de Sitter group contracts to the Poincaré group for short-distance kinematics, when the magnitudes of all translations considered are very small compared to the de Sitter radius.[5] In quantum mechanics, short distances are probed by high energies, so that for energies above a very small value related to the cosmological constant, the Poincaré group is a good approximation to the de Sitter group.

In de Sitter relativity, the cosmological constant is no longer a free parameter of the same type; it is determined by the de Sitter radius, a fundamental quantity that determines the commutation relations of translations with rotations/boosts. This means that the theory of de Sitter relativity might be able to provide insight into the value of the cosmological constant, perhaps explaining the cosmic coincidence. Unfortunately, the de Sitter radius, which determines the cosmological constant, is an adjustable parameter in de Sitter relativity, so the theory requires a separate condition to determine its value in relation to the measurement scale.

When a cosmological constant is viewed as a kinematic parameter, the definitions of energy and momentum must be changed from those of special relativity. These changes could significantly modify the physics of the early universe if the cosmological constant was greater back then. Some speculate that a high-energy experiment could modify the local structure of spacetime from Minkowski space to de Sitter space with a large cosmological constant for a short period of time, and this might eventually be tested in existing or planned particle colliders.[7]

Doubly special relativity

Since the de Sitter group naturally incorporates an invariant length parameter, de Sitter relativity can be interpreted as an example of the so-called doubly special relativity. There is a fundamental difference, though: whereas in all doubly special relativity models the Lorentz symmetry is violated, in de Sitter relativity it remains as a physical symmetry.[8][9] A drawback of the usual doubly special relativity models is that they are valid only at the energy scales where ordinary special relativity is supposed to break down, giving rise to a patchwork relativity. On the other hand, de Sitter relativity is found to be invariant under a simultaneous re-scaling of mass, energy and momentum,[10] and is consequently valid at all energy scales. A relationship between doubly special relativity, de Sitter space and general relativity is described by Derek Wise.[11] See also MacDowell–Mansouri action.

Newton–Hooke: de Sitter special relativity in the limit v ≪ c

In the limit as v ≪ c, the de Sitter group contracts to the Newton–Hooke group.[12] This has the effect that in the nonrelativistic limit, objects in de Sitter space have an extra "repulsion" from the origin: objects have a tendency to move away from the center with an outward-pointing fictitious force proportional to their distance from the origin.

While it looks as though this might pick out a preferred point in space (the center of repulsion), the situation is in fact isotropic: moving to the uniformly accelerated frame of reference of an observer at another point, all accelerations appear to have a repulsion center at that new point.

What this means is that in a spacetime with non-vanishing curvature, gravity is modified from Newtonian gravity.[13] At distances comparable to the radius of the space, objects feel an additional linear repulsion from the center of coordinates.

History of de Sitter invariant special relativity

  • "de Sitter relativity" is the same as the theory of "projective relativity" of Luigi Fantappiè and Giuseppe Arcidiacono first published in 1954 by Fantappiè[14] and the same as another independent discovery in 1976.[15]
  • In 1968 Henri Bacry and Jean-Marc Lévy-Leblond published a paper on possible kinematics.[3]
  • In 1972 Freeman Dyson[1] further explored this.
  • In 1973 Eliano Pessa described how Fantappié–Arcidiacono projective relativity relates to earlier conceptions of projective relativity and to Kaluza–Klein theory.[16]
  • Han-Ying Guo, Chao-Guang Huang, Zhan Xu, Bin Zhou have used the term "de Sitter special relativity" from 2004 onwards.[17][18][19][20][21][22][23][24][25][26][27][28][29][30][31]
  • R. Aldrovandi, J.P. Beltrán Almeida and J.G. Pereira have used the terms "de Sitter special relativity" and "de Sitter relativity" starting from their 2007 paper "de Sitter special relativity".[10][32] This paper was based on previous work on, amongst other things, the consequences of a non-vanishing cosmological constant,[33] doubly special relativity,[34] the Newton–Hooke group,[3][35][36] and early work formulating special relativity with a de Sitter space.[37][38][39]
  • From 2006 onwards Ignazio Licata and Leonardo Chiatti have published papers on the Fantappié–Arcidiacono theory of relativity, pointing out that it is the same thing as de Sitter relativity.[14][40][41][42][43]
  • In 2008 S. Cacciatori, V. Gorini and A. Kamenshchik[4] published a paper about the kinematics of de Sitter relativity.
  • Papers by other authors include: dSR and the fine structure constant;[44] dSR and dark energy;[45] dSR Hamiltonian formalism;[46] de Sitter thermodynamics from diamond's temperature;[47] triply special relativity from six dimensions;[48] and deformed general relativity and torsion.[49]

Quantum de Sitter special relativity

There are quantized or quantum versions of de Sitter special relativity.

On the Search for the Neural Correlate of Consciousness

June 26, 2002 by David Chalmers
Original link:  http://www.kurzweilai.net/on-the-search-for-the-neural-correlate-of-consciousness
Originally published March 1998 in Toward a Science of Consciousness II: The Second Tucson Discussions and Debates (MIT Press). Published on KurzweilAI.net on June 26, 2002.

There’s a variety of proposed neural systems associated with conscious experience, but no way to directly observe or measure consciousness. Chalmers suggests, though, that there may be a “consciousness module”: a functional area responsible for the integration of information in the brain, with high-bandwidth communication between its parts.


From the author: This paper appears in Toward a Science of Consciousness II: The Second Tucson Discussions and Debates (S. Hameroff, A. Kaszniak, and A. Scott, eds), published by MIT Press in 1998. It is a transcript of my talk at the second Tucson conference in April 1996, lightly edited to include the contents of overheads and to exclude some diversions with a consciousness meter. A more in-depth argument for some of the claims in this paper can be found in Chapter 6 of my book The Conscious Mind (Chalmers, 1996).

I’m going to talk about one aspect of the role that neuroscience plays in the search for a theory of consciousness. Whether or not neuroscience can solve all the problems of consciousness single handedly, there is no question that it has a major role to play. We’ve seen at this conference that there’s a vast amount of progress in neurobiological research, and that much of it is clearly bearing on the problems of consciousness. But the conceptual foundations of this sort of research are only beginning to be laid. So I will look at some of the things that are going on from a philosopher’s perspective and will see if there’s anything helpful to say about these foundations.

We’ve all been hearing a lot about the "neural correlate of consciousness". This phrase is intended to refer to the neural system or systems primarily associated with conscious experience. I gather that the catchword of the day is "NCC". We all have an NCC inside our heads; we just have to find out what it is. In recent years there have been quite a few proposals about the identity of the NCC. One of the most famous proposals is Crick and Koch’s suggestion concerning 40-hertz oscillations. That proposal has since faded away a little but there are all sorts of other suggestions out there. It’s almost got to the point where it’s reminiscent of particle physics, where they have something like 236 particles and people talk about the "particle zoo". In the study of consciousness, one might talk about the "neural correlate zoo". There have also been a number of related proposals about what we might call the "cognitive correlate of consciousness" (CCC?).

A small list of suggestions that have been put forward might include:
  • 40-hertz oscillations in the cerebral cortex (Crick and Koch 1990)
  • Intralaminar nucleus in the thalamus (Bogen 1995)
  • Re-entrant loops in thalamocortical systems (Edelman 1989)
  • 40-hertz rhythmic activity in thalamocortical systems (Llinas et al 1994)
  • Nucleus reticularis (Taylor and Alavi 1995)
  • Extended reticular-thalamic activation system (Newman and Baars 1993)
  • Anterior cingulate system (Cotterill 1994)
  • Neural assemblies bound by NMDA (Flohr 1995)
  • Temporally-extended neural activity (Libet 1994)
  • Backprojections to lower cortical areas (Cauller and Kulics 1991)
  • Neurons in extrastriate visual cortex projecting to prefrontal areas (Crick and Koch 1995)
  • Neural activity in area V5/MT (Tootell et al 1995)
  • Certain neurons in the superior temporal sulcus (Logothetis and Schall 1989)
  • Neuronal gestalts in an epicenter (Greenfield 1995)
  • Outputs of a comparator system in the hippocampus (Gray 1995)
  • Quantum coherence in microtubules (Hameroff 1994)
  • Global workspace (Baars 1988)
  • Activated semantic memories (Hardcastle 1995)
  • High-quality representations (Farah 1994)
  • Selector inputs to action systems (Shallice 1988)
There are a few intriguing commonalities among the proposals on this list. A number of them give a central role to interactions between the thalamus and the cortex, for example. All the same, the sheer number and diversity of the proposals can be a little overwhelming. I propose to step back a little and try to make sense of all this activity by asking some foundational questions.

A central question is this: how is it, in fact, that one can search for the neural correlate of consciousness? As we all know, there are problems in measuring consciousness. It’s not a directly and straightforwardly observable phenomenon. It would be a lot easier if we had a way of getting at consciousness directly; if we had, for example, a consciousness meter.

If we had a consciousness meter, searching for the NCC would be straightforward. We’d wave the consciousness meter and measure a subject’s consciousness directly. At the same time, we’d monitor the underlying brain processes. After a number of trials, we’d say OK, such-and-such brain processes are correlated with experiences of various kinds, so that’s the neural correlate of consciousness.

Alas, we don’t have a consciousness meter, and there seem to be principled reasons why we can’t have one. Consciousness just isn’t the sort of thing that can be measured directly. So: What do we do without a consciousness meter? How can the search go forward? How does all this experimental research proceed?

I think the answer is this: we get there through principles of interpretation. These are principles by which we interpret physical systems to judge whether or not they have consciousness. We might call these pre-experimental bridging principles. These are the criteria that we bring to bear in looking at systems to say (a) whether or not they are conscious now, and (b) what information they are conscious of, and what information they are not. We can’t reach in directly and grab those experiences and "transpersonalize" them into our own, so we rely on external criteria instead.

That’s a perfectly reasonable thing to do. But in doing this we have to realize that something interesting is going on. These principles of interpretation are not themselves experimentally determined or experimentally tested. In a sense they are pre-experimental assumptions. Experimental research gives us a lot of information about processing; then we bring in the bridging principles to interpret the experimental results, whatever those results may be. They are the principles by which we make inferences from facts about processing to facts about consciousness, so they are conceptually prior to the experiments themselves. We can’t actually refine them experimentally (except perhaps through first-person experimentation!), because we don’t have any independent access to the independent variable. Instead, these principles will be based on some combination of (a) conceptual judgments about what counts as a conscious process and (b) information gleaned from our first-person perspective on our own consciousness.

I think we are all stuck in this boat. The point applies whether one is a reductionist or an anti-reductionist about consciousness. A hard-line reductionist might put some of these points slightly differently, but either way, the experimental work is going to require pre-experimental reasoning to determine the criteria for ascription of consciousness. Of course such principles are usually left implicit in empirical research. We don’t usually see papers saying "Here is the bridging principle, here are the data, and here is what follows." But it’s useful to make them explicit. The very presence of these principles has some strong and interesting consequences in the search for the NCC.

In a sense, in relying on these principles we are taking a leap into the epistemological unknown. Because we don’t measure consciousness directly, we have to make something of a leap of faith. It may not be a big leap, but nevertheless it suggests that everyone doing this sort of work is engaged in philosophical reasoning. Of course one can always choose to stay on solid ground, talking about the empirical results in a neutral way; but the price of doing so is that one gains no particular insight into consciousness. Conversely, as soon as we draw any conclusions about consciousness, we have gone beyond the information given, so we need to pay careful attention to the reasoning involved.

So what are these principles of interpretation? The first and by far the most prevalent such principle is a very straightforward one: it’s a principle of verbal report. When someone says "Yes, I see that table now", we infer that they are conscious of the table. When someone says "Yes, I see red now", we infer that they are having an experience of red. Of course one might always say "How do you know?" — a philosopher might suggest that we may be faced with a fully functioning zombie – but in fact most of us don’t believe that the people around us are zombies, and in practice we are quite prepared to rely on this principle. As pre-experimental assumptions go, this is a relatively "safe" one — it doesn’t require a huge leap of faith — and it is very widely used.

So the principle here is that when information is verbally reported, it is conscious. One can extend this slightly, as no one believes that an actual verbal report is required for consciousness; we are conscious of much more than we report on any given occasion. So an extended principle might say that when information is directly available for verbal report, it is conscious.

Experimental researchers don’t rely only on these principles of verbal report and reportability. These principles can be somewhat limiting when we want to do broader experiments. In particular, we don’t want to just restrict our studies of consciousness to subjects that have language. In fact just this morning we saw a beautiful example of research on consciousness in language-free creatures. I’m referring to the work of Nikos Logothetis and his colleagues (e.g. Logothetis & Schall 1989; Leopold & Logothetis 1996). This work uses experiments on binocular rivalry in monkeys to draw conclusions about the neural processes associated with consciousness. How do Logothetis et al manage to draw conclusions about a monkey’s consciousness without getting any verbal reports? What they do is rely on a monkey’s pressing bars: if a monkey can be made to press a bar in an appropriate way in response to a stimulus, we’ll say that that stimulus was consciously perceived.

The criterion at play seems to require that the information be available for an arbitrary response. If it turned out that the monkey could press a bar in response to a red light but couldn’t do anything else, we would be tempted to say that it wasn’t a case of consciousness at all, but some sort of subconscious connection. If on the other hand we find information that is available for response in all sorts of different ways, then we’ll say that it is conscious. Actually Logothetis and his colleagues also use some subtler reasoning about similarities with binocular rivalry in humans to buttress the claim that the monkey is having the relevant conscious experience, but it is clearly the response that carries the most weight.

The underlying general principle is something like this: When information is directly available for global control in a cognitive system, then it is conscious. If information is available for response in many different motor modalities, we will say that it is conscious, at least in a range of relatively familiar systems such as humans and primates and so on. This principle squares well with the previous principle in cases where the capacity for verbal report is present: availability for verbal report and availability for global control seem to go together in such cases (report is one of the key aspects of control, after all, and it is rare to find information that is reportable but not available more widely). But this principle is also applicable more widely.

A correlation between consciousness and global availability (for short) seems to fit the first-person evidence — the evidence gleaned from our own conscious experience — quite well. When information is present in my consciousness, it is generally reportable, and it can generally be brought to bear in the control of behavior in all sorts of different ways. I can talk about it, I can point in the general direction of a stimulus, I can press bars, and so on. Conversely, when we find information that is directly available in this way for report and other aspects of control, it is generally conscious information. I think one can bear this out by consideration of cases.

There are some interesting puzzle cases to consider, such as the case of blindsight, where one has some kind of availability for control but arguably no conscious experience. Those cases might best be handled by invoking the directness criterion: insofar as the information here is available for report and other control processes at all, it is available only indirectly, by comparison to the direct and automatic availability in standard cases. One might also stipulate that it is availability for voluntary control that is relevant, to deal with certain cases of involuntary unconscious response, although that is a complex issue. I discuss a number of puzzle cases in more detail elsewhere (Chalmers 1996, forthcoming), where I also give a much more detailed defense of the idea that something like global availability is the key pre-empirical criterion for the ascription of consciousness.

But this remains at best a first-order approximation of the functional criteria that come into play. I’m less concerned today to get all the fine details right than to work with the idea that some such functional criterion is required and indeed is implicit in all the empirical research on the neural correlate of consciousness. If you disagree with the criterion I’ve suggested here – presumably because you can think of counterexamples — you may want to use those counterexamples to refine it or to come up with a better criterion of your own. But the point I want to focus on here is that in the very act of experimentally distinguishing conscious from unconscious processes, some such criterion is always at play.

So the question I want to ask is: if something like this is right, then what follows? That is, if some such bridging principles are implicit in the methodology of the search for the NCC, then what are the consequences? I will use global availability as my central functional criterion in the discussion that follows, but many of the points should generalize.

The first thing one can do is produce what philosophers might call a rational reconstruction of the search for the neural correlate of consciousness. With a rational reconstruction we can say, maybe things don’t work exactly like this in practice, but the rational underpinnings of the process have something like this form. That is, if one were to try to justify the conclusions one has reached as well as one can, one’s justification would follow the shape of the rational reconstruction. In this case, a rational reconstruction might look something like this:

(1) Consciousness <-> global availability (bridging principle)
(2) Global availability <-> neural process N (empirical work)

so

(3) Consciousness <-> neural process N (conclusion).

According to this reconstruction, one implicitly embraces some sort of pre-experimental bridging principle that one finds plausible on independent grounds, such as conceptual or phenomenological grounds. Then one does the empirical research. Instead of measuring consciousness directly, we detect the functional property. One sees that when this functional property (e.g. global availability) is present, it is correlated with a certain neural process (e.g. 40-hertz oscillations). Combining the pre-empirical premise and the empirical result, we arrive at the conclusion that this neural process is a candidate for the NCC.

Of course it doesn’t work nearly so simply in practice. The two stages are very intertwined; our pre-experimental principles may themselves be refined as experimental research goes along. Nevertheless I think one can make a separation, at least at the rational level, into pre-empirical and experimental components, for the sake of analysis. So with this sort of rational reconstruction in hand, what sort of conclusions follow? There are about six consequences that I want to draw out here.

(1) The first conclusion is a characterization of the neural correlates of consciousness. If the NCC is arrived at through this sort of methodology, then whatever it turns out to be, it will be a mechanism of global availability. The presence of the NCC wherever global availability is present suggests that it is a mechanism that subserves the process of global availability in the brain. The only alternative that we have to worry about is that it might be a symptom rather than a mechanism of global availability; but that possibility ought to be addressable in principle by dissociation studies, by lesioning, and so on. If a process is a mere symptom of availability, we ought to be able to empirically dissociate it from the process of global availability while leaving the latter intact. The resulting data would suggest to us that consciousness can be present even when the neural process in question is not, thus indicating that it wasn’t a perfect correlate of consciousness after all.

(A related line of reasoning supports the idea that a true NCC must be a mechanism of direct availability for global control. Mechanisms of indirect availability will in principle be dissociable from the empirical evidence for consciousness, for example by directly stimulating the mechanisms of direct availability. The indirect mechanisms will be "screened off" by the direct mechanisms in much the same way as the retina is screened off as an NCC by the visual cortex.)

In fact, if one looks at the various proposals that are out there, this template seems to fit them pretty well. For example, the 40-hertz oscillations discussed by Crick and Koch were put forward precisely because of the role they might play in binding and integrating information into working memory, and working memory is of course a central mechanism whereby information is made available for global control in a cognitive system. Similarly, it is plausible that Libet’s extended neural activity is relevant precisely because the temporal extendedness of activity is what gives certain information the capacity to dominate later processes that lead to control. Baars’ global workspace is a particularly explicit proposal of a mechanism in this direction; it is put forward explicitly as a mechanism whereby information can be globally disseminated. All of these mechanisms and many of the others seem to be candidates for mechanisms of global availability in the brain.

(2) This reconstruction suggests that a full story about the neural processes associated with consciousness will do two things. Firstly, it will explain global availability in the brain. Once we know all about the relevant neural processes, we will know precisely how information is made directly available for global control in the brain, and this will be an explanation in the full sense. Global availability is a functional property, and as always the problem of explaining the performance of a function is a problem to which mechanistic explanation is well-suited. So we can be confident that in a century or two, global availability will be straightforwardly explained. Secondly, this explanation of availability will do something else: it will isolate the processes that underlie consciousness itself. If the bridging principle is granted, then mechanisms of availability will automatically be correlates of phenomenology in the full sense.

Now, I don’t think this gives us a full explanation of consciousness. One can always raise the question of why it is that these processes of availability should give rise to consciousness in the first place. As yet we have no explanation of why this is, and it may well be that the full details concerning the processes of availability still won’t answer this question. Certainly, nothing in the standard methodology I have outlined answers the question; that methodology assumes a relation between availability and consciousness, and therefore does nothing to explain it. The relationship between the two is instead taken as something of a primitive. So the hard problem still remains. But who knows: somewhere along the line we may be led to the relevant insights that show why the link is there, and the hard problem may then be solved. In any case, whether or not we have solved the hard problem, we may nevertheless have isolated the basis of consciousness in the brain. We just have to keep in mind the distinction between correlation and explanation.

(3) Given this paradigm, it is likely that there are going to be many different neural correlates of consciousness. I take it that this is not going to surprise many people; but the rational reconstruction gives us a way of seeing just why such a multiplicity of correlates should exist. There will be many neural correlates of consciousness because there may well be many different mechanisms of global availability. There will be mechanisms of availability in different modalities: the mechanisms of visual availability may be quite different from the mechanisms of auditory availability, for example. (Of course they may be the same, in that we could find a later area that integrates and disseminates all this information, but that’s an open question.) There will also be mechanisms at different stages of the processing path whereby information is made globally available: early mechanisms and later ones. So these may all be candidates for the NCC. And there will be mechanisms at many different levels of description: for example, 40-hertz oscillations may well be redescribed as high-quality representations, or as part of a global workspace, at a different level of description. So it may turn out that a number of the animals in the zoo, so to speak, can co-exist, because they are compatible in one of these ways.

I won’t speculate much further on just what the neural correlates of consciousness are. No doubt some of the ideas in the initial list will prove to be entirely off-track, while some of the others will prove closer to the mark. As we philosophers like to say, humbly, that’s an empirical question. But I hope the conceptual issues are becoming clearer.

(4) This way of thinking about things allows one to make sense of an idea that is sometimes floated: that of a consciousness module. Sometimes this notion is disparaged; sometimes it is embraced. But this picture of the methodology in the search for an NCC suggests that it is at least possible that there could turn out to be such a module. What would it take? It would require that there turns out to be some sort of functionally localizable, internally integrated area, through which all global availability runs. It needn’t be anatomically localizable, but to qualify as a module it would need to be localizable in some broader sense. For example, the parts of the module would have to have high-bandwidth communication among themselves, compared to the relatively low-bandwidth communication that they have with other areas. Such a thing could turn out to exist. It doesn’t strike me as especially likely that things will turn out this way; it seems just as likely that there will be multiple independent mechanisms of global availability in the brain, scattered around without any special degree of mutual integration. If that’s so, we will likely say that there doesn’t turn out to be a consciousness module after all. But that’s another one of those empirical questions.

If something like this does turn out to exist in the brain, it would resemble Baars’ conception of a global workspace: a functional area responsible for the integration of information in the brain and for its dissemination to multiple nonconscious specialized processes. In fact I should acknowledge that many of the ideas I’m putting forward here are compatible with things that Baars has been saying for years about the role of global availability in the study of consciousness. Indeed, this way of looking at things suggests that some of his ideas are almost forced on one by the methodology. The special epistemological role of global availability helps explain why the idea of a global workspace provides a useful way of thinking about almost any empirical proposal about consciousness. If NCCs are identified as such precisely because of their role in global control, then at least on a first approximation, we should expect the global workspace idea to be a natural fit.

(5) We can also apply this picture to a question that has been discussed frequently at this conference: are the neural correlates of visual consciousness to be found in V1, in the extrastriate visual cortex, or elsewhere? If our picture of the methodology is correct, then the answer will presumably depend on which visual area is most directly implicated in global availability.

Crick and Koch have suggested that the visual NCC is not to be found within V1, as V1 does not contain neurons that project to the prefrontal cortex. This reasoning has been criticized by Ned Block for conflating access consciousness and phenomenal consciousness (see Block, this volume); but interestingly, the picture I have developed suggests that it may be good reasoning. The prefrontal cortex is known to be associated with control processes; so if a given area in the visual cortex projects to prefrontal areas, then it may well be a mechanism of direct availability. And if it does not project in this way, it is less likely to be such a mechanism; at best it might be indirectly associated with global availability. Of course there is still plenty of room to raise questions about the empirical details. But the broader point is that for the sort of reasons discussed in (2) above, it is likely that the neural processes involved in explaining access consciousness will simultaneously be involved in a story about the basis of phenomenal consciousness. If something like this is implicit in their reasoning, Crick and Koch might escape the charge of conflation. Of course the reasoning does depend on these somewhat shaky bridging principles, but then all work on the neural correlates of consciousness must appeal to such principles somewhere, so this can’t be held against Crick and Koch in particular.

(6) Sometimes the neural correlate of consciousness is conceived of as the Holy Grail for a theory of consciousness. It will make everything fall into place. For example, once we discover the NCC, then we’ll have a definitive test for consciousness, enabling us to discover consciousness wherever it arises. That is, we might use the neural correlate itself as a sort of consciousness meter. If a system has 40-hertz oscillations (say), then it is conscious; if it has none, then it is not conscious. Or if a thalamocortical system turns out to be the NCC, then an organism lacking such a system is unlikely to be conscious. This sort of reasoning is not usually put quite so baldly as this, but I think one finds some version of it quite frequently.

This reasoning can be tempting, but one should not succumb to the temptation. Given the very methodology that comes into play here, there is no way to definitely establish a given NCC as an independent test for consciousness. The primary criterion for consciousness will always remain the functional property we started with: global availability, or verbal report, or whatever. That’s how we discovered the correlations in the first place. 40-hertz oscillations (or whatever) are relevant only because of the role they play in satisfying this criterion. True, in cases where we know that this association between the NCC and the functional property is present, the NCC might itself function as a sort of "signature" of consciousness; but once we dissociate the NCC from the functional property, all bets are off. To take an extreme example, if we have 40-hertz oscillations in a test tube, that almost certainly won’t yield consciousness. But the point applies equally in less extreme cases. Because it was the bridging principles that gave us all the traction in the search for an NCC in the first place, it’s not clear that anything follows in cases where the functional criterion is thrown away. So there’s no free lunch here: one can’t get something for nothing.

Once one recognizes the central role that pre-experimental assumptions play in the search for the NCC, one realizes that there are some limitations on just what we can expect this search to tell us. Still, whether or not the NCC is the Holy Grail, I hope that I have said enough to make it clear that the quest for it is likely to enhance our understanding considerably. And I hope to have convinced you that there are important ways in which philosophy and neuroscience can come together to help clarify some of the deep problems involved in the study of consciousness.

References

Baars, B.J. 1988. A Cognitive Theory of Consciousness. Cambridge University Press.
Bogen, J.E. 1995. On the neurophysiology of consciousness, parts I and II. Consciousness and Cognition, 4:52-62 & 4:137-58.
Cauller, L.J. & Kulics, A.T. 1991. The neural basis of the behaviorally relevant N1 component of the somatosensory evoked potential in awake monkeys: Evidence that backward cortical projections signal conscious touch sensation. Experimental Brain Research 84:607-619.
Chalmers, D.J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Chalmers, D.J. (forthcoming). Availability: the cognitive basis of experience? Behavioral and Brain Sciences. Also in N. Block, O. Flanagan, & G. Güzeldere (eds) The Nature of Consciousness (MIT Press, 1997).
Cotterill, R. 1994. On the unity of conscious experience. Journal of Consciousness Studies 2:290-311.
Crick, F. and Koch, C. 1990. Towards a neurobiological theory of consciousness. Seminars in the Neurosciences 2: 263-275.
Crick, F. & Koch, C. 1995. Are we aware of neural activity in primary visual cortex? Nature 375: 121-23.
Edelman, G.M. 1989. The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books.
Farah, M.J. 1994. Visual perception and visual awareness after brain damage: A tutorial overview. In (C. Umilta and M. Moscovitch, eds.) Consciousness and Unconscious Information Processing: Attention and Performance 15. MIT Press.
Flohr, H. 1995. Sensations and brain processes. Behavioral Brain Research 71:157-61.
Gray, J.A. 1995. The contents of consciousness: A neuropsychological conjecture. Behavioral and Brain Sciences 18:659-722.
Greenfield, S. 1995. Journey to the Centers of the Mind. W.H. Freeman.
Hameroff, S.R. 1994. Quantum coherence in microtubules: A neural basis for emergent consciousness? Journal of Consciousness Studies 1:91-118.
Hardcastle, V.G. 1996. Locating Consciousness. Philadelphia: John Benjamins.
Jackendoff, R. 1987. Consciousness and the Computational Mind. MIT Press.
Leopold, D.A. & Logothetis, N.K. 1996. Activity-changes in early visual cortex reflect monkeys’ percepts during binocular rivalry. Nature 379: 549-553.
Libet, B. 1993. The neural time factor in conscious and unconscious events. In Experimental and Theoretical Studies of Consciousness (Ciba Foundation Symposium 174). New York: Wiley.
Llinas, R.R., Ribary, U., Joliot, M. & Wang, X.-J. 1994. Content and context in temporal thalamocortical binding. In (G. Buzsaki, R.R. Llinas, & W. Singer, eds.) Temporal Coding in the Brain. Berlin: Springer Verlag.
Logothetis, N. & Schall, J. 1989. Neuronal correlates of subjective visual perception. Science 245:761-63.
Shallice, T. 1988. Information-processing models of consciousness: possibilities and problems. In (A. Marcel and E. Bisiach, eds.) Consciousness in Contemporary Science. Oxford University Press.
Taylor, J.G. & Alavi, F.N. 1993. Mathematical analysis of a competitive network for attention. In (J.G. Taylor, ed.) Mathematical Approaches to Neural Networks. Elsevier.
Tootell, R.B., Reppas, J.B., Dale, A.M., Look, R.B., Sereno, M.I., Malach, R., Brady, J. & Rosen, B.R. 1995. Visual motion aftereffect in human cortical area MT revealed by functional magnetic resonance imaging. Nature 375:139-41.
Copyright © 1998 MIT Press, from Toward a Science of Consciousness II: The Second Tucson Discussions and Debates. Used with permission.

Faster-than-light

From Wikipedia, the free encyclopedia
Faster-than-light (also superluminal or FTL) communication and travel are the conjectural propagation of information or matter faster than the speed of light.

The special theory of relativity implies that only particles with zero rest mass may travel at the speed of light. Tachyons, particles whose speed exceeds that of light, have been hypothesized, but their existence would violate causality, and the consensus of physicists is that they cannot exist. On the other hand, what some physicists refer to as "apparent" or "effective" FTL[1][2][3][4] depends on the hypothesis that unusually distorted regions of spacetime might permit matter to reach distant locations in less time than light could in normal or undistorted spacetime.

According to current scientific theories, matter is required to travel at slower-than-light (also subluminal or STL) speed with respect to the locally distorted spacetime region. Apparent FTL is not excluded by general relativity; however, the physical plausibility of any apparent-FTL mechanism is speculative. Examples of apparent FTL proposals are the Alcubierre drive and the traversable wormhole.

FTL travel of non-information

In the context of this article, FTL is the transmission of information or matter faster than c, a constant equal to the speed of light in a vacuum, which is 299,792,458 m/s (by definition of the meter) or about 186,282.397 miles per second. This is not quite the same as traveling faster than light, since:
  • Some processes propagate faster than c, but cannot carry information (see examples in the sections immediately following).
  • Light travels at speed c/n when not in a vacuum but traveling through a medium with refractive index n (causing refraction), and in some materials other particles can travel faster than c/n (but still slower than c), leading to Cherenkov radiation (see phase velocity below).
Neither of these phenomena violates special relativity or creates problems with causality, and thus neither qualifies as FTL as described here.

In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity.

Daily sky motion

For an earth-bound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the solar system, is about four light-years away.[5] In this frame of reference, in which Proxima Centauri is perceived to be moving in a circular trajectory with a radius of four light-years, it could be described as having a speed many times greater than c, since the rim speed of an object moving in a circle is the product of the radius and the angular speed.[5] It is also possible, on a geostatic view, for objects such as comets to vary their speed from subluminal to superluminal and vice versa simply because their distance from the Earth varies. Comets may have orbits which take them out to more than 1000 AU.[6] The circumference of a circle with a radius of 1000 AU is greater than one light-day. In other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame.
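
The arithmetic is immediate: in the geostatic frame the angular speed is ω = 2π per day, so the rim speed of Proxima Centauri is v = ωr ≈ 2π × 4 light-years per day ≈ 25 light-years per day, and since light covers one light-year in 365.25 days this is roughly 9,000 times c. Nothing physical moves through space at this speed; it is an artifact of the rotating, non-inertial frame.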

Light spots and shadows

If a laser beam is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than c.[7] Similarly, a shadow projected onto a distant object can be made to move across the object faster than c.[7] In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light.[7][8][9]

Apparent FTL propagation of static field effects

Since there is no "retardation" (or aberration) of the apparent position of the source of a gravitational or electric static field when the source moves with constant velocity, the static field "effect" may seem at first glance to be "transmitted" faster than the speed of light. However, uniform motion of the static source may be removed with a change in reference frame, causing the direction of the static field to change immediately, at all distances. This is not a change of position which "propagates", and thus this change cannot be used to transmit information from the source. No information or matter can be FTL-transmitted or propagated from source to receiver/observer by an electromagnetic field.

Closing speeds

The rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame.

Imagine two fast-moving particles approaching each other from opposite sides of a particle accelerator of the collider type. The closing speed would be the rate at which the distance between the two particles is decreasing. From the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light.

Special relativity does not prohibit this. It tells us that it is wrong to use Galilean relativity to compute the velocity of one of the particles, as would be measured by an observer traveling alongside the other particle. That is, special relativity gives the right formula for computing such relative velocity.

It is instructive to compute the relative velocity of particles moving at v and −v in the accelerator frame, which corresponds to a closing speed of 2v, exceeding c whenever v > c/2. Expressing the speeds in units of c, β = v/c:
β_rel = (β + β)/(1 + β²) = 2β/(1 + β²) ≤ 1.
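
A quick numerical check of this formula (a minimal sketch in plain Python, not part of the original article) shows the closing speed approaching 2c while the relative velocity measured in either particle's rest frame stays below c:

def relative_beta(beta):
    # Relativistic addition of speeds beta and beta (both in units of c)
    return 2 * beta / (1 + beta ** 2)

for beta in (0.5, 0.9, 0.99, 0.999):
    print(f"beta = {beta}: closing speed = {2 * beta:.3f} c, "
          f"relative velocity = {relative_beta(beta):.6f} c")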

Proper speeds

If a spaceship travels to a planet one light-year (as measured in the Earth's rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveller's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth's frame, by the time taken, measured by the traveller's clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveller would always get to the destination before the traveller.
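
A worked example: proper speed is w = dx/dτ = γv, with γ = 1/√(1 − v²/c²). For v = 0.99c, γ ≈ 7.09, so w ≈ 7.02c; the one-light-year trip then takes about 1.01 years on Earth clocks but only about 0.14 years on the traveller's clock, while a light signal sent at departure (taking exactly one year in the Earth frame) still arrives first.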

Possible distance away from Earth

Since one cannot travel faster than light, one might conclude that a human can never travel further from the Earth than 40 light-years if the traveler is active between the ages of 20 and 60. A traveler would then never be able to reach more than the very few star systems which exist within the limit of 20–40 light-years from the Earth. This is a mistaken conclusion: because of time dilation, the traveler can travel thousands of light-years during their 40 active years. If the spaceship accelerates at a constant 1 g (in its own changing frame of reference), it will, after 354 days, reach speeds a little under the speed of light (for an observer on Earth), and time dilation will extend the traveler's lifespan to thousands of Earth years as seen from the reference system of the Solar System, though the traveler's subjective lifespan will not thereby change. If the traveler returns to the Earth, she or he will land thousands of years into the Earth's future. Their speed will not be seen as higher than the speed of light by observers on Earth, and the traveler will not measure their speed as being higher than the speed of light, but will see a length contraction of the universe in their direction of travel. And as the traveler turns around to return, the Earth will seem to experience much more time than the traveler does. So, while their (ordinary) coordinate speed cannot exceed c, their proper speed (distance as seen by Earth divided by their proper time) can be much greater than c. This is seen in statistical studies of muons traveling much further than c times their half-life (at rest), if traveling close to c.[10]
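
The standard relativistic-rocket formulas make this concrete: for constant proper acceleration a, after shipboard (proper) time τ the velocity, distance and Earth time are v = c·tanh(aτ/c), d = (c²/a)(cosh(aτ/c) − 1) and t = (c/a)·sinh(aτ/c). A minimal sketch in Python (one-way travel at 1 g with no turnaround; the numbers are purely illustrative):

import math

c = 299_792_458.0            # speed of light, m/s
g = 9.80665                  # proper acceleration of 1 g, m/s^2
year = 365.25 * 24 * 3600.0  # seconds per year
ly = c * year                # metres per light-year

for tau in (1, 10, 20, 40):                    # shipboard years
    x = g * tau * year / c                     # dimensionless a*tau/c
    v = math.tanh(x)                           # velocity in units of c
    d = (c ** 2 / g) * (math.cosh(x) - 1) / ly # distance in light-years
    t = (c / g) * math.sinh(x) / year          # elapsed Earth years
    print(f"tau = {tau:2d} yr: v = {v:.9f} c, d = {d:.3g} ly, t = {t:.3g} yr")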

Phase velocities above c

The phase velocity of an electromagnetic wave, when traveling through a medium, can routinely exceed c, the vacuum velocity of light. For example, this occurs in most glasses at X-ray frequencies.[11] However, the phase velocity of a wave corresponds to the propagation speed of a theoretical single-frequency (purely monochromatic) component of the wave at that frequency. Such a wave component must be infinite in extent and of constant amplitude (otherwise it is not truly monochromatic), and so cannot convey any information.[12] Thus a phase velocity above c does not imply the propagation of signals with a velocity above c.[13]
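
In terms of the refractive index n(ω), the phase velocity is

v_p = ω/k = c/n(ω),

and at X-ray frequencies n dips slightly below 1 in glass, making v_p exceed c; the front velocity of any actual signal, however, never exceeds c.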

Group velocities above c

The group velocity of a wave (e.g., a light beam) may also exceed c in some circumstances.[14][15] In such cases, which typically also involve rapid attenuation of the intensity, the maximum of the envelope of a pulse may travel with a velocity above c. However, even this situation does not imply the propagation of signals with a velocity above c,[16] even though one may be tempted to associate pulse maxima with signals. The latter association has been shown to be misleading, because the information on the arrival of a pulse can be obtained before the pulse maximum arrives. For example, if some mechanism allows the full transmission of the leading part of a pulse while strongly attenuating the pulse maximum and everything behind it (distortion), the pulse maximum is effectively shifted forward in time, while the information carried by the pulse arrives no faster than it would without this effect.[17] However, group velocity can exceed c in some parts of a Gaussian beam in vacuum (without attenuation): diffraction causes the peak of the pulse to propagate faster, while the overall power does not.[18]

Universal expansion

[Figure: History of the universe. Gravitational waves are hypothesized to arise from cosmic inflation, a faster-than-light expansion just after the Big Bang (17 March 2014).[19][20][21]]

The expansion of the universe causes distant galaxies to recede from us faster than the speed of light, if proper distance and cosmological time are used to calculate the speeds of these galaxies. However, in general relativity, velocity is a local notion, so velocity calculated using comoving coordinates does not have any simple relation to velocity calculated locally.[22] (See comoving distance for a discussion of different notions of 'velocity' in cosmology.) Rules that apply to relative velocities in special relativity, such as the rule that relative velocities cannot increase past the speed of light, do not apply to relative velocities in comoving coordinates, which are often described in terms of the "expansion of space" between galaxies. This expansion rate is thought to have been at its peak during the inflationary epoch, believed to have occurred in a tiny fraction of a second after the Big Bang (models suggest the period lasted from around 10⁻³⁶ seconds after the Big Bang to around 10⁻³³ seconds), when the universe may have rapidly expanded by a factor of around 10²⁰ to 10³⁰.[23]
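
In terms of Hubble's law v = H₀D, the recession speed reaches c at the Hubble distance D_H = c/H₀; for a representative H₀ ≈ 70 km/s/Mpc this is roughly 4,300 Mpc, or about 14 billion light-years, so galaxies beyond this distance recede superluminally in comoving coordinates.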

There are many galaxies visible in telescopes with redshifts of 1.4 or higher. All of these are currently receding from us at speeds greater than the speed of light. Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.[24][25]

According to Tamara M. Davis, "Our effective particle horizon is the cosmic microwave background (CMB), at redshift z ∼ 1100, because we cannot see beyond the surface of last scattering. Although the last scattering surface is not at any fixed comoving coordinate, the current recession velocity of the points from which the CMB was emitted is 3.2c. At the time of emission their speed was 58.1c, assuming (ΩM,ΩΛ) = (0.3,0.7). Thus we routinely observe objects that are receding faster than the speed of light and the Hubble sphere is not a horizon."[26]

However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future,[27] because the light never reaches a point where its "peculiar velocity" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Comoving distance#Uses of the proper distance). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away.[25]

Astronomical observations

Apparent superluminal motion is observed in many radio galaxies, blazars, quasars, and recently also in microquasars. The effect was predicted by Martin Rees before it was observed, and it can be explained as an optical illusion caused by the object partly moving in the direction of the observer,[28] when the speed calculations assume it does not. The phenomenon does not contradict the theory of special relativity. Corrected calculations show these objects have velocities close to the speed of light (relative to our reference frame). They are the first examples of large amounts of mass moving at close to the speed of light.[29] Earth-bound laboratories have only been able to accelerate small numbers of elementary particles to such speeds.
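The illusion can be quantified: a source moving at speed βc at angle θ to the line of sight has apparent transverse speed β_app = β sin θ / (1 − β cos θ), which peaks at γβ when cos θ = β. A small sketch with an illustrative jet speed:

    import math

    def beta_apparent(beta, theta_deg):
        """Apparent transverse speed (units of c) of a blob moving at
        speed beta*c at angle theta to the line of sight."""
        th = math.radians(theta_deg)
        return beta * math.sin(th) / (1 - beta * math.cos(th))

    beta = 0.995                         # illustrative jet speed
    print(beta_apparent(beta, 10.0))     # ~8.6: apparent speed well above c
    best = math.degrees(math.acos(beta)) # viewing angle maximising the effect
    gamma = 1 / math.sqrt(1 - beta**2)
    print(beta_apparent(beta, best), gamma * beta)   # both ~10 c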

Quantum mechanics

Certain phenomena in quantum mechanics, such as quantum entanglement, might give the superficial impression of allowing communication of information faster than light. According to the no-communication theorem these phenomena do not allow true communication; they only let two observers in different locations see the same system simultaneously, without any way of controlling what either sees. Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence, which in turn is nothing more than an effect of the underlying local time evolution of the wavefunction of a system and all of its environment. Since the underlying behavior does not violate local causality or allow FTL communication, it follows that neither does the additional effect of wavefunction collapse, whether real or apparent.

The uncertainty principle implies that individual photons may travel for short distances at speeds somewhat faster (or slower) than c, even in a vacuum; this possibility must be taken into account when enumerating Feynman diagrams for a particle interaction.[30] However, it was shown in 2011 that a single photon may not travel faster than c.[31] In quantum mechanics, virtual particles may travel faster than light, and this phenomenon is related to the fact that static field effects (which are mediated by virtual particles in quantum terms) may travel faster than light (see section on static fields above). However, macroscopically these fluctuations average out, so that photons do travel in straight lines over long (i.e., non-quantum) distances, and they do travel at the speed of light on average. Therefore, this does not imply the possibility of superluminal information transmission.

There have been various reports in the popular press of experiments on faster-than-light transmission in optics — most often in the context of a kind of quantum tunnelling phenomenon. Usually, such reports deal with a phase velocity or group velocity faster than the vacuum velocity of light.[32][33] However, as stated above, a superluminal phase velocity cannot be used for faster-than-light transmission of information.[34][35]

Hartman effect

The Hartman effect is the tunneling of a particle through a barrier in which the tunneling time tends to a constant for thick barriers.[36] This was first described by Thomas Hartman in 1962.[37] The barrier could, for instance, be the gap between two prisms. When the prisms are in contact, the light passes straight through, but when there is a gap, the light is refracted. There is a non-zero probability that the photon will tunnel across the gap rather than follow the refracted path. For large gaps between the prisms the tunnelling time approaches a constant and thus the photons appear to have crossed with a superluminal speed.[38]
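A heavily simplified toy model of this saturation (the functional form and parameters below are purely illustrative, not taken from Hartman's analysis) shows why a tunneling time that tends to a constant implies an apparent crossing speed that grows without bound with barrier width:

    import math

    # Toy model only: assume the tunneling (group-delay) time saturates as
    # tau(L) = tau_sat * tanh(kappa * L), capturing the Hartman-effect
    # behaviour tau -> const for wide barriers.  tau_sat and kappa are
    # illustrative parameters, not derived from any specific experiment.
    tau_sat = 1e-15        # saturated delay, s
    kappa = 1e7            # inverse decay length of the evanescent wave, 1/m
    c = 2.998e8

    for L in (1e-7, 1e-6, 1e-5):               # barrier widths, m
        tau = tau_sat * math.tanh(kappa * L)
        print(f"L = {L:.0e} m: apparent speed = {L / tau / c:.1f} c")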

However, an analysis by Herbert G. Winful from the University of Michigan suggests that the Hartman effect cannot actually be used to violate relativity by transmitting signals faster than c, because the tunnelling time "should not be linked to a velocity since evanescent waves do not propagate".[39] The evanescent waves in the Hartman effect are due to virtual particles and a non-propagating static field, as mentioned in the sections above for gravity and electromagnetism.

Casimir effect

In physics, the Casimir effect or Casimir-Polder force is a physical force exerted between separate objects due to resonance of vacuum energy in the intervening space between the objects. This is sometimes described in terms of virtual particles interacting with the objects, owing to the mathematical form of one possible way of calculating the strength of the effect. Because the strength of the force falls off rapidly with distance, it is only measurable when the distance between the objects is extremely small. Because the effect is due to virtual particles mediating a static field effect, it is subject to the comments about static fields discussed above.
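For ideal parallel plates, the standard result for the attractive Casimir pressure is P = π²ħc/(240 d⁴); a short sketch showing the rapid d⁻⁴ falloff:

    import math

    hbar = 1.0546e-34   # reduced Planck constant, J s
    c = 2.998e8         # speed of light, m/s

    def casimir_pressure(d):
        """Attractive Casimir pressure between ideal parallel plates
        separated by d metres: P = pi^2 hbar c / (240 d^4)."""
        return math.pi**2 * hbar * c / (240 * d**4)

    print(f"{casimir_pressure(1e-6):.2e} Pa")   # ~1.3e-3 Pa at 1 micrometre
    print(f"{casimir_pressure(1e-8):.2e} Pa")   # ~1.3e+5 Pa at 10 nm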

EPR paradox

The EPR paradox refers to a famous thought experiment of Einstein, Podolsky and Rosen that was realized experimentally for the first time by Alain Aspect in 1981 and 1982 in the Aspect experiment. In this experiment, the measurement of the state of one of the quantum systems of an entangled pair apparently instantaneously forces the other system (which may be distant) to be measured in the complementary state. However, no information can be transmitted this way; the answer to whether or not the measurement actually affects the other quantum system comes down to which interpretation of quantum mechanics one subscribes to.

An experiment performed in 1997 by Nicolas Gisin at the University of Geneva has demonstrated non-local quantum correlations between particles separated by over 10 kilometers.[40] But as noted earlier, the non-local correlations seen in entanglement cannot actually be used to transmit classical information faster than light, so that relativistic causality is preserved; see no-communication theorem for further information. A 2008 quantum physics experiment also performed by Nicolas Gisin and his colleagues in Geneva, Switzerland has determined that in any hypothetical non-local hidden-variables theory the speed of the quantum non-local connection (what Einstein called "spooky action at a distance") is at least 10,000 times the speed of light.[41]

Delayed choice quantum eraser

Delayed choice quantum eraser (an experiment of Marlan Scully) is a version of the EPR paradox in which the observation (or not) of interference after the passage of a photon through a double-slit experiment depends on the conditions of observation of a second photon entangled with the first. The characteristic of this experiment is that the observation of the second photon can take place at a later time than the observation of the first photon,[42] which may give the impression that the measurement of the later photons "retroactively" determines whether the earlier photons show interference. However, the interference pattern can only be seen by correlating the measurements of both members of every pair, so it cannot be observed until both photons have been measured; this ensures that an experimenter watching only the photons going through the slit does not obtain information about the other photons in an FTL or backwards-in-time manner.[43][44]

Superluminal communication

Faster-than-light communication is, by Einstein's theory of relativity, equivalent to time travel. According to Einstein's theory of special relativity, what we measure as the speed of light in a vacuum (or near vacuum) is actually the fundamental physical constant c. This means that all inertial observers, regardless of their relative velocity, will always measure zero-mass particles such as photons traveling at c in a vacuum. This result means that measurements of time and velocity in different frames are no longer related simply by constant shifts, but are instead related by Poincaré transformations. These transformations have important implications:
  • The relativistic momentum of a massive particle would increase with speed in such a way that at the speed of light an object would have infinite momentum (see the numeric sketch after this list).
  • To accelerate an object of non-zero rest mass to c would require infinite time with any finite acceleration, or infinite acceleration for a finite amount of time.
  • Either way, such acceleration requires infinite energy.
  • Some observers with sub-light relative motion will disagree about which of two events separated by a space-like interval occurs first.[45] In other words, any faster-than-light travel will be seen as travel backwards in time in some other, equally valid, frames of reference,[46] unless one assumes the speculative hypothesis of Lorentz violations at a presently unobserved scale (for instance the Planck scale).[citation needed] Therefore, any theory which permits "true" FTL also has to cope with time travel and all its associated paradoxes,[47] or else to assume Lorentz invariance to be a symmetry of thermodynamical statistical nature (hence a symmetry broken at some presently unobserved scale).
  • In special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c.[48] In general relativity no coordinate system on a large region of curved spacetime is "inertial", so it's permissible to use a global coordinate system where objects travel faster than c, but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" and the local speed of light will be c in this frame,[49] with massive objects moving through this local neighborhood always having a speed less than c in the local inertial frame.
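To illustrate the momentum divergence in the first bullet, a minimal sketch of the Lorentz factor γ = 1/√(1 − v²/c²) and the relativistic momentum p = γmv as v approaches c:

    import math

    def gamma(beta):
        """Lorentz factor; diverges as beta -> 1."""
        return 1 / math.sqrt(1 - beta**2)

    m = 1.0          # rest mass, kg (illustrative)
    c = 2.998e8
    for beta in (0.9, 0.99, 0.999, 0.999999):
        g = gamma(beta)
        print(f"v = {beta}c: gamma = {g:10.1f}, p = {g * m * beta * c:.3e} kg m/s")
    # Momentum (and kinetic energy) grow without bound as v -> c, so no
    # finite amount of energy accelerates a massive object to c.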

Justifications

Relative permittivity or permeability less than 1

The speed of light

c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}}

is related to the vacuum permittivity ε0 and the vacuum permeability μ0. Therefore, not only the phase velocity, group velocity, and energy-flow velocity of electromagnetic waves, but also the velocity of a photon, could exceed c in a special material whose permittivity or permeability is less than the vacuum value.[50]
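A quick numerical check of this relation using the vacuum constants:

    import math

    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    mu0 = 1.25663706212e-6    # vacuum permeability, H/m

    c = 1 / math.sqrt(eps0 * mu0)
    print(f"c = {c:.6e} m/s")   # ~2.99792e8 m/s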

Casimir vacuum and quantum tunnelling

Einstein's equations of special relativity postulate that the speed of light in vacuum is invariant in inertial frames. That is, it will be the same from any frame of reference moving at a constant speed. The equations do not specify any particular value for the speed of light, which is an experimentally determined quantity for a fixed unit of length. Since 1983, the SI unit of length (the meter) has been defined using the speed of light.

The experimental determination has been made in vacuum. However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called simply the vacuum energy, which could perhaps be altered in certain cases.[51] When vacuum energy is lowered, light itself has been predicted to go faster than the standard value c. This is known as the Scharnhorst effect. Such a vacuum can be produced by bringing two perfectly smooth metal plates together at near atomic diameter spacing. It is called a Casimir vacuum. Calculations imply that light will go faster in such a vacuum by a minuscule amount: the speed of a photon traveling between two plates 1 micrometer apart would increase by only about one part in 10^36.[52] Accordingly, there has as yet been no experimental verification of the prediction. A recent analysis[53] argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates, since the plates' rest frame would define a "preferred frame" for FTL signalling. However, with multiple pairs of plates in motion relative to one another the authors noted that they had no arguments that could "guarantee the total absence of causality violations", and invoked Hawking's speculative chronology protection conjecture, which suggests that feedback loops of virtual particles would create "uncontrollable singularities in the renormalized quantum stress-energy" on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-c signals, involved approximations which may be incorrect, so that it is not clear whether this effect could actually increase signal speed at all.[54]

The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Cologne, claim to have violated relativity experimentally by transmitting photons faster than the speed of light.[38] They say they have conducted an experiment in which microwave photons — relatively low-energy packets of light — travelled "instantaneously" between a pair of prisms that had been moved up to 3 ft (1 m) apart. Their experiment involved an optical phenomenon known as "evanescent modes", and they claim that since evanescent modes have an imaginary wave number, they represent a "mathematical analogy" to quantum tunnelling.[38] Nimtz has also claimed that "evanescent modes are not fully describable by the Maxwell equations and quantum mechanics have to be taken into consideration."[55] Other scientists such as Herbert G. Winful and Robert Helling have argued that in fact there is nothing quantum-mechanical about Nimtz's experiments, and that the results can be fully predicted by the equations of classical electromagnetism (Maxwell's equations).[56][57]

Nimtz told New Scientist magazine: "For the time being, this is the only violation of special relativity that I know of." However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the ever-shrinking main train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.[58]
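Steinberg's analogy is easy to reproduce numerically. The toy sketch below (all parameters invented for illustration) attenuates the trailing half of a Gaussian pulse and shows that the centroid of what remains sits earlier in time than the centroid of the incident pulse, even though the output never exceeds the input anywhere (attenuation only):

    import numpy as np

    t = np.linspace(-5, 5, 2001)
    pulse = np.exp(-t**2)                    # incident pulse envelope

    attenuation = np.where(t > 0, np.exp(-3 * t), 1.0)   # cut the back off
    out = pulse * attenuation                # out <= pulse everywhere

    centroid_in = np.sum(t * pulse) / np.sum(pulse)   # 0 by symmetry
    centroid_out = np.sum(t * out) / np.sum(out)      # shifted earlier
    print(centroid_in, centroid_out)   # e.g. 0.0 vs about -0.4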

Herbert G. Winful argues that the train analogy is a variant of the "reshaping argument" for superluminal tunneling velocities, but he goes on to say that this argument is not actually supported by experiment or simulations, which actually show that the transmitted pulse has the same length and shape as the incident pulse.[56] Instead, Winful argues that the group delay in tunneling is not actually the transit time for the pulse (whose spatial length must be greater than the barrier length in order for its spectrum to be narrow enough to allow tunneling), but is instead the lifetime of the energy stored in a standing wave which forms inside the barrier. Since the stored energy in the barrier is less than the energy stored in a barrier-free region of the same length due to destructive interference, the group delay for the energy to escape the barrier region is shorter than it would be in free space, which according to Winful is the explanation for apparently superluminal tunneling.[59][60]

A number of authors have published papers disputing Nimtz's claim that Einstein causality is violated by his experiments, and there are many other papers in the literature discussing why quantum tunneling is not thought to violate causality.[61]

It was later claimed by the Keller group in Switzerland that particle tunneling does indeed occur in zero real time. Their tests involved tunneling electrons, where the group argued a relativistic prediction for tunneling time should be 500–600 attoseconds (an attosecond is one quintillionth (10^−18) of a second). All that could be measured was 24 attoseconds, which is the limit of the test accuracy.[62] Again, though, other physicists believe that tunneling experiments in which particles appear to spend anomalously short times inside the barrier are in fact fully compatible with relativity, although there is disagreement about whether the explanation involves reshaping of the wave packet or other effects.[59][60][63]

Give up (absolute) relativity

Because of the strong empirical support for special relativity, any modifications to it must necessarily be quite subtle and difficult to measure. The best-known attempt is doubly special relativity, which posits that the Planck length is also the same in all reference frames, and is associated with the work of Giovanni Amelino-Camelia and João Magueijo.

There are speculative theories that claim inertia is produced by the combined mass of the universe (e.g., Mach's principle), which implies that the rest frame of the universe might be preferred by conventional measurements of natural law. If confirmed, this would imply special relativity is an approximation to a more general theory, but since the relevant comparison would (by definition) be outside the observable universe, it is difficult to imagine (much less construct) experiments to test this hypothesis.

Spacetime distortion

Although the theory of special relativity forbids objects from having a relative velocity greater than light speed, and general relativity reduces to special relativity in a local sense (in small regions of spacetime where curvature is negligible), general relativity does allow the space between distant objects to expand in such a way that they have a "recession velocity" which exceeds the speed of light; it is thought that galaxies more than about 14 billion light-years from us today have a recession velocity which is faster than light.[64] Miguel Alcubierre theorized that it would be possible to create an Alcubierre drive, in which a ship would be enclosed in a "warp bubble" where the space at the front of the bubble is rapidly contracting and the space at the back is rapidly expanding, with the result that the bubble can reach a distant destination much faster than a light beam moving outside the bubble, but without objects inside the bubble locally traveling faster than light (see the metric sketched below). However, several objections raised against the Alcubierre drive appear to rule out the possibility of actually using it in any practical fashion. Another possibility predicted by general relativity is the traversable wormhole, which could create a shortcut between arbitrarily distant points in space. As with the Alcubierre drive, travelers moving through the wormhole would not locally move faster than light travelling through the wormhole alongside them, but they would be able to reach their destination (and return to their starting location) faster than light traveling outside the wormhole.
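For reference, the Alcubierre spacetime is usually written in the form below, where v_s(t) is the speed of the bubble along x and f(r_s) is a smooth shape function equal to 1 inside the bubble and falling to 0 far from it:

ds^2 = -c^2\,dt^2 + \left(dx - v_s(t)\,f(r_s)\,dt\right)^2 + dy^2 + dz^2

Inside the bubble, where f = 1, a ship at rest relative to the bubble follows a timelike path while the bubble itself can move at arbitrary coordinate speed v_s.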

Dr. Gerald Cleaver, associate professor of physics at Baylor University, and Richard Obousy, a Baylor graduate student, theorized that manipulating the extra spatial dimensions of string theory around a spaceship with an extremely large amount of energy would create a "bubble" that could cause the ship to travel faster than the speed of light. To create this bubble, the physicists believe manipulating the 10th spatial dimension would alter the dark energy in three large spatial dimensions: height, width and length. Cleaver said positive dark energy is currently responsible for speeding up the expansion rate of our universe as time moves on.[65]

Heim theory

In 1977, a paper on Heim theory theorized that it may be possible to travel faster than light by using magnetic fields to enter a higher-dimensional space.[66]

Lorentz symmetry violation

The possibility that Lorentz symmetry may be violated has been seriously considered in the last two decades, particularly after the development of a realistic effective field theory that describes this possible violation, the so-called Standard-Model Extension.[67][68][69] This general framework has allowed experimental searches by ultra-high energy cosmic-ray experiments[70] and a wide variety of experiments in gravity, electrons, protons, neutrons, neutrinos, mesons, and photons.[71] The breaking of rotation and boost invariance causes direction dependence in the theory as well as unconventional energy dependence that introduces novel effects, including Lorentz-violating neutrino oscillations and modifications to the dispersion relations of different particle species, which naturally could make particles move faster than light.
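A commonly studied phenomenological form of such a modified dispersion relation (the notation here is schematic and illustrative; specific models and the Standard-Model Extension parametrize this differently) adds a Planck-suppressed term to the usual relation:

E^2 = p^2 c^2 + m^2 c^4 + \eta\,\frac{p^3 c^3}{E_{\mathrm{Pl}}}

with η a dimensionless coefficient and E_Pl the Planck energy; for suitable signs of η the group velocity ∂E/∂p exceeds c at sufficiently high momenta.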

In some models of broken Lorentz symmetry, it is postulated that the symmetry is still built into the most fundamental laws of physics, but that spontaneous symmetry breaking of Lorentz invariance[72] shortly after the Big Bang could have left a "relic field" throughout the universe which causes particles to behave differently depending on their velocity relative to the field;[73] however, there are also some models where Lorentz symmetry is broken in a more fundamental way. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light could be the ultimate constituents of matter.

In current models of Lorentz symmetry violation, the phenomenological parameters are expected to be energy-dependent. Therefore, as widely recognized,[74][75] existing low-energy bounds cannot be applied to high-energy phenomena; however, many searches for Lorentz violation at high energies have been carried out using the Standard-Model Extension.[71] Lorentz symmetry violation is expected to become stronger as one gets closer to the fundamental scale.

Superfluid theories of physical vacuum

In this approach the physical vacuum is viewed as a quantum superfluid which is essentially non-relativistic, while Lorentz symmetry is not an exact symmetry of nature but rather an approximate description valid only for small fluctuations of the superfluid background.[76] Within this framework a theory was proposed in which the physical vacuum is conjectured to be a quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation (sketched below). It was shown that the relativistic gravitational interaction arises as a small-amplitude collective excitation mode,[77] whereas relativistic elementary particles can be described by particle-like modes in the limit of low momenta.[78] The important fact is that at very high velocities the behavior of the particle-like modes becomes distinct from the relativistic one: they can reach the speed-of-light limit at finite energy; also, faster-than-light propagation is possible without requiring moving objects to have imaginary mass.[79][80]
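Schematically (conventions for the coupling and the reference density vary between papers, so this is only an assumed representative form), the logarithmic Schrödinger equation mentioned above reads

i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi - b\,\ln\!\left(\frac{|\psi|^2}{\bar\rho}\right)\psi

where b is the nonlinear coupling and ρ̄ a reference density; the logarithmic nonlinearity is what distinguishes this Bose liquid from the more familiar cubic (Gross–Pitaevskii) case.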

Time of flight of neutrinos

MINOS experiment

In 2007 the MINOS collaboration reported results measuring the flight time of 3 GeV neutrinos, yielding a speed exceeding that of light at 1.8-sigma significance.[81] However, those measurements were considered to be statistically consistent with neutrinos traveling at the speed of light.[82] After the detectors for the project were upgraded in 2012, MINOS corrected their initial result and found agreement with the speed of light. Further measurements are planned.[83]

OPERA neutrino anomaly

On September 22, 2011, a preprint[84] from the OPERA Collaboration indicated detection of 17 and 28 GeV muon neutrinos, sent 730 kilometers (454 miles) from CERN near Geneva, Switzerland, to the Gran Sasso National Laboratory in Italy, traveling faster than light by a relative amount of 2.48×10^−5 (approximately 1 in 40,000), a statistic with 6.0-sigma significance.[85] On 17 November 2011, a second follow-up experiment by OPERA scientists confirmed their initial results.[86][87] However, scientists were skeptical about the results of these experiments, the significance of which was disputed.[88] In March 2012, the ICARUS collaboration failed to reproduce the OPERA results with their equipment, detecting neutrino travel time from CERN to the Gran Sasso National Laboratory indistinguishable from the speed of light.[89] Later the OPERA team reported two flaws in their equipment set-up that had caused errors far outside their original confidence interval: a fiber-optic cable attached improperly, which caused the apparently faster-than-light measurements, and a clock oscillator ticking too fast.[90]
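The reported relative excess translates directly into the widely quoted early-arrival time of about 60 nanoseconds:

    baseline = 730e3          # CERN to Gran Sasso, m
    c = 2.998e8               # m/s

    light_time = baseline / c                 # ~2.44 ms
    early = light_time * 2.48e-5              # reported relative excess
    print(f"light time = {light_time * 1e3:.3f} ms, "
          f"early arrival = {early * 1e9:.1f} ns")   # ~60 ns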

Tachyons

In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. However, it might be possible for an object to exist which always moves faster than light. The hypothetical elementary particles with this property are called tachyonic particles. Attempts to quantize them failed to produce faster-than-light particles, and instead illustrated that their presence leads to an instability.[91][92]
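For completeness, the standard tachyonic energy relation (with a real mass parameter m and v > c) is

E = \frac{m c^2}{\sqrt{\dfrac{v^2}{c^2} - 1}}

so a tachyon's energy decreases as it moves faster, and diverges as v approaches c from above.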

Various theorists have suggested that the neutrino might have a tachyonic nature,[93][94][95][96] while others have disputed the possibility.[97]

Exotic matter

Mechanical equations describing hypothetical exotic matter, which possesses negative mass, negative momentum, negative pressure, and negative kinetic energy, are[98]

E = -\frac{m_0 c^2}{\sqrt{1 + \dfrac{v^2}{c^2}}} > 0, \qquad E_0 = -m_0 c^2 > 0

p = \frac{m_0 v}{\sqrt{1 + \dfrac{v^2}{c^2}}} < 0

E^2 = -p^2 c^2 + m_0^2 c^4

Considering E = ħω and p = ħk, this energy-momentum relation corresponds to the dispersion relation

\omega^2 = -k^2 c^2 + \omega_p^2, \qquad E_0 = \hbar\omega_p = -m_0 c^2 > 0

of a wave that can propagate in a negative-index metamaterial. The radiation pressure in the metamaterial is negative,[99] and negative refraction, the inverse Doppler effect, and the reverse Cherenkov effect imply that the momentum is also negative. So a wave in a negative-index metamaterial can be used to test the theory of exotic matter and negative mass. For example, the velocity equals

v = c\sqrt{\frac{E_0^2}{E^2} - 1} = c\sqrt{\frac{\omega_p^2}{\omega^2} - 1}

\frac{\omega_p^2}{\omega^2} < 2 \;\Rightarrow\; v < c, \qquad \frac{\omega_p^2}{\omega^2} > 2 \;\Rightarrow\; v > c

That is to say, such a wave can break the light barrier under certain conditions.
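A one-line check of the velocity condition above:

    import math

    def v_over_c(ratio):
        """Velocity (units of c) from v = c * sqrt(omega_p^2/omega^2 - 1),
        where ratio = omega_p^2 / omega^2 (must exceed 1 for a real v)."""
        return math.sqrt(ratio - 1)

    print(v_over_c(1.5))   # ratio < 2  ->  v < c
    print(v_over_c(2.0))   # ratio = 2  ->  v = c
    print(v_over_c(3.0))   # ratio > 2  ->  v > c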

General relativity

General relativity was developed after special relativity to include concepts like gravity. It maintains the principle that no object can accelerate to the speed of light in the reference frame of any coincident observer. However, it permits distortions in spacetime that allow an object to move faster than light from the point of view of a distant observer. One such distortion is the Alcubierre drive, which can be thought of as producing a ripple in spacetime that carries an object along with it. Another possible system is the wormhole, which connects two distant locations as though by a shortcut. Both distortions would need to create a very strong curvature in a highly localized region of space-time, and their gravity fields would be immense. To counteract their instability and prevent the distortions from collapsing under their own 'weight', one would need to introduce hypothetical exotic matter or negative energy.

General relativity also recognizes that any means of faster-than-light travel could also be used for time travel. This raises problems with causality. Many physicists believe that the above phenomena are impossible and that future theories of gravity will prohibit them. One theory states that stable wormholes are possible, but that any attempt to use a network of wormholes to violate causality would result in their decay.[citation needed] In string theory, Eric G. Gimon and Petr Hořava have argued[100] that in a supersymmetric five-dimensional Gödel universe, quantum corrections to general relativity effectively cut off regions of spacetime with causality-violating closed timelike curves. In particular, in the quantum theory a smeared supertube is present that cuts the spacetime in such a way that, although in the full spacetime a closed timelike curve passed through every point, no complete curves exist on the interior region bounded by the tube.

Variable speed of light

In physics, the speed of light in a vacuum is assumed to be a constant. However, hypotheses exist that the speed of light is variable.

The speed of light is a dimensional quantity and so, as has been emphasized in this context by João Magueijo, it cannot be measured.[101] Measurable quantities in physics are, without exception, dimensionless, although they are often constructed as ratios of dimensional quantities. For example, when the height of a mountain is measured, what is really measured is the ratio of its height to the length of a meter stick. The conventional SI system of units is based on seven basic dimensional quantities, namely distance, mass, time, electric current, thermodynamic temperature, amount of substance, and luminous intensity.[102] These units are defined to be independent and so cannot be described in terms of each other. As an alternative to using a particular system of units, one can reduce all measurements to dimensionless quantities expressed in terms of ratios between the quantities being measured and various fundamental constants such as Newton's constant, the speed of light and Planck's constant; physicists can define at least 26 dimensionless constants which can be expressed in terms of these sorts of ratios and which are currently thought to be independent of one another.[103] By manipulating the basic dimensional constants one can also construct the Planck time, Planck length and Planck energy which make a good system of units for expressing dimensional measurements, known as Planck units.
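As a concrete example of combining the dimensional constants, a short sketch computing the Planck length, time, and energy from ħ, G, and c:

    import math

    hbar = 1.0546e-34   # reduced Planck constant, J s
    G = 6.674e-11       # Newton's constant, m^3 kg^-1 s^-2
    c = 2.998e8         # speed of light, m/s

    l_P = math.sqrt(hbar * G / c**3)   # Planck length
    t_P = math.sqrt(hbar * G / c**5)   # Planck time
    E_P = math.sqrt(hbar * c**5 / G)   # Planck energy

    print(f"Planck length = {l_P:.3e} m")    # ~1.6e-35 m
    print(f"Planck time   = {t_P:.3e} s")    # ~5.4e-44 s
    print(f"Planck energy = {E_P:.3e} J")    # ~2.0e9 J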

Magueijo's proposal used a different set of units, a choice which he justifies with the claim that some equations will be simpler in these new units. In the new units he fixes the fine structure constant, a quantity which some people, using units in which the speed of light is fixed, have claimed is time-dependent. Thus in the system of units in which the fine structure constant is fixed, the observational claim is that the speed of light is time-dependent.
