Wednesday, May 20, 2015

Multiverse


From Wikipedia, the free encyclopedia

The multiverse (or meta-universe) is the hypothetical set of infinite or finite possible universes (including the Universe we consistently experience) that together comprise everything that exists: the entirety of space, time, matter, and energy as well as the physical laws and constants that describe them. The various universes within the multiverse are sometimes called "parallel universes" or "alternate universes".

The structure of the multiverse, the nature of each universe within it and the relationships among the various constituent universes, depend on the specific multiverse hypothesis considered. Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, and fiction, particularly in science fiction and fantasy. In these contexts, parallel universes are also called "alternate universes", "quantum universes", "interpenetrating dimensions", "parallel dimensions", "parallel worlds", "alternate realities", "alternate timelines", and "dimensional planes", among others. The term 'multiverse' was coined in 1895 by the American philosopher and psychologist William James in a different context.[1]

The multiverse hypothesis is a source of debate within the physics community. Physicists disagree about whether the multiverse exists, and whether the multiverse is a proper subject of scientific inquiry.[2] Supporters of one of the multiverse hypotheses include Stephen Hawking,[3] Brian Greene,[4][5] Max Tegmark,[6] Alan Guth,[7] Andrei Linde,[8] Michio Kaku,[9] David Deutsch,[10] Leonard Susskind,[11] Raj Pathria,[12] Alexander Vilenkin,[13] Laura Mersini-Houghton,[14][15] Neil deGrasse Tyson[16] and Sean Carroll.[17] In contrast, those who are not proponents of the multiverse include: Nobel laureate Steven Weinberg,[18] Nobel laureate David Gross,[19] Paul Steinhardt,[20] Neil Turok,[21] Viatcheslav Mukhanov,[22] George Ellis,[23][24] Jim Baggott,[25] and Paul Davies. Some argue that the multiverse question is philosophical rather than scientific, that the multiverse cannot be a scientific question because it lacks falsifiability, or even that the multiverse hypothesis is harmful or pseudoscientific.

Multiverse hypotheses in physics

Categories

Max Tegmark and Brian Greene have devised classification schemes that categorize the various theoretical types of multiverse, or types of universe that might theoretically comprise a multiverse ensemble.

Max Tegmark's four levels

Cosmologist Max Tegmark has provided a taxonomy of universes beyond the familiar observable universe. The levels according to Tegmark's classification are arranged such that subsequent levels can be understood to encompass and expand upon previous levels, and they are briefly described below.[26][27]
Level I: Beyond our cosmological horizon
A generic prediction of chaotic inflation is an infinite ergodic universe, which, being infinite, must contain Hubble volumes realizing all initial conditions.

Accordingly, an infinite universe will contain an infinite number of Hubble volumes, all having the same physical laws and physical constants. In regard to configurations such as the distribution of matter, almost all will differ from our Hubble volume. However, because there are infinitely many, far beyond the cosmological horizon, there will eventually be Hubble volumes with similar, and even identical, configurations. Tegmark estimates that an identical volume to ours should be about 10^(10^115) meters away from us.[6] Given infinite space, there would, in fact, be an infinite number of Hubble volumes identical to ours in the Universe.[28] This follows directly from the cosmological principle, wherein it is assumed our Hubble volume is not special or unique.
Level II: Universes with different physical constants

"Bubble universes": every disk is a bubble universe (Universe 1 to Universe 6 are different bubbles; they have physical constants that are different from our universe); our universe is just one of the bubbles.

In the chaotic inflation theory, a variant of the cosmic inflation theory, the multiverse as a whole is stretching and will continue doing so forever,[29] but some regions of space stop stretching and form distinct bubbles, like gas pockets in a loaf of rising bread. Such bubbles are embryonic level I multiverses. Linde and Vanchurin calculated the number of these universes to be on the order of 10^(10^(10,000,000)).[30]

Different bubbles may experience different spontaneous symmetry breaking resulting in different properties such as different physical constants.[28]

This level also includes John Archibald Wheeler's oscillatory universe theory and Lee Smolin's fecund universes theory.
Level III: Many-worlds interpretation of quantum mechanics
Hugh Everett's many-worlds interpretation (MWI) is one of several mainstream interpretations of quantum mechanics. In brief, one aspect of quantum mechanics is that certain observations cannot be predicted absolutely.
Instead, there is a range of possible observations, each with a different probability. According to the MWI, each of these possible observations corresponds to a different universe. Suppose a six-sided die is thrown and that the result of the throw corresponds to a quantum mechanics observable. All six possible ways the die can fall correspond to six different universes.

Tegmark argues that a level III multiverse does not contain more possibilities in the Hubble volume than a level I-II multiverse. In effect, all the different "worlds" created by "splits" in a level III multiverse with the same physical constants can be found in some Hubble volume in a level I multiverse. Tegmark writes that "The only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert space." Similarly, all level II bubble universes with different physical constants can in effect be found as "worlds" created by "splits" at the moment of spontaneous symmetry breaking in a level III multiverse.[28] According to Nomura[31] and Bousso and Susskind,[11] this is because global spacetime appearing in the (eternally) inflating multiverse is a redundant concept. This implies that the multiverses of Level I, II, and III are, in fact, the same thing. This hypothesis is referred to as "Multiverse = Quantum Many Worlds".

Related to the many-worlds idea are Richard Feynman's multiple histories interpretation and H. Dieter Zeh's many-minds interpretation.
Level IV: Ultimate ensemble
The ultimate ensemble or mathematical universe hypothesis is the hypothesis of Tegmark himself.[32] This level considers equally real all universes that can be described by different mathematical structures. Tegmark writes that "abstract mathematics is so general that any Theory Of Everything (TOE) that is definable in purely formal terms (independent of vague human terminology) is also a mathematical structure. For instance, a TOE involving a set of different types of entities (denoted by words, say) and relations between them (denoted by additional words) is nothing but what mathematicians call a set-theoretical model, and one can generally find a formal system that it is a model of." He argues this "implies that any conceivable parallel universe theory can be described at Level IV" and "subsumes all other ensembles, therefore brings closure to the hierarchy of multiverses, and there cannot be say a Level V."[6]

Jürgen Schmidhuber, however, says the "set of mathematical structures" is not even well-defined, and admits only universe representations describable by constructive mathematics, that is, computer programs. He explicitly includes universe representations describable by non-halting programs whose output bits converge after finite time, although the convergence time itself may not be predictable by a halting program, due to Kurt Gödel's limitations.[33][34][35] He also explicitly discusses the more restricted ensemble of quickly computable universes.[36]

Brian Greene's nine types

American theoretical physicist and string theorist Brian Greene discussed nine types of parallel universes:[37]
Quilted
The quilted multiverse works only in an infinite universe. With an infinite amount of space, every possible event will occur an infinite number of times. However, the speed of light prevents us from being aware of these other identical areas.
Inflationary
The inflationary multiverse is composed of various pockets where inflation fields collapse and form new universes.
Brane
The brane multiverse follows from M-theory and states that our universe is a 3-dimensional brane that exists, with many others, in a higher-dimensional space or "bulk". Particles are bound to their respective branes except for gravity.
Cyclic
The cyclic multiverse (via the ekpyrotic scenario) has multiple branes (each a universe) that collide, causing Big Bangs. The universes bounce back and pass through time until they are pulled back together and collide again, destroying the old contents and creating them anew.
Landscape
The landscape multiverse relies on string theory's Calabi–Yau shapes. Quantum fluctuations drop the shapes to a lower energy level, creating a pocket with a different set of laws from the surrounding space.
Quantum
The quantum multiverse creates a new universe when a diversion in events occurs, as in the many-worlds interpretation of quantum mechanics.
Holographic
The holographic multiverse is derived from the theory that the surface area of a space can simulate the volume of the region.
Simulated
The simulated multiverse exists on complex computer systems that simulate entire universes.
Ultimate
The ultimate multiverse contains every mathematically possible universe under different laws of physics.

Cyclic theories

In several theories there is an infinite series of self-sustaining cycles (for example, an eternity of Big Bangs followed by Big Crunches).

M-theory

A multiverse of a somewhat different kind has been envisaged within string theory and its higher-dimensional extension, M-theory.[38] These theories require the presence of 10 or 11 spacetime dimensions respectively. The extra 6 or 7 dimensions may either be compactified on a very small scale, or our universe may simply be localized on a dynamical (3+1)-dimensional object, a D-brane. This opens up the possibility that there are other branes which could support "other universes".[39][40] This is unlike the universes in the "quantum multiverse", but both concepts can operate at the same time.[citation needed]
Some scenarios postulate that our big bang was created, along with our universe, by the collision of two branes.[39][40]

Black-hole cosmology

A black-hole cosmology is a cosmological model in which the observable universe is the interior of a black hole existing as one of possibly many inside a larger universe. This includes the theory of white holes, which lie on the opposite side of spacetime: while a black hole pulls everything in, including light, a white hole releases matter and light, hence the name "white hole".

Anthropic principle

The concept of other universes has been proposed to explain how our own universe appears to be fine-tuned for conscious life as we experience it. If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), some of these universes, even if very few, would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve. The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developed consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand life) to emerge and evolve, this does not require intelligent design as an explanation for the conditions in the Universe that promote our existence in it.

Search for evidence

Around 2010, scientists such as Stephen M. Feeney analyzed Wilkinson Microwave Anisotropy Probe (WMAP) data and claimed to find preliminary evidence suggesting that our universe collided with other (parallel) universes in the distant past.[41][unreliable source?][42][43][44] However, a more thorough analysis of data from the WMAP and from the Planck satellite, which has a resolution 3 times higher than WMAP, failed to find any statistically significant evidence of such a bubble universe collision.[45][46] In addition, there is no evidence of any gravitational pull of other universes on ours.[47][48]

Criticism

Non-scientific claims

In his 2003 New York Times opinion piece, "A Brief History of the Multiverse", author and cosmologist Paul Davies offers a variety of arguments that multiverse theories are non-scientific:[49]
For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there are an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification. Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. The multiverse theory may be dressed up in scientific language, but in essence it requires the same leap of faith.
— Paul Davies, A Brief History of the Multiverse
Taking cosmic inflation as a popular case in point, George Ellis, writing in August 2011, provided a balanced criticism of not only the science but also, as he suggests, the scientific philosophy by which multiverse theories are generally substantiated. He, like most cosmologists, accepts Tegmark's level I "domains", even though they lie far beyond the cosmological horizon. Likewise, the multiverse of cosmic inflation is said to exist very far away, so far that it is very unlikely any evidence of an early interaction will ever be found. He argues that for many theorists the lack of empirical testability or falsifiability is not a major concern. "Many physicists who talk about the multiverse, especially advocates of the string landscape, do not care much about parallel universes per se. For them, objections to the multiverse as a concept are unimportant. Their theories live or die based on internal consistency and, one hopes, eventual laboratory testing." Although he believes there is little hope that this will ever be possible, he grants that the theories on which the speculation is based are not without scientific merit. He concludes that multiverse theory is a "productive research program":[50]
As skeptical as I am, I think the contemplation of the multiverse is an excellent opportunity to reflect on the nature of science and on the ultimate nature of existence: why we are here… In looking at this concept, we need an open mind, though not too open. It is a delicate path to tread. Parallel universes may or may not exist; the case is unproved. We are going to have to live with that uncertainty. Nothing is wrong with scientifically based philosophical speculation, which is what multiverse proposals are. But we should name it for what it is.
— George Ellis, Scientific American, Does the Multiverse Really Exist?

Occam's razor

Proponents and critics disagree about how to apply Occam's razor. Critics argue that to postulate a practically infinite number of unobservable universes just to explain our own seems contrary to Occam's razor.[51] In contrast, proponents argue that, in terms of Kolmogorov complexity, the proposed multiverse is simpler than a single idiosyncratic universe.[28]

For example, multiverse proponent Max Tegmark argues:
[A]n entire ensemble is often much simpler than one of its members. This principle can be stated more formally using the notion of algorithmic information content. The algorithmic information content in a number is, roughly speaking, the length of the shortest computer program that will produce that number as output. For example, consider the set of all integers. Which is simpler, the whole set or just one number? Naively, you might think that a single number is simpler, but the entire set can be generated by quite a trivial computer program, whereas a single number can be hugely long. Therefore, the whole set is actually simpler... (Similarly), the higher-level multiverses are simpler. Going from our universe to the Level I multiverse eliminates the need to specify initial conditions, upgrading to Level II eliminates the need to specify physical constants, and the Level IV multiverse eliminates the need to specify anything at all.... A common feature of all four multiverse levels is that the simplest and arguably most elegant theory involves parallel universes by default. To deny the existence of those universes, one needs to complicate the theory by adding experimentally unsupported processes and ad hoc postulates: finite space, wave function collapse and ontological asymmetry. Our judgment therefore comes down to which we find more wasteful and inelegant: many worlds or many words. Perhaps we will gradually get used to the weird ways of our cosmos and find its strangeness to be part of its charm.[28]
— Max Tegmark, "Parallel universes. Not just a staple of science fiction, other universes are a direct implication of cosmological observations." Scientific American 2003 May;288(5):40–51
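Tegmark's "algorithmic information content" argument can be made concrete with a small Python sketch (an illustration, not part of the article): a program that enumerates every integer is only a few lines long, while a single arbitrary integer may need roughly its own length to write down.

```python
import itertools

# A few lines of code enumerate *every* integer: 0, 1, -1, 2, -2, ...
def all_integers():
    yield 0
    for n in itertools.count(1):
        yield n
        yield -n

# ...whereas one idiosyncratic integer can need roughly its own digit count
# to specify; in general it has no shorter description than itself.
idiosyncratic = 31415926535897932384626433832795028841971693993751

first_five = list(itertools.islice(all_integers(), 5))
print(first_five)  # [0, 1, -1, 2, -2]
```

In this sense the whole ensemble (the generator) is algorithmically simpler than many of its individual members, which is the intuition behind Tegmark's claim that higher-level multiverses are "simpler".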
Princeton cosmologist Paul Steinhardt used the 2014 Annual Edge Question to voice his opposition to multiverse theorizing:
A pervasive idea in fundamental physics and cosmology that should be retired: the notion that we live in a multiverse in which the laws of physics and the properties of the cosmos vary randomly from one patch of space to another. According to this view, the laws and properties within our observable universe cannot be explained or predicted because they are set by chance. Different regions of space too distant to ever be observed have different laws and properties, according to this picture. Over the entire multiverse, there are infinitely many distinct patches. Among these patches, in the words of Alan Guth, "anything that can happen will happen—and it will happen infinitely many times". Hence, I refer to this concept as a Theory of Anything. Any observation or combination of observations is consistent with a Theory of Anything. No observation or combination of observations can disprove it. Proponents seem to revel in the fact that the Theory cannot be falsified. The rest of the scientific community should be up in arms since an unfalsifiable idea lies beyond the bounds of normal science. Yet, except for a few voices, there has been surprising complacency and, in some cases, grudging acceptance of a Theory of Anything as a logical possibility. The scientific journals are full of papers treating the Theory of Anything seriously. What is going on?[20]
— Paul Steinhardt, "Theories of Anything", edge.com
Steinhardt claims that multiverse theories have gained currency mostly because too much has been invested in theories that have failed, e.g. inflation or string theory. He tends to see in them an attempt to redefine the values of science to which he objects even more strongly:
A Theory of Anything is useless because it does not rule out any possibility and worthless because it submits to no do-or-die tests. (Many papers discuss potential observable consequences, but these are only possibilities, not certainties, so the Theory is never really put at risk.)[20]
— Paul Steinhardt, "Theories of Anything", edge.com

Multiverse hypotheses in philosophy and logic

Modal realism

Possible worlds are a way of explaining probability, hypothetical statements and the like, and some philosophers such as David Lewis believe that all possible worlds exist, and are just as real as the actual world (a position known as modal realism).[52]

Trans-world identity

A metaphysical issue that crops up in multiverse schemata positing infinite identical copies of any given universe is whether there can be identical objects in different possible worlds. According to the counterpart theory of David Lewis, such objects should be regarded as similar counterparts rather than as identical.[53][54]

Fictional realism

Fictional realism is the view that because fictions exist, fictional characters exist as well: there are fictional entities, in the same sense in which, setting aside philosophical disputes, there are people, Mondays, numbers, and planets.[55][56]

Telescope


From Wikipedia, the free encyclopedia


The 100 inch (2.54 m) Hooker reflecting telescope at Mount Wilson Observatory near Los Angeles, USA.

A telescope is an instrument that aids in the observation of remote objects by collecting electromagnetic radiation (such as visible light). The first known practical telescopes were invented in the Netherlands at the beginning of the 17th century, using glass lenses. They found use in terrestrial applications and astronomy.

Within a few decades, the reflecting telescope was invented, which used mirrors. In the 20th century many new types of telescopes were invented, including radio telescopes in the 1930s and infrared telescopes in the 1960s. The word telescope now refers to a wide range of instruments detecting different regions of the electromagnetic spectrum, and in some cases other types of detectors.

The word "telescope" (from the Greek τῆλε, tele "far" and σκοπεῖν, skopein "to look or see"; τηλεσκόπος, teleskopos "far-seeing") was coined in 1611 by the Greek mathematician Giovanni Demisiani for one of Galileo Galilei's instruments presented at a banquet at the Accademia dei Lincei.[1][2][3] In the Starry Messenger, Galileo had used the term "perspicillum".

History

Modern telescopes typically use CCDs instead of film for recording images. This is the sensor array in the Kepler spacecraft.

28-inch telescope and 40-foot telescope in Greenwich in 2015.

The earliest recorded working telescopes were the refracting telescopes that appeared in the Netherlands in 1608. Their development is credited to three individuals: Hans Lippershey and Zacharias Janssen, who were spectacle makers in Middelburg, and Jacob Metius of Alkmaar.[4] Galileo heard about the Dutch telescope in June 1609, built his own within a month,[5] and improved upon the design in the following year.

The idea that the objective, or light-gathering element, could be a mirror instead of a lens was being investigated soon after the invention of the refracting telescope.[6] The potential advantages of using parabolic mirrors—reduction of spherical aberration and no chromatic aberration—led to many proposed designs and several attempts to build reflecting telescopes.[7] In 1668, Isaac Newton built the first practical reflecting telescope, of a design which now bears his name, the Newtonian reflector.

The invention of the achromatic lens in 1733 partially corrected the color aberrations present in the simple lens and enabled the construction of shorter, more functional refracting telescopes. Reflecting telescopes, though not limited by the color problems seen in refractors, were hampered by the fast-tarnishing speculum metal mirrors employed during the 18th and early 19th centuries, a problem alleviated by the introduction of silver-coated glass mirrors in 1857[8] and aluminized mirrors in 1932.[9] The maximum practical size for refracting telescopes is about 1 meter (40 inches), which is why the vast majority of large optical research telescopes built since the turn of the 20th century have been reflectors. The largest reflecting telescopes currently have objectives larger than 10 m (33 feet), and work is underway on several 30–40 m designs.

The 20th century also saw the development of telescopes that worked in a wide range of wavelengths from radio to gamma-rays. The first purpose built radio telescope went into operation in 1937. Since then, a tremendous variety of complex astronomical instruments have been developed.

Types

The name "telescope" covers a wide range of instruments. Most detect electromagnetic radiation, but there are major differences in how astronomers must go about collecting light (electromagnetic radiation) in different frequency bands.

Telescopes may be classified by the wavelengths of light they detect:
Light comparison

Name          Wavelength           Frequency            Photon energy
Gamma ray     less than 0.01 nm    more than 30 EHz     100 keV – 300+ GeV
X-ray         0.01 nm – 10 nm      30 PHz – 30 EHz      120 eV – 120 keV
Ultraviolet   10 nm – 400 nm       30 PHz – 790 THz     3 eV – 124 eV
Visible       390 nm – 750 nm      790 THz – 405 THz    1.7 eV – 3.3 eV
Infrared      750 nm – 1 mm        405 THz – 300 GHz    1.24 meV – 1.7 eV
Microwave     1 mm – 1 m           300 GHz – 300 MHz    1.24 meV – 1.24 µeV
Radio         1 mm – km            300 GHz – 3 Hz       1.24 meV – 12.4 feV
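The table's columns are linked by two standard relations, c = λf and E = hf. A minimal Python sketch (illustrative; constants are approximate) shows how to convert a wavelength into the corresponding frequency and photon energy:

```python
C = 2.998e8        # speed of light, m/s
H_EV = 4.1357e-15  # Planck constant, eV*s

def frequency_hz(wavelength_m):
    """Frequency corresponding to a wavelength, via c = wavelength * frequency."""
    return C / wavelength_m

def photon_energy_ev(wavelength_m):
    """Photon energy in eV, via E = h * f."""
    return H_EV * frequency_hz(wavelength_m)

# Visible light at 400 nm sits near the table's ~790 THz / ~3.1 eV boundary:
print(frequency_hz(400e-9))      # ~7.5e14 Hz
print(photon_energy_ev(400e-9))  # ~3.1 eV
```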
As wavelengths become longer, it becomes easier to use antenna technology to interact with electromagnetic radiation (although it is possible to make very small antennas). The near-infrared can be handled much like visible light; however, in the far-infrared and submillimetre range, telescopes operate more like radio telescopes. For example, the James Clerk Maxwell Telescope observes at wavelengths from 3 μm (0.003 mm) to 2000 μm (2 mm), but uses a parabolic aluminum antenna.[10]

On the other hand, the Spitzer Space Telescope, observing from about 3 μm (0.003 mm) to 180 μm (0.18 mm) uses a mirror (reflecting optics). Also using reflecting optics, the Hubble Space Telescope with Wide Field Camera 3 can observe from about 0.2 μm (0.0002 mm) to 1.7 μm (0.0017 mm) (from ultra-violet to infrared light).[11]
Another threshold in telescope design, as photon energy increases (shorter wavelengths and higher frequencies), is the use of fully reflecting optics rather than glancing-incidence optics. Telescopes such as TRACE and SOHO use special mirrors to reflect extreme ultraviolet, producing higher-resolution and brighter images than would otherwise be possible. A larger aperture does not just mean that more light is collected; it also enables a finer angular resolution.
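The aperture/resolution link can be sketched with the Rayleigh diffraction criterion, θ ≈ 1.22 λ/D. The formula is standard optics rather than something stated in the text; the numbers below are illustrative.

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution (Rayleigh criterion), in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# Doubling the aperture halves the diffraction-limited resolution:
small = rayleigh_limit_arcsec(550e-9, 5.0)   # 5 m mirror, visible light
large = rayleigh_limit_arcsec(550e-9, 10.0)  # 10 m mirror
print(large)  # ~0.014 arcsec
```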

Telescopes may also be classified by location: ground telescope, space telescope, or flying telescope. They may also be classified by whether they are operated by professional astronomers or amateur astronomers. A vehicle or permanent campus containing one or more telescopes or other instruments is called an observatory.

Optical telescopes


50 cm refracting telescope at Nice Observatory.

An optical telescope gathers and focuses light mainly from the visible part of the electromagnetic spectrum (although some work in the infrared and ultraviolet).[12] Optical telescopes increase the apparent angular size of distant objects as well as their apparent brightness. So that the image can be observed, photographed, studied, and sent to a computer, telescopes employ one or more curved optical elements, usually made from glass lenses and/or mirrors, to gather light and other electromagnetic radiation and bring it to a focal point. Optical telescopes are used for astronomy and in many non-astronomical instruments, including theodolites (including transits), spotting scopes, monoculars, binoculars, camera lenses, and spyglasses. There are three main optical types: the refractor, which uses lenses; the reflector, which uses mirrors; and the catadioptric telescope, which combines lenses and mirrors.
Beyond these basic optical types there are many sub-types of varying optical design classified by the task they perform such as astrographs, comet seekers, solar telescope, etc.

Radio telescopes


The Very Large Array at Socorro, New Mexico, United States.

Radio telescopes are directional radio antennas used for radio astronomy. The dishes are sometimes constructed of a conductive wire mesh whose openings are smaller than the wavelength being observed. Multi-element radio telescopes are constructed from pairs or larger groups of these dishes to synthesize large "virtual" apertures that are similar in size to the separation between the telescopes; this process is known as aperture synthesis. As of 2005, the record array size was many times the diameter of the Earth, achieved with space-based Very Long Baseline Interferometry (VLBI) telescopes such as the Japanese HALCA (Highly Advanced Laboratory for Communications and Astronomy) VSOP (VLBI Space Observatory Program) satellite. Aperture synthesis is now also being applied to optical telescopes, using optical interferometers (arrays of optical telescopes) and aperture-masking interferometry at single reflecting telescopes. Radio telescopes are also used to collect microwave radiation, which can be detected even when visible light is obstructed or faint, as with quasars. Some radio telescopes, such as the Arecibo Observatory, are used by programs such as SETI to search for extraterrestrial life.
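Why aperture synthesis pays off can be sketched in a few lines: an interferometer's angular resolution is set by its longest baseline B (θ ≈ λ/B), not by any single dish's diameter. The numbers below are illustrative, not drawn from a specific instrument.

```python
import math

def resolution_arcsec(wavelength_m, baseline_m):
    """Approximate angular resolution theta ~ wavelength / baseline, in arcseconds."""
    return math.degrees(wavelength_m / baseline_m) * 3600.0

dish = resolution_arcsec(0.21, 100.0)    # one 100 m dish at the 21 cm line
array = resolution_arcsec(0.21, 1.2e7)   # Earth-scale 12,000 km baseline
print(dish / array)  # the long baseline buys a factor of 120,000 in resolution
```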

X-ray telescopes


Einstein Observatory was a space-based focusing optical X-ray telescope launched in 1978.[13]

X-ray telescopes can use X-ray optics, such as Wolter telescopes composed of ring-shaped "glancing" mirrors made of heavy metals that are able to reflect the rays at grazing angles of just a few degrees. The mirrors are usually a section of a rotated parabola and a hyperbola, or ellipse. In 1952, Hans Wolter outlined three ways a telescope could be built using only this kind of mirror.[14][15] Examples of observatories using this type of telescope are the Einstein Observatory, ROSAT, and the Chandra X-ray Observatory. By 2010, Wolter focusing X-ray telescopes were possible up to 79 keV.[13]

Gamma-ray telescopes

Higher-energy X-ray and gamma-ray telescopes forgo focusing optics entirely and use coded aperture masks: the pattern of the shadow the mask creates can be reconstructed to form an image.

X-ray and Gamma-ray telescopes are usually on Earth-orbiting satellites or high-flying balloons since the Earth's atmosphere is opaque to this part of the electromagnetic spectrum. However, high energy X-rays and gamma-rays do not form an image in the same way as telescopes at visible wavelengths. An example of this type of telescope is the Fermi Gamma-ray Space Telescope.

The detection of very high energy gamma rays, with shorter wavelengths and higher frequencies than regular gamma rays, requires further specialization. An example of this type of observatory is VERITAS. Very high energy gamma rays are still photons, like visible light, whereas cosmic rays include particles such as electrons, protons, and heavier nuclei.

A discovery in 2012 may allow focusing gamma-ray telescopes.[16] At photon energies greater than 700 keV, the index of refraction starts to increase again.[16]

High-energy particle telescopes

High-energy particle astronomy requires specialized telescopes to make observations, since most of these particles pass through most metals and glasses.

In other types of high-energy particle telescopes there is no image-forming optical system. Cosmic-ray telescopes usually consist of an array of different detector types spread out over a large area. A neutrino telescope consists of a large mass of water or ice surrounded by an array of sensitive light detectors known as photomultiplier tubes. Energetic neutral atom observatories such as the Interstellar Boundary Explorer detect particles traveling at certain energies.

Other types of telescopes


Equatorial-mounted Keplerian telescope

Astronomy is not limited to using electromagnetic radiation; additional information can be obtained using other media. The detectors used to observe the Universe in these media are analogous to telescopes; they include cosmic-ray detectors, neutrino detectors, and gravitational-wave detectors.

Types of mount

A telescope mount is a mechanical structure which supports a telescope. Telescope mounts are designed to support the mass of the telescope and allow for accurate pointing of the instrument. Many sorts of mounts have been developed over the years, with the majority of effort being put into systems that can track the motion of the stars as the Earth rotates. The two main types of tracking mount are altazimuth mounts and equatorial mounts.

Atmospheric electromagnetic opacity

Since the atmosphere is opaque for most of the electromagnetic spectrum, only a few bands can be observed from the Earth's surface. These bands are visible – near-infrared and a portion of the radio-wave part of the spectrum. For this reason there are no ground-based X-ray or far-infrared telescopes; instruments for these wavelengths must be flown in space.
Even if a wavelength is observable from the ground, it may still be advantageous to place a telescope on a satellite to avoid the effects of astronomical seeing.

A diagram of the electromagnetic spectrum with the Earth's atmospheric transmittance (or opacity) and the types of telescopes used to image parts of the spectrum.

Telescopic image from different telescope types

Different types of telescope, operating in different wavelength bands, provide different information about the same object. Together they provide a more comprehensive understanding.

A 6′ wide view of the Crab nebula supernova remnant, viewed at different wavelengths of light by various telescopes

By spectrum

Telescopes that operate in the electromagnetic spectrum:

Name          | Telescope                   | Astronomy                         | Wavelength
Radio         | Radio telescope             | Radio astronomy (Radar astronomy) | more than 1 mm
Submillimetre | Submillimetre telescopes*   | Submillimetre astronomy           | 0.1 mm – 1 mm
Far infrared  | —                           | Far-infrared astronomy            | 30 µm – 450 µm
Infrared      | Infrared telescope          | Infrared astronomy                | 700 nm – 1 mm
Visible       | Visible-spectrum telescopes | Visible-light astronomy           | 400 nm – 700 nm
Ultraviolet   | Ultraviolet telescopes*     | Ultraviolet astronomy             | 10 nm – 400 nm
X-ray         | X-ray telescope             | X-ray astronomy                   | 0.01 nm – 10 nm
Gamma-ray     | —                           | Gamma-ray astronomy               | less than 0.01 nm

*Links to categories.
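The wavelength cut-offs in the table above can be expressed as a small lookup. A minimal sketch in Python (the band names and edges come straight from the table; where adjacent rows overlap, the shorter-wavelength band is preferred, which is a simplifying assumption of mine):

```python
# Classify a photon by the wavelength bands of the table above.
# Where adjacent rows overlap (e.g. infrared and submillimetre both
# reach 1 mm), the shorter-wavelength band wins -- an assumption.

BANDS = [            # (upper wavelength edge in metres, band name)
    (0.01e-9, "Gamma-ray"),
    (10e-9,   "X-ray"),
    (400e-9,  "Ultraviolet"),
    (700e-9,  "Visible"),
    (0.1e-3,  "Infrared"),
    (1e-3,    "Submillimetre"),
]

def band(wavelength_m):
    """Return the spectral band for a wavelength given in metres."""
    for upper, name in BANDS:
        if wavelength_m < upper:
            return name
    return "Radio"   # everything longer than 1 mm

print(band(550e-9))  # green light -> Visible
print(band(0.21))    # 21 cm hydrogen line -> Radio
```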

Lists of telescopes

Tuesday, May 19, 2015

An alternative metric to assess global warming


by Roger A. Pielke Sr., Richard T. McNider, and John Christy

Original link:  http://judithcurry.com/2014/04/28/an-alternative-metric-to-assess-global-warming/

The thing we’ve all forgotten is the heat storage of the ocean – it’s a thousand times greater than the atmosphere and the surface.  – James Lovelock

This aspect of the climate system is why it has been proposed to use the changes in the ocean heat content to diagnose the global radiative imbalance, as summarized in Pielke (2003, 2008). In this weblog post, we take advantage of this natural space and time integrator of global warming and cooling.

We present this alternative tool for assessing the magnitude of global warming, based on the annual global average radiative imbalance and the annual global average radiative forcing and feedbacks. Among our findings is the difficulty of reconciling the three terms.

Introduction

As summarized in NRC (2005) “the concept of radiative forcing is based on the hypothesis that the change in global annual mean surface temperature is proportional to the imposed global annual mean forcing, independent of the nature of the applied forcing. The fundamental assumption underlying the radiative forcing concept is that the surface and the troposphere are strongly coupled by convective heat transfer processes; that is, the earth-troposphere system is in a state of radiative-convective equilibrium.”

According to the radiative-convective equilibrium concept, the equation for determining global average surface temperature is ΔQ = ΔF – ΔT/ λ   (1), where ΔQ is the radiative imbalance, ΔF is the radiative forcing, and ΔT is the change in temperature over the same time period. The quantity λ is referred to as the radiative feedback parameter which has been used to relate temperature response to a change in radiative forcing (Gregory et al. 2002, NRC 2005). As such, it has been used as the primary global metric for assessing global warming due to anthropogenic changes in radiative forcing. The quantity ΔT is typically defined as the near-surface global average surface air temperature.

While perhaps conceptually useful, the actual implementation of the equation can be difficult. First, the measurement of ΔT has been shown to have issues with its accurate quantification. In the equation, ΔT is meant to represent both the radiative temperature of the Earth system and the accumulation of heat through the temperature change that would occur as a radiative imbalance occurs. However, changes in temperature at the surface can occur due to a vertical redistribution of heat not necessarily due to an accumulation of heat (McNider et al. 2012), site location issues (Pielke et al. 2007; Fall et al. 2011), as well as due to regional changes in surface temperatures from land-use change, aerosol deposition, and atmospheric aerosols (e.g., Christy et al. 2006, 2009; Strack et al. 2007; Mahmood et al. 2013). Even more importantly, as shown in recent studies (Levitus et al. 2012), a significant fraction of the heat added to the climate system is at depth in the oceans, and thus cannot be sampled completely by ΔT (Spencer and Braswell 2013).

Computing the radiative imbalance ΔQ as a residual of large positive and negative values in the radiative flux budget introduces a large uncertainty. Stephens et al. (2012) report a value for the global average radiative imbalance (which they call the "surface imbalance") of 0.70 W m-2, but with an uncertainty of 17 W m-2!

We propose an alternate approach based on the analysis of the accumulation rate of heat in the Earth system in Joules per unit time. We believe the radiative imbalance can be diagnosed much more accurately from the ocean heat uptake since the ocean, because of its density, area, and depth (i.e., its mass and heat capacity), is by far the dominant reservoir of climate system heat changes (Pielke, 2003, 2005; Levitus et al. 2012; Trenberth and Fasullo 2013). Thus, the difference in ocean heat content at two different times largely accounts for the global average radiative imbalance over that period (within the uncertainty of the ocean heat measurements). Once the global annual average radiative imbalance is defined by the ocean accumulation of heat (adjusted for the smaller added heating from other parts of the climate system), we can form an equation that describes this imbalance as
Global annual average radiative imbalance [GAARI] = Global annual average radiative forcing [GAARF] + Global annual average radiative feedbacks [GAARFB] (2), where the units are in Joules per time period (and can be expressed as Watts per area).

Levitus et al. (2012) reported that since 1955, the layer from the surface to 2000 m depth had a warming rate of 0.39 W m-2 ± 0.031 W m-2 per unit area of the Earth's surface, which accounts for approximately 90% of the warming of the climate system. Thus, if we add the remaining 10%, the 1955-2010 GAARI = 0.43 W m-2 ± 0.031 W m-2.
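The step from the 0–2000 m ocean warming rate to the whole-system imbalance is simple arithmetic; a minimal sketch (values from Levitus et al. 2012 as quoted above, variable names are mine):

```python
# Ocean (0-2000 m) warming rate per unit area of the Earth's surface,
# from Levitus et al. (2012) as quoted above.
ocean_rate = 0.39       # W m-2
ocean_fraction = 0.90   # the ocean holds ~90% of the system's heat gain

# Scale up to the whole climate system to recover GAARI.
gaari = ocean_rate / ocean_fraction
print(round(gaari, 2))  # 0.43 W m-2, matching the value in the text
```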

The radiative forcing can be obtained from the 2013 IPCC SPM WG1 report (unfortunately, they do not give the values for specific time periods but give a difference from 1750 to 1950, 1980 and 2011). Presumably, some of this forcing has been accommodated by warming over the time period, but the IPCC does not address this.

Figure SPM.5 in IPCC (2013) [reproduced below] yields a net radiative forcing of 2.29 (1.13 to 3.33) W m-2 for the change in the annual average global radiative forcing from 1750 to 2011. The reported change in radiative forcing from 1750 to 1950 is 0.57 (0.29 to 0.85) W m-2. If we assume that all of the radiative forcing up to 1950 has already resulted in feedbacks which remove this net positive forcing, the remaining mean estimate for the current GAARF is 2.29 − 0.57 = 1.72 W m-2.

SPM5

For GAARFB, Wielicki et al. (2013; their figure 1; reproduced below) has radiative feedbacks  =  -4.2 W m-2 K-1 (from temperature increases) + water vapor feedback (1.9 W m-2 K-1) + the albedo feedback (0.30 W m-2 K-1) + the cloud feedback (0.79 W m-2 K-1)   =  -1.21 W m-2 K-1.

Wielicki

It needs to be recognized that deep ocean heating is an unappreciated effective negative temperature feedback, at least in terms of how this heat can significantly influence other parts of the climate system on multi-decadal time scales. Nonetheless, we have retained this heating in our analysis.

Over the time period 1955 to 2010, the global surface temperatures supposedly increased by about 0.6 K (Figure SPM1 from IPCC, 2013 and reproduced below).
Figure SPM1
Thus, GAARFB = -1.21 W m-2 K-1 x 0.6K = -0.73 W m-2.

Using the IPCC GAARF of 1.72 W m-2 and the GAARFB of -0.73 W m-2 in equation (2) yields GAARF + GAARFB = 1.72 − 0.73 = 0.99 W m-2 = GAARI. This, however, is more than twice as large as the ocean-diagnosed GAARI of 0.43 W m-2 ± 0.031 based on Levitus et al. (2012).
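The arithmetic in this comparison can be checked directly. A minimal sketch (all values as quoted in the post; variable names are mine):

```python
# Forcing-and-feedback side of equation (2), using the values above.
gaarf = 1.72                  # W m-2, IPCC net forcing since ~1950
lam   = -1.21                 # W m-2 K-1, net feedback (Wielicki et al. 2013)
dT    = 0.6                   # K, 1955-2010 surface warming
gaarfb = lam * dT             # about -0.73 W m-2

implied_gaari = gaarf + gaarfb
ocean_gaari = 0.43            # W m-2, diagnosed from ocean heat content

print(round(implied_gaari, 2))                 # 0.99
print(round(implied_gaari / ocean_gaari, 1))   # 2.3 -- more than twice as large
```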

Even the IPCC agrees that the radiative imbalance is smaller than the 0.99 W m-2 calculated above. They report a global average radiative imbalance of 0.59 W m-2 for 1971-2010 and 0.71 W m-2 for 1993-2010. Trenberth and Fasullo (2013) state that the imbalance is 0.5–1 W m−2 over the 2000s.

Rather than using the IPCC (Wielicki, 2013) GAARFB, we can use equation (2) to solve for the radiative feedbacks with the ocean heat data as a real-world constraint, i.e. GAARFB = GAARI – GAARF (3).

Inserting the ocean-diagnosed GAARI and the IPCC GAARF into (3), i.e. 0.43 W m-2 ± 0.031 W m-2 [GAARI] − 1.72 [1.13 to 3.33] W m-2 [GAARF], results in an estimate of GAARFB of -1.29 W m-2, with an uncertainty range from the IPCC and Levitus (2012) of -1.10 to -3.36 W m-2.
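Equation (3) with the central estimates gives the implied feedback directly; a minimal sketch (variable names are mine):

```python
# Equation (3): GAARFB = GAARI - GAARF, central estimates from the text.
gaari = 0.43   # W m-2, from ocean heat content (Levitus et al. 2012)
gaarf = 1.72   # W m-2, IPCC net forcing since ~1950

gaarfb = gaari - gaarf
print(round(gaarfb, 2))   # -1.29 W m-2, as in the text
```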

Thus, even assuming that the fraction of the global average radiative forcing change from 1750 to 1955 has already equilibrated through increasing surface temperatures, the global average radiative imbalance, GAARI, is significantly less than the sum of the global average radiative forcings and feedbacks – GAARF + GAARFB (the use of 1950 and 1955 as a time period should not introduce much added uncertainty).

Also, since there has been little if any temperature increase for a decade or more (nor, apparently little if any recent water vapor increase; Vonder Haar et al. 2012), the disparity between the imbalance and the forcings and feedbacks is even more stark. While including the uncertainty around each of the best estimates of the radiative forcings and feedbacks, and of the radiative imbalance, could still result in a claim that they are not out of agreement, the lack of proper closure of equation (1) in terms of the mean values that are available needs further explanation.

Thus as the next step, the uncertainties in each of the values in equation (2) need to be defined. The estimates need to be made for the current time (2014). The recognition and explanation of this apparent discrepancy between observed global warming and the radiative forcings and feedbacks needs a higher level of attention than it was given in the 2013 IPCC report.

In order to aid in the analyses of equation (2), the combined effects of the radiative forcings and feedbacks over specified time periods (e.g., decades) could be estimated by running the climate models with a set of realizations with and without specific radiative forcings (e.g., CO2).   One could also do assessments of each vertical profile in a global model at snapshots in time with the added forcings since the last snapshot to estimate the radiative forcing change.

Parabolic reflector


From Wikipedia, the free encyclopedia


Circular paraboloid

A parabolic (or paraboloid or paraboloidal) reflector (or dish or mirror) is a reflective surface used to collect or project energy such as light, sound, or radio waves. Its shape is part of a circular paraboloid, that is, the surface generated by a parabola revolving around its axis. The parabolic reflector transforms an incoming plane wave traveling along the axis into a spherical wave converging toward the focus. Conversely, a spherical wave generated by a point source placed in the focus is reflected into a plane wave propagating as a collimated beam along the axis.

Parabolic reflectors are used to collect energy from a distant source (for example sound waves or incoming star light) and bring it to a common focal point, thus avoiding the spherical aberration found in simpler spherical reflectors. Since the principles of reflection are reversible, parabolic reflectors can also be used to project energy from a source at its focus outward in a parallel beam,[1] as in devices such as spotlights and car headlights.

One of the world's largest solar parabolic dishes at the Ben-Gurion National Solar Energy Center in Israel

Theory

Strictly, the three-dimensional shape of the reflector is called a paraboloid. A parabola is the two-dimensional figure. (The distinction is like that between a sphere and a circle.) However, in informal language, the word parabola and its associated adjective parabolic are often used in place of paraboloid and paraboloidal.

If a parabola is positioned in Cartesian coordinates with its vertex at the origin and its axis of symmetry along the y-axis, so the parabola opens upward, its equation is 4fy = x², where f is its focal length. (See "Parabola#Equation in Cartesian coordinates".) Correspondingly, the dimensions of a symmetrical paraboloidal dish are related by the equation 4FD = R², where F is the focal length, D is the depth of the dish (measured along the axis of symmetry from the vertex to the plane of the rim), and R is the radius of the rim. All units must be the same. If two of these three quantities are known, this equation can be used to calculate the third.
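The relation 4FD = R² lets any one of the three dish dimensions be computed from the other two. A minimal sketch (the helper names and the numeric example are mine; any consistent unit works):

```python
# The dish relation 4FD = R^2 solved for each quantity in turn.

def focal_length(R, D):
    """F from rim radius R and dish depth D."""
    return R * R / (4.0 * D)

def depth(R, F):
    """D from rim radius R and focal length F."""
    return R * R / (4.0 * F)

def rim_radius(F, D):
    """R from focal length F and dish depth D."""
    return (4.0 * F * D) ** 0.5

# Example: a dish 1.0 m across the rim (R = 0.5 m) and 0.1 m deep
# has focal length 0.5**2 / (4 * 0.1) = 0.625 m.
print(focal_length(0.5, 0.1))   # 0.625
```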

A more complex calculation is needed to find the diameter of the dish measured along its surface. This is sometimes called the "linear diameter", and equals the diameter of a flat, circular sheet of material, usually metal, which is the right size to be cut and bent to make the dish. Two intermediate results are useful in the calculation: P = 2F (or the equivalent P = R²/(2D)) and Q = √(P² + R²), where F, D, and R are defined as above. The diameter of the dish, measured along the surface, is then given by RQ/P + P ln((R + Q)/P), where ln(x) means the natural logarithm of x, i.e. its logarithm to base e.
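The surface-diameter formula translates directly into code. A minimal sketch (symbols as in the text; the numeric example, chosen to be consistent with 4FD = R², is mine):

```python
import math

def linear_diameter(F, D, R):
    """Diameter of the dish measured along its surface -- the diameter
    of the flat circular sheet that is cut and bent to make the dish."""
    P = 2.0 * F                       # equivalently R**2 / (2 * D)
    Q = math.sqrt(P * P + R * R)
    return R * Q / P + P * math.log((R + Q) / P)

# For a shallow dish with F = 0.625 m, D = 0.1 m, R = 0.5 m,
# the surface diameter comes out slightly larger than the
# 1.0 m rim diameter, as expected.
print(round(linear_diameter(0.625, 0.1, 0.5), 4))   # ~1.0261
```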

The volume of the dish, the amount of liquid it could hold if the rim were horizontal and the vertex at the bottom (e.g. the capacity of a paraboloidal wok), is given by ½πR²D, where the symbols are defined as above. This can be compared with the formulae for the volumes of a cylinder (πR²D), a hemisphere (⅔πR²D, where D = R), and a cone (⅓πR²D). πR² is the aperture area of the dish, the area enclosed by the rim, which is proportional to the amount of sunlight the reflector dish can intercept.
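The comparison with the cylinder and the cone is worth making explicit: a paraboloid holds exactly half the enclosing cylinder and 1.5 times the cone of the same rim radius and depth. A minimal sketch (example values are mine):

```python
import math

def paraboloid_volume(R, D):
    """Liquid capacity of a paraboloidal dish with a horizontal rim."""
    return 0.5 * math.pi * R * R * D

R, D = 0.5, 0.1
cylinder = math.pi * R * R * D
cone = cylinder / 3.0
print(round(paraboloid_volume(R, D) / cylinder, 6))  # 0.5
print(round(paraboloid_volume(R, D) / cone, 6))      # 1.5
```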

Parallel rays coming in to a parabolic mirror are focused at a point F. The vertex is V, and the axis of symmetry passes through V and F.

The parabolic reflector functions due to the geometric properties of the paraboloidal shape: any incoming ray that is parallel to the axis of the dish will be reflected to a central point, or "focus". Because many types of energy can be reflected in this way, parabolic reflectors can be used to collect and concentrate energy entering the reflector at a particular angle. Similarly, energy radiating from the focus to the dish can be transmitted outward in a beam that is parallel to the axis of the dish.

In contrast with spherical reflectors, which suffer from a spherical aberration that becomes stronger as the ratio of the beam diameter to the focal distance becomes larger, parabolic reflectors can be made to accommodate beams of any width. However, if the incoming beam makes a non-zero angle with the axis (or if the emitting point source is not placed in the focus), parabolic reflectors suffer from an aberration called coma. This is primarily of interest in telescopes because most other applications do not require sharp resolution off the axis of the parabola.

The precision to which a parabolic dish must be made in order to focus energy well depends on the wavelength of the energy. If the dish is wrong by a quarter of a wavelength, then the reflected energy will be wrong by a half wavelength, which means that it will interfere destructively with energy that has been reflected properly from another part of the dish. To prevent this, the dish must be made correctly to within about 1/20 of a wavelength. The wavelength range of visible light is between about 400 and 700 nanometres (nm), so in order to focus all visible light well, a reflector must be correct to within about 20 nm. For comparison, the diameter of a human hair is usually about 50,000 nm, so the required accuracy for a reflector to focus visible light is about 1/2500 of the diameter of a hair.
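The figures in this rule of thumb are easy to check; a minimal sketch (values from the paragraph above, variable names are mine):

```python
# Surface-accuracy rule of thumb: about 1/20 of the shortest
# wavelength to be focused well (see the paragraph above).
shortest_visible = 400.0    # nm
tolerance = shortest_visible / 20.0
hair_diameter = 50_000.0    # nm, typical human hair

print(tolerance)                     # 20.0 nm
print(hair_diameter / tolerance)     # 2500.0 -- the factor quoted above
```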

Microwaves, such as are used for satellite-TV signals, have wavelengths of the order of ten millimetres, so dishes to focus these waves can be wrong by half a millimetre or so and still perform well.

Focus-balanced reflector

It is sometimes useful if the centre of mass of a reflector dish coincides with its focus. This allows it to be easily turned so it can be aimed at a moving source of light, such as the Sun in the sky, while its focus, where the target is located, is stationary. The dish is rotated around axes that pass through the focus and around which it is balanced. If the dish is symmetrical and made of uniform material of constant thickness, and if F represents the focal length of the paraboloid, this "focus-balanced" condition occurs if the depth of the dish, measured along the axis of the paraboloid from the vertex to the plane of the rim of the dish, is 1.8478 times F. The radius of the rim is 2.7187 F.[a] The angular radius of the rim as seen from the focal point is 72.68 degrees.
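The quoted proportions can be cross-checked against the dish relation 4FD = R²; a minimal sketch, taking F = 1 (variable names are mine):

```python
import math

F = 1.0
D = 1.8478 * F                  # depth of a focus-balanced dish
R = math.sqrt(4.0 * F * D)      # rim radius, from 4FD = R^2

print(round(R, 4))              # ~2.7187, matching the figure above

# Angular radius of the rim seen from the focus: the angle between
# the axis and the line from the focus (height F) to a rim point
# (radius R, height D).
angle_deg = math.degrees(math.atan2(R, D - F))
print(round(angle_deg, 2))      # ~72.68 degrees
```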

Scheffler reflector

The focus-balanced configuration (see above) requires the depth of the reflector dish to be greater than its focal length, so the focus is within the dish. This can lead to the focus being difficult to access. An alternative approach is exemplified by the Scheffler Reflector, named after its inventor, Wolfgang Scheffler. This is a paraboloidal mirror which is rotated about axes that pass through its centre of mass, but this does not coincide with the focus, which is outside the dish. If the reflector were a rigid paraboloid, the focus would move as the dish turns. To avoid this, the reflector is flexible, and is bent as it rotates so as to keep the focus stationary. Ideally, the reflector would be exactly paraboloidal at all times. In practice, this cannot be achieved exactly, so the Scheffler reflector is not suitable for purposes that require high accuracy. It is used in applications such as solar cooking, where sunlight has to be focused well enough to strike a cooking pot, but not to an exact point.[2]

Off-axis reflectors


Off-axis satellite dish. The vertex of the paraboloid is below the bottom edge of the dish. The curvature of the dish is greatest near the vertex. The axis, which is aimed at the satellite, passes through the vertex and the receiver module, which is at the focus.

A circular paraboloid is theoretically unlimited in size. Any practical reflector uses just a segment of it. Often, the segment includes the vertex of the paraboloid, where its curvature is greatest, and where the axis of symmetry intersects the paraboloid. However, if the reflector is used to focus incoming energy onto a receiver, the shadow of the receiver falls onto the vertex of the paraboloid, which is part of the reflector, so part of the reflector is wasted. This can be avoided by making the reflector from a segment of the paraboloid which is offset from the vertex and the axis of symmetry. For example, in the above diagram the reflector could be just the part of the paraboloid between the points P1 and P3. The receiver is still placed at the focus of the paraboloid, but it does not cast a shadow onto the reflector. The whole reflector receives energy, which is then focused onto the receiver. This is frequently done, for example, in satellite-TV receiving dishes, and also in some types of astronomical telescope (e.g., the Green Bank Telescope).

Accurate off-axis reflectors, for use in telescopes, can be made quite simply by using a rotating furnace, in which the container of molten glass is offset from the axis of rotation. To make less accurate ones, suitable as satellite dishes, the shape is designed by a computer, then multiple dishes are stamped out of sheet metal.

History

The principle of parabolic reflectors has been known since classical antiquity, when the mathematician Diocles described them in his book On Burning Mirrors and proved that they focus a parallel beam to a point.[3]
Archimedes in the third century BC studied paraboloids as part of his study of hydrostatic equilibrium,[4] and it has been claimed that he used reflectors to set the Roman fleet alight during the Siege of Syracuse.[5] This seems unlikely to be true, however, as the claim does not appear in sources before the 2nd century AD, and Diocles does not mention it in his book.[6] Parabolic mirrors were also studied by the physicist Ibn Sahl in the 10th century.[7] James Gregory, in his 1663 book Optica Promota (1663), pointed out that a reflecting telescope with a mirror that was parabolic would correct spherical aberration as well as the chromatic aberration seen in refracting telescopes.
The design he came up with bears his name: the "Gregorian telescope"; but according to his own confession, Gregory had no practical skill and he could find no optician capable of actually constructing one.[8] Isaac Newton knew about the properties of parabolic mirrors but chose a spherical shape for his Newtonian telescope mirror to simplify construction.[9] Lighthouses also commonly used parabolic mirrors to collimate a point of light from a lantern into a beam, before being replaced by more efficient Fresnel lenses in the 19th century. In 1888, Heinrich Hertz, a German physicist, constructed the world's first parabolic reflector antenna.[10]

Applications


Lighting the Olympic Flame

The most common modern applications of the parabolic reflector are in satellite dishes, reflecting telescopes, radio telescopes, parabolic microphones, solar cookers, and many lighting devices such as spotlights, car headlights, PAR lamps and LED housings.[11]

The Olympic Flame is traditionally lit at Olympia, Greece, using a parabolic reflector concentrating sunlight, and is then transported to the venue of the Games. Parabolic mirrors are one of many shapes for a burning-glass.

Parabolic reflectors are popular for use in creating optical illusions. These consist of two opposing parabolic mirrors, with an opening in the center of the top mirror. When an object is placed on the bottom mirror, the mirrors create a real image, which is a virtually identical copy of the original that appears in the opening. The quality of the image is dependent upon the precision of the optics. Some such illusions are manufactured to tolerances of millionths of an inch.

Antennas of the Atacama Large Millimeter Array on the Chajnantor Plateau.[12]

A parabolic reflector pointing upward can be formed by rotating a reflective liquid, like mercury, around a vertical axis. This makes the liquid mirror telescope possible. The same technique is used in rotating furnaces to make solid reflectors.

Parabolic reflectors are also a popular alternative for increasing wireless signal strength. Even with simple ones, users have reported 3 dB or more gains.[13][14]

Year On

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Year_On T...