
Saturday, December 2, 2017

Quantum fluctuation

From Wikipedia, the free encyclopedia
In quantum physics, a quantum fluctuation (or quantum vacuum fluctuation or vacuum fluctuation) is the temporary change in the amount of energy in a point in space,[1] as explained in Werner Heisenberg's uncertainty principle.

This allows the creation of particle-antiparticle pairs of virtual particles. The effects of these particles are measurable, for example, in the effective charge of the electron, different from its "naked" charge.
Quantum fluctuations may have been very important in the origin of the structure of the universe: according to the model of expansive inflation the ones that existed when inflation began were amplified and formed the seed of all current observed structure. Vacuum energy may also be responsible for the current accelerating expansion of the universe (cosmological constant).

According to one formulation of the principle, energy and time can be related by the relation[2]
\Delta E\,\Delta t \geq \frac{h}{4\pi}
In the modern view, energy is always conserved, but because the particle number operator does not commute with a field's Hamiltonian or energy operator, the field's lowest-energy or ground state, often called the vacuum state, is not, as one might expect from that name, a state with no particles, but rather a quantum superposition of particle number eigenstates with 0, 1, 2...etc. particles.

Quantum fluctuations of a field

A quantum fluctuation is the temporary appearance of energetic particles out of empty space, as allowed by the uncertainty principle. The uncertainty principle states that for a pair of conjugate variables such as position/momentum or energy/time, it is impossible to have a precisely determined value of each member of the pair at the same time. For example, a particle pair can pop out of the vacuum during a very short time interval.

An extension is applicable to the "uncertainty in time" and "uncertainty in energy" (including the rest mass energy mc^2). When the mass is very large like a macroscopic object, the uncertainties and thus the quantum effect become very small, and classical physics is applicable.
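As an illustrative estimate (not from the article): for a virtual electron-positron pair the "borrowed" energy is at least the pair's rest energy, \Delta E = 2 m_e c^2 \approx 1.6\times10^{-13}\ \mathrm{J}, so the relation above limits its lifetime to roughly

\Delta t \;\sim\; \frac{h}{4\pi\,\Delta E} \approx \frac{5.3\times10^{-35}\ \mathrm{J\,s}}{1.6\times10^{-13}\ \mathrm{J}} \approx 3\times10^{-22}\ \mathrm{s},

far too brief for the pair to be observed directly.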

In quantum field theory, fields undergo quantum fluctuations. A reasonably clear distinction can be made between quantum fluctuations and thermal fluctuations of a quantum field (at least for a free field; for interacting fields, renormalization substantially complicates matters). For the quantized Klein–Gordon field in the vacuum state, we can calculate the probability density that we would observe a configuration \varphi_t(x) at a time t in terms of its Fourier transform \tilde\varphi_t(k) to be
\rho_0[\varphi_t] = \exp{\left[-\frac{1}{\hbar}
        \int\frac{d^3k}{(2\pi)^3}
            \tilde\varphi_t^*(k)\sqrt{|k|^2+m^2}\;\tilde \varphi_t(k)\right]}.
In contrast, for the classical Klein–Gordon field at non-zero temperature, the Gibbs probability density that we would observe a configuration \varphi_t(x) at a time t is
\rho_E[\varphi_t] = \exp{[-H[\varphi_t]/k_\mathrm{B}T]}=\exp{\left[-\frac{1}{k_\mathrm{B}T} \int\frac{d^3k}{(2\pi)^3}
            \tilde\varphi_t^*(k){\scriptstyle\frac{1}{2}}(|k|^2+m^2)\;\tilde \varphi_t(k)\right]}.
The amplitude of quantum fluctuations is controlled by Planck's constant \hbar , just as the amplitude of thermal fluctuations is controlled by k_\mathrm{B}T, where k_{\mathrm {B} } is Boltzmann's constant. Note that the following three points are closely related:
  1. Planck's constant has units of action (joule-seconds) instead of units of energy (joules),
  2. the quantum kernel is  \sqrt{|k|^2+m^2} instead of  {\scriptstyle\frac{1}{2}}(|k|^2+m^2) (the quantum kernel is nonlocal from a classical heat kernel viewpoint, but it is local in the sense that it does not allow signals to be transmitted),[citation needed]
  3. the quantum vacuum state is Lorentz invariant (although not manifestly in the above), whereas the classical thermal state is not (the classical dynamics is Lorentz invariant, but the Gibbs probability density is not a Lorentz invariant initial condition).
We can construct a classical continuous random field that has the same probability density as the quantum vacuum state, so that the principal difference from quantum field theory is the measurement theory (measurement in quantum theory is different from measurement for a classical continuous random field, in that classical measurements are always mutually compatible — in quantum mechanical terms they always commute). Quantum effects that are consequences only of quantum fluctuations, not of subtleties of measurement incompatibility, can alternatively be modeled by classical continuous random fields.
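As a minimal numerical sketch of the comparison above (not from the article; natural units with hbar = c = k_B = 1, and the mass m and temperature T are illustrative choices): each Gaussian density assigns every Fourier mode a mean-square amplitude proportional to the inverse of its kernel, i.e. roughly hbar/sqrt(|k|^2 + m^2) in the vacuum state versus 2 k_B T/(|k|^2 + m^2) in the thermal state. The short script below tabulates the two, showing that thermal fluctuations dominate at small |k| (when the mode energy is well below 2T) while quantum fluctuations dominate at large |k|.

# A minimal sketch (not from the article): per-mode mean-square fluctuation
# implied by the two Gaussian densities, in natural units (hbar = c = kB = 1).
import numpy as np

m = 1.0      # field mass (illustrative assumption)
T = 0.3      # temperature of the classical thermal state (illustrative assumption)

k = np.linspace(0.0, 5.0, 11)          # magnitude of the wave vector |k|
omega = np.sqrt(k**2 + m**2)           # relativistic dispersion

var_quantum = 1.0 / omega              # proportional to hbar / sqrt(|k|^2 + m^2)
var_thermal = 2.0 * T / omega**2       # proportional to 2 kB T / (|k|^2 + m^2)

print(f"{'|k|':>6} {'quantum':>10} {'thermal':>10}")
for ki, q, t in zip(k, var_quantum, var_thermal):
    print(f"{ki:6.2f} {q:10.4f} {t:10.4f}")
# Thermal fluctuations dominate for omega << 2*T; quantum fluctuations
# dominate for omega >> 2*T, consistent with the kernels in the text.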

In the 1930s, Pascual Jordan realized that a star could have zero net energy, because its positive matter energy and its negative gravitational energy can cancel each other out. This led him to speculate about what would prevent a quantum transition from creating a new star. He pursued the idea because he was trying to figure out where matter might come from in an eternally existing universe.[3]

In December 1973, the British scientific journal Nature published an article by Edward P. Tryon titled "Is the Universe a Vacuum Fluctuation?" In this paper Tryon proposed that our universe may have originated as a quantum fluctuation of the vacuum.[3] Yet the idea of our universe coming from a quantum fluctuation or quantum process was not taken seriously until inflationary theory emerged and was able to explain how our universe could inflate from a tiny particle.[4]

Interpretations

The success of quantum fluctuation theories has given rise to metaphysical interpretations of the nature of reality and of the fluctuations' potential role in the origin and structure of the universe:
  • The fluctuations are a manifestation of the innate uncertainty on the quantum level[5]
  • Fluctuations of the fields in each element of our universe's spacetime could be coherent throughout the universe by mesoscopic quantum entanglement.
A fundamental particle arising out of its quantum field is always inescapably subject to this reality and is thus describable by an associated wave function.
The wave function of a quantum particle represents the reality of the innate quantum fluctuations at the core of the universe and bestows on the particle its counterintuitive quantum behavior.
In the double-slit experiment each particle makes an unpredictable choice between alternative possibilities, consistent with an interference pattern; it is the inherent fluctuations of the underlying quantum field that lead the electron to do so.[6]
Such an underlying immutable quantum field, by which quantum fluctuations are correlated on a universal scale, may explain the non-locality of quantum entanglement as a natural process[7]

Galaxy formation and evolution

From Wikipedia, the free encyclopedia

The study of galaxy formation and evolution is concerned with the processes that formed a heterogeneous universe from a homogeneous beginning, the formation of the first galaxies, the way galaxies change over time, and the processes that have generated the variety of structures observed in nearby galaxies.

Galaxy formation is hypothesized to occur, from structure formation theories, as a result of tiny quantum fluctuations in the aftermath of the Big Bang. The simplest model for this that is in general agreement with observed phenomena is the Λ-Cold Dark Matter cosmology; that is to say that clustering and merging is how galaxies gain in mass, and can also determine their shape and structure.

Commonly observed properties of galaxies

Hubble tuning fork diagram of galaxy morphology

Because of the inability to conduct experiments in outer space, the only way to “test” theories and models of galaxy evolution is to compare them with observations. Explanations for how galaxies formed and evolved must be able to predict the observed properties and types of galaxies.
Edwin Hubble created the first galaxy classification scheme known as the Hubble tuning-fork diagram. It partitioned galaxies into ellipticals, normal spirals, barred spirals (such as the Milky Way), and irregulars. These galaxy types exhibit the following properties which can be explained by current galaxy evolution theories:
  • Many of the properties of galaxies (including the galaxy color–magnitude diagram) indicate that there are fundamentally two types of galaxies. These groups divide into blue star-forming galaxies that are more like spiral types, and red non-star forming galaxies that are more like elliptical galaxies.
  • Spiral galaxies are quite thin, dense, and rotate relatively fast, while the stars in elliptical galaxies have randomly-oriented orbits.
  • The majority of mass in galaxies is made up of dark matter, a substance which is not directly observable, and might not interact through any means except gravity.
  • The majority of giant galaxies contain a supermassive black hole in their centers, ranging in mass from millions to billions of times the mass of our Sun. The black hole mass is tied to the host galaxy bulge or spheroid mass.
  • Metallicity has a positive correlation with the absolute magnitude (luminosity) of a galaxy.
There is a common misconception that Hubble believed incorrectly that the tuning fork diagram described an evolutionary sequence for galaxies, from elliptical galaxies through lenticulars to spiral galaxies. This is not the case; instead, the tuning fork diagram shows an evolution from simple to complex with no temporal connotations intended.[1] Astronomers now believe that disk galaxies likely formed first, then evolved into elliptical galaxies through galaxy mergers.

Formation of disk galaxies

The earliest stage in the evolution of galaxies is the formation. When a galaxy forms, it has a disk shape and is called a spiral galaxy due to spiral-like "arm" structures located on the disk. There are different theories on how these disk-like distributions of stars develop from a cloud of matter: however, at present, none of them exactly predicts the results of observation.

Top-down theories

Olin Eggen, Donald Lynden-Bell, and Allan Sandage[2] in 1962, proposed a theory that disk galaxies form through a monolithic collapse of a large gas cloud. The distribution of matter in the early universe was in clumps that consisted mostly of dark matter. These clumps interacted gravitationally, putting tidal torques on each other that acted to give them some angular momentum. As the baryonic matter cooled, it dissipated some energy and contracted toward the center. With angular momentum conserved, the matter near the center speeds up its rotation. Then, like a spinning ball of pizza dough, the matter forms into a tight disk. Once the disk cools, the gas is not gravitationally stable, so it cannot remain a single homogeneous cloud; it breaks up, and these smaller clouds of gas form stars. Since the dark matter does not dissipate, as it interacts only gravitationally, it remains distributed outside the disk in what is known as the dark halo. Observations show that there are stars located outside the disk, which does not quite fit the "pizza dough" model. This monolithic-collapse picture, known as a top-down formation scenario, is quite simple yet no longer widely accepted; Leonard Searle and Robert Zinn [3] were the first to propose instead that galaxies form by the coalescence of smaller progenitors.

Bottom-up theories

More recent theories include the clustering of dark matter halos in the bottom-up process. Instead of large gas clouds collapsing to form a galaxy in which the gas breaks up into smaller clouds, it is proposed that matter started out in these “smaller” clumps (mass on the order of globular clusters), and then many of these clumps merged to form galaxies,[4] which then were drawn by gravitation to form galaxy clusters. This still results in disk-like distributions of baryonic matter with dark matter forming the halo for all the same reasons as in the top-down theory. Models using this sort of process predict more small galaxies than large ones, which matches observations.

Astronomers do not currently know what process stops the contraction. In fact, theories of disk galaxy formation are not successful at producing the rotation speed and size of disk galaxies. It has been suggested that the radiation from bright newly formed stars, or from an active galactic nucleus, can slow the contraction of a forming disk. It has also been suggested that the dark matter halo can pull on the galaxy, thus stopping disk contraction.[5]

The Lambda-CDM model is a cosmological model that explains the formation of the universe after the Big Bang. It is a relatively simple model that predicts many properties observed in the universe, including the relative frequency of different galaxy types; however, it underestimates the number of thin disk galaxies in the universe.[6] The reason is that these galaxy formation models predict a large number of mergers. If disk galaxies merge with another galaxy of comparable mass (at least 15 percent of its mass) the merger will likely destroy, or at a minimum greatly disrupt the disk, and the resulting galaxy is not expected to be a disk galaxy (see next section). While this remains an unsolved problem for astronomers, it does not necessarily mean that the Lambda-CDM model is completely wrong, but rather that it requires further refinement to accurately reproduce the population of galaxies in the universe.

Galaxy mergers and the formation of elliptical galaxies

Artist image of a firestorm of star birth deep inside core of young, growing elliptical galaxy.
NGC 4676 (Mice Galaxies) is an example of a present merger.
Antennae Galaxies are a pair of colliding galaxies - the bright, blue knots are young stars that have recently ignited as a result of the merger.
ESO 325-G004, a typical elliptical galaxy.

Elliptical galaxies (such as IC 1101) are among the largest galaxies known thus far. Their stars are on orbits that are randomly oriented within the galaxy (i.e. they are not rotating like disk galaxies). A distinguishing feature of elliptical galaxies is that the velocity of the stars does not necessarily contribute to flattening of the galaxy, as it does in spiral galaxies.[7] Elliptical galaxies have central supermassive black holes, and the masses of these black holes correlate with the galaxy’s mass.
Elliptical galaxies have two main stages of evolution. The first is due to the supermassive black hole growing by accreting cooling gas. The second stage is marked by the black hole stabilizing by suppressing gas cooling, thus leaving the elliptical galaxy in a stable state.[8] The mass of the black hole is also correlated to a property called sigma which is the dispersion of the velocities of stars in their orbits. This relationship, known as the M-sigma relation, was discovered in 2000.[9] Elliptical galaxies mostly lack disks, although some bulges of disk galaxies resemble elliptical galaxies. Elliptical galaxies are more likely found in crowded regions of the universe (such as galaxy clusters).

Astronomers now see elliptical galaxies as some of the most evolved systems in the universe. It is widely accepted that the main driving force for the evolution of elliptical galaxies is mergers of smaller galaxies. Many galaxies in the universe are gravitationally bound to other galaxies, which means that they will never escape their mutual pull. If the galaxies are of similar size, the resultant galaxy will appear similar to neither of the progenitors,[10] but will instead be elliptical. There are many types of galaxy mergers, which do not necessarily result in elliptical galaxies, but result in a structural change. For example, a minor merger event is thought to be occurring between the Milky Way and the Magellanic Clouds.

Mergers between such large galaxies are regarded as violent, but because of the vast distances between stars, there are essentially no stellar collisions. However, the frictional interaction of the gas between the two galaxies can cause gravitational shock waves, which are capable of forming new stars in the new elliptical galaxy.[11] By sequencing several images of different galactic collisions, one can observe the timeline of two spiral galaxies merging into a single elliptical galaxy.[12]

In the Local Group, the Milky Way and the Andromeda Galaxy are gravitationally bound, and currently approaching each other at high speed. Simulations show that the Milky Way and Andromeda are on a collision course, and are expected to collide in less than five billion years. During this collision, it is expected that the Sun and the rest of the Solar System will be ejected from its current path around the Milky Way. The remnant could be a giant elliptical galaxy.[13]

Galaxy quenching

Star formation in what are now "dead" galaxies sputtered out billions of years ago.[14]

One observation (see above) that must be explained by a successful theory of galaxy evolution is the existence of two different populations of galaxies on the galaxy color-magnitude diagram. Most galaxies tend to fall into two separate locations on this diagram: a "red sequence" and a "blue cloud". Red sequence galaxies are generally non-star-forming elliptical galaxies with little gas and dust, while blue cloud galaxies tend to be dusty star-forming spiral galaxies.[15][16]

As described in previous sections, galaxies tend to evolve from spiral to elliptical structure via mergers. However, the current rate of galaxy mergers does not explain how all galaxies move from the "blue cloud" to the "red sequence". It also does not explain how star formation ceases in galaxies. Theories of galaxy evolution must therefore be able to explain how star formation turns off in galaxies. This phenomenon is called galaxy "quenching".[17]

Stars form out of cold gas (see also the Kennicutt-Schmidt law), so a galaxy is quenched when it has no more cold gas. However, it is thought that quenching occurs relatively quickly (within 1 billion years), which is much shorter than the time it would take for a galaxy to simply use up its reservoir of cold gas.[18][19] Galaxy evolution models explain this by hypothesizing other physical mechanisms that remove or shut off the supply of cold gas in a galaxy. These mechanisms can be broadly classified into two categories: (1) preventive feedback mechanisms that stop cold gas from entering a galaxy or stop it from producing stars, and (2) ejective feedback mechanisms that remove gas so that it cannot form stars.[20]
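To make the timescale argument concrete, here is a back-of-the-envelope sketch (the gas mass and star-formation rate below are illustrative assumptions, not values from the article): dividing a galaxy's cold-gas reservoir by its star-formation rate gives a depletion time of several billion years, much longer than the ~1-billion-year quenching timescale quoted above, which is why additional gas-removal or gas-heating mechanisms are invoked.

# Back-of-the-envelope: gas depletion time vs. quenching time.
# The gas mass and star-formation rate are illustrative assumptions.
cold_gas_mass_msun = 5e9        # cold gas reservoir, in solar masses (assumed)
star_formation_rate = 1.0       # star formation rate, solar masses per year (assumed)
quenching_time_yr = 1e9         # quenching timescale quoted in the text, ~1 Gyr

depletion_time_yr = cold_gas_mass_msun / star_formation_rate
print(f"Gas depletion time: {depletion_time_yr/1e9:.1f} Gyr")
print(f"Quenching time:     {quenching_time_yr/1e9:.1f} Gyr")
# Quenching is much faster than simple gas exhaustion, so something must either
# remove the gas (ejective feedback) or keep it from forming stars (preventive feedback).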

One theorized preventive mechanism called “strangulation” keeps cold gas from entering the galaxy. Strangulation is likely the main mechanism for quenching star formation in nearby low-mass galaxies.[21] The exact physical explanation for strangulation is still unknown, but it may have to do with a galaxy’s interactions with other galaxies. As a galaxy falls into a galaxy cluster, gravitational interactions with other galaxies can strangle it by preventing it from accreting more gas.[22] For galaxies with massive dark matter halos, another preventive mechanism called “virial shock heating” may also prevent gas from becoming cool enough to form stars.[19]

Ejective processes, which expel cold gas from galaxies, may explain how more massive galaxies are quenched.[23] One ejective mechanism is caused by supermassive black holes found in the centers of galaxies. Simulations have shown that gas accreting onto supermassive black holes in galactic centers produces high-energy jets; the released energy can expel enough cold gas to quench star formation.[24]

Our own Milky Way and the nearby Andromeda Galaxy currently appear to be undergoing the quenching transition from star-forming blue galaxies to passive red galaxies.[25]


Quasar

From Wikipedia, the free encyclopedia
 
Artist's rendering of the accretion disk in ULAS J1120+0641, a very distant quasar powered by a black hole with a mass two billion times that of the Sun.[1] Credit: ESO/M. Kornmesser

A quasar (/ˈkweɪzɑːr/) (also quasi-stellar object or QSO) is an active galactic nucleus of very high luminosity. A quasar consists of a supermassive black hole surrounded by an orbiting accretion disk of gas. As gas in the accretion disk falls toward the black hole, energy is released in the form of electromagnetic radiation. Quasars emit energy across the electromagnetic spectrum and can be observed at radio, infrared, visible, ultraviolet, and X-ray wavelengths. The most powerful quasars have luminosities exceeding 10^41 W, thousands of times greater than the luminosity of a large galaxy such as the Milky Way.[2]

The term "quasar" originated as a contraction of "quasi-stellar radio source", because quasars were first identified as sources of radio-wave emission, and in photographic images at visible wavelengths they resembled point-like stars. High-resolution images of quasars, particularly from the Hubble Space Telescope, have demonstrated that quasars occur in the centers of galaxies, and that some quasar host galaxies are strongly interacting or merging galaxies.[3]

Quasars are found over a very broad range of distances (corresponding to redshifts ranging from z < 0.1 for the nearest quasars to z > 7 for the most distant known quasars), and quasar discovery surveys have demonstrated that quasar activity was more common in the distant past. The peak epoch of quasar activity in the Universe corresponds to redshifts around 2, or approximately 10 billion years ago.[4] As of 2011, the most distant known quasar is at redshift z=7.085; light observed from this quasar was emitted when the Universe was only 770 million years old.[5]

Overview

Because quasars are distant objects, any light which reaches the Earth is redshifted due to the metric expansion of space.[6] Quasars inhabit the very center of active, young galaxies, and are among the most luminous, powerful, and energetic objects known in the universe, emitting up to a thousand times the energy output of the Milky Way, which contains 200–400 billion stars. This radiation is emitted across the electromagnetic spectrum, almost uniformly, from X-rays to the far-infrared with a peak in the ultraviolet-optical bands, with some quasars also being strong sources of radio emission and of gamma-rays.
Hubble images of quasar 3C 273. At right, a coronagraph is used to block the quasar's light, making it easier to detect the surrounding host galaxy.
Quasar QSO-160913+653228 is so distant its light has taken nine billion years to reach the telescope that took this photo, two thirds of the time that has elapsed since the Big Bang.[7]

In early optical images, quasars appeared as point sources, indistinguishable from stars, except for their peculiar spectra. With infrared telescopes and the Hubble Space Telescope, the "host galaxies" surrounding the quasars have been detected in some cases.[8] These galaxies are normally too dim to be seen against the glare of the quasar, except with special techniques. Most quasars, with the exception of 3C 273 whose average apparent magnitude is 12.9, cannot be seen with small telescopes.

The luminosity of some quasars changes rapidly in the optical range and even more rapidly in the X-ray range. Because these changes occur very rapidly they define an upper limit on the volume of a quasar; quasars are not much larger than the Solar System.[9] This implies an extremely high power density.[10] The mechanism of brightness changes probably involves relativistic beaming of astrophysical jets pointed nearly directly toward Earth. The highest redshift quasar known (as of June 2011) is ULAS J1120+0641, with a redshift of 7.085, which corresponds to a comoving distance of approximately 29 billion light-years from Earth (see more discussion of how cosmological distances can be greater than the light-travel time at metric expansion of space).

Quasars are believed to be powered by accretion of material into supermassive black holes in the nuclei of distant galaxies, making these luminous versions of the general class of objects known as active galaxies. Since light cannot escape the black holes, the escaping energy is actually generated outside the event horizon by gravitational stresses and immense friction on the incoming material.[11] Central masses of 10^5 to 10^9 solar masses have been measured in quasars by using reverberation mapping. Several dozen nearby large galaxies, with no sign of a quasar nucleus, have been shown to contain a similar central black hole in their nuclei, so it is thought that all large galaxies have one, but only a small fraction are active (with enough accretion to power radiation), and it is the activity of these black holes that is seen as quasars. The matter accreting onto the black hole is unlikely to fall directly in, but will have some angular momentum around the black hole that will cause the matter to collect into an accretion disc. Quasars may also be ignited or re-ignited when normal galaxies merge and the black hole is infused with a fresh source of matter. In fact, it has been suggested that a quasar could form as the Andromeda Galaxy collides with our own Milky Way galaxy in approximately 3–5 billion years.[11][12][13]

Properties

The Chandra X-ray image is of the quasar PKS 1127-145, a highly luminous source of X-rays and visible light about 10 billion light years from Earth. An enormous X-ray jet extends at least a million light years from the quasar. Image is 60 arcsec on a side. RA 11h 30m 7.10s Dec -14° 49' 27" in Crater. Observation date: May 28, 2000. Instrument: ACIS.

More than 200,000 quasars are known, most from the Sloan Digital Sky Survey. All observed quasar spectra have redshifts between 0.056 and 7.085. Applying Hubble's law to these redshifts, it can be shown that they are between 600 million[14] and 28.85 billion light-years away (in terms of comoving distance). Because of the great distances to the farthest quasars and the finite velocity of light, they and their surrounding space appear as they existed in the very early universe.

The power of quasars originates from supermassive black holes that are believed to exist at the core of all galaxies. The Doppler shifts of stars near the cores of galaxies indicate that they are rotating around tremendous masses with very steep gravity gradients, suggesting black holes.

Although quasars appear faint when viewed from Earth, they are visible from extreme distances, being the most luminous objects in the known universe. The brightest quasar in the sky is 3C 273 in the constellation of Virgo. It has an average apparent magnitude of 12.8 (bright enough to be seen through a medium-size amateur telescope), but it has an absolute magnitude of −26.7.[15] From a distance of about 33 light-years, this object would shine in the sky about as brightly as our sun. This quasar's luminosity is, therefore, about 4 trillion (4 × 10^12) times that of the Sun, or about 100 times that of the total light of giant galaxies like the Milky Way.[15] This assumes the quasar is radiating energy in all directions, but the active galactic nucleus is believed to be radiating preferentially in the direction of its jet. In a universe containing hundreds of billions of galaxies, most of which had active nuclei billions of years ago (and are seen today only as they were then), it is statistically certain that thousands of energy jets should be pointed toward the Earth, some more directly than others. In many cases it is likely that the brighter the quasar, the more directly its jet is aimed at the Earth.
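The luminosity figure can be checked directly from the magnitudes quoted above; a quick sketch (it uses the Sun's absolute visual magnitude of about +4.8, a standard value that is not given in the article):

import math

# Check the quoted luminosity of 3C 273 from its absolute magnitude.
M_quasar = -26.7       # absolute magnitude of 3C 273 (from the text)
M_sun = 4.83           # absolute visual magnitude of the Sun (standard value)

# Each 5 magnitudes corresponds to a factor of 100 in luminosity.
luminosity_ratio = 10 ** (0.4 * (M_sun - M_quasar))
print(f"L(3C 273) / L(Sun) ~ {luminosity_ratio:.2e}")    # ~ 4e12, as stated

# Apparent brightness if placed about 33 light-years (~10 parsecs) away:
d_pc = 33 / 3.26                                         # light-years to parsecs
m_apparent = M_quasar + 5 * math.log10(d_pc / 10.0)
print(f"Apparent magnitude at 33 ly: {m_apparent:.1f}")  # ~ -26.7, about as bright as the Sun in our sky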

The hyperluminous quasar APM 08279+5255 was, when discovered in 1998, given an absolute magnitude of −32.2. High resolution imaging with the Hubble Space Telescope and the 10 m Keck Telescope revealed that this system is gravitationally lensed. A study of the gravitational lensing of this system suggests that the light emitted has been magnified by a factor of ~10. It is still substantially more luminous than nearby quasars such as 3C 273.

Quasars were much more common in the early universe than they are today. This discovery by Maarten Schmidt in 1967 was early strong evidence against the Steady State cosmology of Fred Hoyle, and in favor of the Big Bang cosmology. Quasars show the locations where massive black holes are growing rapidly (via accretion). These black holes grow in step with the mass of stars in their host galaxy in a way not understood at present. One idea is that jets, radiation, and winds created by the quasars shut down the formation of new stars in the host galaxy, a process called 'feedback'. The jets that produce strong radio emission in some quasars at the centers of clusters of galaxies are known to have enough power to prevent the hot gas in those clusters from cooling and falling onto the central galaxy.

Quasars' luminosities are variable, with time scales that range from months to hours. This means that quasars generate and emit their energy from a very small region, since each part of the quasar would have to be in contact with other parts on such a time scale as to allow the coordination of the luminosity variations. This would mean that a quasar varying on a time scale of a few weeks cannot be larger than a few light-weeks across. The emission of large amounts of power from a small region requires a power source far more efficient than the nuclear fusion that powers stars. The release of gravitational energy[16] by matter falling towards a massive black hole is the only process known that can produce such high power continuously. Stellar explosions – supernovas and gamma-ray bursts – can do likewise, but only for a few weeks. Black holes were considered too exotic by some astronomers in the 1960s. They also suggested that the redshifts arose from some other (unknown) process, so that the quasars were not really so distant as the Hubble law implied. This "redshift controversy" lasted for many years. Many lines of evidence (optical viewing of host galaxies, finding 'intervening' absorption lines, gravitational lensing) now demonstrate that the quasar redshifts are due to the Hubble expansion, and quasars are in fact as powerful as first thought.[17]
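The causality argument above is easy to quantify; a minimal sketch (the variability timescales come from the text, while the Solar System scale is taken, as an assumption for comparison, to be roughly the radius of Neptune's orbit):

# Light-travel-time size limits from variability.
c_km_s = 299_792.458                   # speed of light, km/s
hour = 3600.0
week = 7 * 24 * hour

light_hour_km = c_km_s * hour
light_week_km = c_km_s * week
neptune_orbit_radius_km = 4.5e9        # rough Solar System scale (assumed for comparison)

print(f"1 light-hour  = {light_hour_km:.2e} km")
print(f"1 light-week  = {light_week_km:.2e} km")
print(f"Neptune orbit = {neptune_orbit_radius_km:.2e} km "
      f"(~{neptune_orbit_radius_km/light_hour_km:.1f} light-hours)")
# X-ray variability on a timescale of hours limits the emitting region to a few
# light-hours, i.e. roughly Solar System size; week-scale optical variability
# limits it to a few light-weeks across.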
Gravitationally lensed quasar HE 1104-1805.[18]
Animation shows the alignments between the spin axes of quasars and the large-scale structures that they inhabit.

Quasars have all the properties of other active galaxies such as Seyfert galaxies, but are more powerful: their radiation is partially 'nonthermal' (i.e., not due to black body radiation), and approximately 10 percent are observed to also have jets and lobes like those of radio galaxies that also carry significant (but poorly understood) amounts of energy in the form of particles moving at relativistic speeds. Extremely high energies might be explained by several mechanisms (see Fermi acceleration and Centrifugal mechanism of acceleration). Quasars can be detected over the entire observable electromagnetic spectrum including radio, infrared, visible light, ultraviolet, X-ray and even gamma rays. Most quasars are brightest in their rest-frame near-ultraviolet, at the 121.6 nm Lyman-alpha emission line of hydrogen, but due to the tremendous redshifts of these sources, that peak luminosity has been observed as far to the red as 900.0 nm, in the near infrared. A minority of quasars show strong radio emission, which is generated by jets of matter moving close to the speed of light. When viewed nearly along the jet axis, these appear as blazars and often have regions that seem to move away from the center faster than the speed of light (superluminal expansion). This is an optical illusion due to the properties of special relativity.

Quasar redshifts are measured from the strong spectral lines that dominate their visible and ultraviolet spectra. These lines are brighter than the continuous spectrum, so they are called 'emission' lines. They have widths of several percent of the speed of light. These widths are due to Doppler shifts caused by the high speeds of the gas emitting the lines. Fast motions strongly indicate a large mass. Emission lines of hydrogen (mainly of the Lyman series and Balmer series), helium, carbon, magnesium, iron and oxygen are the brightest lines. The atoms emitting these lines range from neutral to highly ionized, that is, stripped of many of their electrons. This wide range of ionization shows that the gas is highly irradiated by the quasar itself, not merely hot, and not irradiated by stars, which cannot produce such a wide range of ionization.

Iron quasars show strong emission lines resulting from low ionization iron (FeII), such as IRAS 18508-7815.

Emission generation

This view, taken with infrared light, is a false-color image of a quasar-starburst tandem with the most luminous starburst ever seen in such a combination.

Since quasars exhibit properties common to all active galaxies, the emission from quasars can be readily compared to those of smaller active galaxies powered by smaller supermassive black holes. To create a luminosity of 10^40 watts (the typical brightness of a quasar), a super-massive black hole would have to consume the material equivalent of 10 stars per year. The brightest known quasars devour 1000 solar masses of material every year. The largest known is estimated to consume matter equivalent to 600 Earths per minute. Quasar luminosities can vary considerably over time, depending on their surroundings. Since it is difficult to fuel quasars for many billions of years, after a quasar finishes accreting the surrounding gas and dust, it becomes an ordinary galaxy.
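The "10 stars per year" figure follows from the radiative efficiency of accretion (roughly 10%, as discussed later in the article); a rough sketch of the arithmetic, assuming that efficiency:

# Rough check: accretion rate needed for a luminosity of ~1e40 W, assuming
# ~10% of the rest-mass energy of infalling matter is radiated away.
c = 2.998e8                  # speed of light, m/s
m_sun = 1.989e30             # solar mass, kg
year = 3.156e7               # seconds per year
efficiency = 0.1             # assumed radiative efficiency of the accretion disk

luminosity = 1e40            # W, typical quasar brightness from the text
mass_rate = luminosity / (efficiency * c**2)           # kg/s
mass_rate_msun_per_year = mass_rate * year / m_sun
print(f"Required accretion rate: ~{mass_rate_msun_per_year:.0f} solar masses per year")
# Comes out to roughly 20 solar masses per year at 10% efficiency, the same order
# of magnitude as the "10 stars per year" quoted in the text; the exact figure
# depends on the assumed efficiency.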
Spectrum from quasar HE0940-1050 after it has travelled through intergalactic medium.

Quasars also provide some clues as to the end of the Big Bang's reionization. The oldest known quasars (redshift ≥ 6) display a Gunn-Peterson trough and have absorption regions in front of them indicating that the intergalactic medium at that time was neutral gas. More recent quasars show no absorption region but rather their spectra contain a spiky area known as the Lyman-alpha forest; this indicates that the intergalactic medium has undergone reionization into plasma, and that neutral gas exists only in small clouds.

Quasars show evidence of elements heavier than helium, indicating that galaxies underwent a massive phase of star formation, creating population III stars between the time of the Big Bang and the first observed quasars. Light from these stars may have been observed in 2005 using NASA's Spitzer Space Telescope,[19] although this observation remains to be confirmed.

Like all (unobscured) active galaxies, quasars can be strong X-ray sources. Radio-loud quasars can also produce X-rays and gamma rays by inverse Compton scattering of lower-energy photons by the radio-emitting electrons in the jet.[20]

History of observation

Picture shows a cosmic mirage known as the Einstein Cross. Four apparent images are actually from the same quasar.

The first quasars (3C 48 and 3C 273) were discovered in the late 1950s, as radio sources in all-sky radio surveys.[21][22][23][24] They were first noted as radio sources with no corresponding visible object. Using small telescopes and the Lovell Telescope as an interferometer, they were shown to have a very small angular size.[25] Hundreds of these objects were recorded by 1960 and published in the Third Cambridge Catalogue as astronomers scanned the skies for their optical counterparts. In 1963, a definite identification of the radio source 3C 48 with an optical object was published by Allan Sandage and Thomas A. Matthews. Astronomers had detected what appeared to be a faint blue star at the location of the radio source and obtained its spectrum. Containing many unknown broad emission lines, the anomalous spectrum defied interpretation — a claim by John Bolton of a large redshift was not generally accepted.

In 1962 a breakthrough was achieved. Another radio source, 3C 273, was predicted to undergo five occultations by the Moon. Measurements taken by Cyril Hazard and John Bolton during one of the occultations using the Parkes Radio Telescope allowed Maarten Schmidt to optically identify the object and obtain an optical spectrum using the 200-inch Hale Telescope on Mount Palomar. This spectrum revealed the same strange emission lines. Schmidt realized that these were actually spectral lines of hydrogen redshifted at the rate of 15.8 percent. This discovery showed that 3C 273 was receding at a rate of 47,000 km/s.[26] This discovery revolutionized quasar observation and allowed other astronomers to find redshifts from the emission lines from other radio sources. As predicted earlier by Bolton, 3C 48 was found to have a redshift of 37% of the speed of light.
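The velocity figures can be reproduced from the redshifts; a small sketch using both the simple low-redshift approximation v ≈ cz and the special-relativistic Doppler formula (the latter is not mentioned in the article, and matters more for the larger redshift of 3C 48, z ≈ 0.367, which corresponds to the "37%" figure above):

c_km_s = 299_792.458   # speed of light, km/s

def v_naive(z):
    # Low-redshift approximation: v ~ c * z
    return c_km_s * z

def v_relativistic(z):
    # Special-relativistic Doppler formula
    return c_km_s * ((1 + z)**2 - 1) / ((1 + z)**2 + 1)

for name, z in [("3C 273", 0.158), ("3C 48", 0.367)]:
    print(f"{name}: z = {z}, cz = {v_naive(z):.0f} km/s, "
          f"relativistic = {v_relativistic(z):.0f} km/s")
# For 3C 273, cz comes out near 47,000 km/s, matching the value in the text.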

The term "quasar" was coined by Chinese-born U.S. astrophysicist Hong-Yee Chiu in May 1964, in Physics Today, to describe these puzzling objects:
So far, the clumsily long name 'quasi-stellar radio sources' is used to describe these objects. Because the nature of these objects is entirely unknown, it is hard to prepare a short, appropriate nomenclature for them so that their essential properties are obvious from their name. For convenience, the abbreviated form 'quasar' will be used throughout this paper.
Cloud of gas around the distant quasar SDSS J102009.99+104002.7, taken by MUSE.[27]

Later it was found that not all quasars have strong radio emission; in fact only about 10% are "radio-loud". Hence the name 'QSO' (quasi-stellar object) is used (in addition to "quasar") to refer to these objects, including the 'radio-loud' and the 'radio-quiet' classes. The discovery of the quasar had large implications for the field of astronomy in the 1960s, including drawing physics and astronomy closer together.[28]

One great topic of debate during the 1960s was whether quasars were nearby objects or distant objects as implied by their redshift. It was suggested, for example, that the redshift of quasars was not due to the expansion of space but rather to light escaping a deep gravitational well. However a star of sufficient mass to form such a well would be unstable and in excess of the Hayashi limit.[29] Quasars also show forbidden spectral emission lines which were previously only seen in hot gaseous nebulae of low density, which would be too diffuse to both generate the observed power and fit within a deep gravitational well.[30] There were also serious concerns regarding the idea of cosmologically distant quasars. One strong argument against them was that they implied energies that were far in excess of known energy conversion processes, including nuclear fusion. At this time, there were some suggestions that quasars were made of some hitherto unknown form of stable antimatter and that this might account for their brightness.[citation needed] Others speculated that quasars were a white hole end of a wormhole.[31][32] However, when accretion disc energy-production mechanisms were successfully modeled in the 1970s, the argument that quasars were too luminous became moot and today the cosmological distance of quasars is accepted by almost all researchers.

In 1979 the gravitational lens effect predicted by Einstein's General Theory of Relativity was confirmed observationally for the first time with images of the double quasar 0957+561.[33]

In the 1980s, unified models were developed in which quasars were classified as a particular kind of active galaxy, and a consensus emerged that in many cases it is simply the viewing angle that distinguishes them from other classes, such as blazars and radio galaxies.[34] The huge luminosity of quasars results from the accretion discs of central supermassive black holes, which can convert on the order of 10% of the mass of an object into energy as compared to 0.7% for the p-p chain nuclear fusion process that dominates the energy production in Sun-like stars.
Bright halos around 18 distant quasars.[35]

This mechanism also explains why quasars were more common in the early universe, as this energy production ends when the supermassive black hole consumes all of the gas and dust near it. This means that it is possible that most galaxies, including the Milky Way, have gone through an active stage, appearing as a quasar or some other class of active galaxy that depended on the black hole mass and the accretion rate, and are now quiescent because they lack a supply of matter to feed into their central black holes to generate radiation.

Role in celestial reference systems

The energetic radiation of the quasar makes dark galaxies glow, helping astronomers to understand the obscure early stages of galaxy formation.[36]

Because quasars are extremely distant, bright, and small in apparent size, they are useful reference points in establishing a measurement grid on the sky.[37] The International Celestial Reference System (ICRS) is based on hundreds of extra-galactic radio sources, mostly quasars, distributed around the entire sky. Because they are so distant, they are apparently stationary to our current technology, yet their positions can be measured with the utmost accuracy by Very Long Baseline Interferometry (VLBI). The positions of most are known to 0.001 arcsecond or better, which is orders of magnitude more precise than the best optical measurements.

Multiple quasars

A multiple-image quasar is a quasar whose light undergoes gravitational lensing, resulting in double, triple or quadruple images of the same quasar. The first such gravitational lens to be discovered was the double-imaged quasar Q0957+561 (or Twin Quasar) in 1979.[38] A grouping of two or more quasars can result from a chance alignment, physical proximity, actual close physical interaction, or effects of gravity bending the light of a single quasar into two or more images.

As quasars are rare objects, the probability of three or more separate quasars being found near the same location is very low. The first true triple quasar was found in 2007 by observations at the W. M. Keck Observatory on Mauna Kea, Hawaii.[39] LBQS 1429-008 (or QQQ J1432−0106) was first observed in 1989 and was found to be a double quasar, itself a rare occurrence. When astronomers discovered the third member, they confirmed that the sources were separate and not the result of gravitational lensing. This triple quasar has a redshift of z = 2.076, which is equivalent to 10.5 billion light years.[40] The components are separated by an estimated 30–50 kpc, which is typical of interacting galaxies.[41] An example of a triple quasar that is formed by lensing is PG1115 +08.[42]
Quasars in interacting galaxies.[43]

In 2013, the second true triple quasar, QQQ J1519+0627, was found, with redshift z = 1.51 (approx 9 billion light years), by an international team of astronomers led by Farina of the University of Insubria; the whole system is well accommodated within 25′′ (i.e., 200 kpc in projected distance). The team accessed data from observations collected at the La Silla Observatory with the New Technology Telescope (NTT) of the European Southern Observatory (ESO) and at the Calar Alto Observatory with the 3.5m telescope of the Centro Astronómico Hispano Alemán (CAHA).[44][45]
The first quadruple quasar was discovered in 2015.[46]

When two quasars are so nearly in the same direction as seen from Earth that they appear to be a single quasar but can be resolved with telescopes, they are referred to as a "double quasar", such as the Twin Quasar.[47] These are two different quasars, and not the same quasar that is gravitationally lensed. This configuration is similar to the optical double star. Two quasars, a "quasar pair", may be closely related in time and space, and be gravitationally bound to one another. These may take the form of two quasars in the same galaxy cluster. This configuration is similar to two prominent stars in a star cluster. A "binary quasar" may be closely linked gravitationally and form a pair of interacting galaxies. This configuration is similar to that of a binary star system.

How Long Does CO2 Stay in the Atmosphere, and What are the Consequences?


A major question concerning how severe, and how long-lasting, human-induced climate change might be is the lifetime of carbon dioxide (CO2) in the atmosphere.  Worst-case projections of climate change often invoke a lifetime of centuries or more, which makes sense, for such a lifetime must lead to very high accumulations of the gas for very long periods of time.  But is this invocation correct?  Fortunately, we already possess sufficient data, of sufficient quality, to answer the question at least reasonably well, and to test alternative answers to it.

But first, we must answer a more basic question:  what is meant by lifetime?  Thinking about it, a CO2 molecule might spend many millennia in the atmosphere, even longer in fact, or it might last only a few seconds before being taken back into one of the several surface sinks on the planet.  What is needed is some kind of average lifetime, and not just any average but one that allows for straightforward calculations pertaining to global warming and climate change.  The lifetime generally preferred in science for these kinds of purposes is called the half-life.

For a complete discussion on half-life, see http://amedleyofpotpourri.blogspot.com/2017/11/half-life.html, or https://en.wikipedia.org/wiki/Half-life.  Half-life is commonly encountered in measurements of radioactive decay, and is easiest to describe in this context.  The half-life of a quantity of radioactive atoms is the time required for half the atoms to decay into other atoms (which may or may not be radioactive themselves).  For example, starting with X number of uranium atoms, the half-life is the time it takes until only X/2 uranium atoms are left.  The important point here is that it is not possible to say when a specific atom will decay; thus, we can only speak in terms of probabilities.  Another way of expressing half-life is the number of years that must elapse for there to be a 50% chance that a specific atom will decay -- or, more generally, for a specific process to occur.
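In symbols (a standard relation, stated here for convenience rather than taken from the linked posts): a quantity decaying exponentially with decay constant \lambda obeys

N(t) = N_0\,e^{-\lambda t} = N_0\left(\tfrac{1}{2}\right)^{t/t_{1/2}},\qquad t_{1/2} = \frac{\ln 2}{\lambda},

so specifying the half-life t_{1/2} fixes the entire decay curve.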

Applied to CO2 in the atmosphere, half-life refers to the time needed for a specific CO2 molecule to leave the atmosphere and enter a surface or oceanic sink; alternatively, it is the time over which half the CO2 residing in the atmosphere will leave it.  The estimate of this half-life used here is 27 years (http://amedleyofpotpourri.blogspot.com/2017/11/the-half-life-of-co2-in-earths.html).  Here there is an important point:  if CO2 levels are constant, as they have apparently been for about the last 10,000 years up to human industrialization beginning 150-200 years ago (at around 280 ppm, or 2.1 trillion tonnes), then over this half-life an equal amount of CO2 must have been entering the atmosphere as leaving it over any time period.

Given the half-life of atmospheric CO2, we can calculate the percentage of the gas that leaves the atmosphere per year.  The necessary calculation is given in the reference above, and it turns out to be about 2.5%.  That is, when the atmosphere contained 280 ppm of CO2 (or 2.1 trillion tonnes), some 7.0 ppm / 52.5 billion tonnes were taken up by sinks every year.  Again, assuming CO2 levels constant, that means an equal amount of the gas moved from those sinks back into the atmosphere / year.
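The 2.5% figure is just the per-year removal fraction implied by a 27-year half-life; a quick sketch of the calculation, using the pre-industrial figures quoted above:

# Per-year removal fraction implied by a 27-year half-life.
half_life_years = 27
fraction_removed_per_year = 1 - 0.5 ** (1 / half_life_years)
print(f"Fraction removed per year: {fraction_removed_per_year:.3%}")    # ~2.5%

# Uptake at the pre-industrial level of 280 ppm (2.1 trillion tonnes):
preindustrial_tonnes = 2.1e12
uptake = fraction_removed_per_year * preindustrial_tonnes
print(f"Uptake at 280 ppm: {uptake/1e9:.1f} billion tonnes/year")       # ~53 (the text rounds to 52.5)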

I will make an assumption now, but, as we shall see, it is a reasonable assumption because calculations based on it fit current data, while significant deviations from it do not.  That assumption is that natural sources of CO2 today are close to those 150-200 years ago; that is, they still amount to about 52.5 billion tonnes of the gas transported into the atmosphere every year. Remember, these are natural sources.  Adding in anthropogenic sources yields a total of about 90 billion tonnes of CO2 transported into the atmosphere yearly.

How well does this assumption work?  Go back to the 2.5% / year removal of atmospheric CO2.  Applied to the ~400 ppm -- 3 trillion tonnes -- of the gas today, this means some 75 billion tonnes / year are being taken up by surface sinks.

If 90 billion tonnes of CO2 are entering the atmosphere every year, while 75 billion tonnes are leaving, then this gives a net increase of 90-75 = 15 billion tonnes / year.  15 billion tonnes equals 2 ppm, which is very close to the average increase of CO2 per year over the last several decades.  This strongly indicates that the assumption of CO2 outgassing from natural sinks having changed little over the last 150-200 years is at least approximately correct, even with the 1 degree C temperature increase over that period.
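The same bookkeeping at today's level, as a quick sketch (it uses the 7.5-billion-tonnes-per-ppm conversion implicit in the figures above):

# Present-day CO2 budget check (figures from the text).
tonnes_per_ppm = 2.1e12 / 280            # ~7.5 billion tonnes per ppm
removal_fraction = 0.025                 # per year, from the 27-year half-life
current_ppm = 400
current_tonnes = current_ppm * tonnes_per_ppm    # ~3 trillion tonnes

uptake = removal_fraction * current_tonnes       # ~75 billion tonnes/year
influx = 90e9                                    # natural + anthropogenic, tonnes/year
net = influx - uptake                            # ~15 billion tonnes/year
print(f"Uptake:   {uptake/1e9:.0f} Gt/yr")
print(f"Net gain: {net/1e9:.0f} Gt/yr = {net/tonnes_per_ppm:.1f} ppm/yr")   # ~2 ppm/yr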



If the above analysis is correct, we can posit some reasonable speculations about future levels of atmospheric CO2, and their possible effects on warming and climate change.  For example, if the current 90 billion tonne atmospheric influx per year is maintained, then levels of the gas will increase until its 27 year half-life leads to an equal amount leaving annually, bringing the system to equilibrium.  It turns out that 90 billion is 2.5% of 3.6 trillion tonnes, which corresponds to 480 ppm.  That is, in approximately a century (about 3.7 half-lives), atmospheric CO2 will almost level out at 480 ppm, which is 20% greater than current levels.
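A small sketch of the equilibrium argument, plus a year-by-year iteration to show the roughly century-long approach:

# Equilibrium level for a constant 90 Gt/yr influx, and the approach to it.
tonnes_per_ppm = 7.5e9
removal_fraction = 0.025
influx = 90e9                                    # tonnes/year, held constant

equilibrium_tonnes = influx / removal_fraction   # at equilibrium, influx = removal
print(f"Equilibrium: {equilibrium_tonnes/1e12:.1f} trillion tonnes "
      f"= {equilibrium_tonnes/tonnes_per_ppm:.0f} ppm")     # ~3.6 trillion, ~480 ppm

level = 400 * tonnes_per_ppm                     # start from today's ~400 ppm
for year in range(1, 151):
    level += influx - removal_fraction * level
    if year % 50 == 0:
        print(f"after {year:3d} years: {level/tonnes_per_ppm:.0f} ppm")
# After about a century the level has climbed most of the way to ~480 ppm
# and is clearly flattening out.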

There is considerable disagreement about the climate effect of such an increase.  I'm going to take the most straightforward approach: since the Earth has experienced a 40% increase in CO2 over the last ~150 years, accompanied by a ~1 degree C increase in temperature, a 20% rise should lead to about another ~0.5 degree C of further warming.  Again though, I emphasize that this is by no means certain.

What about other scenarios?  While there is no way emissions could be reduced to zero now or in the immediate future, we can still calculate how long it would take for CO2 to return to pre-industrial levels, or close to them, if that did happen.  If emissions were to suddenly decrease to the natural 52.5 billion tonnes / year, the current uptake of 75 billion tonnes would mean a first-year decrease of atmospheric CO2 of 22.5 billion tonnes, reducing the current level from 400 ppm down to about 397 ppm.  This difference will shrink as CO2 diminishes, to an average of around 11.5 billion tonnes -- 1.5 ppm -- per year; in about 80 years, or three half-lives -- that is, by about the year 2100 -- we would be close to the pre-industrial level of 280 ppm.
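The sudden-stop scenario can be iterated the same way; a quick sketch:

# Sudden-stop scenario: anthropogenic emissions cease, leaving only the
# natural ~52.5 Gt/yr influx; iterate the same yearly bookkeeping.
tonnes_per_ppm = 7.5e9
removal_fraction = 0.025
natural_influx = 52.5e9

level = 400 * tonnes_per_ppm
for year in range(1, 81):
    level += natural_influx - removal_fraction * level
    if year in (1, 27, 54, 80):
        print(f"after {year:2d} years: {level/tonnes_per_ppm:.0f} ppm")
# The excess above 280 ppm roughly halves every 27 years, so after about
# three half-lives (~80 years) the level is back near 295-300 ppm.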

If emissions were to gradually drop to zero during the coming century, how would this scenario change?  I calculate we would then reach approximately half-way toward pre-industrial CO2 levels from today during that period, or about 340 ppm.

On the other hand, what if emissions double over the course of this century; how much would levels rise?  A doubling of emissions means increasing them from 90 billion tonnes / year to around 125-130 billion tonnes.  Again, we ask what the atmospheric level of CO2 would have to be for this amount to equal 2.5% of that level.  The answer is at least five point two trillion tonnes, or 680-700 ppm.  Such an increase could represent a 1 degree C temperature increase above the unchanged-emissions scenario (i.e., about 1.5 degrees of total further warming), perhaps even more.
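The doubled-emissions equilibrium follows from the same one-line calculation; a quick sketch:

# Doubled-emissions scenario: what level makes removal equal ~125-130 Gt/yr?
tonnes_per_ppm = 7.5e9
removal_fraction = 0.025

for influx in (125e9, 130e9):
    equilibrium = influx / removal_fraction
    print(f"influx {influx/1e9:.0f} Gt/yr -> equilibrium "
          f"{equilibrium/1e12:.1f} trillion tonnes = {equilibrium/tonnes_per_ppm:.0f} ppm")
# Gives roughly 5.0-5.2 trillion tonnes, i.e. about 670-690 ppm, consistent
# with the 680-700 ppm range quoted above.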



Raising global temperatures should increase the atmospheric half-life of CO2, as the solubility of gases in water decreases with increasing temperature.  As we have seen, however, a rise of 1 degree doesn't appear to have had much effect, so another 0.5-1.5 C should not drastically increase the half-life either.

Assuming the analysis here holds up, it means that claims of having to reduce carbon emissions to zero by 2100 or even 2050 are pure hyperbole, without scientific basis.  Yet developing energy sources with lower or zero emissions compared to fossil fuels should have high priority, as increasing CO2 sufficiently could have unpleasant consequences, some of which we are not yet aware of.

Active galactic nucleus

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Active_galactic_nucleus ...