Saturday, September 20, 2014

Observable universe

From Wikipedia, the free encyclopedia

Hubble Ultra-Deep Field image of a region of the observable universe (equivalent sky area size shown in bottom left corner), near the constellation Fornax. Each spot is a galaxy, consisting of billions of stars. The light from the smallest, most red-shifted galaxies originated nearly 14 billion years ago.

The observable universe consists of the galaxies and other matter that can, in principle, be observed from Earth at the present time, because light and other signals from these objects have had time to reach the Earth since the beginning of the cosmological expansion. Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction. That is, the observable universe is a spherical volume (a ball) centered on the observer, regardless of the shape of the universe as a whole.[citation needed] Every location in the universe has its own observable universe, which may or may not overlap with the one centered on Earth.

The word observable used in this sense does not depend on whether modern technology actually permits detection of radiation from an object in this region (or indeed on whether there is any radiation to detect). It simply indicates that it is possible in principle for light or other signals from the object to reach an observer on Earth. In practice, we can see light only from as far back as the time of photon decoupling in the recombination epoch. That is when particles were first able to emit photons that were not quickly re-absorbed by other particles. Before then, the universe was filled with a plasma that was opaque to photons.

The surface of last scattering is the collection of points in space at the exact distance that photons from the time of photon decoupling just reach us today. These are the photons we detect today as cosmic microwave background radiation (CMBR). However, with future technology, it may be possible to observe the still older relic neutrino background, or even more distant events via gravitational waves (which also should move at the speed of light). Sometimes astrophysicists distinguish between the visible universe, which includes only signals emitted since recombination, and the observable universe, which includes signals since the beginning of the cosmological expansion (the Big Bang in traditional cosmology, the end of the inflationary epoch in modern cosmology). According to calculations, the comoving distance (current proper distance) to particles from the CMBR, which represents the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years),[1] about 2% larger.

The best estimate of the age of the universe as of 2013 is 13.798 ± 0.037 billion years,[2] but due to the expansion of space humans are observing objects that were originally much closer but are now considerably farther away (as defined in terms of cosmological proper distance, which is equal to the comoving distance at the present time) than a static 13.8 billion light-years distance.[3] The diameter of the observable universe is estimated at about 28 billion parsecs (93 billion light-years),[4] putting the edge of the observable universe at about 46–47 billion light-years away.[5][6]

The universe versus the observable universe

Some parts of the universe are too far away for the light emitted since the Big Bang to have had enough time to reach Earth, so these portions of the universe lie outside the observable universe. In the future, light from distant galaxies will have had more time to travel, so additional regions will become observable. However, due to Hubble's law, regions sufficiently distant from us are expanding away from us faster than the speed of light (special relativity prevents nearby objects in the same local region from moving faster than the speed of light with respect to each other, but there is no such constraint for distant objects when the space between them is expanding; see uses of the proper distance for a discussion), and furthermore the expansion rate appears to be accelerating due to dark energy. Assuming dark energy remains constant (an unchanging cosmological constant), so that the expansion rate of the universe continues to accelerate, there is a "future visibility limit" beyond which objects will never enter our observable universe at any time in the infinite future, because light emitted by objects outside that limit would never reach us. (A subtlety is that, because the Hubble parameter is decreasing with time, there can be cases where a galaxy that is receding from us just a bit faster than light does emit a signal that reaches us eventually.[6][7]) This future visibility limit is calculated at a comoving distance of 19 billion parsecs (62 billion light-years), assuming the universe will keep expanding forever, which implies the number of galaxies that we can ever theoretically observe in the infinite future (leaving aside the issue that some may be impossible to observe in practice due to redshift, as discussed in the following paragraph) is only larger than the number currently observable by a factor of 2.36.[1]
Artist's logarithmic scale conception of the observable universe with the Solar System at the center, inner and outer planets, Kuiper belt, Oort cloud, Alpha Centauri, Perseus Arm, Milky Way galaxy, Andromeda galaxy, nearby galaxies, Cosmic Web, Cosmic microwave radiation and the Big Bang's invisible plasma on the edge.

Though in principle more galaxies will become observable in the future, in practice an increasing number of galaxies will become extremely redshifted due to ongoing expansion, so much so that they will seem to disappear from view and become invisible.[8][9][10] An additional subtlety is that a galaxy at a given comoving distance is defined to lie within the "observable universe" if we can receive signals emitted by the galaxy at any age in its past history (say, a signal sent from the galaxy only 500 million years after the Big Bang), but because of the universe's expansion, there may be some later age at which a signal sent from the same galaxy can never reach us at any point in the infinite future (so for example we might never see what the galaxy looked like 10 billion years after the Big Bang),[11] even though it remains at the same comoving distance (comoving distance is defined to be constant with time—unlike proper distance, which is used to define recession velocity due to the expansion of space), which is less than the comoving radius of the observable universe.[clarification needed] This fact can be used to define a type of cosmic event horizon whose distance from us changes over time. For example, the current distance to this horizon is about 16 billion light years, meaning that a signal from an event happening at present can eventually reach us in the future if the event is less than 16 billion light years away, but the signal will never reach us if the event is more than 16 billion light years away.[6]

Both popular and professional research articles in cosmology often use the term "universe" to mean "observable universe". This can be justified on the grounds that we can never know anything by direct experimentation about any part of the universe that is causally disconnected from us, although many credible theories require a total universe much larger than the observable universe. No evidence exists to suggest that the boundary of the observable universe constitutes a boundary on the universe as a whole, nor do any of the mainstream cosmological models propose that the universe has any physical boundary in the first place, though some models propose it could be finite but unbounded, like a higher-dimensional analogue of the 2D surface of a sphere that is finite in area but has no edge. It is plausible that the galaxies within our observable universe represent only a minuscule fraction of the galaxies in the universe. According to the theory of cosmic inflation and its founder, Alan Guth, if it is assumed that inflation began about 10^−37 seconds after the Big Bang, then with the plausible assumption that the size of the universe at this time was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least 3×10^23 times larger than the size of the observable universe.[12] There are also lower estimates claiming that the entire universe is in excess of 250 times larger than the observable universe.[13]

If the universe is finite but unbounded, it is also possible that the universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies, formed by light that has circumnavigated the universe. It is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different. Bielewicz et al.[14] claims to establish a lower bound of 27.9 gigaparsecs (91 billion light-years) on the diameter of the last scattering surface (since this is only a lower bound, the paper leaves open the possibility that the whole universe is much larger, even infinite). This value is based on matching-circle analysis of the WMAP 7 year data. This approach has been disputed.[15]

Size

Visualization of the 93 billion light-year – or 28 billion parsec – three-dimensional observable universe. The scale is such that the fine grains represent collections of large numbers of superclusters. The Virgo Supercluster – home of the Milky Way – is marked at the center, but is too small to be seen in the image.

The comoving distance from Earth to the edge of the observable universe is about 14 gigaparsecs (46 billion light-years or 4.3×10^26 meters) in any direction. The observable universe is thus a sphere with a diameter of about 29 gigaparsecs[16] (93 Gly or 8.8×10^26 m).[17] Assuming that space is roughly flat, this size corresponds to a comoving volume of about 1.3×10^4 Gpc^3 (4.1×10^5 Gly^3 or 3.5×10^80 m^3).
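The quoted volume follows directly from the formula for the volume of a sphere. A minimal sketch (the 46.5 Gly radius and the Gpc-to-Gly conversion are rounded assumptions, so the results match the quoted figures only to rounding):

```python
import math

GLY_PER_GPC = 3.2616          # 1 gigaparsec is about 3.26 billion light-years (rounded)

radius_gly = 46.5             # assumed comoving radius, billions of light-years
radius_gpc = radius_gly / GLY_PER_GPC          # about 14.3 Gpc

# Comoving volume of the observable universe, assuming flat space
volume_gpc3 = (4 / 3) * math.pi * radius_gpc ** 3
volume_gly3 = (4 / 3) * math.pi * radius_gly ** 3
print(f"radius = {radius_gpc:.1f} Gpc, volume = {volume_gpc3:.2e} Gpc^3 = {volume_gly3:.2e} Gly^3")
```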

The figures quoted above are distances now (in cosmological time), not distances at the time the light was emitted. For example, the cosmic microwave background radiation that we see right now was emitted at the time of photon decoupling, estimated to have occurred about 380,000 years after the Big Bang,[18][19] which occurred around 13.8 billion years ago. This radiation was emitted by matter that has, in the intervening time, mostly condensed into galaxies, and those galaxies are now calculated to be about 46 billion light-years from us.[1][6] To estimate the distance to that matter at the time the light was emitted, we may first note that according to the Friedmann–LemaĆ®tre–Robertson–Walker metric, which is used to model the expanding universe, if at the present time we receive light with a redshift of z, then the scale factor at the time the light was originally emitted is given by[20][21]

a(t) = \frac{1}{1 + z}.

WMAP nine-year results combined with other measurements give the redshift of photon decoupling as z = 1091.64 ± 0.47,[22] which implies that the scale factor at the time of photon decoupling would be 1/1092.64. So if the matter that originally emitted the oldest CMBR photons has a present distance of 46 billion light-years, then at the time of decoupling, when the photons were originally emitted, the distance would have been only about 42 million light-years.
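The scale-factor relation above can be checked numerically. A minimal sketch, using the 46-billion-light-year present distance quoted in the text:

```python
z_decoupling = 1091.64        # redshift of photon decoupling (WMAP nine-year results)
a = 1 / (1 + z_decoupling)    # scale factor when the CMBR photons were emitted

present_distance_gly = 46.0   # present comoving distance, billions of light-years
emission_distance_mly = present_distance_gly * 1000 * a   # millions of light-years

print(f"a = {a:.3e}, distance at emission = {emission_distance_mly:.0f} million light-years")
```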

Misconceptions

An example of one of the most common misconceptions about the size of the observable universe. Despite the fact that the universe is 13.8 billion years old, the distance to the edge of the observable universe is not 13.8 billion light-years, because the universe is expanding. This plaque appears at the Rose Center for Earth and Space in New York City.

Many secondary sources have reported a wide variety of incorrect figures for the size of the visible universe. Some of these figures are listed below, with brief descriptions of possible reasons for misconceptions about them.
13.8 billion light-years
The age of the universe is estimated to be 13.8 billion years. While it is commonly understood that nothing can accelerate to velocities equal to or greater than that of light, it is a common misconception that the radius of the observable universe must therefore amount to only 13.8 billion light-years. This reasoning would only make sense if the flat, static Minkowski spacetime conception under special relativity were correct. In the real universe, spacetime is curved in a way that corresponds to the expansion of space, as evidenced by Hubble's law. Distances obtained as the speed of light multiplied by a cosmological time interval have no direct physical significance.[23]
15.8 billion light-years
This is obtained in the same way as the 13.8 billion light year figure, but starting from an incorrect age of the universe that the popular press reported in mid-2006.[24][25] For an analysis of this claim and the paper that prompted it, see the following reference at the end of this article.[26]
27.6 billion light-years
This is a diameter obtained from the (incorrect) radius of 13.8 billion light-years.
78 billion light-years
In 2003, Cornish et al.[27] found this lower bound for the diameter of the whole universe (not just the observable part), if we postulate that the universe is finite in size due to its having a nontrivial topology,[28][29] with this lower bound based on the estimated current distance between points that we can see on opposite sides of the cosmic microwave background radiation (CMBR). If the whole universe is smaller than this sphere, then light has had time to circumnavigate it since the Big Bang, producing multiple images of distant points in the CMBR, which would show up as patterns of repeating circles.[30] Cornish et al. looked for such an effect at scales of up to 24 gigaparsecs (78 Gly or 7.4×10^26 m) and failed to find it, and suggested that if they could extend their search to all possible orientations, they would then "be able to exclude the possibility that we live in a universe smaller than 24 Gpc in diameter". The authors also estimated that with "lower noise and higher resolution CMB maps (from WMAP's extended mission and from Planck), we will be able to search for smaller circles and extend the limit to ~28 Gpc."[27] This estimate of the maximum lower bound that can be established by future observations corresponds to a radius of 14 gigaparsecs, or around 46 billion light-years, about the same as the figure for the radius of the visible universe (whose radius is defined by the CMBR sphere) given in the opening section. A 2012 preprint by most of the same authors as the Cornish et al. paper has extended the current lower bound to a diameter of 98.5% the diameter of the CMBR sphere, or about 26 Gpc.[31]
156 billion light-years
This figure was obtained by doubling 78 billion light-years on the assumption that it is a radius.[32] Since 78 billion light-years is already a diameter (the original paper by Cornish et al. says, "By extending the search to all possible orientations, we will be able to exclude the possibility that we live in a universe smaller than 24 Gpc in diameter," and 24 Gpc is 78 billion light years),[27] the doubled figure is incorrect. This figure was very widely reported.[32][33][34] A press release from Montana State University – Bozeman, where Cornish works as an astrophysicist, noted the error when discussing a story that had appeared in Discover magazine, saying "Discover mistakenly reported that the universe was 156 billion light-years wide, thinking that 78 billion was the radius of the universe instead of its diameter."[35]
180 billion light-years
This estimate combines the erroneous 156 billion light-year figure with evidence that the M33 Galaxy is actually fifteen percent farther away than previous estimates and that, therefore, the Hubble constant is fifteen percent smaller.[36] The 180 billion figure is obtained by adding 15% to 156 billion light years.

Large-scale structure

Sky surveys and mappings of the various wavelength bands of electromagnetic radiation (in particular 21-cm emission) have yielded much information on the content and character of the universe's structure. The organization of structure appears to follow a hierarchical model, with organization up to the scale of superclusters and filaments. Larger than this, there seems to be no continued structure, a phenomenon that has been referred to as the End of Greatness.

Walls, filaments, and voids


The organization of structure arguably begins at the stellar level, though most cosmologists rarely address astrophysics on that scale. Stars are organized into galaxies, which in turn form galaxy groups, galaxy clusters, superclusters, sheets, walls and filaments, which are separated by immense voids, creating a vast foam-like structure sometimes called the "cosmic web". Prior to 1989, it was commonly assumed that virialized galaxy clusters were the largest structures in existence, and that they were distributed more or less uniformly throughout the universe in every direction. However, since the early 1980s, more and more structures have been discovered.

In 1983, Adrian Webster identified the Webster LQG, a large quasar group consisting of 5 quasars. The discovery was the first identification of a large-scale structure, and expanded what is known about the grouping of matter in the universe. In 1987, Robert Brent Tully identified the Pisces–Cetus Supercluster Complex, the galaxy filament in which the Milky Way resides; it is about 1 billion light-years across. That same year, an unusually large region with no galaxies was discovered: the Giant Void, which measures 1.3 billion light-years across. Based on redshift survey data, in 1989 Margaret Geller and John Huchra discovered the "Great Wall",[37] a sheet of galaxies more than 500 million light-years long and 200 million light-years wide, but only 15 million light-years thick. The existence of this structure escaped notice for so long because it requires locating the positions of galaxies in three dimensions, which involves combining location information about the galaxies with distance information from redshifts. Two years later, astronomers Roger G. Clowes and Luis E. Campusano discovered the Clowes–Campusano LQG, a large quasar group measuring two billion light-years at its widest point, which was the largest known structure in the universe at the time of its announcement.
In April 2003, another large-scale structure was discovered, the Sloan Great Wall. In August 2007, a possible supervoid was detected in the constellation Eridanus.[38] It coincides with the 'CMB cold spot', a cold region in the microwave sky that is highly improbable under the currently favored cosmological model. This supervoid could cause the cold spot, but to do so it would have to be improbably big, possibly a billion light-years across, almost as big as the Giant Void mentioned above.
Computer-simulated image of an area of space more than 50 million light-years across, presenting a possible large-scale distribution of light sources in the universe; the precise relative contributions of galaxies and quasars are unclear.

Another large-scale structure is the Newfound Blob, a collection of galaxies and enormous gas bubbles that measures about 200 million light years across.

In recent studies the universe appears as a collection of giant bubble-like voids separated by sheets and filaments of galaxies, with the superclusters appearing as occasional relatively dense nodes. This network is clearly visible in the 2dF Galaxy Redshift Survey. In the figure, a three-dimensional reconstruction of the inner parts of the survey is shown, revealing an impressive view of the cosmic structures in the nearby universe. Several superclusters stand out, such as the Sloan Great Wall.

In 2011, a large quasar group was discovered, U1.11, measuring about 2.5 billion light-years across. On January 11, 2013, another large quasar group, the Huge-LQG, was discovered; it was measured to be four billion light-years across, the largest known structure in the universe at that time.[39] In November 2013, astronomers discovered the Hercules–Corona Borealis Great Wall,[40][41] an even bigger structure, twice as large as the former. It was defined by the mapping of gamma-ray bursts.[40][42]

End of Greatness

The End of Greatness is an observational scale, at roughly 100 Mpc (roughly 300 million light-years), at which the lumpiness seen in the large-scale structure of the universe is homogenized and isotropized in accordance with the cosmological principle. At this scale, no pseudo-random fractalness is apparent.[43] The superclusters and filaments seen in smaller surveys are randomized to the extent that the smooth distribution of the universe is visually apparent. It was not until the redshift surveys of the 1990s were completed that this scale could be accurately observed.[44]

Observations

"Panoramic view of the entire near-infrared sky reveals the distribution of galaxies beyond the Milky Way. The image is derived from the 2MASS Extended Source Catalog (XSC), more than 1.5 million galaxies, and the Point Source Catalog (PSC), nearly 0.5 billion Milky Way stars. The galaxies are color-coded by 'redshift' obtained from the UGC, CfA, Tully NBGC, LCRS, 2dF, 6dFGS, and SDSS surveys (and from various observations compiled by the NASA Extragalactic Database), or photometrically deduced from the K band (2.2 μm). Blue are the nearest sources (z < 0.01); green are at moderate distances (0.01 < z < 0.04) and red are the most distant sources that 2MASS resolves (0.04 < z < 0.1). The map is projected with an equal-area Aitoff in the Galactic system (Milky Way at center)."[45]

Another indicator of large-scale structure is the 'Lyman-alpha forest'. This is a collection of absorption lines that appear in the spectra of light from quasars, which are interpreted as indicating the existence of huge thin sheets of intergalactic (mostly hydrogen) gas. These sheets appear to be associated with the formation of new galaxies.

Caution is required in describing structures on a cosmic scale because things are often different from how they appear. Gravitational lensing (bending of light by gravitation) can make an image appear to originate in a different direction from its real source. This is caused when foreground objects (such as galaxies) curve surrounding spacetime (as predicted by general relativity), and deflect passing light rays. Rather usefully, strong gravitational lensing can sometimes magnify distant galaxies, making them easier to detect. Weak lensing (gravitational shear) by the intervening universe in general also subtly changes the observed large-scale structure. As of 2004, measurements of this subtle shear showed considerable promise as a test of cosmological models.

The large-scale structure of the universe also looks different if one only uses redshift to measure distances to galaxies. For example, galaxies behind a galaxy cluster are attracted to it, and so fall towards it, and so are slightly blueshifted (compared to how they would be if there were no cluster). On the near side, things are slightly redshifted. Thus, the environment of the cluster looks a bit squashed if redshifts are used to measure distance. An opposite effect works on the galaxies already within a cluster: the galaxies have some random motion around the cluster center, and when these random motions are converted to redshifts, the cluster appears elongated. This creates a "finger of God": the illusion of a long chain of galaxies pointed at the Earth.

Cosmography of our cosmic neighborhood

At the centre of the Hydra-Centaurus Supercluster, a gravitational anomaly called the Great Attractor affects the motion of galaxies over a region hundreds of millions of light-years across. These galaxies are all redshifted, in accordance with Hubble's law. This indicates that they are receding from us and from each other, but the variations in their redshift are sufficient to reveal the existence of a concentration of mass equivalent to tens of thousands of galaxies.

The Great Attractor, discovered in 1986, lies at a distance of between 150 million and 250 million light-years (250 million is the most recent estimate), in the direction of the Hydra and Centaurus constellations. In its vicinity there is a preponderance of large old galaxies, many of which are colliding with their neighbours, or radiating large amounts of radio waves.

In 1987, astronomer R. Brent Tully of the University of Hawaii's Institute of Astronomy identified what he called the Pisces–Cetus Supercluster Complex, a structure one billion light-years long and 150 million light-years across in which, he claimed, the Local Supercluster was embedded.[46][47]

Mass of ordinary matter

The mass of the universe is often quoted as 10^50 tonnes or 10^53 kg.[48] In this context, mass refers to ordinary matter and includes the interstellar medium (ISM) and the intergalactic medium (IGM). However, it excludes dark matter and dark energy. Three calculations substantiate this quoted value for the mass of ordinary matter in the universe: estimates based on critical density, extrapolations from the number of stars, and estimates based on a steady-state universe. All three calculations assume a finite universe.

Estimates based on critical density

The critical density is the energy density at which the expansion of the universe is poised between continued expansion and collapse.[49] Observations of the cosmic microwave background from the Wilkinson Microwave Anisotropy Probe suggest that the spatial curvature of the universe is very close to zero, which in current cosmological models implies that the value of the density parameter must be very close to a certain critical value. Under this condition, the critical density \rho_c is:[50]

\rho_c = \frac{3H_0^2}{8 \pi G}

where G is the gravitational constant and H_0 is the Hubble constant. The European Space Agency's Planck telescope results give H_0 = 67.15 kilometers per second per megaparsec. This gives a critical density of 0.85×10^−26 kg/m^3 (commonly quoted as about 5 hydrogen atoms per cubic meter). This density includes four significant types of energy/mass: ordinary matter (4.8%), neutrinos (0.1%), cold dark matter (26.8%), and dark energy (68.3%).[2] Although neutrinos are particles like electrons, they are listed separately because they are difficult to detect and so different from ordinary matter. Thus, the density of ordinary matter is 4.8% of the total critical density, or 4.08×10^−28 kg/m^3. To convert this density to mass, we must multiply by volume, a value based on the radius of the "observable universe". Since the universe has been expanding for 13.8 billion years, the comoving distance (radius) is now about 46.6 billion light-years. Thus, volume ((4/3)Ļ€r^3) equals 3.58×10^80 m^3, and the mass of ordinary matter equals density (4.08×10^−28 kg/m^3) times volume (3.58×10^80 m^3), or 1.46×10^53 kg.
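The chain of numbers in this section can be reproduced in a few lines. A sketch using rounded physical constants, so the results agree with the quoted values only to two or three significant figures:

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M_PER_MPC = 3.0857e22         # metres per megaparsec
M_PER_LY = 9.4607e15          # metres per light-year

H0 = 67.15e3 / M_PER_MPC      # 67.15 km/s/Mpc converted to s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)     # critical density, about 0.85e-26 kg/m^3
rho_ordinary = 0.048 * rho_c              # ordinary matter is 4.8% of the total

r = 46.6e9 * M_PER_LY                     # comoving radius in metres
volume = (4 / 3) * math.pi * r**3         # about 3.58e80 m^3
mass = rho_ordinary * volume              # about 1.46e53 kg
print(f"rho_c = {rho_c:.2e} kg/m^3, mass of ordinary matter = {mass:.2e} kg")
```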

Extrapolation from number of stars

There is currently no way to know exactly the number of stars, but from the current literature a range of 10^22 to 10^24 is normally quoted.[51][52][53][54] One way to substantiate this range is to estimate the number of galaxies and multiply by the number of stars in an average galaxy. The 2004 Hubble Ultra-Deep Field image contains an estimated 10,000 galaxies.[55] The patch of sky imaged is 3.4 arcminutes on each side; for comparison, it would require over 50 of these images to cover the full Moon. If this area is typical of the entire sky, there are over 100 billion galaxies in the universe.[56] More recently, in 2012, Hubble scientists produced the Hubble Extreme Deep Field image, which showed slightly more galaxies for a comparable area.[57] However, to compute the number of stars from these images we would need additional assumptions: the percentages of large and dwarf galaxies, and their average numbers of stars. A reasonable option is therefore to assume 100 billion average galaxies and 100 billion stars per average galaxy, which gives 10^22 stars. Next, we need the average star mass, which can be calculated from the distribution of stars in the Milky Way. Within the Milky Way, if a large number of stars are counted by spectral class, 73% are class M stars, which contain only 30% of the Sun's mass. Considering the mass and number of stars in each spectral class, the average star is 51.5% of the Sun's mass.[58] The Sun's mass is 2×10^30 kg, so a reasonable figure for the mass of an average star in the universe is 10^30 kg. Thus, the mass of all stars equals the number of stars (10^22) times the average mass of a star (10^30 kg), or 10^52 kg. The next calculation adjusts for the interstellar medium (ISM) and the intergalactic medium (IGM). The ISM is material between stars: gas (mostly hydrogen) and dust. The IGM is material between galaxies, mostly hydrogen. Ordinary matter (protons, neutrons and electrons) exists in the ISM and IGM as well as in stars. In the reference "The Cosmic Energy Inventory", the percentage of each part is given: stars = 5.9%, interstellar medium (ISM) = 1.7%, and intergalactic medium (IGM) = 92.4%.[59] Thus, to extrapolate the mass of the universe from the star mass, divide the 10^52 kg calculated for stars by 5.9%. The result is 1.7×10^53 kg for all the ordinary matter.
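The star-count extrapolation reduces to two multiplications and a division. A sketch using the section's rounded inputs:

```python
n_stars = 1e22                # assumed total number of stars
avg_star_mass_kg = 1e30       # roughly 51.5% of a solar mass, rounded to one significant figure
mass_in_stars = n_stars * avg_star_mass_kg     # 1e52 kg locked up in stars

star_fraction = 0.059         # stars hold 5.9% of ordinary matter ("The Cosmic Energy Inventory")
ordinary_matter_kg = mass_in_stars / star_fraction   # about 1.7e53 kg
print(f"ordinary matter = {ordinary_matter_kg:.2e} kg")
```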

Estimates based on steady-state universe

Sir Fred Hoyle calculated the mass of an observable steady-state universe using the formula:[60]
\frac{4}{3}\pi\rho\left(\frac{c}{H}\right)^3
which can also be stated as[61]
\frac{c^3}{2GH}
Here H is the Hubble constant, \rho is Hoyle's value for the density, G is the gravitational constant, and c is the speed of light.

This calculation yields approximately 0.92×10^53 kg; however, this represents all energy/matter and is based on the Hubble volume (the volume of a sphere with radius equal to the Hubble length of about 13.7 billion light-years). The critical density calculation above was based on the comoving distance radius of 46.6 billion light-years, so the Hoyle mass/energy result must be adjusted for the increased volume. The comoving distance radius gives a volume about 39 times greater (46.6 cubed divided by 13.7 cubed). However, as the volume increases, ordinary matter and dark matter would not increase; only dark energy increases with volume. Thus, assuming ordinary matter, neutrinos, and dark matter make up 31.7% of the total mass/energy and dark energy 68.3%, the total mass/energy for the steady-state calculation is: the mass of ordinary matter, neutrinos, and dark matter (31.7% times 0.92×10^53 kg) plus the mass of dark energy ((68.3% times 0.92×10^53 kg) times the increased volume (39)). This equals 2.48×10^54 kg. As noted above for the critical density method, ordinary matter is 4.8% of all energy/matter. If the Hoyle result is multiplied by this percentage, the result for ordinary matter is 1.20×10^53 kg.
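Hoyle's formula and the volume adjustment described above can be sketched as follows (constants are rounded, so the outputs match the quoted figures only approximately):

```python
c = 2.998e8                   # speed of light, m/s
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
H = 67.15e3 / 3.0857e22       # Hubble constant, 67.15 km/s/Mpc in s^-1

hoyle_mass = c**3 / (2 * G * H)            # about 0.92e53 kg within the Hubble volume

volume_ratio = (46.6 / 13.7) ** 3          # comoving volume / Hubble volume, about 39
# Only the dark-energy share (68.3%) scales with volume; matter (31.7%) does not
total = 0.317 * hoyle_mass + 0.683 * hoyle_mass * volume_ratio
ordinary = 0.048 * total                   # ordinary matter is 4.8% of the total
print(f"Hoyle mass = {hoyle_mass:.2e} kg, total = {total:.2e} kg, ordinary = {ordinary:.2e} kg")
```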

Comparison of results

In summary, the three independent calculations produced reasonably close results: 1.46×10^53 kg, 1.7×10^53 kg, and 1.20×10^53 kg. The average is 1.45×10^53 kg.

The key assumptions using the extrapolation-from-star-mass method were the number of stars (10^22) and the percentage of ordinary matter in stars (5.9%). The key assumptions using the critical density method were the comoving distance radius of the universe (46.6 billion light-years) and the percentage of ordinary matter in all matter (4.8%). The key assumptions using the Hoyle steady-state method were the comoving distance radius and the percentage of dark energy in all mass (68.3%). Both the critical density and the Hoyle steady-state equations also used the Hubble constant (67.15 km/s/Mpc).

Matter content — number of atoms

Assuming the mass of ordinary matter is about 1.45×10⁵³ kg (see the previous section) and assuming all atoms are hydrogen atoms (which make up about 74% of the mass of our galaxy's atoms; see Abundance of the chemical elements), calculating the estimated total number of atoms in the universe is straightforward: divide the mass of ordinary matter by the mass of a hydrogen atom (1.45×10⁵³ kg divided by 1.67×10⁻²⁷ kg). The result is approximately 10⁸⁰ hydrogen atoms.
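The averaging and division above can be reproduced directly, using the three mass estimates and the hydrogen atom mass quoted in the text:

```python
# Sketch of the atom-count estimate: average the three mass results,
# then divide by the hydrogen atom mass (all atoms treated as hydrogen).

estimates = [1.46e53, 1.7e53, 1.20e53]   # kg, the three methods above
mean_mass = sum(estimates) / len(estimates)
print(f"average ordinary-matter mass ~ {mean_mass:.2e} kg")  # ~ 1.45e53 kg

m_hydrogen = 1.67e-27                    # kg per hydrogen atom
n_atoms = mean_mass / m_hydrogen
print(f"number of atoms ~ {n_atoms:.1e}")  # ~ 8.7e79, i.e. roughly 10^80
```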

Most distant objects

The most distant astronomical object announced as of January 2011 is a galaxy candidate classified as UDFj-39546284. In 2009, the gamma-ray burst GRB 090423 was found to have a redshift of 8.2, indicating that the collapsing star that caused it exploded when the universe was only 630 million years old.[62] The burst happened approximately 13 billion years ago,[63] so a distance of about 13 billion light years was widely quoted in the media (or sometimes the more precise figure of 13.035 billion light years).[62] This, however, is the "light travel distance" (see Distance measures (cosmology)) rather than the "proper distance" used both in Hubble's law and in defining the size of the observable universe; cosmologist Ned Wright argues against the common use of light travel distance in astronomical press releases, and offers online calculators that give the current proper distance to a distant object in a flat universe from either the redshift z or the light travel time. The proper distance for a redshift of 8.2 would be about 9.2 Gpc,[64] or about 30 billion light years. Another record-holder for most distant object is a galaxy observed through and located beyond Abell 2218, also with a light travel distance of approximately 13 billion light years from Earth; observations from the Hubble telescope indicate a redshift between 6.6 and 7.1, and observations from Keck telescopes indicate a redshift toward the upper end of this range, around 7.[65] The galaxy's light now observable on Earth began to emanate from its source about 750 million years after the Big Bang.[66]
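The ~9.2 Gpc proper distance for z = 8.2 can be checked with a short numerical integration. The sketch below assumes a flat Lambda-CDM model with the parameters used elsewhere in this article (H0 = 67.15 km/s/Mpc, matter fraction 0.317, dark energy 0.683); these are illustrative assumptions, not necessarily the exact values behind reference [64].

```python
# Comoving (current proper) distance in a flat Lambda-CDM universe:
#   D_C = (c / H0) * Integral from 0 to z of dz' / E(z'),
#   E(z') = sqrt(Omega_m * (1+z')^3 + Omega_lambda)

C_KM_S = 299792.458   # speed of light, km/s
H0 = 67.15            # Hubble constant, km/s/Mpc (assumed, from this article)
OMEGA_M, OMEGA_L = 0.317, 0.683   # assumed density fractions

def comoving_distance_mpc(z: float, steps: int = 100_000) -> float:
    """Trapezoidal integration of the comoving-distance integral."""
    def inv_e(zp: float) -> float:
        return 1.0 / ((OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L) ** 0.5)
    h = z / steps
    total = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, steps):
        total += inv_e(i * h)
    return (C_KM_S / H0) * total * h

d = comoving_distance_mpc(8.2)
print(f"proper distance to z=8.2 ~ {d / 1000:.1f} Gpc")   # ~ 9.2 Gpc
print(f"~ {d * 3.2616e6 / 1e9:.0f} billion light years")  # ~ 30
```

This recovers both figures quoted in the text: about 9.2 Gpc, or roughly 30 billion light years.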

Horizons

The limit of observability in our universe is set by a set of cosmological horizons which limit, based on various physical constraints, the extent to which we can obtain information about various events in the universe. The most famous horizon is the particle horizon which sets a limit on the precise distance that can be seen due to the finite age of the Universe. Additional horizons are associated with the possible future extent of observations (larger than the particle horizon owing to the expansion of space), an "optical horizon" at the surface of last scattering, and associated horizons with the surface of last scattering for neutrinos and gravitational waves.
A diagram of our location in the observable universe.

Type II supernova

From Wikipedia, the free encyclopedia
 
The expanding remnant of SN 1987A, a Type II-P supernova in the Large Magellanic Cloud. NASA image.

A Type II supernova (plural: supernovae) results from the rapid collapse and violent explosion of a massive star. A star must have at least 8 times, and no more than 40–50 times, the mass of the Sun for this type of explosion.[1] It is distinguished from other types of supernovae by the presence of hydrogen in its spectrum. Type II supernovae are mainly observed in the spiral arms of galaxies and in H II regions, but not in elliptical galaxies.

Stars generate energy by the nuclear fusion of elements. Unlike the Sun, massive stars possess the mass needed to fuse elements with an atomic mass greater than that of hydrogen and helium, albeit at increasingly higher temperatures and pressures, resulting in correspondingly shorter stellar life spans. The degeneracy pressure of electrons and the energy generated by these fusion reactions are sufficient to counter the force of gravity and prevent the star from collapsing, maintaining stellar equilibrium. The star fuses increasingly higher-mass elements, starting with hydrogen and then helium, progressing up through the periodic table until a core of iron and nickel is produced. Fusion of iron or nickel produces no net energy output, so no further fusion can take place, leaving the nickel-iron core inert. With no energy output to provide outward pressure, equilibrium is broken.

When the mass of the inert core exceeds the Chandrasekhar limit of about 1.4 solar masses, electron degeneracy alone is no longer sufficient to counter gravity and maintain stellar equilibrium. A cataclysmic implosion takes place within seconds, in which the outer core reaches an inward velocity of up to 23% of the speed of light and the inner core reaches temperatures of up to 100 billion kelvin. Neutrons and neutrinos are formed via reversed beta-decay, releasing about 10⁴⁶ joules (100 foes) in a ten-second burst. The collapse is halted by neutron degeneracy, causing the implosion to rebound and bounce outward. The energy of this expanding shock wave is sufficient to accelerate the surrounding stellar material to escape velocity, forming a supernova explosion, while the shock wave and extremely high temperature and pressure briefly allow for the production of elements heavier than iron.[2] Depending on the initial size of the star, the remnants of the core form a neutron star or a black hole. Because of the underlying mechanism, the resulting supernova is also described as a core-collapse supernova.
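To put the 10⁴⁶-joule, ten-second burst in scale, a quick comparison against the Sun's output (the solar luminosity value is a standard figure assumed here, not taken from the text):

```python
# Average power of the neutrino burst described above.
burst_energy_j = 1e46        # total energy, from the text
burst_duration_s = 10.0      # approximate burst duration, from the text
burst_power_w = burst_energy_j / burst_duration_s   # 1e45 W

L_SUN_W = 3.828e26           # IAU nominal solar luminosity (assumed reference)
print(f"burst power ~ {burst_power_w:.0e} W, "
      f"about {burst_power_w / L_SUN_W:.1e} solar luminosities")
```

For those ten seconds, the core radiates (in neutrinos) on the order of 10¹⁸ times the Sun's luminosity.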

There exist several categories of Type II supernova explosions, which are categorized based on the resulting light curve—a graph of luminosity versus time—following the explosion. Type II-L supernovae show a steady (linear) decline of the light curve following the explosion, whereas Type II-P display a period of slower decline (a plateau) in their light curve followed by a normal decay. Type Ib and Ic supernovae are a type of core-collapse supernova for a massive star that has shed its outer envelope of hydrogen and (for Type Ic) helium. As a result, they appear to be lacking in these elements.

Formation

The onion-like layers of a massive, evolved star just before core collapse. (Not to scale.)

Stars far more massive than the sun evolve in more complex ways. In the core of the star, hydrogen is fused into helium, releasing thermal energy that heats the star's core and provides outward pressure that supports the star's layers against collapse, in a process known as stellar or hydrostatic equilibrium. The helium produced in the core accumulates there, since temperatures in the core are not yet high enough to cause it to fuse. Eventually, as the hydrogen at the core is exhausted, fusion starts to slow down, and gravity causes the core to contract. This contraction raises the temperature high enough to initiate a shorter phase of helium fusion, which accounts for less than 10% of the star's total lifetime. In stars with fewer than eight solar masses, the carbon produced by helium fusion does not fuse, and the star gradually cools to become a white dwarf.[3][4] White dwarf stars, if they have a near companion, may then become Type Ia supernovae.

A much larger star, however, is massive enough to create temperatures and pressures needed to cause the carbon in the core to begin to fuse once the star contracts at the end of the helium-burning stage. The cores of these massive stars become layered like onions as progressively heavier atomic nuclei build up at the center, with an outermost layer of hydrogen gas, surrounding a layer of hydrogen fusing into helium, surrounding a layer of helium fusing into carbon via the triple-alpha process, surrounding layers that fuse to progressively heavier elements. As a star this massive evolves, it undergoes repeated stages where fusion in the core stops, and the core collapses until the pressure and temperature are sufficient to begin the next stage of fusion, reigniting to halt collapse.[3][4]
Core-burning nuclear fusion stages for a 25-solar-mass star[5]

Process                  Main fuel  Main products              Temperature (K)  Density (g/cm³)  Duration
hydrogen burning         hydrogen   helium                     7×10⁷            10               10⁷ years
triple-alpha process     helium     carbon, oxygen             2×10⁸            2000             10⁶ years
carbon burning process   carbon     Ne, Na, Mg, Al             8×10⁸            10⁶              10³ years
neon burning process     neon       O, Mg                      1.6×10⁹          10⁷              3 years
oxygen burning process   oxygen     Si, S, Ar, Ca              1.8×10⁹          10⁷              0.3 years
silicon burning process  silicon    nickel (decays into iron)  2.5×10⁹          10⁸              5 days

Core collapse

The factor limiting this process is the amount of energy that is released through fusion, which is dependent on the binding energy that holds together these atomic nuclei. Each additional step produces progressively heavier nuclei, which release progressively less energy when fusing. In addition, from carbon-burning onwards, energy loss via neutrino production becomes significant, leading to a higher rate of reaction than would otherwise take place.[6] This continues until nickel-56 is produced, which decays radioactively into cobalt-56 and then iron-56 over the course of a few months. As iron and nickel have the highest binding energy per nucleon of all the elements,[7] energy cannot be produced at the core by fusion, and a nickel-iron core grows.[4][8] This core is under huge gravitational pressure. As there is no fusion to further raise the star's temperature to support it against collapse, it is supported only by degeneracy pressure of electrons. In this state, matter is so dense that further compaction would require electrons to occupy the same energy states. However, this is forbidden for identical fermion particles, such as the electron – a phenomenon called the Pauli exclusion principle.

When the core's mass exceeds the Chandrasekhar limit of about 1.4 solar masses, degeneracy pressure can no longer support it, and catastrophic collapse ensues.[9] The outer part of the core reaches velocities of up to 70,000 km/s (23% of the speed of light) as it collapses toward the center of the star.[10] The rapidly shrinking core heats up, producing high-energy gamma rays that decompose iron nuclei into helium nuclei and free neutrons via photodisintegration. As the core's density increases, it becomes energetically favorable for electrons and protons to merge via inverse beta decay, producing neutrons and elementary particles called neutrinos. Because neutrinos rarely interact with normal matter, they can escape from the core, carrying away energy and further accelerating the collapse, which proceeds over a timescale of milliseconds. As the core detaches from the outer layers of the star, some of these neutrinos are absorbed by the star's outer layers, beginning the supernova explosion.[11]

For Type II supernovae, the collapse is eventually halted by short-range repulsive neutron-neutron interactions, mediated by the strong force, as well as by degeneracy pressure of neutrons, at a density comparable to that of an atomic nucleus. Once collapse stops, the infalling matter rebounds, producing a shock wave that propagates outward. The energy from this shock dissociates heavy elements within the core. This reduces the energy of the shock, which can stall the explosion within the outer core.[12]

The core collapse phase is so dense and energetic that only neutrinos are able to escape. As the protons and electrons combine to form neutrons by means of electron capture, an electron neutrino is produced. In a typical Type II supernova, the newly formed neutron core has an initial temperature of about 100 billion kelvin, 10⁴ times the temperature of the sun's core. Much of this thermal energy must be shed for a stable neutron star to form, otherwise the neutrons would "boil away". This is accomplished by a further release of neutrinos.[13] These 'thermal' neutrinos form as neutrino-antineutrino pairs of all flavors, and total several times the number of electron-capture neutrinos.[14] The two neutrino production mechanisms convert the gravitational potential energy of the collapse into a ten-second neutrino burst, releasing about 10⁴⁶ joules (100 foes).[15]

Through a process that is not clearly understood, about 10⁴⁴ joules (1 foe) is reabsorbed by the stalled shock, producing an explosion.[a][12] The neutrinos generated by a supernova were actually observed in the case of Supernova 1987A, leading astronomers to conclude that the core collapse picture is basically correct. The water-based Kamiokande II and IMB instruments detected antineutrinos of thermal origin,[13] while the gallium-71-based Baksan instrument detected neutrinos (lepton number = 1) of either thermal or electron-capture origin.
Within a massive, evolved star (a) the onion-layered shells of elements undergo fusion, forming a nickel-iron core (b) that reaches Chandrasekhar-mass and starts to collapse. The inner part of the core is compressed into neutrons (c), causing infalling material to bounce (d) and form an outward-propagating shock front (red). The shock starts to stall (e), but it is re-invigorated by neutrino interaction. The surrounding material is blasted away (f), leaving only a degenerate remnant.

When the progenitor star is below about 20 solar masses – depending on the strength of the explosion and the amount of material that falls back – the degenerate remnant of a core collapse is a neutron star.[10] Above this mass, the remnant collapses to form a black hole.[4][16] The theoretical limiting mass for this type of core collapse scenario is about 40–50 solar masses. Above that mass, a star is believed to collapse directly into a black hole without forming a supernova explosion,[17] although uncertainties in models of supernova collapse make calculation of these limits uncertain.

Theoretical models

The Standard Model of particle physics is a theory describing three of the four known fundamental interactions between the elementary particles that make up all matter. It allows predictions to be made about how particles will interact under many conditions. The energy per particle in a supernova is typically one to one hundred and fifty picojoules (tens to hundreds of MeV).[18] This per-particle energy is small enough that the predictions of the Standard Model are likely to be basically correct, but the high densities may require corrections to it.[19] In particular, Earth-based particle accelerators can produce particle interactions of much higher energy than those found in supernovae,[20] but these experiments involve individual particles interacting with individual particles, and it is likely that the high densities within the supernova will produce novel effects. The interactions between neutrinos and the other particles in the supernova take place via the weak nuclear force, which is believed to be well understood. However, the interactions between the protons and neutrons involve the strong nuclear force, which is much less well understood.[21]

The major unsolved problem with Type II supernovae is how the burst of neutrinos transfers its energy to the rest of the star, producing the shock wave that causes the star to explode. From the discussion above, only one percent of the energy needs to be transferred to produce an explosion, but explaining how that transfer occurs has proven very difficult, even though the particle interactions involved are believed to be well understood. In the 1990s, one model for this involved convective overturn, which suggests that convection, either from neutrinos below or infalling matter above, completes the process of destroying the progenitor star. Elements heavier than iron are formed during this explosion by neutron capture, and by the pressure of the neutrinos pressing into the boundary of the "neutrinosphere", seeding the surrounding space with a cloud of gas and dust richer in heavy elements than the material from which the star originally formed.[22]

Neutrino physics, which is modeled by the Standard Model, is crucial to the understanding of this process.[19] The other crucial area of investigation is the hydrodynamics of the plasma that makes up the dying star; how it behaves during the core collapse determines when and how the "shock wave" forms and when and how it "stalls" and is reenergized.[23]

In fact, some theoretical models incorporate a hydrodynamical instability in the stalled shock known as the "Standing Accretion Shock Instability" (SASI). This instability comes about as a consequence of non-spherical perturbations oscillating the stalled shock thereby deforming it. The SASI is often used in tandem with neutrino theories in computer simulations for re-energizing the stalled shock.[24]

Computer models have been very successful at calculating the behavior of Type II supernovae once the shock has been formed. By ignoring the first second of the explosion, and assuming that an explosion is started, astrophysicists have been able to make detailed predictions about the elements produced by the supernova and of the expected light curve from the supernova.[25][26][27]

Light curves for Type II-L and Type II-P supernovae

This graph of the luminosity as a function of time shows the characteristic shapes of the light curves for a Type II-L and II-P supernova.

When the spectrum of a Type II supernova is examined, it normally displays Balmer absorption lines – reduced flux at the characteristic frequencies where hydrogen atoms absorb energy. The presence of these lines is used to distinguish this category of supernova from a Type I supernova.

When the luminosity of a Type II supernova is plotted over a period of time, it shows a characteristic rise to a peak brightness followed by a decline. These light curves have an average decay rate of 0.008 magnitudes per day, much lower than the decay rate for Type Ia supernovae. Type II is sub-divided into two classes, depending on the shape of the light curve. The light curve for a Type II-L supernova shows a steady (linear) decline following the peak brightness. By contrast, the light curve of a Type II-P supernova has a distinctive flat stretch (called a plateau) during the decline, representing a period where the luminosity decays at a slower rate. The net luminosity decay rate is lower, at 0.0075 magnitudes per day for Type II-P, compared to 0.012 magnitudes per day for Type II-L.[28]
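The decay rates above are in magnitudes per day, a logarithmic unit: a drop of m magnitudes corresponds to a flux ratio of 10^(−0.4m). A short sketch converting the quoted rates into a more intuitive quantity, the time for the luminosity to halve:

```python
import math

# A drop of 2.5*log10(2) ~ 0.753 magnitudes halves the flux,
# so halving time = 0.753 mag / (decay rate in mag/day).
halving_mag = 2.5 * math.log10(2.0)

for label, rate in [("Type II-P", 0.0075), ("Type II-L", 0.012)]:
    print(f"{label}: luminosity halves in about {halving_mag / rate:.0f} days")
```

By this measure a Type II-P light curve halves in roughly 100 days, versus roughly 63 days for a Type II-L, consistent with the plateau class fading more slowly.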

The difference in the shape of the light curves is believed to be caused, in the case of Type II-L supernovae, by the expulsion of most of the hydrogen envelope of the progenitor star.[28] The plateau phase in Type II-P supernovae is due to a change in the opacity of the exterior layer. The shock wave ionizes the hydrogen in the outer envelope – stripping the electron from the hydrogen atom – resulting in a significant increase in the opacity. This prevents photons from the inner parts of the explosion from escaping. Once the hydrogen cools sufficiently to recombine, the outer layer becomes transparent.[29]

Type IIn supernovae

The "n" denotes narrow, which indicates the presence of intermediate or very narrow width H emission lines in the spectra. In the intermediate width case, the ejecta from the explosion may be interacting strongly with gas around the star – the circumstellar medium. [30][31] There are indications that they originate as stars similar to Luminous blue variables with large mass losses before exploding.[32] SN 2005gl is one example of Type IIn; SN 2006gy, an extremely energetic supernova, may be another example.[33]

Type IIb supernovae

A Type IIb supernova has a weak hydrogen line in its initial spectrum, which is why it is classified as a Type II. However, later on the H emission becomes undetectable, and there is also a second peak in the light curve that has a spectrum which more closely resembles a Type Ib supernova. The progenitor could have been a giant star which lost most of its hydrogen envelope due to interactions with a companion in a binary system, leaving behind the core that consisted almost entirely of helium.[34] As the ejecta of a Type IIb expands, the hydrogen layer quickly becomes more transparent and reveals the deeper layers.[34] The classic example of a Type IIb supernova is Supernova 1993J,[35][36] while another example is Cassiopeia A.[37] The IIb class was first introduced (as a theoretical concept) by Ensman & Woosley 1987.

Hypernovae (collapsars)

Hypernovae are a rare type of supernova substantially more luminous and energetic than standard supernovae. Examples are 1997ef (Type Ic) and 1997cy (Type IIn). Hypernovae are produced by more than one type of event: relativistic jets during formation of a black hole from fallback of material onto the neutron star core (the collapsar model); interaction with a dense envelope of circumstellar material (the CSM model); the highest-mass pair instability supernovae; and possibly others, such as the binary and quark star models.

Stars with initial masses between about 25 and 90 times that of the sun develop cores large enough that, after a supernova explosion, some material will fall back onto the neutron star core and create a black hole. In many cases this reduces the luminosity of the supernova, and above about 90 solar masses the star collapses directly into a black hole without a supernova explosion. However, if the progenitor is spinning quickly enough, the infalling material generates relativistic jets that emit more energy than the original explosion.[38] They may also be seen directly if beamed towards us, giving the impression of an even more luminous object. In some cases these can produce gamma-ray bursts, although not all gamma-ray bursts are from supernovae.[39]

In some cases a type II supernova occurs when the star is surrounded by a very dense cloud of material, most likely expelled during luminous blue variable eruptions. This material is shocked by the explosion and becomes more luminous than a standard supernova. It is likely that there is a range of luminosities for these type IIn supernovae with only the brightest qualifying as a hypernova.

Pair instability supernovae occur when an oxygen core in an extremely massive star becomes hot enough that gamma rays spontaneously produce electron-positron pairs.[40] This causes the core to collapse, but where the collapse of an iron core causes endothermic fusion to heavier elements, the collapse of an oxygen core creates runaway exothermic fusion which completely unbinds the star. The total energy emitted depends on the initial mass, with much of the core being converted to ⁵⁶Ni and ejected, which then powers the supernova for many months. At the lower end, stars of about 140 solar masses produce supernovae that are long-lived but otherwise typical, while the highest-mass stars of around 250 solar masses produce supernovae that are extremely luminous and also very long-lived: hypernovae. More massive stars die by photodisintegration. Only Population III stars, with very low metallicity, can reach this stage. Stars with more heavy elements are more opaque and blow away their outer layers until they are small enough to explode as a normal Type Ib/c supernova. It is thought that, even in our own galaxy, mergers of old low-metallicity stars may form massive stars capable of creating a pair instability supernova.

Open government

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Open_gover...